Alternative compression algorithm
Take a datastream; call it A.
Record the length of A in bytes and store it in the output file OUT.
Generate n pseudorandom number sequences, each from a unique seed.
Starting with a = 1 and b = length of A, search the random sequences for a section of the same length that is equal to bytes a through b of A. If no match is found, reduce b by 1 and try again.
When a match is found:
Record the seed s, the start position s0 of the matching section within that sequence, and its length s1 in the output file OUT.
Move the pointer in A to position b+1, i.e. set a = b+1 and reset b to the length of A.
Repeat the process until the end of A is reached.
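A minimal sketch of this search loop in Python, assuming a handful of fixed-length pseudorandom byte streams; the seed count, stream length, the generator used, and the names make_streams and compress are illustrative choices, not part of the original description:

import random

def make_streams(n_seeds, seq_len):
    # One pseudorandom byte sequence per seed. Python's built-in generator
    # stands in for whichever PRNG the scheme would actually use.
    return {seed: bytes(random.Random(seed).randrange(256) for _ in range(seq_len))
            for seed in range(n_seeds)}

def compress(A, streams):
    tokens = [len(A)]                      # header: length of A in bytes
    a = 0                                  # 0-based pointer into A
    while a < len(A):
        b = len(A)
        while b > a:
            # Look for bytes A[a:b] inside any of the random streams.
            found = None
            for seed, stream in streams.items():
                s0 = stream.find(A[a:b])
                if s0 != -1:
                    found = (seed, s0, b - a)   # (seed, start, length)
                    break
            if found:
                tokens.append(found)
                break
            b -= 1                         # no match: shorten the window
        else:
            # Even a single byte was not found in any stream; with streams
            # of a few KB this is vanishingly unlikely, but handle it anyway.
            raise ValueError("unmatchable byte at position %d" % a)
        a = b                              # move the pointer past the match
    return tokens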
Given sufficient compression time, the output file ought to end up smaller than the input: each matched section of A is replaced by a fixed-size (seed s, start s0, length s1) record, so the longer the matches found, the greater the saving.
Note: this approach should even be able to compress white-noise datastreams.
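For completeness, a matching decoder sketch under the same assumptions as above: it regenerates the random streams from their seeds and copies out the recorded sections to rebuild A.

def decompress(tokens, streams):
    # tokens[0] is the stored length of A; the rest are (seed, start, length)
    # references into the regenerated random streams.
    total_len, refs = tokens[0], tokens[1:]
    out = bytearray()
    for seed, s0, s1 in refs:
        out.extend(streams[seed][s0:s0 + s1])
    assert len(out) == total_len
    return bytes(out)

# Round trip with arbitrary stream parameters:
streams = make_streams(n_seeds=4, seq_len=4096)
data = b"example datastream A"
assert decompress(compress(data, streams), streams) == data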