Can file compression be done this way...?

Can compression be done the following way…

  1. Say we take a few terabytes of data,
  2. Then we treat all the ones and zeros that make it up as one big number,
  3. Next we create a math problem that works out to that number,
    so 111111011100011 could be 1111110111 * 100000 + 11 (you can use other numbers and math problems to get the math problem to take up less space than the original data)
  4. Lastly, when decompressing the data we solve the math problem to get the original bits back.

The idea is that you could create a math problem only 10 or so characters long that solves out to terabytes of data (a rough sketch of this is below).
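To make the idea concrete, here is a minimal Python sketch of steps 2–4 under one simplifying assumption: the only "math problem" allowed is `mantissa * 2**shift + remainder`, the same split as the binary example above. The function names and the fixed split point are just for illustration, not anything settled in this thread.

```python
# Minimal sketch: treat the input as one big integer, split it into
# mantissa * 2**shift + remainder, and evaluate that expression to decompress.

def compress(data: bytes) -> tuple[int, int, int]:
    """Step 2 + 3: view the bits as one number and express it as
    mantissa * 2**shift + remainder (shift is an arbitrary illustrative choice)."""
    n = int.from_bytes(data, "big")
    shift = 5
    mantissa, remainder = divmod(n, 1 << shift)
    return mantissa, shift, remainder

def decompress(mantissa: int, shift: int, remainder: int, length: int) -> bytes:
    """Step 4: 'solve the math problem' to get the original bytes back."""
    n = (mantissa << shift) + remainder
    return n.to_bytes(length, "big")

data = b"example payload"
m, s, r = compress(data)
assert decompress(m, s, r, len(data)) == data

# The catch: the mantissa still needs roughly as many bits as the original
# data, so this particular expression saves nothing.
print(len(data) * 8, "bits in, roughly", m.bit_length() + r.bit_length(), "bits out")
```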

It could possibly work, but it would be very CPU-intensive for not much compression. You either need a dataset large enough to make the compression worth doing, or one small enough that decompressing it doesn't overload the CPU.

This is for text, but the same principles apply: you represent the most common parts in the way that takes the least space, and it doesn't really matter if the least common ones take up a bit more.
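Nobody in the thread names an algorithm, but Huffman coding is the classic realization of "the most common parts get the shortest codes". A minimal sketch (the function name and sample string are illustrative, not from the thread):

```python
import heapq
from collections import Counter

def huffman_codes(text: str) -> dict[str, str]:
    """Build a prefix code where frequent characters get short bit strings."""
    # Heap entries: (frequency, tie_breaker, {symbol: code_so_far})
    heap = [(freq, i, {ch: ""}) for i, (ch, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # Merge the two rarest subtrees; prepend a bit to every code inside them.
        merged = {ch: "0" + code for ch, code in left.items()}
        merged.update({ch: "1" + code for ch, code in right.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "aaaaaabbbccd"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(codes)  # 'a' (most common) gets 1 bit, rarer letters get longer codes
print(len(text) * 8, "bits as ASCII vs", len(encoded), "bits encoded")
```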


The only true way to know whether it works is to write the code and see.


Could you give an example where an equation takes up less space than the original data?

Also, keep in mind that you would need additional space for the equation itself. For example, if you only allowed addition, subtraction, multiplication, and division, you'd need at least two bits just to store which operator is used. You'd also need to structure the data, which costs more space again, for example to record the length of each number in the equation.
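To put rough numbers on that overhead, here's one hypothetical way of sizing up the example expression from the first post. The 2-bit operator field comes from this post; the 16-bit length fields and the overall layout are assumptions made just for the estimate:

```python
# Rough size estimate for encoding the example expression
# 1111110111 * 100000 + 11 as: operators + per-operand length fields + operands.

operands = [0b1111110111, 0b100000, 0b11]   # A * B + C
operator_bits = 2 * 2                        # two operators, 2 bits each
length_field_bits = 16 * len(operands)       # assumed 16-bit length per operand
payload_bits = sum(x.bit_length() for x in operands)

original = 0b111111011100011
print("original data:      ", original.bit_length(), "bits")
print("encoded expression: ", operator_bits + length_field_bits + payload_bits, "bits")
# original data:       15 bits
# encoded expression:  70 bits -- the bookkeeping alone dwarfs the data here
```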


Ok, I might be mistaken, but I think Brotli or Zopfli do some preprocessing like that.

That’s basically how bit parity works, but that normally takes up more space…