Intel to Build Silicon for Fully Homomorphic Encryption: This is Important

“This is important” okay.

I don’t fully get it. Is this truly something that will revolutionize encryption?

An example given:

With FHE, that researcher can take the encrypted data, perform the analysis and get a result, without ever knowing any specifics of the dataset.

Which does seem nice. Is that the limit? Am I wrong to be underwhelmed?

This just sounds like encryption with the backdoor built into the silicon.

Seems like a step in the exact wrong direction.

1 Like

It also seems like there would still need to be a lot of operations that couldn’t be done in practice. Otherwise, it’s just a matter of time before someone finds a way to poke at the data in order to tease out some kind of details.

And how would you even know if the results of your “analysis” are accurate or totally bogus?

1 Like

This does not explain much more but, uh… it does explain what they are trying to do. I don’t understand enough to say whether it is a good or bad idea*.

This would be immediately appealing to data-crunching organizations everywhere, at least those genuinely interested in security and privacy, if computer scientists could devise a way to make it affordable and speedy enough.

“In traditional encryption mechanisms to transfer and store data, the overhead is relatively negligible,” explained Jason Martin, a principal engineer in Intel’s Security Solutions Lab and manager of the secure intelligence team at Intel Labs. “But with fully homomorphic encryption, the size of homomorphic ciphertext is significantly larger than plain data.”

How much larger? 1,000x to 10,000x larger in some cases, Martin said, and that has implications for the amount of computing power required to use homomorphic encryption.

“This data explosion then leads to a compute explosion,” said Martin. “As the ciphertext expands, it requires significantly more processing. This processing overhead increases, not only from the size of the data, but also from the complexity of those computations.”

Martin said this computational overhead is why homomorphic encryption is not widely used. Intel, he said, is working on new hardware and software approaches and on building broader ecosystem support and standards. It may take a while.

*but based on previous “security and speed improvements” built into silicon, I don’t hold out much hope that this will be secure or fast for long.

There are no backdoors here - you can encrypt your own data, then give the data (but not the encryption keys) to someone else who runs an algorithm compatible with FHE. They return the results to you, still encrypted, and you decrypt them with your key. At no point does the person who processes the encrypted data get to know what it is.
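Here’s a minimal sketch of that round trip in Python, using textbook RSA’s multiplicative homomorphism as a stand-in. Unpadded RSA is not FHE and is not secure, and it supports only multiplication, but the data flow is exactly the one described above: the server computes on ciphertexts it cannot read.

```python
# Toy illustration of the FHE round trip using textbook RSA, which is
# multiplicatively homomorphic: Enc(a) * Enc(b) mod n == Enc(a * b).
# This is NOT FHE and NOT secure; the keys are toy-sized for readability.

# --- client side: generate keys and encrypt ---
p, q = 61, 53
n = p * q                            # public modulus
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (Python 3.8+)

a, b = 7, 9
ct_a, ct_b = pow(a, e, n), pow(b, e, n)   # ciphertexts sent to the server

# --- server side: computes on ciphertexts, never sees a, b, or d ---
ct_result = (ct_a * ct_b) % n

# --- client side: decrypt the returned result with the private key ---
print(pow(ct_result, d, n))          # 63 == a * b
```

A real FHE scheme supports both addition and multiplication on encrypted data, which is enough to evaluate arbitrary circuits.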

What operations can be effectively done in this way and how useful they will be is still an active area of research.

1 Like

I don’t see how that is possible. In order to have a meaningful result out of any operation, you have to have sane input. Ciphertext is not sane input, unless the function of the operation is to decrypt.

If this is actually possible without decryption, I’m very curious how.

The problem, though, is that if the CPU decrypts the data at any point in time, even in some sort of separate namespace, the data is available and the entire process cannot be trusted. Call me a Luddite, but this whole thing is just asking for trouble.

I am struggling with this myself. Unless it is just something like comparison, like SHA hashes of old.

If at some point you start to recognise patterns in the ciphertext, and thus recognise what was used to encrypt it, then it becomes a task of building a reference table mapping $ciphertext == specific real-world value. At best that effectively decrypts the data before running whatever tests on it; at worst it decrypts based on bad assumptions and returns garbage or potentially harmful results.

That last part, to me as a layperson, is what would be happening if they did not decrypt it first anyway.

And further still, when we reach the quantum future where we can break encryption in countable cycles, would it not be a race between the “secure” working-on-encrypted-data approach and some quantum computer breaking the encryption, running the tests on plaintext, and then re-encrypting it?

2 Likes

I need an ELI5 sort of flow chart to explain how this encryption and computation process is supposed to work. I can’t wrap my head around it.

1 Like

It’s a thing alright. I’ve implemented it in my day job using polynomial rings.

You start with a cleartext and encode it as n elements of a predetermined polynomial ring (it can be done with other methods, see the wiki), and then encrypt it using various degrees of voodoo math that take a long time to get used to. When you send me your ciphertext, all I see is a bunch of elements of the ring, which have well-defined operations you can perform on them. I can add or multiply those elements together, or add or multiply a scalar onto them. When you decrypt the output, you get the result.

The catch is that every homomorphic operation adds some noise to the output, and after enough operations the result becomes random garbage.
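To make the noise catch concrete, here is a toy sketch in Python of a related but much simpler construction: a somewhat-homomorphic scheme over the integers in the spirit of DGHV (van Dijk et al., 2010), not the polynomial-ring scheme described above. The parameters are toy-sized and completely insecure; the point is only to watch ciphertext addition and multiplication work, and then watch the noise blow up.

```python
import random

# Toy somewhat-homomorphic encryption over the integers (DGHV-style).
# The secret key is a large odd integer p. A bit m is encrypted as
# c = q*p + 2*r + m; decryption is (c mod p) mod 2, which is correct
# only while the accumulated noise (the 2*r + m part) stays below p.

p = random.randrange(10**6, 10**7) | 1     # secret key: a large odd integer

def encrypt(m):
    q = random.randrange(10**8, 10**9)     # random multiple of the key
    r = random.randrange(1, 50)            # small random noise
    return q * p + 2 * r + m

def decrypt(c):
    return (c % p) % 2

a, b = 1, 0
ca, cb = encrypt(a), encrypt(b)

# Homomorphic operations: adding ciphertexts adds their noises,
# multiplying ciphertexts multiplies them.
print(decrypt(ca + cb))    # 1, i.e. a XOR b
print(decrypt(ca * cb))    # 0, i.e. a AND b

# Noise growth: each squaring squares the noise. Decrypting an
# encrypted 1 should always give 1, but once the noise passes p
# the output turns into coin flips.
c = encrypt(1)
for depth in range(1, 6):
    c = c * c
    print(depth, decrypt(c))   # correct at first, garbage once noise blows up
```

Real schemes manage this either by limiting circuit depth (somewhat/leveled homomorphic encryption) or with Gentry’s bootstrapping trick, which homomorphically “refreshes” a noisy ciphertext.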

The other catch is that it requires stupidly large numbers to be secure. Our ring-learning-with-errors implementation had to use 16-kilobit numbers (that is, numbers up to 2 to the power of 16,384) to approach AES-256.

So what they are doing is a welcome addition to the field.

Further reading:

https://palisade-crypto.org/

Edit: some of these lattice-based implementations are considered safe against quantum computers, for now. They are not vulnerable if discrete log or RSA is broken.

5 Likes