Infineon makes an RSA library used in smart cards and TPM modules, and some of the keys generated by this library are easily factored. This appears to be CVE-2017-15361.

Research paper at Centre for Research on Cryptography and Security

Vulnerable key factorisation times

When generated properly, an RSA key with 2048 bits should require several quadrillion years—or hundreds of thousands of times the age of the universe—to be factorized with a general-purpose computer. Factorizing a 2048-bit RSA key generated with the faulty Infineon library, by contrast, takes a maximum of 100 years, and on average only half that. Keys with 1024 bits take a maximum of only three months.

The factorization can be dramatically accelerated by spreading the load onto multiple computers. While costs and times vary for each vulnerable key, the worst case for a 2048-bit one would require no more than 17 days and $40,300 using a 1,000-instance machine on Amazon Web Services, and $76 and 45 minutes to factorize an affected 1024-bit key. On average, it would require half the cost and time to factorize the affected keys.

Affected

BitLocker

The vulnerability is especially acute for TPM version 1.2, because the keys it uses to control Microsoft’s BitLocker hard-disk encryption are factorizable.

TPM version 2.0 doesn’t use factorizable keys for BitLocker

The researchers also found 2,892 PGP keys used for encrypted e-mail, 956 of which were factorizable. The researchers speculated that the majority of the PGP keys were generated using the Yubikey 4, which allows owners to use the faulty library to create on-chip RSA keys. Other functions of the USB device, including U2F authentication, remain unaffected. Yubico has more details here.

Seems like this bug has been known for more than a year, or at least this exact exploitation method.
Just happens to be Infineon getting shit atm.

Not really, the talk you link is about classification, specifically not exploitation. In the Q&A they mention certain smart cards generating bad values by failing prime generation and then choosing the next greatest prime from zero, but that is not the mistake Infineon made.

That you can classify public keys this way suggests that there may be factorisation vulnerabilities, but it is not proof. As I think the speaker mentioned, some primes might be avoided for fear that they are too short, and that sounds like a good reason to bias your prime generation.

If libraries do bias their primes differently to avoid weak ones, but we can’t tell which way is better, I don’t think that’s an exploit, and the researcher did not suggest anything like that.

I think. I’m not a cryptographer. Does @wendell moonlight as one, maybe?

It’s a game of determinism vs. nondeterminism. An arbitrarily large number is chosen and plugged into an equation that often, but not always, reports a non-prime as non-prime. The equation can also take other variables plugged into it, which effectively gives different measures of a particular number’s primality.

Repeating this process many times on a candidate increases the probability that it is prime, but does not guarantee it.
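The repeated probabilistic test being described is, in most real libraries, Miller–Rabin. Here is a minimal sketch of that general technique (illustrative only, not Infineon's actual code):

```python
import random

def is_probably_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test.

    A composite number passes any single round with probability
    at most 1/4, so after `rounds` independent rounds the chance
    of a false positive is at most 4**-rounds -- tiny, but never
    exactly zero, which is the point being made above.
    """
    if n < 2:
        return False
    # Quick exits for small prime factors.
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**r with d odd.
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # fresh random witness each round
        x = pow(a, d, n)                  # modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True  # no witness found: n is probably prime
```

Each round plugs a different random base `a` into the same congruence, which is the "other variables plugged into the equation" idea from the post above.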

It may be the case, like with the dual elliptic curve (Dual_EC_DRBG) stuff, that the method behind the repeated primality-testing steps is flawed in some specific but easy-to-overlook way. Like how those random numbers were not actually random.

A trivial example might be choosing an odd number at random, testing it, finding it is not prime, then adding two to it until you find one that is prime. Because you are adding two every time rather than a fresh random number, all initial random numbers between 101 and the next prime will “gather” at that prime.

Prime numbers can be really far apart for large numbers, so this is problematic: every number chosen in the gap between two primes always gathers at the same prime.
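That gathering effect is easy to demonstrate. A toy sketch of the biased next-prime strategy described above (purely illustrative, not any library's real generator):

```python
from collections import Counter

def is_prime(n):
    """Trial division; fine for the tiny numbers in this demo."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime_from(n):
    """The biased strategy: step upward by 2 from an odd start."""
    while not is_prime(n):
        n += 2
    return n

# Every odd start inside a gap between primes lands on the same prime,
# so primes that follow wide gaps are picked far more than their share.
hits = Counter(next_prime_from(n) for n in range(101, 201, 2))
print(hits)
```

Running this, 127 (which sits after the wide 113–127 gap) collects seven of the fifty starts, while primes after narrow gaps collect only one or two each: exactly the non-uniformity the post is warning about.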

That’s a bit worrying, I thought prime generation was a bit less “shot in the dark” than that.

Any idea if just the fact that there is a noticeable difference in prime generation between implementations is a concern for security rather than privacy? The researchers in @Baz’s post seemed to only talk about privacy.

Can two implementations generate discernibly different keys and still both be secure?

The math behind it is pretty solid. The odds of a candidate checked thousands or tens of thousands of times not actually being prime are extremely low. Theoretically.

The only simple way of deterministically verifying that a candidate is prime is by attempting division by the numbers below it (really, only those up to its square root), which is infeasible for numbers of this size.
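For what it's worth, trial division only needs to check divisors up to √n, but even that is roughly 2^(bits/2) steps, which is hopeless at RSA sizes. A sketch of the deterministic check:

```python
import math

def is_prime_trial(n):
    """Deterministic primality by trial division.

    If n = a * b with a <= b, then a <= sqrt(n), so only divisors
    up to sqrt(n) need checking. That is O(sqrt(n)) divisions --
    about 2**512 steps for a 1024-bit candidate, which is why
    probabilistic tests are used in practice.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    for d in range(3, math.isqrt(n) + 1, 2):
        if n % d == 0:
            return False
    return True
```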

If prime numbers could be picked deterministically, an attacker would be down to a linear search space when trying to guess other people’s primes. So it’s kind of good for security that it is a shot in the dark, as it were.

… The range of approaches identified indicates that the question of how to generate an optimal RSA key has not yet been settled. The tested keys were generally found to contain a high level of entropy, sufficient to protect against known factorization attacks. However, the [listed biases] serve as fingerprints of the sources. …

End of January 2017 - the vulnerability found
1st of February 2017 - the vulnerability disclosed to Infineon Technologies AG
May to October 2017 - cooperation with the manufacturer and other affected parties to help evaluate and mitigate the vulnerability

Estonia got the info at the end of August, directly from the researchers.
Neither Infineon (the chip maker) nor Gemalto (the card manufacturer) notified affected parties about the vulnerability.