**This clears the way for broad deployment of post-quantum cryptography to begin in earnest.**

The National Institute of Standards and Technology (NIST) has announced the final approval of the first three full federal standards for post-quantum cryptographic algorithms. The three standards are:

* FIPS 203, Module-Lattice-Based Key-Encapsulation Mechanism Standard (ML-KEM)

* FIPS 204, Module-Lattice-Based Digital Signature Standard (ML-DSA)

* FIPS 205, Stateless Hash-Based Digital Signature Standard (SLH-DSA)

The announcement comes nearly 8 years after the kickoff of NIST’s Post-Quantum Cryptography Project, which aimed to build public consensus in the cryptographic research community on new asymmetric cryptographic algorithms to guard against potential attack by an adversary equipped with advanced quantum computing capabilities.

This represents a significant moment in the process of migration to post-quantum cryptography, as the specific algorithms are now settled. This clears the way for broad deployment of post-quantum cryptography to begin in earnest.

**We need the hardness of mathematical problems to reap the benefits of cryptography – and these are benefits we badly need.**

For a week in April of 2024, the cryptologic research community was trying to make sense of a freshly announced result. Yilei Chen of Tsinghua University in Beijing, China, published a paper, “Quantum Algorithms for Lattice Problems,” on the International Association for Cryptologic Research (IACR)’s ePrint Archive. The author claimed to have discovered a quantum polynomial-time algorithm for solving certain mathematical problems, which happen to be closely related to the new post-quantum cryptographic algorithms ML-KEM and ML-DSA, which NIST planned to finalize this year in advance of a significant national push for post-quantum cryptography. If true, this could have led to a loss of confidence in lattice-based techniques, with huge implications for national policy.

A week later, a flaw was found in the paper. The author acknowledged that the main claim was no longer justified and that they saw no workaround. So the worry regarding this paper has passed. But it is alarming that so much time and money spent building cyber infrastructure resting on cryptographic algorithms could potentially be undone by a single new discovery. It seems like a huge risk. Is there any way to avoid the risk? Why have we made the decision to accept it?

In 1976, Whitfield Diffie and Martin Hellman published their famous “New Directions in Cryptography.” They proposed a method based on modular arithmetic to allow two remote parties to agree on a secret value without any shared secret to start with. This method was a new direction in the sense that all previous forms of cryptography assumed that a secret was already shared by the communicating endpoints in advance – now known as *symmetric* cryptography.
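The Diffie–Hellman idea can be sketched in a few lines of Python. The parameters below are toy values chosen purely for illustration – real deployments use standardized groups thousands of bits long:

```python
import secrets

# A toy sketch of Diffie-Hellman key agreement over modular arithmetic.
# These parameters are illustrative only -- far too small to be secure.
p = 4294967291          # a small prime modulus (toy value)
g = 2                   # a public base

a = secrets.randbelow(p - 2) + 1   # Alice's private exponent
b = secrets.randbelow(p - 2) + 1   # Bob's private exponent

A = pow(g, a, p)        # Alice sends A over the open channel
B = pow(g, b, p)        # Bob sends B over the open channel

# Each party combines its own secret with the other's public value,
# arriving at the same shared secret with no prior shared key.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

An eavesdropper sees only `p`, `g`, `A`, and `B`; recovering the shared secret from those is the discrete logarithm problem discussed below.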

The newer *asymmetric* type of cryptography makes use of two distinct keys: one public (for example, for encrypting messages) and one private (for decrypting them). This dramatically improves the usability of cryptography by providing a way to distribute keys when the endpoints have no existing shared secrets and no means of private communication.

The magic of asymmetric cryptography is possible only because the two keys are mathematically linked. In the RSA cryptosystem, the public key includes a “modulus,” which is the product of two large prime numbers, and the private key includes the factorization of the modulus – the two primes separately. There is a one-way relationship between these two values: it is easy to compute the modulus from the factors, but computing the factors from the modulus is so hard to do efficiently that we can regard calculating the private key from the public one as impractical.

Or to be more careful: because factoring is so hard to do efficiently *as far as we know*.
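The asymmetry is easy to see in miniature. The primes below are toy values – real RSA primes are over a thousand bits each, where the search becomes hopeless:

```python
# The one-way relationship in miniature: multiplying two primes is one
# step, while recovering them from the product requires search.
# Toy primes; real RSA primes are over a thousand bits each.
p, q = 999983, 1000003
modulus = p * q                 # the easy direction: one multiplication

def factor(n):
    """Naive trial division -- noticeably slow even at this toy size,
    and utterly hopeless at real RSA sizes."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

assert factor(modulus) == (p, q)  # the hard direction: ~10**6 steps here
```

Better factoring algorithms than trial division exist, but none known runs in polynomial time on a classical computer.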

This mathematical link between the asymmetric keys will always exist for any asymmetric algorithm, which ties the very proposition of asymmetric cryptography to the hardness of mathematical problems.

Even usable symmetric cryptography (excluding options like the one-time pad) can be viewed as having security properties based on a mathematical problem. For any cipher with a reusable key, identifying the key from a short list of examples of its use is a problem that must be hard to efficiently solve for the cipher to be secure.
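This can be illustrated with a deliberately weak toy cipher – repeating XOR with a key small enough to brute-force. The cipher and all values here are illustrative only:

```python
from itertools import product

# A toy repeating-XOR "cipher" with a 16-bit key. The point: recovering
# a reusable key from examples of its use is a search problem, and
# security depends on that search being infeasible (2**16 is trivial
# to enumerate; 2**128 is not).
def xor_encrypt(key: bytes, msg: bytes) -> bytes:
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(msg))

key = b"\x5a\xc3"                  # toy 16-bit key
plaintext = b"attack at dawn"
ciphertext = xor_encrypt(key, plaintext)

# Exhaustive key search over all 2**16 candidates, given one
# known plaintext/ciphertext pair.
recovered = next(bytes(k) for k in product(range(256), repeat=2)
                 if xor_encrypt(bytes(k), plaintext) == ciphertext)
assert recovered == key
```

A real cipher like AES faces the same generic attack; its security rests on 2^{128} or more candidates being far beyond any feasible enumeration.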

So, is there any way to avoid the risk? Unfortunately, no.

This question relates to the classic problem of P vs. NP. P and NP are classes of problems: P are the problems that can be solved in polynomial time, and NP are the problems where the solution can be *verified* in polynomial time.

Factoring is an example of an NP problem: it is easy to verify a factorization by multiplying the factors and comparing the result to the target number.
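That verification stays instant even at enormous sizes. A short sketch, using two known Mersenne primes so the factorization is certain:

```python
# Verifying a claimed factorization is a single multiplication, no
# matter how large the numbers are -- the essence of NP membership.
p = 2**521 - 1                  # a known Mersenne prime
q = 2**607 - 1                  # another known Mersenne prime
n = p * q                       # a number over 1,100 bits long

def verify_factorization(n, factors):
    """A polynomial-time verifier: multiply and compare."""
    product = 1
    for f in factors:
        product *= f
    return product == n

assert verify_factorization(n, [p, q])        # instant, even at this size
assert not verify_factorization(n, [p, q - 2])
```

Finding the factors of a number this size, by contrast, is far beyond any known classical method.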

The P vs. NP problem asks simply, which is true: P = NP, or P ≠ NP? This problem was first posed by Stephen Cook in 1971, and is considered the most important unsolved problem in computer science, and one of the most important unsolved problems in mathematics generally. It is one of the seven “Millennium Prize Problems” chosen by the Clay Mathematics Institute, for which a $1 million prize is offered for a solution.

The prevailing opinion of computer scientists is that P ≠ NP is likely true, partly because since 1971 there has been a known path for proving that P = NP: prove that an NP-hard problem has a polynomial-time solution. Despite an ever-increasing number of identified NP-hard problems, no efficient solution to any of them has been found in over 50 years. NP-hard problems are the problems that, if P ≠ NP, are actually not solvable in polynomial time.

Actually proving that a problem of use for cryptography *is* hard (harder than polynomial time) would imply that P ≠ NP. So unfortunately, we are stuck with having to trust in the hardness of these problems. Which means, in a certain sense, that we don’t really know these problems are hard.

That said, we don’t trust in the hardness of just any problem. When it comes to choosing our cryptographic algorithms broadly, the standard is to trust in the hardness of problems that have been of sustained interest to researchers over a long period of time, and for which no efficient algorithms have been found.

The most classic example of a problem like this is the integer factorization problem: a fundamental arithmetic problem studied since ancient Greece. The other problem that has historically been relied upon is called the “discrete logarithm problem,” which is really just the problem of finding integer solutions to exponential equations in finite mathematical structures; this has been of keen interest to mathematicians dating back to the early 1800s.

Since Shor’s famous 1994 paper, we have known that the factoring and discrete logarithm problems (including on elliptic curves) could in theory be solved efficiently on a powerful quantum computer. This discovery launched a huge race to develop such quantum computers (for this as well as other potential applications).

In 2016, amid growing concern that powerful quantum computers capable of running Shor’s algorithm to break our cryptography would eventually be developed, NIST launched its Post-Quantum Cryptography project to select “post-quantum” alternatives – that is, algorithms not known to be vulnerable to attack on a quantum computer.

NIST announced from the beginning that it was open to selecting multiple, redundant algorithms with distinct mathematical bases as a result of this process. As of today, NIST has announced final selections for four algorithms and has finalized standards for three: ML-KEM, an encryption / key exchange algorithm based on the module-lattice Learning With Errors (LWE) problem; ML-DSA, a digital signature algorithm also based on module-lattice LWE; and SLH-DSA, a digital signature algorithm based on hash functions. NIST has not ruled out selecting another encryption / key exchange algorithm for standardization; the remaining candidates are Classic McEliece, BIKE, and HQC, all based on coding-theory problems.

The lattice problems underlying ML-KEM and ML-DSA relate to fundamental mathematical questions about lattices, a field of study that became popular in the early 20th century and so reaches back nearly a century. As early as 1981, a lattice problem was proven to be NP-complete. Cryptography based on lattices has been around since the early 1990s. The shortest vector problem on lattices was shown in 2001 to be NP-hard *even to approximate*.
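A toy instance of the plain LWE problem gives a feel for why the noise matters. ML-KEM and ML-DSA rest on a structured “module” variant; every parameter here is an illustrative toy value:

```python
import random

# A toy Learning With Errors (LWE) instance. All parameters are
# illustrative toy values -- real schemes use much larger dimensions.
q = 97                   # modulus
n = 8                    # secret dimension
m = 16                   # number of samples
rng = random.Random(0)   # fixed seed for reproducibility

s = [rng.randrange(q) for _ in range(n)]                 # secret vector
A = [[rng.randrange(q) for _ in range(n)] for _ in range(m)]
e = [rng.choice([-1, 0, 1]) for _ in range(m)]           # small noise

# Each sample is b_i = <a_i, s> + e_i (mod q). Given only (A, b),
# recovering s is believed hard at cryptographic sizes: the noise
# turns easy linear algebra into a hard lattice problem.
b = [(sum(a * s_j for a, s_j in zip(row, s)) + e_i) % q
     for row, e_i in zip(A, e)]

# Sanity check: subtracting <a_i, s> recovers the noise (mod q).
assert all((b_i - sum(a * s_j for a, s_j in zip(row, s))) % q == e_i % q
           for row, b_i, e_i in zip(A, b, e))
```

Without the error terms `e`, Gaussian elimination would reveal `s` immediately; with them, the best known classical and quantum attacks scale badly as the dimension and modulus grow.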

Of course, the *quantum* hardness is a trickier question. Quantum algorithms have been an active area of study only since around the time of Shor’s discovery, but since lattice and coding problems were the leading options for quantum-resistant cryptography at that time, it is fair to say that quantum algorithms for lattice problems became of immediate interest after Shor’s algorithm. So there is a 30-year history of failure by the world research community to solve these problems with a quantum algorithm, and a much longer history of failure to solve them classically.

News about this kind of thing is dramatic and attention-grabbing, and there is good reason to be cautious when you hear about a new result.

Most genuine improvements in cryptanalysis are either (1) incremental, reducing the security level by a few bits at a time, or (2) a massive break, but of a relatively new or untested idea – not one with a long history of researchers trying and failing to break it. Most new results that claim a large break of a well-established algorithm turn out to contain a flaw – as was the case with the Yilei Chen paper. When news breaks about an advance in cryptanalysis, keep in mind the following:

**“Breaking” a cryptographic algorithm does not necessarily make it unsafe to use.** It is important to understand by *how much* the security is reduced (compared to our margin for error).

The strength of a cryptographic algorithm is normally measured in bits, and the bit strength of an algorithm is determined by comparing the parameter options available to the best known attacks. Parameters are chosen to set the bit strength at a desired level: these days, 128 bits is the normal low end for ordinary commercial or personal use, and higher bit strengths may be chosen for higher-security circumstances. 128 bits of strength means that approximately 2^{128} computational steps would be required to compromise a key. It is complex to explain why 128 bits is the “standard” level for today’s cryptography, but suffice it to say it is *not* chosen to be a bare minimum for security. Two actual cryptanalytic attacks carried out publicly are the first collision found for SHA-1 (2^{61} steps) and the factorization of an 829-bit RSA key (≈2^{67.3} steps). The difference between 2^{67.3} and 2^{128} is staggeringly huge; an incremental attack might chip away at the 128-bit strength number by a few bits, but it is a long way down to effort levels that are within reach in practice.
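The size of that gap is easy to compute from the figures quoted above:

```python
import math

# The gap between the largest publicly demonstrated attacks and a
# 128-bit work factor, using the effort figures quoted above.
target = 2.0**128
sha1_collision = 2.0**61     # first public SHA-1 collision
rsa_829 = 2.0**67.3          # factoring an 829-bit RSA modulus

for name, effort in [("SHA-1 collision", sha1_collision),
                     ("829-bit RSA factorization", rsa_829)]:
    shortfall = math.log2(target / effort)
    print(f"{name}: 2**{shortfall:.1f} below a 128-bit work factor")
```

Even the record RSA factorization falls short of a 128-bit work factor by a factor of about 2^{60.7} – roughly 1.9 quintillion times the effort actually expended.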

**Be wary of inappropriate comparisons between different cryptographic algorithms or their mathematical bases.** The safety of assumptions about hardness derives from broad interest in finding efficient solutions over a substantial period of time. Not all assumptions are created equal. Some are solid and believable conclusions based on a long history of research failure; others are speculative and later proven to be wrong. Attacks on newer ideas – the SIKE algorithm from the NIST competition being a notable example – should not be taken as evidence that more solid hardness assumptions are “next.”

**Peer review can take time, particularly for genuine breakthroughs in mathematics.** The recent Yilei Chen result was alarming to the research community because it focused on the hardness of lattice problems and claimed a dramatic degree of progress against them. It was a very long and complicated paper, with a lot of new ideas. While it was being examined, it was impossible to guess whether a flaw would be found or whether the result would eventually be vetted and believed. In this case, a flaw was found, and it took about a week.

But until that process was complete, the proper take was: this paper has not yet been validated, but it has also not yet been invalidated.

It may seem like a gamble to trust so much in the hardness of these problems. But if we want the benefits of cryptography, we really have no choice. We have an approach to this that works: we trust what humans have tried and failed over decades to find efficient solutions for.

And so far, that approach has never failed.