Standards with Open Questions regarding PQC Adoption
Authors: Panos Kampanakis (AWS), Marc Manzano (SandboxAQ), Deirdre Connolly (SandboxAQ), Jayati Dev (Comcast)
August 28, 2024
Successfully updating the open standards used for protecting information and data is a necessary step toward full adoption of, and migration to, quantum-resistant cryptography. Because public or open Standards Developing Organizations (SDOs) are often volunteer-driven, the pace at which specifications evolve and get finalized depends on the degree of committed effort from the community. This document analyzes standards that still need to integrate post-quantum (PQ) algorithms in order to mitigate the quantum computing risk to cryptography. These standards appear to have either not officially started this work or to be in its early stages. This document captures their status as of the time of this writing.
SSH
SSH is a widely used protocol developed in the IETF. The IETF has concluded all Working Groups (WG) focusing on SSH. Although it has come up multiple times, there is currently no intention to have a new SSH maintenance WG to bring PQ algorithms to SSH.
draft-kampanakis-curdle-ssh-pq-ke is the PQ SSH draft specification which introduces ML-KEM, the KEM selected for standardization by NIST, to the SSH protocol for quantum-resistant hybrid key exchanges. This draft was interoperability-tested between different participating entities in NIST's NCCoE PQ Migration effort. It is already supported in the open-source OQS OpenSSH fork and wolfSSH, and in SFTP file transfers in AWS Transfer Family. draft-josefsson-ntruprime-ssh is another draft specification which introduces Streamlined NTRU Prime, a quantum-resistant KEM submitted to NIST's PQC Project that was eliminated after NIST's third round.
Currently there is no path for an SSH draft specification to be ratified in the IETF. Fortunately, the IANA registry which maintains identifiers for SSH now allows Expert Review codepoint assignments. This means that any stable specification can be reviewed by the designated experts in the IETF and be assigned identifiers for use in the protocol. The sntrup761x25519-sha512 codepoint was assigned for SSH with draft-josefsson-ntruprime-ssh-02 as the stable specification. After ML-KEM is finalized (FIPS 203), draft-kampanakis-curdle-ssh-pq-ke will serve as the stable specification for requesting codepoints for the ML-KEM hybrid key exchange methods in SSH. OpenSSH, wolfSSH, and AWS' SSH implementations have indicated intent to implement PQ-hybrid key exchange following draft-kampanakis-curdle-ssh-pq-ke with ML-KEM and to leverage the IANA algorithm identifiers assigned via IETF Expert Review.
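The core idea behind these hybrid key exchange methods is that the post-quantum and classical shared secrets are concatenated and hashed before feeding SSH's key derivation. A minimal sketch of that combiner is below; the placeholder byte strings stand in for real ML-KEM decapsulation and X25519 outputs, and the actual drafts additionally bind the key exchange transcript:

```python
import hashlib

def hybrid_shared_secret(k_pq: bytes, k_classical: bytes) -> bytes:
    """Combine a post-quantum and a classical shared secret by hashing
    their concatenation, the approach taken by the hybrid SSH kex drafts.
    (Sketch only; the real methods also hash the full kex transcript.)"""
    return hashlib.sha256(k_pq + k_classical).digest()

# Placeholder secrets; real values come from ML-KEM and X25519.
k_pq = b"\x01" * 32
k_cl = b"\x02" * 32
secret = hybrid_shared_secret(k_pq, k_cl)
assert len(secret) == 32
```

The security intuition is that an attacker must break both components: as long as either ML-KEM or X25519 remains secure, the combined secret stays unpredictable.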
MACsec
MACsec is a protocol for Layer 2 network encryption, specified in the IEEE 802.1AE standard. MACsec uses the MKA protocol to distribute or establish keys between two parties and then uses these keys to symmetrically encrypt traffic. The symmetric keys can be derived from a pre-shared secret or from EAP-TLS, which establishes a master secret. Two publications [1] [2] have investigated how MACsec can become quantum-resistant by leveraging quantum-resistant EAP-TLS or pre-shared secrets with enough entropy. IEEE confirmed this assessment and verified that quantum-resistant EAP-TLS would make MACsec quantum-resistant. Post-quantum key exchanges in TLS 1.3 are being standardized in the IETF [3]. Thus, using quantum-resistant TLS 1.3 for EAP-TLS authentication in MACsec would address the quantum resistance requirement.
UEFI
UEFI is a specification for firmware and software signing. It is implemented by various vendors which sign their firmware and software. It usually consists of a verified boot sequence that starts from a secure hardware location and boots only firmware and software that has been authenticated.
The software signing topic is relatively urgent because updating BIOS or firmware is not trivial. BIOS or firmware images stay in the field for a long time, so adding quantum-resistant signatures early is important. Introducing the new signatures does not mean removing the classical ones, given that an upgrade is not always straightforward. UEFI / PKCS #7 / CMS, which define the structures for signing firmware and software, allow multiple signatures to coexist so that verifiers can verify the signatures they support. There is precedent for this approach from the SHA-1 to SHA-256 migration.
There have been academic papers investigating quantum-resistant signatures for UEFI [3], [4]. A 2021 presentation [5] laid out the landscape for new quantum-resistant hash-based signatures in UEFI BIOS. A proof-of-concept implementation brought stateful hash-based signatures to UEFI [6]. The UEFI 2.10 specification [7] introduced more crypto agility [8] by removing mandates of specific digest algorithms for the images, as discussed in an old agility Bugzilla issue [9]. In principle, the specification could accommodate any signature algorithm today if it were defined externally. The IETF is working on bringing quantum-resistant hash-based signatures to PKCS#7/CMS [10] [11]. So, after the IETF has specified the identifiers for these signatures, they could be used in UEFI.
Tianocore EDK II, UEFI's open-source implementation, does not support post-quantum signatures yet; it relies on third-party libraries to provide its cryptographic algorithms. The PoC from 2021 [12] showed how to include stateful hash-based signatures in EDK II by leveraging third-party open-source implementations.
Two Bugzilla reports [4089], [4087] brought up quantum-resistant signatures in Tianocore and showed that EDK II maintainers are considering the introduction of hash-based signature support.
TCP
New post-quantum algorithms standardized by NIST will have implications for transport protocols. Post-quantum signatures such as ML-DSA could lead to more than 15KB of authentication data (certificate chain, certificate transparency SCTs, and the CertificateVerify signature) in TLS connections over TCP.
15KB exceeds the typical TCP initial congestion window of ~14KB (initcwnd=10), which can trigger an extra round-trip time (RTT) with large ML-DSA certificate chains. Additional RTTs affect the time-to-first-byte of connections. This issue was shown in various publications [13], [14], [15], [16]. Increasing the initial congestion window would address the problem and is already done by CDNs, which set a larger initcwnd. However, initcwnd values for today's networks have not been studied the way [17] did before the increase to 10 MSS (RFC 6928).
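The effect of the initial congestion window on handshake latency can be illustrated with a simple slow-start model. This is an illustrative calculation under idealized assumptions (no loss, uniform MSS, window doubling each RTT), not a full TCP simulation:

```python
def handshake_round_trips(auth_data_bytes: int, initcwnd_segments: int,
                          mss: int = 1460) -> int:
    """Estimate how many round trips TCP needs to deliver the server's
    handshake flight, assuming slow start doubles the window each RTT
    and no losses occur (illustrative model only)."""
    window = initcwnd_segments * mss
    sent = window
    rtts = 1
    while sent < auth_data_bytes:
        window *= 2          # slow start: cwnd doubles per RTT
        sent += window
        rtts += 1
    return rtts

# ~15KB of ML-DSA authentication data vs. the default initcwnd of 10:
print(handshake_round_trips(15 * 1024, 10))   # 2 RTTs (~14.6KB fits in one)
print(handshake_round_trips(15 * 1024, 20))   # 1 RTT with a larger initcwnd
```

This matches the observation above: with initcwnd=10 a ~15KB flight spills into a second round trip, while a modestly larger initcwnd avoids it.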
The topic was discussed in IETF's TCPM Working Group, the group that maintains and updates the TCP protocol. The discussion centered on the value of the initial congestion window and whether it should be picked by each sender instead of being changed globally. The TCPM WG had an old draft, draft-allman-tcpm-no-initwin, which proposed letting the sender choose an arbitrary initcwnd; the rationale is explained in the draft. Also, draft-ietf-tsvwg-careful-resume allows reusing previously discovered network parameters in subsequent TCP connections, which could prevent extra round trips with ML-DSA certificates under good network conditions. Additionally, Appendix C of RFC 9040 includes an algorithm for tracking and calibrating the initcwnd across TCP connections. The consensus of the TCPM WG discussion was that senders can choose and update the TCP initcwnd for their connections based on network conditions. Thus, TCP has become more flexible and could support large quantum-resistant signatures in TLS if implemented correctly.
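As a concrete example of sender-side control, Linux already lets an operator raise the initial congestion window per route via iproute2. This is a sketch only: the gateway address and interface below are placeholders, and the change requires root privileges:

```shell
# Show the current default route, then raise its initial congestion window.
# 192.0.2.1 and eth0 are placeholders for the actual gateway and interface.
ip route show default
sudo ip route change default via 192.0.2.1 dev eth0 initcwnd 40
```

A value like 40 MSS would let a ~15KB handshake flight leave in the first round trip, at the cost of burstier startup behavior on constrained paths.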
QUIC
New post-quantum algorithms standardized by NIST will also have implications for QUIC. Post-quantum signatures (ML-DSA) could lead to more than 15KB of authentication data (certificate chain and CertificateVerify signature) in QUIC connections.
15KB means that the ~4KB QUIC amplification window and the QUIC initial congestion window could lead to additional round-trip times (RTTs). Additional RTTs affect the time-to-first-byte of connections. The issue was shown in the recently published NIST NCCoE SP 1800-38C (Section 7.3, Figure 4) and was discussed in [16].
Addressing the amplification protection issue requires careful consideration. Increasing the limit to the 10-15x that the new signature algorithms would require to prevent the round trip increases the amplification risk. QUIC validation tokens could alleviate the issue, especially for clients that keep communicating with the same servers, but they are not a general solution. Alternatively, although it wastes bandwidth, the client could send its ClientHello multiple times, which would increase the 3x window for the response and could also serve as a ClientHello loss-prevention mechanism.
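The amplification arithmetic can be made concrete. Before address validation, RFC 9000 lets a server send at most 3x the bytes it has received, and client Initial packets are padded to at least 1200 bytes. The sketch below estimates how many padded client Initials the server must receive before its whole certificate flight fits under the limit (illustrative only; header and coalescing overheads are ignored):

```python
import math

AMPLIFICATION_FACTOR = 3   # RFC 9000 pre-validation send limit
MIN_INITIAL_SIZE = 1200    # client Initial packets are padded to >= 1200B

def client_initials_needed(server_flight_bytes: int) -> int:
    """How many padded client Initial packets the server must receive
    before it may send the whole handshake flight without waiting for
    address validation (simplified arithmetic)."""
    budget_per_initial = AMPLIFICATION_FACTOR * MIN_INITIAL_SIZE
    return math.ceil(server_flight_bytes / budget_per_initial)

print(client_initials_needed(4 * 1024))    # ~4KB classical chain: 2
print(client_initials_needed(15 * 1024))   # ~15KB ML-DSA chain: 5
```

This is why repeating the ClientHello helps: each extra padded Initial adds ~3.6KB to the server's pre-validation send budget.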
What is more, NIST NCCoE's SP 1800-38C showed (in Section 7.3, Figure 5) that packet pacing in QUIC can add ~65ms to each handshake. The packet pacing slowdown is controlled by kInitialRtt. The default kInitialRtt=333ms for QUIC is a "SHOULD" in RFC 9002 and derives from TCP's recommended 1-second initial retransmission timeout (Section 2 of RFC 6298), which a 333ms initial RTT estimate reproduces as QUIC's initial PTO. Changing kInitialRtt on the server to alleviate the packet pacing slowdown is possible, but it should be carefully considered.
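The pacing slowdown follows from RFC 9002's suggested pacing rate of roughly N * congestion_window / smoothed_rtt (with N = 1.25). Until the first RTT sample arrives, smoothed_rtt falls back to kInitialRtt, so early packets are spaced far apart even on fast networks. The sketch below computes the implied inter-packet gap; the 12000-byte window (10 datagrams of 1200 bytes) is an assumed initial congestion window, and this is illustrative arithmetic rather than a full pacing model:

```python
def pacing_interval_ms(packet_bytes: int, cwnd_bytes: int,
                       smoothed_rtt_ms: float, n: float = 1.25) -> float:
    """Inter-packet gap implied by RFC 9002's suggested pacing rate,
    rate ~= N * congestion_window / smoothed_rtt (illustrative only)."""
    rate_bytes_per_ms = n * cwnd_bytes / smoothed_rtt_ms
    return packet_bytes / rate_bytes_per_ms

# With no RTT sample yet, smoothed_rtt defaults to kInitialRtt = 333ms,
# so 1200-byte packets leave tens of milliseconds apart:
print(round(pacing_interval_ms(1200, 12000, 333), 1))   # ~26.6ms per packet
# Once a real 10ms RTT is measured, the gap collapses to under 1ms:
print(round(pacing_interval_ms(1200, 12000, 10), 1))    # ~0.8ms per packet
```

Gaps on this scale in the first flight are consistent with the tens-of-milliseconds handshake penalty reported above, and they shrink as soon as a real RTT sample replaces kInitialRtt.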
These issues were brought up on the QUIC WG list without generating much discussion. We also raised them with the QUIC WG chairs for awareness and to explain that they will need to be addressed if quantum-resistant signatures make it into QUIC.
FIDO2
FIDO2 is a protocol for securely authenticating to web applications without passwords, using security tokens instead. In a recent whitepaper, the FIDO Alliance makes a clear statement that selecting the most suitable post-quantum cryptographic algorithms and facilitating a smooth transition from current algorithms to quantum-resistant ones are two key objectives. This is particularly challenging when using classical/PQ hybrid algorithms on very constrained devices such as NFC tokens while respecting the protocol's specifications. The first end-to-end post-quantum secure implementation of the FIDO2 protocol has been open-sourced and can be found on GitHub. It uses ML-KEM and ML-DSA.
There is pending work to ensure that FIDO2 with PQC is efficient enough to run on constrained devices without significantly impacting usability. Moreover, it is possible that instead of ML-DSA other signature algorithms will be chosen, potentially from NIST's new signature on-ramp standardization process.
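To see why constrained devices are the sticking point, it helps to compare the raw public-key and signature sizes an authenticator would carry in its messages. The classical sizes below are approximate raw encodings (ECDSA values exclude DER overhead); the ML-DSA-44 figures are from FIPS 204:

```python
# Approximate per-algorithm sizes (bytes) relevant to FIDO2 attestation
# and assertion messages. Classical sizes are raw encodings; ML-DSA-44
# sizes come from FIPS 204.
SIZES = {
    "Ed25519":    {"public_key": 32,   "signature": 64},
    "ECDSA-P256": {"public_key": 64,   "signature": 64},
    "ML-DSA-44":  {"public_key": 1312, "signature": 2420},
}

for alg, s in SIZES.items():
    total = s["public_key"] + s["signature"]
    print(f"{alg:>10}: pk={s['public_key']}B sig={s['signature']}B total={total}B")
```

A roughly 30x growth in key-plus-signature material is what makes hybrid deployment over bandwidth-limited transports such as NFC challenging.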
DNSSEC
The Domain Name System (DNS) is a distributed service that translates domain names to IP addresses so that the corresponding resources can be accessed and retrieved. The IETF standardized a set of extensions to this protocol, called DNSSEC, that provide additional security and ensure the integrity of DNS information. DNSSEC uses signatures to validate the integrity of DNS responses.
PQ signatures introduce multiple challenges to the DNSSEC architecture. A drop-in replacement is not straightforward because of the large public key and signature sizes. Various publications [18] [19] have investigated the challenges the new algorithms bring to the protocol.
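The size problem can be quantified against the 1232-byte EDNS0 UDP payload size commonly recommended to avoid fragmentation. The classical signature sizes below are exact raw encodings; the post-quantum figures come from the NIST standards and submissions (Falcon signatures are variable-length, so an average is used):

```python
# Approximate RRSIG signature sizes (bytes) vs. the commonly recommended
# 1232-byte EDNS0 UDP payload limit. Falcon-512's size is an average.
UDP_LIMIT = 1232
SIG_BYTES = {
    "ECDSA-P256":   64,
    "RSA-2048":     256,
    "Falcon-512":   666,
    "ML-DSA-44":    2420,
    "SLH-DSA-128s": 7856,
}

for alg, size in SIG_BYTES.items():
    verdict = "fits" if size < UDP_LIMIT else "exceeds"
    print(f"{alg:>12}: {size:5}B ({verdict} the {UDP_LIMIT}B UDP limit)")
```

A single ML-DSA or SLH-DSA signature already exceeds the UDP budget before any other record data is counted, which is why fragmentation schemes and TCP fallback feature so prominently in the proposals above.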
PQ DNSSEC is an open problem. There have been proposals to address the concerns, like request-based fragmentation [20], a testbed for experiments, and proposals from DNS providers [21]. Although the Internet Corporation for Assigned Names and Numbers (ICANN) declared in 2022 that it was premature to consider the PQ DNSSEC challenges, PQ DNSSEC is currently scoped as a research topic in the IETF.