FrodoKEM: A conservative quantum-safe cryptographic algorithm

    In this post, we describe FrodoKEM, a key encapsulation protocol that offers a simple design and provides strong security guarantees even in a future with powerful quantum computers.
    The quantum threat to cryptography
    For decades, modern cryptography has relied on mathematical problems that are practically impossible for classical computers to solve without a secret key. Cryptosystems like RSA, Diffie-Hellman key exchange, and elliptic curve-based schemes—which rely on the hardness of the integer factorization and (elliptic curve) discrete logarithm problems—secure communications on the internet, banking transactions, and even national security systems. However, the emergence of quantum computing puts this foundation at risk.
    Quantum computers leverage the principles of quantum mechanics to perform certain calculations exponentially faster than classical computers. Their ability to solve complex problems, such as simulating molecular interactions, optimizing large-scale systems, and accelerating machine learning, is expected to have profound and beneficial implications for fields ranging from chemistry and materials science to artificial intelligence.

    At the same time, quantum computing is poised to disrupt cryptography. In particular, Shor’s algorithm, a quantum algorithm developed in 1994, can efficiently factor large numbers and compute discrete logarithms—the very problems that underpin the security of RSA, Diffie-Hellman, and elliptic curve cryptography. This means that once large-scale, fault-tolerant quantum computers become available, public-key protocols based on RSA, ECC, and Diffie-Hellman will become insecure, breaking a sizable portion of the cryptographic backbone of today’s digital world. Recent advances in quantum computing, such as Microsoft’s Majorana 1, the first quantum processor powered by topological qubits, represent major steps toward practical quantum computing and underscore the urgency of transitioning to quantum-resistant cryptographic systems.
    To address this looming security crisis, cryptographers and government agencies have been working on post-quantum cryptography (PQC)—new cryptographic algorithms that can resist attacks from both classical and quantum computers.
    The NIST Post-Quantum Cryptography Standardization effort
    In 2017, the U.S. National Institute of Standards and Technology (NIST) launched the Post-Quantum Cryptography Standardization project to evaluate and select cryptographic algorithms capable of withstanding quantum attacks. As part of this initiative, NIST sought proposals for two types of cryptographic primitives: key encapsulation mechanisms (KEMs)—which enable two parties to securely derive a shared key to establish an encrypted connection, similar to traditional key exchange schemes—and digital signature schemes.
    This initiative attracted submissions from cryptographers worldwide, and after multiple evaluation rounds, NIST selected CRYSTALS-Kyber, a KEM based on structured lattices, and standardized it as ML-KEM. Additionally, NIST selected three digital signature schemes: CRYSTALS-Dilithium, now called ML-DSA; SPHINCS+, now called SLH-DSA; and Falcon, now called FN-DSA.
    While ML-KEM provides great overall security and efficiency, some governments and cryptographic researchers advocate for the inclusion and standardization of alternative algorithms that minimize reliance on algebraic structure. Reducing algebraic structure might prevent potential vulnerabilities and, hence, can be considered a more conservative design choice. One such algorithm is FrodoKEM.
    International standardization of post-quantum cryptography
    Beyond NIST, other international standardization bodies have been actively working on quantum-resistant cryptographic solutions. The International Organization for Standardization (ISO) is leading a global effort to standardize additional PQC algorithms. Notably, European government agencies—including Germany’s BSI, the Netherlands’ NLNCSA and AIVD, and France’s ANSSI—have shown strong support for FrodoKEM, recognizing it as a conservative alternative to structured lattice-based schemes.
    As a result, FrodoKEM is undergoing standardization at ISO. Additionally, ISO is standardizing ML-KEM and a conservative code-based KEM called Classic McEliece. These three algorithms are planned for inclusion in ISO/IEC 18033-2:2006 as Amendment 2.
    What is FrodoKEM?
    FrodoKEM is a key encapsulation mechanism (KEM) based on the Learning with Errors (LWE) problem, a cornerstone of lattice-based cryptography. Unlike structured lattice-based schemes such as ML-KEM, FrodoKEM is built on generic, unstructured lattices, i.e., it is based on the plain LWE problem.
    Why unstructured lattices?
    Structured lattice-based schemes introduce additional algebraic properties that could potentially be exploited in future cryptanalytic attacks. By using unstructured lattices, FrodoKEM eliminates these concerns, making it a safer choice in the long run, albeit at the cost of larger key sizes and lower efficiency.
    It is important to emphasize that no particular cryptanalytic weaknesses are currently known for recommended parameterizations of structured lattice schemes in comparison to plain LWE. However, our current understanding of the security of these schemes could potentially change in the future with cryptanalytic advances.
    Lattices and the Learning with Errors (LWE) problem
    Lattice-based cryptography relies on the mathematical structure of lattices, which are regular arrangements of points in multidimensional space. A lattice is defined as the set of all integer linear combinations of a set of basis vectors. The difficulty of certain computational problems on lattices, such as the Shortest Vector Problemand the Learning with Errorsproblem, forms the basis of lattice-based schemes.
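    As a toy illustration of this definition (the numbers below are purely illustrative), the following sketch enumerates a few points of the two-dimensional lattice generated by the basis vectors (2, 0) and (1, 3):

        import numpy as np

        # A lattice is the set of all integer linear combinations of its
        # basis vectors. Toy 2-D example with basis b1 = (2, 0), b2 = (1, 3).
        B = np.array([[2, 0],
                      [1, 3]])

        points = [a * B[0] + c * B[1]      # integer combination a*b1 + c*b2
                  for a in range(-2, 3)
                  for c in range(-2, 3)]
        # e.g., a = 1, c = -1 gives the lattice point (1, -3)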
    The Learning with Errors (LWE) problem
    The LWE problem is a fundamental hard problem in lattice-based cryptography. It involves solving a system of linear equations where some small random error has been added to each equation, making it extremely difficult to recover the original secret values. This added error ensures that the problem remains computationally infeasible, even for quantum computers. Figure 1 below illustrates the LWE problem, specifically, the search version of the problem.
    As can be seen in Figure 1, the setup of the problem requires a dimension \(n\) that defines the size of the matrices, a modulus \(q\) that defines the value range of the matrix coefficients, and an error distribution \(\chi\) from which we sample “small” matrices. We sample two matrices from \(\chi\), a small matrix \(\text{s}\) and an error matrix \(\text{e}\) (for simplicity, we assume that both have only one column); sample an \(n \times n\) matrix \(\text{A}\) uniformly at random; and compute \(\text{b} = \text{A} \times \text{s} + \text{e}\). In the illustration, each matrix coefficient is represented by a colored square, and the “legend of coefficients” gives an idea of the size of the respective coefficients; e.g., orange squares represent the small coefficients of matrix \(\text{s}\) (small relative to the modulus \(q\)). Finally, given \(\text{A}\) and \(\text{b}\), the search LWE problem consists in finding \(\text{s}\). This problem is believed to be hard for suitably chosen parameters (e.g., for dimension \(n\) sufficiently large) and is used at the core of FrodoKEM.
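    To make this setup concrete, here is a minimal sketch of generating a search-LWE instance in Python. The parameters are toy values chosen for readability; as a point of reference, FrodoKEM’s level 1 parameter set uses \(n = 640\) and \(q = 2^{15}\), with a carefully shaped error distribution rather than the uniform range used below:

        import numpy as np

        # Toy search-LWE instance; parameters are illustrative only.
        n, q = 8, 2**15
        rng = np.random.default_rng(0)

        A = rng.integers(0, q, size=(n, n))    # public matrix, uniform mod q
        s = rng.integers(-2, 3, size=(n, 1))   # small secret matrix (one column)
        e = rng.integers(-2, 3, size=(n, 1))   # small error matrix (one column)
        b = (A @ s + e) % q                    # LWE sample: b = A*s + e (mod q)

        # The search-LWE challenge: given only (A, b), recover s. Without the
        # error e this would be plain linear algebra; adding e makes it hard.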
    In comparison, the LWE variant used in ML-KEM—called Module-LWE (M-LWE)—has additional symmetries, adding mathematical structure that helps improve efficiency. In a setting similar to that of the search LWE problem above, the matrix \(\text{A}\) can be represented by just a single row of coefficients.
    FIGURE 1: Visualization of the (search) LWE problem.
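    To see what this additional structure means in practice, here is an illustrative sketch (a generic negacyclic example, not ML-KEM’s exact construction): in rings of the form \(\mathbb{Z}_q[x]/(x^n + 1)\), multiplication by a fixed ring element corresponds to a negacyclic matrix that is fully determined by a single row or column of coefficients, which is why structured schemes can store and transmit \(n\) coefficients instead of \(n^2\):

        import numpy as np

        # Multiplication by a(x) in Z_q[x]/(x^n + 1) acts as a negacyclic
        # matrix: n coefficients determine the whole n x n matrix.
        def negacyclic_matrix(a):
            n = len(a)
            M = np.zeros((n, n), dtype=int)
            for i in range(n):
                for j in range(n):
                    k = i - j
                    # wrap-around picks up a sign flip because x^n = -1
                    M[i, j] = a[k] if k >= 0 else -a[n + k]
            return M

        a = [1, 2, 3, 4]              # a single row/column of coefficients...
        M = negacyclic_matrix(a)      # ...expands to the full 4 x 4 matrix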
    LWE is conjectured to be quantum-resistant, and FrodoKEM’s security is directly tied to its hardness. In other words, cryptanalysts and quantum researchers have not been able to devise an efficient quantum algorithm capable of solving the LWE problem and, hence, of breaking FrodoKEM. In cryptography, absolute security can never be guaranteed; instead, confidence in a problem’s hardness comes from extensive scrutiny and its resilience against attacks over time.
    How FrodoKEM Works
    FrodoKEM follows the standard paradigm of a KEM, which consists of three main operations—key generation, encapsulation, and decapsulation—performed interactively between a sender and a recipient with the goal of establishing a shared secret key:

    Key generation (KeyGen), computed by the recipient

    Generates a public key and a secret key.
    The public key is sent to the sender, while the secret key remains private.

    Encapsulation (Encapsulate), computed by the sender

    Generates a random session key.
    Encrypts the session key using the recipient’s public key to produce a ciphertext.
    Produces a shared key using the session key and the ciphertext.
    The ciphertext is sent to the recipient.

    Decapsulation (Decapsulate), computed by the recipient

    Decrypts the ciphertext using their secret key to recover the original session key.
    Reproduces the shared key using the decrypted session key and the ciphertext.

    The shared key generated by the sender and reconstructed by the recipient can then be used to establish secure symmetric-key encryption for further communication between the two parties.
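    The sketch below shows this three-operation dataflow in Python. It is a structural mock, not FrodoKEM itself: the public-key encryption step is replaced by a placeholder XOR so the key and ciphertext plumbing stands out, including the detail, stated above, that both parties derive the shared key from the session key together with the ciphertext:

        import hashlib
        import secrets

        # Structural sketch of a KEM (NOT FrodoKEM): the encryption step is
        # a mock XOR so the key/ciphertext dataflow stands out.

        def keygen():
            sk = secrets.token_bytes(32)                    # secret key, kept by recipient
            pk = hashlib.shake_256(b"pk" + sk).digest(32)   # placeholder public key
            return pk, sk

        def encapsulate(pk):
            session_key = secrets.token_bytes(32)           # fresh random session key
            ct = bytes(x ^ y for x, y in zip(session_key, pk))   # mock "encryption"
            shared = hashlib.shake_256(session_key + ct).digest(32)
            return ct, shared                               # ct is sent to the recipient

        def decapsulate(ct, sk):
            pk = hashlib.shake_256(b"pk" + sk).digest(32)
            session_key = bytes(x ^ y for x, y in zip(ct, pk))   # mock "decryption"
            return hashlib.shake_256(session_key + ct).digest(32)

        pk, sk = keygen()                    # recipient
        ct, key_sender = encapsulate(pk)     # sender
        key_recipient = decapsulate(ct, sk)  # recipient
        assert key_sender == key_recipient   # both ends now share the same key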
    Figure 2 below shows a simplified view of the FrodoKEM protocol. As highlighted in red, FrodoKEM uses at its core LWE operations of the form \(\text{b} = \text{A} \times \text{s} + \text{e}\), which are directly applied within the KEM paradigm.
    FIGURE 2: Simplified overview of FrodoKEM.
    Performance: Strong security has a cost
    Not relying on additional algebraic structure certainly comes at a cost for FrodoKEM in the form of increased protocol runtime and bandwidth. The table below compares the performance and key sizes corresponding to the FrodoKEM level 1 parameter set (the variant called “FrodoKEM-640-AES”) and the respective parameter set of ML-KEM (the variant called “ML-KEM-512”). These parameter sets are intended to match or exceed the brute-force security of AES-128. As can be seen, the difference in speed and key sizes between FrodoKEM and ML-KEM is more than an order of magnitude. Nevertheless, the runtime of the FrodoKEM protocol remains reasonable for most applications. For example, on our benchmarking platform clocked at 3.2 GHz, the measured runtimes are 0.97 ms, 1.9 ms, and 3.2 ms for security levels 1, 2, and 3, respectively.
    For security-sensitive applications, a more relevant comparison is with Classic McEliece, a post-quantum code-based scheme also considered for standardization. In this case, FrodoKEM offers several efficiency advantages. Classic McEliece’s public keys are significantly larger—well over an order of magnitude greater than FrodoKEM’s—and its key generation is substantially more computationally expensive. Nonetheless, Classic McEliece provides an advantage in certain static key-exchange scenarios, where its high key generation cost can be amortized across multiple key encapsulation executions.
    TABLE 1: Comparison of key sizes and performance on an x86-64 processor for NIST level 1 parameter sets.
    A holistic design made with security in mind
    FrodoKEM’s design supports security beyond its reliance on generic, unstructured lattices, which minimizes the attack surface exposed to potential future cryptanalytic threats. Its parameters have been carefully chosen with additional security margins to withstand advances in known attacks. Furthermore, FrodoKEM is designed with simplicity in mind—its internal operations are based on straightforward matrix-vector arithmetic using integer coefficients reduced modulo a power of two. These design decisions facilitate simple, compact, and secure implementations that are also easier to maintain and to protect against side-channel attacks.
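    As a small illustration of that last point (a generic observation, not code from FrodoKEM’s implementations): when the modulus \(q\) is a power of two, reduction modulo \(q\) is a single bit-mask, with no division and no data-dependent branching, which helps keep implementations compact and constant-time friendly:

        # Reduction modulo a power-of-two modulus is a single bit-mask.
        q = 2**15            # a modulus of the shape FrodoKEM uses
        mask = q - 1

        def mod_q(x: int) -> int:
            return x & mask  # equivalent to x % q when q is a power of two

        assert mod_q(q + 123) == 123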
    Conclusion
    After years of research and analysis, the next generation of post-quantum cryptographic algorithms has arrived. NIST has chosen strong PQC protocols that we believe will serve Microsoft and its customers well in many applications. For security-sensitive applications, FrodoKEM offers a secure yet practical approach for post-quantum cryptography. While its reliance on unstructured lattices results in larger key sizes and higher computational overhead compared to structured lattice-based alternatives, it provides strong security assurances against potential future attacks. Given the ongoing standardization efforts and its endorsement by multiple governmental agencies, FrodoKEM is well-positioned as a viable alternative for organizations seeking long-term cryptographic resilience in a post-quantum world.
    Further Reading
    For those interested in learning more about FrodoKEM, post-quantum cryptography, and lattice-based cryptography, the following resources provide valuable insights:

    The official FrodoKEM website: https://frodokem.org/, which contains, among several other resources, FrodoKEM’s specification document.
    The official FrodoKEM software library: https://github.com/Microsoft/PQCrypto-LWEKE, which contains reference and optimized implementations of FrodoKEM written in C and Python.
    NIST’s Post-Quantum Cryptography Project: https://csrc.nist.gov/projects/post-quantum-cryptography.
    Microsoft’s blog post on its transition plan for PQC: https://techcommunity.microsoft.com/blog/microsoft-security-blog/microsofts-quantum-resistant-cryptography-is-here/4238780.
    A comprehensive survey on lattice-based cryptography: Peikert, C. “A Decade of Lattice Cryptography.” Foundations and Trends in Theoretical Computer Science (2016).
    A comprehensive tutorial on modern lattice-based schemes, including ML-KEM and ML-DSA: Lyubashevsky, V. “Basic Lattice Cryptography: The Concepts behind Kyber (ML-KEM) and Dilithium (ML-DSA).” Cryptology ePrint Archive, https://eprint.iacr.org/2024/1287 (2024).
Identity Security Has an Automation Problem—And It's Bigger Than You Think

    For many organizations, identity security appears to be under control. On paper, everything checks out. But new research from Cerby, based on insights from over 500 IT and security leaders, reveals a different reality: too much still depends on people—not systems—to function. In fact, fewer than 4% of security teams have fully automated their core identity workflows.
    Core workflows, such as enrolling in multi-factor authentication (MFA), keeping credentials secure and up to date, and revoking access the moment someone leaves, are often manual, inconsistent, and vulnerable to error. And when security execution relies on memory or follow-up, gaps appear fast.
    Human error remains one of the biggest threats to enterprise security. Verizon's 2025 Data Breach Investigations Report found that the human element was involved in 60% of breaches. The same manual missteps that led to breaches a decade ago still expose identity systems today. Cerby's 2025 Identity Automation Gap research report shows just how widespread the issue is—and how far automation still has to go.
    The last mile still runs on human error
    The data reveals a persistent reliance on human action for tasks that should be automated across the identity security lifecycle.

    41% of end users still share or update passwords manually, using insecure methods like spreadsheets, emails, or chat tools. They are rarely updated or monitored, increasing the likelihood of credential misuse or compromise.
    Nearly 89% of organizations rely on users to manually enable MFA in applications, despite MFA being one of the most effective security controls. Without enforcement, protection becomes optional, and attackers know how to exploit that inconsistency.
    59% of IT teams handle user provisioning and deprovisioning manually, relying on ticketing systems or informal follow-ups to grant and remove access. These workflows are slow, inconsistent, and easy to overlook—leaving organizations exposed to unauthorized access and compliance failures.

    Organizations can't afford to wait
    The consequences are no longer hypothetical.
    According to the Ponemon Institute, 52% of enterprises have experienced a security breach caused by manual identity work in disconnected applications. Most of these organizations experienced four or more such breaches. The downstream impact was tangible: 43% reported customer loss, and 36% lost partners.
    These failures are predictable and preventable, but only if organizations stop relying on humans to carry out what should be automated. Identity is no longer a background system. It's one of the primary control planes in enterprise security. As attack surfaces expand and threat actors become more sophisticated, the automation gap becomes harder—and riskier—to ignore.
    Why the automation gap persists
    Why do these manual gaps still exist if automation is so critical to identity security? They've emerged as a byproduct of rapid growth, application sprawl, and fragmented infrastructure.

    Disconnected applications are everywhere, and they don't support the common identity standards required for integration with existing identity providers. A majority of enterprise applications fall into this category, and that number continues to grow. They span every business function and are packed with sensitive data.
    IT & security teams assume tools = coverage. Environments today stretch across SaaS, mobile, cloud, and on-prem systems. Shadow IT continues to grow faster than anyone can track, as each business unit brings its own stack. Achieving full control across all applications remains highly elusive.
    Stopgap solutions don't scale. Password managers, manual scripts, and other vaulting tools are difficult to maintain and often create fragmented infrastructure. When integrations don't exist, they're frequently patched together—but these fixes are costly to build and fragile to sustain. What starts as a workaround quickly becomes an ongoing operational burden.

    Closing the automation gap
    The good news: closing the automation gap doesn't require rebuilding or replacing your identity stack. It means completing it.
    Forward-thinking organizations are bringing automation to every corner of their application ecosystem without waiting for native integrations. Some teams are also exploring AI agents to help close this gap. But trust is still evolving: 78% of security leaders say they don't trust AI to fully automate core identity tasks—yet 45% support a collaborative human-in-the-loop model.
    Cerby provides organizations with the flexibility to support both approaches—meeting teams where they are and delivering automation where it's needed most.
    Cerby's research report, The 2025 Identity Automation Gap, includes findings from 500+ IT and security leaders and practical steps for closing one of the most overlooked risks in enterprise security.
    Download the full report or schedule a 15-minute demo to see how Cerby brings automation across your entire identity surface.

    Found this article interesting? This article is a contributed piece from one of our valued partners. Follow us on Twitter and LinkedIn to read more exclusive content we post.