Standard model (cryptography)
In cryptography, the standard model is the model of computation in which the adversary is limited only by the amount of time and computational power available. Other names used are the bare model and the plain model.
Cryptographic schemes are usually based on complexity assumptions, which state that some problems, such as factorization, cannot be solved in polynomial time. Schemes that can be proven secure using only complexity assumptions are said to be secure in the standard model. Security proofs are notoriously difficult to achieve in the standard model, so in many proofs, cryptographic primitives are replaced by idealized versions. The most common example of this technique, known as the random oracle model,[1][2] involves replacing a cryptographic hash function with a genuinely random function. Another example is the generic group model,[3][4] where the adversary is given access to a randomly chosen encoding of a group, instead of the finite field or elliptic curve groups used in practice.
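As an illustration of the random oracle idealization, the following Python sketch contrasts a lazily sampled random function (the idealized object used in proofs) with a concrete hash function such as SHA-256 (its standard-model instantiation); the RandomOracle class and real_hash helper are illustrative names for this sketch, not part of any library.

```python
import os
import hashlib


class RandomOracle:
    """Lazily sampled random function from bytes to fixed-length digests.

    In the random oracle model, every hash query made by the scheme or the
    adversary is answered by an object like this one: fresh inputs receive
    independent, uniformly random outputs, and repeated inputs receive the
    same output as before.
    """

    def __init__(self, out_len: int = 32):
        self.out_len = out_len
        self.table = {}  # memoized query/answer pairs

    def query(self, message: bytes) -> bytes:
        if message not in self.table:
            self.table[message] = os.urandom(self.out_len)
        return self.table[message]


def real_hash(message: bytes) -> bytes:
    # The standard-model counterpart: a fixed, publicly computable function
    # (here SHA-256) rather than an idealized random function.
    return hashlib.sha256(message).digest()


oracle = RandomOracle()
assert oracle.query(b"hello") == oracle.query(b"hello")         # consistent answers
assert len(oracle.query(b"hello")) == len(real_hash(b"hello"))  # same output length
```

A proof in the random oracle model only covers adversaries that interact with the hash function through such query access; it does not by itself guarantee security once the oracle is replaced by a concrete function, which is one reason standard-model proofs are considered stronger.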
Other models invoke trusted third parties to perform some task without cheating; for example, the public key infrastructure (PKI) model requires a certificate authority, which, if dishonest, could produce fake certificates and use them to forge signatures or mount a man-in-the-middle attack to read encrypted messages. Other examples of this type are the common random string model, where it is assumed that all parties have access to some string chosen uniformly at random, and its generalization, the common reference string model, where a string is chosen according to some other probability distribution.[5] These models are often used for non-interactive zero-knowledge proofs (NIZK). In some applications, such as the Dolev–Dwork–Naor encryption scheme,[6] it makes sense for a particular party to generate the common reference string, while in other applications the common reference string must be generated by a trusted third party. Collectively, these models are referred to as models with special setup assumptions.
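The difference between the two string models shows up already in the setup algorithm, as in the minimal Python sketch below; the function names, group parameters, and output lengths are illustrative assumptions rather than part of any specific protocol.

```python
import os


def common_random_string_setup(length: int = 32) -> bytes:
    # Common random string model: the trusted setup samples the shared
    # string uniformly at random and publishes it to all parties.
    return os.urandom(length)


def common_reference_string_setup() -> bytes:
    # Common reference string model: the shared string may follow some
    # other distribution.  Purely as an illustration, publish a group
    # element g^x mod p while the setup discards the exponent x.
    p = 2**255 - 19   # a well-known prime modulus, chosen here for convenience
    g = 5
    x = int.from_bytes(os.urandom(32), "big") % (p - 1)
    return pow(g, x, p).to_bytes(32, "big")


crs_uniform = common_random_string_setup()        # same uniform string for every party
crs_structured = common_reference_string_setup()  # same structured string for every party
```

Whoever runs such a setup must be trusted: a dishonest setup could retain trapdoor information (such as the discarded exponent above) and use it against protocols that rely on the string.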
References
- Mihir Bellare; Phillip Rogaway (1993). "Random Oracles are Practical: A Paradigm for Designing Efficient Protocols". Conference on Computer and Communications Security (CCS). ACM. pp. 62–73. Retrieved 2007-11-01.
- Ran Canetti; Oded Goldreich; Shai Halevi (1998). "The Random Oracle Methodology Revisited". Symposium on Theory of Computing (STOC). ACM. pp. 209–218. Retrieved 2007-11-01.
- Victor Shoup (1997). "Lower bounds for discrete logarithms and related problems" (PDF). Advances in Cryptology – Eurocrypt ’97. Vol. 1233. Springer-Verlag. pp. 256–266. Retrieved 2007-11-01.
- Ueli Maurer (2005). "Abstract models of computation in cryptography" (PDF). IMA conference on Cryptography and Coding (IMACC). Vol. 3796. Springer-Verlag. pp. 1–12. Archived from the original (PDF) on 2017-07-06. Retrieved 2007-11-01.
- Ran Canetti; Rafael Pass; Abhi Shelat (2007). "Cryptography from Sunspots: How to Use an Imperfect Reference String". 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS'07). pp. 249–259. doi:10.1109/focs.2007.70. ISBN 978-0769530109.
- Danny Dolev; Cynthia Dwork; Moni Naor (1991). "Non-Malleable Cryptography" (PDF). Symposium on Theory of Computing (STOC). ACM. pp. 542–552. Retrieved 2011-12-18.