FACULTEIT TOEGEPASTE WETENSCHAPPEN
DEPARTEMENT ELEKTROTECHNIEK – ESAT/COSIC
KASTEELPARK ARENBERG 10
B–3001 LEUVEN-HEVERLEE

KATHOLIEKE UNIVERSITEIT LEUVEN

Cryptology: An Introduction

Bart Preneel

Dept. Electrical Engineering-ESAT/COSIC

Katholieke Universiteit Leuven

Kasteelpark Arenberg 10, B–3001 Leuven-Heverlee, Belgium

[email protected]

April 2004

Abstract

This paper provides an overview of cryptographic algorithms. It discusses the role of cryptology in information security. Next it presents the different types of algorithms for encryption and data authentication and explains the principles of stream ciphers, block ciphers, hash functions, MAC algorithms, public-key encryption algorithms, and digital signature schemes. Subsequently the design and evaluation procedures for cryptographic algorithms are discussed and a perspective is offered on research challenges.

Contents

1 Introduction

2 Encryption for Secrecy Protection
  2.1 Symmetric Encryption
    2.1.1 The One-Time Pad or the Vernam Scheme
    2.1.2 Additive Stream Ciphers
    2.1.3 Block Ciphers
  2.2 Public-Key Encryption
    2.2.1 Public-Key Agreement
    2.2.2 Public-Key Encryption
    2.2.3 The Factoring and the Discrete Logarithm Problem
    2.2.4 Basing Public-Key Cryptology on Other Problems
    2.2.5 Applying Public-Key Cryptology

3 Hashing and Signatures for Authentication
  3.1 Symmetric Authentication
    3.1.1 MDCs
    3.1.2 MAC Algorithms
  3.2 Digital Signatures
    3.2.1 Certificates

4 Analysis and Design of Cryptographic Algorithms
  4.1 Different Approaches in Cryptography
  4.2 Life Cycle of a Cryptographic Algorithm
    4.2.1 Open Competitions
    4.2.2 Public versus Secret Algorithms
  4.3 Insecure Versus Secure Algorithms
    4.3.1 Incompetence
    4.3.2 Political Reasons
    4.3.3 Economic Reasons
    4.3.4 Brute Force Attacks Become Easier over Time
    4.3.5 Shortcut Attacks Become More Efficient
    4.3.6 New Attack Models

5 Concluding Remarks


1 Introduction

Our society is more and more dependent on information systems. An ever increasing number of interactions is moving on-line, supported by a growing infrastructure of networks (wired and wireless), servers and databases. End users, organizations and governments are exchanging information and setting up e-services; after the end of the dot-com hype, we see a steady growth of e-commerce and e-business, and even the first successes in e-government. In the next years we can expect the growth of mobile services, and in five to ten years we will witness the emergence of ubiquitous (or pervasive) computing, ambient intelligence, . . .

However, information systems and communication networks are increasingly subjected to a broad range of abuses. The risk of misuse has increased considerably, as potential attackers can operate from all over the globe. Moreover, if someone gains access to an electronic information system, the scale and impact of the abuse can be much larger than in a brick-and-mortar or paper-based context. Hackers break into computer systems and networks with many goals: just for fun, to obtain sensitive data, for financial gain, to corrupt critical data, to bring down the system (a denial of service attack) or to use them as a launching platform for other attacks (such as a distributed denial of service attack). Malware (computer viruses and worms) spreads at a growing speed through our networks (the Sapphire/Slammer worm in January 2003 needed only 8.5 minutes to infect more than 70 000 machines). While the current anti-virus software is effective for known malware, new malware keeps popping up and keeps causing damage in the time window between its detection and the effective update of the anti-virus software. Eavesdropping on information sent over open networks is becoming a larger threat; all forms of wireless communications (e.g., mobile phones, wireless LANs (WLAN) and Personal Area Networks (PAN)) are particularly vulnerable. Privacy of users is compromised in various ways: sometimes data collection is pushed to the limit for commercial reasons, search and storage technologies become more and more effective, and standardized data formats (such as XML) make data collection easier. Spam is a growing problem: in early 2004, 60-80% (figures vary depending on the source consulted) of all email traffic consisted of unsolicited bulk email.

There is no doubt that in the past 25 years our security technology has become more sophisticated. We have realized that security problems cannot be solved only by technology, but that securing a system requires an integrated and process-oriented approach taking into account the complete context: the development process, user education, secure deployment, management of security systems including audit and intrusion detection, . . . Nevertheless, for the time being the security of our on-line environment seems to get worse, in spite of all these efforts and insights. The following reasons can be given for this:

• as mentioned above, the growth of information systems and their increased connectivity implies that the target and motivation for attackers are growing as well; simultaneously, the time between the discovery of a security weakness and its exploitation is decreasing;

• systems become increasingly complex, open, and interconnected; however, security and complexity do not mix very well. We seem to lack the know-how and tools to make a large system bug-free; a substantial part of the bugs has security implications, hence deploying new systems keeps creating new problems;

• the threshold to attack the systems becomes lower because of the on-line availability of a large number of hacker tools (password sniffers, password crackers, virus authoring tools, . . . ); this implies that unsophisticated attackers can launch smart attacks (e.g., write a clever virus) using a few mouse clicks;

• a final reason is purely economic: it is often the case that the entities who need to invest in security may not be the ones who benefit from it. This can create a market failure, which can only be resolved with regulation or self-regulation (such as a gradual increase of liability for software).

In the 1990s, protocols were developed to secure communications: the most successful ones on the Internet are TLS (Transport Layer Security), IPsec at the network layer and SSH (Secure Shell) [73]. S/MIME, the IETF standard to protect email, seems to be slow to catch on. The trend for the coming years is the improvement of the security of the end systems. Important initiatives are the TPM (Trusted Platform Module) from TCG (Trusted Computing Group) [75], formerly TCPA, and Microsoft's NGSCB (Next Generation Secure Computing Base), formerly known as Palladium [54]. In both approaches, the goal is to add a simple trusted subsystem to the computer system, which can verify the configuration, remotely attest the configuration, and store data securely. NGSCB adds additional trusted functionality in a secure subsystem.

In all these security mechanisms, cryptology plays an essential role. Without cryptology, securing the infrastructure consisting of communication networks and computer systems would not be possible. Cryptology also plays an essential role in protecting applications such as e-banking, financial transactions, electronic elections, storage of sensitive data such as health information, etc. While cryptology as an art is very old, it has developed as a science in the last 60 years; most of the open research dates from the last 30 years. A significant number of successes has been obtained, and it is clear that cryptology should no longer be the weakest link in our modern security systems. Nevertheless, as a science and engineering discipline, cryptology is still facing some challenging problems.

This article intends to present the state of the art and to offer a perspective on open research issues in the area of cryptographic algorithms. We discuss the principles underlying the design of cryptographic algorithms; we distinguish between algorithms for confidentiality protection (the protection against passive eavesdroppers) and for authentication (the protection against active eavesdroppers, who try to modify information). Then we review the different approaches taken in cryptography, and we discuss the issues and research problems that arise when selecting, designing, and evaluating a cryptographic algorithm. Finally we present some concluding remarks.

2 Encryption for Secrecy Protection

The use of cryptography for protecting the secrecy of information is as old as writing itself (for an excellent historical overview, see D. Kahn [41] and S. Singh [72]). Some even say that 'cryptographer' is the second oldest profession. The basic idea in cryptography is to apply a complex transformation to the information to be protected. When the sender (usually called Alice) wants to convey a message to the recipient (Bob), she will apply to the plaintext P the mathematical transformation E(). This transformation E() is called the encryption algorithm; the result of this transformation is called the ciphertext, or C = E(P). Bob will decrypt C by applying the inverse transformation D = E^(-1); this way he recovers P, or P = D(C). For a secure algorithm E, the ciphertext C does not make sense to outsiders: Eve, who is tapping the connection, can obtain C, but she cannot obtain (partial information on) the corresponding plaintext P.

This approach only works when Bob can keep the transformation D secret. While this is acceptable for a person-to-person exchange, it is not feasible for large scale use. Bob needs a software or hardware implementation of D: either he has to program it himself, or he has to trust someone to write the program for him. Moreover, he will need a different transformation (and program) for each correspondent, which is not very practical. Bob and Alice always have to face the risk that somehow Eve will obtain D (or E), for example by bribing the author of the software or their system manager, or by breaking into their computer system.

This problem can be solved by introducing into the encryption algorithm E() a secret parameter, the key K. Typically such a key is a binary string of 40 to a few thousand bits. A corresponding key K* is used for the decryption algorithm D. One has thus C = E_K(P) and P = D_K*(C) (see also Fig. 1, which assumes that K* = K). The transformation has to depend strongly (and in a very complex way) on the keys: if one uses a wrong key K*' ≠ K*, one does not obtain the plaintext P but a 'random' plaintext P'. Now it is possible to publish the encryption algorithm E() and the decryption algorithm D(); the security of the system relies only on the secrecy of two short keys. This implies that E() and D() can be evaluated publicly and distributed on a commercial basis. One can think of the analogy with a mechanical lock: everyone knows how such a lock works, but in order to open a particular lock, one needs to know the key or the secret combination. (Note, however, that Matt Blaze has demonstrated in [11] that many modern locks are easy to attack and that their security relies to a large extent on security through obscurity: the methods to design and attack locks are simply not published.) The assumption that the algorithm should remain secure even if it is known to the opponent is known in cryptography as "Kerckhoffs's principle"; Kerckhoffs was a 19th century Dutch cryptographer who was the first to formulate this principle.

Figure 1: Model for conventional or symmetric encryption

A simple example of an encryption algorithm is the so-called 'Caesar cipher,' after the Roman emperor who used it. The plaintext is encrypted letter by letter; the ciphertext is obtained by shifting the letters over a fixed number of positions in the alphabet. The secret key indicates the number of positions. It is claimed that Caesar always used the value of three, such that "an example" would be encrypted to "dq hadpsoh". Another example is the name of the computer "hal" from S. Kubrick's "2001: A Space Odyssey," which was obtained by replacing the letters of "ibm" by their predecessors in the alphabet. This corresponds to a shift over 25 positions (or −1 position). It is clear that such a system is completely insecure: it suffices to try the 26 values of the key; the correct plaintext can be identified easily based on the redundancy in the plaintext.
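
The Caesar cipher fits in a few lines of code. The following Python fragment is a minimal illustration (not part of the original text): the key is the shift value, and decryption simply applies the opposite shift.

    def caesar(text, shift):
        """Shift each letter of `text` over `shift` positions in the alphabet."""
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('a') if ch.islower() else ord('A')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)  # leave spaces and punctuation untouched
        return ''.join(out)

    print(caesar("an example", 3))    # -> 'dq hadpsoh'
    print(caesar("ibm", -1))          # -> 'hal'
    print(caesar("dq hadpsoh", -3))   # decryption uses the opposite shift

An exhaustive search over the 26 possible shifts takes a fraction of a second, which is exactly the weakness described above.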

The simple substitution cipher replaces a letter by any other letter in the alphabet. For example, the key could be

ABCDEFGHIJKLMNOPQRSTUVWXYZ
MZNJSOAXFQGYKHLUCTDVWBIPER

which means that an A is mapped to an M, a B to a Z, . . . , hence THEEVENING would be encrypted to VXSSBSHFHA. For an alphabet of n letters (for English n = 26), there are n! substitutions, which implies that there are n! values for the secret key. Note that even for n = 26, trying all keys is not possible, since 26! = 403291461126605635584000000 ≈ 4 · 10^26. Even if a fast computer could try 1 billion (10^9) keys per second, it would take 1 billion years to try all keys. However, it is easy to break this scheme by frequency analysis: in a standard English text, the character E accounts for 12 out of every 100 characters (here it is assumed that spaces are omitted from the plaintext). Hence it is straightforward to deduce that the most common ciphertext character (in our example S) corresponds to an E. This way we have reduced the key space by a factor of 26. It is easy to continue this analysis, based on lower frequency letters and based on frequent combinations of two (e.g., TH) and three (e.g., THE) letters (digrams and trigrams respectively). In spite of the large key length, simple substitution is a very weak cipher, even if the cryptanalyst only has access to the ciphertext (in practice the cryptanalyst may also know part of the plaintext, for example a standard opening such as DEARSIR).
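
The frequency analysis described above is easy to automate. The short Python sketch below (an illustration added here, using a toy ciphertext) simply counts letter frequencies and proposes that the most common ciphertext letter stands for E; digram and trigram statistics would then refine the guess.

    from collections import Counter

    def letter_frequencies(ciphertext):
        """Rank the letters of the ciphertext by decreasing frequency."""
        counts = Counter(c for c in ciphertext.upper() if c.isalpha())
        return counts.most_common()

    # Toy ciphertext produced with the substitution key shown above (A->M, B->Z, ...).
    ciphertext = "VXSSBSHFHA"
    for letter, count in letter_frequencies(ciphertext):
        print(letter, count)
    # S occurs most often, so the cryptanalyst would try S -> E first.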

A second technique applied in cryptology is a transposition cipher, that is, symbols aremoved around. For example, the following mapping could be obtained:

TRANSPOSITIONS −→ SNOITISOPSNART

Here the key would indicate where the letters are moved; in the example above the block is simply reversed. If letters are grouped in blocks of n (here n = 14), there are n! different transpositions. Again solving this cipher is rather easy, for example by exploiting digrams and trigrams or by fragments of known plaintexts. In spite of these weaknesses, modern ciphers designed for electronic computers are still based on a combination of several transpositions and substitutions (cf. Sect. 2.1.3).
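
A block transposition can be sketched as follows. The Python fragment below is purely illustrative (the 5-position permutation is chosen arbitrarily, not taken from the text): the key is a permutation of positions, and the decryption key is simply the inverse permutation.

    def transpose(block, key):
        """Encrypt one block by moving the letter at position i to position key[i]."""
        out = [''] * len(block)
        for i, ch in enumerate(block):
            out[key[i]] = ch
        return ''.join(out)

    def invert(key):
        """Decryption uses the inverse permutation."""
        inv = [0] * len(key)
        for i, p in enumerate(key):
            inv[p] = i
        return inv

    key = [3, 0, 4, 1, 2]                 # an arbitrary permutation of 5 positions
    c = transpose("TRANS", key)
    print(c)                              # -> 'RNSTA'
    print(transpose(c, invert(key)))      # -> 'TRANS'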

A large number of improved ciphers have been invented; in the 15th and 16th century polyalphabetic substitution was introduced, which uses t different alphabets (for each letter one of these alphabets is selected based on some simple rule). The complexity of these ciphers was limited by the number of operations that an operator could carry out by hand. None of these manual ciphers is considered to be secure today. With the invention of telegraph and radio communications, more complex ciphers were developed; the most advanced schemes used mechanical or electromechanical systems with rotors, such as the famous Enigma (used by Germany in World War II) and the Hagelin machines. Rotor machines were used between the 1920s and the 1950s. In spite of their increased complexity, most of these schemes were not sufficiently secure in their time (one of the weak points was users not following the procedures for correct use). The analysis of one of these ciphers (the Lorenz cipher) resulted in the development of Colossus, one of the first electronic computers.

A problem which has not yet been addressed is how Alice and Bob exchange the secret key. The easy answer is that cryptography does not solve this problem; cryptography only moves the problem, and hopefully at the same time makes it easier. In this case the secrecy of a (large) plaintext has been reduced to that of a short key, which can be exchanged beforehand. The problem of exchanging keys is studied in more detail in an area of cryptography that is called key management. We will not discuss it in further detail here; see [53] for an overview of key management techniques.

The branch of science which studies the encryption of information is called cryptography. A related branch tries to 'break' encryption algorithms, by recovering the plaintext without knowing the key or by deriving the key from the ciphertext and parts of the plaintext; it is called cryptanalysis. The term cryptology covers both aspects. For more extensive introductions to cryptography, the reader is referred to [18, 44, 53, 59, 71, 74].

So far we have assumed that the key for decryption K_D is equal to the encryption key K_E, or that it is easy to derive K_D from K_E. Algorithms of this type are called conventional or symmetric ciphers. In public-key or asymmetric ciphers, K_D and K_E are always different; moreover, it should be difficult to compute K_D from K_E. This has the advantage that one can make K_E public, which has important implications for the key management problem. The remainder of this section discusses symmetric algorithms and public-key algorithms.

2.1 Symmetric Encryption

This section introduces three types of symmetric encryption algorithms: the one-time pad (also known as the Vernam scheme), additive stream ciphers and block ciphers.

2.1.1 The One-Time Pad or the Vernam Scheme.

In 1917 G.S. Vernam invented a simple encryption algorithm for telegraphic messages; it was published in [77]. The encryption operation consists of adding bit by bit (with addition modulo 2 or exor) a random key to the plaintext. The decryption operation adds the same key to the ciphertext to recover the plaintext (see Fig. 2). In practice, Vernam stored the keys on paper tapes.


Figure 2: The Vernam scheme or one-time pad

We now describe the Vernam scheme more formally. The i-th bit of the plaintext, ciphertext, and key stream are denoted with p_i, c_i, and k_i, respectively. The encryption operation can then be written as c_i = p_i ⊕ k_i. Here ⊕ denotes addition modulo 2 or exor. The decryption operation is identical to the encryption (the cipher is an involution): indeed, p_i = c_i ⊕ k_i = (p_i ⊕ k_i) ⊕ k_i = p_i ⊕ (k_i ⊕ k_i) = p_i ⊕ 0 = p_i. Vernam proposed to use a perfectly random key sequence, that is, the bit sequence k_i, i = 1, 2, . . . should consist of uniformly distributed and independent bits.
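
On bytes instead of bits the scheme looks as follows; this is a minimal Python sketch (the message and the use of the standard secrets module for randomness are illustrative), and decryption is literally the same XOR operation.

    import secrets

    def vernam(data: bytes, keystream: bytes) -> bytes:
        """XOR every byte of `data` with the corresponding byte of the key stream."""
        assert len(keystream) >= len(data), "the key must be at least as long as the message"
        return bytes(d ^ k for d, k in zip(data, keystream))

    message = b"meet me at 9pm"
    key = secrets.token_bytes(len(message))   # truly random, used only once
    ciphertext = vernam(message, key)
    print(ciphertext.hex())
    print(vernam(ciphertext, key))            # -> b'meet me at 9pm'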

In 1949 C. Shannon, the father of information theory, published his mathematical proof that shows that from observing the ciphertext, the opponent cannot obtain any new information on the plaintext, no matter how much computing power he has [68]. Shannon called this property perfect secrecy. The main disadvantage of the Vernam scheme is that the secret key is exactly as long as the message (one should never reuse a key stream). C. Shannon also showed that this is the best one can do if one wants perfect secrecy. In spite of the long key, the Vernam algorithm is still used by diplomats and spies; it has been used until the late 1980s for the 'red telephone' between Washington and Moscow.

Spies used to carry key pads with random characters: in this case p_i, c_i and k_i are elements of Z_26 (representing the letters A through Z). The encryption operation is c_i = (p_i + k_i) mod 26 and the decryption operation is p_i = (c_i − k_i) mod 26. The keys were written on sheets of paper contained in a pad. The security of the scheme relies on the fact that every page of the pad is used only once, which explains the name 'one-time pad.' Note that during World War II the possession of pads with random characters was sufficient to be convicted as a spy.

The one-time pad was also used for Soviet diplomatic communications. Under the codeword Venona, US cryptologists attempted to break this system in 1943, which seems ridiculous as it offers perfect secrecy. However, after 2 years, they discovered that the Soviets used their pads twice (rather than once), that is, two messages were encrypted using the same key stream. This error was made due to time pressure in the production of the pads. If c and c′

are ciphertexts generated with the same pad, one finds that

c ⊕ c′ = (p ⊕ k) ⊕ (p′ ⊕ k) = (p ⊕ p′) ⊕ (k ⊕ k) = p ⊕ p′ .

This implies that one can find the sum of the corresponding plaintexts. Note that if the plaintexts are written in natural language, this sum is not uniformly distributed, so one can detect that the correct c and c′ have been matched (if c and c′ had been encrypted with different keys k and k′, the sum c ⊕ c′ would be uniformly distributed). By guessing or predicting parts of plaintexts, a clever cryptanalyst can derive most of p and p′ from the sum p ⊕ p′. In practice, the problem was more complex since the plaintexts were encoded with a secret method before encryption and cover names were used to denote individuals. Between 1943 and 1980, approximately 3 000 decryptions (out of 25 000 messages) were obtained. Some of these plaintexts contained highly sensitive information on Soviet spies. The Venona successes were only made public in 1995. They teach an important lesson: errors in using a cryptosystem can be fatal, even if the cryptosystem itself is perfectly secure. More details on Venona can be found in [34]. In Sect. 4.3 we discuss another weakness that can occur when implementing the Vernam scheme.
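
The effect of reusing a pad can be demonstrated in a few lines. The Python sketch below (with two made-up plaintexts of equal length) shows that the eavesdropper obtains p ⊕ p′ directly, and that guessing a word of one plaintext immediately reveals the corresponding part of the other.

    import secrets

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    p1 = b"attack at dawn"
    p2 = b"retreat at ten"
    pad = secrets.token_bytes(len(p1))     # reused for both messages: this is the error

    c1, c2 = xor(p1, pad), xor(p2, pad)
    assert xor(c1, c2) == xor(p1, p2)      # the pad cancels out

    guess = b"attack"                      # a guessed fragment of p1 ...
    print(xor(xor(c1, c2)[:len(guess)], guess))   # ... reveals b'retrea' of p2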

2.1.2 Additive Stream Ciphers.

Additive stream ciphers are ciphers for which the encryption consists of a modulo 2 addition of a key stream to the plaintext. They try to mimic the Vernam scheme by replacing the perfectly random key stream by a pseudo-random key stream (that is, a key stream that looks random to an observer who has limited computing power), which is generated from a short key. In practice one generates the bit sequence k_i with a keyed finite state machine (see Fig. 3). Such a machine stretches a short secret key K into a much longer key stream sequence k_i. The sequence k_i is eventually periodic. One important (but not sufficient) design criterion for the finite state machine is that the period has to be sufficiently long (2^80 is a typical lower bound), as a repeating key stream leads to a very weak scheme (cf. the Venona project). The values k_i should have a distribution that is close to uniform; another condition is that there should be no correlations between output strings (note that cryptanalytic attacks may exploit correlations of less than 1 in 1 million).



Figure 3: An additive stream cipher consisting of a keyed finite state machine. The initial state depends on K_1, the next state function F depends on K_2 and the output function G depends on K_3. The three keys K_1, K_2, and K_3 are derived from the user key K; this operation is not shown

Formally, the sequence k_i can be parameterized with a security parameter; then one requires that the sequence satisfies every polynomial time statistical test for randomness (here polynomial means polynomial in the security parameter). Another desirable property is that no polynomial time machine can predict the next bit of the sequence (based on the previous outputs) with a probability that is significantly better than 1/2. An important (and perhaps surprising) result in theoretical cryptology by A. Yao shows that these two conditions are in fact equivalent [80].

Stream ciphers have been popular in the 20th century: they operate on the plaintext character by character, which is very convenient and allows for a simple and inexpensive implementation. Most of the rotor machines are additive stream ciphers. Between 1960 and 1990, stream ciphers based on Linear Feedback Shift Registers (LFSRs) have been very popular (see for example the book by Rueppel [65]). An example of a simple LFSR-based stream cipher is presented in Fig. 4. However, most of these algorithms were trade secrets; everyone used his own cipher, and no standards were published. The most widely used LFSR-based stream ciphers are A5/1 and A5/2, which are implemented in hardware in GSM phones (more than 1 billion subscribers); these algorithms were kept secret, but they leaked out and were shown to be rather weak [8]. In the last 15 years it has become clear that most LFSR-based stream ciphers are much less secure than expected (see for example [15, 50]).
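
The basic LFSR building block can be sketched as follows. This Python fragment is only a toy (a single 5-bit register with arbitrarily chosen taps, the initial state acting as the key); on its own such a generator is linear and trivially breakable, which is exactly why designs like the one in Fig. 4 combine several LFSRs with a non-linear function, and even those have often turned out to be weak.

    def lfsr_keystream(state, taps, n):
        """Toy Fibonacci LFSR: shift the register and feed back the XOR of the tapped bits."""
        state = list(state)
        out = []
        for _ in range(n):
            out.append(state[-1])            # output the last stage
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = [fb] + state[:-1]        # shift right, insert feedback on the left
        return out

    bits = lfsr_keystream([1, 0, 1, 1, 0], taps=[0, 2], n=16)
    print(''.join(map(str, bits)))
    # Encryption would XOR these key stream bits with the plaintext bits, as in the Vernam scheme.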

RC4, designed in 1987 by Ron Rivest, is based on completely different principles. RC4 is designed for 8-bit microprocessors and was initially kept as a trade secret. It leaked out in 1994 and it is currently widely used in browsers (TLS protocol). While several statistical weaknesses have been identified in RC4 [27, 57], the algorithm still seems to resist attacks that recover the key.

In the last 15 years, a large number of very fast stream ciphers has been proposed that are software oriented (suited for 32-bit processors) and that intend to offer a high level of security. However, for the time being weaknesses have been identified in most proposals and no single scheme has emerged as a standard or a de facto standard. Nevertheless, stream ciphers can be very valuable for encryption with very few hardware gates or for high speed encryption. Developing strong stream ciphers is clearly an important research topic for the next years.

Figure 4: Toy example of a non-linear combiner: this additive stream cipher consists of two LFSRs (the squares represent 1-bit memories or D flip-flops) and a non-linear combining function f. Not all the feedback elements will be active in the LFSR: the key can consist of the feedback functions and the initial states of the LFSRs

2.1.3 Block Ciphers.

Block ciphers take a different approach to encryption: the plaintext is divided into larger words of n bits, called blocks; typical values for n are 64 and 128. Every block is enciphered in the same way, using a keyed one-way permutation, i.e., a permutation on the set of n-bit strings that is controlled by a secret key. The simplest way to encrypt a plaintext using a block cipher is as follows: divide the plaintext into n-bit blocks P_i, and encrypt these block by block. The decryption also operates on individual blocks:

C_i = E_K(P_i) and P_i = D_K(C_i) .

This way of using a block cipher is called the ECB (Electronic CodeBook) mode. Note that the encryption operation does not depend on the location in the ciphertext, as is the case for additive stream ciphers.

Consider the following attack on a block cipher (the so-called tabulation attack): the cryptanalyst collects ciphertext blocks and their corresponding plaintext blocks (this is possible as part of the plaintext is often predictable); this is used to build a large table. With such a table, one can deduce information on other plaintexts encrypted under the same key. In order to preclude this attack, the value of n has to be quite large (e.g., 64 or 128). Moreover, the plaintext should not contain any repetitions (or other patterns), as these will leak into the ciphertext.
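
The pattern leakage of the ECB mode is easy to see with a toy block cipher. The Python sketch below uses a deliberately tiny 16-bit Feistel construction (four rounds with a hash-based round function; everything here is illustrative and offers no real security) and encrypts a plaintext consisting of a repeated two-byte block: equal plaintext blocks give equal ciphertext blocks.

    import hashlib

    def feistel_encrypt_block(block: bytes, key: bytes, rounds: int = 4) -> bytes:
        """Toy 16-bit Feistel cipher: each round adds a keyed function of the right half to the left half."""
        left, right = block[0], block[1]
        for r in range(rounds):
            f = hashlib.sha256(bytes([right, r]) + key).digest()[0]
            left, right = right, left ^ f
        return bytes([left, right])

    def ecb_encrypt(plaintext: bytes, key: bytes) -> bytes:
        """ECB mode: every 2-byte block is encrypted independently with the same key."""
        assert len(plaintext) % 2 == 0
        return b''.join(feistel_encrypt_block(plaintext[i:i+2], key)
                        for i in range(0, len(plaintext), 2))

    print(ecb_encrypt(b"HAHAHAHAHAHAHAHA", b"demo key").hex())
    # Every 'HA' block maps to the same ciphertext block, so the repetition remains visible.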

This last problem shows that even if n is large, the ECB mode is not suited to encrypt structured plaintexts (such as text, images, etc.). This mode should only be used in very special cases, where the plaintext is random, such as for the encryption of cryptographic keys. There is however an easy way to randomize the plaintext, by using the block cipher in a different way. The most popular mode of operation for a block cipher is the CBC (Cipher Block Chaining) mode (see Fig. 5). In this mode the different blocks are coupled by adding the previous ciphertext block modulo 2 to each plaintext block:

C_i = E_K(P_i ⊕ C_{i-1}) and P_i = D_K(C_i) ⊕ C_{i-1} .

Note that this 'randomizes' the plaintext, and hides patterns. To enable the encryption of the first plaintext block (i = 1), one defines C_0 as the initial value IV, which should be randomly chosen and transmitted securely to the recipient. By varying this value, one can ensure that the same plaintext is encrypted into a different ciphertext under the same key. The CBC mode allows for random access on decryption: if necessary, one can decrypt only a small part of the ciphertext. A security proof for the CBC mode has been provided by Bellare et al. [3]; it holds as long as the opponent can only obtain ciphertext corresponding to chosen plaintexts (the CBC mode is insecure if the opponent can choose ciphertexts and obtain the corresponding plaintexts).
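
Building on the same kind of toy block cipher as in the previous sketch, the chaining equation C_i = E_K(P_i ⊕ C_{i-1}) can be written out as follows; again everything (the 2-byte blocks, the key, the Feistel stand-in) is purely illustrative, and in practice a real block cipher such as AES would be used.

    import hashlib, secrets

    def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
        """Stand-in 2-byte block cipher (4-round toy Feistel); a real design would use AES here."""
        l, r = block[0], block[1]
        for i in range(4):
            l, r = r, l ^ hashlib.sha256(bytes([r, i]) + key).digest()[0]
        return bytes([l, r])

    def cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
        """CBC mode: C_i = E_K(P_i XOR C_{i-1}), with C_0 = IV."""
        assert len(plaintext) % 2 == 0 and len(iv) == 2
        prev, out = iv, []
        for i in range(0, len(plaintext), 2):
            block = bytes(p ^ c for p, c in zip(plaintext[i:i+2], prev))
            prev = toy_block_encrypt(block, key)
            out.append(prev)
        return b''.join(out)

    iv = secrets.token_bytes(2)                       # fresh random IV for every message
    print(cbc_encrypt(b"HAHAHAHAHAHAHAHA", b"demo key", iv).hex())
    # Repeated plaintext blocks now yield different ciphertext blocks, and a new IV
    # makes the whole ciphertext change even when the same message is sent twice.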


Figure 5: The CBC mode of a block cipher

One can also use a block cipher to generate a key stream that can be used in an additive stream cipher: in the OFB (Output FeedBack) mode the block cipher is applied iteratively by feeding back the n-bit output to the input; in the CTR (Counter) mode one encrypts successive values of a counter. An alternative stream cipher mode is the CFB (Cipher FeedBack) mode; this mode is slower but it has better synchronization properties. The modes of operation have been standardized in FIPS 81 [23] (see also [56] which adds the CTR mode) and ISO/IEC 10116 [37].

Block ciphers form a very flexible building block. They have played an important role in the past 25 years because of the publication of two US government FIPS standards (Federal Information Processing Standard) for the protection of sensitive but unclassified government information. An important aspect of these standards is that everyone can use them for free.

The first standardized block cipher is the Data Encryption Standard (or DES) of FIPS 46 [22], which was published in 1977. This block cipher was developed by IBM together with the NSA (National Security Agency) in response to a call by the US government. DES represents a remarkable effort to provide a standard for government and commercial use; its impact on both practice and research can hardly be overestimated. For example, DES is widely used in the financial industry. DES has a block size of 64 bits and a key length of 56 bits (more precisely, the key length is 64 bits, but 8 of these are parity bits). The 56-bit key length was a compromise: it would allow the US government to find the key by brute force, that is, by trying all 2^56 ≈ 7 · 10^16 keys one by one, but would put key search beyond limits for an average opponent. However, as hardware gets faster, this key length was only sufficient for 10-15 years, until 1987 . . . 1992 (see Sect. 4.3 for details). The block length of 64 bits is no longer adequate either: the reason is that there exist attacks on the modes of operation of an n-bit block cipher which require about 2^(n/2) ciphertext blocks (the attack is known as a 'matching ciphertext' attack [42]). For n = 64, this corresponds to 4 billion ciphertexts; with a high speed encryption device this number is reached in less than a minute. The DES design was oriented towards mid-1970s hardware. For example, it uses a 16-round Feistel structure (see Fig. 6), which implies that the hardware for encryption and decryption is identical. Each round consists of non-linear substitutions (from 6 bits to 4 bits) followed by some bit permutations (transpositions). The performance of DES in software is suboptimal; for example DES runs at 40 cycles/byte on a Pentium III, which corresponds to 200 Mbit/s for a clock frequency of 1 GHz.

Already in 1978, an improved variant of DES was proposed: triple-DES consists of three iterations of DES: E_K1(D_K2(E_K3(x))). This variant was only included in 1999 in the 3rd revision of FIPS 46 [22]. The choice of a decryption for the middle operation is motivated by backward compatibility: indeed, choosing K1 = K2 = K3 results in single DES. Three-key triple-DES has a 168-bit key, but the security level corresponds to a key of approximately 100 bits. Initially two-key triple-DES was proposed, with K1 = K3: its security level is about 80-90 bits. At first sight one would believe that double-DES with a key length of 112 bits is sufficient; however, it has been shown that this corresponds to a security level of about 70 bits. For an overview of these attacks, see [53]. The migration from DES to triple-DES in the financial sector started in 1986, but it is progressing very slowly; it will take until 2005 to complete. Triple-DES has the disadvantage that it is rather slow (115 cycles/byte on a Pentium III) and that the block length is still limited to 64 bits.

In 1997 the US government decided to replace DES by the AES (Advanced Encryption Standard). AES is a block cipher with a 128-bit block length and key lengths of 128, 192, and 256 bits. An open call for algorithms was issued; 15 candidates were submitted by the deadline of June 1998. After the first round, 5 finalists remained and in October 2000 it was announced that the Rijndael algorithm (designed by the Belgian cryptographers Vincent Rijmen and Joan Daemen) was the winner. The FIPS standard was published in November 2001 [26]. It may not be a coincidence that the U.S. Department of Commerce Bureau of Export Administration (BXA) relaxed export restrictions for US companies in September 2000 (otherwise it would have been illegal to export AES software from the US to Belgium). In 2003, the US government announced that it would also allow the use of AES for secret data, and even for top secret data; the latter application requires key lengths of 192 or 256 bits. AES is a rather elegant and mathematical design that, among the five finalists, offered the best combination of security, performance, efficiency, implementability and flexibility. AES allows for efficient implementations on 32-bit architectures (15 cycles/byte on a Pentium III), but also for compact implementations on 8-bit smart cards. Moreover, hardware implementations of AES offer good trade-offs between size and speed. AES consists of a Substitution-Permutation (SP) network with 10 rounds for a 128-bit key, 12 rounds for a 192-bit key and 14 rounds for a 256-bit key (see Fig. 6). Each round consists of non-linear substitutions (from 8 bits to 8 bits) followed by some affine transformations (which move around the information). Note that Rijndael also supports 192-bit and 256-bit block lengths, but these have not been included in the AES standard. For a complete description of the AES and its design, see [17].


Figure 6: One round of a Feistel cipher (left) and of a Substitution-Permutation (SP) network. Here S represents a substitution and P a permutation (this can be a bit permutation or an affine mapping). In a Feistel cipher a complex operation on the right part is added to the left part; it has the advantage that the decryption operation is equal to the encryption operation, which simplifies hardware implementations. The substitution operation does not need to be invertible here. In an SP network all the bits are updated in every round, which guarantees faster diffusion of information; the substitution and permutation operations need to be invertible

There exist many other block ciphers; a limited number of these has been included in products (such as Camellia, members of the CAST family, FEAL, Gost, IDEA, Kasumi, and Skipjack). For more details the reader is referred to the cryptographic literature.

2.2 Public-Key Encryption

The main problem that is left unsolved by symmetric cryptography is the key distribution problem. Especially in a large network it is not feasible to distribute keys between all user pairs (in a network with t users there are t(t−1)/2 such pairs, hence even for 1000 users about half a million keys are needed). An alternative is to manage all keys in a central entity, which shares a secret key with every user. However, this entity then becomes a single point of failure and an attractive target of attack. A much more elegant solution to the key management problem is offered by public-key cryptography, which was invented in 1976 independently by W. Diffie and M. Hellman [19] and by R. Merkle [51].

2.2.1 Public-Key Agreement.

A public-key agreement protocol allows two parties who have never met to agree on a secret key via a public conversation. Diffie and Hellman showed how to achieve this using the concept of commutative one-way functions. A one-way function is a function that is easy to compute, but hard to invert. For example, in a block cipher, the ciphertext has to be a one-way function of the plaintext and the key: it is easy to compute the ciphertext from the plaintext and the key, but given the plaintext and the ciphertext it should be hard to recover the key (otherwise the block cipher would not be secure). Similarly one can show that the existence of additive stream ciphers (pseudo-random string generators) implies the existence of one-way functions. A commutative one-way function is a one-way function for which the result is the same independent of the order of evaluation: for a function f(·, ·) with two arguments, one has that f(f(z, x), y) = f(f(z, y), x).

The candidate commutative one-way function proposed by Diffie and Hellman is f(α, x) = α^x mod p; here p is a large prime number (large means 1024 bits or more), x ∈ [1, p − 1], and α is a generator mod p, which means that α^0, α^1, α^2, α^3, . . . , α^(p−2) mod p run through all values between 1 and p − 1. For technical reasons, we require that p is a safe prime, which means that (p−1)/2 is a prime number as well. The Diffie-Hellman protocol works as follows (see also Fig. 7).

• Alice and Bob agree on a prime number p and a generator α mod p.

• Alice picks a value x_A uniformly at random in the interval [1, p − 1], computes y_A = α^(x_A) mod p and sends this to Bob;

• Bob picks a value x_B uniformly at random in the interval [1, p − 1], computes y_B = α^(x_B) mod p and sends this to Alice;

• On receipt of y_B, Alice checks that y_B ∉ {−1, 0, 1} and computes k_AB = y_B^(x_A) mod p = α^(x_A x_B) mod p.

• On receipt of y_A, Bob checks that y_A ∉ {−1, 0, 1} and computes k_BA = y_A^(x_B) mod p = α^(x_B x_A) mod p.

• Alice and Bob compute the secret key as h(k_AB) = h(k_BA), with h() a hash function or MDC (see Sect. 3.1).


Figure 7: The Diffie-Hellman protocol

It is easy to see that the commutativity implies that k_AB = k_BA, hence Alice and Bob obtain a common value. Eve, who is eavesdropping on the communication, only observes y_A = α^(x_A) mod p and y_B = α^(x_B) mod p; there is no obvious way for her to obtain k_AB = (α^(x_B))^(x_A) mod p. If Eve could compute discrete logarithms, that is, derive x_A from y_A and/or x_B from y_B, she could of course also derive k_AB. However, if p is large, this problem is believed to be difficult (cf. infra). Eve could try to find another way to compute k_AB from y_A and y_B. So far, no efficient algorithm has been found to solve this problem, and this is stated as the Diffie-Hellman assumption: it is hard to solve the Diffie-Hellman problem, that is, to deduce (α^(x_B))^(x_A) mod p from α^(x_A) mod p and α^(x_B) mod p. If the Diffie-Hellman assumption holds, the Diffie-Hellman protocol results in a common secret between Alice and Bob after a public conversation. It is clear from the above discussion that the Diffie-Hellman problem cannot be harder than the discrete logarithm problem. It is known that for some prime numbers the two problems are equivalent.
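
The protocol fits in a few lines of Python; the sketch below uses a deliberately small safe prime (2579 = 2 · 1289 + 1, with generator 2) so that the numbers stay readable, whereas a real deployment would use a prime of 1024 bits or more.

    import secrets

    p = 2579           # toy safe prime: (p - 1) / 2 = 1289 is also prime
    alpha = 2          # a generator mod p

    x_a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
    x_b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent

    y_a = pow(alpha, x_a, p)               # sent from Alice to Bob over the public channel
    y_b = pow(alpha, x_b, p)               # sent from Bob to Alice

    k_ab = pow(y_b, x_a, p)                # computed by Alice
    k_ba = pow(y_a, x_b, p)                # computed by Bob
    assert k_ab == k_ba                    # both equal alpha^(x_a * x_b) mod p
    print(k_ab)

Eve observes only p, alpha, y_a and y_b; for a realistically sized prime, recovering the shared value from these is exactly the Diffie-Hellman problem.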

It is very important to check that y_A, y_B ∉ {−1, 0, 1}: if not, Eve could replace y_A and y_B by one of these values and ensure in this way that k_AB ∈ {−1, 0, 1}. However, the Diffie-Hellman protocol has another problem: how does Alice know that she is talking to Bob or vice versa? In the famous 'person-in-the-middle' attack, Eve sets up a conversation with Alice (resulting in the key k_AE) and with Bob (resulting in the key k_BE). Eve now shares a key with both Alice and Bob; she can decrypt all messages received from Alice, read them, and re-encrypt them for Bob (and vice versa). Alice and Bob are unable to detect this; they believe that they share a common secret only known to the two of them. This shows that a common secret can only be established between two parties if there is an authentic channel, that is, a channel on which the information can be linked to the sender and the information cannot be modified. This means that the authenticity of the values y_A and y_B has to be established, by linking them to Alice and Bob respectively. One way to achieve this is to read these values (or hash values of these values, see Sect. 3.1) over the phone; this solution works if Alice and Bob know each other's voices or if they trust the phone system to connect them to the right person.

The Diffie-Hellman protocol has another limitation: Alice and Bob can only agree on a secret key, but Alice cannot use this protocol directly to tell Bob to meet her tonight at 9pm. Alice can of course use the common key k_AB to encrypt this message using the AES algorithm in CBC mode. We will explain in the next section how public-key encryption can overcome this limitation.

2.2.2 Public-Key Encryption.

The key idea behind public-key encryption is the concept of trapdoor one-way functions. Trapdoor one-way functions are one-way functions with an additional property: given some extra information (the trapdoor), it becomes possible to invert the one-way function.

With such functions Bob can send a secret message to Alice without the need for prior arrangement of a secret key. Alice chooses a trapdoor one-way function with public parameter P_A (Alice's public key) and with secret parameter S_A (Alice's secret key). Alice makes her public key widely available (she can put it on her home page, but it can also be included in special directories). Anyone who wants to send some confidential information to Alice computes the ciphertext as the image of the plaintext under the trapdoor one-way function using the parameter P_A. Upon receipt of this ciphertext, Alice recovers the plaintext by using her trapdoor information S_A (see Fig. 8). An attacker, who does not know S_A, sees only the image of the plaintext under a one-way function, and will not be able to recover the plaintext. The conditions which a public-key encryption algorithm has to satisfy are:

- the generation of a key pair (P_A, S_A) has to be easy;

- encryption and decryption have to be easy operations;

- it should be hard to compute the secret key S_A from the corresponding public key P_A;

- D_SA(E_PA(P)) = P .


Figure 8: Model for public-key or asymmetric encryption

Note that if one wants to send a message to Alice, one has to know Alice's public key P_A, and one has to be sure that this key really belongs to Alice (and not to Eve), since it is only the owner of the corresponding secret key who will be able to decrypt the ciphertext. Public keys do not need a secure channel for their distribution, but they do need an authentic channel. As the keys for encryption and decryption are different, and Alice and Bob have different information, public-key algorithms are also known as asymmetric algorithms.

Designing a secure public-key encryption algorithm is apparently a very difficult problem. From the large number of proposals, only a few have survived. The most popular algorithm is the RSA algorithm [64], which was named after its inventors (R.L. Rivest, A. Shamir, and L. Adleman). RSA was published in 1978; the patent on RSA expired in 2000. The security of RSA is based on the fact that it is relatively easy to find two large prime numbers (in 2004 large means 150 decimal digits or more) and to multiply these, while factoring their product (of 300 decimal digits) is not feasible with the current algorithms and computers. RSA can be described as follows:

key generation: Find 2 prime numbers p and q with at least 150 digits and compute their product, the modulus n = p · q. Compute the Carmichael function λ(n), the least common multiple of p − 1 and q − 1. Choose an encryption exponent e (at least 32 to 64 bits long), which is relatively prime to λ(n), and compute the decryption exponent as d = e^(−1) mod λ(n) (with Euclid's algorithm). The public key consists of the pair (e, n), and the secret key consists of the decryption exponent d or the pair (p, q);

encryption: represent the plaintext as an integer in the interval [0, n − 1] and compute the ciphertext as C = P^e mod n;

decryption: P = C^d mod n.
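
These three steps can be traced with artificially small numbers; the Python sketch below uses the tiny primes 61 and 53 purely for illustration (a real modulus has 300 or more decimal digits, and the raw scheme additionally needs a randomizing transform, cf. the discussion of OAEP below).

    from math import gcd

    p, q = 61, 53                                  # toy primes; real ones have 150+ digits
    n = p * q                                      # modulus n = 3233
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # Carmichael function: lcm(p-1, q-1)
    e = 17                                         # public exponent, relatively prime to lam
    d = pow(e, -1, lam)                            # decryption exponent: e^-1 mod lam (Python 3.8+)

    P = 65                                         # plaintext as an integer in [0, n-1]
    C = pow(P, e, n)                               # encryption: C = P^e mod n
    assert pow(C, d, n) == P                       # decryption: P = C^d mod n
    print(n, e, d, C)                              # 3233 17 413 2790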

The prime factors p and q or the secret decryption exponent d are the trapdoor information that allows one to invert the function f(x) = x^e mod n. Note that the RSA function is the dual of the Diffie-Hellman function f(x) = α^x mod p, which has a fixed base and a variable exponent.

Without explaining the mathematical background of the algorithm, one can observe that the security of the RSA algorithm depends on the factoring problem: indeed, if an attacker can factor n, he can find λ(n), derive d from e and decrypt any message. However, in order to decrypt it is sufficient to extract modular e-th roots (one may be able to do this without knowing p and q). The RSA problem is the extraction of random modular e-th roots, since this corresponds to the decryption of arbitrary ciphertexts. Cryptographers believe that the RSA problem is hard, which is known as the RSA assumption. It is easy to see that the RSA problem cannot be harder than factoring the modulus. Some indication has been found that the two problems may not be equivalent.


For special arguments, the RSA problem is very easy: for example −1, 0 and 1 are always fixed points of the RSA encryption function, and for small arguments, P^e < n and extracting modular e-th roots simplifies to extracting natural e-th roots, which is an easy problem. However, the RSA assumption states that extracting random modular e-th roots is hard, which means that the challenge ciphertext needs to be uniformly distributed. This can be achieved by transforming the plaintext with a randomizing transform. A large number of such transforms is known; many of these are ad hoc and there is no reason to believe that they should be effective. In 1993, Bellare and Rogaway published a new transform under the name OAEP (Optimal Asymmetric Encryption), together with a security proof [4]. This proof essentially states that if someone can decrypt a challenge ciphertext without knowing the secret key, he can extract random modular e-th roots. The proof is in the random oracle model, which means that the hash functions used in the OAEP construction are assumed to be perfectly random. However, seven years later Shoup pointed out that the proof was wrong [70]; the error has been corrected by Fujisaki et al. in [28], but the resulting reduction is not very meaningful, that is, the coupling between the two problems is not very tight in this new proof. Currently the cryptographic community believes that the best way of using RSA is the RSA-KEM mode [55, 61]: this is a so-called hybrid mode in which RSA is only used to transfer a session key, while the plaintext is encrypted using a symmetric algorithm with this key. It is interesting to note that it has taken more than 20 years before cryptographers have understood how RSA should be used properly for encryption.

2.2.3 The Factoring and the Discrete Logarithm Problem.

The more complex properties of public-key cryptography seem to require some 'high level' mathematical structure; most public-key algorithms are based on problems from algebraic number theory. While these number theoretic problems are believed to be difficult, it should be noted that there is no mathematical proof that shows that these problems are hard. Moreover, since the invention of public-key cryptography significant progress has been made in solving concrete instances. This evolution is due to a combination of more sophisticated factoring algorithms with progress in hardware and parallel processing. Table 1 summarizes the progress made in factoring over the past 40 years. It is believed that the discrete logarithm problem mod p is about as hard as the factoring problem for the same size of modulus.

The best known algorithm to factor an RSA modulus N is the General Number Field Sieve; it has been used in all factoring records since 1996 and it has (heuristic) asymptotic complexity

O(exp[(1.923 + o(1)) · (ln N)^(1/3) · (ln ln N)^(2/3)]) .

Note that this asymptotic expression should be used with care; extrapolations can only be made in a relatively small range due to the o(1) term. Lenstra and Verheul provide an interesting study on the selection of RSA key sizes [48]. Currently it is believed that factoring a 1024-bit (308-digit) RSA modulus requires 2^80 steps. With special hardware proposed by Shamir and Tromer [67], the following cost estimates have been provided: with an investment of US$ 10 M, a 1024-bit modulus can be factored in 1 year. A 768-bit modulus can be factored for US$ 5000 in 95 days, and a 512-bit modulus can be factored with a US$ 10 000 device in ten minutes. Note that these cost estimates do not include the linear algebra step at the end; while this step takes additional time and effort, it should not pose any insurmountable problems. Nevertheless, these estimates show that for long-term security, an RSA modulus of 2048 bits is recommended.


Table 1: Progress of factorization records for products of two random prime numbers. One MIPS year (MY) is the equivalent of a computation during one full year at a sustained speed of one Million Instructions Per Second, which corresponds roughly to the speed of a VAX 11/780

    year   # digits   # bits   computation
    1964      20         66
    1974      45        150      0.001 MY
    1983      50        166
    1984      71        236        0.1 MY
    1991     100        332          7 MY
    1992     110        365         75 MY
    1993     120        399        835 MY
    1994     129        429       5000 MY
    1996     130        432       1000 MY
    1999     140        465       2000 MY
    1999     155        512       8400 MY
    2003     174        578

2.2.4 Basing Public-Key Cryptology on Other Problems.

There has been a large number of proposals for other public-key encryption algorithms. Many of these have been broken, the most notable example being the class of knapsack systems. The most important alternative to RSA is the ElGamal scheme, which extends the Diffie-Hellman scheme to public-key encryption. In particular, the group of integers mod p can also be replaced by a group defined by an elliptic curve over a finite field, as proposed by Miller and Koblitz in the mid-eighties. Elliptic curve cryptosystems allow for shorter key sizes (a 1024-bit RSA key corresponds to a 170-bit elliptic curve key), but the operations on an elliptic curve are more complex (see [10, 52] for more details). Other alternatives are schemes based on multivariate polynomials over finite fields, lattice-based systems such as NTRU and systems based on braid groups; while these systems have particular advantages, it is believed that they are not yet mature enough for deployment. It is a little worrying that our digital economy relies to a large extent on the claimed difficulty of a few problems in algebraic number theory.

2.2.5 Applying Public-Key Cryptology.

The main advantage of public-key algorithms is the simplified key management; deployment of cryptology on the Internet largely relies on public-key mechanisms (e.g., TLS and SSH [73]). An important question that remains is how authentic copies of the public keys can be distributed; this problem will be briefly discussed in Sect. 3.2. The main disadvantages are the larger keys (typically 64 to 512 bytes) and the slow performance: both in software and hardware, public-key encryption algorithms are two to three orders of magnitude slower than symmetric algorithms. For example, a 1024-bit exponentiation with a 32-bit exponent takes 360 µs on a 1 GHz Pentium III; this corresponds to 2800 cycles/byte; a decryption with a 1024-bit exponent takes 9.8 ms or 76 000 cycles/byte. This should be compared to 15 cycles/byte for AES. Because of the large difference in performance, the large block length (which influences error propagation), and the security reasons indicated above, one always employs hybrid systems: the public-key encryption scheme is used to establish a secret key, which is then used in a fast symmetric algorithm.

3 Hashing and Signatures for Authentication

Information authentication includes two main aspects:

- data origin authentication, or who has originated the information;

- data integrity, or has the information been modified.

Other aspects which can be important are the timeliness of the information, the sequence of messages, and the destination of information. These aspects can be accounted for by using sequence numbers and time stamps in the messages and by including addressing information in the data. In data communications, the implicit authentication created by recognition of the handwriting, signature, or voice disappears. Electronic information becomes much more vulnerable to falsification as the physical coupling between information and its bearer is lost.

Until the mid 1980s, it was widely believed that encryption of information (with a symmetric algorithm) was sufficient for protecting its authenticity. The reasoning was that if a certain ciphertext resulted after decryption in a meaningful plaintext, it had to be created by someone who knew the key, and therefore it must be authentic. A few counterexamples are sufficient to refute this claim: if a block cipher is used in ECB mode, an attacker can easily reorder the blocks. For any additive stream cipher (including the Vernam scheme), an opponent can always modify any plaintext bit (without knowing whether a 0 has been changed to a 1 or vice versa). The concept of 'meaningful' information implicitly assumes that the information contains redundancy, which allows one to distinguish genuine information from an arbitrary plaintext. However, one can envisage applications where the plaintext contains very little or no redundancy (for example the encryption of keys). The separation between secrecy and authentication has also been clarified by public-key cryptography: anyone who knows Alice's public key can send her a confidential message, and therefore Alice has no idea who has actually sent this message.

Two different levels of information authentication can be distinguished. If two parties trust each other and want to protect themselves against malicious outsiders, the term conventional message authentication is used. In this setting, both parties are on an equal footing (for example, they share the same secret key). If however a dispute arises between them, a third party will not be able to resolve it (for example a judge cannot tell whether a message has been created by Alice or by Bob). If protection between two mutually distrustful parties is required (which is often the case in commercial relationships), an electronic equivalent of a manual signature is needed. In cryptographic terms this is called a digital signature.

3.1 Symmetric Authentication

The underlying idea is similar to that for encryption, where the secrecy of a large amount of information is replaced by the secrecy of a short key. In the case of authentication, one replaces the authenticity of the information by the protection of a short string, which is a unique ‘fingerprint’ of the information. Such a ‘fingerprint’ is computed as a hash result. This can also be interpreted as adding a special form of redundancy to the information. This process consists of two components. First one compresses the information to a string of fixed length, with a (cryptographic) hash function. Then the resulting string (the hash result) is protected as follows:

- either the hash result is communicated over an authentic channel (e.g., it can be read over the phone). It is then sufficient to use a hash function without a secret parameter, which is also known as a Manipulation Detection Code or MDC;

- or the hash function uses a secret parameter (the key); it is then called a Message Authentication Code or MAC algorithm.

3.1.1 MDCs.

If an additional (authentic) channel is available, MDCs can provide authenticity without requiring secret keys. Moreover, an MDC is a flexible primitive, which can be used for a variety of other cryptographic applications. An MDC has to satisfy the following conditions:

- it should be hard to find an input with a given hash result (preimage resistance);

- it should be hard to find a second input with the same hash result as a given input (2nd preimage resistance);

- it should be hard to find two different inputs with the same hash result (collision resistance).

An MDC satisfying these three conditions is called a collision resistant hash function.

For a strong hash function with an n-bit result, solving the first two problems requires about 2^n evaluations of the hash function. This implies that n = 90 . . . 100 is sufficient (cf. Sect. 4.3); larger values of n are required if one can attack multiple targets in parallel. However, finding collisions is much easier: one will find with high probability a collision in a set of hash results corresponding to 2^{n/2} inputs. This implies that collision resistant hash functions need a hash result of 160 to 256 bits. This last property is also known as the birthday paradox, based on the following observation: within a group of 23 persons the probability that there are two persons with the same birthday is about 50%. The reason is that a group of this size contains 253 different pairs of persons, which is rather large compared to the 365 days in a year. The birthday paradox plays an essential role in the security of many cryptographic primitives. It is important to note that not all applications need collision resistant hash functions; sometimes (2nd) preimage resistance is sufficient.
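
Both numbers can be checked with a short calculation (not from the paper; it uses the standard approximation 1 − exp(−k(k−1)/2N) for the probability of a collision among k samples drawn from N possibilities):

    from math import exp

    def collision_probability(num_samples, num_bins):
        # Probability that at least two of `num_samples` uniformly random values
        # from `num_bins` possibilities coincide (accurate when num_bins is large).
        k, n = num_samples, num_bins
        return 1.0 - exp(-k * (k - 1) / (2.0 * n))

    print(collision_probability(23, 365))        # ~0.50: the classical birthday paradox
    print(collision_probability(2**80, 2**160))  # ~0.39: 2^(n/2) inputs for a 160-bit hash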

The most efficient hash functions are dedicated hash function designs. The hash functions MD4 and MD5 with a 128-bit hash result are no longer recommended. The most popular hash function today is SHA-1; RIPEMD-160 is an alternative; both offer a 160-bit result. Recent additions to the SHA family include SHA-256, SHA-384, and SHA-512 (see FIPS 180-2 [24]). The ISO standard on dedicated hash functions (ISO/IEC 10118-3) contains RIPEMD-128, RIPEMD-160, SHA-1, SHA-256, SHA-384, SHA-512 and Whirlpool [38]. Part 2 of this standard specifies hash functions based on a block cipher, while Part 4 specifies hash functions based on modular arithmetic.


3.1.2 MAC Algorithms.

MAC algorithms have been used since the 1970s for electronic transactions in the banking environment. They require the establishment of a secret key between the communicating parties. The MAC value corresponding to a message is a complex function of every bit of the message and every bit of the key; it should be infeasible to derive the key from observing a number of text/MAC pairs, or to compute or predict a MAC without knowing the secret key.

A MAC algorithm is used as follows (cf. Fig. 9): Alice computes for her message P the value MAC_K(P) and appends this MAC to the message (here MAC is an abbreviation of MAC result). Bob recomputes the value of MAC_K(P) based on the received message P, and verifies whether it matches the received MAC result. If the answer is positive, he accepts the message as authentic, i.e., as a genuine message from Alice. Eve, the active eavesdropper, can modify the message P to P′, but she is not able to compute the corresponding value MAC_K(P′), as she is not privy to the secret key K. For a secure MAC algorithm, the best Eve can do is guess the MAC result. In that case, Bob can detect the modification with high probability: for an n-bit MAC result Eve’s probability of success is only 1/2^n. The value of n lies typically between 32 and 96. Note that if encryption and authentication are combined, the keys for encryption and authentication need to be different. Moreover, the preferred option is to apply the MAC algorithm to the ciphertext, as this protects the encryption algorithm against chosen ciphertext attacks.

Figure 9: Using a Message Authentication Code for data authentication
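
As an illustration of this protocol, the following sketch uses HMAC-SHA1 (discussed below) from Python's standard library as the MAC algorithm; the key and message are of course made up:

    import hmac, hashlib

    key = b"shared secret between Alice and Bob"   # in practice: a random key
    message = b"Transfer 100 EUR to account 1234"

    # Alice: compute the MAC and append it to the message.
    tag = hmac.new(key, message, hashlib.sha1).digest()

    # Bob: recompute the MAC over the received message and compare in constant time.
    def verify(key, received_message, received_tag):
        expected = hmac.new(key, received_message, hashlib.sha1).digest()
        return hmac.compare_digest(expected, received_tag)

    assert verify(key, message, tag)                       # genuine message accepted
    assert not verify(key, b"Transfer 999 EUR ...", tag)   # modified message rejected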

A popular way to compute a MAC is to encrypt the message with a block cipher using the CBC mode (yet another use of a block cipher), and to keep only part of the bits of the last block as the MAC. However, Knudsen has shown that this approach is less secure than previously believed [43]. The recommended approach to use CBC-MAC consists of super-encrypting the final block with a different (derived) key. This scheme is known as EMAC; a security proof for EMAC has been provided by Petrank and Rackoff in [58]. Almost all CBC-MAC variants are vulnerable to a birthday type attack which requires only 2^{n/2} known text-MAC pairs [60]. Another popular MAC algorithm is HMAC, which derives a MAC algorithm from a hash function such as SHA-1 [24]; an alternative for HMAC is MDx-MAC [60]. A large number of MAC algorithms has been standardized in ISO/IEC 9797 [36].

For data authentication, the equivalent of the Vernam scheme exists. This implies that one can design a MAC algorithm which is unconditionally secure, in the sense that the security of the MAC algorithm is independent of the computing power of the opponent. The requirement is again that the secret key is used only once. The basic idea of this approach is due to G.J. Simmons [71], who defined authentication codes, and to Carter and Wegman [14, 78], who used the term universal hash functions. The first ideas date back to the 1970s. It turns out that these algorithms can be computationally very efficient, since the properties required from this primitive are combinatorial rather than cryptographic. Recent constructions are therefore one order of magnitude faster than other cryptographic primitives (encryption algorithms, hash functions), and achieve speeds up to 1-2 cycles/byte on a Pentium III for messages longer than 256 bytes (e.g., the UMAC proposal [9]). A simple example is the polynomial hash function (see [40]). The key consists of two n-bit words denoted with K_1 and K_2. The plaintext P is divided into t n-bit words, denoted with P_1 through P_t. The MAC value, which consists of a single n-bit word, is computed based on a simple polynomial evaluation:

    MAC_{K_1,K_2}(P) = K_1 + \sum_{i=1}^{t} P_i \cdot (K_2)^i ,

where addition and multiplication are to be computed in the finite field with 2^n elements. It can be proved that the probability of creating another valid message/MAC pair is upper bounded by t/2^n. A practical choice is n = 64, which results in a 128-bit key. For messages up to 1 Mbyte, the success probability of a forgery is then less than 1/2^47. Note that one can reuse K_2; however, for every message a new key K_1 is required. This key could be generated from a short initial key using an additive stream cipher, but then the unconditional security is lost. However, one can argue that it is easier to understand the security of this scheme than that of a computationally secure MAC algorithm.
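
A minimal sketch of this idea is given below; for simplicity it works modulo the prime 2^61 − 1 rather than in the binary field GF(2^n) used above, which changes the arithmetic but not the principle of evaluating a polynomial in K_2 and adding a one-time key K_1:

    import secrets

    P_MOD = 2**61 - 1        # a Mersenne prime; the "field size"
    WORD_BYTES = 7           # message words must stay smaller than P_MOD

    def poly_mac(k1, k2, message):
        # Split the message into 7-byte words P_1 .. P_t and evaluate
        # K1 + sum_i P_i * K2^i  (mod p) using Horner's rule.
        words = [int.from_bytes(message[i:i + WORD_BYTES], "big")
                 for i in range(0, len(message), WORD_BYTES)]
        acc = 0
        for word in reversed(words):
            acc = (acc + word) * k2 % P_MOD
        return (k1 + acc) % P_MOD

    k2 = secrets.randbelow(P_MOD)   # may be reused across messages
    k1 = secrets.randbelow(P_MOD)   # must be fresh for every message
    tag = poly_mac(k1, k2, b"an example message to be authenticated")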

3.2 Digital Signatures

A digital signature is the electronic equivalent of a manual signature on a document. It provides a strong binding between the document and a person, and in case of a dispute, a third party can decide whether or not the signature is valid based on public information. Of course a digital signature will not bind a person and a document, but will bind a public key and a document. Additional measures are then required to bind the person to his or her key. Note that for a MAC algorithm, both Alice and Bob can compute the MAC result, hence a third party cannot distinguish between them. While block ciphers (and even one-way functions) can be used to construct digital signatures, the most elegant and efficient constructions for digital signatures rely on public-key cryptography.

We now explain how the RSA algorithm can be used to create digital signatures with message recovery. The RSA mapping is a bijection (a trapdoor one-way permutation). If Alice wants to sign some information P intended for Bob, she adds some redundancy to the information P and decrypts the resulting text with her secret key. This operation can only be carried out by Alice. Upon receipt of the signature, Bob encrypts it using Alice’s public key, and verifies that the information P has the prescribed redundancy. If so, he accepts the signature on P as valid. Such a digital signature (which is a signature with ‘message recovery’) requires the following condition on the public-key system: E_{P_A}(D_{S_A}(P)) = P. Anyone who knows Alice’s public key can verify the signature and recover the message from the signature.

Note that if the redundancy is left out, anybody can pick a random ciphertext C∗ and claim that Alice has signed P∗ = (C∗)^e mod n. It is not clear that P∗ is a meaningful message (this will require some extra tricks), but it shows why redundancy is essential. A provably secure way to add the redundancy is PSS-R [5]; however in practice other constructions are widely used; most of them combine a hash function with a digital signature scheme.

Figure 10: A digital signature scheme with message recovery based on a trapdoor one-way permutation; S and V denote the signing and verification operation respectively

If Alice wants to sign very long messages, digital signature schemes with message recovery result in signatures that are as long as the message. Moreover, signing with a public-key system is a relatively slow operation. In order to solve these problems, Alice does not sign the information itself, but the hash result of the information computed with an MDC (see also Fig. 11). This corresponds to the use of an MDC to replace the authenticity of a large text by that of a short hash value (cf. Sect. 3.1). The signature now consists of a single block, which is appended to the information; this is sometimes called a digital signature with appendix. In order to verify such a signature, Bob recomputes the MDC of the message and encrypts the signature with Alice’s public key. If both operations give the same result, Bob accepts the signature as valid. MDCs used in this way need to be collision resistant: if Alice can find two different messages (P, P′) with the same hash result, she can sign P, and later claim to have signed P′ (P and P′ will have the same signature!).
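
The hash-then-sign principle can be illustrated with textbook RSA (a toy sketch with hypothetical, far too small parameters, and without the padding that a real scheme such as PSS would add):

    import hashlib

    # Hypothetical toy RSA key: p, q are small primes chosen only for illustration.
    p, q = 10007, 10009
    n = p * q
    e = 65537
    d = pow(e, -1, (p - 1) * (q - 1))     # private exponent (Python 3.8+)

    def h(message):
        # Hash the message and reduce it into Z_n (only because n is tiny here).
        return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

    def sign(message):
        return pow(h(message), d, n)       # 'decrypt' the hash with the secret key

    def verify(message, signature):
        return pow(signature, e, n) == h(message)   # 'encrypt' and compare

    msg = b"I owe Bob 100 EUR"
    sig = sign(msg)
    assert verify(msg, sig)
    assert not verify(b"I owe Bob 1 EUR", sig)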

Note that there exist other signature schemes with appendix (such as the DSA from FIPS 186 [25]), which are not derived immediately from a public-key encryption scheme. For these schemes one can define a ‘signing operation’ (using the secret key) and a ‘verification operation’ (using the public key), without referring to ‘decryption’ and ‘encryption’ operations. The security of the DSA is based on the discrete logarithm problem (as is that of the Diffie-Hellman scheme); there also exists an elliptic-curve based variant of DSA, called ECDSA. There are many more digital signature schemes than public-key encryption schemes. Other digital signature schemes include ESIGN, Fiat-Shamir, Guillou-Quisquater, Schnorr and SFLASH.

3.2.1 Certificates.

Digital signatures move the problem of authenticating data to the problem of authenticating a link between an entity and its public key. This problem can be simplified using digital certificates. A certificate is a digital signature of a third party on an entity’s name, its public key and additional data (such as algorithms and parameters, key usage, begin and end of the validity period); it corresponds roughly to an electronic identity. The verification of a certificate requires the public key of the trusted third party; in this way the problem of authenticating data has been replaced by that of authenticating a single public key. In principle this is very easy: this key can be published in a newspaper, listed on a webpage, added in a browser, etc. The infrastructure which provides public keys is called a PKI (Public Key Infrastructure). Deploying such an infrastructure in practice is rather complex: one needs to deal with revocation of certificates, upgrades of keys, multiple trusted parties, and integration of the public key management with the application (see for example [1]).

Figure 11: A digital signature scheme with appendix
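
Schematically, a certificate is nothing more than a signed record; in the sketch below, tp_sign and tp_verify stand for an arbitrary signature scheme of the trusted party and are assumed rather than specified:

    from dataclasses import dataclass

    @dataclass
    class Certificate:
        subject_name: str        # the entity being certified
        subject_public_key: str  # its public key (encoded)
        valid_from: str
        valid_until: str
        signature: bytes         # trusted party's signature over the fields above

    def issue(name, public_key, valid_from, valid_until, tp_sign):
        to_be_signed = "|".join([name, public_key, valid_from, valid_until])
        return Certificate(name, public_key, valid_from, valid_until,
                           tp_sign(to_be_signed.encode()))

    def check(cert, tp_verify):
        to_be_signed = "|".join([cert.subject_name, cert.subject_public_key,
                                 cert.valid_from, cert.valid_until])
        return tp_verify(to_be_signed.encode(), cert.signature)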

4 Analysis and Design of Cryptographic Algorithms

In this section we compare several approaches in cryptography. Next we describe the typical phases in the life of a cryptographic algorithm. Finally we discuss why – in spite of the progress in cryptology – weak cryptographic algorithms are still in use and which problems cryptographers are facing in the next decades.

4.1 Different Approaches in Cryptography

Modern cryptology follows several approaches: the information theoretic approach, the complexity theoretic approach, the bounded-storage approach, the quantum cryptology approach and the system based approach. These approaches differ in the assumptions about the capabilities of an opponent, in the definition of a cryptanalytic success, and in the notion of security.

The most desirable from the viewpoint of the cryptographer are unconditionally secure algorithms; this design approach is also known as the information theoretic approach. It was developed in the seminal work of C. Shannon in 1943 and published a few years later [68]. This is a perfect solution, since the security can be proved independent of the computational power of the opponent, and security will not erode over time. However, few such schemes exist that are secure in this model: examples are the Vernam scheme (Sect. 2.1) and the polynomial MAC algorithm (Sect. 3.1.2). While they are computationally very efficient, the cost in terms of key material may be prohibitively large. For most applications one has to live with schemes which offer only conditional security.

A second approach is to reduce the security of a cryptographic algorithm to that of other well known difficult problems, or to that of other cryptographic primitives. The complexity theoretic approach starts from an abstract model for computation, and assumes that the opponent has limited computing power within this model [30]. The most common models used are Turing machines and RAM models. This approach was started in cryptology by Yao [80] in 1982. It has many positive sides and has certainly contributed towards moving cryptology from an art to a science:

• It forces one to formulate exact definitions, and to state clearly the security properties and assumptions. This may seem trivial, but it has taken the cryptographic community a long time to define what secure encryption, secure message authentication, secure digital signatures, and secure authenticated encryption are. It turns out that there are many variants of these definitions (depending on the power of the opponent and her goals), and it takes quite some effort to establish the relations between them. For more complex primitives (e-payment, e-voting, interactions with more than two parties and in general multi-party computation), establishing correct definitions is even more complex.

• This approach results in formal reductions: one formally proves that if a particular object exists, another object exists as well. For example, one-way functions imply digital signature schemes, or the existence of pseudo-random permutations (a ‘secure’ block cipher) is sufficient to prove that the CBC mode with a secret and random initial value is a ‘secure’ mode for encryption (if the opponent can choose plaintexts and observe the corresponding ciphertexts). The term ‘secure’ has a very specific definition in each case, as explained above. These reductions often work by contradiction: for example, one shows that if an opponent can find a weakness in CBC encryption using a chosen plaintext attack, this implies a weakness in the underlying block cipher which allows one to distinguish this block cipher from a pseudo-random permutation. Once the proofs are written down, anyone can verify them; this is very important, as it turns out that some of these proofs have very subtle errors.

The complexity theoretic approach also has some limitations:

• Many cryptographic applications need building blocks, such as one-way functions, one-way permutations, collision resistant compression functions, and pseudo-random functions, which cannot be reduced to other primitives. In terms of the existence of such primitives, complexity theory has only very weak results: it is not known whether one-way functions exist. In non-uniform complexity (Boolean circuits) the best result proved thus far is that there exist functions which are twice as hard to invert as to compute, which is far too weak to be of any use in cryptography [35]. To some extent we need to rely on number theoretic assumptions (such as the assumptions underlying the security of the Diffie-Hellman protocol and RSA). For the others, we rely on unproven assumptions on functions such as RC4, DES, AES, RIPEMD-160 and SHA-1. This means that even when we have strong reductions, the foundations of the reductions are still rather weak.

• Sometimes the resulting scheme is much more expensive (in terms of computation or memory) than a scheme without a security proof; however, in the last decade a substantial effort has been made to improve the efficiency of the constructions for which there exists a security proof.

• The security proof may not be very efficient. Some reductions are only asymptotic or are very weak. For example, for RSA-OAEP, if one can violate the security property, that is, decrypt a random ciphertext, this results only with a very small probability in a violation of the assumption, e.g., the computation of a modular e-th root. Many reductions require the random oracle assumption: one assumes that the hash function used in the scheme behaves as a perfectly random function. It has been shown that this approach can result in security proofs for insecure schemes; however, these reductions still have some value as a heuristic in the following sense: these schemes are typically better than schemes for which no reduction in the random oracle model is known.

The term ‘concrete complexity’ has been coined by Bellare and Rogaway to denote security proofs with concrete reductions focusing on efficiency, that is, without asymptotics and hidden constants. For an overview of complexity theoretic results in cryptology, the reader is referred to the work of Goldwasser and Bellare [32] and Goldreich [31].

There has also been some interest in the bounded-storage model, where it is assumed that the storage capacity of an adversary is limited (see e.g., [20]). This model can be considered to be part of the information theoretic approach (one imposes that only limited information is available to the adversary) or part of the complexity theoretic approach (where space rather than time is considered to be limited).

Quantum cryptology does not work based on any computational assumptions, but starts rather from the assumption that quantum physics provides a complete model of the physical world; it relies on concepts such as the Heisenberg uncertainty principle and quantum entanglement. Quantum cryptography provides a means for two parties to exchange a common secret key over an authenticated channel. The first experimental results were obtained in the early 1990s, and ten years later several companies offer encryption products based on quantum cryptography. While this offers a fascinating approach, quantum cryptology needs authentic channels, which seems to make it not very useful to secure large open networks such as the Internet. Indeed, without public-key cryptography, the only way to achieve an authentic channel is by establishing prior secrets manually, which seems rather impractical for a large network. Moreover, most implementations currently have practical limitations: they are not yet compatible with packet switched systems, they typically allow for low speeds only (although high speed links have been demonstrated recently) and the communication distances are limited. For high speed communication, one still needs to stretch the short keys with a classical stream cipher; of course the resulting scheme then depends on standard cryptographic assumptions, which to some extent defeats the purpose of using quantum cryptology in the first place. Moreover, one can expect that the current implementations are vulnerable to the side channel attacks discussed below.

In view of the limitations of the above approaches, modern cryptology still has to rely to some extent on the system-based or practical approach. This approach tries to produce practical solutions for basic building blocks such as one-way functions, pseudo-random bit generators (stream ciphers), pseudo-random functions and permutations, etc. The security estimates are based on the best algorithm known to break the system and on realistic estimates of the necessary computing power or dedicated hardware to carry out the algorithm. By trial and error procedures, several cryptanalytic principles have emerged, and it is the goal of the designer to avoid attacks based on these principles. The second aspect is to design building blocks with provable properties, and to assemble such basic building blocks to design cryptographic primitives. The complexity theoretic approach has also improved our understanding of the requirements to be imposed on these building blocks. In this way, this approach is also evolving towards a more scientific approach. Nevertheless, it seems likely that for the next decades we still need to rely on the security of some concrete functions such as AES, SHA-512 and their successors.

4.2 Life Cycle of a Cryptographic Algorithm

In this section we discuss the life cycle of a cryptographic algorithm. We also discuss the impact of open competitions, and compare the use of public versus secret algorithms.

A cryptographic algorithm usually starts with a new idea of a cryptographer. A first step should be an evaluation of the security properties of the cryptographic algorithm, in which the cryptographer tries to determine whether or not the scheme is secure for the intended applications. If the scheme is unconditionally secure, he has to write the proofs, and to convince himself that the model is correct and matches the application. For computational security, it is again very important to write down security proofs, and to check these for subtle flaws. Moreover, one has to assess whether the assumptions behind the proofs are realistic. For the system-based approach, it is important to prove partial results, and to write down arguments which should convince others of the security of the algorithm. Often such cryptographic algorithms have security parameters (the number of steps, the size of the key, . . . ); it is then very important to give lower bounds for these parameters, and to indicate the value of the parameters which corresponds to a certain security level.

The next step is the publication of the algorithm at a conference, in a journal, or in an Internet Request for Comments (RFC). This (hopefully) results in an independent evaluation of the algorithm. Often more or less subtle flaws are then discovered by other researchers. This can vary from small errors in proofs to complete security breaks. Depending on the outcome, this can lead to a small fix of the scheme or to abandoning the idea altogether. Sometimes such weaknesses can be found ‘in real time’ when the author is presenting his ideas at a conference, but often evaluating a cryptographic algorithm is a very time consuming task; for example, the design effort of the Data Encryption Standard (DES) has been more than 17 man-years, and the open academic evaluation since has taken a multiple of this effort. Cryptanalysis is quite destructive; in this respect it differs from usual scientific activities, even when proponents of competing theories criticize each other.

Few algorithms survive the evaluation stage; ideally, this stage should last for several years. The survivors can be integrated into products and find their way to the market. Sometimes they are standardized by organizations such as NIST (National Institute of Standards and Technology, US), IEEE, IETF, ISO, or ISO/IEC.

As will be explained below, even if no new security weaknesses are found, the security of a cryptographic algorithm degrades over time; if the algorithm is not parameterized, the moment will come when it has to be taken out of service (or when its parameters need to be upgraded). Often such upgrades are not planned for.

4.2.1 Open Competitions.

In the past, several open competitions have been launched to stimulate this open and independent evaluation process: such competitions were organized in the US for the selection of DES (1974) and AES (1997), in Europe by the RIPE (1988-1992) [63] and NESSIE (2000-2003) [55, 61] projects and in Japan by the CRYPTREC project (2000-2003) [16]. While each of these competitions has slightly different goals, it is clear that they have advanced the state of the art and have helped to develop better solutions. The main disadvantage of most of these competitions is that the results are not much better than the state of the art at the time when the competition starts. This is a direct consequence of the fact that the designs cannot be modified too much during the process. However, it turns out that the lessons learned during these competitions have been very valuable.

4.2.2 Public versus Secret Algorithms.

The open and independent evaluation process described above offers a strong argument for publishing all details of a cryptographic algorithm. Publishing the algorithm opens it up for public scrutiny, and is the best way to guarantee that it is as strong as claimed. (Note that a public algorithm should not be confused with a public-key algorithm.) Published algorithms can be standardized, and will be available from more than one source.

Nevertheless, certain governments and organizations prefer to keep their algorithms secret. They argue (correctly) that obtaining the algorithm raises an additional barrier for the attacker. Moreover, governments or organizations may want to protect their know-how on the design of cryptographic algorithms. Note, however, that obtaining a description of the algorithm is often not harder than just bribing one person. An unacceptable excuse for using a secret algorithm is the desire to hide that an insecure algorithm is being used. The use of secret algorithms is acceptable in closed systems, provided that sufficient experience and resources are available for independent evaluation and re-evaluation of the algorithm.

4.3 Insecure Versus Secure Algorithms

In spite of the fact that secure cryptographic algorithms offering good performance at an acceptable cost are publicly available, one finds insecure cryptographic algorithms in many applications. In this section we try to address some of the reasons behind this.

4.3.1 Incompetence.

Cryptography is a fascinating discipline, which tends to attract ‘do-it-yourself’ people, who are not aware of the scientific developments of the last 60 years; their home-made algorithms can typically be broken in a few minutes by an expert. For example, popular software sometimes ‘encrypts’ data by adding a constant key word to all data words. It is of course very hard to beat the performance of this solution.

4.3.2 Political Reasons.

In a large part of the 20th century, governments have attempted to control know-how on cryptology and the use of cryptology. This results in the use of deliberately weak algorithms (such as A5/1 and A5/2 in the GSM phones), or in short keys (40-bit RC4 keys in browsers, and 54-bit session keys in GSM phones²). General export from the US was only allowed for short key lengths: 40 bits (or 56 bits) for symmetric ciphers, 512 bits for factoring based systems (RSA) and discrete logarithm modulo a large prime (Diffie-Hellman). The US export restrictions have been lifted to a large extent in January and October 2000 (see Koops [47] for details). In several countries, domestic controls were imposed; the best known example is France, where the domestic controls were lifted in January 1999. While the export restrictions are now a smaller concern, it still takes a long time before all legacy systems with short key lengths have been upgraded. Moreover, many countries still have some legislation on the use, import and export of cryptographic algorithms.

² It was discovered that 10 bits out of 64 bits are set to 0 by the operators at the request of national authorities.

4.3.3 Economic Reasons.

Even when it is obvious that an algorithm needs to be replaced or upgraded for technical reasons, it may not be economical to do so. This may be because it is estimated that the risk of using weak cryptographic algorithms is acceptable, and because the system has many other much weaker points. Economic reasons can probably explain why the financial sector has taken almost 20 years to upgrade DES to triple-DES.

4.3.4 Brute Force Attacks Become Easier over Time.

Brute force attacks are attacks which exist against any cryptographic algorithm that is conditionally secure, no matter how it works internally. These attacks only depend on the size of the external parameters of the algorithm, such as the block length of a block cipher, or the key length of any encryption or MAC algorithm. It is the task of the designer to choose these parameters such that brute force attacks are infeasible.

A typical brute force attack against an encryption or MAC algorithm is an exhaustive key search; it is equivalent to breaking into a safe by trying all the combinations of the lock. The lock should be designed such that this is not feasible in a reasonable amount of time. This attack requires only a few known plaintext/ciphertext (or plaintext/MAC) pairs, which one can always obtain in practice. It can be precluded by increasing the key length: adding one bit to the key doubles the time for exhaustive key search. One should also ensure that the key is selected uniformly at random in the key space.
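
A back-of-the-envelope estimate (with assumed, purely illustrative key-testing rates) shows how the expected search time doubles with every additional key bit:

    def expected_search_years(key_bits, keys_per_second=1e9, machines=1):
        # On average an exhaustive search succeeds after trying half the key space.
        keys_to_try = 2 ** (key_bits - 1)
        seconds = keys_to_try / (keys_per_second * machines)
        return seconds / (365 * 24 * 3600)

    for bits in (40, 56, 80, 128):
        years = expected_search_years(bits, keys_per_second=1e9, machines=1000)
        print("%3d-bit key: about %.3g years" % (bits, years))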

On a standard PC, trying a single key for a typical algorithm requires between 0.1 and 10 µs (depending on the complexity of the algorithm). For example, a 40-bit key can be recovered in 1 to 100 days. If a LAN with 100 machines can be used, one can find the key in 20 minutes to a day. For a 56-bit key such as a DES key, a key search requires a few months if several thousand machines are available (as was demonstrated in the first half of 1997). However, if dedicated hardware is used, a different picture emerges. In 1993, M. Wiener designed a US$ 1 M hardware DES key search machine that can recover a 56-bit DES key in about 3 hours [79]. If such a machine were built in 2004, it would be about 150 times faster (recovering a 56-bit key would take a little over a minute on average). In 1998, a US$ 250 000 machine (called “Deep Crack”) was built that finds a 56-bit DES key in about 4.5 days [21]; the design (which accounted for 50% of the cost) has been made available for free.

These numbers make one wonder how the US government could publish in 1977 a block cipher with a 56-bit key. However, one should also take into account a variant of Moore's law (which was formulated in 1967) [66], which states that computers double their speed every 18 months (for the same cost). This implies that in 1977, recovering a 56-bit key with a US$ 10 M hardware machine would have taken about 20 days; this was clearly only feasible for very large organizations, including the US government. This also explains the initial controversy over the DES key length.

Experts believe that Moore's law will hold for at least another 15 years. This means that if one has data which needs to be protected for 15 years against an opponent with a budget of US$ 10 M, one needs a key of about 90 bits (which corresponds to the security level of triple-DES, see Sect. 2.1.3). However, as the cost of increasing the key size is quite low, it is advisable to design new algorithms with a variable key size from 128 to 256 bits. Indeed, searching for the smallest AES key (128 bits) is a factor 2^72 times more expensive than finding a DES key. Even with a US$ 10^12 key search machine, it would take 1 million years in 2024 to recover a 128-bit key. This shows that if 128-bit keys are used for symmetric algorithms, it may no longer be necessary to worry about exhaustive key search for the next decades.

4.3.5 Shortcut Attacks Become more Efficient.

Many algorithms are less secure than suggested by the size of their external parameters. It is often possible to find more effective attacks than trying all keys (see for example the discussion on 2-key and 3-key triple-DES in Sect. 2.1.3). Assessing the strength of an algorithm requires cryptanalytic skills and experience, and often hard work. During the last 15 years, powerful new tools have been developed: these include differential cryptanalysis [7], which analyzes the propagation of differences through cryptographic algorithms, linear cryptanalysis [49], which is based on the propagation of bit correlations, fast correlation attacks on stream ciphers [50] and algebraic attacks on stream ciphers [15]. For example, the FEAL block cipher with 8 rounds, which was published in 1987, can now be broken with only 10 chosen plaintexts. Similar progress has been made in the area of public-key cryptology, for example with attacks based on lattice reduction [39] and the factoring methods discussed in Sect. 2.2.2.

4.3.6 New Attack Models.

The largest threats however originate from new attack models. One of these models is a quantum computer. Another new threat is the exploitation of side channel attacks and attacks based on fault injection.

Feynman realized in 1982 that computers based on the principles of quantum physics would be able to achieve exponential parallelism: having n components in the quantum computer would allow it to perform 2^n computations in parallel. Note that in a classical computer, having n computers can speed up the computation by a factor of at most n. He also observed that such computers would be very well suited to simulate quantum physical phenomena. A few years later, Deutsch realized that a general purpose quantum computer could be developed, on which any physical process could be modeled (at least in principle). However, the first interesting application of quantum computers outside physics was proposed by Shor in 1994 [69]. He showed that quantum computers are perfectly suited to number theoretic problems such as the factoring and discrete logarithm problems. This implies that if a large quantum computer could be built, the most popular public-key systems would be completely insecure. Building quantum computers is however a huge challenge. A quantum computer maintains its state in a set of qubits. A qubit can hold a one, or a zero, or a superposition of one and zero; this superposition is the essence of the exponential parallelism, but it can be disturbed by outside influences (decoherence). The first quantum computer with 2 qubits was built in 1998. Currently the record is a quantum computer with 7 qubits (three of these are used for quantum error correction); this allowed the factorization of the integer 15 in 2002 [76]. Experts are divided on the question whether sufficiently powerful quantum computers can be built in the next 15-20 years, but no one seems to expect a breakthrough in the next 5-10 years. For symmetric cryptography, quantum computers are less of a threat: they can reduce the time to search a 2n-bit key to the time to search an n-bit key (using Grover's algorithm [33]). Hence doubling the key length (from 128 to 256 bits) offers adequate protection. Collision search on a quantum computer reduces from 2^{n/2} steps to 2^{n/3} steps [33], so it is sufficient to increase the number of bits of a hash result from 256 to 384.

If one wants to use cryptographic algorithms, they have to be implemented in hardware or software (which runs on hardware). The opponent can try to make her life easier by obtaining information from the implementation, rather than trying to break the cryptographic algorithm using fast computers or clever mathematics. Side channel attacks have been known for a long time in the classified community; these attacks exploit information on the time to perform a computation [45], on the power consumption [46], or on the electromagnetic radiation [29, 62] to extract information on the plaintext or even the secrets used in the computation. A very simple side channel attack on the Vernam scheme (see Sect. 2.1) exploits the fact that if there is a logical ‘0’ on the line, this can be the result of 0⊕0 or 1⊕1. If the device implementing the scheme is not properly designed, the two cases may result in different electrical signals. This immediately leaks information on half of the plaintext bits.

Protecting implementations against side channel attacks is notoriously difficult: it typically requires a combination of countermeasures at the hardware level (adding noise, special logic, decoupling the power source), at the algorithmic level (e.g., blinding and randomization) and at the protocol level (e.g., frequent key updates) (see [13] for a brief overview). While many countermeasures have been published, many fielded systems (even new ones) are still very vulnerable. This is in part due to cost reasons and delays in upgrades, but also due to the development of ever more sophisticated attacks. Developing efficient implementations which offer a high security level against side channel attacks is a very important research challenge for the coming years.

The most powerful attacks induce errors in the computations, by varying the clock frequency or power level, by applying light flashes, etc. Such attacks are typically devastating: small changes in inputs during a cryptographic calculation typically reveal the secret key material [12]. Protecting against these attacks is non-trivial, as it requires continuous verification of all calculations, which should also include a check on the verifications. And even that may not be enough, as has been pointed out in [81]. It is also clear that pure cryptographic measures will never be sufficient to protect against such attacks.

5 Concluding Remarks

This paper only scratches the surface of modern cryptology, as the discussion is restricted to an overview of cryptographic algorithms. Other problems solved in cryptography include identification, timestamping, sharing of secrets, electronic cash, . . . Many interesting problems are studied under the umbrella of secure multi-party computation; examples are electronic elections, and the generation and verification of digital signatures in a distributed way. A very important aspect is the underlying key management infrastructure, which ensures that secret and public keys can be established and maintained throughout the system in a secure way. This is where cryptography meets the constraints of the real world.


We have demonstrated that while cryptology has made significant progress as a science, there are some interesting research challenges ahead, both from a theoretical and from a practical viewpoint. We need to make progress in theoretical foundations (how to prove that a problem is hard?), develop new cryptographic algorithms (for example new public-key cryptosystems which do not depend on algebraic number theory), and learn how to implement cryptographic algorithms efficiently and securely. Hopefully this will allow us to build schemes that offer very long term security at a reasonable cost.

References

[1] C. Adams, S. Lloyd, “Understanding PKI. Concepts, Standards, and Deployment Considerations,” 2nd edition, Addison-Wesley, 2003.

[2] M. Bellare, R. Canetti, H. Krawczyk, “Keying hash functions for message authentication,” Advances in Cryptology, Proceedings Crypto’96, LNCS 1109, N. Koblitz, Ed., Springer-Verlag, 1996, pp. 1–15. Full version http://www.cs.ucsd.edu/users/mihir/papers/hmac.html.

[3] M. Bellare, A. Desai, E. Jokipii, P. Rogaway, “A concrete security treatment of symmetric encryption,” Proceedings 38th Annual Symposium on Foundations of Computer Science, FOCS ’97, IEEE Computer Society, 1997, pp. 394–403.

[4] M. Bellare, P. Rogaway, “Random oracles are practical: A paradigm for designing efficient protocols,” Proceedings ACM Conference on Computer and Communications Security, ACM, 1993, pp. 62–73.

[5] M. Bellare, P. Rogaway, “The exact security of digital signatures. How to sign with RSA and Rabin,” Advances in Cryptology, Proceedings Eurocrypt’96, LNCS 1070, U. Maurer, Ed., Springer-Verlag, 1996, pp. 399–414.

[6] C.H. Bennett, G. Brassard, A.K. Ekert, “Quantum cryptography,” Scientific American, October 1992, pp. 50–57.

[7] E. Biham, A. Shamir, “Differential Cryptanalysis of the Data Encryption Standard,” Springer-Verlag, 1993.

[8] A. Biryukov, A. Shamir, D. Wagner, “Real time cryptanalysis of A5/1 on a PC,” Fast Software Encryption, LNCS 1978, B. Schneier, Ed., Springer-Verlag, 2002, pp. 1–18.

[9] J. Black, S. Halevi, H. Krawczyk, T. Krovetz, P. Rogaway, “UMAC: Fast and secure message authentication,” Advances in Cryptology, Proceedings Crypto’99, LNCS 1666, M. Wiener, Ed., Springer-Verlag, 1999, pp. 216–233.

[10] I.F. Blake, G. Seroussi, N.P. Smart, “Elliptic Curves in Cryptography,” Cambridge University Press, 1999.

[11] M. Blaze, “Rights amplification in master-keyed mechanical locks,” IEEE Security & Privacy, Vol. 1, No. 2, pp. 24–32.

[12] D. Boneh, R.A. DeMillo, R.J. Lipton, “On the importance of eliminating errors in cryptographic computations,” Journal of Cryptology, Vol. 14, 2001, pp. 101–119.


[13] J. Borst, B. Preneel, V. Rijmen, “Cryptography on smart cards,” Journal of Computer Networks, Vol. 36, 2001, pp. 423–435.

[14] J.L. Carter, M.N. Wegman, “Universal classes of hash functions,” Journal of Computer and System Sciences, Vol. 18, 1979, pp. 143–154.

[15] N. Courtois, W. Meier, “Algebraic attacks on stream ciphers with linear feedback,” Advances in Cryptology, Proceedings Eurocrypt’03, LNCS 2656, E. Biham, Ed., Springer-Verlag, 2003, pp. 345–359.

[16] CRYPTREC project, http://www.ipa.gov.jp/security/enc/CRYPTREC/index-e.html.

[17] J. Daemen, V. Rijmen, “The Design of Rijndael. AES – The Advanced Encryption Standard,” Springer-Verlag, 2001.

[18] D.W. Davies, W.L. Price, “Security for Computer Networks. An Introduction to Data Security in Teleprocessing and Electronic Funds Transfer,” (2nd Ed.), Wiley, 1989.

[19] W. Diffie, M.E. Hellman, “New directions in cryptography,” IEEE Transactions on Information Theory, Vol. IT–22, No. 6, 1976, pp. 644–654.

[20] S. Dziembowski, U. Maurer, “Optimal randomizer efficiency in the bounded-storage model,” Journal of Cryptology, Vol. 17, 2004, pp. 5–26.

[21] EFF, “Cracking DES. Secrets of Encryption Research, Wiretap Politics & Chip Design,” O’Reilly, May 1998.

[22] FIPS 46, “Data Encryption Standard,” Federal Information Processing Standard, NBS, U.S. Dept. of Commerce, January 1977 (revised as FIPS 46-1:1988; FIPS 46-2:1993; FIPS 46-3:1999).

[23] FIPS 81, “DES Modes of Operation,” Federal Information Processing Standard, NBS, US Dept. of Commerce, December 1980.

[24] FIPS 180, “Secure Hash Standard,” Federal Information Processing Standard (FIPS), Publication 180, National Institute of Standards and Technology, US Department of Commerce, Washington D.C., May 11, 1993 (revised as FIPS 180-1:1995; FIPS 180-2:2003).

[25] FIPS 186, “Digital Signature Standard,” Federal Information Processing Standard, NIST, US Dept. of Commerce, May 1994 (revised as FIPS 186-1:1998; FIPS 186-2:2000; change notice published in 2001).

[26] FIPS 197, “Advanced Encryption Standard,” Federal Information Processing Standard, NIST, U.S. Dept. of Commerce, November 26, 2001.

[27] S. Fluhrer, I. Mantin, A. Shamir, “Weaknesses in the key scheduling algorithm of RC4,” Selected Areas in Cryptography, LNCS 2259, S. Vaudenay, A. Youssef, Eds., Springer-Verlag, 2001, pp. 1–24.


[28] E. Fujisaki, T. Okamoto, D. Pointcheval, J. Stern, “RSA-OAEP is secure under the RSA assumption,” Advances in Cryptology, Proceedings Crypto’01, LNCS 2139, J. Kilian, Ed., Springer-Verlag, 2001, pp. 260–274.

[29] K. Gandolfi, C. Mourtel, F. Olivier, “Electromagnetic analysis: Concrete results,” Proceedings Cryptographic Hardware and Embedded Systems – CHES 2001, LNCS 2162, C.K. Koc, D. Naccache, C. Paar, Eds., Springer-Verlag, 2001, pp. 251–261.

[30] M.R. Garey, D.S. Johnson, “Computers and Intractability: A Guide to the Theory of NP-Completeness,” W.H. Freeman and Company, San Francisco, 1979.

[31] O. Goldreich, “Foundations of Cryptography: Volume 1, Basic Tools,” Cambridge University Press, 2001.

[32] S. Goldwasser, M. Bellare, “Lecture Notes on Cryptography,” http://www.cs.ucsd.edu/users/mihir/papers/gb.html.

[33] L.K. Grover, “A fast quantum mechanical algorithm for database search,” Proceedings 28th Annual ACM Symposium on Theory of Computing, 1996, pp. 212–219.

[34] J.E. Haynes, H. Klehr, “Venona. Decoding Soviet Espionage in America,” Yale University Press, 1999.

[35] A.P.L. Hiltgen, “Construction of feebly-one-way families of permutations,” Proceedings Auscrypt’92, LNCS 718, J. Seberry, Y. Zheng, Eds., Springer-Verlag, 1993, pp. 422–434.

[36] ISO/IEC 9797, “Information technology – Security techniques – Message Authentication Codes (MACs). Part 1: Mechanisms using a block cipher,” 1999. “Part 2: Mechanisms using a dedicated hash-function,” 2002.

[37] ISO/IEC 10116, “Information technology – Security techniques – Modes of operation of an n-bit block cipher algorithm,” 1997.

[38] ISO/IEC 10118, “Information technology – Security techniques – Hash-functions, Part 1: General,” 2000. “Part 2: Hash-functions using an n-bit block cipher algorithm,” 2000. “Part 3: Dedicated hash-functions,” 2003. “Part 4: Hash-functions using modular arithmetic,” 1998.

[39] A. Joux, J. Stern, “Lattice reduction: a toolbox for the cryptanalyst,” Journal of Cryptology, Vol. 11, 1998, pp. 161–185.

[40] G.A. Kabatianskii, T. Johansson, B. Smeets, “On the cardinality of systematic A-codes via error correcting codes,” IEEE Transactions on Information Theory, Vol. IT–42, No. 2, 1996, pp. 566–578.

[41] D. Kahn, “The Codebreakers. The Story of Secret Writing,” MacMillan, New York, 1967.

[42] L. Knudsen, “Block Ciphers – Analysis, Design and Applications,” PhD thesis, Aarhus University, Denmark, 1994.

[43] L. Knudsen, “Chosen-text attack on CBC-MAC,” Electronics Letters, Vol. 33, No. 1, 1997, pp. 48–49.


[44] N. Koblitz, “A Course in Number Theory and Cryptography,” Springer-Verlag, 1987.

[45] P. Kocher, “Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems,” Advances in Cryptology, Proceedings Crypto’96, LNCS 1109, N. Koblitz, Ed., Springer-Verlag, 1996, pp. 104–113.

[46] P. Kocher, J. Jaffe, B. Jun, “Differential power analysis,” Advances in Cryptology, Proceedings Crypto’99, LNCS 1666, M. Wiener, Ed., Springer-Verlag, 1999, pp. 388–397.

[47] B.-J. Koops, “Crypto law survey,” http://rechten.kub.nl/koops/cryptolaw.

[48] A.K. Lenstra, E.R. Verheul, “Selecting cryptographic key sizes,” Journal of Cryptology, Vol. 14, 2001, pp. 255–293.

[49] M. Matsui, “The first experimental cryptanalysis of the Data Encryption Standard,” Proceedings Crypto’94, LNCS 839, Y. Desmedt, Ed., Springer-Verlag, 1994, pp. 1–11.

[50] W. Meier, O. Staffelbach, “Fast correlation attacks on stream ciphers,” Journal of Cryptology, Vol. 1, 1989, pp. 159–176.

[51] R. Merkle, “Secrecy, Authentication, and Public Key Systems,” UMI Research Press, 1979.

[52] A.J. Menezes, “Elliptic Curve Public-Key Cryptosystems,” Kluwer Academic Publishers, 1993.

[53] A.J. Menezes, P.C. van Oorschot, S. Vanstone, “Handbook of Applied Cryptography,” CRC Press, 1997.

[54] Microsoft NGSCB, http://www.microsoft.com/resources/ngscb/.

[55] NESSIE, http://www.cryptonessie.org.

[56] NIST, “SP 800-38A Recommendation for Block Cipher Modes of Operation – Methods and Techniques,” December 2001.

[57] S. Paul, B. Preneel, “Analysis of non-fortuitous predictive states of the RC4 keystream generator,” Progress in Cryptology, Lecture Notes in Computer Science 2904, T. Johansson, S. Maitra, Eds., Springer-Verlag, 2003, pp. 30–47.

[58] E. Petrank, C. Rackoff, “CBC MAC for real-time data sources,” Journal of Cryptology, Vol. 13, 2000, pp. 315–338.

[59] “State of the Art in Applied Cryptography,” Lecture Notes in Computer Science 1528, B. Preneel, V. Rijmen, Eds., Springer-Verlag, 1998.

[60] B. Preneel, P.C. van Oorschot, “MDx-MAC and building fast MACs from hash functions,” Proceedings Crypto’95, LNCS 963, D. Coppersmith, Ed., Springer-Verlag, 1995, pp. 1–14.


[61] B. Preneel, A. Biryukov, C. De Canniere, S.B. Ors, E. Oswald, B. Van Rompay, L. Granboulan, E. Dottax, G. Martinet, S. Murphy, A. Dent, R. Shipsey, C. Swart, J. White, M. Dichtl, S. Pyka, M. Schafheutle, P. Serf, E. Biham, E. Barkan, Y. Braziler, O. Dunkelman, V. Furman, D. Kenigsberg, J. Stolin, J.-J. Quisquater, M. Ciet, F. Sica, H. Raddum, L. Knudsen, M. Parker, “Final report of NESSIE, New European Schemes for Signatures, Integrity, and Encryption,” LNCS, Springer-Verlag, in print.

[62] J.-J. Quisquater, D. Samide, “ElectroMagnetic Analysis (EMA): Measures and countermeasures for smart cards,” Smart Card Programming and Security, International Conference on Research in Smart Cards, E-smart 2001, LNCS 2140, I. Attali, T. Jensen, Eds., Springer-Verlag, 2001, pp. 200–210.

[63] RIPE, “Integrity Primitives for Secure Information Systems. Final Report of RACE Integrity Primitives Evaluation (RIPE-RACE 1040),” LNCS 1007, A. Bosselaers, B. Preneel, Eds., Springer-Verlag, 1995.

[64] R.L. Rivest, A. Shamir, L. Adleman, “A method for obtaining digital signatures and public-key cryptosystems,” Communications ACM, Vol. 21, No. 2, 1978, pp. 120–126.

[65] R.A. Rueppel, “Analysis and Design of Stream Ciphers,” Springer-Verlag, 1986.

[66] R.R. Schaller, “Moore’s law: past, present, and future,” IEEE Spectrum, Vol. 34, No. 6, June 1997, pp. 53–59.

[67] A. Shamir, E. Tromer, “Factoring large numbers with the TWIRL device,” Advances in Cryptology, Proceedings Crypto’03, LNCS 2729, D. Boneh, Ed., Springer-Verlag, 2003, pp. 1–26.

[68] C.E. Shannon, “Communication theory of secrecy systems,” Bell System Technical Journal, Vol. 28, No. 4, 1949, pp. 656–715.

[69] P.W. Shor, “Algorithms for quantum computation: discrete logarithms and factoring,” Proceedings 35th Annual Symposium on Foundations of Computer Science, S. Goldwasser, Ed., IEEE Computer Society Press, 1994, pp. 124–134.

[70] V. Shoup, “OAEP reconsidered,” Advances in Cryptology, Proceedings Crypto’01, LNCS 2139, J. Kilian, Ed., Springer-Verlag, 2001, pp. 239–259.

[71] “Contemporary Cryptology: The Science of Information Integrity,” G.J. Simmons, Ed., IEEE Press, 1991.

[72] S. Singh, “The Code Book. The Science of Secrecy from Ancient Egypt to Quantum Cryptography,” Anchor, 2000.

[73] W. Stallings, “Cryptography and Network Security,” 3rd edition, Prentice Hall, 2003.

[74] D. Stinson, “Cryptography. Theory and Practice,” CRC Press, 1995.

[75] TCG, http://www.trustedcomputinggroup.org.

[76] L.M.K. Vandersypen, M. Steffen, G. Breyta, C.S. Yannoni, M.H. Sherwood, I.L. Chuang, “Experimental realization of Shor’s quantum factoring algorithm using nuclear magnetic resonance,” Nature, 414, 2001, pp. 883–887.


[77] G.S. Vernam, “Cipher printing telegraph system for secret wire and radio telegraph communications,” Journal American Institute of Electrical Engineers, Vol. XLV, 1926, pp. 109–115.

[78] M.N. Wegman, J.L. Carter, “New hash functions and their use in authentication and set equality,” Journal of Computer and System Sciences, Vol. 22, No. 3, 1981, pp. 265–279.

[79] M.J. Wiener, “Efficient DES key search,” Presented at the Rump Session of Crypto’93. Reprinted in “Practical Cryptography for Data Internetworks,” W. Stallings, Ed., IEEE Computer Society, 1996, pp. 31–79.

[80] A.C. Yao, “Theory and applications of trapdoor functions,” Proceedings 23rd IEEE Symposium on Foundations of Computer Science, IEEE, 1982, pp. 80–91.

[81] S.-M. Yen, M. Joye, “Checking before output may not be enough against fault-based cryptanalysis,” IEEE Transactions on Computers, Vol. 49, No. 9, 2000, pp. 967–970.
