

Future Generation Computer Systems 16 (2000) 319–329

Key management in a non-trusted distributed environment

Ed Dawson∗, Andrew Clark, Mark Looi

Information Security Research Centre, Queensland University of Technology, GPO Box 2434, Gardens Point, Brisbane 4001, Qld., Australia

Accepted 3 March 1999

Abstract

Despite the fact that the World Wide Web is an untrusted environment, increasing use is being made of this network (the Internet) in electronic commerce applications. To prevent attacks a strong security architecture is required. A fundamental part of such an architecture is a method for key management. This paper discusses the various components of cryptographic key management, especially in relation to the World Wide Web environment. Issues and problems with key generation, key distribution and key storage are raised. An overview is presented of key management systems in several security architectures including SSL, Kerberos and Sesame. ©2000 Elsevier Science B.V. All rights reserved.

Keywords: Key management; Key distribution; Key storage; Random number generator; World Wide Web

1. Introduction

Many modern applications are reliant upon cryptographic systems for the provision of security services such as confidentiality, integrity, authentication and non-repudiation. Regardless of whether the type of cryptographic algorithm used is symmetric or asymmetric, and regardless of the specific algorithm used, all cryptographic systems are reliant on the use of one or more cryptographic keys. An important component of such a system is a secure method to manage these keys.

With the increased network connectivity available today, applications such as electronic commerce, home shopping, banking and share trading are becoming more widely used. These applications are typically being provided using a non-trusted distributed network environment such as the Internet. The World Wide Web (WWW) is perhaps the most common method for delivering such applications, with a wide range of transactional applications being available today.

∗ Corresponding author. E-mail addresses: [email protected] (E. Dawson), [email protected] (A. Clark), [email protected] (M. Looi).

This article discusses the concepts of secure cryptographic key management in a non-trusted distributed environment such as the WWW. It commences with a brief discussion on the principles of secure key management and why secure key management is a problem in a non-trusted distributed environment. A discussion is given of the three distinct areas of key management, namely key generation, key distribution and key storage, with reference to this particular environment. Examples of how key management is used in selected applications, and examples of current schemes, are provided.

0167-739X/00/$ – see front matter ©2000 Elsevier Science B.V. All rights reserved. PII: S0167-739X(99)00056-4

2. Principles of secure key management

Key management plays a crucial role in any security architecture. Unless secure key management techniques are used, a security system may be exposed to attack despite the fact that strong cryptographic algorithms are used. This is especially the case in an untrustworthy environment such as the WWW. Key management is concerned with the generation, distribution and storage of cryptographic keys.

Additionally, each of these processes must be controlled by a suitable security policy. It should be noted that prior to designing a key management scheme for a security architecture, a security policy for this architecture should be provided. This policy should identify the threats against which the security architecture aims to guard.

In relation to key management this security policy should identify the procedures to be conducted as well as describe the responsibilities and accountability of the different users. In addition, the security policy should identify the audit requirements.

2.1. Key generation

Secure key generation is the foundation of any key management scheme. Keys should be generated in an unpredictable fashion. Cryptographic keys should be generated in such a manner that each of the possible keys is equally likely to occur. The keys should be generated in such a fashion that knowledge of one key should not provide any knowledge about any other keys. The key generation process should involve the use of random or pseudo-random bit sequences. The most important measure of random sequences for cryptographic applications is entropy, which gives an indication of the number of independent bits in a sequence, i.e. a sequence of n bits from a truly random source has entropy equal to n bits. A more detailed description of key generation will be presented in Section 4.
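The entropy measure described above can be estimated for a sample of generator output. The sketch below computes the first-order Shannon entropy per bit of a 0/1 string; note that it ignores correlations between successive bits, so it only gives an upper bound on the true entropy of the source:

```python
import math
from collections import Counter

def entropy_per_bit(bits: str) -> float:
    """Estimate the Shannon entropy per output bit of a 0/1 string.
    A truly random source approaches 1.0 bit of entropy per bit."""
    n = len(bits)
    return -sum((c / n) * math.log2(c / n) for c in Counter(bits).values())

print(entropy_per_bit("0000000011111111"))  # balanced sample -> 1.0
print(entropy_per_bit("0001000100000001"))  # biased sample   -> ~0.70
```

A generator whose per-bit entropy falls well below 1.0 is producing redundant output, which is the signal (per the text above) that hashing or compression of the raw bits is required.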

2.2. Key distribution

Cryptographic keys need to be distributed between two users in a secure fashion. If two users already share a cryptographic key then this process is greatly simplified. However, in a distributed environment it may not be reasonable to assume that two users share a secret key initially. Hence the process of key distribution in such a network is more difficult. It may not be possible to use a trusted courier to transfer keys between two users. As well, in many cases keys should be distributed in such a fashion that compromise of one key should not influence the security of other keys.

Key distribution in many cases involves the use of intricate cryptographic protocols. There are two types of protocols which are used, namely key transport protocols and key agreement protocols. In key transport protocols one user selects the key and transports it securely to the other user. In a key agreement protocol each user contributes to the generation of the key. Key distribution will be described in more detail in Section 5.

2.3. Key storage

Keys need to be stored in a secure manner, in such a fashion that they will not be revealed to an unauthorised person. In general this is a difficult task in a distributed environment if the keys are stored in an insecure workstation. In order to provide a secure storage method it may be necessary to use a tamper resistant device such as a smart card. In certain circumstances key storage may need to involve the use of auditing processes such as archival and recovery mechanisms, as in some cases the keys may need to be archived for a long time. The key recovery mechanisms may be needed in case of equipment failure, or they may also be required for legal reasons. As well, key storage may require mechanisms for destroying cryptographic keys when their use is no longer required. A more detailed description of key storage is presented in Section 6.

3. Problems with non-trusted distributed environments

Key management is difficult in a non-trusted distributed environment. This is caused by the two key factors, non-trusted and distributed.

Key management is significantly easier in an environment where the communications, users and system administrators are trusted. Such trust does not generally exist on the WWW.

On the Internet (and therefore the WWW) network communications (and thus security critical data) can and do travel through a multitude of often unknown and untrusted systems. There is no guarantee that transmitted data has not been intercepted or modified by an untrusted system. There is also no guarantee that any received data did in fact originate from the system that claims to have sent the data.

In such a non-trusted environment, users are also generally untrusted. Users are typically dealing with faceless entities – entities that may not necessarily have any existence outside the electronic environment. It is difficult to trust an entity that you have never met or visited before, an entity whose only proof of legitimacy is that of a pattern of bits in a computer system. This applies not only to the faceless individual, but also to the faceless large organisation. Not only do large organisations (or electronic merchants) have to trust individual users, but individual users equally need to trust the purported organisations. Such trust is generally not forthcoming on the Internet, except by unknowledgeable or naïve users.

It is also often the case that system administrators cannot be trusted. In almost every system, it is possible for the administrator to view all data files stored on that system.

Because of the distributed nature of the WWW, at least some, if not most, of the communication channels, users and systems will be outside the control of the participating entities.

This lack of trust means that not only must cryptographic mechanisms be used to protect the applications, but appropriate security must also be in place to protect the cryptographic keys.

4. Key generation

The generation of random keying data for cryptographic applications is a difficult task. Sources of truly random data are not always readily available and therefore there is a heavy reliance on pseudo-random number generators for creating sequences of random data for keying ciphers. A pseudo-random generator requires an initial seed, from which the subsequent random data is derived. The security of the generator, then, relies on the randomness and length of the seed and the properties of the particular generation algorithm.

The process of generating random data can be broken into two phases:

Phase 1. The first phase is the generation of seeding (or keying) data for the random number generator. This data must be obtained from a reliable random source. Depending on the entropy of the source, the bits may be used directly, or they may be hashed with some cryptographically secure algorithm or compressed to remove redundancy.

Phase 2. If the data obtained from Phase 1 is sufficient for the required application then no further processing is required. If more (perhaps even a continuous supply of) data is required then other techniques which utilise deterministic processes must be used. The random data from Phase 1 may be used to seed a pseudo-random generator in order to provide a continuous (but not infinite) sequence of random material. As well, the seed output from Phase 1 may be input into a hash function. This may be the case, for example, when in order to be confident of generating a key with n bits of entropy it may be required to collect much more than n bits from the random number generator.

It is important to note that the deterministic processes described above in Phase 2 will not increase the entropy. The only method to obtain entropy is to use a random process. Several methods involving random processes are described below.
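The two phases can be sketched in code. In this illustrative example, Phase 1 samples a high-resolution timer and condenses the readings with SHA-256, and Phase 2 expands the resulting seed deterministically with a hash-counter construction; the function names and parameters are our own, and a production system would draw its seed from a vetted OS source rather than timer jitter:

```python
import hashlib
import time

def collect_seed(nbytes: int = 32, samples: int = 256) -> bytes:
    """Phase 1 (illustrative only): sample a high-resolution timer
    and hash the raw readings to condense whatever entropy their
    jitter contains. Prefer a vetted source such as os.urandom()."""
    raw = b"".join(time.perf_counter_ns().to_bytes(8, "little")
                   for _ in range(samples))
    return hashlib.sha256(raw).digest()[:nbytes]

def prg(seed: bytes, nbytes: int) -> bytes:
    """Phase 2: a deterministic hash-counter expander. Being
    deterministic, it can never add entropy beyond the seed's."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:nbytes]

session_key = prg(collect_seed(), 16)   # a 128-bit key
print(len(session_key))                 # -> 16
```

The expander illustrates the point made above: however much output `prg` produces, its entropy is capped by the entropy of the Phase 1 seed.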

Various sources of random bits have been proposed. Dedicated hardware devices can be reliable but may be impractical due to cost or physical computer limitations. On the other hand, software sources are readily available but are less reliable in providing data which is truly random. The distinction between hardware and software sources of random data is somewhat fuzzy. Often a hardware device (such as a keyboard, a mouse or a hard disk drive) is monitored by a software application which interprets the signals sent from the device and then outputs the random data. The software may also be used to remove redundancy from the received signal if the source is not truly random. In each of the following examples the entropy of the data from the random source should be tested to determine if removal of redundancy is required. Possible sources of random data are:

• Radioactive sources: A radioactive element decays randomly over time, emitting radioactive particles. These particles can be counted and the rate of their emission calculated in order to provide highly random data.

• Semiconductors: A noisy diode or resistor may be a very effective source of random data. Such devices are not expensive.


• Disk drives: In [1], it was shown that there is a random variation in the rotation speed of a sealed hard disk drive due to air turbulence. Using low level timing measurements on a dedicated disk drive proves to be an excellent source of random data. It is important that the drive not be used for any other purpose (for example, storing data) since such activity could adversely affect the randomness of the output – either by unintentional use or by a malicious entity influencing the performance of the disk.

• Mouse movement: The location of a mouse pointer within a computer screen and the times between movements have been proposed as a good source of random data. This approach may work well if, as suggested in [5], the user is asked to sign their name (using a mouse or joystick). However, it has been pointed out that care must be taken with particular applications since the number of possible mouse strokes required by the application may be small, making the user's responses predictable. This is especially important in network applications (for example, Web browsers) where an attacker may glean information by monitoring the network [1].

• Keystroke timings: A user may be asked to type a random sequence of characters while the time between successive keystrokes is recorded and used to generate random data. The randomness of this data depends on the precision and reliability of the clock used for timing and the user's typing characteristics.

• Network traffic: The statistics of the traffic of a computer network have been proposed as a source of random data. This approach should not be considered secure as others with access to the network can monitor and influence the traffic on the network.

• Microphones: Background noise obtained from a microphone may provide a suitable source of random data. Also, the port of an unplugged microphone can provide noise. A Sun SPARCStation has the device /dev/audio which can be used easily to gather such data.

• System time: Programmers requiring random number generators for software simulations, etc., frequently use the system time as a seed and a system pseudo-random number generator. Such an approach by itself is inappropriate for cryptographic applications since the time is predictable, as are the pseudo-random number generators provided by common compilers.

• High frequency clock: The Intel Pentium range of processors contains a 64-bit counter register (called a Time Stamp Counter) which is incremented upon each internal processor clock cycle. For example, on a 200 MHz processor the register will be incremented 200,000,000 times every second. Even though any system timer is deterministic (and therefore predictable), if read at irregular intervals (for example, upon a human keystroke) its values may reveal usable entropy.

None of the random sources listed above provides a simple mechanism for collecting good data. It would be useful if the producers of microprocessors (used in all computers) incorporated a small, inexpensive semiconductor capable of generating random bits. It would then be simple for the developers of operating systems to provide a mechanism by which a programmer could easily access the random data.
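Operating systems have since adopted essentially this suggestion: modern kernels mix hardware event timings (and, on recent CPUs, an on-chip hardware generator) into an entropy pool that programs can read directly. A minimal sketch using Python's standard library:

```python
import os
import secrets

# 128 bits drawn from the kernel-maintained entropy pool:
key = os.urandom(16)
print(len(key))                     # -> 16

# The secrets module wraps the same source with key-oriented helpers:
print(len(secrets.token_hex(16)))   # 16 random bytes -> 32 hex characters
```

Note that this post-dates the paper: at the time of writing, no such portable interface was widely available, which is precisely the gap the authors identify.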

The difficulty of finding suitable seeding material for random number generators is highlighted by the number of attacks on existing systems which have been published. Typically, the bits obtained from the sources outlined above are not independent. Calculating the entropy of the collected data gives an indication of the number of independent bits. If the entropy is low then the amount of usable random data obtained may be insufficient for a particular application, in which case the random data can be used to seed a pseudo-random number generator which will produce the required amount of information.

Many pseudo-random number generators exist but not all are suitable for cryptographic applications. The algorithm used must have good properties and the seeding data used by the algorithm must be truly random. It is essential that the source of the seed be totally unpredictable to prevent an attacker from guessing it. It is also important that the length of the seed required by the pseudo-random number generator be large enough to prevent an attacker exhaustively trying all seeds. When a cipher is used to generate random data the seed is usually used as a key for the cipher, in which case the required length of the seed is dictated by the key size of the cipher. Ciphers are usually designed with keys large enough to prevent them from being determined by an exhaustive attack.
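The requirement that the seed resist exhaustive search can be made concrete with a little arithmetic. Assuming, hypothetically, an attacker who can test 10^9 seeds per second:

```python
def years_to_exhaust(seed_bits: int, guesses_per_sec: float = 1e9) -> float:
    """Worst-case time to try every seed of the given length."""
    return 2 ** seed_bits / guesses_per_sec / (3600 * 24 * 365)

print(f"{years_to_exhaust(56):.1f}")    # 56-bit seed: about 2.3 years
print(f"{years_to_exhaust(128):.2e}")   # 128-bit seed: ~1e22 years
```

Each additional seed bit doubles the search time, which is why seed lengths are matched to the key size of the cipher being keyed.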


Hardware devices such as smart cards may have some random seed data programmed into them at production time, or they may have their own genuine noise source on the card (such as a semiconductor of some type). Such data may be used to seed a pseudo-random number generator which runs on the hardware in the card.

It is important to ensure that the random numbers generated satisfy randomness properties. Black-box methods to test the generating device for randomness properties are described in [2,3]. It is recommended that these procedures be applied as a first step in analysing the strength of the key generating algorithm. If such an algorithm fails to satisfy these tests, then it is recommended that it not be used for the generation of keys where the secrecy of information is paramount. To gain a true understanding of the actual entropy it is necessary to be provided with the actual source code, or complete description of the method, used to generate random numbers. This allows an analyst to conduct an in-depth examination of the technique used to generate random numbers.
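To give a flavour of such black-box testing, the sketch below implements a simple monobit check in the spirit of the FIPS 140-1 statistical tests; the 3.3-standard-deviation threshold is an illustrative choice of ours, not the bound from [2,3]:

```python
def monobit_test(bits: str) -> bool:
    """Pass if the number of ones is within ~3.3 standard deviations
    of n/2, the expectation for a fair (unbiased) source."""
    n = len(bits)
    ones = bits.count("1")
    return abs(ones - n / 2) <= 3.3 * (n ** 0.5) / 2

print(monobit_test("01" * 1000))   # balanced sample -> True
print(monobit_test("1" * 2000))    # constant sample -> False
```

A real test battery applies many such checks (runs, poker, long-run tests and so on); passing them is necessary but, as the text notes, not sufficient without examining the generation method itself.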

5. Key distribution

In this section we discuss techniques for establishing a secret key between two parties who wish to communicate securely. This secret key is typically used to secure a cryptographic session, i.e. a session key. Typically the key distribution phase incorporates some kind of authentication mechanism so that each party can be assured of the identity of the other. In many cases key distribution schemes can be generalised to allow for the establishment of a key between two or more parties, but in this paper we limit ourselves to the case of two parties. As mentioned in Section 2.2, there are two types of key distribution schemes: key transport protocols and key agreement protocols. Each of these schemes is now described in some detail.

5.1. Key transport protocols

In key transport protocols one party is responsible for generating the session key and transmitting it securely to the other parties that require it. The key should be generated using a technique similar to the ones described above in Section 4. Secure transmission of the key to the other party can be achieved using either symmetric cryptography or asymmetric cryptography.

When symmetric cryptography is used, a trusted server is required, which shares long-term secret keys with each of the parties wishing to communicate in the session. The trusted server usually generates the required session key and distributes it to each party, protected with the long-term key that is shared with that party. This process is outlined in Example 1. Note that in all the examples below, as is standard when describing cryptographic protocols, A stands for Alice and B stands for Bob. Hence, keys belonging to A are referred to as ‘hers’ and keys belonging to B are referred to as ‘his’.

Example 1. Assume parties A and B wish to establish a session key. A shares a long-term secret key, KAS, with the trusted server, S, and B shares a long-term secret key, KBS, with S.

1. A sends a message to S requesting a session key for communication with B.
2. S generates a session key KAB.
3. S encrypts KAB using KAS and sends the result to A.
4. S encrypts KAB using KBS and sends the result to B.
5. A and B decrypt KAB using their respective long-term keys (shared with S).
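The steps of Example 1 can be simulated in a few lines. The toy XOR "cipher" below merely stands in for a real symmetric algorithm (such as DES or AES) and is not secure in itself; the point of the sketch is the key-wrapping flow, with the network messages compressed into variable assignments:

```python
import hashlib
import secrets

def toy_cipher(key: bytes, msg: bytes) -> bytes:
    """Illustrative XOR stream cipher (it is its own inverse).
    A stand-in for a real symmetric cipher; not secure as-is."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(msg):
        stream += hashlib.sha256(stream).digest()
    return bytes(m ^ s for m, s in zip(msg, stream))

# Long-term keys each party shares with the trusted server S:
K_AS = secrets.token_bytes(16)
K_BS = secrets.token_bytes(16)

# Steps 2-4: S generates the session key and wraps it for A and B.
K_AB = secrets.token_bytes(16)
to_A = toy_cipher(K_AS, K_AB)
to_B = toy_cipher(K_BS, K_AB)

# Step 5: A and B unwrap with their respective long-term keys.
assert toy_cipher(K_AS, to_A) == toy_cipher(K_BS, to_B) == K_AB
print("session key distributed")
```

The structural point survives the toy cipher: only a party holding one of the long-term keys can recover the session key S generated.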

When asymmetric cryptography is used, no trusted server is required, i.e. the session key can be generated and distributed directly to the other party. The transmitted session key is usually protected using the public key of its recipient. The sender of the key may optionally sign the transmitted data with their private key, providing the recipient with the ability to verify who sent it. An example of a key transport protocol which utilises asymmetric cryptography is the Needham–Schroeder public key protocol [11], which is described now in Example 2.

Example 2. In this example each party sends a symmetric key to the other. This need not be the case – that is, only one key need be communicated if that is all that is required. This protocol provides mutual authentication.

1. A generates a session key KAB.
2. A encrypts KAB using the public key of B and sends it, along with her identity, to B.
3. B generates a session key KBA.
4. B decrypts KAB using his private key and sends both KAB and KBA to A, encrypted under A's public key.
5. A decrypts KAB and KBA using her private key and verifies that the received KAB is the same as the one sent to B in Step 2.
6. A encrypts KBA with B's public key and sends it back to B.
7. B decrypts KBA with his private key and verifies that it is the same as the one sent to A in Step 4.
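The message flow of Example 2 can be traced with textbook RSA. The parameters below are the classic small-number example (n = 3233, e = 17, d = 2753) and, for brevity, A and B share one key pair; both choices are purely illustrative, since a real deployment needs large moduli, padding, and distinct key pairs:

```python
# Textbook-RSA toy parameters; illustrative only.
n, e, d = 3233, 17, 2753

def pk_encrypt(m: int) -> int:   # encrypt under the public key (n, e)
    return pow(m, e, n)

def sk_decrypt(c: int) -> int:   # decrypt with the private key d
    return pow(c, d, n)

# Steps 1-2: A generates K_AB and sends it under B's public key.
K_AB = 1234
msg1 = pk_encrypt(K_AB)

# Steps 3-4: B recovers K_AB, generates K_BA, returns both to A.
K_AB_at_B = sk_decrypt(msg1)
K_BA = 2001
msg2 = (pk_encrypt(K_AB_at_B), pk_encrypt(K_BA))

# Step 5: A verifies the echoed K_AB and recovers K_BA.
assert sk_decrypt(msg2[0]) == K_AB
K_BA_at_A = sk_decrypt(msg2[1])

# Steps 6-7: A echoes K_BA back under B's public key; B verifies it.
assert sk_decrypt(pk_encrypt(K_BA_at_A)) == K_BA
print("mutual authentication complete")
```

The echo checks in Steps 5 and 7 are what give each party confidence that the other actually decrypted, and therefore holds the private key for, the messages sent to it.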

There are advantages and disadvantages of each of these two forms of key transport in an untrusted distributed environment. One advantage of the first protocol is that the generation of session keys is greatly simplified, since it is only necessary to have the trusted server generate these keys. Hence, it is only necessary to place a dedicated key generation device, as described in Section 4, at this server. However, fundamental problems with this protocol are that a trusted server is required and that each of the users must share a long-term secret key with the trusted server, S. This requirement is not possible on the WWW for many applications, especially if one considers the case of international electronic commerce. On the other hand, the second protocol does not have this problem since no trusted server is required. However, there are several disadvantages with this protocol since it involves the use of public key algorithms. These algorithms usually require greater bandwidth and computational effort than symmetric algorithms.

Another problem with public key algorithms is that a Public Key Infrastructure (PKI) is required – for example, a certification authority (CA) is needed to certify the public key of each user. A certificate is a signed data structure which provides assurance of the binding between an entity and its alleged public key. There are many issues to finalise before a PKI is used on the Internet. For example, the type, format and legal status of certificates need to be resolved. Currently, there are several working groups investigating the design of such certificates, including the Internet Engineering Task Force (IETF) [8] and the Open Group [12].

5.2. Key agreement protocols

In contrast to key transport protocols, key agreement protocols require input from each of the parties in the generation of the session key. These protocols generally utilise public key cryptography. Probably the best known key agreement protocol is the Diffie–Hellman key agreement scheme [4]. The following example presents a key agreement scheme based on the Diffie–Hellman scheme known as the Station-to-Station protocol (STS) [10].

Example 3. This example presents the STS protocol, which gives key agreement and, at the same time, mutual entity authentication. Prior to establishing a session key the following public parameters must be set: a prime, p, and a generator, α, of Z_p^* (2 ≤ α ≤ p − 2). It is also assumed that A and B each possess a public/private key pair and that each trusts the public key of the other. The protocol then progresses as follows:

1. A chooses a random secret x (1 ≤ x ≤ p − 2) and sends α^x mod p to B.
2. B chooses a random secret y (1 ≤ y ≤ p − 2) and computes α^y mod p. B computes the session key, K, by raising α^x mod p (received from A) to the power y (mod p). B also forms a signature (using his private key) of α^y and α^x concatenated, which he encrypts with the session key, K. B then forwards α^y mod p and the encrypted signature to A.
3. A computes the session key, K, by raising α^y mod p (received from B) to the power x (mod p), and then decrypts the encrypted signature (also received from B), which can then be verified. When B's message has been successfully verified, A generates a similar message by forming a signature of α^x concatenated with α^y mod p and encrypting it with the session key, K. A then sends the encrypted signature to B.
4. B then decrypts the incoming message using K and verifies the signature contained within. If the signature is successfully verified then both A and B can be satisfied that the other has the session key, K.
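The Diffie–Hellman core of Example 3 (Steps 1–3, with the signatures omitted) can be sketched directly, since Python's built-in pow performs modular exponentiation. The group below (p = 23, α = 5, a standard textbook choice) is far too small for real use and serves only to show the arithmetic:

```python
import secrets

# Tiny textbook group for illustration only; a real deployment
# needs a large, standardized prime-order group.
p, alpha = 23, 5

x = secrets.randbelow(p - 2) + 1     # A's secret, 1 <= x <= p - 2
y = secrets.randbelow(p - 2) + 1     # B's secret, 1 <= y <= p - 2

msg_A = pow(alpha, x, p)             # step 1: A -> B, alpha^x mod p
msg_B = pow(alpha, y, p)             # step 2: B -> A, alpha^y mod p

K_at_A = pow(msg_B, x, p)            # A raises alpha^y to the power x
K_at_B = pow(msg_A, y, p)            # B raises alpha^x to the power y

assert K_at_A == K_at_B              # both hold K = alpha^(xy) mod p
print("agreed key:", K_at_A)
```

What STS adds on top of this exchange is the signed, key-encrypted echo of the two exponentials, which is what defeats a man-in-the-middle who could otherwise substitute his own α^x and α^y values.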

The protocol in Example 3 has the same advantages as described above for Example 2 in that no trusted server is required. Also, the disadvantages are similar to Example 2 in that the protocol involves public key ciphers, so a PKI is required, and the protocol is costly in relation to bandwidth and computation.

Key agreement seems to be the ideal protocol to use in a distributed environment where no trusted server is available and the two parties who wish to set up a secure communication channel do not trust one another. The reason is that each party is able to have input into the format of the session key. One major problem with key agreement is that each party needs to have a method to generate random numbers. As described in Section 4, this is not a simple process. There are many applications, such as home banking, where a person is communicating with a bank (which is usually assumed to be trusted) and where a key transport protocol may be much more appropriate.

6. Key storage

Because of the untrusted and distributed nature of the WWW, key storage is a difficult problem, but one that must be solved in order to provide a secure key management system, and thus a secure application. As previously discussed, the security of any application, whether distributed or not, hinges upon the protection of the cryptographic keys. It is pointless generating and distributing the keys in a secure manner if the keys are then stored by the end user in an insecure manner. Two aspects of key storage are described here, namely secure storage devices and secure storage mechanisms.

6.1. Secure key storage devices

The primary threats to the keys caused by inadequate security in the key storage area are that the user's keys may be revealed, obtained and/or misused by an illegitimate person. This could be an intruder who has managed to gain access to the user's key storage area or key storage device, or a system administrator who has access to the files stored on the system, or to programs stored on the system.

It is certainly inadequate to store the keys in the clear (unencrypted) on a device that is accessible or usable by anyone other than the legitimate user. For example, storing the keys on a shared, network hard drive means that the system administrator will most probably have access to the keys. If the hard drive is accessed via the network, any person monitoring the network will potentially be able to intercept the keys as they are being transmitted from the hard drive, over the network, onto the user's workstation.

Even if the keys are stored in encrypted form, it is clear that the keys must be temporarily decrypted in order to be used. If this decrypted key were to be made available, albeit temporarily, to an insecure or untrusted workstation, this key could be obtained by an attacker or by a system administrator with appropriate access to the workstation. For a particular application, it must be decided if there is a need to protect the key at all times, ensuring that it is never revealed to an untrusted source.
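Storing a key in encrypted form typically means wrapping it under a key derived from something only the user knows. The sketch below derives a wrapping key from a passphrase with PBKDF2 and wraps the stored key by XOR; the XOR step is a stand-in for a real key-wrap construction (such as AES key wrap), and the iteration count is illustrative rather than a recommendation:

```python
import hashlib
import secrets

def derive_wrapping_key(passphrase: str, salt: bytes) -> bytes:
    """Derive a 32-byte key-wrapping key from a user passphrase."""
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)

salt = secrets.token_bytes(16)
wrap = derive_wrapping_key("a passphrase the user remembers", salt)

# Wrap the stored key by XOR with the derived key (a stand-in for
# a real key-wrap algorithm):
stored_key = secrets.token_bytes(32)
wrapped = bytes(k ^ w for k, w in zip(stored_key, wrap))

# The key exists in the clear only transiently, at the moment of use:
unwrapped = bytes(c ^ w for c, w in zip(wrapped, wrap))
assert unwrapped == stored_key
print("key unwrapped for use")
```

Note that this only relocates the exposure described in the text: `unwrapped` still passes through the workstation's memory, which is exactly the residual risk that motivates the smart card approach below.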

A common method of protecting keys is by the use of a user specific secure computing and storage device, such as a smart card or a processor based token. In this case, the cryptographic key to be protected is stored on the smart card, and the security of the smart card is established in such a way that no one can read or retrieve the key from the smart card. Whenever the key is needed (to perform a cryptographic operation), the data to be encrypted or decrypted is sent to the smart card and the cryptographic work is performed by the smart card.

This technique allows the user to securely carry their cryptographic key with them from one location to another, and to store this key in a manner that is reasonably secure. Placing the execution of the cryptographic algorithms on the smart card also means that the cryptographic key is never revealed outside the smart card. As long as trust can be maintained in the smart card, then the key storage problem can be addressed using this technique.

Of course, there are several disadvantages to this technique. A smart card reader is required to interface between the user’s smart card and the computing workstation. Every user must be issued with a smart card containing appropriate cryptographic keys (key distribution then becomes an issue). Users are required to possess the smart card (which, depending on the application, may actually be an advantage). The smart cards must thus be managed in a secure manner, imposing additional administrative overheads.

Despite these problems, smart cards seem to offer a practical method to store cryptographic keys in an insecure environment such as the Internet. The cost of smart cards has greatly decreased over the past few years. It is anticipated that in the future, most computers will be equipped with smart card readers, and most operating systems will have functions to access smart cards and smart card readers.

In any system that incorporates the use of smart cards, the security policy must clearly define the responsibilities of each entity as to the use and management of each smart card. Ultimately, the user must bear some responsibility, in this case to properly manage their own smart card.

6.2. Secure key storage mechanisms

In addition to selecting a particular device for key storage, appropriate mechanisms are needed to support the storage of these keys. The application of cryptography to certain communications systems may require complex key storage mechanisms. For example, key recovery mechanisms may be required for legal wire-tapping or for audit trails in secure electronic commerce.

In the case where public key ciphers are used in the security architecture, a PKI will be required. As described in Section 5.1, a PKI will need mechanisms to support certificates. A certificate authority structure may greatly increase the complexity and cost of the security architecture. For example, it may cost several hundred dollars to obtain a certificate from a private issuer in the USA.

7. Applications

There are a multitude of applications that run in a non-trusted distributed environment, such as the WWW, that require secure key management. In fact, any application that relies on being able to securely identify an entity, or being able to ensure data confidentiality, integrity or authenticity, requires secure key management.

Examples of applications that operate in a non-trusted distributed environment and require secure key management include the following:
• Internet banking
• Home shopping
• Credit card purchasing over the Internet
• Share market transactions
• Electronic mail

A number of security architectures have been applied to each of the examples above. We now give a brief description of some of these architectures, highlighting the key management techniques used in each system.

7.1. Secure sockets layer

The Secure Sockets Layer (SSL) [6] is a protocol which was originally proposed by Netscape to provide security services for the WWW. SSL is now supported by most WWW browsers and servers and has also been incorporated into a number of other networking applications – for example, TELNET and FTP. SSL provides for mutual authentication using certificates, although in most applications only the server is authenticated. SSL provides no solutions to the key generation problem; instead, the application is responsible for generating random data. In fact, an early version of the Netscape Navigator used a predictable seed for its random number generator, leading to a much-publicised attack [7]. The SSL specification suggests two alternatives for key distribution – RSA and Diffie–Hellman. As discussed above, the use of RSA requires one party to generate a session key, whereas Diffie–Hellman requires input from both parties. SSL does not specify any key storage requirements, leaving that task to the implementor of the application.
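The Diffie–Hellman option can be illustrated with a toy exchange in which both parties contribute randomness to the shared session key. The modulus and generator below are illustrative choices only; a real deployment would use a vetted, standardised group via a cryptographic library.

```python
import secrets

p = 2**255 - 19   # a prime modulus (toy choice, not a standardised group)
g = 2             # generator (toy choice)

a = secrets.randbelow(p - 2) + 1   # client's ephemeral secret
b = secrets.randbelow(p - 2) + 1   # server's ephemeral secret

A = pow(g, a, p)  # sent client -> server
B = pow(g, b, p)  # sent server -> client

# Each side combines its own secret with the other's public value.
client_key = pow(B, a, p)
server_key = pow(A, b, p)
assert client_key == server_key    # both arrive at the same session key
```

The contrast with the RSA option is visible here: neither party alone determines the session key, so a weak random number generator on one side does not fully compromise the result.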

7.2. Secure shell (SSH)

SSH [15,16] is an application used mostly as a secure replacement for TELNET and RSH (remote shell) for connecting to Unix computers over the network. SSH can also be used to securely transfer files between Unix computers and also between client hosts and Unix servers. SSH prevents passwords from being exposed on the network in the clear. As well as allowing the traditional username/password authentication on Unix hosts, SSH also allows users to identify themselves using public key cryptography. SSH provides a tool for users to generate their own public/private key pairs. Depending on the platform of the SSH client (it could, for example, be an MS Windows client or a Unix client), different techniques are used for seeding the random number generator. The F-Secure SSH client for MS Windows initialises its seed using mouse movement. The Unix client gathers environmental information from the operating system which it combines to form a seed. In both cases the seed is stored in a file locally and updated each time keys are generated. In the SSH protocol, the client generates the session key and sends it to the server encrypted under the server’s public key. Users’ RSA private keys are stored on the client computer’s file system encrypted using a password provided by the user.
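The at-rest protection of the private key can be sketched as deriving a key-encryption key from the user’s password. PBKDF2 is used here as a modern stand-in; the salt handling and iteration count are illustrative assumptions, not what any particular SSH client actually does.

```python
import hashlib
import os

salt = os.urandom(16)   # stored in the clear alongside the encrypted key file
kek = hashlib.pbkdf2_hmac("sha256", b"user passphrase", salt, 200_000)
assert len(kek) == 32
# `kek` would then key a symmetric cipher over the on-disk private key;
# without the password, the stored file is useless to an attacker.
```

The design choice worth noting is that the password never appears on disk: only the salt and the ciphertext are stored, so compromise of the file system alone does not reveal the private key.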

7.3. Kerberos

Kerberos [9] is a security architecture developed by the Massachusetts Institute of Technology (MIT) in the mid 1980s. Kerberos is a purely symmetric key based architecture originally developed for a university environment with the aim of providing tighter authentication mechanisms than the traditional password based systems, as well as privacy services. Kerberos allows single sign-on to the network, relieving the user of the need to repeatedly enter username/password information for each new application.

Kerberos makes use of an on-line Authentication Server (AS) and Ticket Granting Server (TGS) in its architecture. The AS authenticates users and provides them with a Ticket Granting Ticket (TGT). This is the sign-on phase in the Kerberos protocol. The TGT contains information about the user and their privileges. When the user wishes to contact a server (using client software), they pass their TGT to the TGS which issues them with a ticket. The ticket contains a session key which can be used for communicating with the server application. Thus, in the Kerberos environment all session keys are generated by the TGS.
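The flow above can be sketched as the TGS minting a fresh session key and sealing one copy for each party: one under the client’s long-term key and one, inside the ticket, under the server’s. The XOR keystream below is a toy construction used only to show the structure; Kerberos itself uses real symmetric ciphers.

```python
import hashlib
import hmac
import os

def toy_seal(key: bytes, data: bytes) -> bytes:
    # Illustrative stream cipher: XOR with an HMAC-derived keystream.
    # Sealing and unsealing are the same XOR operation.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

client_key = os.urandom(32)   # long-term key shared between client and TGS
server_key = os.urandom(32)   # long-term key shared between server and TGS

session_key = os.urandom(32)                # generated by the TGS
for_client = toy_seal(client_key, session_key)
ticket = toy_seal(server_key, session_key)  # relayed to the server via the client

# Each party unseals its copy with its own long-term key.
assert toy_seal(client_key, for_client) == session_key
assert toy_seal(server_key, ticket) == session_key
```

Because the client cannot open the ticket, it can carry it to the server without learning or altering the server’s copy of the session key.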

As with all purely symmetric key based systems, there is the requirement for an initial shared secret. In the case of Kerberos the user shares a password with the AS. This password is used as a symmetric key when the user first contacts the AS. It is also used by the AS in its response to the user.

Kerberos does not define any specific techniques for key generation. Key storage is performed by the AS, which must store each of the users’ passwords for the purpose of authenticating the users and encrypting communications sent from the AS to the user. If the AS is compromised then it is possible to masquerade as any user. The TGS also stores keys in its ticket cache. If the TGS is compromised then all future session keys can be obtained, giving the perpetrator the ability to listen in to, and modify, any future sessions between users and servers.

7.4. Sesame

Similar to Kerberos, Sesame [14] is a distributed security architecture which improves on Kerberos by adding (among other things) public key cryptography. Sesame was developed in a joint research project between Bull, ICL and Siemens – all European based companies. Sesame supports all the symmetric key protocols that Kerberos implements, but has also been extended to incorporate the use of certificates.

Sesame uses the same ‘single sign-on’ concept as Kerberos through the use of the AS. In the Sesame architecture, the AS returns to the user a Privilege Attribute Certificate (PAC) which defines the user’s privileges. The PAC must be submitted to the application server on connecting. An additional feature of Sesame is that it makes use of role-based access control (RBAC) to restrict users’ access based on their role in an organisation.

In Sesame, since the user has a public/private key pair, it is possible for the user and the AS to communicate without the need for a prior shared secret (provided that the user trusts the public key of the AS and vice versa). Also, the client can now generate the session key and send it to the application server encrypted under the server’s public key. Thus, key generation is now the responsibility of the client, or it can be a shared responsibility with the server using a Diffie–Hellman-like protocol. In any case, the need for a ticket granting server is no longer present. For Sesame, key storage is required by each user and each server in order to store their private key. Sesame supports the use of secure cryptographic tokens (such as smart cards) for this purpose.

7.5. Pretty Good Privacy (PGP)

PGP [13] is perhaps the most widely used application for securing electronic mail. PGP utilises its own public key mechanism. Rather than using a hierarchical model, PGP uses a peer-based model where users’ keys are signed by their peers, creating a ‘web of trust’. In this model, for example, if A trusts B, and B trusts C, then A should trust C. There is no central method for distributing the master public keys of each user; users are responsible for the distribution process of master public keys. This is usually done through a different channel: by electronic mail, through a web page, or physically transferred on electronic media such as floppy disks. These master public keys are used to encrypt message keys, and these message keys are then used to encrypt each individual message.
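This hybrid model – a fresh message key encrypted under the recipient’s master public key – can be sketched with textbook RSA. The tiny primes and the absence of padding are purely for illustration; real PGP uses full-size keys and proper message encoding.

```python
import secrets

# Recipient's toy RSA key pair (primes far too small for real use).
p, q = 999983, 1000003
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

# Sender: generate a fresh message key and wrap it for the recipient.
message_key = secrets.randbelow(n - 2) + 1
wrapped = pow(message_key, e, n)     # travels alongside the ciphertext

# Recipient: unwrap the message key, then decrypt the message with it.
assert pow(wrapped, d, n) == message_key
```

Only the short message key incurs the cost of public key encryption; the bulk of the message is protected by the much faster symmetric cipher keyed with `message_key`.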

The Unix version of PGP utilises key-stroke timings (the difference in time between successive key-strokes) to generate a seed before generating a public/private key pair. A random seed file is stored for each user – initially based on the keystroke timings – which is updated each time a symmetric or asymmetric key is generated. PGP secret keys are stored on the host file system encrypted under a pass-phrase designated by the user.
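The seed-file scheme can be sketched as a hash-based pool that absorbs inter-keystroke intervals and is written back after use. The timing values below are simulated constants for illustration; a real implementation measures live user input.

```python
import hashlib

pool = hashlib.sha256(b"previous seed file contents")  # pool starts from the stored seed

# Simulated inter-keystroke gaps in seconds (a real client measures these).
for gap in [0.143, 0.221, 0.087, 0.310]:
    pool.update(repr(gap).encode())   # mix each timing into the pool

seed = pool.digest()   # feeds the key generator
assert len(seed) == 32
# The updated pool would now be written back to the per-user seed file.
```

Persisting the pool means entropy accumulated in earlier sessions is not lost, which matters on hosts where fresh environmental randomness is scarce.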

8. Conclusions

In this paper we have outlined the most important aspects of key management and its requirements for use in Internet applications such as the WWW. The key areas of key generation, key distribution and key storage have each been described in detail, and a number of examples of applications using these technologies have been given.

The lack of a global PKI has impeded the widespread use of cryptographic protocols on the Internet. A number of solutions are currently in use, but these systems are not gaining wide acceptance for this very reason. A number of hardware devices are available to assist with secure key generation and key storage, but the widespread use of such devices will take time.

As technology improves, and the problems associated with the development of a global PKI are gradually solved, a greater amount of trust in the use of cryptography to secure applications such as those outlined in this paper will be achieved.

References

[1] D. Davis, Internet Newsgroup Posting to sci.crypt (Re: Need Advice: RNG), October 1996.

[2] E. Dawson, H. Gustafson, Statistical methods for testing the strength of key generators, in: Proc. PRAGOCRYPT ‘96, Prague, October 1996, pp. 452–466.

[3] E. Dawson, H.M. Gustafson, A method for measuring entropyof symmetric key generators, Computers and Security 17 (2)(1998) 177–184.

[4] W. Diffie, M.E. Hellman, New directions in cryptography, IEEE Trans. Information Theory 22 (1976) 644–654.

[5] C. Ellison, B. Kaliski, Cryptographic random numbers, Draft v1.0, Appendix E of P1363, November 1995. http://www.clark.net/pub/cme/P1363/ranno.html

[6] A.O. Freier, P. Karlton, P.C. Kocher, The SSL Protocol Version 3.0, Transport Layer Security Working Group, Internet Draft, 18 November 1996.

[7] I. Goldberg, D. Wagner, Randomness and the Netscape Browser: How secure is the World Wide Web?, Dr Dobb’s Journal (1996).

[8] IETF, The Internet Engineering Task Force. http://www.ietf.org/

[9] Kerberos, The Network Authentication Protocol, 29 July 1998. http://web.mit.edu/kerberos/www/

[10] A.J. Menezes, P.C. van Oorschot, S.A. Vanstone, Handbook of Applied Cryptography, CRC Press LLC, Florida, 1997, pp. 489–586.

[11] R. Needham, M. Schroeder, Using encryption for authentication in large networks of computers, Commun. ACM 21 (1978) 993–999.

[12] The Open Group. http://www.opengroup.org/

[13] PGP, The International PGP Home Page. http://www.pgpi.com/

[14] Sesame, A Secure European System for Applications in a Multi-vendor Environment, 26 January 1999. http://www.cosic.esat.kuleuven.ac.be/sesame/

[15] SSH, IETF Secure Shell Working Group, Secure Shell (secsh) Charter, 31 July 1998. http://www.ietf.org/html.charters/secsh-charter.html

[16] T. Ylonen, The SSH (Secure Shell) Remote Login Protocol, Network Working Group, Internet Draft, 15 November 1995.

Ed Dawson, Associate Professor, has been a member of the academic staff at Queensland University of Technology (QUT) since 1974. He has a PhD from QUT’s Information Security Research Centre (ISRC) in the area of cryptology.

Currently Professor Dawson is Director of the ISRC within the Faculty of Information Technology, and leader of the cryptology group within the ISRC. He has extensive research experience in all areas of cryptology, especially the design and analysis of encryption algorithms. Over the last five years Professor Dawson has published over 60 research papers on various aspects of cryptology. He has been involved in numerous industrial projects in the area of information security, especially related to the applications of cryptology to secure electronic commerce.

Professor Dawson is a Fellow of the International Combinatorial Association, a Member of the International Association for Cryptologic Research, a Member of the Australian Computer Society, and a Member of the IEEE.


Dr. Andrew Clark is employed as a security consultant and programmer within the Information Security Research Centre at Queensland University of Technology (QUT). He has been employed at QUT since 1992.

His research interests lie in the fields of cryptology and network security. Dr. Clark completed his PhD in early 1998 in the field of optimisation algorithms and their application to the field of cryptology.

Dr. Clark has participated in a number of projects which involved the implementation of cryptologic software – for both cryptanalytic purposes and cryptographic purposes. He has also been involved in numerous consulting projects relating to network security in the government, financial and industrial sectors.

Dr. Mark Looi is a Senior Lecturer with the School of Data Communications at the Queensland University of Technology. He has held this position since 1997.

His research interests are in the areas of smart cards, network security and security policy. Dr. Looi’s PhD was in the area of smart cards in a networked environment, and was completed in 1995. He has been working with the Information Security Research Centre since 1991.

Dr. Looi has been involved in consultancies on a wide variety of projects for many organisations, including federal and state governments, financial organisations and gaming authorities, both within Australia and internationally.