
Countermeasures for mobile agent security

W.A. Jansen

National Institute of Standards and Technology, Gaithersburg, MD 20899, USA

E-mail address: [email protected] (W.A. Jansen)

Abstract

Security is an important issue for the widespread deployment of applications based on software agent technology. It is generally agreed that without the proper countermeasures in place, use of agent-based applications will be severely impeded. However, not all applications require the same set of countermeasures, nor can they depend entirely on the agent system to provide them. Instead, countermeasures are applied commensurate with the anticipated threat profile and intended security objectives for the application. While countermeasures typically include any action, device, procedure, technique, or other measure that reduces the vulnerability of or threat to a system, our focus here is specifically on technical mechanisms, as opposed to procedural or non-technical measures. Such countermeasures can be integrated directly into an agent system, or incorporated into the design of an agent to supplement the capabilities of an underlying agent system. This paper gives an overview of the threats associated with software agent systems and their technical countermeasures, including the strengths and weaknesses of the techniques involved. Emphasis is on mobile software agents, since they typically face a more severe set of threats than do static agents and, therefore, demand more rigorous countermeasures. © 2000 Elsevier Science B.V. All rights reserved.

Keywords: Mobile agents; Computer security; Countermeasures

1. Introduction

Over the years, computer systems have evolved from centralized monolithic computing devices supporting static applications, into networked environments that allow complex forms of distributed computing. A new phase of evolution is now under way based on software agents. A software agent is loosely defined as a program that can exercise an individual's or organization's authority, work autonomously towards a goal, and meet and interact with other agents. Possible interactions among agents include such things as contract and service negotiation, auctioning, and bartering.

Software agents may be either stationary or mobile. Stationary agents remain resident at a single platform, while mobile agents are capable of suspending activity on one platform and moving to another, where they resume execution. The concept of mobile code is not new, dating back to the 1960s when remote job entry systems were used to submit programs to a central computer. More recently, code mobility has been popularized through the use of web browsers to download Java¹ applets from web servers. Mobile agents go one step further, allowing the complete mobility of software among supporting platforms to form large-scale, loosely-coupled distributed systems. The reader is referred to [34] for a more extensive introduction to the subject of software agents.

¹ Certain computer manufacturers' products and standards are discussed in this paper. The discussion is not intended to imply recommendation or endorsement by the National Institute of Standards and Technology, nor is it intended to imply that the products and standards identified are necessarily the best available.

As the sophistication of mobile software increases, so too do the associated security threats and vulnerabilities. The focus of this paper is on the security issues that arise when agents are mobile, and the possible countermeasures for dealing with those issues. Various degrees of mobility exist, corresponding to the possible variations of relocating code and state information, including the values of instance variables, the program counter, execution stack, etc. [7]. For example, a simple agent written as a Java applet [9] has mobility of code through the movement of class files from a web server to a browser. However, no associated state information is conveyed. In contrast, Aglets [18], developed at IBM Japan, builds upon Java to allow the values of instance variables, but not the program counter or execution stack, to be conveyed along with the code as the agent relocates. For a stronger form of mobility, Sumatra [1], developed at the University of Maryland, allows Java threads to be conveyed along with the agent's code during relocation. The mobile agent systems under discussion here involve the relocation of both code and state information.

2. Security threats

Threats to the security of mobile agents generally fall into four comprehensive classes: disclosure of information; denial of service; corruption of information; and interference or nuisance. We use the components of an agent system to further delineate threats by identifying the possible source and target of an attack with respect to elements within that paradigm. It is important to note that many of the threats that are discussed have counterparts in classical client–server systems and have always existed in some form in the past (e.g. executing any code from an unknown source either downloaded from a network or supplied on floppy disk). Mobile agents simply offer a greater opportunity for abuse and misuse, broadening the scale of threats significantly. New threats arising from the mobile agent paradigm are due to the fact that, contrary to the usual situation in computer security where the owner of the application and the operator of the computer system are the same, the agent's owner and the system's operator are different.

A number of models exist for describing agent systems [2,19,34]; however, for discussing security issues it is sufficient to use a very simple one, consisting of only two main components: the agent and the agent platform. An agent comprises the code and state information needed to carry out some computation. Multiple agents cooperate with one another to carry out some application. Mobility allows an agent to move or hop among agent platforms. The agent platform provides the computational environment in which an agent operates. The platform where an agent originates is referred to as the home platform, and normally is the most trusted environment for an agent. One or more hosts may comprise an agent platform, and an agent platform may support multiple locations or meeting places where agents can interact. Since some of these aspects do not affect the discussion of security issues, they are omitted from the agent system model illustrated in Fig. 1, which depicts the movement of an agent among several agent platforms.

Fig. 1. Agent system model.

Four threat categories are identified: threats stemming from an agent attacking an agent platform; an agent platform attacking an agent; an agent attacking another agent on the agent platform; and other entities attacking the agent system. The cases of an agent attacking an agent on another agent platform and of an agent platform attacking another platform are covered within the last category, since these attacks are focused primarily on the communications capability of the platform to exploit potential vulnerabilities. The last category also includes more conventional attacks against the underlying operating system of the agent platform.

2.1. Agent against agent platform

The mobile agent paradigm requires an agent platform to accept and execute code developed elsewhere. An incoming agent has two main lines of attack. The first is to gain unauthorized access to information residing at the agent platform; the second is to use its authorized access in an unexpected and disruptive fashion.

Unauthorized access may occur simply through a lack of adequate access control mechanisms at the platform or weak identification and authentication, which allows an agent to masquerade as one trusted by the platform. Once access is gained, information residing at the platform can be disclosed or altered. Besides confidential data, this information could include the instruction codes of the platform. Depending on the level of access, the agent may be able to completely shut down or terminate the agent platform. Even without gaining unauthorized access to resources, an agent can deny platform services to other agents by exhausting computational resources, if resource constraints are not established or not set tightly. Otherwise, the agent can merely interfere with the platform by issuing meaningless service requests wherever possible.

2.2. Agent platform against agent

A receiving agent platform can easily isolate and capture an agent and may attack it by extracting information, corrupting or modifying its code or state, denying requested services, or simply reinitializing or terminating it completely. Extracting electronic cash directly from the agent is one simple example. An agent is very susceptible to the agent platform and may be corrupted merely by the platform responding falsely to requests for information or service, altering external communications, or delaying the agent until its task is no longer relevant. Extreme measures include the complete analysis and reverse engineering of the agent's design so that subtle changes can be introduced. Modification of the agent by the platform is a particularly insidious form of attack, since it can radically change the agent's behavior (e.g. turning a trusted agent into a malicious one) or the accuracy of the computation (e.g. changing collected information to yield incorrect results).

2.3. Agent against other agents

An agent can target another agent using several general approaches. These include actions to falsify transactions, eavesdrop upon conversations, or interfere with an agent's activity. For example, an attacking agent can respond incorrectly to direct requests it receives from a target or deny that a legitimate transaction occurred. An agent can gain information by serving as an intermediary to the target agent (e.g. through masquerade) or by using platform services to eavesdrop on intra-platform messages. If the agent platform has weak or no control mechanisms in place, an agent can access and modify another agent's data or code, or interfere with the agent by invoking its public methods (e.g. attempt buffer overflow, reset to initial state, etc.). Even with reasonable control mechanisms in place, an agent can attempt to send messages repeatedly to other agents in an attempt to deny them the ability to communicate.

2.4. Other entities against agent system

Even assuming the locally active agents and the agent platform are well behaved, other entities both outside and inside the agent framework may attempt actions to disrupt, harm, or subvert the agent system. The obvious methods involve attacking the inter-agent and inter-platform communications through masquerade (e.g. through forgery or replay) or interception. For example, at a level of protocol below the agent-to-agent or platform-to-platform protocol, an entity may eavesdrop on messages in transit to and from a target agent or agent platform to gain information. An attacking entity may also intercept agents or messages in transit and modify their contents, substitute other contents, or simply replay the transmission dialogue at a later time in an attempt to disrupt the synchronization or integrity of the agent framework. Denial of service attacks through available network interfaces are another possibility.

3. Countermeasures

Countermeasures refer to any action, device, procedure, technique, or other measure that reduces the vulnerability of or threat to a system. Many conventional security techniques used in contemporary distributed applications (e.g. client–server) also have utility as countermeasures within the mobile agent paradigm.

Moreover, there are a number of extensions to conventional techniques, as well as techniques devised specifically for controlling mobile code and executable content (e.g. Java applets), that are applicable to mobile agent security. We review these countermeasures by considering those techniques that can be used to protect agent platforms separately from those used to protect the agents that run on them.

Most agent systems rely on a common set of baseline assumptions regarding security. The first is that an agent trusts the home platform where it is instantiated and begins execution. The second is that the home platform and other equally trusted platforms are implemented securely, with no flaws or trapdoors that can be exploited, and behave non-maliciously. The third is that public key cryptography, primarily in the form of digital signatures, is utilized through certificates and revocation lists managed through a public key infrastructure.

4. Protecting the agent platform

Without adequate defenses, an agent platform is vulnerable to attack from many sources, including malicious agents, malicious platforms, and other malicious entities. Fortunately, most conventional protection techniques, traditionally employed in trusted systems and communication security, can be used to provide analogous protection mechanisms for the agent platform. This is due in large part to the traditional role hardware plays as the foundation upon which software protection mechanisms are built. That is, within the mobile agent paradigm, the agent platform is a counterpart to a trusted host within the traditional framework.

One of the main concerns with an agent system implementation is ensuring that agents are not able to interfere with one another or with the underlying agent platform. One common approach for accomplishing this is to establish separate isolated domains for each agent and the platform, and control all inter-domain access. In traditional terms this concept, as it applies to a trusted computing base, is referred to as a reference monitor [31]. An implementation of a reference monitor has the following characteristics:

• it is always invoked and non-bypassable, mediating all accesses;
• it is tamper-proof;
• it is small enough to be analyzed and tested.

Implementations of the reference monitor concept have been around since the early 1980s and employ a number of conventional security techniques, which are applicable to the agent environment. Such conventional techniques include the following:

• mechanisms to isolate processes from one another and from the control process;
• mechanisms to control access to computational resources;
• cryptographic methods to encipher information exchanges;
• cryptographic methods to identify and authenticate users, agents, and platforms;
• mechanisms to audit security relevant events occurring at the agent platform.
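
As a rough illustration of the reference monitor concept applied to an agent platform, the following sketch mediates every inter-domain access against a per-domain policy. The domain and operation names are hypothetical, and a real platform would route all resource accesses through such a check rather than a single method.

import java.util.*;

// Toy reference monitor: every inter-domain access is mediated here.
final class ReferenceMonitor {
    // Access policy: domain -> set of permitted resource operations.
    private final Map<String, Set<String>> policy = new HashMap<>();

    void grant(String domain, String operation) {
        policy.computeIfAbsent(domain, d -> new HashSet<>()).add(operation);
    }

    // Non-bypassable check, called before any resource operation is performed.
    void check(String domain, String operation) {
        Set<String> allowed = policy.get(domain);
        if (allowed == null || !allowed.contains(operation)) {
            throw new SecurityException(domain + " may not perform " + operation);
        }
    }

    public static void main(String[] args) {
        ReferenceMonitor rm = new ReferenceMonitor();
        rm.grant("agent-42", "read:catalog");          // hypothetical agent domain
        rm.check("agent-42", "read:catalog");          // permitted
        try {
            rm.check("agent-42", "write:config");      // not granted: rejected
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}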

More recently developed techniques aimed at mobile code and mobile agent security have for the most part evolved along these traditional lines. Techniques devised for protecting the agent platform are described in the following sections.


4.1. Software-based fault isolation

Software-Based Fault Isolation [33], as its name implies, is a method of isolating application modules into distinct fault domains enforced by software. The technique allows untrusted programs written in an unsafe language, such as C, to be executed safely within the single virtual address space of an application. Untrusted machine-interpretable code modules are transformed so that all memory accesses are confined to code and data segments within their fault domain. Access to system resources can also be controlled through a unique identifier associated with each domain. The technique, commonly referred to as sandboxing, is highly efficient compared to using hardware page tables to maintain separate address spaces for modules, when the modules communicate frequently among fault domains. It is also ideally suited for situations where most of the code falls into one domain that is trusted, since modules in trusted domains incur no execution overhead.
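
The following toy sketch, written against a simulated flat memory array rather than transformed machine code, illustrates the address-masking idea behind the technique: every memory reference issued by an untrusted module is forced into that module's segment before use. The class and field names are invented for illustration.

// Simulated fault isolation by address masking over a flat memory array.
final class FaultDomain {
    private final int[] memory;      // shared address space
    private final int segmentBase;   // start of this domain's segment
    private final int segmentMask;   // segment size minus one (power of two)

    FaultDomain(int[] memory, int segmentBase, int segmentSize) {
        this.memory = memory;
        this.segmentBase = segmentBase;
        this.segmentMask = segmentSize - 1; // segmentSize must be a power of two
    }

    // Every load/store goes through sandbox(): the effective address always
    // falls inside [segmentBase, segmentBase + segmentSize).
    private int sandbox(int address) {
        return segmentBase + (address & segmentMask);
    }

    int load(int address)              { return memory[sandbox(address)]; }
    void store(int address, int value) { memory[sandbox(address)] = value; }

    public static void main(String[] args) {
        int[] memory = new int[4096];
        FaultDomain untrusted = new FaultDomain(memory, 1024, 256);
        untrusted.store(5000, 7);                  // out-of-range address is confined
        System.out.println(untrusted.load(5000));  // prints 7, read from the masked slot
    }
}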

4.2. Safe code interpretation

Agent systems are often developed using an interpreted script or programming language. The main motivation for doing this is to support agent platforms on heterogeneous computer systems. Moreover, the higher conceptual level of abstraction provided by an interpretative environment can facilitate the development of the agent's code [24]. The idea behind Safe Code Interpretation is that commands considered harmful can be either made safe for, or denied to, an agent. For example, a good candidate for denial would be the command to execute an arbitrary string of data as a program segment.

One of the most widely used interpretative languages today is Java. The Java programming language and runtime environment [8,9] enforces security primarily through strong type safety. Java follows a so-called sandbox security model, used to isolate memory and method access, and maintain mutually exclusive execution domains. Security is enforced through a variety of mechanisms. Static type checking in the form of byte code verification is used to check the safety of downloaded code. Some dynamic checking is also performed during runtime. A distinct name space is maintained for the untrusted downloaded code, and linking of references between modules in different name spaces is restricted to public methods. A security manager mediates all accesses to system resources, serving in effect as a reference monitor. In addition, Java inherently supports code mobility, dynamic code downloading, digitally signed code, remote method invocation, object serialization, platform heterogeneity, and other features that make it an ideal foundation for agent development. There are many agent systems based on Java, including Aglets [16,18], Mole [29], Ajanta [17], and Voyager [21]. However, limitations of Java to account for memory, CPU, and network resources consumed by individual threads [5] and to support thread mobility [7] have been noted.

Probably the best known of the safe interpreters for script-based languages is Safe Tcl [23], which was used in the early development of the Agent Tcl [10] system. Safe Tcl employs a padded cell concept, whereby a second "safe" interpreter pre-screens any harmful commands from being executed by the main Tcl interpreter. The term padded cell refers to this isolation and access control technique, which provides the foundation for implementing the reference monitor concept. More than one safe interpreter can be used to implement different security policies, if needed. Constructing policy-based interpreters requires skill to avoid overly restrictive or unduly protected computing environments.

In general, current script-based languages do not incorporate an explicit security model in their design, and rely mainly on decisions taken during implementation. One recent exception is the implementation of secure JavaScript in Mozilla [3]. The implementation uses a padded cell to control access to resources and external interfaces, prevents residual information from being retained and accessible among different contexts operating simultaneously or sequentially, and allows policy, which partitions the name space for access control purposes, to be specified independently of mechanism.
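
The padded cell idea can be approximated by a pre-screening interpreter that only passes through commands on an explicit whitelist. The command names below are invented for illustration and do not correspond to Safe Tcl's actual command set; a minimal sketch follows.

import java.util.*;
import java.util.function.Consumer;

// Toy "padded cell": a screening interpreter that exposes only safe commands.
final class SafeInterpreter {
    private final Map<String, Consumer<String>> safeCommands = new HashMap<>();

    SafeInterpreter() {
        // Hypothetical safe commands; dangerous ones (e.g. "exec") are simply absent.
        safeCommands.put("print", arg -> System.out.println(arg));
        safeCommands.put("sleep", arg -> { /* a bounded delay could go here */ });
    }

    void eval(String command, String argument) {
        Consumer<String> impl = safeCommands.get(command);
        if (impl == null) {
            throw new SecurityException("command not permitted: " + command);
        }
        impl.accept(argument);
    }

    public static void main(String[] args) {
        SafeInterpreter cell = new SafeInterpreter();
        cell.eval("print", "hello from a sandboxed agent");
        try {
            cell.eval("exec", "rm -rf /");   // rejected: not in the safe command set
        } catch (SecurityException e) {
            System.out.println("denied: " + e.getMessage());
        }
    }
}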

4.3. Signed code

A fundamental technique for protecting an agent system is signing code or other objects with a digital signature. A digital signature serves as a means of confirming the authenticity of an object, its origin, and its integrity. Typically the code signer is either the creator of the agent, the user of the agent, or some entity that has reviewed the agent. Because an agent operates on behalf of an end-user or organization, mobile agent systems [10,16,17,30] commonly use the signature of the user as an indication of the authority under which the agent operates.

Code signing involves public key cryptography, which relies on a pair of keys associated with an entity. One key is kept private by the entity and the other is made publicly available. Digital signatures benefit greatly from the availability of a public key infrastructure, since certificates containing the identity of an entity and its public key (i.e. a public key certificate) can be readily located and verified. Passing the agent's code through an irreversible hash function, which provides a fingerprint or unique message digest of the code, and then encrypting the result with the private key of the signer forms a digital signature. Because the message digest is unique, and thus bound to the code, the resulting signature also serves as an integrity mechanism. The agent code, signature, and public key certificate can then be forwarded to a recipient, who can easily verify the source and authenticity of the code.
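
A minimal sketch of these signing and verification steps, using the Java platform's standard security APIs, is shown below. In practice the public key would be delivered inside an X.509 certificate issued through a public key infrastructure rather than handed over directly, and the agent code shown is a placeholder string.

import java.security.*;

// Sign agent code with the originator's private key and verify it on receipt.
public class SignedCodeDemo {
    public static void main(String[] args) throws Exception {
        byte[] agentCode = "class ShoppingAgent { /* ... */ }".getBytes("UTF-8");

        // Key pair of the signer (user, author, or reviewer of the agent).
        KeyPair pair = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Sender: hash the code and encrypt the digest with the private key
        // (both steps are performed internally by SHA256withRSA).
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(pair.getPrivate());
        signer.update(agentCode);
        byte[] signature = signer.sign();

        // Recipient platform: verify origin and integrity with the public key.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(pair.getPublic());
        verifier.update(agentCode);
        System.out.println("signature valid: " + verifier.verify(signature));
    }
}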

Note that the meaning of a signature may be different depending on the policy associated with the signature scheme and the party who signs. For example, the author of the agent, either an individual or organization, may use a digital signature to indicate who produced the code, but not to guarantee that the agent performs without fault or error. In fact, author-oriented signature schemes were originally intended to serve as digital shrink wrap, whereby the original product warranty limitations stated in the license remain in effect (e.g. the manufacturer makes no warranties as to the fitness of the product for any particular purpose).

Microsoft's Authenticode, a common form of code signing, enables Java applets or ActiveX controls to be signed, assuring users that the software has not been tampered with or modified and that the identity of the author is verified. For many users, however, the signature has gone beyond establishing authenticity and become a form of trust in the software, which could ultimately have disastrous consequences.

Rather than relying solely on the reputation of a code producer, it would be prudent to have an independent review and verification of code performed by a trusted party or rating service [13]. For example, the decision to execute an agent may be taken only with the endorsement, in the form of a digital signature over the code, by a site security administrator.

4.4. State appraisal

The goal of State Appraisal [6] is to ensure that an agent has not been somehow subverted due to alterations of its state information. The success of the technique relies on the extent to which harmful alterations to an agent's state can be predicted, and countermeasures, in the form of appraisal functions, can be prepared before using the agent. Appraisal functions are used to determine what privileges to grant an agent, based both on conditional factors and on whether identified state invariants hold. An agent whose state violates an invariant can be granted no privileges, while an agent whose state fails to meet some conditional factors may be granted a restricted set of privileges.

Both the author and owner of an agent produce appraisal functions that become part of an agent's code. An owner typically applies state constraints to reduce liability and/or control costs. When the author and owner each digitally sign the agent, their respective appraisal functions are protected from undetectable modification. An agent platform uses the functions to verify the correct state of an incoming agent and to determine what privileges the agent can possess during execution. Privileges are issued by a platform based on the results of the appraisal function and the platform's security policy. It is not clear how well the theory will hold up in practice, since the state space for an agent could be quite large, and while appraisal functions for obvious attacks may be easily formulated, more subtle attacks may be significantly harder to foresee and detect. The developers of the technique, in fact, indicate it may not always be possible to distinguish normal results from deceptive alternatives.
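
A sketch of how such an appraisal function might be evaluated by a platform is shown below. The agent state fields, the invariant, and the privilege names are all invented for illustration and are not taken from the State Appraisal work itself.

import java.util.*;

// Toy state appraisal: map an incoming agent's state to a set of privileges.
final class StateAppraisal {
    // Hypothetical mutable agent state carried between platforms.
    record AgentState(int hopCount, long budgetRemaining, long budgetSpent, long initialBudget) {}

    static Set<String> appraise(AgentState s) {
        // Invariant: spent plus remaining must equal the initial budget.
        if (s.budgetSpent() + s.budgetRemaining() != s.initialBudget()) {
            return Set.of();                       // violated invariant: no privileges
        }
        // Conditional factor: long-running agents get a reduced privilege set.
        if (s.hopCount() > 10) {
            return Set.of("query-catalog");
        }
        return Set.of("query-catalog", "place-order");
    }

    public static void main(String[] args) {
        System.out.println(appraise(new AgentState(3, 40, 60, 100)));  // full privileges
        System.out.println(appraise(new AgentState(3, 40, 70, 100)));  // empty set
    }
}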

4.5. Path histories

The basic idea behind Path Histories [4,22] is to maintain an authenticatable record of the prior platforms visited by an agent, so that a newly visited platform can determine whether to process the agent and what resource constraints to apply. Computing a Path History requires each agent platform to add a signed entry to the path, indicating its identity and the identity of the next platform to be visited, and to supply the complete Path History to the next platform. To prevent tampering, the signature of the new path entry must include the previous entry in the computation of the message digest. Upon receipt, the next platform can determine whether it trusts the previous agent platforms that the agent visited, either by simply reviewing the list of identities provided or by individually authenticating the signatures of each entry in the Path History to confirm identity. While the technique does not prevent a platform from behaving maliciously, it serves as a deterrent, since the platform's signed path entry is non-repudiable. One obvious drawback is that path verification becomes more costly as the Path History increases. The technique is also dependent on the ability of a platform to judge correctly whether to trust the visited platforms identified.
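
The entry structure can be sketched as follows: each platform signs its own identity, the identity of the next platform, and the previous signed entry, so that an entry cannot later be altered or removed without invalidating the chain. The field layout is an assumption made for illustration.

import java.security.*;
import java.util.*;

// Toy path history: a chain of signed entries, one per platform visited.
final class PathHistory {
    record Entry(String platform, String nextPlatform, byte[] signature) {}

    private final List<Entry> entries = new ArrayList<>();

    // Called by the current platform before dispatching the agent onward.
    void append(String platform, String nextPlatform, PrivateKey key) throws Exception {
        Signature s = Signature.getInstance("SHA256withRSA");
        s.initSign(key);
        s.update(platform.getBytes("UTF-8"));
        s.update(nextPlatform.getBytes("UTF-8"));
        if (!entries.isEmpty()) {                        // bind to the previous entry
            s.update(entries.get(entries.size() - 1).signature());
        }
        entries.add(new Entry(platform, nextPlatform, s.sign()));
    }

    List<Entry> entries() { return entries; }

    public static void main(String[] args) throws Exception {
        KeyPair a = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        KeyPair b = KeyPairGenerator.getInstance("RSA").generateKeyPair();
        PathHistory history = new PathHistory();
        history.append("platform-A", "platform-B", a.getPrivate());
        history.append("platform-B", "platform-C", b.getPrivate());
        System.out.println("entries recorded: " + history.entries().size());
    }
}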

4.6. Proof carrying code

The approach taken by Proof Carrying Code [20] obligates the code producer (e.g. the author of an agent) to formally prove that the program possesses safety properties previously stipulated by the code consumer (e.g. the security policy of the agent platform). The aim of the technique is to prevent the execution of unsafe code. The code and proof are sent together to the code consumer, where the safety properties can be verified. A safety predicate, representing the semantics of the program, is generated directly from the native code to ensure that the companion proof does in fact correspond to the code. The proof is structured in a way that makes it straightforward to verify without using cryptographic techniques or external assistance. Once verified, the code can run without further checking. Any attempts to tamper with either the code or the safety proof result in either a verification error or, if the verification succeeds, a safe code transformation.

Initial research has demonstrated the applicability of Proof Carrying Code for fine-grained memory safety, and shown the potential for other types of safety policies, such as controlling resource use. Thus, the technique could serve as a reasonable alternative to Software-Based Fault Isolation in some applications. The theoretical underpinnings of Proof Carrying Code are based on well-established principles from logic, type theory, and formal verification. There are, however, some potentially difficult problems to solve before the approach is considered practical. They include a standard formalism for establishing security policy, automated assistance for the generation of proofs, and techniques for limiting the potentially large size of proofs that in theory can arise. In addition, the technique is tied to the hardware and operating environment of the code consumer, which may limit its applicability.

5. Protecting agents

While countermeasures directed toward platform protection are a direct evolution of traditional mechanisms employed by trusted hosts, those directed toward agent protection often depart somewhat more radically from traditional lines. This is due to the fact that traditional mechanisms were not devised to address threats stemming from attacks on the application by the execution environment, which is precisely the situation faced by an agent executing on an agent platform that it may not completely trust.

The problem stems from the inability to effectively extend the trusted environment of an agent's home platform to other agent platforms visited by the agent. For some applications, the threat profile may not demand complex protection schemes. For example, agents that do not accumulate state, or that convey their results back to the home platform after each hop, have less exposure to certain attacks. For other applications, basic protection schemes may prove adequate. For example, the JumpingBeans [14] agent system originally addressed mobile agent security issues by implementing a client–server architecture, whereby an agent always returns to a secure, trusted central host first, before moving on to any other platform. Agent systems that allow for more decentralized mobility, such as IBM Aglets, prevent the receiving platform from accepting agents from an agent platform that is not defined as a trusted peer within the receiving platform's security policy. Alternatively, the originator can restrict an agent's itinerary to only a trusted set of platforms known in advance.

While these basic schemes may have value in specific situations, they do not support the free-roaming, autonomous capabilities envisioned for many agent applications. More general-purpose techniques for protecting an agent are discussed in the following sections.

5.1. Partial result encapsulation

One approach used to detect tampering by malicious hosts is to encapsulate the results of an agent's actions at each platform visited, for subsequent verification either when the agent returns to the point of origination, or possibly at intermediate points as well. Encapsulation may be done for different purposes with different mechanisms, such as providing confidentiality using encryption, or providing integrity and accountability using digital signature. The information encapsulated depends somewhat on the goals of the agent, but typically includes responses to inquiries made or transactions issued at the platform. In general, there are three alternative ways to encapsulate partial results:

• provide the agent with a means for encapsulating the information;
• rely on the encapsulation capabilities of the agent platform;
• rely on a trusted third party to timestamp a digital fingerprint of the results.

While none of the alternatives prevents malicious behavior by the platform, they do allow certain types of tampering to be detected. One difference with an agent-controlled encapsulation is that it can be applied independently in the design of an appropriate agent application, regardless of the capabilities of the agent platform or supporting infrastructure.

Often the amount of information gathered by an agent is rather small in comparison to the size of the encryption keys involved and the resulting ciphertext. A solution called sliding encryption [36] allows small amounts of data to be encrypted while yielding results of efficient size. The scenario envisioned for the use of sliding encryption is one in which the agent, using a public key it carries, encrypts the information as it is accumulated at each platform visited. Later, when the agent returns to the point of origination, the information is decrypted using the private key maintained there. While the purpose of sliding encryption is confidentiality, an additional integrity measure could be applied as well, before encryption occurs.

Another method for an agent to encapsulate result information is to use Partial Result Authentication Codes (PRAC) [35], which are cryptographic checksums formed using secret key cryptography (i.e. message authentication codes). This technique requires the agent and its originator to maintain or incrementally generate a list of secret keys used in the PRAC computation. Once a key is applied to encapsulate the information collected, the agent destroys it before moving on to the next platform, guaranteeing forward integrity. The forward integrity property ensures that if one of the servers visited is malicious, the previous set of partial results remains valid. However, only the originator can verify the results, since no other copies of the secret key remain. As an alternative, public key cryptography and digital signatures can be used in lieu of secret key techniques. One benefit is that authentication of the results can be made a publicly verifiable process at any platform along the way, while maintaining forward integrity.
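
A minimal sketch of the secret-key variant follows: each partial result is authenticated with the current key, and the key is then replaced by a one-way hash of itself and erased, so a later platform cannot forge earlier entries. The key-update rule shown (hashing the previous key) is one simple choice, not necessarily the construction used in [35].

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;

// Toy PRAC: MAC each partial result, then roll the key forward one-way.
public class PracDemo {
    private byte[] currentKey;                      // shared with the originator

    PracDemo(byte[] initialKey) { this.currentKey = initialKey.clone(); }

    byte[] encapsulate(byte[] partialResult) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(currentKey, "HmacSHA256"));
        byte[] code = mac.doFinal(partialResult);   // partial result authentication code

        // Forward integrity: derive the next key and discard the old one.
        currentKey = MessageDigest.getInstance("SHA-256").digest(currentKey);
        return code;
    }

    public static void main(String[] args) throws Exception {
        PracDemo agent = new PracDemo("initial secret shared with originator".getBytes("UTF-8"));
        byte[] prac1 = agent.encapsulate("price quote from platform A".getBytes("UTF-8"));
        byte[] prac2 = agent.encapsulate("price quote from platform B".getBytes("UTF-8"));
        // The originator, which knows the initial key, can recompute the key
        // sequence and verify prac1 and prac2; the visited platforms cannot.
        System.out.println(prac1.length + " " + prac2.length);
    }
}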

The PRAC technique has a number of limitations. The most serious occurs when a malicious platform retains copies of the original keys or key generating functions of an agent. If the agent revisits the platform or visits another platform conspiring with it, a previous partial result entry or series of entries could be modified without the possibility of detection. Since the PRAC is oriented toward integrity and not confidentiality, the accumulated set of partial results can also be viewed by any platform visited, although this is easily resolved by applying sliding or other forms of encryption.

Rather than relying on the agent to encapsulate the information, each platform can be required to encapsulate partial results along the way [4]. The distinction is not only one of where the encapsulation mechanisms are retained, either with the agent or at a platform, but also one of responsibility and associated liabilities. While it appears rather straightforward to change from one orientation to the other, the nuances can be significant. For example, an agent that encapsulated partial results by encrypting them with the public key of its originator must be revised so that each platform it visits signs the results with its own private key. If the agent triggers a digital signature encapsulation via a platform programming interface, the implementation must ensure that enough context is included (e.g. the time and query issued) so that the agent cannot induce the platform to sign arbitrary information.

Karjoth and his associates [15] devised a platform-oriented technique for encapsulating partial results, which reformulates and improves upon the PRAC technique. The approach is to construct a chain of encapsulated results that binds each result entry to all previous entries and to the identity of the subsequent platform to be visited. Each platform digitally signs its entry using its private key, and uses a secure hash function to link results and identities within an entry. Besides forward integrity, the encapsulation technique also provides confidentiality by encrypting each piece of accumulated information with the public key of the originator of the agent. Moreover, the forward integrity is stronger than that achieved with PRAC, since a platform is unable to modify its entry in the chain, should it be revisited by the agent, or to collude with another platform to modify entries, without invalidating the chain. A variant of this technique, which uses message authentication codes in lieu of digital signatures, is also described.
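
In outline, and using notation chosen here for illustration rather than taken verbatim from [15], the chain can be written as follows, where platform $j$ holds result $r_j$, $\mathrm{Sig}_j$ denotes a signature with platform $j$'s private key, $E_O$ denotes encryption under the originator's public key, $H$ is a secure hash function, $i_{j+1}$ is the identity of the next platform, and $r_0$ is a nonce supplied by the originator:

\begin{align*}
  h_1     &= H(r_0,\, i_1), \\
  O_j     &= \mathrm{Sig}_j\bigl(E_O(r_j),\, h_j\bigr), \\
  h_{j+1} &= H\bigl(O_j,\, i_{j+1}\bigr).
\end{align*}

Each entry depends, through $h_j$, on all earlier entries, and the identity of each successor platform is fixed by the hash carried into the next entry, so a revisited or colluding platform cannot alter its earlier entry without invalidating the remainder of the chain.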

Yee, who proposed the PRAC technique, noted that forward integrity could also be achieved using a trusted third party that performs digital timestamping. A digital timestamp [11] allows one to verify that the contents of a file or document existed as such at a particular point in time. Yee raises the concern that the granularity of the timestamps may limit an agent's maximum rate of travel, since it must reside at one platform until the next time period. Another possible concern is the general availability of a trusted timestamping infrastructure.

5.2. Mutual itinerary recording

One interesting variation of Path Histories is a general scheme for allowing an agent's itinerary to be recorded and tracked by another cooperating agent and vice versa [26], in a mutually supportive arrangement. When moving between agent platforms, an agent conveys the last platform, current platform, and next platform information to the cooperating peer through an authenticated channel. The peer maintains a record of the itinerary and takes appropriate action when inconsistencies are noted. Attention is paid so that an agent avoids platforms already visited by its peer. The rationale behind this scheme is founded on the assumption that only a few agent platforms are malicious, and even if an agent encounters one, the platform is not likely to collaborate with another malicious platform being visited by the peer. Therefore, by dividing up the operations of the application between two agents, certain malicious behavior of an agent platform can be detected.

The scheme can be generalized to more than two cooperating agents. For some applications it is also possible for one of the agents to remain static at the home platform. Because the path records are maintained at the agent level, this technique can be incorporated into any appropriate application. Some drawbacks mentioned include the cost of setting up the authenticated channel and the inability of the peer to determine which of the two platforms is responsible if the agent is killed.
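
A sketch of the peer's bookkeeping is given below; the consistency rule shown (each reported move must start from the platform the peer last recorded) is a simplified stand-in for the checks described in [26], and the platform names are invented.

import java.util.*;

// Toy itinerary tracker kept by a cooperating peer agent.
final class ItineraryTracker {
    private final List<String> recorded = new ArrayList<>();

    // Called over an authenticated channel whenever the tracked agent moves.
    void reportMove(String lastPlatform, String currentPlatform, String nextPlatform) {
        if (!recorded.isEmpty() && !recorded.get(recorded.size() - 1).equals(lastPlatform)) {
            throw new IllegalStateException("itinerary inconsistency reported at " + currentPlatform);
        }
        if (recorded.isEmpty()) {
            recorded.add(lastPlatform);
        }
        recorded.add(currentPlatform);
        // nextPlatform is checked implicitly: the next report must name currentPlatform as "last".
    }

    public static void main(String[] args) {
        ItineraryTracker peer = new ItineraryTracker();
        peer.reportMove("home", "platform-A", "platform-B");
        peer.reportMove("platform-A", "platform-B", "platform-C");
        try {
            peer.reportMove("platform-X", "platform-D", "platform-E"); // inconsistent move
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}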

5.3. Itinerary recording with replication and voting

A faulty agent platform can behave similarly to a malicious one. Therefore, applying fault tolerant capabilities to this environment should help counter the effects of malicious platforms. One such technique for ensuring that a mobile agent arrives safely at its destination is through the use of replication and voting [28]. The idea is that rather than using a single copy of an agent to perform a computation, multiple copies are used. Although a malicious platform may corrupt a few copies of the agent, enough replicas avoid the encounter to successfully complete the computation.

For each stage of the computation, the platform ensures that arriving agents are intact, carrying valid credentials. Platforms involved in a particular stage of a computation are expected to know the set of acceptable platforms for the previous stage. Based on the inputs it receives, the platform propagates onto the next stage only a subset of the replica agents it considers valid. Underlying assumptions are that the majority of the platforms are well behaved, and that most agents arrive unscathed with equivalent results at each stage. One of the protocols used in this technique requires forwarded agents to convey the accumulated signature entries of all forwarding nodes onto each stage. The approach taken is similar to Path Histories, but extended with fault tolerant capabilities. The technique seems appropriate for specialized applications where agents can be duplicated without problems, the task can be formulated as a multi-staged computation, and survivability is a major concern. One obvious drawback is the additional resources consumed by replicated agents.
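
The voting step at each stage might be sketched as below: only a value reported by a strict majority of the replicas that arrived with valid credentials is propagated to the next stage. The majority rule is the natural choice under the stated assumption that most platforms are well behaved; the details differ from the protocols in [28].

import java.util.*;

// Toy stage-level voting over the results carried by replica agents.
public class ReplicaVoting {
    static Optional<String> majorityResult(List<String> replicaResults) {
        Map<String, Integer> tally = new HashMap<>();
        for (String result : replicaResults) {
            tally.merge(result, 1, Integer::sum);
        }
        return tally.entrySet().stream()
                .filter(e -> e.getValue() > replicaResults.size() / 2)  // strict majority
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        // Three replicas agree; one was corrupted by a malicious platform.
        List<String> results = List.of("price=42", "price=42", "price=42", "price=999");
        System.out.println(majorityResult(results).orElse("no majority: abort stage"));
    }
}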

5.4. Execution tracing

Execution tracing [32] is a technique for detecting unauthorized modifications of an agent through the faithful recording of the agent's behavior during its execution on each agent platform. The technique requires each platform involved to create and retain a non-repudiable log or trace of the operations performed by the agent while resident there, and to submit a cryptographic hash of the trace upon conclusion as a trace summary or fingerprint. A trace is composed of a sequence of statement identifiers and platform signature information. The signature of the platform is needed only for those instructions that depend on interactions with the computational environment maintained by the platform. For instructions that rely only on the values of internal variables, a signature is not required and, therefore, is omitted. The technique also defines a secure protocol to convey agents and associated security related information among the various parties involved, which may include a trusted third party to retain the sequence of trace summaries for the agent's entire itinerary. If any suspicious results occur, the appropriate traces and trace summaries can be obtained and verified, and a malicious host identified.

The approach has a number of drawbacks, the most obvious being the size and number of logs to be retained, and the fact that the detection process is triggered sporadically, based on suspicious results or other factors. Other more subtle problems identified include the lack of accommodation for multi-threaded agents and dynamic optimization techniques. While the goal of the technique is to protect an agent, it does afford some protection for the agent platform as well, provided that the platform can also obtain the relevant trace summaries and traces from the various parties involved.
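
The trace summary described above is essentially a hash over the ordered log of executed statements. A toy version of producing such a fingerprint is sketched below, with invented statement identifiers; a real trace would also attach platform signatures to statements that read the execution environment.

import java.security.MessageDigest;
import java.util.*;

// Toy execution trace: a list of statement identifiers plus a hash fingerprint.
public class ExecutionTrace {
    private final List<String> statements = new ArrayList<>();

    // Called by the platform for each statement the agent executes.
    void record(String statementId) { statements.add(statementId); }

    byte[] fingerprint() throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        for (String id : statements) {
            digest.update(id.getBytes("UTF-8"));
        }
        return digest.digest();      // trace summary submitted when the agent departs
    }

    public static void main(String[] args) throws Exception {
        ExecutionTrace trace = new ExecutionTrace();
        trace.record("load itinerary");
        trace.record("query price");          // hypothetical statement identifiers
        trace.record("store partial result");
        System.out.println("fingerprint bytes: " + trace.fingerprint().length);
    }
}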

5.5. Environmental key generation

Environmental Key Generation [25] describes a scheme for allowing an agent to take predefined action when some environmental condition is true. The approach centers on constructing agents in such a way that, upon encountering an environmental condition (e.g. via a matched search string), a key is generated, which is used to unlock some executable code cryptographically. The environmental condition is hidden through either a one-way hash or public key encryption of the environmental trigger. The technique ensures that a platform or an observer of the agent cannot uncover the triggering message or response action by directly reading the agent's code.

The procedure is somewhat akin to the way in which modern operating systems apply passwords to determine whether login attempts are valid (i.e. the password is used as a cryptographic key). One weakness of this approach is that a platform, which completely controls the agent, could simply modify the agent to print out the unlocked executable code upon receipt of the trigger, instead of executing it. Another drawback is that an agent platform typically limits the capability of an agent to execute code created dynamically, since it is considered an unsafe operation. An author of an agent can apply the technique, however, in conjunction with other protection mechanisms for specific applications on appropriate platforms.
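
A minimal sketch of the hash-based variant follows: the agent carries only a hash of the trigger and a payload encrypted under a key derived from the trigger itself, so neither can be read from the agent's code. The trigger string and key-derivation details here are illustrative, not those of [25].

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.security.MessageDigest;
import java.util.Arrays;

// Toy environmental key generation: unlock a payload only when a trigger is observed.
public class EnvironmentalKeyDemo {
    public static void main(String[] args) throws Exception {
        String trigger = "ACME-TENDER-2000";                    // hypothetical search string
        MessageDigest sha = MessageDigest.getInstance("SHA-256");
        byte[] triggerHash = sha.digest(trigger.getBytes("UTF-8"));
        byte[] key = Arrays.copyOf(
                sha.digest(("key|" + trigger).getBytes("UTF-8")), 16);   // 128-bit AES key

        // Carried by the agent: triggerHash plus the AES-encrypted payload.
        Cipher aes = Cipher.getInstance("AES");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"));
        byte[] lockedPayload = aes.doFinal("act on matching tender".getBytes("UTF-8"));

        // At some platform the agent observes a candidate environmental value.
        String observed = "ACME-TENDER-2000";
        if (Arrays.equals(sha.digest(observed.getBytes("UTF-8")), triggerHash)) {
            byte[] derivedKey = Arrays.copyOf(
                    sha.digest(("key|" + observed).getBytes("UTF-8")), 16);
            aes.init(Cipher.DECRYPT_MODE, new SecretKeySpec(derivedKey, "AES"));
            System.out.println(new String(aes.doFinal(lockedPayload), "UTF-8"));
        }
    }
}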

5.6. Computing with encrypted functions

The goal of Computing with Encrypted Functions [27] is to determine a method whereby mobile code can safely compute cryptographic primitives, such as a digital signature, even though the code is executed in untrusted computing environments and operates autonomously without interactions with the home platform. The approach is to have the agent platform execute a program embodying an enciphered function without being able to discern the original function; the approach requires differentiation between a function and a program that implements the function.

For example, Alice has an algorithm to compute a function f. Bob has input x and wants to compute f(x) for Alice, but she doesn't want Bob to learn anything about f. If f can be encrypted in a way that results in another function E(f), then Alice can create a program P(E(f)), which implements E(f), and send it to Bob, embedded within her agent. Bob then runs the agent, which executes P(E(f)) on x, and returns the result to Alice, who decrypts it to obtain f(x). If f is a signature algorithm with an embedded key, the agent has an effective means to sign information without the platform discovering the key. Similarly, if it is an encryption algorithm containing an embedded key, the agent has an effective means to encrypt information at the platform.

Although the idea is straightforward, the trick is to find appropriate encryption schemes that can transform arbitrary functions as intended. The technique has been shown to be useful for encrypting rational functions, and whether more general functions can be encrypted in a similar fashion remains to be seen. The technique, while very powerful, does not prevent denial of service, replay, experimental extraction, and other forms of attack against the agent.
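
As a hedged illustration of the idea, rather than the actual construction in [27]: with an additively homomorphic encryption scheme $E$, where $E(a) \cdot E(b) = E(a+b)$ and $E(a)^{k} = E(ka)$ for integers $k$, Alice can have Bob evaluate a linear function without revealing its coefficients:

\[
  f(x) = a x + b, \qquad
  E(f) = \bigl(E(a),\, E(b)\bigr), \qquad
  P\bigl(E(f)\bigr)(x) = E(a)^{x} \cdot E(b) = E(a x + b) = E\bigl(f(x)\bigr).
\]

Bob, holding an integer input $x$, computes $E(f(x))$ without learning $a$ or $b$, and Alice decrypts the returned value to obtain $f(x)$. Extending this kind of construction to polynomials and rational functions is the subject of [27].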

5.7. Obfuscated code

Hohl [12] gives a detailed overview of the threats stemming from an agent encountering a malicious host as motivation for Blackbox Security. The strategy behind this technique is simple: scramble the code in such a way that no one is able to gain a complete understanding of its function (i.e. specification and data), or to modify the resulting code without detection. A serious problem with the general technique is that there is no known algorithm or approach for providing Blackbox protection. However, the paper cites Computing with Encrypted Functions as an example of an approach that falls into the Blackbox category, with certain reservations concerning the limited range of input specifications that apply.

A time limited variation of Blackbox protection is introduced as a reasonable alternative, whereby the strength of the scrambling does not necessarily imply encryption as with the unqualified one, but relies mainly on obfuscation algorithms. Since an agent can become invalid before completing its computation, Obfuscated Code is suitable for applications that do not convey information intended for long-lived concealment. The examples given for pure obfuscation algorithms seem rather trivial and potentially ineffective. One promising method relies on a trusted host to trigger the execution of an agent's code segment. It is not, strictly speaking, a pure obfuscation algorithm, however, since code is redistributed to a trusted host, which is not part of the originally proposed scheme. The method does suggest a possible relationship between the Environmental Key Generation and Obfuscated Code techniques. A serious drawback to this technique is the lack of an approach for quantifying the protection interval provided by the obfuscation algorithm, thus making it difficult to apply in practice. Furthermore, no techniques are currently known for establishing lower bounds on the complexity for an attacker to reverse engineer an agent's code.

6. Summary

This paper identifies a range of measures for countering identified threats and fulfilling the security objectives of some, but not all, agent-based applications. While a wide variety of techniques do exist, not all are compatible with one another, due to their redundancy in purpose and overlap in functionality. For example, Path Histories and its public key cryptography underpinnings would be unnecessary in protecting an agent platform, if adequate proofs for an agent could be formulated to meet the platform's policy using the Proof Carrying Code technique. Similarly, Computing with Encrypted Functions conceptually overshadows Execution Tracing in protecting the agent. A useful way of viewing countermeasures in this regard is to differentiate between techniques oriented toward detection and those oriented toward prevention. Detection implies that the technique is aimed at discovering unauthorized modification of code or state information. Prevention implies that the technique is aimed at keeping the code and state information from being affected in a meaningful but negative way, as opposed to random alterations. To be effective, detection techniques are more likely than prevention techniques to depend on a legal or other social framework to penalize misbehavior.

The distinction between detection and prevention can be a bit arbitrary at times, since prevention often implicitly involves detection. Nevertheless, a determination is possible. Table 1 provides a categorization of the countermeasures reviewed. All other things being equivalent, a prevention technique is preferable to one oriented toward detection. However, many characteristics influence the choice of countermeasures. Such characteristics include the scope of protection, when and how the countermeasure is applied, dependencies, permanency, and the degree of difficulty in applying it.

No application is expected to benefit from wholesale adoption of the available techniques, simply due to the overhead and complexity involved. In order to be effectively applied, many of these techniques must be embodied within an agent system, while others can be applied independently within the context of a particular application. An overall framework that integrates these techniques into an effective security model is clearly needed.

7. Conclusions

The area of mobile agent security is in a state of immaturity, but rapidly improving. The traditional orientation toward host-based security persists and, therefore, available protection mechanisms tend to focus on protecting the agent platform. Emphasis is beginning to move toward developing techniques that are oriented toward protecting the agent, a much more difficult problem. The initial efforts are encouraging and will hopefully continue. There are a number of agent-based application domains for which basic and conventional security techniques should prove adequate. The countermeasures reviewed in this paper complement those techniques to reach a wider range of application domains. However, applications are expected to require a more comprehensive set of mechanisms and a flexible framework in which to apply a subset of selected mechanisms to meet their needs. In many ways, the success of software agent technology in general, and mobile agent technology in particular, depends on how well those needs are met.

Whether this evolutionary branch of computing successfully propagates forward or eventually dies out remains to be seen. Nevertheless, mobile agent technology is intriguing and offers a new paradigm for application development that cannot be summarily dismissed. In addition, the security techniques being devised as countermeasures for this environment are expected to have applicability within other contexts as well.


Table 1. Categorization of countermeasures

Countermeasure                                    Category     Security objective
Signed code                                       Detection    Agent platform
State appraisal                                   Detection    Agent platform
Path histories                                    Detection    Agent platform
Partial result encapsulation                      Detection    Agent
Mutual itinerary recording                        Detection    Agent
Itinerary recording with replication and voting   Detection    Agent
Execution tracing                                 Detection    Agent
Software-based fault isolation                    Prevention   Agent platform
Safe code interpretation                          Prevention   Agent platform
Proof carrying code                               Prevention   Agent platform
Environmental key generation                      Prevention   Agent
Computing with encrypted functions                Prevention   Agent
Obfuscated code                                   Prevention   Agent


References

[1] A. Acharya, M. Ranganathan, J. Saltz, Sumatra: a language for resource-aware mobile programs, in: J. Vitek, C. Tschudin (Eds.), Mobile Object Systems: Towards the Programmable Internet, Lecture Notes in Computer Science, vol. 1222, Springer, Berlin, 1997, pp. 111–130 (http://www.cs.umd.edu/~acha/papers/lncs97-1.html).
[2] Agent Management, FIPA '97 Specification, Part 1, Version 2.0, Foundation for Intelligent Physical Agents, October 1998 (http://www.fipa.org/spec/FIPA97.html).
[3] V. Anupam, A. Mayer, Secure web scripting, IEEE Internet Computing 2 (6) (1998) 46–55.
[4] D. Chess, et al., Itinerant agents for mobile computing, IEEE Personal Communications 2 (5) (1995) 34–49.
[5] G. Czajkowski, T. von Eicken, JRes: a resource accounting interface for Java, ACM Conference on Object Oriented Languages and Systems (OOPSLA), Vancouver, Canada, October 1998.
[6] W. Farmer, J. Guttman, V. Swarup, Security for mobile agents: authentication and state appraisal, Proceedings of the 4th European Symposium on Research in Computer Security (ESORICS '96), 1996, pp. 118–130.
[7] A. Fuggetta, G.P. Picco, G. Vigna, Understanding code mobility, IEEE Transactions on Software Engineering 24 (5) (1998) 342–361 (http://www.cs.ucsb.edu/~vigna/listpub.html).
[8] L. Gong, Java Security Architecture (JDK 1.2), Draft Document, Revision 0.8, Sun Microsystems, March 1998 (http://service.felk.cvut.cz/doc/java/share/jdk1.2beta3/docs/guide/security/spec/security-spec.doc.html).
[9] J. Gosling, H. McGilton, The Java Language Environment: A White Paper, Sun Microsystems, May 1996 (http://SunSITE.sut.ac.jp/java/whitepaper/).
[10] R.S. Gray, Agent Tcl: a flexible and secure mobile-agent system, Proceedings of the Fourth Annual Tcl/Tk Workshop (TCL 96), July 1996, pp. 9–23 (http://actcomm.dartmouth.edu/papers/#security).
[11] S. Haber, W.S. Stornetta, How to time-stamp a digital document, Journal of Cryptology 3 (1991) 99–111.
[12] F. Hohl, Time limited Blackbox security: protecting mobile agents from malicious hosts, in: G. Vigna (Ed.), Mobile Agents and Security, Lecture Notes in Computer Science, vol. 1419, Springer, Berlin, 1998, pp. 92–113 (http://mole.informatik.uni-stuttgart.de/papers.html).
[13] N. Islam, et al., A flexible security system for using internet content, IEEE Software, September 1997, pp. 52–59.
[14] Jumping Beans White Paper, Ad Astra Engineering Inc., Sunnyvale, CA, December 1998 (http://www.jumpingbeans.com/).
[15] G. Karjoth, N. Asokan, C. Gülcü, Protecting the computation results of free-roaming agents, Proceedings of the Second International Workshop on Mobile Agents, Stuttgart, Germany, September 1998.
[16] G. Karjoth, D.B. Lange, M. Oshima, A security model for Aglets, IEEE Internet Computing, August 1997, pp. 68–77.
[17] N. Karnik, Security in Mobile Agent Systems, PhD dissertation, Department of Computer Science, University of Minnesota, October 1998 (http://www.cs.umn.edu/Ajanta/).
[18] D.B. Lange, M. Oshima, Programming and Deploying Java Mobile Agents with Aglets, Addison-Wesley, Reading, MA, 1998.
[19] Mobile Agent System Interoperability Facilities Specification, Object Management Group (OMG) Technical Committee (TC) Document orbos/97-10-05, November 1997 (http://www.omg.org/techprocess/meetings/schedule/Technology_Adoptions.html#tbl_MOF_Specification).
[20] G. Necula, P. Lee, Safe kernel extensions without run-time checking, Proceedings of the 2nd Symposium on Operating System Design and Implementation (OSDI '96), Seattle, Washington, October 1996, pp. 229–243 (http://www.cs.cmu.edu/~necula/papers.html).
[21] ObjectSpace Voyager Core Package Technical Overview, Version 1.0, ObjectSpace Inc., December 1997 (http://www.objectspace.com/).
[22] J.J. Ordille, When agents roam, who can you trust? Proceedings of the First Conference on Emerging Technologies and Applications in Communications, Portland, Oregon, May 1996 (http://plan9.bell-labs.com/cm/cs/who/joann/ordille2.html).
[23] J.K. Ousterhout, J.Y. Levy, B.B. Welch, The Safe-Tcl security model, Technical Report SMLI TR-97-60, Sun Microsystems, 1997.
[24] J.K. Ousterhout, Scripting: higher-level programming for the 21st century, IEEE Computer, March 1998, pp. 23–30.
[25] J. Riordan, B. Schneier, Environmental key generation towards clueless agents, in: G. Vigna (Ed.), Mobile Agents and Security, Lecture Notes in Computer Science, vol. 1419, Springer, Berlin, 1998.
[26] V. Roth, Secure recording of itineraries through cooperating agents, Proceedings of the ECOOP Workshop on Distributed Object Security and 4th Workshop on Mobile Object Systems: Secure Internet Mobile Computations, INRIA, France, 1998, pp. 147–154.
[27] T. Sander, C. Tschudin, Protecting mobile agents against malicious hosts, in: G. Vigna (Ed.), Mobile Agents and Security, Lecture Notes in Computer Science, vol. 1419, Springer, Berlin, 1998, pp. 44–60 (http://www.icsi.berkeley.edu/~tschudin/).
[28] F.B. Schneider, Towards fault-tolerant and secure agentry, Proceedings of the 11th International Workshop on Distributed Algorithms, Saarbrücken, Germany, September 1997.
[29] M. Straßer, J. Baumann, F. Hohl, Mole: a Java based mobile agent system, in: M. Mühlhäuser (Ed.), Special Issues in Object Oriented Programming, Verlag, Berlin, 1997, pp. 301–308 (http://mole.informatik.uni-stuttgart.de/papers.html).
[30] J. Tardo, L. Valente, Mobile agent security and Telescript, Proceedings of IEEE COMPCON '96, Santa Clara, California, February 1996, IEEE Computer Society Press, pp. 58–63.
[31] Trusted Computer System Evaluation Criteria, Department of Defense, CSC-STD-001-83, Library No. S225 711, August 1983 (http://www.cru.fr/securite/Documents-generaux/orange.book).
[32] G. Vigna, Protecting mobile agents through tracing, Proceedings of the 3rd ECOOP Workshop on Mobile Object Systems, Jyväskylä, Finland, June 1997 (http://www.cs.ucsb.edu/~vigna/listpub.html).
[33] R. Wahbe, S. Lucco, T. Anderson, Efficient software-based fault isolation, Proceedings of the 14th ACM Symposium on Operating Systems Principles, ACM SIGOPS Operating Systems Review, December 1993, pp. 203–216 (http://www.cs.washington.edu/homes/tom/).
[34] J.E. White, Mobile agents, in: J.M. Bradshaw (Ed.), Software Agents, AAAI Press/MIT Press, Cambridge, MA, 1997.
[35] B.S. Yee, A sanctuary for mobile agents, Technical Report CS97-537, University of California, San Diego, April 1997 (http://www.cse.ucsd.edu/users/bsy/index.html).
[36] A. Young, M. Yung, Sliding encryption: a cryptographic tool for mobile agents, Proceedings of the 4th International Workshop on Fast Software Encryption (FSE '97), January 1997.


Mr Jansen is a researcher at the National Institute of Standards and Technology, Information Technology Laboratory, in Gaithersburg, Maryland. Since 1998, he has been a guest scientist at ECMA, an international standards association in Geneva, Switzerland. His research interests lie mainly in communications, distributed applications, and security. Currently, his focus is on securing mobile agent systems and applying them to provision services within an organizational infrastructure. Mr Jansen received the BS and MS degrees in Computer Science from the University of Dayton and the Ohio State University, and the MS degree in Engineering Management from the University of Dayton.