Abstract This paper presents a decision-making pattern that uses trust mechanisms. It comprises an expectation module that performs two functions. First, it infers, based on past performance, how information is to be understood (e.g. consider the expected outcome of a promise to deliver tomorrow, morgen, mañana or domani). Second, it infers, based on past performance, the uncertainty on the expected behavior. This pattern makes a key contribution by enabling cooperation in large systems of systems without requiring conventional integration, which typically results in rigidity and monolithic system designs. It contributes to the development of holonic systems or nearly decomposable systems that deliver most of the advantages of integration without the disadvantages.
Keywords Trust · Networked production systems · Holonic manufacturing systems · Nearly-decomposable systems
Introduction

Manufacturing systems and their corresponding manufacturing control systems can be considered as systems of systems. Both the physical or organizational distribution and the complexity and multiple objectives compel the introduction of a compounded system (Panetto and Molina 2008).
B. Saint Germain (B) · P. Valckenaers · J. Van Belle · P. Verstraete · H. Van Brussel
K.U. Leuven—Mechanical Engineering, Celestijnenlaan 300, 3001 Leuven, Belgium
e-mail: [email protected]
URL: www.mech.kuleuven.be/macc
Much excellent research focuses on solutions for one particular problem. These problems typically correspond to one aspect or are focused on one very particular and well-defined application.
In contrast, industrial practice is interested in an integrated and coherent set of systems, together controlling the manufacturing system or supply chain. Coordinating the set of systems, algorithms or solutions still poses big challenges to organizations.
In decentralized manufacturing coordination and control systems, intelligent products and intelligent resources exchange information and cooperate to achieve an efficient and effective production organization. Such decentralized designs easily scale to networked production and beyond. However, without a central organization in control, trust becomes a relevant issue: if a service provider or parts supplier promises to deliver by next week, what is most likely to happen? How uncertain are someone's expectations? It becomes a matter of trust (that is not too frequently betrayed).
With suitable trust mechanisms in place, a loosely coupled set of systems can be productive and competitive. Without them, the organizational challenges will be formidable. This paper presents a decision-making pattern benefiting from trust mechanisms and enabling advanced cooperation in loosely coupled networks of production facilities. The focus of the pattern is to structure the decision-making process and the information needed to take the decision. The decision can be made automatically or by humans. Note that the research aims at semi-open networks in which organizations interact by means of frequent small-grained interactions; it considers highly repetitive games with multiple players in a supply chain as its working point.
2636 J Intell Manuf (2012) 23:2635–2646

The paper first discusses networked production from a system of systems perspective. It addresses the degrees in which systems cooperate and what the implications are of the manner in which they cooperate. Next, trust in multi-agent systems is covered. Thereafter, the novel contribution is presented: a decision-making pattern including an expectation module. This pattern shows how trust mechanisms and technology can be embedded and deployed in manufacturing coordination and control. A sample application illustrates how the pattern may be applied. Finally, the contribution of this pattern is discussed, especially the contribution to holonic manufacturing systems.
Networked production systems
Networked manufacturing systems, and the accompanying production control systems, forcibly are systems of systems. Their physical and/or organizational distribution as well as their complexity and the presence of multiple objectives, originating from various owners and stakeholders, unavoidably lead to the omnipresence of these compounded systems (Panetto and Molina 2008).
Panetto and Molina (2008) identify three coordination types for systems of systems: compatible systems, (monolithically) integrated systems and interoperable systems. Koestler (1990), rephrasing Simon (1996, 1997), adds holonic systems to this list; they are nearly-decomposable systems of systems. Several developments and applications of holonic manufacturing systems are described in the literature (Babiceanu and Chen 2006). Ideally, holonic systems combine the performance of integrated systems with the adaptability of interoperable systems. This makes them effective in the transient states that occur when they are adapting to the pressures from their environment. The following sections discuss these four categories of systems of systems.
Compatible systems

Making all individual systems compatible ensures that one system does not interfere with the correct functioning of another system (Panetto and Molina 2008). Compatibility exists at different levels, from completely separate systems to interoperable systems and holonic systems. Interoperable systems are compatible systems which share/make use of each other's functionalities. Interoperability implies compatibility, but compatibility does not imply interoperability.
Only ensuring compatibility typically results in a disjoint set of systems with overlapping functionalities and duplication of data. This stands in contrast to a good software engineering principle: a single source of truth.
Monolithically integrated system of systems
Integrated systems make use of each other's functionalities and act together as one monolithic system. Being integrated into a single system implies a high functional dependence amongst two or more individual systems.
In a conventional approach, integration comprises the creation of a clean and perfect environment in which the individual systems can reside. Integration requires both standardization and harmonization. Standardization provides an interface between the set of systems to promote easy communication. Harmonization selects and configures the right set of systems to obtain the integrated system. Summarizing, for conventional integration approaches the following properties can be identified:
• Every system is highly dependent on other systems.
• Individual systems need to fulfill strict predefined standards.
• Systems should behave as specified.
Conventional integration of a number of systems results in a rigid monolithic system. The rigidity of the integrated system implies lost opportunities:
• Flexibility: the integration effort cannot be transferred from one application to another. Because the integrated system is constructed for one particular application, functionality beyond the original specification is very difficult to capture or add.
• Robustness: a typical integrated system is vulnerable to unexpected behaviors or situations. The dependencies are in many cases too strong to respond to an unforeseen situation.
• Complexity: integrated systems tend to have a big and complex structure which makes them difficult to manage.
Interoperable systems

Interoperability can be defined as the ability of two or more systems or components to exchange information and to use the information that has been exchanged (Geraci 1991).
The difference between an integrated system and an interoperable system can be understood by the analogy of travel accommodation. A chain of hotels, providing a standard level of quality to the customers, is an example of an integrated system. To fulfill all customer requirements, the services that the hotels provide should comply with a diverse set of requirements. Serving this diversity implies a reduced quality (e.g. a large choice but average-quality breakfast), an increased price, or both.
A portal offering a booking service for small personalized accommodations (e.g. bed and breakfasts) is an example of an interoperable system. Each accommodation provider is free to define his own quality and price. Travelers can choose what fits them best.
The advantages of interoperable systems are their flexibility (being capable of adding and removing systems) and robustness (coping with the uncertainty in their environment), as the dependency between systems is kept low.
Two levels of interoperability can be defined: syntactic interoperability and semantic interoperability (Camarinha-Matos and Afsarmanesh 2005).

Syntactic interoperability: if two or more systems are capable of communicating and exchanging data, they exhibit syntactic interoperability.

Semantic interoperability: beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the exchanged information meaningfully and accurately in order to produce useful results as defined by the end users of both systems.
Holonic systems

Holonic systems are, by definition, systems of systems where the subsystems both (Koestler 1990):
• Cooperate toward the overall goals of their system(s) of systems.
• Remain self-sustaining and sufficiently autonomous to absorb the real-world pressures and dynamics.
Holonic systems are fractal in that their structure repeats itself when moving along the "is a subsystem of" axis.
Nobel Prize winner Simon (1996, 1997) reveals that, in a dynamic and competitive environment, larger systems emerge, adapt and sustain themselves through subsystem replacement (at the various levels of subsystems). This insight follows from assuming bounded rationality and the insight that competition will drive organizations and systems to the largest size that remains manageable. The dynamic nature of our environment implies that the time window for construction and/or adaptation of large systems is bounded.
Overall, this results in nearly decomposable (systems of) systems. Simon calls it a law of the artificial in the sense that it is unavoidable (in the way the unavoidability of gravity is captured in a law of nature). System developers and designers have no choice and no alternatives whenever and wherever Simon's assumptions hold (i.e. in a competitive and dynamic environment).
Holonic systems go beyond semantic interoperability. As in integrated systems, systems make use of each other's services, may support a single source of truth, comply with conventions, etc. However, they remain robust and resilient. They have means to verify whether the other systems perform adequately and have fall-back mechanisms. They avoid the disadvantages of conventional monolithic integration and simultaneously grasp the benefits of integrated systems that go beyond interoperability, which only exchanges information but does not include team formation, collective actions, etc.
Note that in natural systems, this resilience often involves regeneration: (1) new systems, not locked into old and outdated design choices, grow up and receive education/training in parallel with the current systems in operation, and (2) these new systems take over from the current ones when they are ready and better adapted to the present situation than their predecessors.
Overall, interoperability is not the final stage in system of systems development. Holonic systems continue where interoperability ends, contributing with their cooperating subsystems that retain adequate levels of autonomy and resilience.
It is important to recognize that interoperability does not aim to address all issues in systems of systems. Indeed, attempts to solve problems by enhancing the communication while the root cause goes beyond information exchange will result in failures, lost time and underutilization of scarce (human) resources.
E.g. when an ERP system considers a production schedule assuming infinite capacities, perfect communication (interoperability) will not undo this naïve view and the problems it causes in a manufacturing execution system. If the ERP is holonic, subsystems that need this naïve view to be able to function would be replaced. Obviously, many unresolved research issues remain before society will be able to build fully holonic systems and before monolithic legacy systems will be replaced.
This paper treats trust mechanisms in networked production systems. The research was conducted in the context of Holonic Manufacturing Execution Systems (Valckenaers and Van Brussel 2005; Valckenaers et al. 2006a,b). Trust mechanisms make a fundamental contribution to holonic systems in that they allow cooperation without rigid couplings. Trust brings autonomic properties: it assists subsystems in self-managing their interactions and relationships within the overall system (network). Trust mechanisms enable precisely the kind of services needed for holonic, non-monolithic integration.
The next sections present a decision-making pattern that comprises an expectation module. This pattern enables one system to cooperate with another system while assessing the service performance of this other system (e.g. if a factory promises to deliver by Monday noon, trust mechanisms provide information, based on the analysis of past performance, on the expected delivery time and tell how much uncertainty/spread is plausible); thus the cooperation is decoupled and adaptive.
Decision making under uncertainty

In a production environment, uncertain and incomplete information is omnipresent. This is caused by uncertain processes and varying order demand. Due to the uncertainty, decision making comprises two aspects, knowledge and confidence:

• Knowledge: decision making identifies and chooses alternatives in line with the preferences of the decision taker, based on the known information.
• Confidence: decision making tries to make reasonable choices by identifying and removing uncertainty and doubt about the different alternatives.

In the literature, trust is presented as a promising concept to deal with uncertainty and incomplete information.
Research on trust focuses primarily on the development and testing of adequate trust models. The trust concept originates from the sociological, philosophical and economical research domains. In the computer science domain, trust gained much attention with the popularity of e-commerce and social applications. In these domains trust is applied in a straightforward way. E.g. eBay considers buy-sell interactions where the quality of the interaction(s) is indicated by a tuple <information quality, communication quality, delivery time>. The decision taker using the trust information is in this example a human being, the buyer. Although useful in e-commerce and in sociological studies, trust cannot be mapped directly to the interactions within a holonic system in general. The section on trust describes the challenges of implementing trust models in a holonic production system.
Example: order quotation
Order quoting is one particular example of a decision to be taken in a production environment.
Determining the price and delivery date for orders in a competitive production environment (the price and delivery date determine the success or failure of a lead) requires information from several systems. Amongst others:
• Manufacturing execution system—giving information on the current status of the production (e.g. resource status, stock, etc.).
• Production planning system—giving information on the production plan.
• Process information system—giving information on the respective process plans needed for the production of the order (in extreme cases a production engineer is required to analyze the system).
All this information is subject to uncertainty, which makes it difficult to make a correct decision on the price and due-date estimation. Among others:
• Resource status, stock, etc. deviate from reality.
• The production plan is likely to be invalidated by resource breakdowns or process time variations.
• Depending on the outcome of certain operations (e.g. after a quality check), additional operations may be required.
For this particular example, trust can measure the uncertainty of the individual systems. This measure helps the decision taker to make a rational decision rather than a guess.
Trust in multi-agent systems

This section discusses the concept of trust in multi-agent systems research. Note that the research presented in this paper focuses on applications in which agents have frequent small-grained interactions. Such a repetitive game situation is less common in research on trust in multi-agent systems (e.g. many researchers focus on eBay transactions and e-commerce).
Agents in a multi-agent system (MAS) rely on interactions with their environment and a subset of the surrounding agents within the MAS. Agent decision making depends on the information accumulated through these interactions. Reasoning on the quality of this information is essential in a rational decision process.
Trust inference determines whether an entity will trust another entity in a given situation. Trust inference is based on many different beliefs, evidence or logical premises. Each of these beliefs is derived from several sources. Some possible types of belief sources can be distinguished:
• Direct experience: the history of interactions, positive ornegative.
• Categorization: the relation between a group experience and an individual; there is typically a bias from the group experience towards the individual.
• Reputation: others' experiences influencing the truster's beliefs.
The evidence available from interacting with another entity is modeled by an interaction model, e.g. the number of positive interactions and the number of negative interactions. This interaction model is typically the basis to infer the trust level (to determine the trustworthiness of the other entity).
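As an illustration, such an interaction model and the trust inference on top of it can be sketched as follows. This is a minimal sketch under our own assumptions: the class and method names are hypothetical, and the trust estimate used here is the mean of a beta-binomial model, one of the model families mentioned later in this section.

```python
# Hypothetical sketch: an interaction model counting positive and negative
# interactions, with a beta-binomial trust estimate derived from the counts.
from dataclasses import dataclass

@dataclass
class InteractionModel:
    positive: int = 0
    negative: int = 0

    def record(self, outcome_ok: bool) -> None:
        """Store one direct experience, positive or negative."""
        if outcome_ok:
            self.positive += 1
        else:
            self.negative += 1

    def trust_level(self) -> float:
        """Mean of the Beta(positive + 1, negative + 1) posterior: an
        estimate of the probability that the next interaction is positive."""
        return (self.positive + 1) / (self.positive + self.negative + 2)

model = InteractionModel()
for ok in [True, True, True, False]:
    model.record(ok)
# Three positive and one negative interaction -> trust (3+1)/(4+2) = 2/3
```

With no evidence at all, the estimate falls back to the neutral prior of 0.5, which matches the intuition that an unknown entity is neither trusted nor distrusted.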
Interaction information

Multi-agent systems in general tend to have complex interactions (in terms of content and message protocols). Agent interactions typically result from the execution of an interaction protocol. The information exchanged between the interacting agents varies from simple Boolean information to complex ontological information (Mönch and Stehli 2003).
The prediction of the true meaning of the exchanged information depends not only on the information itself but also on the context in which the interaction takes place, for example, the stage of an interaction protocol.
Trust models are typically based on simple interaction models so as to limit the complexity of the statistical models used to infer the trust values. The interaction models are single-dimensional and do not account for any contextual information. Clearly, information is lost when mapping information from rich interactions onto a single-dimensional model. It creates a very narrow view on the environment.
Examples of trust models are the model of Marsh (1994), fuzzy trust models (Griffiths 2006; Falcone et al. 2003), the beta-binomial trust model (Klos and Poutré 2006), etc.
Using trust in the decision-making process requires attention to the specific context and behavior dynamics. The following sections discuss context and behavior dynamics in detail.
Context

Trust depends on the context in which the interactions happen. Interactions in one context are not necessarily meaningful in other contexts. A context in which interactions happen is determined by many variables. Some examples of context variables are:
• Service type. Some organizations can be trusted for delivering one type of service but not for other types (outside their core business).
• Entity role. Entities like suppliers, clients, etc. can act according to different roles. E.g. entities usually have a client and a provider role. In some cases, entity behavior can differ depending on its role. In everyday life, persons can have a working role and a husband role. It is clear that the behavior in these roles is completely different.
• Commitment. Commitment is a value indicating the willingness/ability to keep an intention. The level of commitment is important to indicate whether or not the intention is accurate, independent of the other context parameters.
Some variables are directly observable while others are implicitly defined. The "social" setting in which the interaction is taking place may also give rise to expectations which are not explicitly stated in the interaction itself.
Many current approaches (Marsh 1994; Jøsang et al. 2007; Sabater and Sierra 2005) do not have explicit support for context. Current approaches using context information make use of the following schema:
1. Identify possible entities (entities capable of performing the desired service).
2. For all entities, identify the context. In most cases the context is defined by the contract parameters.
3. Locate similar (with respect to the context) situations in history. If the context is not present in the history, a context generalization or specialization is performed.
4. Calculate/retrieve the trust level for all entities.
5. Sort and select the best trust level.
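The five-step schema above can be sketched in code. This is a hypothetical illustration: the history format, the neutral prior of 0.5 and the simple fall-back generalization (ignoring the context key entirely) are our own assumptions, not prescribed by the cited approaches.

```python
# Hypothetical sketch of the context-based selection schema: pick a provider
# by trust level for a given context, falling back to a generalized context
# when no matching history exists.

def trust_for(history, entity, context):
    """Steps 3-4: locate similar situations in history and compute a trust
    level (fraction of positive interactions); generalize the context if
    the exact context was never observed."""
    episodes = [ok for (e, c, ok) in history if e == entity and c == context]
    if not episodes:  # context generalization: drop the context constraint
        episodes = [ok for (e, c, ok) in history if e == entity]
    if not episodes:
        return 0.5  # no evidence at all: neutral prior
    return sum(episodes) / len(episodes)

def select_provider(history, entities, context):
    """Steps 1, 2 and 5: rank the candidate entities and pick the best."""
    return max(entities, key=lambda e: trust_for(history, e, context))

# Invented history: (entity, context, outcome) triples
history = [
    ("shop_A", "A4", True), ("shop_A", "A4", True), ("shop_A", "A0", False),
    ("shop_B", "A0", True),
]
best = select_provider(history, ["shop_A", "shop_B"], "A0")  # -> "shop_B"
```

The fall-back in `trust_for` is exactly the kind of context generalization the next paragraph warns about: averaging A4 experience to judge an A0 job may well mislead.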
Context generalization or specialization is typically difficult to perform. For instance, one can ask the question whether to trust a print shop to print an A0 sheet when having had many high-quality A4 sheets printed at that print shop.
Behavior dynamics

A second issue needs attention: when applying trust in multi-agent systems, the behavior of the different agents does not remain unchanged during their lifetime. The quality of service providers is not static. A print shop that previously only had a low-quality A0 printer, and consequently a bad trustworthiness, can be very trustworthy now if the print shop invested in a new printer.
The state of the art (Jøsang et al. 2007; Sabater and Sierra 2005) deals with the dynamism in the behavior of service providers by controlling the history of the interactions. All interactions older than a certain threshold are not considered. This direct history control mechanism has the disadvantage of being non-selective. Some interactions are indeed outdated, but others are still informative. When the threshold is very large, fewer informative interactions will be erased, but the history will possibly contain more interactions which are outdated.
This trade-off is in many applications difficult to make. E.g. a specific subset of agents (within an application) can be very dynamic, whereas others evolve more slowly. An agent representing an entity at the beginning or end of its lifecycle will tend to evolve faster than agents representing entities in a steady-state situation. In many cases, evolving behavior can be noticed by reasoning on the provided information. For instance, the print shop can indicate the new, better-quality printer functionality.
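One alternative to a hard history threshold, sketched here purely as an illustration, is exponential recency weighting: outdated interactions fade gradually instead of being erased abruptly. This is a standard technique, not the mechanism of the cited approaches, and all names are ours.

```python
# Hypothetical sketch: recency-weighted trust. Each interaction is weighted
# by an exponential decay factor, so recent evidence dominates old evidence
# without a hard cut-off threshold.

def decayed_trust(interactions, now, half_life):
    """interactions: list of (timestamp, outcome_ok) pairs. Evidence that is
    one half-life old counts half as much as fresh evidence, and so on."""
    num = den = 0.0
    for t, ok in interactions:
        w = 0.5 ** ((now - t) / half_life)
        num += w * (1.0 if ok else 0.0)
        den += w
    return num / den if den > 0 else 0.5  # neutral prior without evidence

# The print shop example: old failures (the low-quality A0 printer) are
# outweighed by recent successes on the new printer.
history = [(0, False), (1, False), (90, True), (95, True), (100, True)]
trust_now = decayed_trust(history, now=100, half_life=10)  # close to 1
```

As the half-life grows towards infinity, the estimate converges to the plain fraction of positive interactions, recovering the behavior of an unbounded history.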
Decision making pattern
When applying trust to MAS in general, there is a need to take the above-mentioned aspects (context, evolving behavior and interaction information) into account. The architectural pattern proposed in this paper enables the use of existing trust models, which have been studied extensively, while coping with the specific problems of a MAS implementation. Note that the research contribution is the pattern, not (yet) a(nother) trust model; the pattern supports existing as well as forthcoming trust models.

Fig. 1 Decision pattern
Agents need to take the unreliability of the available information into account when making decisions. In this context, expectations about the environment are an essential concept in the decision-making pattern. The Pinocchio fairytale introduces the concept of an expectation and its relation with trust. Pinocchio is a liar, and at first sight he is untrustworthy. However, Pinocchio is very predictable. Whenever he tells a lie his nose grows. The growing of his nose is observable and can be related to his trustworthiness at that specific moment. This relation can be modeled by an interaction pattern. This information makes decision making much easier. The ability to deduce a correct interaction pattern depends on whether the observer interprets the growing of the nose in a correct way.
Summarizing, making good decisions based on information given by another entity depends on the relationship between the entities. The better the entities are able to interpret information, the better the decisions that can be taken. The result of an interpretation is called an expectation, which is a first-class concept in the architectural pattern. Whether or not two entities are able to interpret each other well is reflected in a trust measure.
Figure 1 shows a module decomposition view of the pattern. The different modules and relations are elaborated subsequently.
Information module

Information or received/perceived data is in many multi-agent applications essential to ensure that agents take acceptable decisions. In a multi-agent system, the available information consists of information from the environment as well as internal information. Internal information originates from facts observed in the past and information which is generally known by the community of agents, e.g. norms (Vasconcelos et al. 2007).

The information module gives access to the external information sources, the environment, and manages this perceived information. Part of the information can be stored (cached) locally, depending on the type of agent.

Managing the perceived information includes updating the stored information based on new observations and forgetting outdated information. Note that the information module only reflects what is observed. No interpretation of the perceived information is involved.
Expectation module

In contrast to the information module, the expectation module reasons about the available information. The expectation module constructs the agent's expectation according to the available information (e.g. if Pinocchio has a long nose, the expectation is the opposite of the information provided). Expectations relate to the past, the present or the future.
The expectation built by the expectation module is not the statistically expected value. An expected value may not be expected in the general sense: the "expected value" itself may be unlikely or even impossible (such as having 2.5 children), just like the sample mean. An expectation not only generates what is most probable; an expectation may also give information about other, less probable options, informing how improbable they are.
An expectation can be characterized by its accuracy and precision. Accuracy is the degree of how close the samples are to a reference point. Precision gives an indication of the variability of the difference between the reference point and the samples.

The target comparison2 can be used as an analogy to explain the difference. In this analogy, repeated measurements are compared to arrows that are fired at a target. Accuracy describes the closeness of the arrows to the bull's eye at the target centre (Fig. 2). Arrows that strike closer to the bull's eye are considered more accurate. The closer the system's measurements are to the accepted value, the more accurate the system is considered to be.
2 Adopted from http://en.wikipedia.org/wiki/Accuracy_and_precision.
To continue the analogy, if a large number of arrows are fired, precision would be the size of the arrow cluster (when only one arrow is fired, precision is the size of the cluster one would expect if this were repeated many times under the same conditions). When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, if not necessarily near the bull's eye. The measurements are precise, though not necessarily accurate.
In the best case, an expectation is accurate and precise. In practice it is not always possible to generate an accurate and precise expectation. There will often be a trade-off between accuracy and precision. This trade-off is determined by the particular goal involved in the decision and the alternatives considered in the decision module.
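The target analogy can be made concrete with a small sketch. The helper names are ours: accuracy is measured here as the offset of the sample mean from the reference point, precision as the spread of the samples around their own mean.

```python
# Hypothetical sketch of the accuracy/precision distinction from the
# target analogy: lower values are better for both measures.
from statistics import mean, pstdev

def accuracy(samples, reference):
    """How far the arrows centre sits from the bull's eye."""
    return abs(mean(samples) - reference)

def precision(samples):
    """How tightly the arrows cluster together."""
    return pstdev(samples)

# Bull's eye at 0: one tight cluster far from the centre, one loose
# cluster centred on it.
tight_but_off = [4.9, 5.0, 5.1]      # precise, not accurate
spread_on_target = [-2.0, 0.0, 2.0]  # accurate on average, not precise
```

The two invented sample sets mirror the two failure modes in the text: `tight_but_off` is precise but inaccurate, `spread_on_target` the reverse.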
Expectation module implementation
Figure 3 shows a component-and-connector diagram of the expectation module. The different components available and the information flow between the components are explained. Central to the expectation module is a prospect. A prospect is defined as a foreseeable possibility; the potential things that may happen.3 The prospect module builds prospects according to the available information (from the information module).

A prospect comprises two important aspects: an inferred model of the future and the confidence that this model is correctly inferred. Each aspect is delegated to a specific module; inferring a model of the future is handled by the inference module. The correctness of the inference is determined by the confidence evaluator module.
In the order quotation example (proposing a due-date), the modules would give the following results:

• Inference module: depending on the estimated lead time, the inference module calculates the due-date. Suppose the estimated lead time is 2 days; the inference module can add a safety margin of 1 day.
• Confidence module: depending on the history, the confidence module determines the confidence of the inferred due-date.
• The prospect groups the inferred due-date and the confidence as a tuple <due-date, confidence>.

3 Merriam-Webster dictionary.
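For the order quotation example, the prospect can be sketched as follows. This is a hypothetical illustration: the confidence measure used here (the fraction of past orders whose lead time stayed within a one-day tolerance of the estimate) is our own stand-in for whatever trust model the confidence evaluator would actually use.

```python
# Hypothetical sketch of a prospect for order quotation: the inference
# module proposes a due-date, the confidence evaluator derives a confidence
# from past deviations, and the prospect groups both as a tuple.

def infer_due_date(lead_time_days, safety_margin_days=1):
    """Inference module: estimated lead time plus a safety margin."""
    return lead_time_days + safety_margin_days

def confidence(past_deviations, tolerance=1.0):
    """Confidence evaluator: fraction of past orders whose actual lead
    time deviated from the estimate by at most `tolerance` days."""
    if not past_deviations:
        return 0.5  # no history: neutral confidence
    within = sum(1 for d in past_deviations if abs(d) <= tolerance)
    return within / len(past_deviations)

def prospect(lead_time_days, past_deviations):
    """The prospect: a <due-date, confidence> tuple."""
    return (infer_due_date(lead_time_days), confidence(past_deviations))

# Estimated lead time of 2 days, 1-day margin, 3 of 4 past orders on time
prospect(2, [0.2, -0.5, 1.8, 0.1])  # -> (3, 0.75)
```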
Both the inference module and the confidence module are provided with a history controlled by the history control module. There is a need for separate storage as different history representations are required. The representations align with the inference and confidence module, respectively. The separation of the history representations makes it possible to tune each representation towards the specific module in order to reduce the computational complexity.
The goal indicates the preference amongst the considered alternatives. Examples are due-date performance for orders, work in progress, etc.
The preference amongst the alternatives can have multiple representations. In its simplest representation, the different alternatives are ranked. More useful representations give a weight to each alternative indicating the preference for the respective alternative.
Decision module

The decision module selects the appropriate actions depending on the expectations and the particular goal. The preference amongst the different alternatives will be taken into account together with the uncertainty and the previously made decisions.
Fig. 3 Expectation module

When making decisions in an uncertain environment there is an inherent exposure to risk. Choosing amongst alternatives in some cases requires a trade-off between the amount of risk one accepts and the expected performance. Three typical behaviors can be identified:

• Optimism: always uses the best estimate to compare alternatives.
• Pessimism: always uses the worst estimate to compare alternatives.
• Realism: uses a weighted average between the best and worst estimate to compare alternatives.
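The three attitudes above can be sketched as scoring rules over best/worst estimates. This is a minimal illustration; the alternative names and estimate values are invented, and higher scores are assumed to be preferred.

```python
# Hypothetical sketch of the three risk attitudes: each alternative carries
# a best and a worst estimate; the decision module scores them according to
# the chosen attitude and picks the highest score.

def score(best, worst, attitude, realism_weight=0.5):
    if attitude == "optimism":
        return best                     # always the best estimate
    if attitude == "pessimism":
        return worst                    # always the worst estimate
    # realism: weighted average between best and worst estimate
    return realism_weight * best + (1 - realism_weight) * worst

def choose(alternatives, attitude):
    """alternatives: dict name -> (best_estimate, worst_estimate)."""
    return max(alternatives, key=lambda a: score(*alternatives[a], attitude))

alts = {"risky": (10.0, 0.0), "safe": (6.0, 5.0)}
# optimism picks "risky" (10 > 6); pessimism picks "safe" (5 > 0);
# realism with equal weights picks "safe" (5.5 > 5.0)
```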
Action module

This module is responsible for taking the appropriate actions in the environment. For example, actions to initiate machining operations on a car engine part on the factory floor.
Sample application: order quotation
In the order quotation process, and in particular due-date determination, multiple policies can be used. Some example policies used in companies are listed below:
• Precise and accurate order quotation. In this policy, the company proposes a quote which is very close to the real cost/timing of the order. To keep up the company's trustworthiness, the company does not deviate from the original offer. This is typical for competitive industries in which the company's reputation is important.
• Precise and loose order quotation. This policy also provides a sharp quote, but takes the freedom to deviate from the original quote, to maximize the chance on a potential lead (effective order). This policy is typically used in a competitive industry where the client accepts offer deviations (e.g. due to process variations, unexpected production costs).
• Indicative order quotation. This policy provides only an indicative quote.
Due-date determination is illustrated for each of the above-defined policies.

Precise and accurate order quotation. As there is process variation, the company adds slack to the expected lead-time. To assure an accurate order quote, the company searches for the minimum slack time that results in a trust value above a predefined threshold (Fig. 4).

Precise and loose order quotation. The company adds a predefined slack time (absolute or relative) and checks whether the accuracy is acceptable to the company standards. This policy typically results in an earlier due-date compared to the precise and accurate policy.
Indicative order quotation. In this policy, the company does not bother about the accuracy. The company has two options:
• Adding no slack to the expected lead-time.
• Adding enough slack time to make sure that the proposed due-date exceeds the worst-case scenario.
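The minimum-slack search of the precise and accurate policy can be sketched as follows. This is an illustrative Python sketch, not the paper's implementation: the trust value is simplified here to the observed fraction of past operation times that would have met the quote, and all numbers are hypothetical.

```python
def minimal_slack(past_times, expected, trust_threshold,
                  step=1.0, max_slack=200.0):
    """Search the smallest slack such that the quote (expected + slack)
    would have covered enough past operation times to keep the trust
    value above the threshold."""
    slack = 0.0
    while slack <= max_slack:
        quote = expected + slack
        # Simplified trust: fraction of past due-dates that were met.
        trust = sum(t <= quote for t in past_times) / len(past_times)
        if trust >= trust_threshold:
            return slack
        slack += step
    return None  # no acceptable slack within the search range

# Hypothetical past operation times (min) around an expected 100 min:
past = [95, 102, 110, 98, 120, 105, 99, 130, 101, 108]
slack = minimal_slack(past, expected=100, trust_threshold=0.9)  # 20.0
```

Raising the trust threshold drives the search toward the worst observed case, illustrating the precision/accuracy trade-off discussed above.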
Fig. 4 Accuracy threshold
The following experiments contain one subsystem: a resource. This resource provides one operation. The real operation time of the resource follows a normal distribution with an average of 100 min and a variance of 20 min.
The resource holon provides an order quotation service: it provides the estimated finishing time given the order, operation and arrival time. The resource holon uses the mean time to formulate the answer to the order quotation question. The experiments look at the effect of 0 to 100 order quotations. The resulting expectations contain an order due-date.
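The experimental setup can be sketched as a small Python class. This is an illustrative sketch only: the class name and interface are assumptions, and the stated 20 min spread is read here as a standard deviation.

```python
import random

class ResourceHolon:
    """Sketch of the experimental resource: a single operation whose
    real duration is drawn from a normal distribution N(100, 20)."""

    def __init__(self, mean=100.0, sd=20.0, seed=7):
        self.mean, self.sd = mean, sd
        self.rng = random.Random(seed)

    def quote(self, arrival_time):
        # The order quotation service answers with the estimated
        # finishing time, based on the mean operation time.
        return arrival_time + self.mean

    def execute(self, arrival_time):
        # The real finishing time is stochastic.
        return arrival_time + self.rng.gauss(self.mean, self.sd)

resource = ResourceHolon()
due = resource.quote(0.0)     # estimated finishing time: 100.0
real = resource.execute(0.0)  # actual finishing time (random)
```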
To test the power and flexibility of the pattern, multiple implementations of the various modules were developed:
• Fixed expectation inference module: This module serves as the nominal scenario and uses the virtual end-time of the resource holon as the exact due-date.
• Slack expectation inference module: This module adds a specified slack time to the given end-time. The slack time correlates with the precision: the bigger the slack time, the lower the precision and vice versa.
• Variance expectation inference module: This module calculates the sample variance in order to generate the expected end-time. The slack time can then be chosen as a function of this variance.
• Accuracy preserving prospect module: This module creates prospects based on a desired accuracy.
Fig. 5 Precision preserving prospect module: slack time = 0
Fig. 6 Precision preserving prospect module: slack time = 20
• Precision preserving prospect module: This prospect module generates prospects based on a desired precision.
• Both the binomial and the entropy trust model can be used to evaluate the confidence in an order quotation (Sierra and Debenham 2005).
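A binomial trust model can be sketched as the Beta-posterior mean of the probability that a promise is kept, a common formulation in the trust literature. The sketch below, with all parameters hypothetical, simulates trust evolution over 100 order quotations against the experimental resource (reading its 20 min spread as a standard deviation); it is not necessarily the exact model used in the experiments.

```python
import random

def binomial_trust(successes, failures):
    # Beta-posterior mean of the probability that a promise is kept.
    return (successes + 1) / (successes + failures + 2)

random.seed(1)
succ = fail = 0
trust_curve = []
for _ in range(100):                   # 100 order quotations
    real_time = random.gauss(100, 20)  # real operation time
    quoted_due = 100 + 20              # mean end time plus slack of 20
    if real_time <= quoted_due:
        succ += 1
    else:
        fail += 1
    trust_curve.append(binomial_trust(succ, fail))
```

Because each quotation outcome is stochastic, the resulting curve is saw-toothed rather than smooth, consistent with the experimental figures.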
One experiment is explained in detail. Other experiments canbe found in Saint Germain (2010).
Accuracy versus precision
In the precise and accurate policy, the company wants a minimal slack time which results in an acceptable accuracy. To show the trade-off between precision and accuracy, the evolution of the accuracy over 100 quotes is shown for three different precisions: slack time = 0, slack time = 20 and slack time = 30.
Fig. 7 Precision preserving prospect module: slack time = 30
Fig. 8 Sample variance inference: slack time = σ
Fig. 9 Sample variance inference: slack time = 2 σ
In the experiment, the binomial trust model and the slack/fixed time expectation inference module are used as implementations of, respectively, the confidence module and the expectation inference module.
Figures 5, 6 and 7 show the evolution of the trust level (Y-axis) depending on the number of order quotations (X-axis) with a respective slack time of 0, 20 and 30. In case of slack time 0 (fixed time expectation: the nominal scenario), the client expects the resource to indicate an exact end time; no variation is allowed. In case of slack time 30, the real end time of the operation is expected to be smaller than the given end time increased by 30 time units. Colors correspond to replications; one replication corresponds to 100 order quotations.
All figures show a clear trend: the trust value is clearly higher when the slack time increases. Nevertheless, the trust values are subject to the stochastic process, resulting in a saw-toothed curve. This means a decision maker should not react to marginal differences in trust.
Sample variance inference
The previous experiments used slack time as the precision measure. Defining the appropriate slack time is not always easy when there is little knowledge of the process. Another, easier to configure, precision measure is the variance. The slack time can then be inferred as a function of the variance, e.g. 2σ, 3σ, ...
In this experiment an alternative inference module is used. The experiment makes use of a precision preserving prospect module and the binomial trust model. The sample variance inference module calculates the sample variance and adds this value to the given end time. Figure 8 shows the evolution of the confidence and the evolution of the expected additional time (slack based on variation).
Both the slack time and the confidence converge after a limited number of iterations. If a higher trust level is desired, 2σ, 3σ, ... can be used as slack time. Figure 9 gives the trust and slack evolutions with a 2σ slack time.
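The sample variance inference can be sketched in a few lines of Python. The function name and observation data below are illustrative assumptions, not the paper's code.

```python
import statistics

def inferred_slack(observed_times, k=1.0):
    # Sample-variance inference sketch: the slack added to the
    # expected end time is k times the sample standard deviation
    # of the observed operation times (k = 1, 2, 3, ...).
    return k * statistics.stdev(observed_times)

# Hypothetical observed operation times (min):
observed = [95, 102, 110, 98, 120, 105, 99, 130, 101, 108]
slack_1sigma = inferred_slack(observed)
slack_2sigma = inferred_slack(observed, k=2)
```

As more observations accumulate, the sample standard deviation stabilizes, which is why both the slack time and the confidence converge after a limited number of iterations.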
This paper presents a decision-making pattern that uses trust mechanisms. This pattern enables effective cooperation in loosely coupled systems that belong to a supply network. The paper situates the contribution in the context of system integration, system interoperability and nearly-decomposable holonic systems. It discusses trust in multi-agent systems relevant for the repetitive-game and multi-step (along supply chains) character of the application domain. It presents the pattern and illustrates how it may be applied. Overall, it makes a useful contribution to networked production in that less organization is needed to obtain adequate or better performance. However, in the context of holonic manufacturing, its contribution has strategic value, as discussed below.
Contribution to HMES and the design of systems of systems
The decision-making pattern with its expectation module, discussed in this paper, represents a contribution of strategic importance to HMES technology (Valckenaers and Van Brussel 2005; Hadeli et al. 2007), especially from a system-of-systems perspective. Simon (1997) shows, by formal reasoning starting from a small and plausible set of assumptions, that larger systems are necessarily composed of nearly-decomposable subsystems. These systems emerge, adapt and survive by subsystem replacement in a fractal manner. Each of those subsystems addresses a subset of the overall concerns, where nearly-decomposable implies that these subsets ought to be nearly disjoint.
In a HMES, the PROSA (Weyns 2006) architecture facilitates (maximizes) this separation of concerns. In particular, technology/process-related concerns are cleanly separated from the logistic concerns (both intra- and inter-factory); these concerns are addressed by product (type) agents and (production) order agents, respectively. Order agents are agnostic concerning production processes and systematically consult their product agents about what to do next. Order agents are the managers. Conversely, product agents are agnostic concerning instantiations of their product type, and have no information on work-in-process at all. Product agents are the technology experts of the team.
However, there remained a weakness in this architecture. Indeed, whenever a production system becomes operational, the logistic and process-related concerns become more tightly coupled because the organization gains experience in an asymmetric manner. More precisely, amongst the technically feasible routings, only some specific routings are used. In principle, product agents would need to start indicating preferred routings based on the fact that more experience and information is available for some routings than for others. The nasty consequence is that these preferences cannot be derived from technological grounds alone, and that they are time and location dependent. Product agents would be exposed to concerns that fit them poorly, and the separation of concerns would suffer dramatically.
The research contribution in this paper solves the above problem and safeguards the separation of concerns. The preference for certain routings over others is automatically detected and accounted for. The evolution over time is discovered (e.g. when, during a period of high demand, alternative routings are used in parallel and information becomes available). The pattern delivers an essential missing link and remedies the last remaining issue of the PROSA architecture. More information is contained in Saint Germain (2010).
Acknowledgments This paper presents work funded by the Research Fund of the K.U. Leuven (Concerted Research Action on Agents for Coordination and Control) and the European Commission (EU projects Mascada, MPA and MABE).
References

Babiceanu, R., & Chen, F. (2006). Development and applications of holonic manufacturing systems: A survey. Journal of Intelligent Manufacturing, 17(1), 111–131.

Camarinha-Matos, L., & Afsarmanesh, H. (2005). Collaborative networks: A new scientific discipline. Journal of Intelligent Manufacturing, 16(4), 439–452.

Falcone, R., Pezzulo, G., & Castelfranchi, C. (2003). A fuzzy approach to a belief-based trust computation. In Trust, reputation, and security: Theories and practice (pp. 55–60). Berlin: Springer.

Geraci, A. (1991). IEEE standard computer dictionary: Compilation of IEEE standard computer glossaries. New York: IEEE Press.
Griffiths, N. (2006). A fuzzy approach to reasoning with trust, distrust and insufficient trust. In 10th international workshop on cooperative information agents, September 11–13, 2006, Edinburgh, Scotland.
Hadeli, V. P., Van Brussel, H., Saint Germain, B., Verstraete, P., & Van Belle, J. (2007). Towards the design of autonomic nervousness handling in holonic manufacturing execution systems. In IEEE international conference on systems, man and cybernetics, 7–10 October 2007, pp. 2883–2888.

Jøsang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2), 618–644.

Klos, T., & Poutré, H. L. (2006). A versatile approach to combining trust values for making binary decisions. In Trust management (pp. 206–220). Berlin/Heidelberg: Springer.

Koestler, A. (1990). The ghost in the machine. Penguin Group. ISBN 0-14-019192-5.

Lorini, E., & Castelfranchi, C. (2007). The cognitive structure of surprise: Looking for basic principles. Topoi, 26(1), 133–149.

Marsh, S. P. (1994). Formalising trust as a computational concept. Ph.D. thesis, Department of Mathematics and Computer Science, University of Stirling.

Mönch, L., & Stehli, M. (2003). Multiagent system technologies (p. 341). Berlin: Springer.

Panetto, H., & Molina, A. (2008). Enterprise integration and interoperability in manufacturing systems: Trends and issues. Computers in Industry, 59, 641–646.

Sabater, J., & Sierra, C. (2005). Review on computational trust and reputation models. Artificial Intelligence Review, 24(1), 33–60.

Saint Germain, B. (2010). Distributed coordination and control for networked production systems. Belgium: K.U. Leuven.

Sierra, C., & Debenham, J. (2005). An information-based model for trust. In AAMAS '05: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems (pp. 497–504).

Simon, H. (1996). The sciences of the artificial (3rd ed.). Cambridge: MIT Press.

Simon, H. (1997). Models of bounded rationality (Vol. 3). Cambridge: MIT Press.

Valckenaers, P., & Van Brussel, H. (2005). Holonic manufacturing execution systems. CIRP Annals-Manufacturing Technology, 54(3), 427–432.

Valckenaers, P., Hadeli, K., Saint Germain, B., Verstraete, P., & Van Brussel, H. (2006). Emergent short-term forecasting through ant colony engineering in coordination and control systems. Advanced Engineering Informatics, 20(3), 261–278.

Valckenaers, P., Cavalieri, S., Saint Germain, B., Verstraete, P., Hadeli, K., Bandinelli, R., Sergio, T., & Van Brussel, H. (2006). A benchmarking service for the manufacturing control research community. Journal of Intelligent Manufacturing, 17(16), 667–679.

Vasconcelos, W., Kollingbaum, M. J., & Norman, T. J. (2007). Resolving conflict and inconsistency in norm-regulated virtual organizations. In Proceedings of the 6th international joint conference on Autonomous agents and multiagent systems, ACM (pp. 1–8).

Weyns, D. (2006). An architecture-centric approach for software engineering with situated multi agent systems. Belgium: K.U. Leuven.