
Operationalizing C2 Agility


17th ICCRTS

“Operationalizing C2 Agility”

Operationalizing and Improving C2 Agility: Lessons from Experimentation

Topics: (1): Experimentation, Metrics, and Analysis

(2): Approaches and Organizations (3): Concepts, Theory, and Policy

Abstract "Operationalizing C2 Agility" is the process of moving from a conceptual model or theory of C2 Agility to applying this theory to improve an entity’s C2 Agility in practice. This process is greatly facilitated if one builds a proper foundation. In this paper, we begin by identifying the critical elements or cornerstones of such a foundation and the role they play in enabling an entity to systematically and efficiently improve their agility. We show that these prerequisites already exist, that they permit extensions that are important for both continuing research and practice applications, and that they have been successfully employed in experiments. We illustrate the capability of the existing foundation by presenting the designs for and the results of a series of experiments that look at the Agility-related implications of various stresses and conditions. We conclude with an assessment of what is on the critical path to “Operationalizing C2 Agility”.

David S. Alberts Institute for Defense Analyses, USA

email: [email protected]

Marco Manso SAS-085 Member, Portugal

email: [email protected]

Point of Contact: David S. Alberts

Institute for Defense Analyses 4850 Mark Center Drive

Alexandria, VA 22311, USA +1-703-845-2411 [email protected]


1 INTRODUCTION

As individuals and entities come to understand that their preferred approach to command and control, management, or governance is not well-suited for all of the situations and mission challenges they currently face or expect to face, particularly those characterized as Complex Endeavors (1), they will seriously consider the proposition that survival in the Age in which we live requires Agility in general and C2 Agility specifically (2).

However, it is one thing to decide that one wants to become more agile and quite another to know what specific actions could, if taken, improve one’s agility. This paper considers what it means to “operationalize agility”. We provide an example of the application of extant theory, in an experimental setting, to instantiate two forms of the Collaborative C2 Approach option and compare their relative agilities to other C2 Approach options in other regions of the C2 Approach Space specified in the NATO NEC C2 Maturity Model (N2C2M2) (SAS-065 2010).

2 APPROACH

The effort reported here involved the following tasks:

1. Define, in the context of C2 Agility, what it means to “operationalize” a concept or theory and identify the cornerstones of the foundation that must be in place to move from theory to efforts aimed at improving the practice.

2. Determine the extent to which such a foundation exists.

3. Test this foundation by designing and conducting a series of experiments to ascertain if the existing foundation can support an analysis of an arbitrary C2 Approach option, and compare the results of a set of experiments employing this option to the results for C2 Approaches previously considered.

3 OPERATIONALIZING C2 AGILITY: REQUIRED FOUNDATION

In this section of the paper, we define what it means to “operationalize” a concept or theory by identifying the foundational cornerstones that must be in place to move from theory to practice. We look to the existing literature and on-going efforts of NATO Research Group SAS-085 to see if these cornerstones exist in the case of C2 Agility.

The term “operational” is commonly defined as “able to be used or in use.” Synonyms include “in working order, active, in effect, and in force”. When applied to an idea or theory, the term operational means that the theory is defined in a way that permits it to be tested, demonstrated, and/or applied. The term operationalizing can refer either to the process of taking a concept and applying it in practice or to actually achieving the intended result. Therefore, operationalizing C2 Agility involves both 1) demonstrating that C2 Agility can be improved and 2) actually improving C2 Agility for a given entity in its operating environment.


Being able to demonstrate that an entity’s agility has indeed been improved, or that the agility of one C2 approach is greater than another, requires that we are able to measure agility and recognize a difference in the level of agility that is significant in an operational setting. Thus, operationalizing C2 Agility requires, at a minimum, a way to measure the agility of an entity with a minimum degree of accuracy and precision. For practical applications, the cost of the measurement, to include the cost of instrumentation, data collection, and computation, must be affordable. Therefore, the existence of at least one agility metric and an affordable process of measurement are two of the cornerstones of the foundation necessary to operationalize C2 Agility.

With these two cornerstones in place, it is possible to improve agility; but without some additional foundational support in the form of relevant theory, the improvements achieved will result from a process of trial and error, that is, more from luck or happenstance than from understanding and a systematic approach. Thus, these improvements may not be repeatable or efficient. To move beyond a trial and error improvement process, we need a valid and “practical” theory. Theory should help us by suggesting possibilities, ways that are hypothesized to improve, in this case, agility. Theory, in the form of a conceptual model, also suggests intermediate measures that are thought to affect the level of agility realized. These intermediate measures can be helpful in developing a better understanding of the enablers and inhibitors of agility.

Two theories form the remaining two foundational cornerstones: the NEC C2 Maturity Model (N2C2M2) and the Conceptual Model of C2 Agility (CMC2A). With these remaining two cornerstones in place, improvements in C2 Agility are realizable in a more systematic and efficient fashion. The foundation for operationalizing C2 agility is depicted in Figure 1.
Given that each of these four cornerstones currently exists, in some form, we conclude that, at this point in time, the necessary elements are in place to embark on a journey to improve not only our C2 Agility but also our understanding of C2 Agility.

3.1 Foundation for Operationalizing C2 Agility

Before reporting on our efforts to employ the foundation that currently exists, we provide a brief overview of each of the cornerstones that we employed to design, conduct, and analyze the results of a series of experiments.

Figure 1 - Foundation for Operationalizing C2 Agility

3.1.1 Cornerstone 1: C2 Agility (Conceptual) Model (the theory of C2 Agility)1

We will not, in this paper, attempt to present a complete recitation of the conceptual model of Agility, but simply provide the reader with enough to understand this paper. Readers interested in a more complete description of the conceptual model of agility that we are using are directed to Alberts (2011)2.

1 NATO SAS-085 is currently engaged in an effort to build upon previous C2 Agility-related work by developing an updated Conceptual Model of C2 Agility and testing this model using case studies and experiments to see if the theory is sufficient to guide efforts to measure, assess, and improve agility. Their final report is expected in 2013.

Agility was initially defined as the “capability to successfully cope with changes in circumstances”3. The verb “to cope” was later replaced by the phrase “to effect, cope with, and/or exploit”. This change in the definition was made to highlight the fact that agility is not only about avoiding or mitigating adverse consequences; the concept of Agility also involves the ability to improve one’s situation by taking advantage of opportunities that arise, whether or not they have been anticipated or expected. Thus, agility encompasses both a passive and an active component. This change also serves to make clear that one can “create one’s own luck” by being proactive and by shaping circumstances to create opportunities.

To apply the concept of agility to a given entity, whether to ascertain how agile a particular entity currently is, to compare two entities’ agility, or to evaluate alternative approaches to improving an entity’s agility, we need to specify:

1) What it means for the entity to be “successful” and

2) The scope of relevant “changes in circumstances”.

Success can be defined in terms of the achievement of a particular outcome (e.g., “be profitable”) or in terms of operating in an acceptable range of performance (e.g., keeping the temperature between 68 and 72 degrees). The consequences of not being successful can vary from entity to entity and from situation to situation. As a result, success can be defined in a more nuanced manner. For example, a system’s performance could fall outside of the range of acceptable performance by less than some amount, for some period of time, and still be considered successful.

It would be unreasonable to expect that an entity could be successful under all conceivable circumstances. Certainly, it would be extremely costly to design or build something that worked acceptably at all times. Specifying the scope of missions, tasks, and circumstances of interest, what is referred to as the “Endeavor Space”4, is a critical element in measuring agility, since it defines the universe of circumstances. Care must be taken in determining the dimensions of this space as well as the ranges for each of the dimensional variables. If the Endeavor Space is too narrowly defined, it is probable that the circumstances one does not expect will be left out and entities may get a false sense of security.
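As an illustration, a nuanced success criterion of this kind can be written as a simple predicate over a performance trace. The thresholds and function names below are hypothetical, not taken from the experiments:

```python
def successful(readings, low=68.0, high=72.0,
               max_excursion=2.0, max_excursion_steps=3):
    """Return True if a performance trace stays within [low, high],
    tolerating excursions of at most `max_excursion` units that last
    at most `max_excursion_steps` consecutive time steps."""
    consecutive_out = 0
    for value in readings:
        if low <= value <= high:
            consecutive_out = 0
        elif (low - max_excursion) <= value <= (high + max_excursion):
            consecutive_out += 1
            if consecutive_out > max_excursion_steps:
                return False  # excursion lasted too long
        else:
            return False  # excursion too large
    return True

print(successful([70, 71, 73, 70, 69]))   # brief, small excursion: True
print(successful([70, 80, 70]))           # excursion too large: False
```

Tightening `max_excursion` and `max_excursion_steps` to zero recovers the strict “always within range” definition of success.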

3.1.2 Cornerstone 2: NATO NEC C2 Maturity Model5

The Endeavor Space tells us what our “challenge universe” is. We need a corresponding construct to tell us what the set of possible C2 approaches is. This is provided by the N2C2M2, together with an Approach Space and a set of regions in this space that are considered to be of interest to entities participating in Complex Endeavors. Figure 2 depicts this approach space and five regions of interest within it that correspond to C2 Approaches labeled Conflicted C2, De-conflicted C2, Coordinated C2, Collaborative C2, and Edge C2. Each C2 approach is characterized by operating in a specific region of the C2 approach space. It is an N2C2M2 hypothesis that the more network-enabled a C2 approach is (i.e., more distribution of decision rights across the collective, less constrained patterns of interaction, and broader dissemination of information), the more likely it is to develop shared awareness and shared understanding (NATO SAS-065 2010, p. 69).

2 Alberts, D. S., The Agility Advantage.
3 Alberts, D. S., The Agility Advantage, p. 66.
4 Alberts, D. S., The Agility Advantage, p. 287. http://dodccrp.org/files/agility_advantage/Agility_Advantage_Book.pdf
5 http://dodccrp.org/sas_files/n2c2m2_final_draft.pdf

Figure 2 - C2 Approaches and the C2 Approach Space

The experiments reported on in this paper define and instantiate the three most network-enabled C2 approaches (i.e., Coordinated, Collaborative, and Edge) and a fourth C2 approach (a more constrained form of Collaborative) to verify whether the existing foundation, depicted in Figure 1, is sufficient to permit explorations beyond those that have already been made in past N2C2M2 experimentation (see (SAS-065 2010) and (Manso and B. Manso 2010)). We felt that this was an important aspect of determining whether or not the existing foundation is sufficient to “operationalize C2 Agility”.

3.1.3 Cornerstone 3: C2 Agility Metrics

Alberts (2011) provides a number of ways of comparing the agility of one or more C2 Approaches. These ways can also be used to compare entities that possess the capability to adopt more than one approach to C2.

To visualize agility in the context of a specified Endeavor Space, C2 Agility maps can be used. This method is qualitative in nature and is used when one wants to see the regions of Endeavor Space where, in this case, a given C2 Approach can operate successfully6 or to compare two or more approaches.

Visualizations provide a sense of Agility, or relative Agility, rather than a single Agility score. A simple “coverage” calculation can be made using an Agility Map that translates the areas7 in the Endeavor Space where an entity can operate successfully into a scalar value. When outcomes are not deterministic but probabilistic, Agility Maps, instead of being “black and white,” present shades of grey that reflect the probabilities of success. In such cases, the coverage calculation takes these probabilities into consideration and the value of the Agility metric is calculated using weights that correspond to them. Figure 3 presents two illustrative Agility Maps (using a 10 by 10 resolution, hence containing 100 cells or conditions) and the corresponding values for the Agility metric.

6 Success being determined using the values of a selected measure of effectiveness and a defined range of acceptable performance.
7 In mathematical terms, this Agility metric represents the union of the regions where the entity can operate successfully.

The map includes:

• A matrix that represents the Endeavor Space. A cell or region in the “map” corresponds to a particular mission and set of circumstances. In our experiments, it represents the time it takes to find the first correct solution (X-axis) and the average number of Correct Complete Identifications (Y-axis), and

• An indication of whether the approach is successful (“operating within acceptable bounds” - black cells) or not (empty cells) at a given region of the Endeavor Space (i.e., cell or group of cells).

Figure 3 - Illustrative Agility Maps. (Left: C2 Approach X, value of Agility Metric = 0.28; the approach is successful in 28 out of 100 conditions of the Endeavor Space. Right: C2 Approach Y, value of Agility Metric = 0.42; the approach is successful in 42 out of 100 conditions. Black cells denote successful conditions; empty cells, unsuccessful ones.)

To measure Agility, the following two metrics are used (Alberts 2011, p. 424):

• Absolute agility: the percentage of the area in the Endeavor Space in which an entity can operate successfully.

• Benchmarked agility: the increase or decrease in the percentage of the area in the Endeavor Space in which an entity can operate successfully, relative to its baseline8.

Both metrics are calculated based on the Agility Maps. The method used to calculate agility is described in annex 8.1.
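Both metrics can be computed directly from an Agility Map. The following is a minimal sketch, assuming the map is stored as a 2-D list of per-cell success probabilities (0 or 1 in the deterministic case) and reading benchmarked agility as the signed change in coverage relative to the baseline map; these representational choices are ours, not prescribed by the method in annex 8.1:

```python
def absolute_agility(agility_map):
    """Absolute agility: the (probability-weighted) fraction of the
    Endeavor Space in which the entity can operate successfully.
    `agility_map` holds, per cell, the probability of success
    (0 or 1 in the deterministic, black-and-white case)."""
    cells = [p for row in agility_map for p in row]
    return sum(cells) / len(cells)

def benchmarked_agility(agility_map, baseline_map):
    """Benchmarked agility, read here as the change in coverage
    relative to the baseline (normal circumstances): positive means
    more of the Endeavor Space is covered than under the baseline."""
    return absolute_agility(agility_map) - absolute_agility(baseline_map)

# Deterministic 10 by 10 example: successful in 28 of 100 conditions,
# matching C2 Approach X in Figure 3.
approach_x = [[1.0] * 7 + [0.0] * 3 if r < 4 else [0.0] * 10
              for r in range(10)]
print(absolute_agility(approach_x))   # 0.28
```

With probabilistic outcomes, the same `absolute_agility` computation already implements the weighted coverage, since each cell contributes its probability of success.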

These Agility Maps can grow quite large depending on the number of dimensions (mission type, mission requirements, system condition (damage), information availability, and so on) and the number of instances for each dimension (e.g., 5 mission types, 16 levels of network damage).

8 Baseline refers to normal circumstances.


There are a number of ways of making these Agility Maps easier to work with in 2 dimensions. One consists of looking at only a portion of the map at one time, that is, ignoring a set of dimensions while focusing on just a few. Another, which we add in this paper to the analyst’s repertoire, is to present results by stacking a series of Agility Map slices into a single Agility Map. In effect, we collapse one dimension and represent success in white (never successful), shades of grey (darker: more successful; lighter: less successful), and black (always successful). In these Stacked Agility Maps, black means that the entity can operate successfully across the entire relevant range of the variable (condition) that is collapsed. Examples of these stacked Agility Maps are presented in section 5.

Agility Maps reflect the outcome of a task or mission taken at some appropriate point in time. Agility Maps are thus static visualizations. However, it is also important to understand how an entity is progressing over time. In this paper we introduce what we call Performance Maps, that is, 10 by 10 matrices that plot the value of a selected measure of performance (Y-axis) over time (X-axis). These have been developed not by looking at the state of the simulation at fixed points in time, but rather by noting the times at which pre-determined thresholds are reached. Thus, Performance Maps are a visualization of the time required to reach an objective. As was the case with Agility Maps:

• In Performance Maps a cell is black if the performance threshold has been achieved by the time indicated and white otherwise.

• Performance Maps can be stacked to illustrate the sensitivity of “time to objective” as a function of a selected condition or circumstance.

Examples of these Performance Maps are included in section 5.
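The stacking described above can be sketched as an averaging over map slices, one slice per value of the collapsed condition. The data layout (lists of lists of 0/1 values) is an assumption of ours; each resulting cell value maps directly to a grey level:

```python
def stack_agility_maps(slices):
    """Collapse one dimension of the Endeavor Space by stacking
    Agility Map slices: each cell of the result holds the fraction
    of slices in which the entity was successful there
    (1.0 = always successful, rendered black; 0.0 = never successful,
    rendered white; intermediate values = shades of grey)."""
    n = len(slices)
    rows, cols = len(slices[0]), len(slices[0][0])
    return [[sum(s[r][c] for s in slices) / n for c in range(cols)]
            for r in range(rows)]

# Three small boolean slices, e.g. one per level of network damage.
slices = [
    [[1, 1, 0], [1, 0, 0]],   # light damage
    [[1, 1, 0], [0, 0, 0]],   # moderate damage
    [[1, 0, 0], [0, 0, 0]],   # heavy damage
]
stacked = stack_agility_maps(slices)
print(stacked[0])   # [1.0, 0.666..., 0.0]
```

The same averaging applies unchanged to Performance Maps, where each slice records whether a performance threshold was reached by a given time.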

3.1.4 Cornerstone 4: C2 Agility Measurement Process

The fourth cornerstone is the capability to instantiate the propositions of Agility theory and Agility-related metrics in an experiment or real-world situation by observing or collecting the data necessary to construct Agility Maps and calculate values for the Agility metric.

Being able to measure success in a meaningful way is critical. In the experiments previously reported on by Alberts (2011), success has been defined in terms of measures of effectiveness that have included: average correctness (the percentage of agents that get a correct solution by the end of the time available) and maximum timeliness (the proportion of time remaining after the first complete correct solution is obtained). In addition to measuring effectiveness in a manner that is comparable across different approaches to C2, there are effectiveness measures, developed in a research effort funded by the Center for Edge Power at the Naval Postgraduate School9, that take into consideration the nature of the approach itself. For example, leader-centric approaches (such as Hierarchy) are designed to funnel information to a leader so that the situation can be understood by the leader and a course of action determined and conveyed to lower echelons. As a result, Hierarchies may achieve low levels of shared awareness and yet their leaders determine the correct solution10.

9 The Principal Investigator for this research grant was Marco Manso.
10 Of course, the overall leader can delegate one or more of his decision rights completely or with certain constraints attached.
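The two comparable measures of effectiveness defined above, average correctness and maximum timeliness, can be sketched directly from their definitions. Function and variable names are ours, not part of the ELICIT platform:

```python
def average_correctness(n_correct_agents, n_agents):
    """Percentage of agents that have a correct solution
    by the end of the time available."""
    return n_correct_agents / n_agents

def maximum_timeliness(first_correct_time, time_available):
    """Proportion of time remaining after the first complete correct
    solution is obtained; 0 if no correct solution was found in time."""
    if first_correct_time is None or first_correct_time > time_available:
        return 0.0
    return (time_available - first_correct_time) / time_available

print(maximum_timeliness(36.0, 60.0))   # first solution at t=36 of 60: 0.4
print(maximum_timeliness(None, 60.0))   # no solution found: 0.0
```

Both measures fall in [0, 1], which is what allows them to serve as axes of a normalized Endeavor Space and to be compared across C2 Approaches.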


We have, therefore, included in this paper an approach-specific effectiveness measure that reflects the ability of an entity to operate as intended. This allows one to make a distinction between design and execution. The approach-specific effectiveness measure looks at whether the approach works as intended (e.g., the leader gets the correct solution) rather than whether or not this level of performance is appropriate for a given mission (e.g., the mission requires high shared awareness). Thus, one can make a distinction between an entity that is performing as intended and one that is performing adequately for the mission at hand. Additionally, the potential negative outcomes and associated risks of organizations designed to generate low shared awareness, as opposed to those designed to generate high shared awareness, fall within the scope of the C2 Agility experiments that are described next.

4 C2 AGILITY EXPERIMENTS

In this section of the paper, we explain our objective in conducting a series of experiments. We briefly describe the experimental platform, the C2 Approaches examined, and the conditions or circumstances considered. We also present a series of hypotheses that could be tested by experiments of the kind employed here.

4.1 Purpose and Scope of the Agility Experiments

Our primary purpose here is to see if we, as a community, are in a position to undertake efforts to operationalize C2 Agility in a systematic fashion. Doing so requires the following:

1. Characterization of one or more C2 Approaches, either the representation of these approaches in an experimental setting or the observation and classification of C2 Approaches in practice,

2. The definition of an Endeavor Space,

3. The collection and observation of Agility-related variables, and

4. The calculation of Agility metrics, and the comparison of two or more C2 Approaches with respect to a set of Agility-related variables, Agility Maps, and metrics.

Since experiments have already been conducted and the results analyzed for a specified set of C2 Approaches, we wanted to further test the utility of the concept of the Approach Space by seeing if we could instantiate a variant of one of these approaches (the Collaborative approach). Having accomplished this (adding another Approach), we wanted to see if we could predict the outcomes of the experiments by using the results obtained from approaches adjacent to this “new” approach. That is, we sought to establish whether or not our findings were consistent with theory and previous results. We felt that if this turned out to be the case, it would be reasonable to conclude that the state of the theory, the research methodology, and the existing tools could support efforts to operationalize C2 Agility.

4.2 Experimentation Platform

The experiments reported on here use the ELICIT platform. ELICIT was originally developed to test hypotheses related to Edge and hierarchical (traditional) command and control practices. A form of ELICIT, abELICIT, supports instantiating software agents (complementing or replacing humans), thereby allowing full control of experiments and the generation of large quantities of data for analysis in a cost-effective way (Ruddy 2009; Ruddy 2011).

The ELICIT challenge is to find the Who, What, Where and When of a terrorist attack. Throughout the experiment, factoids (i.e., information elements that are pieces of the puzzle) are distributed to individuals. Individuals may (or may not) disseminate factoids to others by sharing information and collaborating using this instrumented platform. However, only by sharing information can they achieve sufficient levels of awareness to solve the problem completely.

The challenge in ELICIT is unambiguous and clear: it has a clearly defined objective and all information is accessible. Moreover, the Factoid Set (i.e., the set containing all factoids) contains no ambiguous factoids and no wrong information. Nonetheless, the dynamics caused by human subjects during the runs result in complex behaviors. For example, the order in which factoids are received by agents (in turn, a function of individual agent decisions) has an impact on the final results.

4.3 C2 Approaches Instantiated

For the purposes of our analysis we chose to instantiate three of the NATO NEC C2 Maturity Model approaches (Coordinated, Collaborative, and Edge). For the Collaborative approach, in particular, we consider two forms: a one-echelon collaborative approach, called “CTC-TL Collaborative”, that involves interactions between and among team leaders only; and a second, named “Collaborative”, that goes beyond Team Leader collaboration to include the creation of two “cross team members” on each team. These cross team members are each assigned one additional area of interest and can access the websites that correspond to their areas of interest. In Table 1, we describe each of the C2 approaches and graphically depict the connections between and among participants in each of these approach options. The participants include a Cross-Team Coordinator (CTC), Team Leaders (TL) and Team Members (TM).

C2 Approaches Modeling Description (*)

Coordinated C2
• 3-tier organization: CTC, TL and TM roles.
• Cross-team communication via CTC only.
• Within teams, full connectivity between team members and their respective team leader.
• CTC has access to all websites. Teams have exclusive access to their team websites.
• Success Criterion: Organization success depends on the Coordinator finding the correct solution.

CTC-TL Collaborative C2
• 3-tier organization: CTC, TL and TM roles.
• Cross-team communication: fully connected CTC and TLs.
• Within teams, full connectivity between team members and their respective team leader.
• CTC and TLs have access to all websites. Team members only access their team websites.
• Success Criterion: Organization success depends on the Coordinator finding the correct solution to all problem spaces OR the Team Leaders finding the correct solutions to their respective problem spaces.

Collaborative C2
• 3-tier organization: CTC, TL and TM roles.
• Cross-team communication: fully connected CTC and TLs. Two cross-TMs per team are each assigned one additional area of interest and can interact with the cross-TM on another team that corresponds to that area of interest.
• Within teams, full connectivity between team members and their respective team leader.
• CTC and TLs have access to all websites. Team members only access their team websites, except cross-TMs, who also have access to the websites that correspond to their additional areas of interest.
• Success Criterion: Organization success depends on the Coordinator finding the correct solution to all problem spaces OR the Team Leaders finding the correct solutions to their respective problem spaces.

Edge C2
• 1-tier organization: TM role only.
• Fully connected network: all TMs connected.
• All TMs have access to all websites.
• Success Criterion: Organization success depends on a plurality of the individuals’ identifications (IDs) being correct in each problem space.

(*) The following acronyms are used: Cross-Team Coordinator (CTC), Team Leader (TL) and Team Member (TM).

Table 1 - C2 Approaches Modeling and Connectivity
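The success criteria in Table 1 can be expressed as predicates over the participants’ identifications. The sketch below is ours, with assumed data shapes (dictionaries mapping each problem space, such as who/what/where/when, to an answer); ties in the Edge plurality vote are resolved by first-seen order, which is a simplification:

```python
from collections import Counter

def coordinated_success(ctc_ids, truth):
    """Coordinated C2: the Coordinator must be correct
    in every problem space."""
    return all(ctc_ids.get(ps) == ans for ps, ans in truth.items())

def collaborative_success(ctc_ids, tl_ids, truth):
    """(CTC-TL) Collaborative C2: the Coordinator is correct in all
    problem spaces OR each Team Leader is correct in its own problem
    space (`tl_ids` maps each problem space to the answer of the
    team leader responsible for it)."""
    leaders_ok = all(tl_ids.get(ps) == ans for ps, ans in truth.items())
    return coordinated_success(ctc_ids, truth) or leaders_ok

def edge_success(all_ids, truth):
    """Edge C2: in each problem space, the plurality of the
    individuals' identifications must be correct."""
    for ps, ans in truth.items():
        votes = Counter(ids[ps] for ids in all_ids)
        winner, _ = votes.most_common(1)[0]  # ties: first-seen wins
        if winner != ans:
            return False
    return True
```

Expressing the criteria as code makes the design distinction explicit: the leader-centric criteria inspect one or a few designated roles, while the Edge criterion aggregates over the whole collective.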


We instantiated these C2 Approaches in the ELICIT platform using software agents calibrated as average performing and using the ELICIT setup and configuration that represents normal conditions (Manso 2012). The results obtained form the Agility Agent Baseline, that is, the outcomes that represent the normal conditions for each of the C2 approaches instantiated, against which further manipulations to measure agility will be made. These results are presented next.

C2 Approaches Projections on the C2 Approach Space Dimensions

Each of the C2 Approaches modeled yields different results along the key dimensions of the C2 approach space. The relative locations of these approaches along the dimensions that define the Approach Space are presented in Table 2.

The modeling used succeeded in increasing each of the key-dimension values when changing the C2 approach from less network-enabled (e.g., Coordinated C2) to more network-enabled (e.g., Edge C2).

C2 Approach | ADR-C | DI-C1 | PI-C2

Coordinated C2 | None | 30% | 20%

CTC-TL Collaborative C2 | Distributed across Higher Echelons | 60% | 25%

Collaborative C2 | Distributed across Higher Echelons | 70% | 28%

Edge C2 | Fully Distributed | 100% | 100%

1 Measured as average information accessed. 2 Measured as average network reach.

(Table 2 also includes a graphical depiction of each approach’s position along the ADR-C, DI-C, and PI-C dimensions, ranging from Centralized/Low through Distributed Across Higher Echelons/Medium to Fully Distributed/High.)

Table 2 - C2 Approaches Projections on the C2 Approach Space Dimensions

Measures of Effectiveness

Table 3 presents the measures of effectiveness defined and measured for this work pertaining to the Agility Agent Baseline.

All approaches are effective in ELICIT and, in general, the more network-enabled an approach is, the better its performance scores. The only exception is the ‘Maximum Timeliness’ score, for which Collaborative C2 obtains the best value (i.e., it is the fastest to reach a complete correct identification). To what extent the C2 approaches are able to maintain acceptable levels of performance while coping with changes and disturbances is the scope of C2 Agility.


C2 Approach | Average Correctness | Maximum Timeliness | Effectiveness*

Coordinated C2 | 0.06 | 0.17 | 1

CTC-TL Collaborative C2 | 0.29 | 0.20 | 1

Collaborative C2 | 0.35 | 0.40 | 1

Edge C2 | 1.00 | 0.20 | 1

* C2 Approach specific

Table 3 – Agility Agent Baseline: Agility Results

4.4 Conditions and Circumstances Varied

Agility is about maintaining or enhancing effectiveness (or efficiency) as circumstances change. Therefore, it is about being able to successfully operate in a variety of circumstances. In this paper, we report upon a set of experiments that considered the experimental manipulations or variations presented in Table 4.

Manipulation | Description | Agility component to observe
Key Information Availability | The point at which key information is made available is varied. When information is provided later, there is less time to accomplish the task. | Versatility (over problem difficulty); Responsiveness
Agent Performance | Three levels of agent performance were simulated: ‘lower performing’, ‘normal performing’ and ‘high performing’. | Versatility (of organization); Responsiveness
Infostructure Degradation | Infostructure performance was degraded by removing links between nodes and websites. | Resilience (of infostructure); Responsiveness
Organization Disruption | The organization was disrupted by eliminating nodes (i.e., individuals). | Resilience (of organization); Responsiveness

Table 4 - Experimentation Manipulations

4.5 Hypotheses

The experiments reported on here involved manipulating agent performance, time pressure (a form of problem difficulty), damage to the network (links) or connectivity, and damage to the organization. It is expected that these changes will affect the ability of the organization to function successfully. In addition to the expectation that, as conditions become more stressful (e.g., more network damage, lower-performing agents), entity performance will degrade for any given C2 Approach, it is also expected that some C2 Approaches will be able to deal with these stresses better than others. In other words, some C2 Approaches will manifest higher levels of Agility than others.


Based on the N2C2M2 expectations and past experimentation results, more network-enabled C2 Approaches are expected to be more Agile than less network-enabled approaches.

Shared Awareness is considered necessary to accomplish certain missions and tasks. Based upon previous results, it is expected that more network-enabled C2 Approaches will exhibit greater levels of shared awareness and hence be more appropriate for these types of missions.

5 RESULTS OF EXPERIMENTS

This section presents the results of the abELICIT runs described briefly above. It is important to note that our ability to instantiate two different Collaborative C2 Approaches and the manipulations identified in Table 4 demonstrates the “readiness” of the Operationalizing C2 Foundation to support efforts aimed at measuring C2 Agility in experiments as well as in operational settings. This, in turn, supports efforts to diagnose problems that inhibit agility or to assess innovations that may enable it.

It is also important to note that the composition of the set of factoids and the manner in which these factoids are distributed makes it possible for all of the C2 Approaches to have at least one agent get the complete solution. This is unlike the series of experiments in Alberts (2011) where there were mission challenges for which some C2 Approaches proved unable to have anyone develop a complete solution.

5.1 Impact of Key Information Availability

The baseline ELICIT experiments consist of 68 factoids distributed among human participants or their corresponding agents in a pre-determined manner. For the purposes of this set of experiments, two additional datasets were created that varied the time at which relevant information was distributed to agents. Factoids containing relevant information are referred to as “signal”; those that are not relevant to the solution are labeled “noise”. In these experiments, all information is true. Moreover, the time distribution of factoids was changed from 5 time units (the baseline case) to 20 time units.

By manipulating the time key information is available (discoverable) to agents, we can make it possible to reach effectiveness thresholds sooner or prevent the development of a correct solution for a given period of time. In effect, these manipulations enable or inhibit individual, and hence, entity timeliness.

In addition to the effect this manipulation has on timeliness, it can also affect average correctness or shared awareness. This is because the agent may run out of available time if they do not obtain needed information soon enough. The original manner of factoid distribution and our manipulations are presented in Table 5.


Disruption | Information Distribution Manipulation
Standard Problem | 34 factoids, half signal and half noise, posted in the first wave; 17 factoids, half signal and half noise, posted in the second wave; the remaining 17 factoids, half signal and half noise, posted in the final wave.
More Timely Problem | All information necessary to solve the problem posted in the first wave; 17 noise factoids posted in the second wave; the remaining noise factoids posted in the final wave.
Less Timely Problem | All noise factoids (34) posted in the first wave; 17 signal factoids posted in the second wave; 17 signal factoids posted in the final wave.

Table 5 - Factoid Distribution as a function of Key Information Timeliness
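For illustration, the three distributions in Table 5 can be sketched as simple wave schedules in code. This is a reconstruction for exposition only; in particular, the 8/9 split of signal and noise within the later ‘Standard Problem’ waves is an assumption, since 17 factoids cannot be divided exactly in half.

```python
def make_schedule(problem: str) -> list[dict]:
    """Return per-wave (signal, noise) factoid counts for a problem variant."""
    if problem == "standard":
        # Three waves of 34, 17 and 17 factoids, each roughly half signal.
        return [{"signal": 17, "noise": 17},
                {"signal": 8, "noise": 9},   # assumed split of 17 factoids
                {"signal": 9, "noise": 8}]   # assumed split of 17 factoids
    if problem == "more_timely":
        # All 34 signal factoids up front, noise afterwards.
        return [{"signal": 34, "noise": 0},
                {"signal": 0, "noise": 17},
                {"signal": 0, "noise": 17}]
    if problem == "less_timely":
        # All 34 noise factoids first, signal delayed to later waves.
        return [{"signal": 0, "noise": 34},
                {"signal": 17, "noise": 0},
                {"signal": 17, "noise": 0}]
    raise ValueError(problem)
```

In every variant the totals match the baseline dataset: 68 factoids, of which 34 are signal.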

As expected, changing when relevant factoids (signal) are available does, in fact, impact Maximum Timeliness, and the impact is considerable. A “more timely” distribution of relevant factoids results in a large increase in timeliness for all C2 Approaches over the Baseline case, while a “less timely” distribution results in a decrease in timeliness. The “Less Timely Problem” negatively affected the Average Correctness score for Coordinated (zero correctness), CTC-TL Collaborative (zero correctness) and Collaborative (correctness reduced to 0.18). The “More Timely Problem” did not affect Average Correctness except for the Collaborative approach (where timeliness increased to 0.35). The Edge C2 Average Correctness score was not affected by problem timeliness.

Figure 4 presents a Stacked Agility Map for each of the C2 Approaches considered here. The “stacked variable” is the Key Information Availability (timeliness of the distribution of signal v. noise). The values of the Agility Metric (and related measurements) derived from these runs correspond to the visuals from Figure 4 and are presented in Table 6.


Figure 4 - Stacked Agility Maps for Information Signal Timeliness (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Agility: Absolute | Agility: Benchmarked | Coverage: Anytime | Coverage: Always | Downgrade: Absolute | Downgrade: Relative
Coordinated C2          | 0.05 | 0.9  | 0.09 | 0    | 0.09 | 1
CTC-TL Collaborative C2 | 0.14 | 0.9  | 0.27 | 0    | 0.27 | 1
Collaborative C2        | 0.2  | 1.47 | 0.36 | 0.08 | 0.28 | 0.78
Edge C2                 | 0.6  | 1.3  | 0.9  | 0.4  | 0.5  | 0.56

Table 6 - Key Information Availability: Agility Measurements


The Absolute Agility values obtained are: Coordinated = .05, CTC-TL Collaborative = .14, Collaborative = .20 and Edge = .60. Regarding Benchmarked Agility (outcome relative to the baseline case), the highest scores were obtained by Collaborative (1.47), followed by Edge (1.3), and CTC-TL Collaborative and Coordinated (both at 0.9). In terms of coverage, the percentage of the Endeavor Space covered at least some of the time (the non-white area) for these C2 Approaches is .09, .27, .36, and .90, respectively. Only Collaborative and Edge have non-zero values for the Always Coverage metric, meaning that they always obtain at least one successful correct identification. Coordinated and CTC-TL Collaborative, on the other hand, have situations in which no correct identifications are provided.
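For readers implementing these measures, the coverage and downgrade columns appear to be arithmetically related: Absolute Downgrade equals Anytime minus Always coverage, and Relative Downgrade is the ratio of Absolute Downgrade to Anytime coverage. The sketch below encodes that inferred relationship; it is an assumption based on the tabled values, not a definition stated in the text.

```python
def downgrade(anytime: float, always: float) -> tuple[float, float]:
    """Absolute and relative downgrade inferred from coverage values.

    Absolute downgrade: the share of the Endeavor Space covered only some
    of the time. Relative downgrade: that share as a fraction of the space
    covered at all; a value of 1.0 means no region is covered in every run.
    """
    absolute = anytime - always
    relative = absolute / anytime if anytime > 0 else 0.0
    return absolute, relative

# Edge C2 in Table 6: Anytime = 0.9, Always = 0.4
abs_dg, rel_dg = downgrade(0.9, 0.4)  # absolute 0.5, relative ~0.56
```

Applied to any row of Table 6 (or of the later agility tables), this reproduces the reported downgrade columns to the stated precision.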

The levels of Agility associated with the C2 Approaches are perfectly correlated with “network-enabled-ness”: for the ‘Key Information Availability’ experiments, more network-enabled C2 Approaches are more Agile than less network-enabled approaches.

To further assess each C2 approach, we also calculated performance-related metrics, namely Shared Awareness11 (i.e., the average number of correct identifications over time) and an approach-specific measure of effectiveness. This measure takes into consideration the way that Decision Rights (roles and responsibilities) are allocated (see Table 1).

“Shared Awareness” increases with the degree of network-enabledness of the approach. For each C2 approach, the score is slightly reduced in some cases when the problem timeliness is changed, except for Edge, which always keeps the maximum score. All “Baseline” and “More Timely” runs were fully effective. This score was reduced to 0.75 for Coordinated and CTC-TL Collaborative in the “Less Timely” runs. Looking only at final scores provides a limited view of the overall performance of each approach. To add further insight to the assessment exercise, we generate Performance Maps (see Figure 5 and Figure 6) that are built in a similar way to the Agility Maps described in Figure 4, using for the y-axis the assessment variable being measured (e.g., Shared Awareness) and for the x-axis Timeliness. Additionally, two performance metrics are obtained from the Performance Maps by applying the same process as used for the agility metrics, namely: Absolute Performance and Benchmarked Performance.

The resulting performance maps for Shared Awareness and Effectiveness are presented in Figure 5 and Figure 6 respectively. Table 7 presents the overall performance metrics.

11 Measured as the average number of correct identifications in the organization. Partially correct answers are allowed.
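A minimal sketch of this Shared Awareness measure follows; scoring a partially correct answer as the fraction of the four Ws (who, what, where, when) answered correctly is an assumption, as the exact partial-credit rule is not spelled out here.

```python
def shared_awareness(answers: list[dict], truth: dict) -> float:
    """Average correctness of members' identifications across the organization.

    Each answer and the ground truth cover the four Ws; partial credit is
    the fraction of Ws answered correctly (an assumed scoring rule).
    """
    ws = ("who", "what", "where", "when")
    scores = [sum(a.get(w) == truth[w] for w in ws) / len(ws) for a in answers]
    return sum(scores) / len(scores) if scores else 0.0
```

For example, an organization in which one member has the full answer and another has two of the four Ws correct scores 0.75.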


Figure 5 - Performance Maps for Shared Awareness as a function of Key Information Availability (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

Figure 6 - Performance Maps for Approach Specific Effectiveness as a function of Key Information Availability (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Shared Awareness: Absolute | Shared Awareness: Benchmarked | Effectiveness: Absolute | Effectiveness: Benchmarked
Coordinated C2          | 0.1  | 1    | 0.67 | 0.89
CTC-TL Collaborative C2 | 0.32 | 0.83 | 0.69 | 0.91
Collaborative C2        | 0.39 | 1.16 | 0.73 | 0.99
Edge C2                 | 0.66 | 1.1  | 0.7  | 1.04

Table 7 - Key Information Availability: Performance Measurements

By looking at the Performance Maps one can see that it takes each of the C2 Approaches more time to develop shared awareness and accomplish assigned tasks when the “signal” factoids are distributed later. Reading these charts may take a bit of practice, since lower timeliness scores mean that it took the organization longer to accomplish assigned tasks. There are a number of things to look for in Performance Maps. First, did the entity reach the maximum performance level (of Shared Awareness or Effectiveness) and, if so, when did it achieve this level? Looking at Figure 5, we see that Shared Awareness develops with different timeliness values, but only Edge achieves the maximum score. This results in Edge having the highest performance score for shared awareness (0.66).


From Figure 6, we see that all C2 Approaches achieved their performance goals at some point during the time available for each of the key information timeliness conditions. Further, all C2 Approaches took the same time to achieve their designed level of performance when the signal was more timely (see, at Effectiveness = 1, where white turns to light grey). However, when the signal was less timely, the Coordinated and Edge Approaches took a bit longer than the Collaborative Approaches (where, at Effectiveness = 1, grey turns to black). Readers should note that the Collaborative Approach turns in the best performance for the Baseline case (see the dark grey coming in at Timeliness = .4).

Looking a bit more deeply into these results may provide information useful for improving an entity’s performance, because these Performance Maps also show progress over time. Hence they can help identify “laggards” and point to ways of enabling them to accomplish their assigned tasks in a more timely manner. All C2 Approaches were able to achieve their performance goals when the signal was more timely (look across the chart at Effectiveness = 1 and see where the white disappears).

These maps and their associated metrics provide considerable insight into the behavior of the different C2 Approaches under different conditions, in this case, the timeliness of distribution of signal v. noise. Although we began this discussion by explaining that the Effectiveness measure used in these Performance Maps is approach-specific, it is worth repeating: the work expected of individuals in each of these C2 Approaches differs considerably, and there will be cases when an organization that works perfectly as designed may still fail, while an organization that is not performing up to its standards may nonetheless succeed.

5.2 Impact of Agent Performance

An important consideration in selecting a particular approach to C2 is whether or not the characteristics and capabilities of the individuals are compatible with the tasks assigned and the workload required. Perhaps some C2 Approaches require “high performing” individuals while others do not. Perhaps some roles require high-performing individuals. This set of experiments varied agent performance to illustrate that this kind of manipulation can be performed and to gain insights into the appropriateness of various C2 Approaches as a function of individual performance. The agents in abELICIT are initially configured to perform in a pre-determined manner. We used this as our Baseline and created high- and low-performing agents relative to it. Low-performing agents share, post and pull fewer factoids (a slower or decreased rate of information sharing and seeking) and spend more time processing factoids (decreased cognitive performance) than the Baseline agents, while high-performing agents share, post and pull more factoids (a faster or increased rate of information sharing and seeking) and spend less time processing factoids (increased cognitive performance).
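The three agent variants can be sketched as scalings of a baseline behavioral profile. The parameter names and the symmetric scaling factor below are illustrative only, not the actual abELICIT configuration.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class AgentProfile:
    share_rate: float        # factoids shared per time unit (illustrative)
    post_rate: float         # factoids posted to websites per time unit
    pull_rate: float         # factoids pulled from websites per time unit
    processing_time: float   # time units spent processing each factoid

BASELINE = AgentProfile(share_rate=1.0, post_rate=1.0, pull_rate=1.0,
                        processing_time=1.0)

def scaled(profile: AgentProfile, factor: float) -> AgentProfile:
    """factor > 1 yields a higher-performing agent: it shares, posts and
    pulls more factoids and spends less time processing each one."""
    return replace(profile,
                   share_rate=profile.share_rate * factor,
                   post_rate=profile.post_rate * factor,
                   pull_rate=profile.pull_rate * factor,
                   processing_time=profile.processing_time / factor)

low, high = scaled(BASELINE, 0.5), scaled(BASELINE, 2.0)
```

Varying a single factor keeps the manipulation interpretable: every behavioral rate moves in one direction relative to the Baseline.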

Space does not permit a detailed review of the experimental results, but we will provide the highlights. Although it does not come as a surprise, a major takeaway is that low-performing agents can cause some C2 Approaches to fail, while other approaches, although their Agility and Timeliness are degraded, still maintain a level of success. Of note is that, for all C2 Approaches, high-performing agents improved Maximum Timeliness. The Agility Maps and measurements are presented in Figure 7 and Table 8.


Figure 7 - Agility Maps as a function of Agent Performance (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Agility: Absolute | Agility: Benchmarked | Coverage: Anytime | Coverage: Always | Downgrade: Absolute | Downgrade: Relative
Coordinated C2          | 0.02 | 1.25 | 0.05 | 0    | 0.05 | 1
CTC-TL Collaborative C2 | 0.07 | 1.25 | 0.15 | 0    | 0.15 | 1
Collaborative C2        | 0.16 | 1.03 | 0.24 | 0.09 | 0.15 | 0.62
Edge C2                 | 0.29 | 1.68 | 0.6  | 0.07 | 0.53 | 0.88

Table 8 - Agent Performance Runs: Agility Measurements

The agility results indicate that agility increases with more network-enabled approaches: Edge has the best agility score (0.29), followed by Collaborative (0.16), CTC-TL Collaborative (0.07) and Coordinated (0.02). Overall performance increased over the baseline (Benchmarked Agility > 1), meaning that high-performing agents were an enabler of agility. Only Collaborative and Edge provided complete correct identifications in all runs (relative downgrade below 1).

The levels of Agility associated with the C2 Approaches are perfectly correlated with “network-enabled-ness”: for the ‘Agent Performance’ experiments, more network-enabled C2 Approaches are more Agile than less network-enabled approaches.

The resulting performance maps for Shared Awareness and Effectiveness are presented in Figure 8 and Figure 9 respectively. Table 9 presents the overall performance metrics.

Figure 8 - Performance Maps for Shared Awareness as a function of Agent Performance (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)


Figure 9 - Performance Maps for Approach Specific Effectiveness as a function of Agent Performance (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Shared Awareness: Absolute | Shared Awareness: Benchmarked | Effectiveness: Absolute | Effectiveness: Benchmarked
Coordinated C2          | 0.11 | 1.2  | 0.54 | 0.86
CTC-TL Collaborative C2 | 0.33 | 1.06 | 0.40 | 0.28
Collaborative C2        | 0.36 | 0.99 | 0.46 | 0.25
Edge C2                 | 0.48 | 1.05 | 0.51 | 1.1

Table 9 - Agent Performance: Performance Measurements

By looking at the Performance Maps one can see that it takes each of the C2 Approaches more time to develop shared awareness and accomplish assigned tasks when the agents’ performance is decreased. Additionally, more network-enabled approaches achieve higher performance on shared awareness than less network-enabled approaches.

Regarding effectiveness, all approaches are negatively affected when the agents’ performance is decreased, but were, overall, effective (final scores above 0.9).

5.3 Impact of Infostructure Degradation

This set of experiments was designed to explore the ability of the various C2 Approaches to remain effective when faced with infostructure degradation, that is, with impeded connectivity between individuals and/or websites.

The number of direct connections between individuals varies considerably across the C2 Approaches, from 28 for Coordinated to 136 in Edge organizations. When website connections are added, these numbers range from 48 to 204. The experiment referenced here consisted of 16 runs, each of which involved the removal of one cross-echelon direct connection and a related website link12.

12 Four runs involved the removal of a link between the overall coordinator and a team leader plus the link between the team leader and his team’s website. Twelve of the runs involved the removal of a link between each team member and their team leader plus their link to their team’s website.
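The sixteen cases described in this footnote can be enumerated from the ELICIT hierarchy assumed here (one cross-team coordinator, four team leaders, twelve team members); the node and website names below are illustrative.

```python
def link_removal_runs():
    """Enumerate the 16 infostructure-degradation runs: each run removes one
    cross-echelon person-to-person link plus the related website link."""
    coordinator = "coordinator"
    runs = []
    for t in range(1, 5):
        leader, website = f"leader{t}", f"website{t}"
        # Four runs: cut coordinator<->leader plus the leader's website link.
        runs.append({(coordinator, leader), (leader, website)})
        # Twelve runs: cut member<->leader plus the member's website link.
        for m in range(1, 4):
            member = f"member{t}.{m}"
            runs.append({(member, leader), (member, website)})
    return runs
```

Four teams, each contributing one leader case and three member cases, yield the sixteen runs.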


We can see that the Coordinated C2 Approach failed to find the correct solution in 11 of the 16 runs, the CTC-TL Collaborative Approach failed in 5 of the runs, while the Collaborative Approach failed in only 1 of the 16 runs. The Edge Approach never failed. The overall impact on ‘Average Correctness’ is presented in Figure 10.

C2 Approach             | Average Correctness: Baseline | Average Correctness: Mean (1 link down) | Downgrade
Coordinated C2          | 0.06 | 0.02 | 0.04
CTC-TL Collaborative C2 | 0.29 | 0.15 | 0.15
Collaborative C2        | 0.35 | 0.28 | 0.08
Edge C2                 | 1.00 | 1.00 | 0.00

Figure 10 - Impact on Average Correctness as a function of Infostructure Degradation (the figure plots the baseline performance against the one-link-down performance, mean value)

When eliminating one link, all approaches except Edge decreased their Average Correctness from the baseline results. The Collaborative approaches were the most affected (with decreases of 0.15 and 0.08, respectively), followed by Coordinated (0.04). Nonetheless, more network-enabled approaches achieve better Average Correctness results than less network-enabled approaches when facing Infostructure Degradation. The resulting Agility Maps and corresponding measurements are presented in Figure 11 and Table 10 respectively.

Figure 11 - Agility Maps as a function of Infostructure Degradation (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)


C2 Approach             | Agility: Absolute | Agility: Benchmarked | Coverage: Anytime | Coverage: Always | Downgrade: Absolute | Downgrade: Relative
Coordinated C2          | 0.01 | 0.47 | 0.03 | 0   | 0.03 | 1
CTC-TL Collaborative C2 | 0.04 | 0.72 | 0.09 | 0   | 0.09 | 1
Collaborative C2        | 0.13 | 0.8  | 0.25 | 0   | 0.25 | 1
Edge C2                 | 0.32 | 1.66 | 0.7  | 0.2 | 0.5  | 0.71

Table 10 - Infostructure Degradation: Agility Measurements

The most network-enabled approaches achieve the best agility scores: Edge leads with 0.32, followed by Collaborative (0.13), CTC-TL Collaborative (0.04) and Coordinated (0.01). Edge, in fact, improved its performance13, as indicated by its Benchmarked Agility (1.66). Edge has the highest map coverage (0.7) and is the only approach that is always effective in part of the Endeavor Space (its relative downgrade is not 1).

The levels of Agility associated with the C2 Approaches are perfectly correlated with “network-enabled-ness”: for the ‘Infostructure Degradation’ experiments, more network-enabled C2 Approaches are more Agile than less network-enabled approaches.

Performance Maps were generated for Shared Awareness and Effectiveness and are presented in Figure 12 and Figure 13 respectively. Table 11 presents the overall performance metrics.

Figure 12 - Performance Maps for Shared Awareness as a function of Infostructure Degradation (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

13 Edge contains many redundant links between individuals, thus any single link is redundant. Removing only one link results in a more efficient Edge.


Figure 13 - Performance Maps for Approach Specific Effectiveness as a function of Infostructure Degradation (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Shared Awareness: Absolute | Shared Awareness: Benchmarked | Effectiveness: Absolute | Effectiveness: Benchmarked
Coordinated C2          | 0.1  | 1.04 | 0.42 | 0.7
CTC-TL Collaborative C2 | 0.33 | 1.02 | 0.62 | 0.91
Collaborative C2        | 0.34 | 0.92 | 0.64 | 0.9
Edge C2                 | 0.44 | 0.96 | 0.52 | 1.09

Table 11 - Infostructure Degradation: Performance Measurements

Shared awareness performance increases with the degree of network-enabledness of the approach. Regarding effectiveness, most approaches achieved success, but the less network-enabled approaches failed more often than the more network-enabled ones.

5.4 Impact of Organization Disruption

This final experimentation set was designed to explore the ability of the various C2 Approaches to remain effective when faced with organization disruption caused by disconnecting individuals from the organization. In particular, one individual is isolated from the organization (simulating a disabled individual), with the result that the individual’s factoids become inaccessible to the organization, a more stringent condition than the previous one (Infostructure Degradation). This setting results in 17 possible cases (and runs).
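The seventeen disruption cases can be enumerated in the same spirit, assuming the standard ELICIT organization of one cross-team coordinator, four team leaders and twelve team members (names illustrative):

```python
def isolation_runs():
    """Enumerate the 17 organization-disruption runs, one per individual."""
    individuals = (["coordinator"]
                   + [f"leader{t}" for t in range(1, 5)]
                   + [f"member{t}.{m}" for t in range(1, 5) for m in range(1, 4)])
    # Each run isolates one individual; their factoids become inaccessible.
    return [{"isolated": person} for person in individuals]
```

One coordinator, four leaders and twelve members give the seventeen runs.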

The Coordinated C2 Approach failed to find the correct solution in 10 of the 17 runs, and the remaining approaches failed in 7 (8 in the case of Collaborative). Thus, the more mature approaches were much more affected in this setting than in the previous one.


The overall impact in ‘Average Correctness’ is presented in Figure 14.

C2 Approach             | Average Correctness: Baseline | Average Correctness: Mean (1 node down) | Downgrade
Coordinated C2          | 0.06 | 0.02 | 0.04
CTC-TL Collaborative C2 | 0.29 | 0.09 | 0.20
Collaborative C2        | 0.35 | 0.09 | 0.26
Edge C2                 | 1.00 | 0.30 | 0.70

Figure 14 - Impact on Average Correctness as a function of Organization Disruption (the figure plots the baseline performance against the one-node-down performance, mean value)

When eliminating one node, all approaches are affected: looking into the results for average correctness, Coordinated decreases to almost zero (0.02), CTC-TL Collaborative and Collaborative decrease to 0.09 and Edge to 0.30 (a decrease of 0.70). Still, more network-enabled approaches achieve better Average Correctness results than less network-enabled approaches (when facing Organization Disruption). The resulting Agility Maps and corresponding measurements are presented in Figure 15 and Table 12 respectively.

Figure 15 - Agility Maps as a function of Organization Disruption (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Agility: Absolute | Agility: Benchmarked | Coverage: Anytime | Coverage: Always | Downgrade: Absolute | Downgrade: Relative
Coordinated C2          | 0.01 | 0.62 | 0.03 | 0 | 0.03 | 1
CTC-TL Collaborative C2 | 0.04 | 0.59 | 0.09 | 0 | 0.09 | 1
Collaborative C2        | 0.04 | 0.22 | 0.16 | 0 | 0.16 | 1
Edge C2                 | 0.11 | 0.55 | 0.4  | 0 | 0.4  | 1

Table 12 - Organization Disruption: Agility Measurements


All approaches were negatively affected and the most network-enabled approaches achieve the best agility scores: Edge leads with 0.11, followed by CTC-TL Collaborative and Collaborative (both with 0.04) and then Coordinated (0.01). No approach was able to be successful in all circumstances (relative downgrade is 1 for all).

The levels of Agility associated with the C2 Approaches are perfectly correlated with “network-enabled-ness”: for the ‘Organization Disruption’ experiments, more network-enabled C2 Approaches are more Agile than less network-enabled approaches.

Performance Maps were generated for Shared Awareness and Effectiveness and are presented in Figure 16 and Figure 17 respectively. Table 13 presents the overall performance metrics.

Figure 16 - Performance Maps for Shared Awareness as a function of Organization Disruption (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

Figure 17 - Performance Maps for Approach Specific Effectiveness as a function of Organization Disruption (one panel per C2 Approach: Coordinated C2, CTC-TL Collaborative C2, Collaborative C2, Edge C2)

C2 Approach             | Shared Awareness: Absolute | Shared Awareness: Benchmarked | Effectiveness: Absolute | Effectiveness: Benchmarked
Coordinated C2          | 0.1  | 1    | 0.41 | 0.68
CTC-TL Collaborative C2 | 0.31 | 0.9  | 0.61 | 0.87
Collaborative C2        | 0.3  | 0.79 | 0.59 | 0.89
Edge C2                 | 0.37 | 0.79 | 0.44 | 0.92

Table 13 - Organization Disruption: Performance Measurements


Shared awareness performance increases with the degree of network-enabledness of the approach, except for the slight decrease (0.01) from CTC-TL Collaborative to Collaborative. The shared awareness Performance Maps (Figure 16) clearly illustrate the coverage areas and their reduction (grey cells) due to the manipulations made.

Regarding effectiveness (Figure 17), all approaches suffered a decrease in performance, visible through the grey cells at the highest scores and through the benchmarked values (< 1).

6 CONCLUSIONS

In this paper, we have taken a look at what is needed to “Operationalize C2 Agility” and expressed these needs in the form of an Operationalizing Foundation consisting of four cornerstones (C2 Theory, the NATO NEC C2 Maturity Model (N2C2M2), C2 Agility Metrics, and ways to observe and calculate the values for these metrics).

We have tested the readiness of this foundation to support systematic efforts to measure and analyze the Agility of entities by designing and conducting a series of abELICIT experiments. Of particular note is that we were able to “create” a new C2 Approach, instantiate it in ELICIT, and obtain experimental results that were completely consistent with C2 Theory. Specifically, based on the set of experiments conducted (covering problem timeliness, agent performance, infostructure degradation and organization disruption), it was observed that:

• More network-enabled approaches achieved higher shared awareness than less network-enabled approaches.

• More network-enabled approaches were more agile than less network-enabled approaches.

This paper also strengthens this operationalizing foundation by introducing additional ways to visualize Agility and its enablers (e.g. shared awareness) in the form of Agility Maps and Performance Maps as well as associated metrics that include an approach-dependent measure of effectiveness that measures the extent to which an entity or collective is functioning as intended. This permits us to ascertain whether it is sufficient for an entity to function as intended or if an entity’s design is simply inappropriate for a particular set of missions and circumstances.

We conclude that our community is fully prepared to support efforts to “Operationalize C2 Agility”. There is no doubt that C2 Agility theory can be further developed, tested, and refined, and that improvements can certainly be made in our tools, methods, and measures; however, what is most urgently needed is more experimentation, particularly with actual organizations and systems. In other words, we need to focus now on applying what we know, and our analytical skills, to improving the State of the Practice. We recognize that this is not something that is in the control of the research community, and that there are perhaps still many individuals who do not recognize the potential value associated with improving C2 Agility. Therefore, in addition to seeking partners in the operational community, we need to improve our simulation capabilities so that we are able to tell a better story, and to seek partners in the education and training communities.


7 BIBLIOGRAPHY

[1] Alberts, David S. 2011. The Agility Advantage: A survival guide for complex enterprises and endeavors. CCRP Publication Series.

[2] Manso, Marco and Bárbara Manso. 2010. “N2C2M2 Experimentation and Validation: Understanding Its C2 Approaches and Implications.” Paper presented at the 15th ICCRTS, Santa Monica, USA, June 22-24.

[3] Manso, Marco. 2012. “N2C2M2 Validation using abELICIT: Design and Analysis of ELICIT runs using software agents.” Paper presented at the 17th ICCRTS, Fairfax, USA, June 19-21.

[4] NATO SAS-065. 2010. NATO NEC C2 Maturity Model. CCRP Publication Series.

[5] NATO SAS-085. 2011. SAS-085 Working definitions and explanations April 21, 2011. Internal Document.

[6] Ruddy, Mary. 2008. ELICIT 2.2.1 Web-based Software Guide for Human Experiment Directors. Parity Communications, Inc. Presented at the 14th ICCRTS, Washington DC, USA, 2009.

[7] Ruddy, Mary. 2011. ELICIT 2.5 Software Guide. CCRP.


8 ANNEX - CALCULATING AGILITY

8.1 Agility Measurements

The agility map is a two-dimensional matrix whose axes are two variables that determine success. For the ELICIT agility experiments, the following variables are used:

• For the x-axis, ‘Required Timeliness’ is used, which measures the time taken by the first subject to determine the correct answer (in all 4 Ws), expressed as timeliness. If no subject determines the correct answer, its value is zero.

• For the y-axis, ‘Average Number of Correct Answers’ is used, which measures the average number of complete correct answers (in all 4 Ws) across the organization.

Each run generates an agility map. To visually depict agility across multiple runs, a final agility map is built by overlaying all runs, as described next.

In an agility map, for run A, the value of the cell at position i and j is calculated as follows:

$$\text{cell\_agility\_map\_run}_A(i,j) = \begin{cases} 1 & \text{if } \text{criteria}(i,j) \text{ is met} \\ 0 & \text{otherwise} \end{cases}$$

In which criteria(i,j) is met if the average number of correct answers (j) is achieved at a given timeliness (i). The value of the cell at position i and j for the overall agility map, aggregating all runs, is calculated as follows:

$$\text{cell\_agility\_map}(i,j) = \frac{\sum_{\text{all runs}} \text{cell\_agility\_map\_run}(i,j)}{\text{number\_of\_runs}}$$

The division by the ‘Number of Runs’ is made to normalize the scale.
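The two formulas above can be illustrated with a short sketch (Python is ours, not the paper's; the function names and the cell-filling convention, in which a run fills every cell at or below its achieved timeliness and correctness, are our reading of the map construction described here):

```python
def agility_map_run(timeliness, correctness, n=10):
    """Build one single-run agility map: cell (i, j) is 1 when the run's
    achieved timeliness and correctness meet or exceed the cell's
    thresholds ((i+1)/n and (j+1)/n respectively), 0 otherwise."""
    cell_map = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if timeliness >= (i + 1) / n and correctness >= (j + 1) / n:
                cell_map[i][j] = 1.0
    return cell_map

def overlay(maps):
    """Aggregate per-run maps cell by cell and normalize by the number of runs."""
    n_runs, n = len(maps), len(maps[0])
    return [[sum(m[i][j] for m in maps) / n_runs for j in range(n)]
            for i in range(n)]
```

For example, a run that achieved timeliness 0.5 and correctness 0.3 fills the 5 x 3 block of cells at or below those thresholds.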

Two examples of agility maps are provided in Figure 18. The one at the left refers to a single run (and includes annotations explaining the scale and the measurement obtained) and the one at the right results from overlaying three agility maps.

[Figure: annotated agility maps marking cells at Max Average Correctness (=1) / Max Timeliness (=1); Max Average Correctness (=1) / Min Timeliness (=0.1); and Med Average Correctness (=0.3) / Med Timeliness (=0.5)]

Figure 18 – Agility Maps Example


In the chart at the left, the organization achieved an average correctness below 0.3 and a timeliness below 0.5 (thus filling the cells at and below those values). In the chart at the right, overlaying three runs results in the different shades of grey: the black cells refer to the space that all runs met, namely an average correctness below 0.7 and a timeliness below 0.1; the dark grey cells refer to the space that most runs (2 in this case) comply with, namely an average correctness of 1 and a timeliness below 0.2; and the light grey cells refer to the space that only a few runs (1 in this case) comply with, namely an average correctness of 1 and a timeliness below 0.6. For both charts, the white cells refer to conditions that were never met in any run.

In this work, the agility maps used are 10x10. For purposes of illustration, a calculation example for a 3x3 agility map, obtained from the three single-run maps depicted in Figure 19, is presented next.

Map Run1:
1 0 0
1 0 0
1 0 0

Map Run2:
0 0 0
1 0 0
1 0 0

Map Run3:
1 1 0
1 1 0
1 1 0

Resulting agility map (not normalized):
2 1 0
3 1 0
3 1 0

Resulting agility map (normalized):
0.7 0.3 0
1   0.3 0
1   0.3 0

Figure 19 – Agility Map Calculation Example
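The overlay in Figure 19 can be reproduced in a few lines (a Python sketch of our own; the variable names are illustrative):

```python
# Figure 19 example: three 3x3 single-run agility maps, overlaid and normalized.
run1 = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]
run2 = [[0, 0, 0], [1, 0, 0], [1, 0, 0]]
run3 = [[1, 1, 0], [1, 1, 0], [1, 1, 0]]
runs = [run1, run2, run3]

# Sum the per-run maps cell by cell, then divide by the number of runs.
total = [[sum(r[i][j] for r in runs) for j in range(3)] for i in range(3)]
normalized = [[total[i][j] / len(runs) for j in range(3)] for i in range(3)]

print(total)  # [[2, 1, 0], [3, 1, 0], [3, 1, 0]]
```

The normalized map contains the values 2/3 (≈0.7) and 1/3 (≈0.3) shown, rounded, in Figure 19.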

The resulting matrix can be represented visually using a grey scale (black = 1, white = 0, with different tones of grey for the values in between). An example of three 3x3 agility maps pertaining to single runs, and a fourth representing the outcome of overlaying the three runs, is presented in Figure 20.

[Figure panels: Baseline (single run); Low performing agents (single run); High performing agents (single run); Agility Map (overlaying the three runs)]

Figure 20 – Example of Agility Maps

The formulas to calculate agility are described next.

Calculating Agility

Absolute agility can be obtained by summing all the cells of the agility map and dividing the result by the total number of cells (to normalize it).

$$\text{Agility}_{\text{absolute}} = \frac{\sum_{i,j} \text{cell\_agility\_map}(i,j)}{\text{number\_of\_cells}}$$
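A minimal Python sketch of this calculation (ours, not the paper's), applied to the normalized overlay map of the Figure 19 example:

```python
def agility_absolute(agility_map):
    """Absolute agility: sum of all cells of the (normalized) agility
    map divided by the number of cells."""
    cells = [v for row in agility_map for v in row]
    return sum(cells) / len(cells)

# Normalized overlay of the three runs in Figure 19 (exact fractions).
amap = [[2/3, 1/3, 0], [1, 1/3, 0], [1, 1/3, 0]]
print(round(agility_absolute(amap), 2))  # 0.41
```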


Thus, using the example in Figure 19 (with the exact, unrounded normalized values):

$$\text{Agility}_{\text{absolute}} = \frac{2/3 + 1/3 + 1 + 1/3 + 1 + 1/3}{9} = \frac{11/3}{9} \approx 0.41$$

Benchmarked agility is calculated as follows:

$$\text{Agility}_{\text{benchmarked}} = \frac{\sum_{\text{all runs except baseline}} \text{number\_of\_cells\_where\_successful}(\text{run})}{(\text{number\_of\_runs} - 1) \times \text{number\_of\_cells\_where\_successful}(\text{baseline})}$$

Benchmarked agility is 1 if performance, when compared with the baseline run, does not change; less than 1 if performance degrades; and greater than 1 if performance improves. Thus, using the example in Figure 19 (with Run1 as the baseline):

$$\text{Agility}_{\text{benchmarked}} = \frac{2 + 6}{(3 - 1) \times 3} = \frac{8}{6} \approx 1.33$$
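The benchmark calculation can be sketched as follows (a Python illustration of our own, reproducing the Figure 19 numbers; function and variable names are illustrative):

```python
def agility_benchmarked(baseline_map, other_maps):
    """Benchmarked agility: cells covered by the non-baseline runs,
    relative to (number_of_runs - 1) times the cells covered by the
    baseline run. A value of 1 means no change versus the baseline."""
    baseline_cells = sum(v for row in baseline_map for v in row)
    other_cells = sum(v for m in other_maps for row in m for v in row)
    # number_of_runs - 1 equals the count of non-baseline runs.
    return other_cells / (len(other_maps) * baseline_cells)

run1 = [[1, 0, 0], [1, 0, 0], [1, 0, 0]]   # baseline: 3 covered cells
run2 = [[0, 0, 0], [1, 0, 0], [1, 0, 0]]   # 2 covered cells
run3 = [[1, 1, 0], [1, 1, 0], [1, 1, 0]]   # 6 covered cells
print(round(agility_benchmarked(run1, [run2, run3]), 2))  # 1.33
```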

Furthermore, additional metrics are used to assess the agility of an organization:

• Agility Map Covered (A-MC): the percentage of cells of the complete map in which an organization is successful in one or more runs.

• Agility Map Always Covered (A-MAC): the percentage of cells of the complete map in which an organization is successful in all runs.

• Agility Map Downgrade (A-MD): the percentage of cells of the complete map in which an organization is successful only under specific conditions, i.e., in some runs but not in all. A-MD can be obtained by subtracting A-MAC from A-MC.

• Agility Map Relative Downgrade (A-MRD): the percentage of cells, among those in which an organization is successful at some point, in which it is not successful in all runs (i.e., A-MD divided by A-MC).
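These four metrics can be computed directly from a set of single-run maps, as in the following sketch (Python is ours; the reading of A-MRD as A-MD relative to A-MC is our interpretation of the definition above):

```python
def map_metrics(run_maps):
    """Compute A-MC, A-MAC, A-MD, and A-MRD from single-run agility maps."""
    rows, cols = len(run_maps[0]), len(run_maps[0][0])
    n_cells = rows * cols
    covered_any = sum(1 for i in range(rows) for j in range(cols)
                      if any(m[i][j] for m in run_maps))   # successful in >= 1 run
    covered_all = sum(1 for i in range(rows) for j in range(cols)
                      if all(m[i][j] for m in run_maps))   # successful in all runs
    a_mc = covered_any / n_cells
    a_mac = covered_all / n_cells
    a_md = a_mc - a_mac                    # successful only under some conditions
    a_mrd = a_md / a_mc if a_mc else 0.0   # downgrade relative to covered cells
    return a_mc, a_mac, a_md, a_mrd
```

Applied to the three run maps of Figure 19, this yields A-MC = 6/9 and A-MAC = 2/9.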

The performance of an entity will also be assessed using performance maps, as described next.

8.2 Performance Map

The performance map is used to assess the performance of an organization. It is similar to the agility map in that it is also a two-dimensional matrix and its metrics are calculated in the same way. The performance map is built by measuring a variable relevant to the assessment of the organization (e.g., shared awareness) and its evolution over time (or, more specifically, its timeliness).

More specifically:

• The performance variable is represented on the Y-axis, with a scale from 0 to 1, where 0 means that no subject provided any correct identification and 1 means that all subjects identified correctly.

• Timeliness is represented on the X-axis, with a scale from 0 to 1, where 1 is the beginning of the experiment and 0 is the end of the experiment.

• The values are represented as fractions (0 to 1) with a resolution of 0.1. Therefore, the performance maps (and agility maps) contain 100 (10x10) cells.



In a performance map, the value of the cell at position i and j is calculated as follows:

$$\text{cell\_performance\_map}(i,j) = \begin{cases} 1 & \text{if } \text{condition}(j) \text{ is fulfilled at } \text{timeliness}(i) \\ 0 & \text{otherwise} \end{cases}$$

For example, in a 10x10 map, cell(2, 3) = 1 means that the organization was successful in achieving a score of up to 30% at timeliness 0.2.
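A performance map can be built from a time series of the performance variable, as in the following sketch (a Python illustration of our own; the sample format and names are assumptions, with timeliness = 1 at the start of the run and 0 at the end, as defined above):

```python
def performance_map(samples, n=10):
    """Build a performance map from (timeliness, value) samples, both in
    [0, 1]. Cell (i, j) is 1 if the value level (j+1)/n was reached at or
    before timeliness (i+1)/n, i.e., while timeliness was still that high."""
    pmap = [[0] * n for _ in range(n)]
    for t, value in samples:
        for i in range(n):
            for j in range(n):
                if t >= (i + 1) / n and value >= (j + 1) / n:
                    pmap[i][j] = 1
    return pmap

# Illustrative shared-awareness trajectory: 10% at the start (t = 1.0),
# 50% at half-time (t = 0.5), 100% at t = 0.2.
pmap = performance_map([(1.0, 0.1), (0.5, 0.5), (0.2, 1.0)])
```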

An example of two performance maps is presented in Figure 21. The performance variable being measured is ‘Shared Awareness’. Black cells represent conditions in which an organization is successful (achieved a certain amount of shared awareness within a specific timeframe) while white cells represent the opposite condition (no success).

[Figure: annotated performance maps marking cells at Max Shared Awareness (=1) / Max timeliness (=1); Max Shared Awareness (=1) / Min timeliness (=0.1); and Low Shared Awareness (=0.2) / Medium timeliness (=0.7)]

Figure 21 – Performance Maps Example

The chart at the left shows how the axes map to values: the value of the corresponding variable is obtained by dividing the axis index by 10. It may be derived from the figure that 10% shared awareness was obtained at the beginning of the experiment (timeliness = 1.0), that 50% shared awareness was obtained close to the half-time of the run (timeliness = 0.5), and that maximum shared awareness (100%) was obtained when timeliness was 0.2. Thus, the more area in black, the better the performance.

The chart at the right is built by stacking/overlaying performance maps, in a similar way as for the agility maps.


The performance metrics use the same definitions and formulas as those defined for agility. The mapping is presented in Table 14.

Performance Metric                         | Corresponding Agility Metric
Absolute Performance                       | Absolute Agility
Relative Performance                       | Relative Agility
Performance Map Covered (P-MC)             | Agility Map Covered (A-MC)
Performance Map Always Covered (P-MAC)     | Agility Map Always Covered (A-MAC)
Performance Map Downgrade (P-MD)           | Agility Map Downgrade (A-MD)
Performance Map Relative Downgrade (P-MRD) | Agility Map Relative Downgrade (A-MRD)

Table 14 – Mapping between Performance and Agility metrics