


Journal of Visual Languages and Computing 17 (2006) 292–301
www.elsevier.com/locate/jvlc
doi:10.1016/j.jvlc.2006.04.003

Cognitive dimensions ‘beyond the notation’

Marian Petre

Centre for Research in Computing, Computing Department, Open University, Milton Keynes MK7 6AA, UK

Abstract

This personal reflection on cognitive dimensions (CDs) over the past decade is in three parts: an account of how empirical studies of professional software developers informed the development of CDs in the early 1990s; an articulation of unresolved issues ‘beyond the notation’ which were emphasized by the empirical studies and which might yet be addressed by CDs; and a speculation on the application of facet theory to CDs as a possible approach to those issues.

© 2006 Elsevier Ltd. All rights reserved.

Keywords: Cognitive dimensions; Empirical studies; Software development; Facet theory

1. Introduction

From the time I joined Thomas Green in his formulation of cognitive dimensions (CDs) in 1989, our discussions were bedded in examples and empirical results. What we both wanted to know was how notations (or, more broadly, information artefacts) work when they do, and why they don’t when they fail. CDs were an attempt to capture and articulate these issues. Thomas had been drawing on his extensive knowledge of notation types and examples, and I brought along my questions and examples from empirical studies of professional software developers’ use of programming languages and other representations to solve problems. In order to make sense to each other—and especially when we didn’t—we referred continually to examples.

The ideal for CDs was to make cognition-related attributes of notations evident, in a way that related those attributes concretely and clearly to how notation users employ and experience notations. We strove for a theory that could reveal hidden attributes of the concrete.


We needed user-relevant dimensions, and we needed to express them or interpret them into forms that were user-friendly. That drove the ‘straw tests’ in the 1996 paper, and it carried through later in the work Thomas did with Alan Blackwell on the CDs questionnaire [1].

This paper reflects on how empirical studies of professional software developers have informed the development of CDs—and have raised persistent questions for CDs. The remainder of this paper is in three sections: the first recalls how empirical studies contributed to the CDs portrayed in the 1996 paper; the second considers unresolved issues, especially issues ‘beyond the notation’ which might yet be addressed by CDs; and the third explores a possible approach to those issues.

2. Using lessons from empirical studies of professional software developers

At the time Thomas Green drew me into his development of CDs, I was spending time observing professional software developers in an attempt to understand something about the relationships between their reasoning and the ways they represented their solutions, whether as programs or in some other form. As a result, there was ample opportunity to ‘float’ some of the CDs ideas with them. What was striking was their response to the CDs, which appeared to offer names for things these expert, reflective, software developers had thought about, but which they had never given a name. A common reaction was that the developers immediately recognized the meaning of the CD, and applied it to their own, already existing, but preverbal, concepts. For example, one team, when introduced to the notion of ‘viscosity’, responded: “Oh, so that’s what it’s called”. A week later, the term ‘viscosity’ had been adopted seamlessly into the team’s vocabulary. Moreover, they lost no time in explaining to us that, although low viscosity was usually desirable, there came a point in a project when the major design decisions were made, and where one wanted the design and its representation to stabilize. At that point—which they termed “the congealing point”—the developers wanted the representation to resist change, to increase in viscosity.

In turn, our observations of these software developers contributed directly to the identification of new CDs, two of which appeared in the 1996 paper: secondary notation and juxtaposability.

2.1. Secondary notation

One thing which came through strongly was the notion of secondary notation: things which are not formally part of a notation but which are nevertheless used to interpret it, such as conventions (e.g., reading a circuit diagram left-to-right and top-to-bottom), use of locality (i.e., placing logically related items near each other), and labelling. The notion of secondary notation emerged from a combination of studies: observations of professional hardware and software development teams [2] and experiments comparing performance using graphical and textual programming notations [3].

Secondary notation was highlighted particularly by the ways professionals work with circuit diagrams and comparable notations, such as the boxes-and-wires part of LabView. Key differences between the notations—and hence in the ways users interacted with them—illustrated the role of secondary notation:

- The concept of a functional grouping was better articulated in electronics than in programming.

- The inter-relationships between objects tended to be richer in circuit diagrams than in visual programming, and as a result circuit diagrams made more use of locality to express relationships.
- Electronics CAD systems had better drawing environments, which allowed users to manipulate elements such as locality, grouping, and spacing more easily, and so users applied these cues more expressively.

Secondary notation also showed up as novice–expert differences. Experts tend to exploit secondary notation more than novices in both composition and comprehension. Experts could reliably distinguish between diagrams created by experts and novices (of the same content), because experts use secondary notation to emphasize the important functions and relationships captured in their diagrams. In reading, experts were able to exploit secondary notation cues to improve their performance and to help them focus on elements which were important or relevant to their comprehension. Experts were also more adept at adapting their strategies to the task, including improvising ‘work-arounds’ for limitations in a notation (such as using fingers and the cursor as memory aids while searching diagrams on screen).

2.2. Juxtaposability

Another notion which emerged strongly was juxtaposability: the ability to set side-by-side different representations, or different parts of a given representation. The need for juxtaposability is most apparent in large information artefacts (like software), when one needs to relate or compare dispersed information.

The notion of juxtaposability emerged from observations of professional programmers’ practice and representations for ideas capture [4]. Key features of the sketches experts use to capture early design ideas concern setting elements side-by-side: mixing representations on a single sheet, using one representation to elaborate on some aspect of a different representation, and mixing levels of abstraction within one sketch. The juxtaposition of different views is used to highlight and explicate design issues and decisions. Experts also use juxtaposition of different representations to highlight omissions: one representation may make explicit and accessible what another captures less directly.

The significance of juxtaposability became evident in expert practice, particularly in the ways they used existing editing and tracing tools and in the tools they built to support comprehension and exploration of software behaviour. Juxtaposability relates to comprehension, tracing, and debugging activities, when programmers track software behaviour, for example tracing data flow through a program, examining the roles and uses of key variables, tracking how parameter values are set and when functions or procedures are invoked, and so on. Many of the visualizations experts create for themselves concern identifying and juxtaposing functionally related elements of software—creating at-need structural abstractions over the artefact—and relating those to specific program behaviours [5]. A variety of standard editing tools can help identify dispersed elements (e.g., searching on a variable name); but experts’ own tools tend to exploit domain knowledge to trace phenomena such as conceptual variables across the whole design, including through all levels of abstraction, regardless of name changes.


Such tools use embodied domain information to identify connections, for example having cognizance of how given structures are used and of what typical uses signify, in order to operate across levels.

3. Beyond the notation: unresolved challenges in CDs

CDs are a useful concept and metalanguage, providing an account which is derived from an interaction between theory and experience. They reflect, however, just one aspect of the design process. It is their relationship to other aspects of the design process—particularly those contiguous with representation, such as convention and process—that has fuelled so much discussion. Our biggest struggle centred on scope: where does the notation end, and how much does a CDs analysis include? Many of the big unresolved issues concern not just dimensions of notation, but also cognitive issues of notations in use, and of their context of use.

Some specific examples follow, again examples which arise from empirical studies of software professionals. After that, a suggestion is offered of a way to tie these aspects together formally by adopting the essence of facet theory.

3.1. Trade-offs and context of use

A regular feature of our collaboration was discussions about trade-offs among CDs, and to what extent context of use is considered—a series of debates we never resolved fully. Thomas had long identified ‘simple’ trade-offs among the dimensions. For example, an editor can reduce repetition viscosity by introducing abstractions, such as global search-and-replace. The trade-off is between the effort required to effect global change manually, and the cognitive overhead associated with comprehending and applying the abstraction. Often, the trade-off occurs between different ‘layers’, where ‘layers’ refer to how much is included in the analysis: just the notation (e.g., a programming language), or a tool within which it is used (e.g., an editor), or a larger system of which both the notation and the tool are part (e.g., a programming environment), and so on. A cost within a notation may be ameliorated by a facility in another layer, as in the viscosity/global search-and-replace example. However, the exchange between layers cuts both ways: notational challenges may be addressed in the broader context, but the broader context may also introduce new challenges. There is much more to explore about this interaction among layers.

Studies of professional software developers also brought into focus the many different ways in which context affected CDs analyses and interpretations. Trade-offs are context-sensitive not just in terms of layers, but in other ways as well. Clearly, there is a task-sensitivity: the value one places on a notational challenge and its potential solution (and the resultant trade-off of costs and benefits) varies depending on the task, on what the task’s focus is (and hence what the user’s goals are), and on what information/representation needs it emphasizes. So, for example, juxtaposability is not particularly relevant for very short programs but has high value during the comprehension or debugging of very large ones.

Further, trade-offs may be time-sensitive: how one assesses a trade-off may change over time, as the user’s focus and goals change, subject to where in the process one is, the point during a task at which the notation is being used, and for what. This was illustrated by the ‘congealing point’ story, in which that programmer’s values shifted as he moved through different phases of the task.


3.2. Processes that use notation

Notations and representations don’t exist in a vacuum: they are used. One of the things that distinguishes experts is their deliberate and reflective use of representations: to augment their memories, to scaffold their reasoning (e.g., to support systematic exploration of the solution space), to change their perspective in order to reveal different aspects of a problem, and so on. The cognition-related aspects of notation are coloured by the processes and practices within which they are used—process might be thought of as another ‘layer’.

Experts demonstrated a variety of processes that use notation. An example was given in the section on juxtaposability, above: experts juxtapose two different representations in order to use the match or mis-match between them to support reasoning and to spot omissions or inconsistencies. Further, they deliberately change notation in order to ‘escape from the formalism’ (and hence the selection, orientation, or simplification) embodied in a given notation and hence to highlight different aspects of a problem or solution. Another example is that experts create intermediate languages or representations to embody domain knowledge and create abstractions closer to the problem domain. Again, experts are responding to their own needs through processes which are deliberate but not necessarily verbalized. CDs, if they can be shaped to respond to processes, may again provide a metalanguage for articulating and hence focusing on the cognitive trade-offs in such approaches.

The harnessing of domain knowledge into the notational environment raises its own issues. One of the distinctive features of tools experts build for themselves is the embodiment of domain knowledge. The value of this approach to the experts is in the elimination of ‘noise’: tools can examine conformance not just to syntax but also to domain semantics. Knowledge of things like fundamental structures within a domain, operational constraints, and key functional relationships can be used to make tools more powerful—and also to constrain what can be expressed easily to what it is appropriate to express in the domain.

3.3. Cultural conventions

Similarly, notations are elaborated or constrained by conventions of use. Conventions can be adopted in a culture (such as an engineering discipline or the culture associated with a particular programming paradigm, or a specific organizational culture) to the extent that they form an extension to the language—a ‘secondary notation’. Such conventions (e.g., reading a circuit diagram from left to right, and from top to bottom) can make a notation more accessible, more usable, even more powerful, because they exploit additional knowledge which the user has already learned elsewhere and which therefore does not incur any extra learning costs. They can (e.g., ER diagrams) constrain how the notation is deployed. They can constrain what is thought about and how it is considered.

4. Possible solution: one way to represent this is facets

It is clear that there is no single satisfactory structure which represents clearly the relationships between the various CDs. A possible solution involves using a multiple structure, via facet theory. “The essence of facet analysis is the sorting of terms in a given field of knowledge into homogeneous, mutually exclusive facets, each derived from the parent universe by a single characteristic of division” [6].


Facet theory has been around in library science and information systems for decades—named by Ranganathan [7] and popularized by Vickery [6]—and it draws on concepts which have been around for centuries. It recognizes that there is often more than one way to structure a domain, depending on the basis for categorization, on which aspect or ‘facet’ is focal. Indeed, Ranganathan was trying to overcome the inability of traditional enumerative bibliographic classification systems to allow for the expression of compound subjects. It allows different facets to be represented separately, and hence it allows a domain entity to be classified according to more than one aspect. A key advantage of representing ‘facets’ separately is that each separate structure can be represented cleanly while allowing and maintaining other structures—and yet avoiding a combinatorial explosion. Ranganathan devised a system of notation using numbers and colons to provide a classification mark synthesized from individual facet indicators. As introduced, facet theory classifies entities under several pre-defined hierarchies, allowing, for example, for the accommodation of interdisciplinary documents within different structural perspectives.

As elaborated by Rugg et al. [8, p. 251]: “A convenient example is afforded by classification of animals. One facet for this is provided by the traditional Linnean taxonomy, which has several layers, and hundreds of thousands of classes and subclasses; another facet is provided by the preferred ecological habitat of the animal, which contains a much smaller set of categories. Two animals may be very close to each other within one of these facets, but very distant from each other within the other facet. For instance, pine martens and sea otters are quite closely related in Linnaean terms, but very different in terms of their preferred habitat; conversely, sea otters and abalone are very close in terms of preferred habitat, but very distant in terms of Linnean classification.” Another example is given on FacetMap’s website (http://facetmap.com/browse.jsp), where one can browse a wine classification with facets for varietal, region, and price. The user can explore the demonstration and identify a wine in many different ways, depending on which aspects are of interest (e.g., something bubbly and exclusive for a celebration, something from a favourite holiday region, something cheap and cheerful).
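To make the idea concrete, here is a minimal sketch in Python, with invented wine data in the spirit of the FacetMap demonstration (the wines, facet names, and values are hypothetical, not taken from the site): each entity is classified independently under each facet, and the same collection can be narrowed by whichever facets happen to be of interest.

# A minimal sketch of faceted classification, using hypothetical wine data.
# Each wine is classified independently under three facets: varietal, region, price.
wines = [
    {"name": "Cava Brut",         "varietal": "sparkling", "region": "Spain",    "price": "cheap"},
    {"name": "Vintage Champagne", "varietal": "sparkling", "region": "France",   "price": "exclusive"},
    {"name": "House Merlot",      "varietal": "merlot",    "region": "Chile",    "price": "cheap"},
    {"name": "Douro Red",         "varietal": "blend",     "region": "Portugal", "price": "mid"},
]

def browse(items, **facet_values):
    """Narrow the collection by any combination of facet values."""
    return [item for item in items
            if all(item[facet] == value for facet, value in facet_values.items())]

# The same collection answers quite different questions, depending on which facet is focal:
print(browse(wines, varietal="sparkling", price="exclusive"))  # bubbly and exclusive
print(browse(wines, region="Portugal"))                        # from a favourite holiday region
print(browse(wines, price="cheap"))                            # cheap and cheerful

The point of the sketch is simply that no single hierarchy is privileged: each facet partitions the same entities in its own way.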

Rugg and McGeorge [9] integrated the facets notion with graph theory, and particularly with the concept of polyhierarchies, for their application in knowledge elicitation. They treat the multiple structural hierarchies as a set of separate, orthogonal, directed graphs which share one or more entities. Their interpretation of facet theory also uses emergent, elicited categories, rather than a pre-defined scheme, in order to capture their informants’ models of knowledge and belief.
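That structure can be sketched as two separate directed graphs over shared leaf entities, echoing the animals example quoted above (the hierarchies below are deliberately abbreviated and illustrative, not real taxonomies): two entities may be near neighbours in one facet’s graph and distant in the other’s.

# A sketch of a polyhierarchy: two orthogonal directed graphs (parent -> children)
# sharing leaf entities. The hierarchies are invented abbreviations for illustration.
linnaean = {   # facet 1: (much-abbreviated) Linnaean taxonomy
    "animals": ["mammals", "molluscs"],
    "mammals": ["mustelids"],
    "mustelids": ["pine marten", "sea otter"],
    "molluscs": ["abalone"],
}
habitat = {    # facet 2: preferred ecological habitat
    "habitats": ["woodland", "coastal waters"],
    "woodland": ["pine marten"],
    "coastal waters": ["sea otter", "abalone"],
}

def closeness(graph, a, b, root):
    """Depth of the deepest node from which both a and b are reachable (higher = closer)."""
    def reachable(node):
        out = {node}
        for child in graph.get(node, []):
            out |= reachable(child)
        return out
    def depth(node, current=0):
        best = current if {a, b} <= reachable(node) else -1
        for child in graph.get(node, []):
            best = max(best, depth(child, current + 1))
        return best
    return depth(root)

# Pine martens and sea otters: close in the Linnaean facet, distant by habitat.
print(closeness(linnaean, "pine marten", "sea otter", "animals"))   # 2
print(closeness(habitat,  "pine marten", "sea otter", "habitats"))  # 0
# Sea otters and abalone: the reverse.
print(closeness(linnaean, "sea otter", "abalone", "animals"))       # 0
print(closeness(habitat,  "sea otter", "abalone", "habitats"))      # 1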

Kwasnik [10] lists advantages of faceted classification: it does not require complete knowledge of the entities or their relationships; it is ‘hospitable’, accommodating new entities easily; it is flexible and expressive; and it admits many different perspectives and approaches. However, choosing the right facets is crucial—and difficult. Choosing the right facets requires good knowledge of whatever is being classified, and of the use and users of the information.

5. Example application: drawing on specific expertise

So how might this work for CDs? Still thinking within the context of software design representations, let’s consider how we might identify facets, and whether doing so provides any leverage on the issues raised.


“The essence of facet analysis is the sorting of terms in a given field of knowledge into homogeneous, mutually exclusive facets, each derived from the parent universe by a single characteristic of division” [6, p. 1]. The first step is to identify facets. Fig. 1 offers a first attempt, drawing on classic categories to divide the space (explicit and implicit, extrinsic and intrinsic, knowledge and processes) as well as on the reflection in the previous sections. Four facets are identified:

1. The core design representation (explicit, intrinsic)
2. The representation embedded in associated tools (explicit, extrinsic)
3. Knowledge about the representation (implicit)
4. Processes using the representation

Table 1 is a tabular version of Fig. 1. It tries to indicate the reasoning that moved from attributes of a given software design representation to this initial attempt at identifying facets. Of course it’s a rationalization; it captures the questions that divided the space of representations adequately for an initial attempt.

This application of facets diverges deliberately from the Rugg and McGeorge approach, in which hierarchies (of goals and beliefs) were important because they were dealing with experts who often had rich and elaborate hierarchically arranged knowledge. Interestingly, this knowledge was often explicitly arranged in what were in practice facets, even though the experts did not use this term. This is different from software representations, where the evidence suggests that experts’ knowledge, while rich and elaborate, is neither hierarchically arranged nor articulated and conventionalized.

The second step is to view a CDs analysis through this lens. The suggestion is simply that each CD be examined from the perspective of each of the four facets. In this way, a representation is characterized by (in effect) four linked CDs analyses, and the profile of a given representation is the composite of the four ‘values’ or judgments for each CD. The aspiration is that the combination of CDs and facets provides insight into the issues on which CDs focus while addressing issues that have remained unresolved.

[Figure: the space of software design representations divided along explicit/implicit and intrinsic/extrinsic lines, with regions labelled ‘intrinsic: the core design representation’, ‘extrinsic: representation embedded in associated tools’, ‘knowledge about representation’, and ‘processes using representation’.]

Fig. 1. A first attempt at identifying facets of software design representations.


Table 1
A derivation table for this initial attempt at identifying facets of software design representations (with respect to design representation)

Where does it reside—explicit or implicit?
- Explicit representation: knowledge in the world
- Tools associated with representation: knowledge in the world
- Knowledge about representation: meta-knowledge about the notation, some in the world (culture), some in the mind (implicit)
- Process of using and reasoning about representation: reasoning and decision-making

What sort of constituents?
- Explicit representation: the representation itself and its formal definition
- Tools associated with representation: implemented systems such as editors, automated design environments
- Knowledge about representation: examples such as conventions of use (both formal conventions that are codified, and informal conventions that are not documented and are learned implicitly), and constraints on notation from the application domain
- Process of using and reasoning about representation: examples such as managing trade-offs; sequencing—which activities or decisions in which sequence (and which can be done in parallel); what can be deferred or ignored; maintaining alternatives, managing contingencies


For example, harking back to an example used earlier, viscosity may be problematically high in the explicit representation, conveniently low once the representation is embedded in an effective editor, but problematically low when viewed from the perspective of a ‘congealed’ design in terms of the process. At different points in the analysis, focusing on different facets facilitates informed insights that take context into account.
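As a rough sketch of what such a composite profile might look like (the facet names follow Fig. 1; the judgments merely echo the hypothetical viscosity example above and are not the outcome of any actual analysis), a representation could be characterized by one judgment per CD per facet:

# A sketch of a four-facet CDs profile. The judgments are illustrative only.
# profile[cd][facet] -> a judgment for that CD viewed through that facet
profile = {
    "viscosity": {
        "core design representation": "problematically high",
        "representation embedded in tools": "conveniently low (effective editor)",
        "knowledge about the representation": "(not assessed in the example)",
        "processes using the representation": "problematically low after the 'congealing point'",
    },
    # ... one entry per cognitive dimension ...
}

def view(profile, facet):
    """Read the whole CDs analysis through a single facet."""
    return {cd: judgments[facet] for cd, judgments in profile.items()}

print(view(profile, "processes using the representation"))

Because the facets are treated as orthogonal, the four judgments are kept separate rather than summed into a single score, in line with the contrast drawn with MCDM tables below.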

Similarly, this sort of approach reveals context-related distinctions between similar notations. For example, the graphical portion of LabView has much in common with circuit diagrams, and the viscosity assessment for the representation alone might well be similar. However, the LabView environment (in the era of our studies) did little to ameliorate the viscosity—possibly increasing it by leaving unexpected editing artefacts around. On the other hand, a state-of-the-art CAD environment for electronics reduced viscosity significantly through mechanisms such as grouping, sticky connections, and so on.

Although this is just a preliminary ‘finger in the wind’, this first attempt at applying facets to CDs does suggest that such an application may have genuine—and reasonably economical—advantages for CDs:

- CDs are interpreted for each facet. As a result, different features and side-effects of a given CD may be salient within different facets.
- Context can be taken into account, viewed through the different facets, and so the CDs analysis of a given representation may be context-sensitive. Paying attention to context may allow us to understand better how contextual factors (such as cultures and conventions) interact with features of the core representation to influence their interpretation, use, and impact.
- The value of each facet is retained in understanding the whole (versus layers, where the top layer may obscure what lies below).
- Facets introduce an element of orthogonality to CDs, which takes them back to the idea of dimensions, and hence to one of Thomas’s original aspirations. This distinguishes this approach from multi-criterion decision-making (MCDM) tables, because the facets are orthogonal, and one cannot therefore sum the values across them.

Although this initial attempt is blatantly preliminary, it does suggest that the application of facets to CDs is worthy of further exploration.

6. Conclusion

CDs have been effective, providing a framework on which to hang guidelines, and by which to structure economical analyses of representations of various kinds. Further, they provide a metalanguage allowing really useful discussions with designers and expert users of representations. They hold promise as a vehicle for eliciting and potentially capturing users’ perceptions and understandings of representations [11].

Like many frameworks, CDs ‘raise the stakes’, making evident and accessible issues which might otherwise have eluded attention and with which the framework itself is not yet equipped to cope. This paper has discussed some of the unresolved issues which have dogged our discussions of CDs over the years, issues ‘beyond the notation’ itself: scope, trade-offs, context of use, processes that use notations, culture and convention. It has tracked some of the empirical input into the CDs—and these unresolved issues—in order to illustrate how CDs have evolved in close relationship to experience and example.

Finally, the paper suggests a possible application of facet theory to begin to address issues of trade-offs and context. Although just an initial sketch, the example offers considerable potential for realizing some of Thomas Green’s original aspirations for CDs.

Acknowledgements

I will always be grateful to Thomas Green, for letting me play in his sandbox. Cognitive dimensions are his baby. Thanks to the software developers who have informed me over the years. Thanks to Alan Blackwell and his anonymous readers for constructive criticism. Thanks to Gordon Rugg for patient discussion and input. Some of the work discussed here was funded under an EPSRC Advanced Research Fellowship (GR/A00126/), and some under an EPSRC grant (FaCADE: Facilitating Communication Across Domains of Engineering, GR/J48689/01).

References

[1] A.F. Blackwell, T.R.G. Green, A cognitive dimensions questionnaire optimised for users, in: A.F. Blackwell, E. Bilotta (Eds.), Proceedings of the Twelfth Annual Meeting of the Psychology of Programming Interest Group, 2000, pp. 137–152.


[2] M. Petre, T.R.G. Green, Where to draw the line with text: some claims by logic designers about graphics in notation, in: D. Diaper, et al. (Eds.), Human–Computer Interaction—Interact ’90, IFIP, Elsevier Science Publishers, North-Holland, Amsterdam, 1990, pp. 463–468.

[3] T.R.G. Green, M. Petre, R.K.E. Bellamy, Comprehensibility of visual and textual programs: a test of Superlativism against the ‘match-mismatch’ conjecture, in: Empirical Studies of Programmers, Fourth Workshop, New Brunswick, NJ, December, Ablex, 1991, pp. 121–141.

[4] M. Petre, Software development expertise. Expert Systems, in press.

[5] M. Petre, Mental imagery, visualisation tools and team work, Keynote paper, in: Proceedings of Second Program Visualisation Workshop, June, Hornstrup Centret, Denmark, 2002.

[6] B.C. Vickery, Faceted Classification: A Guide to the Construction and Use of Special Schemes, ASLIB, London, 1960.

[7] S.R. Ranganathan, Colon Classification, Asia Publishing House, New York, 1933.

[8] Rugg, McGeorge, Maiden, 2000, p. 251. [Missing reference]

[9] G. Rugg, P. McGeorge, Laddering, Expert Systems 12 (4) (1995) 339–346.

[10] B.H. Kwasnik, The role of classification in knowledge representation and discovery, Library Trends 48 (1) (1999) 22–47.

[11] B. Khazaei, E. Triffitt, A study of usability of Z formalism based on cognitive dimensions, in: 14th Annual Workshop of the Psychology of Programming Interest Group, Brunel University, June 2002.