Usability Engineering




Usability Engineering

2

Contents

Usability Engineering
Know the user
Task analysis
Usability metrics
The usability engineering process

3

Usability Engineering

Usability Engineering tries to make the product suit the task for which it was designed

This involves not just the functionality of the product but the ergonomic considerations that make a product easy to use

4

Usability Engineering

Usability engineering depends on understanding three aspects of a product:
Know the user
Know the task
Know the environment

If all of these are understood, you will be able to design a better product

5

The Need for Usability Engineering

We have all seen products which are cumbersome and confusing to operate

Other products are obvious in how to use them and a joy to use

One product is preferred over another based on its ease of use, even though the functionality is the same

Usability engineering can help turn a poorly designed product into a well designed one

6

The Need for Usability Engineering

The ease of use of a product has a major impact on how well that product will do in the market

The ease of use of a product is one area where one manufacturer can distinguish their product from competing products

7

The Need for Usability Engineering

Only 30% of IT systems are successful

The rest fail to produce the expected productivity increases

Often, the failure to produce the expected results is due to a lack of ease of use that hinders attempts to use the product

8

The Need for Usability Engineering

How does one determine if a product meets the needs of the user?

How does one determine if a product is easy to use?

You cannot produce a quality product until you can answer these questions

Usability engineering addresses these questions

9

Usability Engineering: A Definition

“…the effectiveness, efficiency and satisfaction with which specified users can achieve specified goals in particular environments…” ISO DIS 9241-11

10

Usability Engineering: A Definition

“…a process whereby the usability of a product is specified quantitatively and in advance. Then as the product itself, or early ‘baselevels’ or prototypes of the product are built, it is demonstrated that they do indeed reach the planned levels of usability.” D. Tyldesley, Employing Usability Engineering in the Development of Office Products, in Human Computer Interaction, Preece & Keller (eds), Prentice Hall, 1990

11

Contents

Usability Engineering
Know the user
Task analysis
Usability metrics
The usability engineering process

12

Know the User

The first job is to discover who the end user of the system will be

Once you have identified the user, you must determine:
What level of expertise they have
What they will assume about the system
The environment in which they will use the system

13

End User Classes

It is often helpful to be able to break the users into classes

Of course, there are several axes which can be used for user classification

We will consider two axes:
How they use the system
The level of user expertise

14

Classification by System Usage

Direct Users
Make direct use of the system to carry out their duties
A data entry clerk or secretary who uses a word processor

Indirect Users
Ask other people to use the system on their behalf
A passenger who asks a travel agent to check the availability of a flight

15

Classification by System Usage

Remote Users
Users who do not use the system themselves but depend on the services it offers
Bank customers depend on the bank's systems for their account statements

Support Users
Members of the administrative and technical teams which support the system
Help desk staff, administrators, technicians

16

Classification by System Usage

We identify two extra categories which might overlap the previous categories:

Mandatory Users
Those who must use the system as part of their job

Discretionary Users
Do not have to use the system as part of their job
Might make infrequent use of the system
Probably have less familiarity with the system

17

Classification by Expertise

Categorizes users as novice, intermediate, or expert

These categories can be further decomposed into mandatory, discretionary, and intermittent users

Categorizing users allows us to generalize across types and eases design and development

18

Classification by Expertise

Novice users
Have little or no experience with computers
Might be hesitant to use a computer
Need feedback that they are doing things the right way
Like to progress at their own speed
The system must be robust enough to deal with a user who does not know what he/she is doing

19

Classification by Expertise

Novice Users
Need to be guided through processes
Need the availability of human help
Lists of answers to common questions and problems can be created
Actions should not have side effects, so that the system is easier to learn and understand
In large systems, more complex areas can be hidden from the user unless they are needed, making the interface simpler to deal with

20

Classification by Expertise

Intermittent Users

Use the system occasionally with possibly long periods without using it

Display both novice and expert characteristics

Often remember general concepts but not low-level details

Need access to good manuals and help facilities

21

Classification by Expertise

Expert Users
Require less help
Perform many operations for the first time
Make extensive use of help systems
Should be provided with accelerator keys so that they have a faster alternative to menus
Should be allowed to customize their typed commands to make the interface look like others they know

22

Gathering Information about Users

To know the user, one must gather information about the user

This can be done by:
Formal & informal discussion
Observation
An expert on the design team
Questionnaire
Interview

23

Formal & Informal Discussion

Talking to users will reveal many unknown details about how they work and their environment

Although users are not design experts, they have insights into which designs will work for them

Talking to users makes them feel part of the design process

Making users part of the process causes them to want the project to succeed whereas leaving them out often makes them wish it would fail

24

Understanding the User’s Workplace

Before designing a system, you must know the environment in which it will be used

For example, secretaries said that they did not want audible alerts when a mistake was made since they did not want others in the office to know

25

Understanding the User’s Workplace

When you interview a worker’s boss, you find out:
How the boss thinks the job is done
How the job used to be done
How the boss wants the job done

When you observe the worker in his/her own environment, you find out how the job is actually done

26

Expert on the Design Team

This involves putting one of the end users on the design team

This person can provide valuable insight into the needs of the users

If the user spends too much time on the design team, they become a designer, not a user

Therefore, the user on the design team is often rotated to keep them representative

27

Questionnaires

Questionnaires are good for subjective responses – less good for objective ones

Questionnaires produce large amounts of data which can take a long time to analyze

Creating a good questionnaire is very time consuming

28

Questionnaires

Questionnaires can be either:

Interviewer administered
The interviewer can explain the meaning of questions
Very time consuming
Different interviewers must treat subjects in the same way

Self administered
Does not stop some groups answering a particular way if they have an axe to grind
Questions have to be clear to avoid misinterpretation
Faster than interviewer administered

29

Questionnaires

Questions can be either:

Open
Gain broader information by allowing respondents to answer in any way they choose
Can produce so much data that it is difficult or impossible to interpret

Closed
Restricts answers to a few possibilities
Can distort the answers by suggesting responses the respondent would not have thought of on their own

30

Problems Designing Questionnaires

Questionnaires must be prototyped and tested before being administered

This will pick up:
Misinterpretation of questions
That additional material is needed to understand a question
Questions not worded precisely
Questions that bias the respondent
E.g., giving a list of computer applications and asking which the respondent knows might cause them to tick off more so that they seem more competent

31

Questionnaire Types

The simplest questionnaire selects yes/no or from a limited set of responses

* Usability Engineering, Xristine Faulkner, Macmillan, 2000

32

Questionnaire Types

Checklists are useful when there are multiple possible responses

They also have the advantage that the user does not have to remember which aspects of the system they have been using


33

Questionnaire Types

Scalar questionnaires ask the user to register an opinion on a scale

If there is a middle position on the scale, it allows the user to register “no opinion”

This might be a good thing or bad, depending on the question


34

Questionnaire Types

A Likert scale is used to gather the strength of an opinion

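The Likert-scale idea can be sketched as a small scoring routine. This is a hypothetical illustration, not an example from the deck: the statement texts, the responses, and the reverse-scoring of negatively worded items are invented.

```python
# Minimal sketch of scoring a five-point Likert questionnaire.
# Statements and responses are invented for illustration.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

def score(responses, negative_items=()):
    """Total a subject's responses; negatively worded statements are
    reverse-scored so a higher total always means a more favourable
    opinion of the system."""
    total = 0
    for item, answer in responses.items():
        value = LIKERT[answer]
        if item in negative_items:
            value = 6 - value  # reverse the 1..5 scale
        total += value
    return total

answers = {"easy to learn": "agree",
           "error messages were confusing": "disagree"}
print(score(answers, negative_items={"error messages were confusing"}))  # 8
```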

35

Questionnaire Types

A ranked order questionnaire asks users to rank a set of responses based upon some criteria


36

Observation

Observation can be a very useful tool to find out how people actually work

One of the major dangers is that users alter their behaviour when they know they are being observed

The Hawthorne effect was noticed in a study of electrical workers: productivity increased even as light levels were reduced, to the point where the workers could no longer see

37

Observation

There are two ways to circumvent the Hawthorne effect:

Expose the users to the observer for a long period of time until they ignore the observer

Use videotape to record the workers without an actual observer being present

In most cases, acclimating the users to the observer produces the best results

38

Observation

Videotape takes a long time to set up and often records far more than is necessary

This makes it less effective than just recording a couple of minutes of crucial activity

Users’ behaviour is changed as much by the act of videotaping as by being observed by a human directly

Generally, the use of videotape is less effective than it was anticipated to be

39

Activity Logging

This involves simply logging what the user does and how long it takes

The logging can be done by:
The user, although this affects performance
An observer
The computer system itself

40

Activity Logging

Using an observer is effective, but the user will have to become acclimatized to the presence of an observer

If the system does the logging, the user must be told of this
This is required ethically and is enforced by law in some areas

A period of acclimatization will be necessary for system-based logging as well
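A minimal sketch of what system-side activity logging might look like: each user action is timestamped so task durations can be derived later. The action names and the injectable clock are invented for illustration.

```python
# Sketch of system-side activity logging (invented action names).
import time

class ActivityLog:
    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable so examples are deterministic
        self.entries = []           # list of (timestamp, action)

    def record(self, action):
        self.entries.append((self.clock(), action))

    def duration(self, start_action, end_action):
        """Seconds between the first occurrences of two logged actions."""
        # Building the dict from the reversed list makes the earliest
        # occurrence of each action win.
        times = dict((a, t) for t, a in reversed(self.entries))
        return times[end_action] - times[start_action]

# Usage with a fake clock so the output is deterministic
ticks = iter([0.0, 2.5, 7.0])
log = ActivityLog(clock=lambda: next(ticks))
for action in ["open form", "fill form", "submit form"]:
    log.record(action)
print(log.duration("open form", "submit form"))  # 7.0
```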

41

Interviews

Interviews can range from open-ended questions to closed questions, with every variation in between

Open-ended questions
Useful when first starting interviews
Useful when the interviewer does not know what to ask
Lets the interviewer refine and direct the questions as the interview proceeds
Often uncover new facts

42

Interviews

Closed questions
The interview is directed by the interviewer
Only a limited range of responses is obtained
The interview time is reduced

Mixed questions
This uses closed questions but allows the interviewee to add additional thoughts
The interviewer can also add questions or direct the interview in new directions
This is often the most effective interview style

43

Contents

Usability Engineering
Know the user
Task analysis
Usability metrics
The usability engineering process

44

Task Analysis

A task is some human activity which will achieve a goal

The result of task analysis is an understanding of what a system should do

A task is decomposed into a hierarchy of subtasks

Good design can be performed once the user’s tasks have been understood

45

Task Analysis

A task consists of:
An input
An output
A process to convert the input to the output

Input → Transformation → Output

46

Task Analysis

In order to break a task into subtasks, we must ask a series of questions:

What information is needed to perform the task?

What are the characteristics of the information sources? (reliable, wrong format, etc.)

What affects the availability of the information?

47

Task Analysis

What are the errors which might occur?
The goal is to design a system which can avoid the errors

What initiates the task?

Often a task depends on the completion of a previous task before it can begin

The questions about the input should be followed by questions about the output

48

Task Analysis

What are the performance requirements?

What happens to the output?

This will determine if the output is sent to another process and whether it needs to be in a certain format

How does the user get feedback on the progress of the task?

Then, we need to ask questions about the transformation

49

Task Analysis

What are the decisions which are made by the entity performing the transformation?

What are the strategies for decision making, and how can these be incorporated into the new system?

What skills are needed for the task?
Users of the system will need to be trained and kept up to date with these skills

What interruptions can occur, and when can they occur?

50

Task Analysis

How often is the task performed, and when is it performed?

Does the task depend on another task?

What is the normal and maximal workload?
This will allow you to design a system to deal with this load

Can the user control the task workload?

Sometimes the user can delay a task or control the flow of data to a task

51

Task Analysis

The result of task analysis is often depicted as a graph of tasks and subtasks

Main task
├── SubTask 1
│   ├── SubTask 1.1
│   └── SubTask 1.2
├── SubTask 2
└── SubTask 3
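Such a hierarchy can also be represented as nested data so that, for example, the lowest-level subtasks (the actions a user actually performs) can be listed programmatically. A minimal sketch, using the generic task names from the diagram:

```python
# Sketch of a task-analysis hierarchy as nested dictionaries.
# An empty dict marks a task with no further decomposition.

task_tree = {
    "Main task": {
        "SubTask 1": {"SubTask 1.1": {}, "SubTask 1.2": {}},
        "SubTask 2": {},
        "SubTask 3": {},
    }
}

def leaf_tasks(tree):
    """Collect the lowest-level subtasks of the hierarchy."""
    leaves = []
    for name, subtasks in tree.items():
        if subtasks:
            leaves.extend(leaf_tasks(subtasks))
        else:
            leaves.append(name)
    return leaves

print(leaf_tasks(task_tree))
# ['SubTask 1.1', 'SubTask 1.2', 'SubTask 2', 'SubTask 3']
```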

52

Ethnography

Ethnography is immersing yourself in a culture to determine how it works and how the people think

This can be one way to understand how users think and perform their tasks

Members of the design team join with the users for a prolonged period and literally become one of them in an effort to understand the tasks

53

Contents

Usability Engineering
Know the user
Task analysis
Usability metrics
The usability engineering process

54

Usability Metrics

After you understand the user and the task, you design and build a solution

The question then is, “How good is the solution?”

The answer to that is not obvious since there is no simple way to measure how good something is

55

Usability Metrics

One solution is to give the users a questionnaire and have them rate aspects of the product from 1 to 10

The problem is that everyone’s ideas are different

How hot do you like it? 1, 2 or 3 chilies

One person’s mild is another person’s hot

56

Usability Metrics

To begin to find an answer to how to measure the quality of a product, recall the definition from ISO that a product should be:
Efficient
Effective
Satisfying

57

Effectiveness

To judge effectiveness we can measure:

The success-to-failure ratio in completing the task
This provides an overall measure of effectiveness

The frequency of use of various commands or features
This shows how often features are used
It also shows the techniques employed to solve problems, and whether there might be more efficient techniques

58

Effectiveness

The measurement of user problems
This provides an indication of what problems the users are experiencing and how severe they are

The quality of the results or output
This also provides an overall measure of the effectiveness of the solution
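The first two effectiveness measures can be sketched as a short computation over a hypothetical log of task attempts and commands used; the data are invented for illustration.

```python
# Sketch: success-to-failure ratio and command frequency from an
# invented evaluation log.
from collections import Counter

attempts = ["success", "failure", "success", "success"]
commands = ["open", "save", "save", "copy", "save"]

# Fraction of task attempts that succeeded
success_ratio = attempts.count("success") / len(attempts)

# How often each command or feature was used
command_frequency = Counter(commands)

print(success_ratio)                     # 0.75
print(command_frequency.most_common(1))  # [('save', 3)]
```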

59

Efficiency

Efficiency is the amount of time to complete a task, or the amount of work performed per unit time

We can measure:
The time required to perform tasks
The number of actions required for a task
The time spent consulting documentation
The time spent dealing with errors
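Two of these efficiency measures, time per task and actions per task, can be sketched from invented session records:

```python
# Sketch: mean task time and mean action count from invented sessions.

sessions = [
    {"task": "book flight", "seconds": 240, "actions": 18},
    {"task": "book flight", "seconds": 180, "actions": 14},
]

mean_time = sum(s["seconds"] for s in sessions) / len(sessions)
mean_actions = sum(s["actions"] for s in sessions) / len(sessions)

print(mean_time, mean_actions)  # 210.0 16.0
```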

60

Satisfaction

Measuring the user’s satisfaction with the system can be difficult

The best way to measure this is to get the users to rate their satisfaction on a scale

This is a noisy measurement, but with a large enough sample size the results become meaningful

61

Learnability

Learnability is a measure of how easy it is to learn to use a system

This is important since the shorter the learning time, the less costly it is to train new users

Learnability can be measured as the time it takes for a new user to acquire the skills to perform a task

62

Learnability

Most users never learn all capabilities of a system

All you need to measure is how long it takes to acquire the most essential skills

Some systems use metaphors that take advantage of the user’s knowledge of similar tools in the real world

These systems claim to be zero knowledge since the user can use them immediately with no prior knowledge

63

Learnability

Other ways to measure how long it takes to learn a task are:
The frequency of error messages
The frequency of a particular error message
The frequency of on-line help usage
The frequency of help requests on a particular topic
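Counting error-message frequency across sessions, as suggested above, might look like the sketch below; a falling count over successive sessions is one indication that the user is learning the system. The messages are invented.

```python
# Sketch: error-message frequency across successive sessions
# (invented messages).
from collections import Counter

sessions = [
    ["bad date format", "field required", "bad date format"],  # session 1
    ["bad date format"],                                       # session 2
    [],                                                        # session 3
]

# Errors per session: a downward trend suggests learning
errors_per_session = [len(s) for s in sessions]

# Which particular message occurs most often overall
most_common = Counter(msg for s in sessions for msg in s).most_common(1)

print(errors_per_session)  # [3, 1, 0]
print(most_common)         # [('bad date format', 3)]
```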

64

Errors

Error rates are one of the classic measures of efficiency

The logic is that fewer errors indicate that more useful work is being accomplished

Errors can be categorized in different ways

65

Errors

An intentional error is one where the user intended to perform the action, but the action was wrong, usually due to a misunderstanding of the system

A slip is an unintentional error, such as clicking the wrong button

A slip is not a problem with the user’s understanding of the system

66

Errors

Errors can also be categorized by severity:

Minor errors
Easy to spot and correct

Major errors
Easy to spot but harder to correct

Fatal errors
Prevent the task from being completed

Catastrophic errors
Have other side effects, such as loss of data or affecting other programs

67

Time

Time is another classic measure of efficiency

Time simply measures how long it takes to perform a task

This is a good measure of the efficiency of the system in performing a specific task

68

Contents

Usability Engineering
Know the user
Task analysis
Usability metrics
The usability engineering process

69

The Usability Engineering Process

Usability measurements and decisions should be made by a usability engineer

To make this a process, it is necessary to produce a usability specification

The specification states:
To which set of users a measurement applies
What attribute is being measured
What the preconditions for the measurement are
How the criteria will be measured
What the criteria for success are

70

The Usability Engineering Process

After measurements have been performed, they can be used to guide the rest of the process:
Locating poorly designed aspects of the system
Redesigning them to better meet user needs
Re-evaluating the new design

71

Expanded Usability Specification

The usability specification can be expanded to include:

Worst case
The performance level that will render the system unacceptable to the user

Lowest acceptable level
The lowest acceptable performance

Planned case
The level that the system is expected to achieve

Best case
The best possible scenario for the system

Now level
The performance of the existing system
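One way to make such a specification concrete is as a record holding the levels, with a helper that judges a measured value against them. A sketch only: the field names follow the slide, while the attribute, user group, and numbers are invented, and lower-is-better (e.g. seconds per task) is an assumption.

```python
# Sketch of an expanded usability specification entry (invented numbers).
from dataclasses import dataclass

@dataclass
class UsabilitySpec:
    attribute: str            # what is being measured
    users: str                # to which set of users it applies
    worst: float              # level that renders the system unacceptable
    lowest_acceptable: float  # lowest acceptable performance
    planned: float            # level the system is expected to achieve
    best: float               # best possible scenario
    now: float                # performance of the existing system

    def assess(self, measured):
        """Classify a measurement, assuming lower values are better."""
        if measured <= self.planned:
            return "meets planned level"
        if measured <= self.lowest_acceptable:
            return "acceptable"
        return "unacceptable"

spec = UsabilitySpec("time to book a flight (s)", "novice users",
                     worst=600, lowest_acceptable=300,
                     planned=180, best=90, now=420)
print(spec.assess(150))  # meets planned level
print(spec.assess(250))  # acceptable
```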

72

Usability Specification Checklist

You can use the following checklist to make sure that your usability specification is complete

Not every specification requires all of the following

These will serve to remind you of items you might have forgotten

73

Usability Specification Checklist

The time required to complete a task
The fraction of a task completed
The fraction of a task completed per unit time
The ratio of success to failure
Time spent dealing with errors
Frequency of use of documentation
Time spent using documentation

74

Usability Specification Checklist

Ratio of favourable to unfavourable user comments

Number of repetitions or failed commands

The number of good features recalled by the user

The number of commands which were not used

75

Usability Evaluation

While the previous section examined what measures of usability could be used, this section will look at how the usability of a system can be evaluated

There are two types of evaluation:

Analytical evaluation
In which paper and pencil are used to evaluate tasks and goals

Empirical evaluation
In which user performance is evaluated to judge the usability of the system

76

Evaluation Methodology

All evaluation methods have the same basic structure and requirements:
Identify the target group
Recruit users
Establish the task
Perform the evaluation
Interpret the results and report the findings

77

Identifying the Target Group

The target group will usually be the same as that identified during the process of requirements gathering

Some experimental designs might require targeting different groups, so be sure to check

78

Recruiting Users

This step can take a lot of time

You must recruit users from the right target group

You should recruit more users than necessary, as problems might arise rendering some unsuitable or unavailable

79

Recruiting Users

You should avoid recruiting users who have been involved in the design process as they might be biased

Researchers have found that 3 users can give an accurate opinion that reflects that of the larger user community

80

Establishing the Task

This is the task for the user to perform that will be measured

In the initial stages of evaluation, it is better to use very specific tasks

Later, more broad-based tasks can be used

The task must be specified as a detailed set of steps to be performed

81

Establishing the Task

The steps in the task must be tested to show that they are correct and can be performed

The statement of the task will have to be checked with others to make sure that the instructions are written clearly

You will have to decide how much instruction is appropriate to provide the user before the task is performed

82

Preparing for the Evaluation

Before the evaluation is conducted:

Users will have to be introduced to the system

Users should be given a written introduction to the system to make sure that a walk through does not miss anything

Some method for recording the evaluation (observer or videotape) needs to be set up

Any questionnaires have to be prepared and tested

Instructions for conducting the evaluation need to be written, particularly if more than one person will be performing the evaluation

83

Sample Evaluator Instructions


84

Sample Evaluator Instructions


85

Evaluator Behaviour


86

Sample Logging Procedures


87

Performing the Evaluation

Welcome the users and make them feel comfortable

Tell them about the process and what is expected of them

Make sure that they know that the system is being tested, not the users

Let them know that the results, whether positive or negative, will be used to improve the performance of the system

88

Performing the Evaluation

Make sure that the users understand that if things go wrong, it is not their fault

Familiarize the user with any observer, questionnaire, or recording equipment being used

Let the user know that they can quit at any time

At the end, you can ask any required questions of the user

Finally, thank them for their time and input

89

Reporting the Findings

At this point, you should review the evaluation process itself

If any problems occurred during the evaluation:
They should be listed
They should be examined to find causes and solutions

90

Evaluator Code of Conduct

Remember, you are working with people, not objects

Explain:
What is expected of the subject
That the subject can leave at any time
The purpose of the experiment
That it is the system being tested
That the results are confidential and what they will be used for

Make sure the subject is comfortable
Do nothing to embarrass or distress the subject
Get them to agree, in writing, to the guidelines

91

Experiments

HCI is based on psychology, which draws its experimental methods from the scientific method:

Induction – form a hypothesis from existing data
Deduction – make predictions based on the hypothesis
Observation – gather data to prove or disprove the hypothesis
Verification – test the predictions against further observations

92

Hypothesis Formation

Observations are made and a hypothesis is created to explain them

The hypothesis is a testable statement

The researcher must design experiments which will test the validity of the hypothesis

It is important that the experiments be able to rule out other possible causes for the observed experimental results

93

Sample Hypothesis

Hypothesis:
The choice of background and text color will affect the user’s ability to check a document

The independent variable is the background and text color

These can be manipulated by the researcher

The dependent variable is the user’s performance in checking the document

94

Sample Hypothesis

The hypothesis would be tested by varying the text and background colors and measuring the user’s performance

The performance would have to be significantly different to indicate that it was being affected by the independent variable

Statistical methods can be used to determine if a difference in performance is significant or not
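One common statistical method for such a comparison is a two-sample t statistic. The sketch below computes Welch's t for two invented sets of task timings; the resulting value would then be compared against a t-table, as the next slide notes.

```python
# Sketch: Welch's t statistic for comparing task times under two
# colour conditions (invented timings, in seconds).
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances)."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

black_on_white = [52, 48, 50, 47, 53]   # seconds to check a document
red_on_blue = [65, 70, 62, 68, 66]

t = welch_t(black_on_white, red_on_blue)
print(round(t, 2))
```

A large-magnitude t here would suggest the colour choice (the independent variable) really does affect checking performance; identical groups give t near zero.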

95

Sample Hypothesis

Usually, statistical tables are used to determine if a result is statistically significant or not

After the hypothesis is formed, a null hypothesis is also formed

This is usually a statement that the independent variable will not affect the dependent variable:
“The choice of background and text color will not affect the user’s ability to check a document”

Some experiments show the null hypothesis to be false in order to support the hypothesis

96

Experimental Design

The major problem in experimental design is to ensure that the independent variable is the only factor affecting the dependent variable

Several techniques can be employed to ensure this

The technique chosen depends on the nature of the experiment

97

Control Groups

One way to ensure that the independent variable is what is affecting the result is to use a control group

This is often done in drug testing, where a control group is given a placebo to nullify any psychological effects

The same idea can be applied to UI measurements where the control group is given the original interface and told that improvements have been made

98

Subject Selection

Another problem is that one selected group may be inherently better at performing the task than another group

The process of matching is used to ensure that the members of two groups are the same

This involves identifying attributes that subjects in each group must have and making sure that they are present in each group

99

Subject Selection

Possible attributes to match include:
Both groups having equal ratios of men and women
The ages of both groups being similar
The occupations of both groups being similar

Differences between groups are referred to as confounding variables, as they can alter the test results
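Matching can be sketched as a stratified split: subjects are grouped by a matched attribute and then dealt out alternately, so each group ends up with the same mix. The subjects and occupations are invented for illustration.

```python
# Sketch: matched (stratified) assignment of subjects to two groups.
# Subject data are invented.
from collections import defaultdict

subjects = [("ann", "clerk"), ("bob", "clerk"), ("cho", "manager"),
            ("dee", "manager"), ("eli", "clerk"), ("fay", "clerk")]

def matched_split(subjects, key_index=1):
    """Split subjects into two groups with the same attribute mix."""
    groups = ([], [])
    by_attribute = defaultdict(list)
    for s in subjects:
        by_attribute[s[key_index]].append(s)
    # Deal each stratum out alternately, like dealing cards
    for stratum in by_attribute.values():
        for i, s in enumerate(stratum):
            groups[i % 2].append(s)
    return groups

a, b = matched_split(subjects)
# Each group now holds two clerks and one manager
print([occ for _, occ in a], [occ for _, occ in b])
```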

100

Related Measure Design

One way to eliminate the differences between individuals is to perform the experiment on the same individual

This is called related measure design

The performance of a user can be measured:
Performing the task alone without feedback
Performing the task with a researcher providing feedback

The results can then be compared to determine if feedback helps

101

Related Measure Design

Related measure design also suffers from potential problems:

The order effect
In which the order in which tasks are performed affects the results

The practice effect
Where the user’s performance improves with practice

The fatigue effect
Where the user’s performance decreases with time

102

Related Measure Design

The way to overcome these problems is to use a technique called counterbalancing

In this technique one half of the subjects perform the experiment in one order and the other half perform the experiment in the other order
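Counterbalancing can be sketched as alternating the condition order across subjects; the subject names are invented, and the two conditions echo the with/without-feedback example above.

```python
# Sketch: counterbalancing two conditions across subjects so that order,
# practice, and fatigue effects cancel out across the groups.

def counterbalance(subjects, conditions):
    """Give even-numbered subjects one order and odd-numbered subjects
    the reverse order."""
    a, b = conditions
    return {s: [a, b] if i % 2 == 0 else [b, a]
            for i, s in enumerate(subjects)}

plan = counterbalance(["s1", "s2", "s3", "s4"],
                      ["no feedback", "with feedback"])
print(plan["s1"])  # ['no feedback', 'with feedback']
print(plan["s2"])  # ['with feedback', 'no feedback']
```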

103

Problems with Experiments

There are many known problems with experiments

You cannot prove a hypothesis: no matter how many cases support it, one case to the contrary can disprove it

Experiments in the social sciences cannot remove extraneous variables as well as in the natural sciences, so the results cannot be trusted as much

104

Problems with Experiments

Experiments prove or disprove the hypothesis of the researcher and ignore any input from the subjects

Experiments are often scaled-down versions of real-world problems, and it is not always guaranteed that the results will be applicable to the full-scale problem

105

Problems with Experiments

Experiments are usually conducted in a laboratory, and this setting might affect the results
One solution is to conduct the experiment in the workplace to eliminate differences in setting

Some experiments are contrived and have little to do with reality, so the results might not apply to the real world

106

Think Aloud

Think aloud is a technique in which the user is asked to think aloud while performing a task

This allows the evaluator to:
Determine how the user approaches a task
Determine the user’s model of the system
Determine why the user makes the decisions he or she does

107

Think Aloud

Think aloud requires that an observer be present to take notes or that the session be recorded

Some users will be uncomfortable thinking aloud or might be embarrassed

Thinking aloud might alter the performance of other users

108

Cooperative Evaluation

This is a variation of think aloud in which the user is encouraged to think of themselves as participating in the evaluation rather than just being a subject

The user can make comments which he/she thinks will aid in evaluation

The evaluator can ask the subject for clarification of a comment

109

Cooperative Evaluation

The subject can ask the evaluator for help in a poorly understood part so that the evaluation can continue

In short, the two parties cooperate to produce an evaluation of the system

These techniques produce a large volume of output which must be analyzed

110

Cooperative Evaluation

Recommendations:
5 users are enough
The tasks should be specific (e.g., use the drawing package to draw a circle)

The results should be recorded in some way

The results should be broken down into unexpected behaviour and user comments

111

Sample Cooperative Evaluation


118

Wizard of Oz

The Wizard of Oz is a technique whereby a computer system can be evaluated without having to build it

A human, possibly located remotely, acts as the system

The user can then interact with a system that is very difficult to prototype and provide feedback on the design

119

Logging

Logging all of the actions performed by the user is a useful technique for evaluating how they use the system and how long tasks take

Logging can take the form of a diary maintained by the user

It can also be automatically performed by the system

120

Logging

If automated logging is performed, the user must be made aware of this, from both an ethical and, in many jurisdictions, a legal perspective

Users who know they are being logged might change their behaviour and it will take them some time to revert to normal behaviour

Logging tends to produce large amounts of data
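A rough sketch of automated logging is shown below; the `ActionLogger` class and the action names are invented for illustration. The system timestamps each user action so that task durations can be derived during analysis.

```python
import time

class ActionLogger:
    """Minimal automated-logging sketch: records each user action with a
    timestamp so task durations can be computed afterwards."""

    def __init__(self):
        self.entries = []  # list of (timestamp, action) pairs

    def log(self, action, timestamp=None):
        # In a live system the timestamp is taken from the clock; an
        # explicit value is allowed here to keep the sketch testable.
        self.entries.append((time.time() if timestamp is None else timestamp, action))

    def task_duration(self, start_action, end_action):
        # Time between the first occurrence of each named action.
        first_seen = {}
        for t, a in self.entries:
            first_seen.setdefault(a, t)
        return first_seen[end_action] - first_seen[start_action]
```

Even a logger this simple accumulates one entry per click or keystroke, which is why logging tends to produce large amounts of data.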

121

Expert Evaluations

Evaluation techniques which involve users can be both time-consuming and expensive. Inspection methods have been developed whereby an HCI practitioner can evaluate the usability of a product without involving users.

122

Usability Inspection Methods

Ten methods for evaluating usability by inspection have been developed:
Heuristic evaluation
Guideline review
Pluralistic walkthrough
Consistency inspections
Standards inspections
Cognitive walkthrough
Formal usability inspections
Feature inspections
Expert appraisals
Property checklists

123

Heuristic Evaluation

This tests the interface against a predefined set of guidelines, or heuristics. The idea is that a product which satisfies the heuristics must be usable. Several sets of guidelines have been developed by different researchers. These heuristics are all similar.

124

Guideline Review

Many developers have built up a set of style guidelines over the years. Often these are developed to ensure that new interfaces are similar to interfaces already developed by the company, or that all interfaces for a platform are consistent. The usability engineer should examine the quality of the guidelines before making sure that they are followed.

125

Pluralistic Walkthrough

A pluralistic walkthrough involves users, developers, and usability experts. Each group brings its own expertise to the evaluation. Each group makes an evaluation based on heuristics. All groups then meet to reach a final evaluation.

126

Consistency Inspection

This is carried out by experts without user involvement

The goal is to ensure consistency throughout the system

This is done by inspecting the controls on all aspects of the system and comparing them for consistency

127

Standards Inspection

Standards differ from guidelines in that they are formally adopted by a developer or software house. Sometimes standards exist for human-factors reasons. Other standards exist simply to have things done consistently. Standards inspections ensure that a product conforms to the standards.

128

Standards Inspection

Standards can come from many sources:
Internal developer standards
Internal client standards
External industry standards
External national standards
External international standards
Often standards represent compromises, not the best solutions.

129

Cognitive Walkthroughs

This technique has an expert go through a task pretending to be a user. The goal is to identify any difficulties that a user might encounter. For this to work, the expert must have a good knowledge of the user and be able to accurately act like the user. A walkthrough is only as good as the expert and their knowledge of the user and the task.

130

Formal Usability Inspections

This is a usability inspection carried out by a team. It is carried out as part of the software testing process. Normally, three testers are sufficient. Any of the other test guidelines or heuristics can be used by the team to evaluate usability.

131

Feature Inspections

These inspections are concerned with:
What features and functionality exist
How they are related to one another
Whether these features and functionality serve to meet the requirements for the system

132

Expert Appraisals

This is an appraisal by an expert using any of the other techniques. Many experts will choose to use a cognitive walkthrough. Other experts will simply use their expertise to evaluate a system. An evaluation by experts is cheaper than involving users, but the evaluation is only as good as the expert.

133

Property Checklists

These are similar to heuristics but take the form of high-level goals for usability, identified by attributes. The evaluator goes through the checklist and ticks off where the system meets the attributes. Researchers have produced long checklists. Once the checklist has been prepared, the test can be performed by relatively unskilled personnel.
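A property checklist can be as simple as a list of attributes that the evaluator ticks off. The sketch below illustrates the idea; the checklist items and the `unmet_attributes` function are invented examples, not taken from any published checklist.

```python
# Property-checklist sketch: the evaluator ticks off each attribute the
# system meets; unticked items are reported as usability problems.
# The attribute names are illustrative only.
CHECKLIST = [
    "Dialogue uses the user's language",
    "Feedback is given for every action",
    "Exits are clearly marked",
    "Error messages are understandable",
]

def unmet_attributes(ticks):
    """ticks maps a checklist item to True if the evaluator judged the
    system to meet it; anything missing or False is reported."""
    return [item for item in CHECKLIST if not ticks.get(item, False)]
```

Because the judgement is reduced to ticking boxes, the test can be run by relatively unskilled personnel once the checklist itself has been prepared.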

134

Usability Heuristics

Heuristics are broad-based rules designed to ensure a usable interface. Many HCI experts have produced lists of heuristics. Two of these lists are presented here.

135

Nielsen’s Heuristics

Simple and natural dialogue
  Dialogue should be simple and minimal
Speak the user’s language
  Dialogue should be in terms with which the user is familiar
Minimize the user’s memory load
  The user should not have to remember values from one part of a dialogue to another; instructions should be visible or easily found

136

Nielsen’s Heuristics

Consistency
  Systems should be consistent
Feedback
  The user should be provided with feedback about the state and actions of the system
Clear exits
  Users should be able to exit easily from any part of the system, especially if they got there by accident

137

Nielsen’s Heuristics

Shortcuts
  There should be accelerators for experts, which could be hidden from novices
Good error messages
  Messages should be easily understandable
Prevent errors
  Systems should try to prevent errors
Help & documentation
  This should be easy to use and search

138

Shneiderman’s Heuristics

Strive for consistency
Provide shortcuts for frequent users
Offer informative feedback
Dialogs should yield closure
  Actions should have a beginning, middle, and end
  The user should know when an operation is complete so they can move on

139

Shneiderman’s Heuristics

Offer simple error handling
  Errors should be designed out of the system
  Incorrect commands should not harm the system
Easy reversal of actions
Internal locus of control
  Users should initiate actions, not the system
Reduce short-term memory load
  Reduce the amount the user has to remember