Interaction Design (IxD) SCIM2413 Evaluation and Refinement Week 9








Introducing Evaluation

Introduction: Evaluation is integral to the design process. It focuses both on the usability of the system and on the user's experience when interacting with it.

The Why, What, Where, and When of Evaluation:
Why evaluate?
What to evaluate?
Where to evaluate?
When to evaluate?

For further reading please refer to Main Ref Book (Page 432-443)

Why Evaluate?

Users expect more than just a usable system; they also look for a pleasing and engaging experience. "User experience encompasses all aspects of the end-user's interaction ... The first requirement for an exemplary user experience is to meet the exact needs of the customer, without fuss or bother. Next comes simplicity and elegance, which produce products that are a joy to own, a joy to use," the Nielsen Norman Group notes.

What to Evaluate?

Evaluation can range from low-tech prototypes to complete systems; from a particular screen function to the whole workflow; and from aesthetic design to safety features. Example: developers of a new web browser may want to know whether users find items faster with their product.

Where to Evaluate?

Where evaluation takes place depends on what is being evaluated. Example: remote studies of online behavior, such as social networking, can be conducted to evaluate the natural interactions of many participants in their own homes, cheaply and quickly.

When to Evaluate?

At what stage in the product lifecycle evaluation takes place depends on the type of product.

Formative evaluations: evaluations done during design to check that a product continues to meet users' needs. They cover a broad range of the design process, from the development of early sketches and prototypes through to tweaking and perfecting an almost finished design.

Summative evaluations: evaluations done to assess the success of a finished product. If the product is being upgraded, the evaluation may not focus on establishing a set of requirements, but may instead evaluate the existing product to ascertain what needs improving.

Types of Evaluation:
Controlled Settings Involving Users
Natural Settings Involving Users
Any Setting Not Involving Users
Choosing and Combining Methods
Opportunistic Evaluations

Controlled Settings Involving Users

Controlled settings enable evaluators to control what users do, when they do it, and for how long. They also enable evaluators to reduce outside influences and distractions. Usability testing involves collecting data using a combination of methods (e.g. experiments, observation, interviews, questionnaires) in a controlled setting.

Natural Settings Involving Users

The aim of field studies is to evaluate people in their natural settings. They are used primarily to: help identify opportunities for new technology; establish the requirements for a new design; and facilitate the introduction of technology, or the deployment of existing technology, in new contexts.

Any Setting Not Involving Users

Evaluations that take place without involving users are conducted in settings where the researcher has to imagine or model how an interface is likely to be used. Inspection methods are commonly employed to predict user behavior and to identify usability problems, based on knowledge of usability, users' behavior, the context in which the system will be used, and the activities that users undertake.

Choosing and Combining Methods

Methods are often combined to obtain a richer understanding. Example: usability testing in labs is sometimes combined with observations in natural settings to identify the range of usability problems and to find out how users typically use a product.

Opportunistic Evaluations

Evaluations may be detailed, planned studies or opportunistic explorations. Getting early feedback in the design process is important for deciding whether it is worth developing an idea into a prototype. Typically, early evaluations are informal and do not require many resources. Opportunistic evaluations with users can be conducted in addition to more formal evaluations.

Evaluation Case Studies

Case Study 1: An Experiment Investigating a Computer Game
Case Study 2: In-the-Wild Study of Skiers
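Case Study 1 reports subjective ratings summarized as a mean and standard deviation per adjective and condition. As an illustrative sketch only (the ratings below are hypothetical 5-point Likert scores from ten participants, not the study's actual raw data), Python's statistics module can compute such descriptive statistics:

```python
import statistics

# Hypothetical 5-point Likert ratings from ten participants in one
# condition -- illustrative data only, not the study's collected ratings.
ratings = [4, 5, 4, 3, 5, 4, 5, 4, 5, 4]

mean = statistics.mean(ratings)     # arithmetic mean of the ratings
st_dev = statistics.stdev(ratings)  # sample standard deviation

print(f"Mean: {mean:.1f}, St. Dev.: {st_dev:.3f}")  # Mean: 4.3, St. Dev.: 0.675
```

Each cell in the table below was produced the same way: averaging one adjective's ratings within one play condition.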

Mean subjective ratings:

             Playing against computer   Playing against friend
             Mean      St. Dev.         Mean      St. Dev.
Boring       2.3       0.949            1.7       0.949
Challenging  3.6       1.08             3.9       0.994
Easy         2.7       0.823            2.5       0.850
Engaging     3.8       0.422            4.3       0.675
Exciting     3.5       0.527            4.1       0.568
Frustrating  2.8       1.14             2.5       0.850
Fun          3.9       0.738            4.6       0.699

In-the-Wild Study of Skiers

a) A skier wearing a helmet with an accelerometer (dark red box) and a mini camera, for assessing the skier's performance. b) The smartphone that provides feedback to the skier in the form of visualizations.

Components of the e-skiing system. Black arrows indicate the data transfers between devices, servers, and linking systems. Arrow shapes indicate different types of communication, and the red circles indicate the data collection points.

What Did We Learn from the Case Studies?

The case studies show how different evaluation methods are used in different physical settings, involving users in different ways, to answer different kinds of questions. They also show how evaluators need to be creative when working with innovative systems, dealing with constraints created by the evaluation setting and with the robustness of the technology being evaluated.

For further reading please refer to Main Ref Book (Page 443-448)