
Assignment Cover Sheet

MSc in User Experience Design

Student Name: Stephen Norman

Student Number: N00147768

Programme: MSc UX Design

Year of Programme: 2015/2016

Module Name: User Research and Usability

Assignment: Understanding Users and Tasks

Assignment Deadline: 08/01/2016

I declare that this submission is my own work. Where I have read, consulted and used the work of others I have acknowledged this in the text. Signature: Stephen Norman Date: 08/01/2016


Understanding Users and Tasks

By Stephen Norman

Introduction

This paper sets out to discuss three user experience research methods in detail: Ethnographic Field Studies, Card Sorting, and A/B Testing. Before examining these methods, the paper presents the reader with a background on how research methods are categorised along three dimensions. This will allow the reader to understand why the author has chosen these methods for different stages of the development lifecycle. From this understanding, the discussion moves on to the types of goals set during a project: how they change depending on the current stage of the project, and how they are informed by the data-gathering requirements. This paper does not discuss the eighty or so other research methods which are available and which are equally useful for collecting qualitative and quantitative user information. The methods selected are merely a snippet, a glimpse into the user research field. It is hoped that the presentation of these studies will give the reader a fuller understanding of each.

The Three Dimensions

Each research method can be positioned along three dimensions, each with its own style of measuring user information. The three dimensions discussed here are Attitudinal vs. Behavioural, Qualitative vs. Quantitative, and Context of Use (see Fig. 1). Each varies in importance, as each dimension enables data collection in a specific way.

Figure 1. A landscape of user research methods.


Attitudinal vs. Behavioural dimensions

These two dimensions can be summed up as "what are the users thinking" (attitudinal) versus "what are they doing" (behavioural). Attitudinal methods, such as surveys, interviews, and focus groups, may be used in exploratory research; they give designers a better understanding of how their designs are perceived. This is not to say that such studies cannot be used elsewhere in the lifecycle. In fact, post-implementation studies using these methods can contribute just as much valuable data and determine whether the project goals have been met. It must be remembered, however, that what users think does not always correlate with what they actually do. Behavioural studies are a more hands-on approach to research: they record how the user behaves in a specific context, and research is conducted based on their interactions. A very common and widely used behavioural study is A/B Testing, which collects quantitative user data and is discussed later. Other behavioural methods, such as journey maps and contextual inquiries or field studies, can provide both qualitative and quantitative information depending on their use in a project.

Figure 2. Questions answered by research methods across the landscape.

Qualitative vs. Quantitative

Qualitative and quantitative methods can be explained in many ways. Rohrer (2014) notes that these methods differ in the way the data is collected, while Davies (2014) describes the quantitative approach as scientific and the qualitative approach as naturalistic, bringing the world of context into the fold. Both descriptions are accurate in their own right. Quantitative research is an indirect, scientific approach to data collection: it uses large amounts of data to answer goals, and, as mentioned above, A/B testing is a quantitative study. Qualitative research consists of direct observation of the user, such as usability studies, contextual inquiries, card sorting, and so forth. Qualitative research is based on smaller numbers of users and is much better suited to answering the "why". For example, a quantitative study may provide a calculated percentage of users exiting a page within five seconds, but not tell you why they left, whereas a qualitative study of the same page will tell you what problems the users encountered, why, and their reasons for leaving. These methods are chosen according to the project's goal requirements and will be discussed in depth later.

Context of Use

Context is a term described and defined differently depending on whom you ask. For example, Trivedi (2012) discusses context as location, time of day, season, or temperature. It can also be described as the current state of the user, whether physical, social, or emotional. Simply put, "context is anything which has an effect on the human behaviour" (Trivedi, 2012). This qualitative data can yield rich information about a product, and can also unearth unforeseen issues. This is especially true of user artefacts, described as any tool or object the user employs during the study. For example, during the testing process the user might pick up a pencil and jot down some information; the pencil and notes are artefacts of the study and must then be considered in the research and design process.

These dimensions are determined by the research goals of the project, and a project will draw on a variety of them during its lifecycle. For example, an attitudinal quantitative study such as a survey may be used at the initiation of a project as exploratory research; then, once development has reached the design phase, a more qualitative approach using both behavioural and context-of-use studies can gather further detail as to why users behave as they do.

The Selected Methods

Understanding the dimensions above helps determine the optimal method of conducting research. As stated earlier, there are many different types of research methods. For the purposes of this paper, three have been chosen: Ethnographic Field Studies, Card Sorting, and A/B Testing. Their characteristics and optimal range of use are discussed and compared below.
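The paper's positioning of each method along the three dimensions could be summarised in a small data structure. The sketch below is purely illustrative; the keys and phrasing are the author's own summary rather than a formal taxonomy.

```python
# An illustrative summary of where the three selected methods sit
# along the dimensions discussed above (field names are hypothetical).
methods = {
    "Ethnographic Field Study": {
        "data": "qualitative",
        "dimension": "attitudinal/behavioural",
        "typical_phase": "exploratory",
    },
    "Card Sorting": {
        "data": "qualitative and quantitative",
        "dimension": "attitudinal",
        "typical_phase": "design",
    },
    "A/B Testing": {
        "data": "quantitative",
        "dimension": "behavioural",
        "typical_phase": "post-implementation",
    },
}

for name, traits in methods.items():
    print(f"{name}: {traits['data']} ({traits['dimension']}), "
          f"best suited to the {traits['typical_phase']} phase")
```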

Ethnography Field Studies

Ethnographic field studies are a qualitative research method developed by anthropologists studying non-Western cultures (Jones, n.d.). The method has since become commonplace in other fields, such as human-computer interaction. Ethnography is the study of how users interact within their own environment, taking in all cultural and social factors. Boehm (2010) argues that ethnographic field studies should not be conflated with contextual inquiry, as contextual inquiries focus more on the human-computer interface than on social interactions. However, this is a matter of opinion, and a contextual inquiry could easily include ethnographic elements. It could be said that an ethnographic field study correlates more closely with the attitudinal dimension, whereas a contextual field study is more behavioural. Again, the overall purpose of this type of research is to get into the minds of the users, understanding their perspectives and points of view while observing their interactions with the system and with each other.

Before any study is granted, both parties must agree on the terms of the research. For instance, materials may be sensitive or classified, in which case a non-disclosure agreement (NDA) is signed. Only then can research begin; however, an NDA may limit the ability to publish the study.

Another form of field study is covert research. This is a very sensitive area, as some ethical codes strictly forbid this kind of research. Jones (n.d.) notes that where access is granted, it is recommended that participants be informed and retrospective consent obtained. A major consideration is the protection of the individual: when required, the study should guarantee anonymity, and this should be agreed well before the results are published.

Figure 3. An example of an Open Card Sort where the participant was asked to organize topics then classify each into groups which they'd name.

Card Sorting

When compared to ethnography, card sorting shares similarities in the way the exercise is conducted. Card sorting could easily be incorporated into an ethnographic field study as a subset of the overall research. Ethnographic field studies encompass a wide range of studies, such as interviews and focus groups, and aim to understand the "why" about a user. Card sorting is different: it is a specific exercise designed to organise the information architecture and content of a website. Unlike field studies, card sorting can be as quantitative as it is qualitative; whether the research is conducted one on one or in group settings, the results are both qualitative and quantitative. Card sorting remains in the attitudinal dimension because users vocalise and think aloud about their actions, and because users do not necessarily do what they say, at least fifteen users should be tested to ensure a good dataset.

There are two main ways of conducting a card-sorting exercise. The first is an open card sort (see Fig. 3), which allows users to organise the cards into their own groups, which they then name; it is a good exercise for understanding the users' language. The other is a closed card sort, in which the group categories are already named and it is up to the user to classify all content within these groupings. Card sorting is best done either one to one or in small groups; if studying the ethnography of users, a group setting may provide more qualitative data. Like ethnographic field studies, card sorting may require prior consent and confidentiality agreements from the user(s).
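One common way of quantifying open card-sort results is a card-by-card similarity matrix: for each pair of cards, count the fraction of participants who placed them in the same group. The sketch below assumes each participant's sort is recorded as a mapping from their own group names to lists of cards; all card names and groupings are invented purely for illustration.

```python
from itertools import combinations

# Each participant's open sort: their own group names mapped to the
# cards they placed there (hypothetical example data).
sorts = [
    {"Account": ["Login", "Profile"], "Help": ["FAQ", "Contact"]},
    {"Users": ["Login", "Profile", "Contact"], "Support": ["FAQ"]},
    {"Account": ["Login", "Profile"], "Support": ["FAQ", "Contact"]},
]

cards = ["Contact", "FAQ", "Login", "Profile"]  # alphabetical, so pair keys are canonical
pair_counts = {pair: 0 for pair in combinations(cards, 2)}

# Count how often each pair of cards was placed in the same group.
for sort in sorts:
    for group in sort.values():
        for pair in combinations(sorted(group), 2):
            pair_counts[pair] += 1

# Similarity = fraction of participants who grouped the pair together.
similarity = {pair: n / len(sorts) for pair, n in pair_counts.items()}
print(similarity[("Login", "Profile")])  # 1.0: all three participants paired these
```

Pairs with high similarity scores are strong candidates for being grouped together in the final information architecture; low or zero scores suggest the cards belong in separate categories.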

Figure 4. An example of A/B Testing using Google Experiments.

A/B Testing

A/B testing sits at the opposite end of the landscape from card sorting. It is a quantitative, behavioural testing method which in most cases is conducted after implementation to assess specific key metrics. An A/B test is the process of testing one or more variations against the original design, usually on a page-for-page basis, though this can be extended to individual design elements as well. One benefit of A/B testing is that it can accurately measure small performance differences: for instance, if a card-sorting exercise was used to prioritise navigational elements, an A/B test could prove its effectiveness by analysing the change against the original design. The downside of this type of testing is that it does not explain why users found the new design easier; to find the answer, researchers could return to the qualitative card-sorting data or gather user feedback through online surveys. Another benefit of A/B testing is that it is cheap, or even free (see Fig. 4), as Google now offers Experiments through its analytics platform; depending on the in-house skillset, this has almost eliminated the need for third-party testing platforms. A/B testing is narrowly focused and should ideally answer only one goal or key performance indicator (KPI) at a time. Should testing be required before final production, Nielsen (2005) suggests that paper prototyping may offer an alternative solution and yield more qualitative results. This type of testing can eliminate bad designs and allow time to polish good ones before any coding has commenced, saving time and money.
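Because A/B tests detect small performance differences, the observed lift is usually checked for statistical significance before acting on it. A common approach (not specific to any one testing platform) is a two-proportion z-test; the sketch below uses only Python's standard library, and the traffic and conversion numbers are invented for illustration.

```python
from math import sqrt, erf

# Variant A (original) and variant B (new design): hypothetical counts
# of conversions and visitors for each variant.
conv_a, n_a = 200, 5000
conv_b, n_b = 240, 5000

p_a, p_b = conv_a / n_a, conv_b / n_b
p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled conversion rate
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(f"lift: {p_b - p_a:.3f}, z = {z:.2f}, p = {p_value:.3f}")
```

With these illustrative numbers the result hovers around the conventional 0.05 threshold, which shows why A/B tests often need large sample sizes: a 0.8-point lift on a 4% baseline is barely distinguishable from noise even with 5,000 visitors per variant.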

Research requirements are based entirely on the goals created for the project. Whatever the goal may be, there is always a well-suited research method designed to answer it. The examples above were selected for their differences, but using them together in a project would give the researcher better grounding to defend their designs.

Setting Goals

Research goals are as varied as the ways of conducting research. One main point to make when creating goals is that they are not static: research goals will change throughout the course of development, with new goals emerging from unforeseen results further along the timeline. A very good example from Rohrer (2014) is aligning research methods with the phase of product development and its associated objectives (see Fig. 5). The beginning stages of development are the time to consider new ideas and opportunities for future development. Research during this phase is mostly exploratory but can be both qualitative and quantitative in nature. This research period provides information about who is using the product and to what extent. During this early stage, personas and scenarios are created to help address the pain points. Sometimes this can lead to altering a goal or adding new ones to the design process.

Figure 5. An example of product development phases and the types of research goals to coincide with each.


When the process moves into the design phase, the goals become more qualitative, aimed at optimising the design. Examples of suitable methods here are card sorting, paper prototyping, and field studies.

Once development moves into its final stages, the new designs need to be assessed and proven against the original. Performance measurement and comparison goals are then set, with research switching to more quantitative methods. This does not mean that quantitative methods are the only methods to be used here: some projects will require post-implementation laboratory studies, or in other cases field studies, to assess the actual impact on or benefit to the users. In some process flows, such as Agile UX, this is a highly iterative cycle: after one design has been launched and assessed, it goes back to the drawing board for the next version. In short, research goals and the project timeline are coupled; as one phase of goals is met and the project moves forward, so too do the research goals, which are then addressed by the set of research methods covering that particular dimension of study.

Conclusion

This paper set out to discuss and compare three selected research methods. It did so by first presenting the reader with the dimensions of study along which all methods are categorised. This categorisation provided sufficient background for understanding how each method is conducted and what type of information can be expected from it. Three methods were covered: Ethnographic Field Studies, Card Sorting, and A/B Testing. These were discussed and compared to prepare the reader for understanding their relation to the goals set during a project lifecycle. The various types of goals were then addressed, especially the nature of their change throughout the design process and how these changes affect the types of research required.

In conclusion, the information in this paper covers only a small snippet of the full range of user research methods available. A core understanding of the dimensions was applied throughout, leading to a fuller explanation of the three methods. More importantly, it is hoped that this information will give the reader a better understanding of where the selected methods sit within a project and how they address particular goals.


References

Boehm, N. (2010). Ethnography in UX. UXmatters. Retrieved 7 January 2016, from http://www.uxmatters.com/mt/archives/2010/06/ethnography-in-ux.php

Davies, M. B., & Hughes, N. (2014). Doing a successful research project: Using qualitative or quantitative methods. Palgrave Macmillan.

Jones, M. (n.d.). Ethnographic and field study. Camtools.cam.ac.uk. Retrieved 3 January 2016, from https://camtools.cam.ac.uk/wiki/site/e30faf26-bc0c-4533-acbc-cff4f9234e1b/ethnographic%20and%20field%20study.html

Nielsen, J. (2005). Putting A/B Testing in Its Place. Nngroup.com. Retrieved 6 January 2016, from https://www.nngroup.com/articles/putting-ab-testing-in-its-place/

Rohrer, C. (2014). When to Use Which User-Experience Research Methods. Nngroup.com. Retrieved 7 January 2016, from https://www.nngroup.com/articles/which-ux-research-methods/

Trivedi, M. C., & Khanum, M. A. (2012). Role of context in usability evaluations: A review. arXiv preprint arXiv:1204.2138.

Bibliography

Croft, P. (2014). Improving your information architecture with card sorting: A beginner's guide. Smashing Magazine. Retrieved 6 January 2016, from https://www.smashingmagazine.com/2014/10/improving-information-architecture-card-sorting-beginners-guide/

Fairhead, J., Kaur, R., Mookherjee, N., Staples, J., Degnen, C., & Marvin, G. (2011). Ethical Guidelines for good research practice (4th ed.). Association of Social Anthropologists of the UK and the Commonwealth (ASA). Retrieved from http://www.theasa.org/downloads/ASA%20ethics%20guidelines%202011.pdf

Jurca, G., Hellmann, T. D., & Maurer, F. (2014, July). Integrating agile and user-centered design: a systematic mapping and review of evaluation and validation studies of agile-ux. In Agile Conference (AGILE), 2014 (pp. 24-32). IEEE.

Nielsen, J. (2004). Card Sorting: How Many Users to Test. Nngroup.com. Retrieved 7 January 2016, from https://www.nngroup.com/articles/card-sorting-how-many-users-to-test/

Phetteplace, E., & Etches, A. (2013). Know Thy Users. Reference & User Services Quarterly, 53(1), 13-17. Retrieved from https://journals.ala.org/rusq/article/view/2882/2940

Usability.gov. (2013). Card sorting. Retrieved 3 January 2016, from http://www.usability.gov/how-to-and-tools/methods/card-sorting.html