Work in progress: case study about the impact of usability work

TonePrint Editor old version
Figure 1: Screenshot of the original version of the TonePrint Editor.

Currently, I’m spending part of my summer writing a small-scale case study about the impact of usability work. Back in 2014, I was part of a team arranging a redesign workshop [1] for a development team at the company TC Electronic, now known as Music Group Innovation. They wanted to evaluate and improve an application called TonePrint Editor (see figure 1). The essence of the workshop was to facilitate the development team in fixing a list of previously identified usability problems. I recently returned to Music Group Innovation to conduct a follow-up study of the redesign process of the TonePrint Editor: exploring how design changes to the application were made, and what had happened with the identified usability problems.

Project introduction

The use of usability evaluation methods is a widely accepted approach during iterative software development. One form of usability evaluation is the formative approach, often conducted with a think-aloud protocol. Formative usability evaluations are used to get feedback about users’ behavior when using an application and to gather the users’ qualitative feedback about the concepts and designs used. The feedback reveals insights about how users perceive, understand, and interact with a system, insights that can be used to improve and develop an application. A lot of research has focused both on developing different usability evaluation methods and on evaluating the effectiveness of existing methods [6]. Less attention has been paid to the relationship between the output from evaluations and the improvements made to a given application [7]; such research is complicated, time consuming, and resource demanding [5]. Dennis Wixon boils this down to the point that “…problems should be fixed and not just found” [8]. Since the point of conducting usability evaluations is to end up with an improved interaction experience, it is relevant to investigate what happens with identified usability problems, how developers use the feedback from evaluations, and what they perceive as useful about the insights.

TonePrint App new version
Figure 2: Screenshot of the new version of the TonePrint App.

In the redesign workshop held two years ago [1], the main focus was to have the developers engage in an innovative redesign suggestion process through active involvement. As part of the workshop, we included a short lecture about basic principles of interaction design. The intention of the lecture was to have the developers think about UI design in broader terms and get inspiration for coming up with redesign proposals. Before the workshop, the developers had conducted a formative usability evaluation and compiled a usability problem list consisting of 19 problems. The outcome of the workshop was ideas for changing the flow of the main screen and minor changes to the interface. After participating in the redesign workshop, the development team continued the redesign process and has made several changes to the application.

During my revisit to Music Group Innovation I conducted a semi-structured interview with two members of the development team: a product manager and a developer. We spent a couple of hours talking about the changes made to the application, the redesign process, and the impact on the organization after engaging in user-centered design.

Preliminary insights

I’m still in the process of analyzing the interview in detail, but here I will outline a couple of interesting insights.

Prioritizing usability problems

When we talked about the list of the 19 identified usability problems, the first step after the evaluation was to prioritize the problems to decide which ones to fix. During the compilation of the list, a classic severity rating (minor, moderate, or severe) was assigned to each problem. Additionally, two other ratings were added. The interviewed programmer would give a complexity rating (1-8), the estimated technical complexity of fixing the problem. The interviewed project manager would give a business value rating (1-8), the estimated importance of the problem relative to the functionality of the application. Both ratings also fed into the estimation of the resources required. The three ratings were then used to decide which problems to prioritize. Through this prioritization process, the development team was able to understand and analyze the problems from more angles. It also made the fixing of usability problems more goal-oriented. Initially, they prioritized seven problems. In the end, the team made fixes for 14 problems.
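As a minimal illustration of how such a multi-rating prioritization could be represented and sorted, here is a sketch in Python. The field names, the example problems, and the combined scoring heuristic are my own assumptions for illustration, not the team’s actual scheme:

```python
from dataclasses import dataclass

# Severity ratings as assigned during the compilation of the problem list.
SEVERITY_WEIGHT = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class UsabilityProblem:
    description: str
    severity: str        # "minor", "moderate", or "severe"
    complexity: int      # 1-8, estimated technical complexity of the fix (developer)
    business_value: int  # 1-8, estimated importance to the application (product manager)

    def priority_score(self) -> float:
        # Illustrative heuristic: favor severe, high-value problems that are cheap to fix.
        return SEVERITY_WEIGHT[self.severity] * self.business_value / self.complexity

# Hypothetical example problems, not taken from the actual 19-problem list.
problems = [
    UsabilityProblem("Main screen flow does not match users' expectations", "severe", 6, 8),
    UsabilityProblem("Button label is hard to read", "minor", 1, 3),
]

# Highest-priority problems first.
for p in sorted(problems, key=lambda p: p.priority_score(), reverse=True):
    print(f"{p.priority_score():.1f}  {p.description}")
```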

Getting problems confirmed and extended

Consistent with conclusions from another study [3], including the development team in the formative evaluation provided the developers with a more specific understanding of the usability problems. This understanding is more detailed than what is gained by simply reading a usability problem report [3]. They were already aware of, or had ideas about, possible usability problems, but in line with findings from another study [4], they found it useful to get confirmation or disconfirmation. What is more interesting is that their fuzzy ideas about problems were concretized and extended. For example, the flow of operations on a particular screen was not in line with the flow of operations the users found logical, or with how the users wanted to interact with the application. The project manager characterized gaining insight into this design flaw as a big eye-opener. It was not identified as a specific usability problem, but the development team identified it as a more generic design problem leading to usability problems. Interestingly, the most significant redesign considerations were sparked by the feedback gained through the involvement in the evaluation and the redesign workshop, and less by the specific usability problems.

Design changes

During the redesign process, a couple of significant design changes were decided on.

As mentioned above, the flow of operations and the order of options on a screen were found to be problematic. While this was not a specific usability problem, the development team decided to work on it during the redesign workshop. During the initial design of the application, they had wanted to make the application ‘flashy’ (see figure 1). During the workshop, they instead created redesign proposals based on the insights from the evaluation and the basic interaction design principles introduced during the short lecture. Afterward, they further evolved these proposals into a specific design (see figure 2). Similar findings have been reported by previous work [2].

At the time of the usability evaluation and redesign workshop, two applications existed: the TonePrint Editor and the TonePrint App. The two applications have since been merged into one application that runs on all major devices, making it easier for the users.

A couple of take-aways

The process of prioritizing the identified usability problems made the fixing of usability problems more goal-oriented. For example, instead of simply adding problems to the backlog, there was clear reasoning behind which problems to prioritize. This included considering the severity, complexity, and business value ratings, as well as the estimated resources needed to fix a given problem.

Having the development team actively involved in both the formative usability evaluation and the redesign workshop provided insights about the current application design that would not have been gained if both the evaluation and the creation of redesign proposals had been outsourced. Regarding the identified usability problems, the developers gained a more specific and extensive understanding than what was reported in the usability problem list. Insights about the current state of the usability of an application do not merely come from reading a report based on a formative usability evaluation.

The redesign workshop provided a frame for working with the insights. This sparked new ideas and a set of redesign proposals that were later matured and evolved into implementable designs. The final design shows that the development team was able to combine insights from the evaluation with basic principles of interaction design.

The short conclusion is that usability work makes sense and has an impact as long as the understanding of usability work goes beyond purely conducting usability evaluations.

Upcoming work

During the upcoming weeks, I will do a more comprehensive analysis of the interview and investigate the above themes in more detail. Together with my co-authors, I will be submitting this case study to the Industry Experiences track at the NordiCHI 2016 conference, so fingers crossed that the reviewers will find the paper interesting enough for a presentation.

References

  1. Bornoe, N., Billestrup, J., Andersen, J. L., Stage, J., & Bruun, A. (2014, October). Redesign workshop: involving software developers actively in usability engineering. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (pp. 1113-1118). ACM. DOI: 10.1145/2639189.2670288
  2. Bruun, A., Jensen, J. J., Skov, M. B., & Stage, J. (2014, September). Active Collaborative Learning: Supporting Software Developers in Creating Redesign Proposals. In International Conference on Human-Centred Software Engineering (pp. 1-18). Springer Berlin Heidelberg. DOI: 10.1007/978-3-662-44811-3_1
  3. Hoegh, R. T., Nielsen, C. M., Overgaard, M., Pedersen, M. B., & Stage, J. (2006). The impact of usability reports and user test observations on developers’ understanding of usability data: An exploratory study. International journal of human-computer interaction, 21(2), 173-196. DOI: 10.1207/s15327590ijhc2102_4
  4. Hornbæk, K., & Frøkjær, E. (2005, April). Comparing usability problems and redesign proposals as input to practical systems development. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 391-400). ACM. DOI: 10.1145/1054972.1055027
  5. Law, E. L. C. (2006). Evaluating the downstream utility of user tests and examining the developer effect: A case study. International Journal of Human-Computer Interaction, 21(2), 147-172. DOI: 10.1207/s15327590ijhc2102_3
  6. Nørgaard, M., & Hornbæk, K. (2009). Exploring the value of usability feedback formats. Intl. Journal of Human–Computer Interaction, 25(1), 49-74. DOI: 10.1080/10447310802546708
  7. Uldall-Espersen, T., Frøkjær, E., & Hornbæk, K. (2008). Tracing impact in a usability improvement process. Interacting with Computers, 20(1), 48-63. DOI: 10.1016/j.intcom.2007.08.001
  8. Wixon, D. (2003). Evaluating usability methods: why the current literature fails the practitioner. interactions, 10(4), 28-34. DOI: 10.1145/838830.838870

“Help users recognize, diagnose, and recover from errors”

Outlook password change
At my organization, Aalborg University, it is a requirement to change the campus account password once every 90 days, a security initiative implemented last year. This is a widespread security policy used in many organizations, but also a policy whose significance has been questioned for more than a decade. I have very mixed feelings about this security measure. A major advantage is of course that leaked passwords will become unusable at some point (not considering that backdoors etc. may have been installed). However, this approach is also associated with several obstacles from a user perspective. These include coming up with a new, secure, easy-to-remember password and the hassle of changing the password on all associated services requiring authentication. At Aalborg University, this applies to basically all IT services such as access to WiFi, e-mail, printers, databases, etc. The new password has to be changed manually in several of these services.

Perhaps it’s because I’m a Mac user, but no notice is given about the upcoming password expiration. When I suddenly can no longer access different services, I know it’s time to create a new password (after some frustration trying to figure out what the problem is). Our passwords are changed through the Outlook Web App. To make sure that the password meets a certain security standard, some requirements are in place. If the new password does not meet this standard, the following error message is displayed:

“The password you entered doesn’t meet the minimum security requirements.”

Unfortunately, this error message does not say anything about what the requirements are or where to find this information, leaving the user in the dark. This is a textbook example of a usability problem directly linkable to one of Jakob Nielsen’s ten heuristics:

“Help users recognize, diagnose, and recover from errors”.

I’m surprised to find this classic usability problem in software such as Outlook, managed by a large organization with thousands of users. This must keep the support phones glowing (update: after talking to our IT support department, it actually does increase support requests).
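As a minimal sketch of what an error message following this heuristic could look like, here is a small Python example. The listed requirements are hypothetical; the actual policy is not disclosed by the error message, which is exactly the problem:

```python
import re

# Hypothetical password policy; the real requirements are not disclosed by the error message.
REQUIREMENTS = [
    (lambda pw: len(pw) >= 12, "be at least 12 characters long"),
    (lambda pw: re.search(r"[A-Z]", pw), "contain an uppercase letter"),
    (lambda pw: re.search(r"[a-z]", pw), "contain a lowercase letter"),
    (lambda pw: re.search(r"\d", pw), "contain a digit"),
]

def password_feedback(password: str) -> str:
    """Return an actionable message instead of a generic rejection."""
    failed = [hint for check, hint in REQUIREMENTS if not check(password)]
    if not failed:
        return "Password accepted."
    # Tell the user exactly what to fix (help users recognize, diagnose, and recover from errors).
    return "The password you entered doesn't meet the requirements. It must: " + "; ".join(failed) + "."

print(password_feedback("short"))
```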

Understanding usability problem lists is challenging

In an ongoing study about creating GUI redesigns based on the results of a usability evaluation, I asked the participants whether they had problems understanding the usability problem list. The 44 participants were a mix of informatics and information technology students following a design course. Their assignment was to create redesign suggestions for a web shop selling merchandise and tickets. The company developing the web shop had conducted a think-aloud usability evaluation resulting in a simple usability problem list of 36 problems. Each problem was described with its location, a short description, and the severity of the problem. The table below shows how the participants answered.

 Were there any usability problems you could not interpret? (n=44)

 Response             Share   Grouped
 Disagree strongly    18%     Disagree (total): 41%
 Disagree             16%
 Slightly disagree     7%
 Neutral              16%     Neutral: 16%
 Slightly agree       27%     Agree (total): 43%
 Agree                 7%
 Agree strongly        9%

As can be seen, 43% found that at least one usability problem was difficult to interpret. While this aspect is not the focus of the study, it is still an interesting finding that a relatively large share of the participants had trouble understanding all the usability problems in a relatively short list. I suspect that the 16% choosing ‘neutral’ probably believed they understood all the problems, with some uncertainty about whether this actually was the case. Unfortunately, I have no quantitative data about the number of problems that were difficult to interpret, but I do have some qualitative data. One particular problem was mentioned repeatedly among the participants. Not surprisingly, this was a semi-complex problem and one of the more important ones to investigate further. I’m sure people receiving and using usability problem lists can recognize similar problems. Another challenge faced by the participants was recreating problems. Some problems only occur under certain conditions, and recreating the same conditions based on a problem description is not straightforward. Despite the missing details, this non-scientific presentation, and the limited number of participants, these numbers add to earlier findings and research on the communication of usability problems.
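One way to make list entries easier to interpret and recreate is to record reproduction context alongside the location, description, and severity. Here is a minimal sketch of such an entry; the fields beyond those three, and the example problem itself, are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemEntry:
    location: str                   # where in the UI the problem occurred
    description: str                # short description, as in the original list
    severity: str                   # e.g. "minor", "moderate", "severe"
    steps_to_reproduce: list[str] = field(default_factory=list)  # conditions needed to trigger it
    observed_during: str = ""       # task and context, e.g. device and user goal

# Invented example entry; not an actual problem from the web shop evaluation.
entry = ProblemEntry(
    location="Checkout, payment step",
    description="Users overlook the discount-code field and abandon the purchase.",
    severity="moderate",
    steps_to_reproduce=[
        "Add a ticket to the basket",
        "Proceed to payment while holding a discount code",
    ],
    observed_during="Desktop browser, task: buy a ticket using a discount code",
)
print(entry)
```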

Here are a few papers discussing usability problem reporting:

  • Hornbæk, K., & Frøkjær, E. (2005). Comparing usability problems and redesign proposals as input to practical systems development. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 391-400). ACM. DOI: 10.1145/1054972.1055027
  • Høegh, R. T., Nielsen, C. M., Overgaard, M., Pedersen, M. B., & Stage, J. (2006). The impact of usability reports and user test observations on developers’ understanding of usability data: An exploratory study. International Journal of Human-Computer Interaction, 21(2), 173-196. DOI: 10.1207/s15327590ijhc2102_4
  • Molich, R., Jeffries, R., & Dumas, J. S. (2007). Making usability recommendations useful and usable. Journal of Usability Studies, 2(4), 162-179.
  • Nørgaard, M., & Hornbæk, K. (2008). Working together to improve usability: Challenges and best practices. University of Copenhagen, Dept. of Computer Science, Technical Report no. 08/03.
  • Nørgaard, M., & Hornbæk, K. (2009). Exploring the value of usability feedback formats. International Journal of Human-Computer Interaction, 25(1), 49-74. DOI: 10.1080/10447310802546708
  • Redish, J. G., Bias, R. G., Bailey, R., Molich, R., Dumas, J., & Spool, J. M. (2002). Usability in practice: Formative usability evaluations - evolution and revolution. In CHI ’02 Extended Abstracts on Human Factors in Computing Systems (pp. 885-890). ACM. DOI: 10.1145/506443.506647

Using design cards to facilitate redesign: initial findings

When developing new design concepts and redesigning existing ones, the idea of using design cards to facilitate the process and spark creativity has been explored in several different fields. Design cards have been used as part of design workshops when developing new design concepts such as game designs [7], “playful experiences” [6], and tangible interactions [5]. For example, design cards have been used for “…probing and provoking users’ past experiences and current assumptions.” [3]. Several card decks have been developed based on theoretical frameworks and used to rephrase abstract frameworks into something more operational. Through this transformation the theory can become more tangible and applicable, by making cards with keywords, pictures, and questions [5]. It is beyond the scope of this blog post to include a detailed review of related literature. Instead, I will briefly outline recent HCI-related research about design cards.

In recent research, design cards have mainly been evaluated during initial design phases, and only a few studies have looked at other phases such as evaluation of designs and redesign. Past literature has pointed out several general strengths and advantages of design cards. On an abstract level, design cards can be considered a design material usable in a collaborative setting when making designs [4]. Wölfel and Merritt (2013) [8] conducted a review of 18 different forms of design cards or card methods used as part of a design activity, and highlight that design cards have been reported to:

  • Support design dialogues.
  • Act as a common reference among participants.
  • Provide something specific and concrete to talk about.
  • Make the design process visible and less abstract.
  • Facilitate the design process, for example by providing structure.
  • Serve as physical tokens that are easy to include, use, and manipulate.

They divided the purpose and scope of the card systems into three different categories:

  • General (8 card methods): open-ended inspiration.
  • Participatory design (7 card methods): engage designers and users in the process.
  • Context-specific/agenda-driven (3 card methods): focused on a particular context or design agenda.

They divided the methodology accompanying the design cards into three categories:

  • No methodology (3 card methods).
  • Suggestions for use (7 card methods).
  • Specific instructions (5 card methods).

In usability engineering, a challenge for developers is how to correct usability problems, especially non-trivial ones. Running redesign workshops where developers and designers collaborate actively has been proposed as one method for exploring redesign opportunities [1, 2]. In a recent study, we decided to explore whether including design cards in redesign workshops would improve the proposed redesigns and support the process of fixing usability problems.

In summary, we asked groups of two to three students, following a class on designing and evaluating user interfaces, to make redesign proposals for a given web shop. We provided the groups with a list of known usability problems. The groups were divided into four clusters. The groups in three of the clusters were provided with a design card system, a different system for each cluster. The groups in the fourth cluster acted as control groups and did not receive any design cards. During the redesign exercise, we observed a subset of the groups. Afterward, we had the students fill out a survey about their impression of the quality of the redesign and the usefulness of the design cards. Finally, we interviewed a few groups. In addition, six evaluators (three academic researchers and three practitioners involved in the development of the web shop) assessed the quality of the redesigns.

Our initial findings indicate that design cards did not, at least not in the way we used them in this study, have any major positive effect on the quality of the redesigns. When comparing the quality assessments of the redesigns, we did not find any significant differences between the card clusters and the control groups. The students did not find the design cards particularly useful. However, some mentioned that the cards did provide some initial ideas and inspiration. These initial findings might be surprising given the positive results reported in previous studies [5-7]. One major difference in this study is that the design cards were included in a redesign workshop, not during an initial design development or ideation workshop. The students were provided with both an existing design and a list of known usability problems. These seemed to be the main drivers and basis for the redesign proposals.

References

  1. Bornoe, N., Billestrup, J., Andersen, J. L., Stage, J., & Bruun, A. (2014). Redesign workshop: involving software developers actively in usability engineering. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (pp. 1113-1118). ACM. DOI: 10.1145/2639189.2670288
  2. Bruun, A., Jensen, J. J., Skov, M. B., & Stage, J. (2014). Active Collaborative Learning: Supporting Software Developers in Creating Redesign Proposals. In Human-Centered Software Engineering (pp. 1-18). Springer Berlin Heidelberg. DOI: 10.1007/978-3-662-44811-3_1
  3. Bødker, S., Mathiasen, N., & Petersen, M. G. (2012). Modeling is not the answer!: designing for usable security. interactions, 19(5), 54-57. DOI: 10.1145/2334184.2334197
  4. Halskov, K., & Dalsgård, P. (2006, June). Inspiration card workshops. In Proceedings of the 6th conference on Designing Interactive systems (pp. 2-11). ACM. DOI: 10.1145/1142405.1142409
  5. Hornecker, E. (2010). Creative idea exploration within the structure of a guiding framework: the card brainstorming game. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (pp. 101-108). ACM. DOI: 10.1145/1709886.1709905
  6. Lucero, A., & Arrasvuori, J. (2012). The PLEX Cards and its techniques as sources of inspiration when designing for playfulness. International Journal of Arts and Technology, 6(1), 22-43. DOI: 10.1504/IJART.2013.050688
  7. Mueller, F., Gibbs, M. R., Vetere, F., & Edge, D. (2014). Supporting the creative game design process with exertion cards. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 2211-2220). ACM. DOI: 10.1145/2556288.2557272
  8. Wölfel, C., & Merritt, T. (2013). Method card design dimensions: a survey of card-based design tools. In Human-Computer Interaction–INTERACT 2013 (pp. 479-486). Springer Berlin Heidelberg. DOI: 10.1007/978-3-642-40483-2_34