Work in progress: case study about the impact of usability work

Figure 1: Screenshot of the original version of the TonePrint Editor.

Currently, I’m spending part of my summer writing a small-scale case study about the impact of usability work. Back in 2014, I was part of a team arranging a redesign workshop [1] for a development team at the company TC Electronic, now known as Music Group Innovation. They wanted to evaluate and improve an application called TonePrint Editor (see Figure 1). The essence of the workshop was to facilitate the development team in fixing a list of previously identified usability problems. I recently returned to Music Group Innovation to conduct a small-scale case study of the redesign process of the TonePrint Editor. I wanted to do a follow-up to explore the process around making design changes to the application and what had happened with the identified usability problems.

Project introduction

The use of usability evaluation methods is a widely accepted approach in iterative software development. One form of usability evaluation is the formative approach, often conducted with a think-aloud method. Formative usability evaluations are used to get feedback about users’ behavior when using an application and to collect the users’ qualitative feedback about the concepts and designs used. The feedback reveals insights into how users perceive, understand, and interact with a system, insights that can be used to improve and develop an application. A lot of research has focused both on developing different usability evaluation methods and on evaluating the effectiveness of existing methods [6]. Less attention has been paid to the relationship between the output from evaluations and the improvements made to a given application [7]; such research is complicated, time-consuming, and resource-demanding [5]. Dennis Wixon boils this down to the point that “…problems should be fixed and not just found” [8]. Since the point of conducting usability evaluations is to end up with an improved interaction experience, it is relevant to investigate what happens with identified usability problems, how the developers use the feedback from evaluations, and what they perceive as useful about the insights.

Figure 2: Screenshot of the new version of the TonePrint App.

In the redesign workshop held two years ago [1], the main focus was to have the developers engage in an innovative redesign suggestion process through active involvement. As part of the workshop, we included a short lecture about basic principles of interaction design. The intention of the lecture was to have the developers think about UI design in broader terms and get inspiration for coming up with redesign proposals. Before the workshop, the developers had conducted a formative usability evaluation and compiled a usability problem list consisting of 19 problems. The outcome of the workshop was ideas for changing the flow of the main screen and minor changes to the interface. After participating in the redesign workshop, the development team has continued the redesign process and has made several changes to the application.

During my revisit to Music Group Innovation, I conducted a semi-structured interview with two members of the development team: a product manager and a developer. We spent a couple of hours talking about the changes made to the application, the redesign process, and the impact on the organization after engaging in user-centered design.

Preliminary insights

I’m still in the process of analyzing the interview in detail, but I will outline a couple of interesting insights here.

Prioritizing usability problems

When we talked about the list of the 19 identified usability problems, the first step after the evaluation was to prioritize the problems and decide which ones to fix. During the compilation of the list, each problem was given a classic severity rating: minor, moderate, or severe. Additionally, two other ratings were added. The interviewed developer gave a complexity rating (1-8), estimating the technical complexity of fixing the problem. The interviewed product manager gave a business value rating (1-8), estimating the importance of the problem in relation to the functionality of the application. Both ratings also fed into the estimate of the resources required to fix a problem. The three ratings were then used to decide which problems to prioritize. Through this prioritization process, the development team was able to understand and analyze the problems from more angles. It also made the fixing of usability problems more goal-oriented. Initially, they prioritized seven problems. In the end, the team made fixes for 14 problems.
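To make the idea concrete, here is a minimal sketch in Python of how such a three-way rating could be used to rank usability problems. The interview did not reveal exactly how the team combined the ratings, so the scoring rule, the weights, and the example problems below are purely illustrative assumptions, not the team’s actual method.

```python
from dataclasses import dataclass

# Map the classic severity labels to numbers so they can enter a score.
SEVERITY_SCORE = {"minor": 1, "moderate": 2, "severe": 3}

@dataclass
class UsabilityProblem:
    description: str
    severity: str        # "minor", "moderate", or "severe" (from the evaluation)
    complexity: int      # 1-8: estimated technical effort to fix (developer)
    business_value: int  # 1-8: estimated importance to the application (product manager)

    def score(self) -> float:
        # Hypothetical ranking rule: favor severe, high-value problems that are cheap to fix.
        return SEVERITY_SCORE[self.severity] * self.business_value / self.complexity

# Made-up example problems, not the actual 19 problems from the study.
problems = [
    UsabilityProblem("Main screen flow does not match users' expectations", "severe", 6, 8),
    UsabilityProblem("Parameter label hard to read", "minor", 1, 3),
    UsabilityProblem("Save option easy to miss", "moderate", 2, 6),
]

for p in sorted(problems, key=UsabilityProblem.score, reverse=True):
    print(f"{p.score():.1f}  {p.description}")
```

The point of the sketch is simply that adding complexity and business value alongside severity lets a team rank problems by more than how badly they hurt the user; whatever concrete weighting a team chooses, the ranking makes the backlog discussion explicit rather than ad hoc.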

Getting problems confirmed and extended

Consistent with conclusions from another study [3], including the development team in the formative evaluation provided the developers with a more specific understanding of the usability problems, more detailed than what is gained by simply reading a usability problem report [3]. They were already aware of, or had ideas about, possible usability problems, but in line with findings from another study [4], they found it useful to get these confirmed or disconfirmed. What is more interesting is that their fuzzy ideas about problems were concretized and extended. For example, the flow of operations on a particular screen was not in line with the flow the users found logical or with how the users wanted to interact with the application. The product manager characterized getting insight into this design flaw as a big eye-opener. It was not identified as a specific usability problem, but the development team identified it as a more generic design problem leading to usability problems. Interestingly, the most significant redesign considerations were sparked by feedback gained mainly through the involvement in the evaluation and redesign workshop, and less by the specific usability problems.

Design changes

During the redesign process, a couple of significant design changes were decided on.

As mentioned above, the flow of operations and the order of options on a screen were found to be problematic. While this was not a specific usability problem, the development team decided to work on it during the redesign workshop. During the initial design of the application, they wanted to make the application ‘flashy’ (see Figure 1). During the workshop, they instead created redesign proposals based on the insights from the evaluation and the basic interaction design principles introduced in the short lecture. Afterward, they further evolved these proposals into a specific design (see Figure 2). Similar findings have been reported by previous work [2].

At the time of the usability evaluation and redesign workshop, two applications existed: the TonePrint Editor and the TonePrint App. The two have since been merged into one application that runs on all major devices to make it easier for the users.

A couple of take-aways

The process of prioritizing the identified usability problems made the fixing of usability problems more goal-oriented. For example, instead of simply adding problems to the backlog, there were clear thoughts behind which problems to prioritize, considering the severity, complexity, and business value ratings as well as the estimated resources needed to fix a given problem.

Having the development team actively involved in both the formative usability evaluation and the redesign workshop provided insights about the current application design that would not have been gained if the evaluation and the creation of redesign proposals had been outsourced. Regarding the identified usability problems, the developers got a more specific and extensive understanding beyond what was reported in the usability problem list. Insights about the current state of the usability of an application do not merely come from reading a report based on a formative usability evaluation.

The redesign workshop provided a frame for working with the insights. This sparked new ideas and a set of redesign proposals that were later matured and evolved into implementable designs. The final design shows that the development team was able to combine insights from the evaluation with basic principles of interaction design.

The short conclusion is that usability work makes sense and has an impact as long as the understanding of usability work goes beyond merely conducting usability evaluations.

Upcoming work

During the upcoming weeks, I will do a more comprehensive analysis of the interview and investigate the above themes in more detail. Together with my co-authors, I will submit this case study to the Industry Experiences track at the NordiCHI 2016 conference, so fingers crossed that the reviewers will find the paper interesting enough for a presentation.

References

  1. Bornoe, N., Billestrup, J., Andersen, J. L., Stage, J., & Bruun, A. (2014, October). Redesign workshop: involving software developers actively in usability engineering. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (pp. 1113-1118). ACM. DOI: 10.1145/2639189.2670288
  2. Bruun, A., Jensen, J. J., Skov, M. B., & Stage, J. (2014, September). Active Collaborative Learning: Supporting Software Developers in Creating Redesign Proposals. In International Conference on Human-Centred Software Engineering (pp. 1-18). Springer Berlin Heidelberg. DOI: 10.1007/978-3-662-44811-3_1
  3. Hoegh, R. T., Nielsen, C. M., Overgaard, M., Pedersen, M. B., & Stage, J. (2006). The impact of usability reports and user test observations on developers’ understanding of usability data: An exploratory study. International Journal of Human-Computer Interaction, 21(2), 173-196. DOI: 10.1207/s15327590ijhc2102_4
  4. Hornbæk, K., & Frøkjær, E. (2005, April). Comparing usability problems and redesign proposals as input to practical systems development. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (pp. 391-400). ACM. DOI: 10.1145/1005261.1005274
  5. Law, E. L. C. (2006). Evaluating the downstream utility of user tests and examining the developer effect: A case study. International Journal of Human-Computer Interaction, 21(2), 147-172. DOI: 10.1207/s15327590ijhc2102_3
  6. Nørgaard, M., & Hornbæk, K. (2009). Exploring the value of usability feedback formats. International Journal of Human-Computer Interaction, 25(1), 49-74. DOI: 10.1080/10447310802546708
  7. Uldall-Espersen, T., Frøkjær, E., & Hornbæk, K. (2008). Tracing impact in a usability improvement process. Interacting with Computers, 20(1), 48-63. DOI: 10.1016/j.intcom.2007.08.001
  8. Wixon, D. (2003). Evaluating usability methods: why the current literature fails the practitioner. Interactions, 10(4), 28-34. DOI: 10.1145/838830.838870
