Work in progress: case study about the impact of usability work

Figure 1: Screenshot of the original version of the TonePrint Editor.

Currently, I’m spending a part of my summer writing a small-scale case study about the impact of usability work. Back in 2014, I was part of a team arranging a redesign workshop [1] for a development team at the company TC Electronic, now known as Music Group Innovation. They wanted to evaluate and improve an application called TonePrint Editor (see figure 1). The essence of the workshop was to facilitate the development team in fixing a list of previously identified usability problems. I recently returned to Music Group Innovation to conduct a follow-up: I wanted to explore the process around making design changes to the application, and what had happened with the identified usability problems.

Project introduction

The use of usability evaluation methods is a widely accepted approach during iterative software development. One form of usability evaluation is the formative approach, often conducted with a think-aloud method. Formative usability evaluations are used to get feedback about users’ behavior when using an application and to get the users’ qualitative feedback about the concepts and designs used. The feedback reveals insights about how the users perceive, understand, and interact with a system, insights that can be used to improve and develop an application. A lot of research has focused both on developing different usability evaluation methods and on evaluating the effectiveness of existing methods [6]. Less attention has been paid to the relationship between the output from evaluations and improvements made to a given application [7]; such research is complicated, time-consuming, and resource-demanding [5]. Dennis Wixon boils this down to the point that “…problems should be fixed and not just found” [8]. Since the point of making usability evaluations is to end up with an improved interaction experience, it is relevant to investigate what happens with identified usability problems, how developers use the feedback from evaluations, and what they perceive as useful about the insights.

Figure 2: Screenshot of the new version of the TonePrint App.

In the redesign workshop held two years ago [1], the main focus was to have the developers engage in an innovative redesign suggestion process through active involvement. As part of the workshop, we included a short lecture about basic principles of interaction design. The intention with the lecture was to have the developers think about UI design in broader terms and get inspiration for coming up with redesign proposals. Before the workshop, the developers had conducted a formative usability evaluation and compiled a usability problem list consisting of 19 problems. The outcome of the workshop was ideas for changing the flow of the main screen and minor changes to the interface. After participating in the redesign workshop, the development team continued the redesign process and made several changes to the application.

During my revisit at Music Group Innovation, I conducted a semi-structured interview with two members of the development team: a product manager and a developer. We spent a couple of hours talking about the changes made to the application, the redesign process, and the impact on the organization of engaging in user-centered design.

Preliminary insights

I’m still in the process of analyzing the interview in detail, but here I will outline a couple of interesting insights.

Prioritizing usability problems

When we talked about the list of the 19 identified usability problems, it turned out that the first step after the evaluation had been to prioritize the problems to decide which ones to fix. During the compilation of the list, each problem was given a classic severity rating: minor, moderate, or severe. Additionally, two other ratings were added. The interviewed developer would give a complexity rating (1-8), an estimate of the technical complexity of fixing the problem. The interviewed product manager would give a business value rating (1-8), an estimate of the problem’s importance in relation to the functionality of the application. Both ratings also fed into the estimation of required resources. The three ratings were then used to decide which problems to prioritize. Through this prioritization process, the development team was able to understand and analyze the problems from more angles. It also served to make the fixing of usability problems more goal-oriented. Initially, they prioritized seven problems. In the end, the team made fixes for 14 problems.
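Purely as an illustration of the idea, here is a minimal Python sketch of how such a triage could be encoded; the example problems and the ranking rule are my own assumptions, not the team’s actual method:

    # Hypothetical triage of usability problems by severity, complexity,
    # and business value. Entries and ranking rule are illustrative only.
    SEVERITY = {"minor": 1, "moderate": 2, "severe": 3}

    problems = [
        # (description, severity, complexity 1-8, business value 1-8)
        ("Unclear label on save button", "minor", 2, 3),
        ("Confusing flow on main screen", "severe", 7, 8),
        ("Hard-to-find undo function", "moderate", 3, 6),
    ]

    def priority(problem):
        _, severity, complexity, value = problem
        # Favor severe, high-value problems that are cheap to fix.
        return SEVERITY[severity] * value / complexity

    for p in sorted(problems, key=priority, reverse=True):
        print(f"{priority(p):5.2f}  {p[0]}")

The point is less the specific formula and more that making all three ratings explicit turns prioritization into a decision that can be inspected and revisited.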

Getting problems confirmed and extended

Consistent with conclusions from another study [3], including the development team in the formative evaluation provided the developers with a more specific understanding of the usability problems, more detailed than what comes from simply reading a usability problem report [3]. They were already aware of, or had ideas about, possible usability problems, but in line with findings from another study [4], they found it useful to get confirmation or disconfirmation. What is more interesting is that their fuzzy ideas about problems were concretized and extended. For example, the flow of operations on a particular screen was not in line with the flow the users found logical or with how the users wanted to interact with the application. The product manager characterized getting insight into this design flaw as a big eye-opener. It was not identified as a specific usability problem but was identified by the development team as a more generic design problem leading to usability problems. Interestingly, the most significant redesign considerations were sparked by feedback gained mainly through the involvement in the evaluation and redesign workshop, and less by the specific usability problems.

Design changes

During the redesign process, a couple of significant design changes were decided on.

As mentioned above, the flow of operations and the order of options on a screen were found to be problematic. While this was not a specific usability problem, the development team decided to work on it during the redesign workshop. During the initial design of the application, they had wanted to make the application ‘flashy’ (see figure 1). During the workshop, they instead created redesign proposals based on the insights from the evaluation and the basic interaction design principles introduced during the short lecture. Afterward, they further evolved these proposals into a specific design (see figure 2). Similar findings have been reported by previous work [2].

At the time of the usability evaluation and redesign workshop, two applications existed: the TonePrint Editor and the TonePrint App. The two applications have since been merged into one application that runs on all major devices, making things easier for the users.

A couple of take-aways

The process of prioritizing the identified usability problems made the fixing of usability problems more goal-oriented. For example, instead of simply adding problems to the backlog, there were clear thoughts behind which problems to prioritize. This included considering the severity, complexity, and business value ratings, as well as the estimated resources needed to fix a given problem.

Having the development team actively involved in both the formative usability evaluation and the redesign workshop provided insights about the current application design that would not have been gained if both the evaluation and the creation of redesign proposals had been outsourced. Regarding the identified usability problems, the developers gained a more specific and extensive understanding than what was reported in the usability problem list. Insights about the current state of the usability of an application do not merely come from reading a report based on a formative usability evaluation.

The redesign workshop provided a frame for working with the insights. This sparked new ideas and a set of redesign proposals that were later matured and evolved into implementable designs. The final design shows that the development team was able to combine insights from the evaluation with basic principles of interaction design.

The short conclusion is that usability work makes sense and has an impact, as long as the understanding of usability work goes beyond purely conducting usability evaluations.

Upcoming work

During the upcoming weeks, I will do a more comprehensive analysis of the interview and investigate the above themes in more detail. My co-authors and I will be submitting this case study to the Industry Experiences track at the NordiCHI 2016 conference, so fingers crossed that the reviewers will find the paper interesting enough for a presentation.

References

  1. Bornoe, N., Billestrup, J., Andersen, J. L., Stage, J., & Bruun, A. (2014, October). Redesign workshop: involving software developers actively in usability engineering. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (pp. 1113-1118). ACM. DOI: 10.1145/2639189.2670288
  2. Bruun, A., Jensen, J. J., Skov, M. B., & Stage, J. (2014, September). Active Collaborative Learning: Supporting Software Developers in Creating Redesign Proposals. In International Conference on Human-Centred Software Engineering (pp. 1-18). Springer Berlin Heidelberg. DOI: 10.1007/978-3-662-44811-3_1
  3. Hoegh, R. T., Nielsen, C. M., Overgaard, M., Pedersen, M. B., & Stage, J. (2006). The impact of usability reports and user test observations on developers’ understanding of usability data: An exploratory study. International journal of human-computer interaction, 21(2), 173-196. DOI: 10.1207/s15327590ijhc2102_4
  4. Hornbæk, K., & Frøkjær, E. (2005, April). Comparing usability problems and redesign proposals as input to practical systems development. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 391-400). ACM. DOI: 10.1145/1054972.1055027
  5. Law, E. L. C. (2006). Evaluating the downstream utility of user tests and examining the developer effect: A case study. International Journal of Human-Computer Interaction, 21(2), 147-172. DOI: 10.1207/s15327590ijhc2102_3
  6. Nørgaard, M., & Hornbæk, K. (2009). Exploring the value of usability feedback formats. Intl. Journal of Human–Computer Interaction, 25(1), 49-74. DOI: 10.1080/10447310802546708
  7. Uldall-Espersen, T., Frøkjær, E., & Hornbæk, K. (2008). Tracing impact in a usability improvement process. Interacting with Computers, 20(1), 48-63. DOI: 10.1016/j.intcom.2007.08.001
  8. Wixon, D. (2003). Evaluating usability methods: why the current literature fails the practitioner. interactions, 10(4), 28-34. DOI: 10.1145/838830.838870

Locked out of Windows XP

Today I decided to find out if it would be possible to breathe life back into the dated Dell Latitude D820 I got back in 2006. I haven’t used it for years, and Windows XP was last reinstalled back in 2008. Originally the plan was to replace XP with Windows 7 to figure out if this old guy still has a few years left. Of course, I couldn’t remember either the user or administrator passwords and was locked out of Windows XP… Now what to do?

Several websites describe two simple methods to reset the password of a user. Both assume that it’s possible to get access to the system with an account that has administrator rights. The simplest method is to have someone with administrator rights reset the password. The other method is a bit more complex: the essence is to reboot in “safe mode with command prompt” and log in with the ‘hidden’ administrator account. With some simple commands, it’s then possible to reset the password of a user account. It turned out that the administrator account was also password protected, and I was unable to log in at all… While preparing for the big operation of removing the hard drive and securing a backup of as much data as possible before wiping the disk, I found Ophcrack. Ophcrack is an open source password cracker that can be used to crack LM and NTLM hashes. Fortunately, this tool works well for recovering Windows XP account passwords.

One option is to download the Ophcrack LiveCD image and burn it to a CD/DVD or USB stick. This process was easy: I downloaded the ISO image, burned it to a DVD, and booted the Latitude laptop (after changing the boot order in the BIOS). The application more or less starts automatically, and less than a minute later I had recovered the passwords of the user and administrator accounts.

While Windows XP is a dated operating system first released back in 2001, I was quite surprised that it was possible to recover passwords this fast. Anyway, this tool saved me a lot of time and will make it a lot easier to take a system backup before upgrading to Windows 7.
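The speed is less surprising when you consider how XP stores passwords: both the LM and NTLM hashes are unsalted, so identical passwords always produce identical hashes, which is exactly what makes the precomputed rainbow tables Ophcrack relies on so effective. As a minimal Python sketch (for illustration only), an NTLM hash is simply MD4 over the UTF-16LE encoding of the password:

    import hashlib

    def ntlm_hash(password: str) -> str:
        # NTLM is unsalted MD4 over the UTF-16LE-encoded password, so the
        # same password always yields the same hash -- ideal for lookups
        # in precomputed rainbow tables.
        # Note: MD4 may require OpenSSL's legacy provider on newer systems.
        return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

    print(ntlm_hash("password"))  # 8846f7eaee8fb117ad06bdd830b7586c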

Automatic generated birthday wishes

A few days ago I celebrated my birthday. During the day, I received several birthday wishes through different media, including several e-mails from both friends and different companies. Getting greetings is something I appreciate, but what is the point of getting auto-generated greetings? This comes down to the meaning of sending someone a greeting: essentially, the sender is showing friendliness and respect. However, I have always had trouble appreciating greetings from a machine and find these kinds of greetings to be an illusion. Clearly no one is actually greeting you; it’s simply a scheduled job, fully automated and executed by a machine like any other automated task. Two of the auto-generated messages I received were even identical, since both companies happen to use the same e-mail provider. This strengthens the feeling of ‘industrialized’ greetings. The only plus is that sometimes these auto-generated birthday greetings contain a coupon code or a link to some special offer or gift, though this was not the case this year.

“Help users recognize, diagnose, and recover from errors”

At my organization, Aalborg University, it is a requirement to change the campus account password once every 90 days, a security initiative implemented last year. This is a widespread security policy used in many organizations, but also a policy whose significance has been questioned for more than a decade. I have very mixed feelings about this security measure. A major advantage is of course that leaked passwords will become unusable at some point (not considering that backdoors etc. may have been installed). However, this approach is also associated with several obstacles from a user perspective, including coming up with a new secure, easy-to-remember password and the hassle of changing the password in all associated services requiring authentication. At Aalborg University, this applies to basically all IT services, such as access to WiFi, e-mail, printers, databases, etc. The new password has to be changed manually in several of these services.

Perhaps it’s because I’m a Mac user, but no notice is given about the upcoming password expiration. When I suddenly can no longer access different services, I know it’s time to create a new password (after some frustration trying to figure out what the problem is). Our passwords are changed through the Outlook Web App. To make sure that the password meets a certain security standard, some requirements are in place. If the new password does not meet this standard, the following error message is displayed:

“The password you entered doesn’t meet the minimum security requirements.”

Unfortunately, this error message says nothing about what the requirements are or how to get this information, leaving the user in the dark. This is a textbook example of a usability problem directly linkable to one of Jakob Nielsen’s ten heuristics:

“Help users recognize, diagnose, and recover from errors”.

I’m surprised to find this classic usability problem in software such as Outlook, managed by a large organization with thousands of users. This must keep the support phones glowing (update: after talking to our IT support department, it actually does increase support requests).
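Fixing this class of problem is cheap: check each requirement separately and report exactly which ones failed. Here is a minimal Python sketch of the idea; the specific requirements are my own assumptions, not Aalborg University’s actual policy:

    import re

    # Hypothetical password requirements -- not the actual AAU/Outlook policy.
    REQUIREMENTS = [
        ("at least 8 characters", lambda pw: len(pw) >= 8),
        ("an uppercase letter", lambda pw: re.search(r"[A-Z]", pw)),
        ("a lowercase letter", lambda pw: re.search(r"[a-z]", pw)),
        ("a digit", lambda pw: re.search(r"\d", pw)),
    ]

    def unmet(password):
        # Return a human-readable description of every unmet requirement.
        return [desc for desc, check in REQUIREMENTS if not check(password)]

    failed = unmet("secret")
    if failed:
        # Tell the user exactly what to fix instead of a generic rejection.
        print("Your password must contain: " + ", ".join(failed))

With error messages built this way, users can recognize, diagnose, and recover from the error without calling support.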

Two reviews submitted to NordiCHI 2016

I have submitted reviews for two papers submitted to NordiCHI 2016 (formally the 9th Nordic Conference on Human-Computer Interaction). I have now been reviewing papers for several years and have (finally) found somewhat of a review routine. Both papers are interesting and concerned with timely and relevant topics. Although the papers are not exactly related to my current research focus, the overall research focus and approach of both papers are familiar to me, so all in all these were interesting and pleasant reviews to do.

In the past, I have had the pleasure of attending NordiCHI 2012 and 2014. This conference is one of my favorite HCI conferences (probably my overall favorite conference) due to its size and the research presented. The latest NordiCHI conference had around 550 attendees from 34 countries. In comparison to the 3000+ attendees normally attending the (also very interesting) CHI conference, NordiCHI is less “hectic” and more manageable to navigate.

I have also submitted a paper to NordiCHI 2016 myself, so I’m crossing my fingers and hoping to be part of this conference again this year.

The theme of this year’s NordiCHI conference is “Game-Changing Design”, further explained as:

Firstly how design and designs can completely change how we perceive and act in the world, but secondly – and just as importantly – whether and how we can change our perception of what design really is, and how it should be done.

NordiCHI 2016 will be held in Gothenburg, Sweden, October 23-27, 2016, hosted jointly by Chalmers University of Technology and the University of Gothenburg.

Fighting spam with fake MX records

Spam is a well-known problem to all users of the Internet, and especially to technical administrators of Internet services. I own several domain names for different purposes. Some are used for websites, some are used for e-mail, some are used for both, some are used for infrastructure (e.g. mapping easy-to-remember hostnames to IP addresses), and some are just parked for future use. Most of my domains are not used for receiving e-mail. However, spammers don’t care and will still send spam to these domains. Even without Mail eXchange (MX) records a domain is not safe, as many e-mail servers will instead try the A record of the domain. With several domains not used for e-mail, this can be annoying to manage and causes extra server load.

To minimize the problem, using fake MX records, a trick known as ‘nolisting’, has been proposed as a way to reduce spam.

I’m currently using a free service offered by Junk Email Filter Inc. They are running the project Tarbaby, essentially a cluster of fake MX servers. The project has two goals: 1) to help reduce incoming spam, and 2) to support the ongoing work of maintaining the Junk Email Filter blacklist of known spam sources.

The service is very simple to set up and use. Simply add the following hostname as the only MX record of the given domain:

tarbaby.junkemailfilter.com

You can set any value as the priority, for example, 10.

Every time a mail is received, the system responds with SMTP code 550, which means that the message was not deliverable. Genuine senders will receive a reply with an error message and know that a given address is not available, while spam bots will move on and get registered in the blacklist.

Another free service is Fake MX. Add the following hostname as the MX record of the given domain:

mx.fakemx.net

Set any value as the priority, for example, 10. If you use more than one MX record, give the Fake MX record a higher priority (a lower preference number) than the primary MX record. Also remember to read their terms of use before adding their mail server.
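After updating the DNS, it’s worth verifying that the records resolve as expected. A quick sketch using the dnspython library (the domain name is a placeholder):

    # Check which MX records a domain currently publishes.
    # Requires the dnspython package (pip install dnspython).
    import dns.resolver

    domain = "example.com"  # placeholder -- use your own domain

    for record in dns.resolver.resolve(domain, "MX"):
        print(record.preference, record.exchange)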

Using fake MX records is no ultimate solution that keeps all spam away from your servers, but anecdotal experiences reported in different forums indicate that fake MX records significantly reduce spam.

More information about using fake MX records can be found in “Nolisting: Poor Man’s Greylisting” and “Other Trick For Blocking Spam”.

As is the case with most tricks, the nolisting strategy also has some drawbacks, especially if a fake MX setup is used on a domain intended for receiving e-mail. Some of the drawbacks are described on the Wikipedia page ‘Nolisting’.

Merrild coffee machine joins our research group

We recently got a new member of our research group: a Merrild coffee machine. The Department of Computer Science at Aalborg University consists of several different research units, and luckily one of the other units passed on their used coffee machine, which can still be considered hyper-modern in comparison to our former one.

Easy access to coffee is an essential part of the infrastructure at any academic institution, and our department is no different. One of my most appreciated benefits as a Ph.D. Fellow is the included unmetered amount of coffee. This benefit has turned what was once, as a college student, a significant expense into a remarkably less significant cost.

The coffee machine and coffee drinking serve a number of purposes besides getting coffee. Firstly, it’s simply a part of the daily schedule, like turning on the computer, a ritual that cannot (or should not?) easily be changed. Secondly, the environment around the coffee machine and the act of drinking coffee is a catalyst for socializing with colleagues: pitching some research ideas, discussing the stack of student reports, or simply a bit of small talk. Thirdly, getting coffee is an easy way to procrastinate on other, more complex tasks. Whether or not the “fact” that I will be more efficient and creative after picking up coffee is true is not something I currently plan on challenging (but probably should).

Welcome to the newest member of the Information Systems research group, our new (used) Merrild coffee machine.

First encounter with a laser cutter

During a recent course in interaction design research at the Aalborg University Department of Architecture, Design and Media Technology, I had my first encounter with a laser cutter. Here I was introduced to some of the many possibilities offered by this relatively simple technology. In essence, a laser cutter uses a high-power laser to burn or melt material, cutting it like a saw. A strength of this technique is the variety of materials that can be used, such as wood, cardboard, plastic, acrylic, and fabric. Another strength is the precision of the cutting, which makes it possible to get clean and sharp edges. Laser cutters are still not at a price level where most people can get one at home, but several universities and workshops make them available to students and the public.

In simple terms, a laser cutter almost works like an ordinary printer – model something in a modelling program and send it to the laser cutter.

During this introduction, I used the modelling program Sketchup Make by Sketchup. Sketchup Make is the light version of Sketchup Pro, available as freeware for non-commercial use, and a great way to learn and experiment with modelling programs and to create sketches for laser cutting. It turned out to be relatively easy and fast to learn the basic concepts of Sketchup Make. After a couple of hours of introduction to both laser cutting and Sketchup, I was able to make different simple shapes and prepare them for laser cutting.

While I was only able to make some simple shapes during my first trial, it’s easy to get hooked and see the potential. With a bit of creativity, it is possible to turn 2D models into 3D models by creating 2D parts and afterwards assembling the parts into something 3D. While I didn’t get that far during my first encounter, it is easy to see why laser cutting is a cheap, fast, and easy tool for rapid prototyping of physical devices. Laser cutting is a very compelling and attractive technique, so I hope to get the chance to play further with this technology and even use it for one of my projects.

Understanding usability problem lists is challenging

In an ongoing study about creating GUI redesigns based on the results of a usability evaluation, I asked the participants if they had problems understanding the usability problem list. Forty-four participants took part, a mix of informatics and information technology students following a design course. Their assignment was to create redesign suggestions for a web shop selling merchandise and tickets. The company developing the web shop had conducted a think-aloud usability evaluation resulting in a simple list of 36 usability problems. Each problem was described with its location, a short description, and a severity rating. The table below shows how the participants answered.

 Were there any usability problems you could not interpret? (n=44)

  Response            Share   Group total
  Disagree strongly    18%
  Disagree             16%    41% disagree
  Slightly disagree     7%
  Neutral              16%    16% neutral
  Slightly agree       27%
  Agree                 7%    43% agree
  Agree strongly        9%

As can be seen, 43% found that at least one usability problem was difficult to interpret. While this aspect is not the focus of the study, it is still an interesting finding that a relatively large share of the participants had trouble understanding all the usability problems on a relatively short list. I suspect that the 16% choosing ‘neutral’ probably believed they understood all the problems, with some uncertainty about whether this actually was the case. Unfortunately, I have no quantitative data about the number of problems that were difficult to interpret, but I do have some qualitative data. One particular problem was mentioned repeatedly among the participants. Not surprisingly, this was a semi-complex problem and one of the more important ones to investigate further. I’m sure people receiving and using usability problem lists will recognize similar issues. Another challenge faced by the participants was recreating problems: some problems only occur under certain conditions, and recreating those conditions based on a problem description is not straightforward. Despite the missing details, the non-scientific presentation, and the limited number of participants, these numbers add to earlier findings and research on the communication of usability problems.

Here are a few papers discussing usability problem reporting:

  • Hornbæk, K., & Frøkjær, E. (2005, April). Comparing usability problems and redesign proposals as input to practical systems development. In Proceedings of the SIGCHI conference on Human factors in computing systems (pp. 391-400). ACM. DOI: 10.1145/1054972.1055027
  • Høegh, R. T., Nielsen, C. M., Overgaard, M., Pedersen, M. B., & Stage, J. (2006). The impact of usability reports and user test observations on developers’ understanding of usability data: An exploratory study. International Journal of Human-Computer Interaction, 21(2), 173-196. DOI: 10.1207/s15327590ijhc2102_4
  • Molich, R., Jeffries, R., & Dumas, J. S. (2007). Making usability recommendations useful and usable. Journal of Usability Studies, 2(4), 162-179.
  • Nørgaard, M., & Hornbæk, K. (2008). Working together to improve usability: challenges and best practices. University of Copenhagen, Dept. of Computer Science, Technical Report no. 08/03.
  • Nørgaard, M., & Hornbæk, K. (2009). Exploring the value of usability feedback formats. International Journal of Human-Computer Interaction, 25(1), 49-74. DOI: 10.1080/10447310802546708
  • Redish, J. G., Bias, R. G., Bailey, R., Molich, R., Dumas, J., & Spool, J. M. (2002, April). Usability in practice: formative usability evaluations - evolution and revolution. In CHI ’02 Extended Abstracts on Human Factors in Computing Systems (pp. 885-890). ACM. DOI: 10.1145/506443.506647