The bookends of the story are well understood: a user tries a new feature, and when it doesn’t work the way he expects, he voices his frustration. Some time later, he returns to the feature to complete the same task. This time the frustration is gone and he gets his job done as expected. What happened between the time he voiced his feedback and the time the feedback was addressed?

In this post, I’ll share how the Rally UX team collects customer feedback across thousands of users and uses the information to improve our product. I hope you’ll be able to use some of these tips in your organization.

There are five acts to this story. Above, I described the first and fourth acts; here’s how they fit into the bigger picture:

  1. User encounters a need and lets us know

  2. We understand the user’s needs

  3. We communicate those needs and resolve them

  4. User returns to the scene, finds happiness

  5. We measure our success (or failure)

Act One: Users Let Us Know What They Need

Depending on where we are in our beta process, a single Rally feature can receive thousands of feedback submissions in a single day. For beta functionality, we know that about 1 in 10 new users enters something in the free-text field, giving us some indication of the specific aspects of the feature they liked or disliked.

Act Two: Understand Users’ Needs

These thousands of feedback submissions roll into our Product Data Hub, a tool we use to begin the analysis process. Here are a few examples around a particular topic, all from different users of the beta version of the Iteration Status page:

“It's harder to use tasks.”

“I can’t really see how to add new tasks to a story inflight.”

“Need a way to quickly create task for a user story with Name, Estimate Hour, To Do Hour, Owner in one shot.”

The challenge is distilling these pieces of feedback, and thousands more like them, into actionable items. We start with a two-step categorization scheme: each piece of feedback is classified into a general category and a feature-specific category.

Here’s an example from the Iteration Status page showing the number of feedback submissions over a couple of days in each of the relevant categories. (Note: the feedback submissions I described above were all categorized as Task Interaction. We’ll return to that category later in this post.)

We categorize the feedback in two ways: as a feature-focused team, and as distributed individuals throughout the Rally organization.

Distributed Individuals

As a user experience researcher, I’m constantly recruiting people within Rally to categorize feedback. For example, whenever a new developer or a new product owner starts at Rally, they often ask me how they can learn about our users. I tell them to review and categorize feedback; it’s a great way for them to gain empathy with our users and a great way for me to distribute the categorization work. By distributing the categorizers, we also mitigate the bias inherent in such a subjective process.

Feature-focused Team

The feedback team includes the product owner, the UX designer, the technical lead, and some of the software engineers responsible for the feature. One goal of this process is to communicate what our users need to the development teams doing the work. Although the feedback can be raw and painful to read at times, that unfiltered quality is exactly what makes users’ voices heard within the engineering organization. As a team, we meet regularly to review the feedback, typically starting each meeting by reading the most recent few dozen submissions together. Then we look at the charts to see what bubbles up in priority. Here’s an example where we identified Data Density as the top issue:

Any individual feedback comment can generate lots of debate. I’ve found that different roles have different biases about how feedback should be categorized and will argue in support of their particular bias. The most important thing is not to debate or respond to each feedback instance but to move quickly through a larger number of feedback comments so we can see what issues surface most frequently. If something is important to users, you’ll certainly hear about it more than once.
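
To make the “what surfaces most frequently” step concrete, here’s a minimal sketch of how categorized submissions could be tallied. The Feedback class, field names, and sample data are hypothetical and for illustration only; they are not Rally’s actual Product Data Hub schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Feedback:
    """One categorized submission (hypothetical schema, for illustration only)."""
    text: str
    score: int                # rating the user submitted with the comment
    general_category: str     # e.g. "Ease of use", "Defect"
    feature_category: str     # e.g. "Task Interaction", "Data Density"

# Made-up sample submissions, already categorized.
submissions = [
    Feedback("Hard to add tasks in the new view.", 3, "Ease of use", "Task Interaction"),
    Feedback("Couldn't find the add-task control.", 2, "Discoverability", "Task Interaction"),
    Feedback("Too much whitespace on the board.", 4, "Ease of use", "Data Density"),
]

# Tally the feature-specific categories so the most frequent issues
# bubble to the top of the team's review meeting.
by_feature = Counter(f.feature_category for f in submissions)
for category, count in by_feature.most_common():
    print(f"{category}: {count} submissions")
```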

Act Three: Translate Feedback Into Action

By bringing together an interdisciplinary team to categorize the feedback, we can get an early start on communicating the problem and finding a solution. Simple heuristics, summarized in the table below, apply based on how we categorize the feedback.

| Generic Category | Definition | Action |
| --- | --- | --- |
| Defect | Deficiency in the system | Triage and fix |
| Feature parity | Gap between new and old functionality | Parity gaps block users from adopting new features; prioritize a solution |
| Discoverability | The feature exists but users aren’t finding it | Design for improved discoverability or develop a campaign to educate users |
| Ease of use | The new functionality is arduous to users | We often need to investigate these issues with user interviews and design an improved solution from our learnings |
| Enhancement request | “The feature would be better if …” | Build a backlog of work |
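
As an illustration only (this is not part of Rally’s tooling), the heuristics above could be encoded as a simple lookup that suggests a default next step for each categorized submission:

```python
# Hypothetical mapping from general category to the team's default response,
# mirroring the heuristics table above.
CATEGORY_ACTIONS = {
    "Defect": "Triage and fix",
    "Feature parity": "Prioritize a solution; parity gaps block adoption",
    "Discoverability": "Improve discoverability or run an education campaign",
    "Ease of use": "Schedule user interviews, then redesign",
    "Enhancement request": "Add to the backlog",
}

def suggested_action(general_category: str) -> str:
    """Return the default next step for a categorized piece of feedback."""
    return CATEGORY_ACTIONS.get(general_category, "Review with the feature team")

print(suggested_action("Discoverability"))
```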

Act Four: Communicate Action Taken

Once we’ve addressed the issue, how do we let users know? If they tried and failed before, they aren’t likely to come back on their own, so we use a variety of methods to get their attention.

We know that our users’ experience is a valuable commodity, so we’re careful in how we expose beta functionality, which may contain defects or feature gaps. We’ve discovered that the best process is to expose relatively small groups of users to new features, learn from those users, improve, fix, and resolve the issues, and then introduce some new users to the updated beta functionality.

Act Five: Monitor

How do we know our action was effective in resolving a user’s needs? We monitor several feedback mechanisms. When the beta functionality is a replacement for existing functionality, we carefully monitor retention: after a user has experienced the beta functionality, we track whether, on their next visit to the page, they stay with the beta or switch back to the existing functionality. We want to see a significant increase in retention for each improvement to the page. If we don’t, we need to discuss whether our change really was an improvement.
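
Here’s a minimal sketch of the kind of retention calculation described above. The visit-event format is an assumption made for illustration; it is not Rally’s actual analytics pipeline.

```python
def beta_retention(visits: list[dict]) -> float:
    """Fraction of return visits (after a user first saw the beta page) that
    stayed on the beta rather than switching back to the existing page.

    Each visit is a dict like {"user": "u1", "page": "beta"} or
    {"user": "u1", "page": "classic"}, in chronological order.
    (Hypothetical event format, for illustration only.)
    """
    seen_beta = set()   # users who have already experienced the beta
    returns = kept = 0
    for visit in visits:
        user, page = visit["user"], visit["page"]
        if user in seen_beta:
            returns += 1
            if page == "beta":
                kept += 1
        if page == "beta":
            seen_beta.add(user)
    return kept / returns if returns else 0.0

visits = [
    {"user": "u1", "page": "beta"},
    {"user": "u1", "page": "beta"},     # u1 came back and stayed on the beta
    {"user": "u2", "page": "beta"},
    {"user": "u2", "page": "classic"},  # u2 switched back to the old page
]
print(f"Beta retention: {beta_retention(visits):.0%}")  # Beta retention: 50%
```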

We also closely monitor the feedback categorizations. When we believe we’ve improved a feature, we expect to see a drop in the negative feedback around that feature.

We pay close attention to the qualitative feedback coming in, along with the score each user submits. Here’s some feedback from the Task Interaction category after we added the capability to add new tasks in line on the Iteration Status page.

I like being able to create children inline and choose between a task and a user story. It's very intuitive and quick. (Score: 9)

The re-adding of "add child inline" is a really nice touch. Also, the inverse "copy child from" splitting of a story is really nice too. I think the UI still has a long way to come, but these changes are really nice. (Score: 7)

I commented earlier when leaving the new page, that I liked the inline adding of the tasks in the old page. After giving the new one an attempt, I really like the new one. It just wasn't obvious that I should click the gear to the left and the 'child' that it offered to spawn, I assumed could only be a story, as I always think 'story' when I think 'child' but realize as 'task' could be a child. Now that my brain training is over, I think it will be great. Nice work. (Score: 9)
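
To see whether a change like this actually moved the needle, the per-category scores can be compared before and after it ships. The numbers below are made up for illustration; they are not real Rally feedback data.

```python
from statistics import mean

# Hypothetical (category, score) pairs collected before and after the
# in-line task creation change shipped.
before = [("Task Interaction", 3), ("Task Interaction", 2), ("Data Density", 5)]
after  = [("Task Interaction", 9), ("Task Interaction", 7), ("Data Density", 5)]

def avg_score(rows, category):
    """Average submitted score for one feedback category."""
    scores = [score for cat, score in rows if cat == category]
    return mean(scores) if scores else None

category = "Task Interaction"
print(f"{category}: {avg_score(before, category):.1f} -> {avg_score(after, category):.1f}")
# Task Interaction: 2.5 -> 8.0
```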

Takeaways and Lessons Learned

Segment your feedback by user persona. On the Iteration Status page, we didn’t do a good job of segmenting our feedback, so when we removed the option to switch back to the old version, we heard loud and clear from quality engineers that we had missed a couple of features they really valued. They had submitted feedback on the topic before, but because they (a) immediately switched back to the old page and (b) were such a small portion of the user population, their feedback was drowned out by other user groups who were actively using the page.

Get enough functionality into users’ hands so they understand the contextual value, then use feedback to guide development. Let your users tell you the most important things to work on.

Have the courage to deploy minimal value functionality when you know you can listen and respond quickly. It’ll help you work on the right things in the right order.

Budget time for listening and responding. We often refer to this as a Rapid Response process: the goal is to respond quickly to the early adopters who are using the initial deployment of a feature. When a new feature is released in one quarter, we allocate capacity for Rapid Response work on that feature in the next quarter to ensure we make adjustments in a timely fashion.

We appreciate every single feedback submission, so please keep sending them — they help us build a better product.

Come meet the Rally UX team at RallyON! 2015 — a uniquely interactive Agile conference happening June 15–17 in Phoenix, Arizona. If you liked this blog, you may be especially interested in the workshop, “Raising The Bar: Bringing Discipline to Defining and Tracking Business Value.”

Thanks to Steve Stolt and William Surles for helping to write this blog post, and doing the work that made writing it possible. 
