The Ultimate 2024 Guide to Measuring Customer Satisfaction

Sarah Chambers · 8 min read

Measuring customer satisfaction means having a better idea of what works to keep customers satisfied – and what leaves them unhappy.

Of all the metrics used to measure customer experience, customer satisfaction is the most prevalent. Support teams are always looking for ways to “move the needle” and bump their customer satisfaction (CSAT) scores up a few percentage points in the hope that customers will be happier and more loyal.

CSAT is the time-tested method of measuring customer service. It’s also extremely popular. In fact, as many as 41% of customer support teams said that CSAT is their most important KPI.

While it’s been around forever (its earliest format was a rating index for radio shows in the 1940s), it’s definitely not going out of style. And that’s probably because it’s easy to set up, easy to reply to, and provides a great overview of the customer’s happiness levels.

What is Customer Satisfaction (CSAT)?

Customer Satisfaction (CSAT) is a metric used by customer service professionals to measure a customer’s feelings regarding a recent interaction. Customer satisfaction can also refer to how happy a customer is generally. For our purposes, however, we’re restricting the definition of CSAT to the metric itself and how you’ll be measuring customer satisfaction.

“Our team’s CSAT this quarter was at an all-time high of 95%!”

CSAT is usually expressed as a percentage: the share of positive responses out of all responses received. For example, receiving 80 positive responses and 20 negative responses would result in a CSAT score of 80%.
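If it helps to see that arithmetic spelled out, here’s a minimal sketch in Python (the function name and inputs are illustrative, not part of any particular help desk’s API):

```python
def csat_score(positive: int, negative: int) -> float:
    """CSAT as the percentage of positive responses out of all responses received."""
    total = positive + negative
    if total == 0:
        raise ValueError("No survey responses collected yet")
    return 100 * positive / total

# The example above: 80 positive and 20 negative responses
print(csat_score(80, 20))  # 80.0
```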

Why should I measure CSAT?

It goes without saying that happy customers stick around. Customers left unsatisfied after a customer service interaction are far more likely to cancel their service or not return in the future. Customers who’ve had a bad experience with a company have only a 40% chance of still being a customer a year later, compared to 75% of customers who had great experiences. That’s a huge number of customers who could be walking out the door due to bad service. In addition, keep in mind:

  • It takes 12 positive experiences to make up for one bad experience. (Customer Experience Insight)
  • 67% of customers list a bad service experience as their main reason for churning. (Kolsky)
  • 95% of customers will share their bad experience with friends and family. (Zendesk)

Measuring customer satisfaction means having a better idea of what works to keep customers satisfied – and what leaves them unhappy. As the old saying goes, “what gets measured, gets managed.” If we keep customers’ satisfaction top of mind and constantly look for ways to improve, we reduce the chance of something going wrong.

CSAT is the most common metric for measuring customer satisfaction because it’s simple to use and easy to understand. Everyone in the company can easily interpret scores, and customers clearly understand what’s being asked of them. Plus, because customers can give feedback with just one click, response rates are higher than for traditional long-form surveys.

How do I measure customer satisfaction?

Most help desks offer a default customer satisfaction survey that can be sent automatically after a ticket is resolved. Customers receive an email asking whether they were happy or satisfied with the service they received, to which they can respond positively or negatively. These responses, along with any additional comments, are fed back into the ticketing system so that customer service teams can address them.

Teams that want more flexibility and customization in their CSAT surveys can integrate a survey-specific tool, like Nicereply, with their help desk.

Every email, or when resolved?

There are two main points at which you can ask customers for their opinion of the service: either include the survey at the bottom of every email, or ask only once when you resolve the conversation.

Offering an opportunity to give feedback at every interaction means the agent doesn’t need to wait until resolution to find out how customers feel, and can act quickly to turn a conversation around when it starts to derail.

However, asking for feedback before the conversation is finished might create a misleading overall score. Customers who were really upset before they had all the information might update their ratings once they’re happy with the resolution. If mid-conversation and post-resolution scores aren’t separated, it’s difficult to tell whether customers are satisfied when everything is said and done – or whether they’re still waiting on a better resolution.

Finally, teams can also choose how long to wait after resolution before sending out a survey. Remember that many tickets close because of inactivity automation (for example, a rule that closes a ticket if the customer doesn’t respond within 48 hours). If the survey goes out the moment the conversation is marked closed, it may reach customers whose issue wasn’t actually resolved.

This results in frustration for the customer and bad ratings. We suggest building a 24- or 48-hour buffer into the rating flow to avoid this issue. It also gives customers time to make sure suggested fixes actually worked.
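As a rough sketch of what that buffer might look like in practice – the helper and its fields are hypothetical, not any particular help desk’s automation – the idea is simply to hold the survey back until the ticket has stayed resolved for the full buffer window:

```python
from datetime import datetime, timedelta

# Hypothetical rating-flow buffer: hold the survey until a ticket has stayed
# resolved for a full 24-48 hours, so auto-closed or reopened tickets aren't surveyed.
SURVEY_BUFFER = timedelta(hours=48)

def should_send_survey(resolved_at: datetime, reopened: bool, now: datetime) -> bool:
    if reopened:
        return False  # the customer came back, so the issue wasn't really resolved
    return now - resolved_at >= SURVEY_BUFFER

# A ticket auto-closed 10 hours ago by an inactivity rule wouldn't be surveyed yet.
now = datetime.now()
print(should_send_survey(now - timedelta(hours=10), reopened=False, now=now))  # False
```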

Different Questions

Not every CSAT survey is created equal. Depending on how the survey question is asked, teams might get very different feedback from their customers. If you want customers to focus on the service side of things, specify that in the question. If you’re more interested in their general perception of the experience, the question can be more open-ended.

Experiment with different survey questions to identify which ones garner the clearest, most actionable feedback for the customer service team. We’ve come up with 18 different questions you can ask your customers to tailor how you get feedback. Here are three of our favorites:

  • How nice was my reply?
  • How was the help you received today?
  • Are you satisfied with the resolution of our last conversation?

A Response Scale

What scale should customers use to provide their answers? Many survey providers allow anything from a binary response (Good/Bad) to a Likert Scale to a 10-point scale. Is one better than another?

A Likert Scale is a balanced range of options that scale from disagree to agree. It contains equal numbers of positive and negative responses, symmetrically balanced along the scale. For example, a common Likert Scale looks like:

  1. Strongly disagree
  2. Disagree
  3. Neither agree nor disagree
  4. Agree
  5. Strongly agree

One of the disadvantages of a 3-, 5- or 7-point scale is that there’s always a “neutral” option. Customers have an easy way to avoid taking a stance by selecting the middle response. Are customers who are “neutral” really happy? To avoid being rude, even unhappy customers might choose a less definite answer. Large scales have a similar issue: what’s the difference between a 6 and a 7 in customer happiness? Can customer service teams act on distinctions that subtle? In many ways, forcing a customer to choose between Good and Bad, or Satisfied and Unsatisfied, makes CSAT responses easier to analyze.

There are no definitive answers on which scale works best for customer satisfaction surveys. In fact, most studies have found no statistically significant differences in responses between the different scales.
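Whichever scale you pick, the responses eventually get collapsed into “satisfied” versus “not satisfied” to produce the percentage. One common convention – though not a universal rule, so check how your survey tool defines a positive response – is to count only the top answers on a 5-point scale as satisfied:

```python
# Illustrative only: collapsing 1-5 responses into a CSAT percentage using the
# common "top-two-box" convention (4s and 5s count as satisfied).
responses = [5, 4, 3, 5, 2, 4, 1, 5, 4, 5]

satisfied = sum(1 for r in responses if r >= 4)
csat = 100 * satisfied / len(responses)
print(f"CSAT: {csat:.0f}%")  # CSAT: 70%
```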

What do I use CSAT scores for?

CSAT scores are only helpful if the team takes the time to read and act on them. Customer responses, along with the context of their tickets, contain a ton of useful data about what customers want. There are many different ways to put this information to work for the business.

A real-time feedback mechanism

The most immediate advantage of collecting CSAT scores is the ability to take action when customers are dissatisfied. Even customers who might not reply to every email are likely to respond to a one-click customer satisfaction score to share their unhappiness.

Teams can build a process to alert supervisors in real time when a bad rating comes in. Following up quickly gives you a much better chance of turning the customer’s experience around.
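Here’s a sketch of what such an alert hook could look like – the event fields and the notify_supervisor helper are invented for illustration, not a specific vendor’s webhook format:

```python
# Hypothetical real-time alert: when a negative rating arrives (for example via a
# survey tool's webhook), flag it to a supervisor while the experience is still fresh.
def handle_rating_event(event: dict) -> None:
    if event.get("rating") == "negative":
        notify_supervisor(
            ticket_id=event["ticket_id"],
            agent=event["agent"],
            comment=event.get("comment", ""),
        )

def notify_supervisor(ticket_id: str, agent: str, comment: str) -> None:
    # In practice this might post to Slack, send an email, or open a follow-up task.
    print(f"Low rating on ticket {ticket_id} (agent: {agent}): {comment}")

handle_rating_event({"rating": "negative", "ticket_id": "4821", "agent": "Alex", "comment": "Still broken"})
```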

A big-picture idea of customer happiness

When you compile CSAT into one number, it’s simple to see how it changes over time. Tracking CSAT by week, month or quarter can help teams keep their focus on ensuring customers are satisfied with their service. If the number starts to decrease, it’s time to look for ways to improve.

What’s a good customer satisfaction benchmark? It varies by industry, country, and contact channel, but most teams will want to aim for no lower than 80%. Last year, the average CSAT rating for Nicereply customers was 84%. Zendesk offers a benchmarking tool, along with reports on industry averages, that anyone can access for free here.

Evaluating agent performance

Many customer service teams segment CSAT scores by agent. This allows them to identify top performers and laggards when it comes to customer happiness. However, managers should proceed with caution when making performance decisions based solely on CSAT scores. Senior agents may be taking on more complex, difficult cases, which can result in lower CSAT scores through no fault of their own.

Instead of ranking agents from highest to lowest CSAT, try setting a benchmark for agents to meet. If their score dips below the benchmark, bring it up at their next one-on-one to identify causes and opportunities for additional training.
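In code terms, that’s a simple threshold check rather than a leaderboard. A sketch with made-up sample scores:

```python
# Illustrative only: flag agents whose CSAT dips below a team benchmark instead of
# ranking them head-to-head. Scores are sample data, not real agents.
BENCHMARK = 80.0
agent_csat = {"Alice": 92.5, "Bob": 78.0, "Chandra": 85.0}

for agent, score in agent_csat.items():
    if score < BENCHMARK:
        print(f"{agent} is below the {BENCHMARK:.0f}% benchmark ({score:.1f}%) - raise at the next one-on-one")
```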

Identifying trends and hotspots

By combining CSAT data with ticket data, it’s possible to uncover trends in customer satisfaction. Do certain product areas drive lower satisfaction scores? Are newer customers more likely to be satisfied than older customers? Take a look at this sample graph from Hubspot comparing CSAT scores across customer lifecycle stages: customers are the least satisfied during Onboarding, and their satisfaction peaks after three months of using the product. A customer service manager looking at this data might decide to invest more resources in improving the Onboarding experience. It’s also possible to dig deeper and see what customers in the Onboarding stage are writing in about.

CSAT data can be very influential when talking with product managers. Combining ticket data with CSAT scores can show where customers are the most frustrated, and what situations aren’t easily resolvable by customer service agents.

Combining CSAT data with other metrics can help make decisions about where to allocate resources and where improvements are most needed. Without quantifiable CSAT data, customer service isn’t as influential to the rest of the business.
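As a rough illustration of that kind of segmentation – the field names and sample rows below are invented – the analysis is just CSAT computed per ticket attribute instead of for the team as a whole:

```python
from collections import defaultdict

# Illustrative only: join survey responses with ticket metadata (here, product area)
# and compute CSAT per segment to spot hotspots with unusually low satisfaction.
rated_tickets = [
    {"id": 1, "area": "Onboarding", "positive": False},
    {"id": 2, "area": "Onboarding", "positive": True},
    {"id": 3, "area": "Billing", "positive": True},
    {"id": 4, "area": "Billing", "positive": True},
    {"id": 5, "area": "Onboarding", "positive": False},
]

counts = defaultdict(lambda: [0, 0])  # area -> [positive responses, total responses]
for ticket in rated_tickets:
    counts[ticket["area"]][0] += ticket["positive"]
    counts[ticket["area"]][1] += 1

for area, (pos, total) in counts.items():
    print(f"{area}: {100 * pos / total:.0f}% CSAT across {total} rated tickets")
```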

Potential CSAT Issues

Like all metrics, CSAT can cause problems if used incorrectly.

CSAT is not a band-aid for customer loyalty issues. Customers who have responded positively to customer satisfaction surveys today could turn around and cancel tomorrow. Just because they are happy with the service, it doesn’t mean they are loyal to your company. Fred Reichheld, the creator of the Net Promoter Score system, found that companies that exclusively measured customer satisfaction still had very high churn rates for this very reason.

Focusing exclusively on increasing CSAT scores can create loyalty blindness. For example, a few years ago, I worked with a team that set a goal to raise CSAT from 90% to 95%. We sat down to brainstorm possible methods:

  • “What if we stop surveying cancellation requests?”
  • “Let’s only survey tickets that we solve in under 3 days!”
  • “We can exclude anyone who’s contacted us by mistake!”

Do you see the problem? We weren’t making anyone happier; we were just measuring customer satisfaction more selectively. All of these ideas would likely improve our CSAT score, but they wouldn’t make any of our customers more satisfied, or more loyal.

In fact, by not asking customers who we thought might be unhappy, we were actually creating a huge blind spot. It’s like a little kid covering their eyes with their hands and yelling “You can’t see me now!” Just because we weren’t seeing the responses, it doesn’t mean that the customers were magically happier.

A single-minded focus on increasing CSAT can cause problems. Instead, keep a balanced view of what really matters. It’s not the number itself that helps businesses grow. It’s the mindset behind it.

Measuring customer satisfaction will help you grow

Every customer-facing team should be measuring customer satisfaction in some way. A CSAT score is the accumulation of the feelings of real people. We, as customer service agents, have the privilege to move the needle, solve problems, and figure out how to make the unsatisfied satisfied. Asking customers for their feedback is a simple way to get the truth, straight from the cat’s mouth. If a customer says they are dissatisfied, they likely have reason to be – and their dissatisfaction needs to be addressed.

If you don’t have a feedback system in place to measure customer satisfaction, there’s no way to know how good your service really is. Your customers will never tell you unless you ask.


Sarah Chambers

Sarah Chambers is a Customer Support Consultant and Content Creator from Vancouver, Canada. When she’s not arguing about customer service, she’s usually outdoors rock climbing or snowboarding. Follow her on Twitter @sarahleeyoga to keep up with her adventures.
