Chris Taylor and I talk about what happens when CSAT takes a turn for the worse.
I’d love your thoughts on this episode! Comment below, and like/love/share/support if you found this inspiring, thought-provoking, or useful!
Charlotte Ward 0:13
Hello, and welcome to Episode 119 of the Customer Support Leaders podcast. I’m Charlotte Ward. The theme for this week is customer satisfaction, so stay tuned for five leaders talking about that very topic. I’d like to welcome back to the podcast today Chris Taylor. Chris, it’s lovely to have you back this week. We’re here to talk about customer satisfaction. Now, I know you’re particularly data driven, and maybe we can get on to a little bit about how we respond to some of the data that we might get out of a CSAT programme in a second. But first, what do you think is the biggest takeaway we can have from any effort at measuring customer satisfaction with our service?

Chris Taylor 1:03
I think for me, the biggest takeaway, on an operational management level, is how healthy your support team is in that week, in that day, in that month. CSAT can vary wildly, right? So when I was working in the contact centre, with 200 agents, you could very quickly get an after-call CSAT survey that’s bad, pinpoint the agent, ask whether they’re having a bad day, and then that’s a really nice and effective way of just having that conversation. Because it’s literally a customer score straight off an interaction; they’re feeling the emotion of that interaction. So I think it’s a really good indicator of the health of your support area as a whole, but it’s also a nice way of narrowing down on individual issues, and it can highlight issues in process as well, I think.
Charlotte Ward 1:53
In an environment like a call centre, which is pretty intense and pretty high volume, you’re getting a lot of measures through the door every minute or every hour of the day, aren’t you? So I guess when you see that volume of data, you’re able to respond quite quickly; as you said, you can just walk over to an agent and have a word. How else might you respond to low CSAT scores? What are the mitigations that you can make?
Chris Taylor 2:25
So if I get a bad CSAT score now, in a business-to-business context, what I do with that is literally respond to that client straight away: what went wrong? Be really upfront. I’ve reviewed the interaction first: were you happy with it? If you were sitting on the other side of that table, would you be okay with that? We want to make our customers feel like we’re part of their team, that we’re on their side. So it’s really looking through that interaction: how has it gone from my perspective? Then I message that client as well: how could we have done better here? It’s trying to elicit feedback from that negative experience that we can feed into our continuous improvement processes. And that’s where I find the most value, because I don’t see an unhappy client as necessarily a bad thing a lot of the time. From my perspective, it can really help to highlight failures in process, organisational practice, and maybe agent training and knowledge, that kind of thing. So I actually think it’s quite useful. Obviously, you don’t want loads and loads of bad scores, so you want to be able to spot the problems in the data and do something to fix them. But I think it’s really valuable from that perspective.
Charlotte Ward 3:35
Yeah, I couldn’t agree more. That’s interesting, what you said there. I think it’s really easy to think, we’ve got a bad survey, we’d better just fix this and shut it down as quickly as possible. But actually, I think it’s worth taking the time to spend with that customer to figure out what that particular issue was, and maybe to have the conversation in a slightly bigger way as well and see what else you can elicit. Because by filling out that survey, they’re effectively reaching out to you; they’re investing time to tell you something. Okay, let’s face it, it’s probably going to be a negative comment, because we’re all human beings, and we’re all much more likely to fill out a box with something negative than positive, particularly when it’s really specific. But it really is a valuable opportunity to get a conversation going, isn’t it?
Chris Taylor 4:34
Absolutely. And from where we’re at now, for the last year we’ve had our CSAT at 97% week on week. That’s a really good score, but that 3% is the area of opportunity for us: what can we do where our process fails? That’s how we narrow down where we can do better. It’s the same principle as when you make a mistake in anything in your career, isn’t it? You learn from it, and CSAT is definitely a useful resource for that. The other side is the agent training perspective, even without looking at wider process: if you can sit down with an agent who’s had a call or a ticket or whatever, and give them that CSAT score, then they can understand and rationalise how that score has occurred. I think that’s a really useful tool as well.
Charlotte Ward 5:23
Yeah, absolutely. And I think it’s worth bearing in mind, as you said, that it’s the 3% that’s interesting, not the 97%. Because, frankly, I’ve yet to come across an organisation that has 100% on its CSAT surveys. Well done on the 97%, and if you maintain that or improve it, great, but nobody’s going to get to 100. I think that the real value is in the difference; that’s where you can extract a really useful set of conversations.
Chris Taylor 6:00
Yeah. And I think if you’re a business that hasn’t got this metric already, right, and you’ve been running for 10 years, then you implement it and you’re getting a 50% CSAT score, you know there’s a massive problem. We work in startups, and we’ve done customer support before, so we’ve implemented this early on, so our agents can learn and grow with it. Whereas if you have agents in an established business that doesn’t have any metric like this, they never really know how their customers feel, or whether they’re happy. So how can you give proper feedback? So that’s the flip side, I guess.
Charlotte Ward 6:37
It is, and I guess it’s worth noting that it’s very likely, when you introduce this, you’re going to start with low scores, right? Because you don’t have that information yet. It’s a baseline.
Charlotte Ward 7:00
That’s it for today. Go to customersupportleaders.com/119 for the show notes, and I’ll see you next time.
Transcribed by https://otter.ai
A little disclaimer about the podcast, blog interviews and articles on this site: the views, thoughts, and opinions expressed in the text and podcast belong solely to the author or interviewee, and not necessarily to any employer, organization, committee or other group or individual.