Charlotte Ward

286: Measure What Matters When Humans And AI Share The Queue; with Craig Stoss

About this episode

AI has crossed a threshold in customer support: it no longer just routes and records—it actually does the work. We sat down with Craig Stoss, solutions lead at Codif, to unpack what changes when agents and co-pilots resolve real cases, trigger refunds, detect spam, and personalize replies across channels. The core shift is mindset: once software handles tasks, you need to manage it like a teammate—measure output, audit quality, plan capacity, and set honest expectations about what AI should own versus when a human steps in.

We map the strengths and limits on both sides. AI excels at repeatable workflows, multilingual responses, classification, and fast data gathering across commerce and CRM tools. Humans shine in ambiguity, ethical judgment, and emotional connection. That split demands better metrics: evaluate AI on loop frequency, hallucination rate, accuracy, and handoff health; evaluate humans on sentiment impact, recovery, and resolution quality. We also tackle the messy middle: what counts as “containment,” how AI prework changes case counts and AHT, and why your SLAs must adapt as simple issues disappear and human queues get harder.

Transparency matters for trust and compliance. Customers behave differently when they know they’re speaking with a bot, so make disclosure sensible and escalation simple. Inside your org, put AI into workforce management like a 24/7 agent with forecasted volume, coverage targets, and per-workflow goals—think 90 percent for FAQs, 80 percent for refund screening, and a lower bar for complex warranties. Budgeting evolves too: tokens and compute join salaries, compounding micro-saves into real capacity. The upside is big: more interesting human roles, fewer transfers, and a path to merge support and success so one empowered person, augmented by AI, owns the outcome end to end.

If you care about scaling support without losing humanity—smarter metrics, cleaner handoffs, and a realistic plan for AI and people to thrive together—this conversation is your playbook. Subscribe, share with a support leader who needs it, and leave a review with the one metric you’d change first.

Craig Stoss

Transcript

Charlotte Ward: 0:13

Hello and welcome to episode two hundred and eighty-six of the Customer Support Leaders Podcast. I'm Charlotte Ward. Today, welcome Craig Stoss, talking about making AI part of your team. Craig, lovely to have you back after so long, not least because I didn't do very well on keeping the podcast going last year. But we had a bit of a catch-up in the interim, didn't we? And I'm so pleased to have you join me again after what seems like an age, actually, on the airwaves with you. To that end, welcome. And would you do a quick reintroduction for our listeners old and new?

Craig Stoss: 0:58

Yeah, well, thanks for having me back, Charlotte. It's always great to converse with you. My name is Craig Stoss. I lead the solutions team at a company called Codif. We do AI software for e-commerce teams, and specifically AI agents, which is the theme that we're going to talk about here. My background is a mixture of customer experience, consulting in the customer experience and software space, and most recently working very closely in the software tools sector for customer experience and customer support teams specifically.

Charlotte Ward: 1:32

Perfect. And thank you so much for touching briefly on where we're heading with this conversation today. You and I were talking about this the other day, and I found this concept super interesting, even just as a headline, right? Which is, I believe, how AI is, and maybe should be, considered a part of your team. So what do you really mean by that? What does that actually mean? Let's start there.

Craig Stoss: 2:00

Yeah, you know, as I've been in the software industry for CX for so long now, for about four or five years really focused on the industry and the software tools available, the biggest shift that I have seen is that all tools used to be operational in nature. And what I mean by that is that, you know, Zendesk and Gorgias and your help desk tools, they were there just to help humans organize and prioritize: keep track of what was open, what was closed, keep track of what you said, be able to hand off between humans without necessarily having a warm handoff. But it was all operational, right? And that's true for a lot of your tools. That's true across your returns tool, if you use a Loop Returns or a Skio or your subscription management tool, and that's true for your CSM tools. They're used to track renewal dates; they're just used to store data and help operationalize things. What's changed in the AI world, especially with the AI agents coming out and being something that we're seeing so much more of, or even copilots to some degree, is that these are now actually doing the work. They're augmenting the human work instead of just operationalizing it. And why I think that's an important distinction is that now that they're doing work, they need to be measured and checked on to make sure they actually are doing the work. It's no different than having an employee where you do reviews of their work; you measure the quality of it, the frequency of it, the volume of it. Your AI needs to be measured very similarly. And I go as far as to say that this should be transparent, because if people are worried that AI is going to take their jobs, or that AI is going to fundamentally change your corporate landscape or your organizational structure, they should be able to see transparently: this is what I'm looking at, this is how I'm measuring AI the same as you, and this is how I'm measuring AI differently than you. And what does that mean for the team, or for you as an individual?

Charlotte Ward: 4:18

Interesting, interesting. I think you're spot on. Tools is how I would describe everything we were doing before AI. They were not accomplishing tasks; they were just the things that humans were wielding to accomplish tasks. And what we're seeing with AI now, like you said, with agents, co-pilots, all of that kind of thing, is AI doing the tasks. Let's put little quotes around this, but somewhat independently, right? AI can travel off around your ecosystem and accomplish tasks without you, which is new. That's the new nature of this. And that independence, I think, is part of what drives some of the stresses and nerves you're talking about in the ecosystem, for what can effectively be called their co-workers, i.e. your actual human team. But also the relationship between those two sets of task accomplishers, i.e. the human workforce and the AI workforce. To your point, there are things that we should be transparently measuring and holding both sides of the team to account on, but we should also be transparent about how we're doing that and why we're doing that, right? Why we're putting AIs to task on certain parts of the work and humans on others, and what opportunities that might drive. There's so much to unpack here, but first of all, let's consider what the measurements might be, I suppose. How do you even begin to compare two entities, a human and an AI, that might be, or might have been historically, doing the same task independently, if we can consider it at that level first? How do you compare them?

Craig Stoss: 6:32

I think it all starts with what AI can do and what AI cannot do, and vice versa, what humans can do well and what humans can't do well. So I start with things like: what can AI do really well? Automate repeatable tasks, absolutely. Summarize, classify, route based on training. Hyper-personalize, right? Because AIs can go out and gather information, previous content, or, through integration, other pieces of data that a human would just take too long to gather, so we don't do it. AIs can do this very quickly and hyper-personalize. They can be multilingual, right? They can immediately switch languages, and switch to languages that you would probably not even think of hiring for unless you had very regional people in that region. So those are things that AI does well. But what they don't do well is, you know, build emotional connections and trust. They don't handle ambiguity well at all. If they've never heard of something, they're either gonna hallucinate or they're gonna simply say, this is something I have no clue on. And then there's creating policy, or sort of ethical reasoning. You know, you have a really distraught customer on the phone, and your policy says X, and the AI is gonna say, well, that's the policy, whereas the human agent might be like, listen, given your situation, maybe we can make an exception to the policy. So I start there. Let's separate those out, because everything I've just said, you want in a support team. You want hyper-personalization, you want ethical reasoning, you want ambiguity handled and repeatable tasks done. All of that has to be done. Okay, let's start with that. That's our beginning. And then it becomes: okay, let's measure who should do what, what can be done together, and what can be measured in overlap. So what's overlapping in this? Well, the obvious one is volume. What volume of our workload is repeatable, is routine and can be trained into an AI, and who's handling that load? Is AI handling it? Is a human handling it? Or let's go to something like ambiguity. How often is the AI attempting to handle ambiguity and handing it off to a human correctly, or failing at that ambiguity? Are they hallucinating? Are they making stuff up, or trying to do something that frustrates a customer, putting them into a loop of try again, try again, try again? So that's where you start, right? You start to figure out where there's overlap, where there's not, and who's doing what. And then I think it goes into a second layer, something I discovered really recently that we probably could talk a whole session on, which is that people come to, you know, a Codif that provides these agents and say, I want to measure sentiment. And what we found is that sentiment is a really human thing, right? Because you and I can have a conversation, we play off each other: my tone of voice, your tone of voice, sometimes your facial gestures, now that we can see each other, or, in an email, the use of certain words or all caps. Sentiment is kind of a human thing.
What we're finding is that with an AI, the sentiment is almost always neutral, or maybe slightly frustrated, because they're talking to support, and people tend to be slightly frustrated in general when they talk to support. But people tend, especially when they're told it's an AI, to just be neutral: hey, I need this information. And then they get angry for different reasons. They get angry, for example, when they're put into a loop of "I didn't understand this, try again," or they get angry when the answer isn't accurate, or it says I don't know.

Charlotte Ward: 10:42

Or it's by the book, to your earlier point, right? It's following policy by the book: sorry, I can't give you that refund, right?

Craig Stoss: 10:51

Yeah, yeah. And that's why, I would argue, we should measure them differently. Humans should be measured on that sentiment, on how they're making a customer feel, and on the reactions the customer is having to their voice and the way they present things. AI should be measured on: are you coming across these frustration points, like loops, like hallucinations, like incorrect answers, like rigidly following policy? So there's a whole other set of metrics there that the AI should be measured on, where sentiment maybe doesn't cover it. And that's the starting point, right?
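
To make that split concrete, here is a minimal sketch of what AI-specific quality metrics could look like, scored on the frustration points Craig names rather than on human-style sentiment. The event labels, data, and thresholds are invented for illustration; this is not Codif's actual scoring.

```python
# Hypothetical event labels for AI-handled conversations. The taxonomy
# (loop, hallucination, inaccurate, resolved, handoff) follows the
# frustration points named in the episode, not any particular product.
conversations = [
    ["resolved"],
    ["loop", "loop", "handoff"],        # customer stuck in a "try again" loop
    ["hallucination", "handoff"],
    ["resolved"],
    ["inaccurate", "resolved"],
]

def ai_quality_metrics(convos):
    """Score the AI on loop frequency, hallucination rate, inaccuracy,
    and handoff health instead of human-style sentiment."""
    n = len(convos)
    handoffs = [c for c in convos if c[-1] == "handoff"]
    return {
        "loop_rate": sum("loop" in c for c in convos) / n,
        "hallucination_rate": sum("hallucination" in c for c in convos) / n,
        "inaccuracy_rate": sum("inaccurate" in c for c in convos) / n,
        # a "clean" handoff = escalated without first frustrating the
        # customer with a loop or a hallucination
        "clean_handoff_rate": (
            sum(not ({"loop", "hallucination"} & set(c)) for c in handoffs)
            / len(handoffs) if handoffs else 0.0
        ),
    }

print(ai_quality_metrics(conversations))
```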

Charlotte Ward: 11:22

And yeah. I I’ve got I’ve got I’ve got an I’ve got a question there for you though, which which is that to your point, people, i.e. customers react and interact with an agent quite differently if it is a human agent or an AI agent, if they know it’s a human agent or an AI agent. Um, so customers facing an AI agent will try and game the system somehow if they really feel it’s not serving them and they will try and speak to a human, for instance, you know, or back in the days of everybody used to do the zero, zero, zero, zero, zero thing on their phones, right? Yeah, to try and get through to a human if they were frustrated with because at that point you know it’s a machine. Um, how transparent should we as organizations be with our customers, letting them know they are talking to a human or an AI? Because to your point, the way they approach it is gonna be different, and therefore the measurement that you’re talking about applying is going to be quite different because if if the human doesn’t know and they’re expecting certain sentimental type of reaction, let’s say not sentimental in that sense, but a human reaction, they’re gonna they’re gonna have a different, they’re gonna like the frustration get levels are gonna register differently, aren’t they? So does that does that alter the the proposal you’re making here that we’re measuring different things for different agents?

Craig Stoss: 12:54

Um well, I’ll address your first question first and then I’ll come to the second one. So I think I think your first question was how transparent should you be?

Charlotte Ward: 13:02

Yeah.

Craig Stoss: 13:02

Um, and that’s a really complex question, right? I think there are some people that want to hide it as much as possible. They want it to seem human, they name it as a human, which I’m I I kind of advocate for giving your bots names, whether you publicize that to the public or not, or whether it’s more of just an internal way of referring to it, you know, it’s up to you. But you know, they want to hide, they purposely obfuscate it. Um, and I think subtechnologies allow for that really easily, and and I don’t necessarily have a problem with that. Um the uh there are countries, I think the European Union uh as it in general is starting to crack down and say, no, you must inform your users uh if the if you’re talking to an AI or an AI was involved in the response in some way. Um so you know, and then there’s and then there’s ethical things like you know, if it does get into these these loops, you know, how easy is it is it to get a human, or does a human have to be available, or or do you create tickets and what’s your response time on those tickets and transparency around that? So I think, you know, just to answer that question, I I think you should be, I think in general, transparency is a good thing. Um, I don’t think there’s any reason to purposely hide it. I don’t know that you have to necessarily advertise this as AI with big red flashing lights. Yeah. But I don’t think there’s any I also don’t think there’s any reason to outright lie, you know, you know, and say, oh yeah, I I I am a human, you know, you know, and and no, you’re not. So that’s the first question. I think to your second question, I it goes back to what I mean by um, you know, AI workforce, an AI, sorry, an AI augmented workforce. So humans and and and AI working together. I think that if you you understand what we just talked about, about the fact that humans approach or users approach AI differently, if you understand that that AI is taking some volume of work away from from people potentially, um then showing that is important. One of the things that I advocate for is that if you have a workforce management tool, right? If you use a tool to manage a workforce like an assembled or that type of tool, um your AI should be in that tool just as just as a human would be, because it’s taking volume. It’s taking volume on different hours, typically 24 hours a day, or your human agents aren’t. And and it should be it should be budgeted in the same way. It should be like, okay, well, we expect our AI to take so much volume, uh, and and therefore uh we’re gonna budget for that that type of uh that type of volume. And that goes back to the success criteria, you know, and measuring it of if I think that if I think that 40% of my volume can can be answered by repeatable mechanisms using AI and it’s only hitting 20%, well, you know, that’s the same as a human. If you expect a human to take a thousand tickets a month and they’re taking 400 tickets a month, you’re gonna have a conversation with that human, right? You’re gonna say your performance isn’t where it needs to be. And I think it’s the exact same with AI, except you’re talking to a vendor and saying, hey, vendor, your AI isn’t doing what I want it to do. Your agent isn’t performing. Um, and so so to me, it’s it’s all that same thing of you decide what you’re measuring, you you and you show people what you’re measuring and how you’re measuring it against. Um, some people, some of our customers are very specific about measuring against specific out uh specific um workflows. 
So: I want you to automate 80% of refunds, but I do expect you to answer 90-plus percent of FAQ questions, because those questions are already documented and we don't want a human to handle those really simple answers. And some people say, when you have a complex warranty policy, you might only expect AI to end-to-end automate 30% of that.
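
As a sketch, here's how those per-workflow expectations might sit in a workforce plan, with the AI agent treated as a forecast line like any other agent. The targets echo the examples above; the volumes and actuals are invented for illustration.

```python
# Per-workflow automation targets vs. actuals, treating the AI agent as a
# forecast line in workforce management. Targets echo the episode's examples
# (90%+ FAQs, 80% refunds, ~30% complex warranties); volumes are made up.
workflows = {
    "faq":        {"target": 0.90, "volume": 4000, "automated": 3750},
    "refunds":    {"target": 0.80, "volume": 1500, "automated": 1290},
    "warranties": {"target": 0.30, "volume": 600,  "automated": 110},
}

for name, w in workflows.items():
    actual = w["automated"] / w["volume"]
    # a shortfall behaves like an under-performing agent: that volume has
    # to be staffed by humans instead
    human_overflow = max(0, round((w["target"] - actual) * w["volume"]))
    status = ("OK" if actual >= w["target"]
              else f"BELOW target, ~{human_overflow} extra human tickets/mo")
    print(f"{name:10s} target {w['target']:.0%}  actual {actual:.1%}  -> {status}")
```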

Charlotte Ward: 16:55

Yeah.

Craig Stoss: 16:55

Because realistically, 70% of it is going to need some of that ethical reasoning, because it's a warranty and it's complex, depending on the type of product you sell. So be transparent with that. Because, again, as an AI company, we get asked all the time: what is the containment rate that we can expect? What can your AI contain? And it's such a nuanced question. What is containment? I ask this back to the customer a lot of the time: what is containment? Because if you've told our AI in certain cases to hand off to a human, and the AI has done its smart thing to gather information and then do that handoff, is that contained or is it not contained? Because we've done what you've asked us to do; it just happened that the thing you asked us to do is send it to a human. That is a very nuanced problem, and I don't know that anyone in the market has fully solved it. I think some people have their opinions and structure their product and pricing around those opinions. But I don't think it's a solved problem, because I think it differs industry to industry and product to product.
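
A toy illustration of why the answer moves with the definition: counting a configured, information-gathering handoff as "contained" or not changes the headline rate on exactly the same conversations. The outcome labels and counts here are invented.

```python
# Two reasonable definitions of "containment" give different rates on the
# same data. Outcome labels are hypothetical.
outcomes = (
    ["resolved_by_ai"] * 50
    + ["instructed_handoff"] * 20  # AI gathered info, then handed off as configured
    + ["failed_handoff"] * 10      # AI looped or gave up; customer escaped to a human
    + ["abandoned"] * 20
)

def containment(outs, count_instructed_handoffs: bool) -> float:
    contained = {"resolved_by_ai"}
    if count_instructed_handoffs:
        # "we did exactly what you asked; the ask was to hand off"
        contained.add("instructed_handoff")
    return sum(o in contained for o in outs) / len(outs)

print(f"strict:    {containment(outcomes, False):.0%}")  # 50%
print(f"inclusive: {containment(outcomes, True):.0%}")   # 70%
```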

Charlotte Ward: 18:08

Absolutely, absolutely. And what just struck me about what you were saying there about what containment is: I was just pondering that no support metric exists in a vacuum at the best of times. What might look like a self-serve resolution, or a deflected ticket, or something that never reached the queue: if we're just talking about volume or load on your team, there are so many ways to lower it, so many opportunities. And AI is just one of those, right? I mean, AIs are also out there helping us create better documentation, or providing hyper-personalized experiences. There are so many facets to a successful deflection if we're just talking about ticket volumes. And I don't know that anyone has an outright simple answer, even if we're talking about how effective a KB is or something, right? Never mind your AI agents. So I think the very notion that we should concentrate on one metric, and focus on that without thinking about how it affects everything else, misses something. And I think this comes to your point about handing off to a human agent and what that looks like for the humans involved, or the profile of the tickets that humans are gonna be left with at any point. Whether they're tickets that have been half managed by an AI and handed off, or whether the AI is being very successful at serving FAQ answers and simple refunds but your human agents are left with much more complex tickets: their time to resolution is gonna increase, their volumes are gonna drop. So you have to think about those metrics not just as you apply them to AI agents, but as you apply them to the whole team, and as you look at the whole ecosystem of metrics. I think it's really important to think about the success of your AI in the same way you think about the way you might influence any one of those metrics. Any support leader who's just looking at first response time, and only cares about that without worrying about any of the other metrics that might be affected by them driving certain ops initiatives to positively affect first response time, is kind of missing the point. And the same is true of every single other metric. Every support team runs on an ecosystem of metrics, and even if they're not all SLAs and KPIs, you have to be aware of the interplay.

Craig Stoss: 21:01

Yeah, yeah, absolutely. And it's so nuanced, and I think that's what people sometimes forget. AI is not a black and white, true or false, solved versus not solved thing, especially when we're talking about AI agents. It's more about what it does. We had a customer where they were receiving some spam tickets; someone was just spamming them. And the AI was very good at picking out what was spam and throwing it away, so that it never impacted their human team. And we started talking about that internally. I'm not gonna go into the contract and other things, but internally we were talking about this concept of: is that a resolution? Is recognizing that something's a spam ticket, and then not forwarding it to a human or triggering anything that's gonna impact time-based metrics, a contained ticket or a resolved ticket? Because if that were to go to a human, they would just mark it as spam, or they would set some setting, and it would probably not count in the metrics at all. We're not even getting to that point. So that's a really interesting nuance. We also have other nuances. Let's say, again, think about how a human ticket works. If I came to a human and said, what's your refund policy? and the human answered that and closed the ticket, and then I came back and said, okay, I want to refund product X, that's potentially two tickets, depending on how that conversation went. Whereas in the AI world, you're just going back and forth. You could say, what's your refund policy? And the AI could simply say, well, the policy is this, and if you give me your order number, I'll see if you're eligible. And then it could process that refund automatically. Is that one ticket? And is it an FAQ response or an automation response? That's important to some people. And that nuance is why I feel it's more and more important to start bringing this into your workforce, because if you want to do support in the future, there will be some level of AI involved. And you said about creating knowledge-base articles or recognizing knowledge gaps: absolutely. So how do you plan for that? I think this is gonna change the way we budget. You're not gonna have a software budget and a people budget anymore. You're gonna have a budget, and it's: this is the budget to solve this set of problems. Your goal is to solve 10,000 tickets a month and have an up-to-date knowledge base; here's some money, solve it. And how you spend that on AI versus humans is gonna depend, you know, on your business, on your type of products, and to some degree on your confidence in the technology of the day. I feel like all of this is gonna change the way support leaders or CX leaders think about staffing and accomplishing given tasks.
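
The counting problem Craig describes is easy to show: the same day's threads produce different case counts depending on whether discarded spam counts at all, and whether a multi-intent AI conversation is one case or two. A hypothetical sketch:

```python
# The same traffic yields different "ticket counts" depending on how AI
# activity is counted. Thread shapes are invented for illustration.
threads = [
    {"kind": "spam_discarded"},                    # thrown away, never reached a human
    {"kind": "ai", "intents": ["faq"]},
    {"kind": "ai", "intents": ["faq", "refund"]},  # policy question then refund, one thread
    {"kind": "human", "intents": ["warranty"]},
]

def count_cases(threads, spam_counts: bool, split_intents: bool) -> int:
    total = 0
    for t in threads:
        if t["kind"] == "spam_discarded":
            total += 1 if spam_counts else 0
        elif split_intents:
            total += len(t["intents"])  # FAQ + refund = two "cases", human-style
        else:
            total += 1                  # one conversation = one case
    return total

for spam in (False, True):
    for split in (False, True):
        print(f"spam_counts={spam!s:5}  split_intents={split!s:5}  "
              f"cases={count_cases(threads, spam, split)}")
```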

Charlotte Ward: 24:32

Yeah. I mean, to your point about budgets as well, I heard an interesting take on this recently, which is that the workforce of the future, and certainly of the near future, is not going to be funded by more salaries. It's gonna be funded by tokens. And I think that's true.

Craig Stoss: 24:50

I can’t argue with that. Yeah. I talked to a leader this morning. Um, he actually has a he’s a founder of a company that um has pivoted, they they used to do kind of co-piloting agent assist type work, and they’ve pivoted more into the AI space. And and we were talking about that, that for the stuff that he his company solves, the tokens are relatively cheap, right? And so um we got talking about if you if you uh put in an automation or or something that saves 10 seconds per task, and that task is done hundreds of times a day against dozens of people, you know, all of a sudden the cost of that task in tokens is potentially less than the salaries of that 10 seconds. And you’re not replacing, you’ll never replace a person for 10 seconds worth of work, but you can speed that up by 10 seconds and and and save some money overall because that person can now take that 10 seconds on aggregate and put it somewhere else. Yeah. Um, and and so you know, that’s an interesting way of looking at it, right? Um, I don’t think that works in in all cases, right? I think that some of the stuff that that that my company deals with is very complex and probably could take a lot of money in tokens to to read massive emails and understand contextually what’s going on. But um I think it works in a lot of cases.

Charlotte Ward: 26:17

Yeah, yeah. And it's interesting. I do agree, it's very unlikely that you're going to add those up. It becomes like the plot of Superman III, you know, where you take all these fractions of cents and put them in a bank somewhere. You're not gonna take all of those five- and ten-second saves and actually be able to save a whole salary out of them. But you save enough, split across people, and you reduce the cognitive load. And I loved this take I saw on LinkedIn last week, which is that one of the major influences of AI on support teams is not to take support jobs away, but to make support jobs more interesting. I really love that take. So to give people the opportunity to do other things with those 10 seconds multiplied is quite an interesting prospect for the future as well. And it's a different nuance, a different opportunity, a different set of metrics we should be looking at again, right?

Craig Stoss: 27:18

That's something I've advocated for a while, and I think you and I probably talked about this over a year ago: I really do see a future where the concepts of support and success are merged together again. You know, maybe 10 or 15 years ago they split up, and I think we're gonna see them merge, because, exactly to your point, if you have a situation that requires a human in the future, one that cannot be automated, is not repeatable, that requires empathy, reasoning, critical thinking, you're gonna need someone on the other end of the phone that has those skills. And I'm not saying that support agents don't have those skills, but think about the power if you augment those skills with industry knowledge, with the ability to look up contract information or industry information, which how many support teams do you know today can do? If they're empowered and augmented with that knowledge, and then have the support knowledge and that empathy, et cetera, then all of a sudden your support experience is boosted. Because, A, you're making it more interesting, to your point, and B, when someone gets to that point, you have the exact right person. There's no more forwarding to a new department, no lack of ownership, no "oh, well, that's not me, that's someone else, and now you have to wait to book a call with them, or I'll have to transfer you and repeat your issue." All of that goes away, because you have this really superpowered individual on the end. And yeah, I'm a huge advocate for that. I think it's just gonna naturally happen over the next five years as companies start to realize that that's what people expect. If my problem is so nuanced that something that solves problems for a living can't solve it, then it's gonna go to a human that has to know what they're doing. That would be the expectation.

Charlotte Ward: 29:22

Yeah, yeah. I couldn’t agree more. I couldn’t what an interesting future ahead of us, right?

Craig Stoss: 29:28

I mean, it’s a future where you know I have a seven-year-old son and and I can’t I can’t imagine what what his job will be. I mean, I I I I just don’t know. I I can’t I don’t know if he’ll even drive a car in ten years, you know.

Charlotte Ward: 29:43

Like it’s uh it’s one of those things that um there’s a lot of unknown.

Craig Stoss: 29:49

And you start seeing the impact that AI is having right now, and it's still in its infancy. You know, you start to see the fake news stuff, or the fake images, as AI gets less and less recognizable as AI, which is absolutely gonna happen. You start to think: okay, what does this mean? How do we compensate people? How do we change the models? And the biggest thing that I think is missed in all of this is that there is a generation of people, including my seven-year-old, who are growing up where this is just normal. It is normal just to talk to the air and have Google or Alexa or whatever talk back to you and do the thing you want it to do. To me, I'm still amazed when I can do that, to some degree, and I'm in the industry. But to my son, that's just normal. If he goes to my parents' house, who don't have a Google Home, he's like, well, I can't listen to music. And it's like, yes, you can, you just can't speak to the air.

Charlotte Ward: 31:05

Yeah, right, right. Absolutely.

Craig Stoss: 31:07

Now, translate that to when he becomes a consumer. What is his expectation gonna be of a company, if the expectation is you can talk to the air and have things done for you, right?

Charlotte Ward: 31:17

I mean, it's gonna change everything. I'm very philosophical about that, and I could easily spend another hour talking about this, but I'm very philosophical about the impact of change over, let's say, generations, but certainly when you get to the decade level. I'm not worried about the world they're gonna grow into, because, yes, I don't know what it'll look like, but then I could not have articulated, when I was 10 or even 15 myself, what my job would be. My parents certainly couldn't, you know. And the same is true of theirs. Maybe when you get to my father's generation, and I am not young, so once you get to the generation of the best part of a century ago and beyond, maybe the rate of change was slower, but it was still there. There were still new jobs, and more nuanced jobs, and more alien jobs coming every decade, certainly every decade of the whole of my life. New jobs appear all the time. And I'm not one of those people who thinks AI is coming here to take our jobs. I just think jobs are gonna change, you know.

Craig Stoss: 32:41

No, I’ll agree with that. I’m not a doomsayer at all. Uh I think that I think it’s just the impact on companies, right? You know, I make an argument it it it quite often that we still measure support largely the same way we did in my first support job in 2001. You know, it’s still CSAT, it’s still we didn’t have NPS back then, but even like you know, we still think about volume and channels and um response times and response times, all this stuff. And you know, and and I I mean I’ve done a whole talk series on on why I think that that’s wrong. But the point being that I think that leaders of tomorrow, quite literally of today, to be honest, need to start thinking about the stuff we talked about in this call. Like how do you incorporate this stuff into your workforce management plan? Because if you don’t, um, if you keep it separate, if you aren’t transparent, you’re gonna have nervous employees, you’re gonna have inefficient support, you’re gonna fall behind your competitors.

Charlotte Ward: 33:39

So, Craig, thank you so much. I knew this would be super interesting. You always bring such a level of thought and consideration to our conversations, and such expertise as well. So thank you so much for sharing them again today. It's a pleasure to talk to you after such a gap. But will you come back very soon?

Craig Stoss: 34:01

I'd love to. Have a wonderful day.

Charlotte Ward: 34:06

Thank you so much, Craig. Talk to you soon. That’s it for today. Go to customersupportleaders.com forward slash two eight six for the show notes, and I’ll see you next time.
