Charlotte Ward

281: Accuracy Beats Hype - Building Trustworthy AI With KCS; with David Kay

About this episode

What if the fastest path to trustworthy AI starts with a better knowledge base?

David Kay is Principal at DB Kay & Associates, a consultancy focused on knowledge management and self-service for support. He was recognized as an Innovator by the Consortium for Service Innovation, and has been KCS v6 Certified as a Knowledge-Centered Service (KCS) practitioner, coach, and trainer. He held leadership roles at an innovative knowledge management technology provider from early 1998 to the end of 2002, and has been granted five patents for his work in knowledge management technology. His current work leverages thirty-five years of experience in envisioning, developing, marketing, and rolling out technology to aid knowledge-intensive businesses. David is co-author of Collective Wisdom: Transforming Support with Knowledge, and instructor of Customer Service: Knowledge Management and Customer Service with AI and Machine Learning on LinkedIn Learning.

I sit down with David to test a bold claim: you don’t do AI to get a knowledge base—you build a knowledge base to get good AI. Together we unpack what customers and agents actually need from information: it must be findable, usable, and accurate. Generative tools already deliver speed and clarity, but accuracy is where the stakes rise, especially in complex, safety-critical domains where a confident wrong answer can do real harm.

We explain hallucinations in plain language and why persuasive, well-phrased but incorrect outputs are harder to spot than sloppy forum posts. Then we explore retrieval augmented generation, a practical way to ground answers in trusted sources and expose citations for verification. That’s where a governed knowledge base becomes essential. KCS offers a proven way to capture knowledge in the flow of work, use the customer’s words, and evolve articles through real-world reuse—exactly the structured, current content an AI pipeline needs to stay reliable.

The conversation turns tactical: how to start or reboot your KB, which KCS practices matter most, and why teams see faster AI wins when they invest in knowledge quality first. We also dig into the elusive aha moment in troubleshooting. AI can summarize long case histories, but spotting the hypothesis that changes the outcome still leans on human meaning making. The exciting frontier is assistive AI that recognizes patterns across cases, nudges better questions, and shortens time to insight without sacrificing judgment.

If you’re being pitched quick fixes that “solve knowledge with AI,” this episode offers a saner roadmap: build a resilient knowledge base, ground your models with RAG, and keep humans in the loop for accuracy and nuance. Subscribe, share with your support and product teams, and leave a review telling us: are you building your KB to make your AI better?

David Kay

Transcript

Charlotte Ward: 0:13

Hello and welcome to episode 281 of the Customer Support Leaders Podcast. I’m Charlotte Ward. Today, welcome David Kay to talk about knowledge-centered service. I’d like to welcome him to the podcast today for the first time, actually. David, nice to have you here talking to me about something which I’m very passionate about and super interested to dig into with you. But first, for our listeners, would you like to introduce yourself?

David Kay: 0:51

Absolutely. And thank you for having me, Charlotte. I can’t believe it’s been this long, but I’m glad we’re kicking off with a good topic. I’m David Kay, principal of DB Kay & Associates. We are a small consultancy that focuses on knowledge management for technical support, mostly high-complexity technical support. And we’re focused on a best practice called Knowledge-Centered Service, or more commonly just KCS, which I think many of your listeners are familiar with. We’ve been doing this for the last 22 years, and I’ve been involved in shaping KCS for the last 24. So this is a topic that’s very close to my heart.

Charlotte Ward: 1:33

That’s awesome. When we spoke pre-recording, maybe three or four weeks ago now, as we were feeling out what this topic was, you mentioned KCS. I let slip that I was one of the early adopters; I was in an organization that was a really early adopter of KCS, actually. I was lucky enough to get the early-stage training, and it was one of those things that just clicked, you know. I just thought, ah, that’s the thing. That’s it. That’s how you document stuff without it feeling like you’re documenting stuff. And so I’ve been a keen proponent and advocate for KCS for a very long time now, and obviously I’ve adopted it wherever I’ve had the opportunity, to varying degrees of both adherence, I guess you’d say, and certainly success. But anyone who’s listening to this who does not know what KCS is should certainly go and look it up. It comes from the Consortium for Service Innovation, right? The Consortium for Service Innovation, yes, that’s right. That’s where you can find it, and there’s a bunch of super helpful documentation and rubrics and quick getting-started guides on there. You can make a profession out of it, can’t you? As you have. We are living proof. It’s such a pleasure to have you talking with me today, David. We’re gonna talk about knowledge, though, but we’re not gonna dive into KCS, are we? Because the premise of this discussion today is: with AI, do we even need a KB anymore, right?

David Kay: 3:31

Right. And it’s an awfully clever title, and I think you actually came up with it, because I remember admiring it. So in any event, we were clever together somehow, so that’s great. Before I answer the question, I’d like to note that we’re recording this in fall of 2024, and things stick around on the internet, so I don’t want this to be something that is thrown back at me in 2028 or something like that. I’m gonna caveat that my answer in this conversation is very much of this time, because as we all know, generative AI is moving so quickly, changing so quickly, and improving by leaps and bounds, and there’s a possibility that will even continue to accelerate. So this is today’s answer, but I will give you my answer for today, which is: absolutely, we absolutely need the knowledge base today, even with AI. Maybe I can take a little bit of time to lay out why I believe that and what I think it means for practitioners of both AI and knowledge management in support. So what do people expect from information resources when they’re trying to solve a support issue, whether they’re delivering assisted support and helping customers, or doing self-service and trying to help themselves? They need information that is findable, usable, and accurate. And it turns out that AI is really fantastic at the first two. Generative AI is really good at gathering together information from a variety of sources and helping you find things in a way that feels, for many people, a lot more intuitive than doing search, even sophisticated machine-learning-based search, and kind of trying to grovel through those answers. And it’s usable because it can take information from a variety of sources, pull it together, and put it out in a conversational tone. Full marks for that.
It’s when we get to accuracy that we start running into challenges. I think anybody who’s in this space has heard about hallucinations, right? And many of our clients are either doing fairly enterprise-critical work for their customers or working on health- and safety-related issues. I was out at a customer site last week where they work on software to manage railroads, and they pointed out that if they tell the system to no longer consider a section of track closed where people are working, the results can be truly horrifying. So accuracy is essential.

Charlotte Ward: 6:44

So if it’s okay, David, I’d like to dive into that. I sort of think hallucinations are one of those things we all think we understand, and obviously we recognize they’re a barrier to accuracy. But can we talk about what hallucinations actually are in the context of AI? Because it’s not just bad sources, is it? The AIs are fed with sources that are accurate, fair to say, most of the time.

David Kay: 7:20

To a certain extent.

Charlotte Ward: 7:22

So I guess, what is the hallucination? Fair point, fair point. What is the source of it? Is it an inaccurate source, or is it the AI, and I’m gonna put this in little air quotes that our listeners won’t be able to see, “thinking” for itself and inferring stuff from sources that truly isn’t there? It feels like it’s that, actually.

David Kay: 8:00

It is largely that, when people talk about hallucinations, they’re talking about the AI sort of taking a wrong turn somewhere. And because all of these large language models are about predicting what words and tokens are gonna come next, once it jumps the tracks, it’s got no mechanism to get back on. It’s gonna predict the next word based on the strange things that it said before, and it doesn’t know that; it’s just predicting the next words. That’s how this works. Now, certainly inaccurate sources, and we’ll talk about this more, are a real big concern and can also result in inaccurate results. Just like everything else, it’s garbage in, garbage out. But hallucination is that jumping-the-tracks-and-not-being-able-to-get-back-on phenomenon. One of my favorite examples of this was in the US presidential election eight years ago. You can ask an AI, or at least historically you’ve been able to, to describe what happened when the candidate that didn’t win won. And it will tell you all kinds of things about a victory that never happened, because you started it down the wrong track. That was mean. You started it down the wrong track, but it doesn’t know or care, right?
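The next-word mechanics David describes can be made concrete with a toy sketch. Here a hand-built bigram table stands in for a real LLM; every word and probability in it is invented purely for illustration:

```python
import random

# A toy next-word predictor: a hand-built bigram table stands in for an LLM.
# Every word and probability here is invented purely for illustration.
BIGRAMS = {
    "candidate":  {"won": 0.5, "lost": 0.5},
    "won":        {"decisively": 1.0},
    "lost":       {"narrowly": 1.0},
    "decisively": {"and": 1.0},
    "narrowly":   {"and": 1.0},
    "and":        {"celebrated": 0.5, "conceded": 0.5},
}

def next_word(word, rng):
    """Draw the next word from the conditional distribution for `word`."""
    choices = BIGRAMS.get(word)
    if not choices:
        return None  # no known continuation: stop generating
    words = list(choices)
    return rng.choices(words, weights=[choices[w] for w in words], k=1)[0]

def generate(prompt, rng, max_words=10):
    """Autoregressive generation: each word is predicted from the previous
    one, including words the model itself just produced."""
    out = prompt.split()
    for _ in range(max_words):
        nxt = next_word(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

rng = random.Random(0)
# Seed the model with a false premise and it happily elaborates on it: it
# only predicts what plausibly comes next, with no notion of what is true.
print(generate("the candidate won", rng))
```

Started down the “won” track, the toy model can only ever continue the victory story; nothing in the mechanism checks the premise against reality, which is exactly the jumped-the-tracks behavior described above.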

Charlotte Ward: 9:36

Yeah, yeah. That makes sense. So you set the context, essentially, and it’s inferring everything from the context that you set. And the “you” in this case can be you, the user, or it can be a source.

David Kay: 9:50

Right. And it can even just be a weird word prediction. The way that these systems feel creative, generative, is that they’re not deterministic. In other words, if you ask it the same question twice, you’re not gonna get the same answer. And so sometimes, if on its own it picks perhaps a lower-likelihood next set of words, then it’s off to the races in a bad way. You didn’t set a bad context in that case. It didn’t do anything wrong; it just, you know, left consensus reality and is continuing to build from there.
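That non-determinism typically comes from temperature sampling over the model’s token scores. A minimal sketch; the candidate tokens and their scores below are made up (the railroad-flavored words just echo the earlier example and come from no real system):

```python
import math
import random

# Temperature sampling: why asking the same question twice can give
# different answers. These token scores are invented for illustration.
LOGITS = {"closed": 4.0, "open": 1.0, "electrified": 0.5}

def softmax(logits, temperature):
    """Turn raw scores into probabilities; temperature < 1 sharpens the
    distribution toward the top token, temperature > 1 flattens it."""
    m = max(logits.values())
    exps = {t: math.exp((s - m) / temperature) for t, s in logits.items()}
    total = sum(exps.values())
    return {t: e / total for t, e in exps.items()}

def sample(logits, temperature, rng):
    """Draw one token at random according to the tempered distribution."""
    probs = softmax(logits, temperature)
    toks = list(probs)
    return rng.choices(toks, weights=[probs[t] for t in toks], k=1)[0], probs

rng = random.Random(7)
_, cold = sample(LOGITS, temperature=0.2, rng=rng)
_, hot = sample(LOGITS, temperature=2.0, rng=rng)
# Low temperature: almost always the top token. High temperature: unlikely
# tokens get real odds, and one such draw can derail everything after it.
print(f"p(closed) at T=0.2: {cold['closed']:.3f}")
print(f"p(closed) at T=2.0: {hot['closed']:.3f}")
```

The point of the sketch: even with a good context, a single low-probability draw is enough to start the off-to-the-races behavior, because every later prediction conditions on it.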

Charlotte Ward: 10:36

That makes sense. That makes sense. So we’ve covered findable and usable, and these models are great for those, less so on accuracy. Controversial question. For most customer support teams, and I mean the middle 80% in the bell curve of support offerings and organizations and teams that might be out there: does it really matter if it gets it wrong occasionally?

David Kay: 11:18

Well, you know, judgment is always required. Whether a knowledge base article is wrong or an answer coming from the system is wrong or hallucinated, we hope that humans are not gonna just repeat what it says, or that a self-service user is not just gonna act on it without thinking, right? So, you know, perfection is not an option anywhere. The thing that we’ve seen about the generated answers is that they’re so darn compelling, right? They make so much sense. Sometimes when you see a blog post or a knowledge base article or something on a forum and it’s wrong, it’s often pretty clear that this person is not really engaging well with the topic or the subject material, so you can kind of say, all right, I’m not gonna weight that one so much. Whereas the generated content tends to be very sensible-looking.

Charlotte Ward: 12:22

Yeah, and confident, right?

David Kay: 12:24

Right. Absolutely. These things are nothing if not confident.

Charlotte Ward: 12:29

Yeah, yeah. I think that makes them easier to spot sometimes, I’ll be honest. And, you know, I’m British; I don’t trust overconfidence in most contexts. It feels very alien to me. So I feel like there’s a level of confidence which gets my kind of AI spidey sense going as well. I can’t decide if I trust it more or less just because of the confidence. Not always, but sometimes I can spot an AI response. So you trust, to a degree, that the AI is drawing from sources, but at the same time, it’s probably drawing from the same sources as a human, except a human is adding their own experience into that, and that can be a positive or negative modifier on the ultimate outcome, right?

David Kay: 13:26

Well, this drawing from the same sources as a human is really the antidote that the industry is coming up with for hallucinations. So there is a technique called RAG, retrieval augmented generation, where basically, rather than just asking the model the question, you do an upfront search. There’s a lot of variation in the details, but the big idea is that you do an upfront search of a definitive repository, which isn’t itself going to be 100% accurate, but we have high confidence in it. And then, based on the answers that come back, the AI will generate an answer. You’re seeing this now in Google, or at least I’m noticing it in my Google experience quite frequently: it’s creating a generated response based on the initial articles that come back in the search. So that is kind of a best-of-both-worlds situation, because rather than it just confidently telling you something that you can either believe or not, it will draw from sources that are trusted. And it will tell you, or you can ask it, here are the sources I got this information from; you can go have a look if you’re not sure, suss out their reliability yourself, and see if it makes sense. So that’s a best-of-both-worlds approach that is increasingly popular, and I think it makes perfect sense for a support context. But it requires that definitive source. And that’s really where I come back to my answer to the question: yes, we still need the knowledge base, or something that looks an awful lot like the knowledge base. Even if we’re delivering answers through AI, we need a process of governance and continual updating and capturing accurate information.
As a matter of fact, as you all know, many companies are throwing a lot of stuff at the wall with AI and seeing what will stick. And a number of our clients have reported that AI projects built on the knowledge base they’ve been building, in our clients’ case using KCS, are getting out of the blocks faster than their other AI projects, just because they’ve got better stuff to feed them with.
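The RAG flow described here (an upfront search of a trusted repository, then a grounded answer that cites its sources) can be sketched in miniature. The knowledge base articles, the naive word-overlap retrieval, and the prompt template below are all invented for illustration; a production pipeline would use embedding search over a real KB and pass the grounded prompt to an LLM:

```python
import re

# A minimal retrieval augmented generation (RAG) sketch. The articles,
# the word-overlap scoring, and the prompt template are all invented for
# illustration; real systems use vector search and an actual LLM call.
KNOWLEDGE_BASE = [
    {"id": "KB-101", "title": "Reset a subscriber's ONT",
     "text": "Power cycle the optical network terminal and wait 90 seconds."},
    {"id": "KB-204", "title": "Transfer service to a new subscriber",
     "text": "Close the old account first, then provision the new subscriber on the same port."},
]

def tokens(s):
    """Lowercase word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", s.lower()))

def retrieve(query, kb, top_k=1):
    """Rank articles by word overlap with the query (a crude stand-in for
    embedding similarity) and keep the best top_k."""
    q = tokens(query)
    ranked = sorted(kb,
                    key=lambda a: len(q & tokens(a["title"] + " " + a["text"])),
                    reverse=True)
    return ranked[:top_k]

def build_prompt(query, articles):
    """Ground the model: answer only from retrieved articles, citing IDs."""
    context = "\n".join(f"[{a['id']}] {a['title']}: {a['text']}" for a in articles)
    return ("Answer using only the sources below, and cite their IDs.\n"
            f"Sources:\n{context}\n\nQuestion: {query}")

query = "How do I transfer service to a new subscriber?"
hits = retrieve(query, KNOWLEDGE_BASE)
print(build_prompt(query, hits))  # this grounded prompt is what goes to the LLM
print("Citations:", [a["id"] for a in hits])
```

The citation IDs surfaced alongside the answer are what let a human verify the response against the governed KB, which is the “best of both worlds” point being made here.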

Charlotte Ward: 16:18

Yeah, this is something I wanted to explore with you. So therefore, you know, yes: you said at the top, and you’ve reaffirmed there, that we need a KB. KCS-generated and KCS-maintained KBs then sound like the perfect source, or as perfect as a source can be. Is that a true statement?

David Kay: 16:46

Perhaps self-servingly, I couldn’t agree with you more. I mean, KCS, as you well know, is built in part on two big ideas. One is: who’s the right person to capture knowledge and create knowledge base articles? Well, the person who heard the question from the customer, understood what was being asked, and then had to tell the customer things to make the customer happy, right? If you had that experience, you’re exactly the right person to write it down for the next person. And when do you do it? Well, you probably wouldn’t want to wait till next week, because then you’ll have forgotten the details. So in KCS, the right time to capture things is while you’re solving the problem, and the same goes for updating them. I had this experience today. We have a little public benefit corporation in my rural neighborhood that delivers fiber internet to our neighbors, which is really cool. I’m on the volunteer board, and today I had to switch service over from one subscriber to another; the house got sold. And because I am who I am, I had, of course, written a knowledge base article the first time I did this. I went back to reference it, which was good; it’s a slightly complicated process, and we only do it every two or three months. It’s about a 10-step process, and the article I’d created before had about eight and a half of those steps. So as I was doing things, it was pretty clear what the gaps were, and it was just a no-brainer to go in and update the article with the things I’d forgotten to put in the first time. So the next time I use it, or probably more importantly, the next time somebody who hasn’t done it before uses it, they have a more complete answer. Didn’t take any time.
I mean, certainly not compared to actually executing the process. So those are the big ideas KCS has. And I think those qualities, having articles that are about one thing, are in the voice of the customer and what they needed to hear, and are constantly updated and refreshed, make for a really great source for all kinds of things, but especially for AI and retrieval augmented generation.

Charlotte Ward: 19:22

That makes sense. That makes sense. So I think we’ve established quite solidly that any support team out there needs to go and implement KCS instantly, this very minute, because this puts them in the best stead possible for then applying AI of any sort to knowledge retrieval or customer self-serve or whatever the output of that LLM, that model, is. The quality of it is dependent on the source, and we both know that KCS provides the best sources, really. So I guess, having already established that organizations should just go away and implement KCS this very second, what I’d like to explore finally is: what does the journey look like for an organization that’s really just getting started with their knowledge base? And let’s face it, they’ve probably got AI vendors knocking at their door saying, oh, we can solve all of your knowledge problems by applying AI to this. What does the ideal journey look like? How does it unfold?

David Kay: 20:50

Well, I think it starts with the realization that you just expressed and that we’ve been talking about, which is that you don’t do AI to get a knowledge base; you do a knowledge base to get good AI. Because there’s a lot of focus on, well, can’t we just throw our case notes into some big model and it’ll write our knowledge base for us? And, you know, first: have you actually looked at your case notes? In many cases they’re either walls of words or they say “I fixed it.” But I think more importantly, the thing that, as of right now, humans are uniquely good at is meaning making. It could be that you had a case or a ticket or an incident, whatever language you use, that has gone on for three months, and you’ve gone down so many blind alleys and dead ends. That’s just the way troubleshooting goes; nobody did anything wrong. If you ask AI to summarize that journey, it will likely do a very good job of it. But if you ask it what the important part of that journey was, or what the aha was, as of today, with as much experimentation as I’ve been able to do (I’m not a trained prompt engineer, but I’ve certainly been going to school as much as I can), I haven’t been really successful in getting it to find those answers. So I would go back to the AI vendor knocking at the door and say, you know, I want to set your project up for success. So we’re getting started with the knowledge base; we’ll continue and improve that, and that’s gonna be the way we make this project that we’re undertaking as partners the most successful. So maybe just back off for a couple of months while we’re getting our ducks in a row, and then we’ll move. You know: go slow to go fast.

Charlotte Ward: 23:01

I think that’s a really interesting insight. The AI is not very good at, and possibly won’t ever be able to, spot the aha moment where your support person or your engineer realized, ah, that’s the question I need to ask, because I’ve got a working hypothesis now. The working hypothesis so often happens off the page, and you have to be a very diligent support engineer to write down your working hypotheses at every single opportunity, consistently enough that an AI can begin to feed off them, I suppose. And I wonder if there isn’t an opportunity for a vendor out there somewhere to take knowledge as it’s being created and case notes as they’re being created, begin to tie the two together, and find ways for the LLM, for the AI, to say, ah, this is beginning to follow the pattern of something else that happened over here, and maybe I can prompt the aha moment for you a little bit sooner. If somebody cracks that for those very long-running cases, I’ll be there for it.

David Kay: 24:20

Yeah, I agree. And that sounds much more plausible than a lot of what I’ve heard people talk about in the industry. And if you tell a support engineer or another support professional that you could help them do this, they’ll be right next to us in being advocates for it, because that’s where we get the dopamine release. That moment, right? Support folks, in my experience, like to help people and they like to solve problems, and when you hit that aha moment, you’re doing both of those things. So any way that AI can help you get there, help you ask better questions, help surface hypotheses that you might not have been thinking of, that’s very exciting. I hadn’t really thought about that; I’m really glad you put that bug in my ear.

Charlotte Ward: 25:11

I’m hoping that some vendor out there is listening, and I want to see this when we do listen back to this in 2028; that’s what I want us to talk about. Consider that idea open source, dear tech community; I’m not a founder, so I’m not gonna pick it up. But I’ll be there for the beta program on that one, and we’ll have another conversation. I’m sure we’ll have many more conversations, David, but we’ll definitely bookmark fall 2028, or as I would say, autumn 2028, to come back and have a conversation about whether that happened or not, because I think that’s really exciting, actually. Well, thank you so much for joining me today. It’s been a pleasure to explore this with you. I would love to have you back to actually mine your KCS knowledge in depth. Would you be willing to come back and talk specifically about KCS? We can chew the cud there a little bit, I’m sure.

David Kay: 26:12

You know, there are few things in life that I like more than talking with smart people about KCS. So, yes, anytime. Just call me.

Charlotte Ward: 26:23

I definitely will. Thank you so much for coming today. It’s been a pleasure to have you. That’s it for today. Go to customersupportleaders.com forward slash two eight one for the show notes, and I’ll see you next time.

A little disclaimer about the podcast, blog interviews, and articles on this site: the views, thoughts, and opinions expressed in the text and podcast belong solely to the author or interviewee, and not necessarily to any employer, organization, committee or other group or individual.
© 2026 Customer Support Leaders