Not just a chatbot: Build virtual agents that are actually helpful with gen AI


Virtual agents have become a part of our daily lives, for better or worse. But I have to wonder: how accurate and helpful are they, actually? What's the difference between a chatbot, a virtual agent, and a human? Because, good grief, I've heard them all. So today I'm going to be joined by Susan Emerson from Salesforce and Nick Renotte from IBM. We're going to unpack the burning question: how do you serve the different people, those who want an agent and those who want the human? Because I think most of us want them both at different times.

So Susan and Nick, welcome to AI in action. Well, thanks so much. Thanks for having us. For sure. Let's get on into this conversation. But, Susan, let's start with you first. You have such a storied career.

It seems like you've been everywhere and worked with companies that are small and worked with them until they've gotten large and beyond. But how did you end up in your current role at Salesforce? Well, I would say probably a long history of saying yes to young companies that are at the intersection of some new innovation and technology, which kind of always puts you on unsteady ground because you're figuring things out. And that always leads to being in a conversation about innovation.

And that's how I got to Salesforce, the company I was working for was acquired. Excellent. And now, Nick, I know that you know a lot of people in this space as well, because you've been, did you start coding when you were like eight years old or something? Yeah, yeah, I did actually. So, funnily enough, I started coding inside of Excel. So I was doing a lot of VBA coding.

My dad was doing like share trading and options pricing. So, I was like, hey, I can actually automate some of this stuff. It started off really basic. It was just like, record a macro.

And then jerry-rig something to get it to do something else. But funnily enough, after that there was this huge period, right, where I barely touched any tech. I'm like, I just really want to become a business owner, and I really want to get into business and everything's about business. So I went and studied business, not realizing that really, in order to build amazing businesses, you have to build amazing products. Which tech really helps with. So that kind of switched around, and I actually built an ed tech startup.

and I currently lead, like, 500 AI engineers around the world. So they sort of take my lead in terms of what we're doing. And YouTube's definitely played a huge part in that as well. Wow. I love that. Again, we've got such a breadth of experience just represented right here on this specific episode.

So I want to take full advantage of that during this conversation. Nick, I want to start off with you. I've been thinking a lot about chatbots and virtual agents.

And I guess the thing is that if we really consider it, they both have been around a lot longer than people assume. Can you please briefly take us through exactly how we got to where we are with customer support? For a long time, a big part of interacting with an organization has been making sure that you have good customer support.

We started off really simply: you would maybe go into a bank, you'd speak to a customer support officer or a teller, and they'd be able to help you out. That eventually shifted to being able to perform things like phone banking. And then we had the rise of the internet, so we had chatbots coming into play. So that sort of shifted how we've been able to interact with different businesses. I'm using a bank as an example, but this has been across a range of different organizations as well.

Those chatbots in the past have been reasonably simple. So you might know them as rules or heuristic based chatbots. You type something in, maybe it picks up a keyword or two, and then it's able to go, hey, were you looking for, I don't know, support on tracking your particular product, or did you want to go and transfer some funds? We've shifted significantly away from that to the ability to use generative AI to help us speed things up quite a bit, but again, they've been around for quite some time.
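As an illustration of the rules or heuristic style Nick describes, here's a minimal sketch in Python; the keywords and canned replies are hypothetical examples, and real rule-based assistants layer far more on top of this.

```python
# Toy rules/heuristic chatbot: match keywords in the message, return a canned reply.
# Intents and responses here are made-up examples, not any product's actual rules.
RULES = {
    ("track", "order", "delivery"): "You can track your order under Orders > Track.",
    ("transfer", "funds", "payment"): "To transfer funds, open the Payments tab.",
    ("refund", "return"): "Our refund policy allows returns within 30 days.",
}

def rule_based_reply(message: str) -> str:
    words = set(message.lower().split())
    for keywords, reply in RULES.items():
        if words & set(keywords):  # any keyword hit triggers the rule
            return reply
    return "Sorry, I didn't catch that. Would you like to talk to a person?"

print(rule_based_reply("How do I track my delivery?"))
```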

So I've been at IBM for, I want to say, like five years now. And since I started at IBM we've had virtual agents through Watson Assistant. That being said, they have evolved so rapidly to the point that they're at now. Okay. You talk about that evolution. I remember some of those early experiences where I just felt like, just put me on the phone with the person, this is not what I'm trying to do.

Susan, can you tell me a little bit about your experience in terms of what you've seen happen with customer support over time? Well, one of the things, kind of just listening to Nick talking about the evolution, and I kind of think this is really completely not relevant to anything, but I remember a time when there were party lines on your phone and you would pick it up and you'd have to yell at the neighbor to get off. But anyway, so now thinking about gen AI answering the questions versus, like, yelling at Martha Becker, it's a whole new world. So the thing I would say around why we're seeing such crazy transition and transaction with these things, it's a number of things. Generative AI, obviously, for those of us in tech, is the biggest transformation since the internet. And all of the companies that make investments in supporting customers are pressured by the executive staff and the board members: how are you bringing gen AI into the foreground? And it's one of these areas where you get that pressure.

You get the fact that these call centers are literally the eyes and the ears, or maybe the ears and the eyes, depending on which order you want to go. And they represent everything from the question answering, the brand, the experience, the relationship, and being able to serve people on the channel that they choose. You talk about when you want a human and when you want the digital interaction. It really unlocks that full potential, and what the technology has done is brought it forward in a way where the way you develop it is so much easier, because you don't have to do that tree-based stuff that Nick was talking about, where you would previously have to train and look for every kind of intent and keyword and have this stuff rigorously mapped down a decision tree.

Now it can be generative, interactive, multi-turn, but with all the guardrails that you need. So for the things where you don't want the human, you can get the quick answer. And for the things where you do want the human, you get a quick connect. I think the quick connect is very key. I keep thinking, all right, what's more important? Is it the accuracy that I want?

Is it the speed that I want? Is it the personalization? And I think the answer is yes. Like Nick, you talked about the adaptive element of this.

So I kind of want to get under the hood a little bit with you, Nick, as we're talking about the rule-based chatbots moving into gen AI. It's such a radical shift. Nick, how do you even start thinking about making that shift? Let me give you an example of how this actually comes into play and where gen AI is really suited for handling these, because, I mean, you mentioned something, right? There's certain situations where you do want a human, and there's other situations where getting an answer from a virtual agent, or just finding a response and getting a valid response, is enough.

Because let's say, for example, you wanted to check your bank account statement; it's more than enough for you to get that via a virtual agent or just checking an app. You probably don't really want to be speaking to a human, because these are really quick interactions where you just want an answer really fast. So let me switch it up a little bit. Let's take Bob, right?

So Bob owns a florist. Every single day he gets a ton of requests from people that want to go ahead and buy flowers. So it's like, okay, all right. Cool.

This is perfect. I want a chatbot here. This chatbot's going to help people just purchase flowers. They're going to be able to choose where they get them sent to.

It's just a form; we're able to fill that out. Now the problem is, though, like 80% of the interactions that Bob gets via his chatbot are to buy flowers. So it handles that 80% absolutely perfectly. The problem is that there is like 20% of other questions. Where are the flowers sourced? What's your refund policy? Do you have hypoallergenic flowers? I don't know if that's actually a thing, but let's say it is. These are what we typically refer to as long tail questions.

Right. So questions that maybe don't come up so often, but you still want to be able to handle. Now, using rules or heuristic based chatbots in the past has been perfect for handling that 80%.

But when it comes to the long tail questions, these are where you can very quickly start to spend an absolute ton of time to get very minimal benefit. In the past, we've been able to do things like semantic search, and pull out an extract and then just dump that extract back to the user. But they kind of know that, hey, you're just dumping your response to me. It's not really all that personalized. I probably could have just searched for that via a search engine and probably got the response back. Gen AI is amazing for this, right? Because we can take that extract that we traditionally would have just dumped to the customer, and we can actually pass it to an LLM. That LLM can personalize it based on the customer's interaction, because we have context, and we can get a response. So when somebody goes, hey, do you have hypoallergenic flowers? The virtual agent's like, look, to be completely honest, I don't actually think hypoallergenic flowers exist, but you might try these as an alternative. So when it comes to handling that 20%, where we spend a ton of time and where we don't typically get first call resolution, this is perfectly suited for generative AI.
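A rough sketch of that retrieve-then-personalize flow in Python; the knowledge base search and the LLM call are hypothetical stand-ins for whatever retrieval index and model endpoint you actually use, not any particular vendor's API.

```python
# Long-tail flow: retrieve a relevant extract, then let an LLM rephrase it
# using the conversation context instead of dumping the raw extract.

def search_knowledge_base(question: str) -> str:
    # Placeholder semantic search; in practice this would query a vector index.
    return "We don't stock hypoallergenic flowers; low-pollen options include tulips and roses."

def call_llm(prompt: str) -> str:
    # Placeholder for a hosted chat/completion endpoint.
    return f"[LLM answer generated from a prompt of {len(prompt)} characters]"

def answer_long_tail(question: str, history: list[str]) -> str:
    extract = search_knowledge_base(question)
    prompt = (
        "You are a florist's support agent. Answer using only the extract below.\n"
        f"Conversation so far: {' | '.join(history)}\n"
        f"Extract: {extract}\n"
        f"Customer question: {question}"
    )
    return call_llm(prompt)

print(answer_long_tail("Do you have hypoallergenic flowers?", ["Hi, I'd like to order a bouquet."]))
```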

Okay, Susan, I heard you throw in some mmhms and yeahs during that. What do you got? I was thinking about the hypoallergenic flowers, and I was thinking about the intersection of, like, what do you want fast, and what do you need accurate? Because it's probably different for different industries. Flowers are a great example, but maybe extracting ourselves from that, there's this idea that 80% of the questions are always sort of in the same domain.

I would argue a little bit with whether that heuristic approach has been fine for that, because there's a ton of setup in taking all those utterances and conversations and training them for intent. You can go a whole lot further with these generative experiences because they leverage the power of the pre-trained models. But then I would also say, in a lot of other industries that might be regulated, and you started off with an example of banking, it has to be right, and it has to be grounded, not just in a knowledge repository that is validated and known, but in legal, risk, compliance and things like that. Because the impact could be something as simple as getting a balance wrong. Some of these things have to be right.

And so in some examples, you know, right is going to be much more appreciated than fast. But then there's the fast to deliver and fast to support and fast to innovate in the back of house and everything like that. So, you know, with a traditional call center, a lot of these conversations start with the two words that most people orient to, like call deflection, which I guess is a technical strategy as a thing.

But as a consumer of any product or service, it sounds kind of like, oh, I hope I'm not really being deflected. I'm a customer, right? Like that kind of thing. But there's this whole idea that call deflection isn't always the goal, because there are some categories of experience, of being a customer, where the human in the loop is the priority, not the speedy, fast chatbot answer. And doing the balance inquiry, and did you send me my tax form, and doing that all automated is great. But, you know, I'm going with your banking example, Nick: you have a bad fluctuation in the stock market.

Like you want that human on the phone right then and there. And so there's this whole mindset of, not one channel is better than the other, but have a point of view on what type of interactions you want to actually optimize your relationship with the customer on, whether it's fast and speedy and automatic or human and personalized, and have it all be part of the same platform, so you're really always knowing the customer. It's funny, Susan, as you're talking about the banking, and Nick, because you brought this banking example up, this feels very personal right now. So I'm going to have a little confession right here. Maybe some of the listeners will vibe with me on this one.

But every now and then, like perhaps I will forget to pay a credit card bill on time. And because I feel like I'm special, I always want the bank to drop that fee for me. I always want the credit card companies to drop that fee. And so that involves me often, you know, calling and trying to get that done. And sometimes the virtual agent doesn't care that I feel like I'm special.

And sometimes I need the human in order to work with me through that. So yes. Yes and yes, to all of this, Nick, I know that you were about to share some more with us about that distinction between the virtual.... Nah, I'm with you, I’m with you on the... you need a little give and take.

You don't always get that with the virtual agent or chatbot, right? But, yeah, to echo Susan's point, that is absolutely bang on. Right? I think it's important that we're creating an experience for our customer. It's not just a solution. So from a technical perspective, for the people who have to build and curate these things, it's a whole different world. And then sort of maybe, you know, adding on some of the more recent innovations.

I know some of the stuff that we've been working on is like, you know, those SOPs, Nick, right? In terms of: these are the questions, these are the instructions that, if we had a human, we'd take you through step one, two, three, four. And these are the things we would permit you to do. And this is when you escalate to the supervisor or to the human or things like that.

So having that kind of inbuilt infrastructure is one of, I think, the big things that help customers make this transition, because, you know, obviously many organizations still wrestle with what their point of view is around trust, fidelity, false certainty. You know, if this is customer facing, you want to have those types of checks, balances, controls and guardrails in place. Yeah, definitely. And one of the things that I've noticed, right, is that as we've gone and deployed generative AI solutions, let's say we generate an answer for a particular question.

What we're actually doing now is we're taking those responses and we're caching them. So that means that it's not as costly. You don't need to send all your data to an LLM again, because if you know that you've got a standard question with a standard response, you can actually go and save that off and turn that into a rule. Plus you can apply that additional overlay with your own corporate feel. So if you have a specific way that you want to go and respond to it, you've already got a baseline. You just need to append or tweak the response that you've got.
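A minimal sketch of that response-caching idea; the normalisation and the LLM generator here are simplified placeholders, and a production system would also review and version cached answers before promoting them into rules.

```python
# Cache vetted answers for repeat questions so standard queries skip the LLM.
answer_cache: dict[str, str] = {}

def normalise(question: str) -> str:
    return " ".join(question.lower().split())

def cached_answer(question: str, generate_with_llm) -> str:
    key = normalise(question)
    if key in answer_cache:                  # standard question, standard response
        return answer_cache[key]
    answer = generate_with_llm(question)     # only pay for the LLM on a cache miss
    answer_cache[key] = answer               # promote the answer into a reusable rule
    return answer

# First call generates and caches; the second call hits the cache, so the
# generator is never invoked.
print(cached_answer("What is your refund policy?", lambda q: "Returns are accepted within 30 days."))
print(cached_answer("WHAT IS YOUR REFUND POLICY?", lambda q: "this generator is never called"))
```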

As you talk about this tree of responses, I'm thinking about the rules space. But now, of course, compared to this virtual agent space that we're in, are you finding that people are trying to skip to the human portion less now? Because I can think about many times when folks would just try to always press zero. I think it really depends on the industry. Right? So if you're like a high tech startup and you've gone and done this stuff before, then it could be weeks.

But if you're a bank or a government institution, then it's typically months, because the process to govern, monitor and test is significantly more stringent. In terms of people handing off, or trying to get through to a human a lot faster, I think it's important to note that you're still going to want to go to a human for a lot of interactions, like you mentioned before, right? Like, you know that you can pitch a human on why you should get your bank or credit card late fee remitted. Right? You're going to have a hard time doing that with a gen AI solution. I don't know how much give and take you're going to get when it comes to bargaining with that. I mean, you can give it a crack. It'll be very Australian of you to do so.

But in terms of actually seeing people completely deviate away from a virtual agent all the way back to a human, I think the attitude is still kind of similar. Right? We've just added an additional layer of support to a virtual agent. That doesn't necessarily mean that you're going to want to deviate away from everything. That being said, I personally find myself using virtual agents a lot more, because I'm like, I don't really want to speak to a human for this.

I just want an answer a lot of the time. Where it's edge cases, then I probably want to go and speak to someone, because I'm probably going to hit some guardrail, because they probably aren't allowed to talk about that anyway. If it could answer it self-serve, first pass, great. If you're asking for the human, that's a whole different ballgame. Yep. And you know what? Regardless of what the situation is that we're dealing with, we know that data is still at the foundation of it.

And the virtual agent is only as good as the data that is being placed inside of it. So how much can you program a virtual agent for accuracy with what you have? I mean, definitely the moniker of, data is the foundation of AI and AI needs data, is super true. A lot of organizations, you know, Nick was talking earlier about standard operating procedures.

Most organizations have this stuff written down and they have a knowledge repository. And so many organizations are much further along in being able to use these things, especially if this gen AI chatbot experience can tap into that official knowledge repository, because then you get this power of conversational turns and that great user experience, plus validated answers that are incorporated with the context setting of a large language model answering a question, but with all the valid responses that are in the validated, risk-approved, marketing-approved knowledge repository. Yeah, I think data is absolutely critical and has been from day dot.

So if we look at how we've got to where we are now, big data was a really big thing back in the day. Then it shifted over to data science and everyone wanted to do data science. Then it was machine learning, and now it's generative AI.

All of those principles, or all of those spheres, have been focused on data, right? Generative AI is no different; it's a derivative. The underlying models which power it, large language models, were originally trained as deep learning models.

They're based on a transformer architecture, which you need a ton of data to actually go and train. The thing is, though, accuracy is always going to be a fluid metric and a fluid dynamic, because it's only one factor, right? So in terms of how we go and evaluate whether a generative AI solution is performing, it's a whole range of things: how factual is it? How valid is the response? Is the response generating an answer from the context that we've given it? I think what's more important, rather than just focusing on accuracy, and what we've been preaching within our client engineering team, is how do we monitor, test and govern these LLMs? Because you can go so far as to train the model to actually ensure that it's going to generate a correct response. But how do you know that the customer's going to ask a question which that underlying model has seen before or knows the answer to? So the governance framework around these solutions is probably just as critical.

Right. And what we've actually started using, or what I saw one of my engineers present to me the other day, is this Swiss cheese framework, right? I love cheese, and he recommended Swiss cheese to me. I'm like, perfect. Cool. Let's go down this route. So if you think about a piece of Swiss cheese, right, it's got a whole bunch of holes through it.

Now imagine you're sending a question or a prompt through a slice of Swiss cheese. Some of this stuff is going to get through, right? Some of the bad stuff is going to get through. Now imagine you place another layer of Swiss cheese.

Maybe it doesn't get through that layer. Then you're adding another layer of Swiss cheese. Maybe we finally catch it at that final layer.

Each one of those layers of Swiss cheese is a different component in how we actually go and govern our virtual agents or large language models. We can perform input filtering, which would be that first Swiss cheese layer. We can then go and perform prompt engineering and governance at the model layer. But we can also apply guardrails at the final layer, to ensure that as we're going through, we're only taking in stuff that we want to be taking in, and we're only outputting stuff that we want to be outputting.

On top of that, we can apply monitoring. So if we start to see user feedback that, hey, maybe we're not answering as well as we potentially could, we can actually give that feedback back to our engineering team to make sure that we're potentially going and fine-tuning an LLM. We're adding that data to the corpus so we can actually generate that response.
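A toy sketch of those stacked layers; each check here is a hypothetical placeholder, but it shows the shape of the flow: whatever slips past one slice of the cheese can still be caught by the next.

```python
# "Swiss cheese" governance: input filtering, a governed model call, output guardrails.
def input_filter(prompt: str) -> str | None:
    banned = {"ssn", "password"}                       # layer 1: block sensitive requests
    return None if any(word in prompt.lower() for word in banned) else prompt

def governed_model(prompt: str) -> str:
    # Layer 2: placeholder for the prompt-engineered, grounded model call.
    return f"[answer grounded in approved knowledge for: {prompt}]"

def output_filter(answer: str) -> str:
    # Layer 3: placeholder policy/toxicity check on the generated answer.
    return answer if "forbidden" not in answer.lower() else "I can't help with that."

def governed_reply(prompt: str) -> str:
    cleaned = input_filter(prompt)
    if cleaned is None:
        return "Sorry, I can't process that request."
    return output_filter(governed_model(cleaned))

print(governed_reply("What's your refund policy?"))
```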

So we think about accuracy a lot, but I think a lot of people maybe don't think about, hey, if we're getting the wrong response, how do we fix it? Because ultimately it comes back to bang for buck. If gen AI is not performing well, then you're spending a lot of money for something that doesn't work. I can think of all sorts of things that could define the Swiss cheese. It could be everything from your authorized use of generative AI.

What is the framework of what you're comfortable with, and the types of use cases you put in front of your employees and your customers? And then it would be things like, what is the data that you have to ground this stuff, and what is the data safety that you have to maintain through this stuff, in terms of ensuring that your data isn't inadvertently training someone else's LLM and it's not stored in the wrong place, and that you have a full suite of capabilities to say what was asked, what was answered, what was the toxicity score, and did the human use it or change it. Because you need to have all this governance, and then you also have to have things like, as Nick said, prompt engineering, because that gives the full set of instructions to the LLM, and you want to ground it with the customer data. So when the thing goes off through the Swiss cheese filter, it's being given instructions that are not naive and are relevant enough. So there's a whole series of really nice things in technology like Salesforce that give us those frameworks for interacting. Let's say, for example, you've got a virtual agent, to your point, right.

Like you've got a banking virtual agent. Now, what is a banking virtual agent? Within there, there might be a specialized agent which is really great at trading interactions. So like, hey, this is our banking agent, which actually is focused on stock trading, but we've also got one that's focused on transactional interactions. We've got another one that's focused on debt recovery or payment plans.

So having multi-agents, as I understand it, in my context or in my world, is actually having specialized agents which are focused on different types of skills. Yeah. So in terms of how we roll that out, typically you can have switching or task handoff. So let's say, for example, you actually go and ground an agent in specific SOPs. When your initial agent detects that, hey, this intent is probably focused on that, we can hand off to that agent, generate that response, and then come in and present a unified response, or a unified answer, back to the customer.
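A simple sketch of that intent-based switching; the intent detector and the specialist agents are hypothetical stand-ins, and a real system would usually use a model-based classifier rather than keyword checks.

```python
# Multi-agent handoff: a front-line agent classifies the intent, then hands the
# request to a specialist agent grounded in that domain's SOPs.
SPECIALISTS = {
    "trading": lambda q: f"[trading agent, grounded in trading SOPs]: {q}",
    "transactions": lambda q: f"[transactions agent]: {q}",
    "collections": lambda q: f"[debt recovery / payment plan agent]: {q}",
}

def detect_intent(question: str) -> str:
    q = question.lower()
    if "stock" in q or "trade" in q:
        return "trading"
    if "payment plan" in q or "debt" in q:
        return "collections"
    return "transactions"

def banking_agent(question: str) -> str:
    intent = detect_intent(question)        # front-line agent classifies the request
    return SPECIALISTS[intent](question)    # hand off, then return one unified answer

print(banking_agent("Can I set up a payment plan for my loan?"))
```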

Coming back to how this Swiss cheese framework fits around this, right, it wraps around the whole thing. If you think about data as being, actually, I've got this great analogy, right? So, like, I'm a big foodie. It's not going to be more cheese, is it? No, it's not cheese. It's not cheese. Right. I'm a massive foodie in case you haven't noticed. Right.

So I went to Japan last year, and I was actually presenting this to my team there. And I had a good laugh, but have any of you had okonomiyaki before? No. No, okay, all right. So okonomiyaki is like a Japanese cabbage pancake, so it's like a savory pancake. Now, the basis of this pancake is cabbage, so there's like a ton of cabbage in there. And if you think about cabbage as being your data, right, it forms the foundation of that specific dish.

You can also add in other different toppings which sort of like spice it up and make it a little bit more interesting. And like let's say, for example, we go and chuck bacon on top of our okonomiyaki. The way that I think about that is the bacon is our large language model. It's what adds the spice to actually generating our responses.

Because if we gave our customer raw data, they'd kind of be okay, but it wouldn't be as nice and bright and shining as it potentially could be. Governance, or the Swiss cheese framework, is the batter that ties it all together. So we wrap up the cabbage, and then we put the bacon on top, and then the batter ties it all together. In terms of how the Swiss cheese framework works around that, right, it wraps around the entire multi-agent system. We have handoff in between, but we still apply these guardrails at each step. You have input guardrails which would wrap around all of your input.

You'd have model guardrails which would apply at each stage, or each different agent that you'd have. And then you'd also have your final Swiss cheese layer, which would be your output guardrails or your output filtering. So again, if you just think about the batter holding together the pancake, that's the best way that I can describe handling governance, which is what we actually use watsonx.governance for at IBM. And as a vegan, I'm not thinking about bacon or batter or the cheese even, but kind of thinking of these virtual agents, one of my favorite frameworks in the world is jobs to be done, you know, where you just have this really kind of programmatic thing of, who is the user? Or in this case, the customer or the service agent that is experiencing this digital, virtual, task-based agent? What are they trying to do? Where is the friction in the process? What are the workarounds that make it horrible? And if you could do it all better, why would it matter? And so thinking about these virtual agents, we're working with so many call centers where, you know, it starts with that hypothesis of where the friction is.

And then, is that the thing we should automate to the task-based agent? Because not everyone is ready to take a call center and make it all digitized. They're still human. So what are the tasks that are really ripe for that, where you can just say, okay, agent, tell me when this is happening?

And then, you know, maybe taking it a step further in terms of more of a sense-and-respond culture, where the agent is always on and they're identifying problems. You can use examples where you've got real-time signal in terms of a customer interaction, you know, maybe it's watching things on Netflix or your mobile phone and consumption, or things that have real-time data. These task-based agents can start to do the sense and respond, and cue up the proactive outreach and things like that. So it really can be not just a friction take-out, and let's use gen AI as opposed to traditional decision trees. For a lot of organizations, this is the first step to new operating models, where they can really start to think about new ways that they connect with their customers. Okay, that sounds like a Salesforce commercial.

Connect with your customers in a whole new way. I didn't mean to like do that. Like, but I guess I've been here too long. No, no, no. You're on brand. You're on brand.

So I want to talk about the future now. I want to talk about what is going to change. What do we have to look forward to in this space? Well, I mean, we've been working on what we call these transformation frameworks, because everyone looks to partners like IBM and Salesforce in terms of what this could look like, where do we stand on the steps of good, better, best, and what should we be doing next if this is the step we're on? And so we put together this thing we called the maturity framework for gen AI, where, you know, most organizations start in some form with human in the loop when they're bringing gen AI to their employees and their customers. It's the thing that gives them the confidence, the control and the learning around these things. And we kind of call that phase one. And phase one usually starts with, we have a hypothesis of value in terms of how we're using this to drive loyalty, customer experience, revenue, whatever it is.

And it by and large will take friction out of the process. It will bring together the value of human capital and data in very nice ways. And that's sort of phase one of this maturity model. But you can't get to fully autonomous until you start with the first one.

We talked about calling into the call center and, hey, what's my balance? Why is Bob the banker not calling Nick first? Because Bob has the data. It's just not super relevant and available to him unless he's focusing on it.

So what if we had these AI agents that are personalized and have this kind of persistent worldview in terms of how we achieve these business objectives? The AI agent tapping Bob on the shoulder: hey Bob, we noticed you haven't called Nick and we've seen all these things happening. We think you should talk to him.

We think these are the topics you should talk to him about. And here's your summary of all the interactions you've had with him over the last couple of months. So it's proactive. It's persistent. It takes all the dirty work out of that preparation and that identification of how to spend your time. So that's sort of, I think, the next chapter of these autonomous agents, where they're personal, persistent, predictive and present, and coaching and things like that.

Susan just painted such an ideal version of the future. What does yours look like? Yeah, I think there's four words that I keep harping on to myself: faster, smarter, customized and governed. And how are we doing that? Really, the first is faster.

Like, we've been training a ton of our LLMs and getting these trained on business-based data so that you're not likely to go off the rails. So if you look at what IBM's released, we've got a range of LLMs; one of them is called IBM Granite. Now, that's great. You're probably thinking, hold on, Nick, there's like a ton of LLMs out there.

Like, why am I going to use IBM's one? Or why am I going to go down this route? Well, this sort of brings me to my second part, which is smarter. So, in partnership with Red Hat, we actually released InstructLab. It's a completely open source framework which actually allows you to go and fine-tune a large language model.

The amazing thing about this, though, and I've been taking a look at it, is that we actually do something called synthetic data generation. So let's say, for example, you've got a PDF and you want to go and train your large language model or your virtual agent to be able to respond to questions based on it. There are currently two key ways that we can do that out there in the field.

So we can go and build a pattern called retrieval augmented generation, where, think of it like, we just go and dynamically grab chunks out of that document that are relevant to the customer's question. We chuck that into our LLM prompt, and then the LLM is able to answer based on that context. The other way that we can do that is using fine-tuning or parameter efficient fine-tuning, which basically means that we actually go and train the model to answer better. But to do that, you need the data structured, prepared and formatted so that we can actually go and pass it to an LLM. The cool thing about InstructLab is you literally just dump a PDF or a markdown document, and it's able to go and generate that data in that format, to be able to go and fine-tune that LLM.
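For the retrieval augmented generation side of that comparison, here's a deliberately naive sketch: chunk the document, score chunks against the question, and assemble the prompt. The keyword-overlap scoring is a placeholder; a real pipeline would use embeddings and a vector index.

```python
# Naive RAG prompt assembly: chunk a document, pick the most relevant chunks,
# and place them in the prompt as context for the LLM to answer from.
def chunk_document(text: str, size: int = 400) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(question: str, document_text: str) -> str:
    context = "\n---\n".join(top_chunks(question, chunk_document(document_text)))
    return (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_rag_prompt("What is the refund policy?", "Refunds are accepted within 30 days of purchase."))
```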

So I think I wrote a cheat sheet for our team, and I think it's like eight commands, and it literally goes and fine-tunes your LLM and deploys it. That brings me to the next stage, right? Customized. So once you've actually gone and made your LLM smarter, I think it's really important that we start baking this stuff back into our workflow. AI is not a process in and of itself; it's part of your daily work.

Like, how do we just make you smarter by using these tools? That's something that my team does. So, just a hook to the next podcast that I'm going to be doing: we're going to be talking about how to ensure that you've got successful pilots. But we actually bake this into the customer solution. So rather than going out to the AI system, it's inside of your CRM, could be inside of Salesforce, it's inside of something else. And quantifying the business value, to make sure that it's relevant and going to deliver bang for your buck, is absolutely critical.

And then governance. Right. The Swiss cheese framework: making sure that, once you've gone and rolled something out, you've got the ability to make sure that it keeps performing out into the long run. So I think that's it. Faster, smarter, customized, more governed.

I just have to be 100% honest here. It's really neat to have this conversation and to see the connections between you and Nick and the customers that are being served, and then, like me, figuring out and just kind of having all my synapses fire as you say certain things. So Nick and Susan, thank you very much for being here.

Thank you for opening up this world of virtual agents. And to everyone who's listening, everyone who's watching, thank you also for spending your time with us. It means a great deal. If you have any additional thoughts, any additional questions, please don't be shy. Just drop them in the comment section.

Hopefully we can get to some of those for you, but we'll see you again soon. Bye bye.
