AWS re:Invent 2024 - Hours to minutes: The gen AI evolution in Convera's customer experience (AIM331)


OK. All right. Good deal. Thank you, everybody, for joining us. We're going to have a fireside conversation between you and me and these two gentlemen, and we're going to talk about Convera's journey: what technologies, what platforms, what challenges, what business outcomes, all that great stuff. All right. So, of course, we'll do our introductions.

What's really important is that we cover the why. Why did we start this? What was the problem we were solving for? How did AI play into this, and where does Rackspace fit in? Did the results that were rendered match what was expected? Were there curveballs along the way? And what did we learn? So with that in mind, I'm going to go ahead and introduce myself. My name is Travis Runte, and I'm the CTO of Public Cloud for Rackspace Technology. And I have these two gentlemen here. So,

do you want to introduce yourself? Yeah. Hello, everyone. Good afternoon. Welcome to the session. I am Sudip Das, VP of Engineering at Convera, and I have the privilege of running a fantastic engineering team that focuses on innovation and delivers a lot of impactful projects for our customers as well as for our internal users. And

we will talk about that story today. Will you introduce yourself, please? Sure. First of all, thanks, everyone, for being here.

I hope you are having a good re:Invent so far. My name is Vikram. I'm primarily based in Boston, but I lead the global AWS Data Analytics and AI/ML delivery for Rackspace.

All right, fantastic. So I'm going to hand it over to you. If you could, tell us about Convera, tell us what you do, and just go from there, please. Sure, thanks. So for those who don't know about Convera, we are one of the largest cross-border B2B payment companies. We process business payments for about 26,000 customers, and we have a pretty large global footprint: we support about 200-plus countries and 140-plus currencies, which is almost all the countries and currencies in the world. So let me break down a little bit what that really means. Those are all the numbers, good stuff.

But what it really means is that we process business payments. We have B2B customers, for example a manufacturer or an importer-exporter, who pay invoices to their suppliers. We also have customers in the B2C space, mostly pension and payroll processors whose beneficiaries are in different countries, and they pay pensions and payroll to those beneficiaries. We also have universities and higher-education institutes whose international students pay their tuition through our system. As you know, international payments can get very complex, because there are regulatory requirements in all these countries, there are compliance obligations, and then there is fluctuation, or volatility, in the market and in the FX space.

So we at Convera break it down, cut through this complexity, and bring simplicity to international payments for our customers. Fantastic. Thank you. It's difficult making hard things look easy sometimes. Yeah. All right, Vikram, can you tell everybody a little bit about Rackspace, please? Sure. So Rackspace is a global organization with over 6,700 employees. We are an AWS Premier Partner.

As you can see, we have over 2,800 AWS technical certifications, and some of them are professional-level; some of our people hold multiple professional certifications as well. One of our superpowers is our capabilities and the accreditations we have received over the years. We have over 15 service delivery designations (in fact, we have all of the data analytics service delivery designations), and we have over 18 competencies, including the recent one in generative AI. And in general, Rackspace is well known in the industry for public cloud services. According

to ISG, we are leaders in migrations, modernizations, consulting, data analytics, and machine learning as well. Fantastic. Thank you, Vikram. OK. So let's set the stage for the journey that got you here. I think it's the journey you lovingly call "reimagining."

Tell us about that. Yes, I will come to the word "reimagining"; that's very interesting. So let's go a little bit back in history. We have been in this industry for a long, long time.

In 2022, we started a new journey when this business was divested and acquired by a new investor. That is when we kicked off a new journey. And of course, as you know, extracting a business from another business is never easy. It involves a lot of regulatory approvals, branding, establishing the new brand, and also migrating all the infrastructure from the mothership into the new company. So that required a lot of effort. So we

started with that. But within that, we actually decided to take it as an opportunity and set it as the foundation for our journey. It took us about a year, which was really fast, to get the entire infrastructure out of one company and migrate it over to AWS. But within that time frame, we also started blueprinting the future of our product, our platform, and our internal application landscape. While we were doing that, one thing we realized is that it's not only about the customer-facing applications, product, or platform; we also need to reimagine how we do business and how we support our customers internally. And we use this word "reimagining" because this entire journey is not just about upgrading the technology. For example, it is not about taking a Java 7 application and upgrading it to Java 17, or things like that. It's rethinking

how we do our business and what our business processes are, and reimagining that to be prepared for the future and serve our customers better. So from that point of view, we also started this new enterprise case management system, which is a very sophisticated, workflow-based internal application for any case that we receive from our customers. While we were on that journey, we realized there was a need: we were building a very sophisticated workflow-based system for case management, but if the intake process is not sophisticated, that is where we will get the backlog. We receive a lot of emails from our customers about all their different servicing needs. So that is where we thought: if we don't address that pain point of receiving so many emails, and if we don't automate that process, we are not really reimagining our process. We can have a really good application or customer care system on the back end, but

the input channel, the intake channel, is basically clogged. And once we had that problem statement, that challenge, we looked into what our solution could be. That is when we landed on the gen AI solution: we started doing a rapid proof of concept and tried to find a proof of value. And once that was established, the implementation was just a natural progression. Fantastic. So along that journey, at what point did you realize that AI could be leveraged to assist you with those next steps? Well, this is in general a case where we were doing something manually and had to automate it, right? And when we talk about natural language, email is basically natural language. So understanding that natural

language, translating the intent of that email, and then invoking the actual case management system to create the case with the proper case type and subtype: it's basically an AI task. So that was the natural thing. I think the bigger question there was why gen AI, why we got into that, right? That was the decision point: which technology would we pick to solve this challenge? So let's jump into that, some of the challenges. You

know, where exactly did you find yourself, and what was the road ahead you were facing once that was all determined? Yeah. So when we talk about the challenges, let's go back to the starting point, right, where we were. In the system that we had, there were 130 shared email boxes. Now, why 130 shared email boxes? Because we are a global company; we support all the different regions. And these shared email boxes were basically a manual workaround for not having a workflow system.

So these shared email boxes acted as a proxy for the workflow. We had shared email boxes for different regions, different countries, different customer types, different case types, and they were all monitored by our customer care team. So instead of having a workflow that, when an email comes in, assigns it to the proper case type and case subtype, what we had was: OK, we ask the customer to send the email to this particular email box so that we can pick it up. So the workflow

was done very manually, and it was pretty much a creative hack, I would say. So that was one part, and that's what I was referring to as the complex intake process. Unless we solved for it, we would not be able to get to our future state, the desired state. So that was one of the challenges, and we absolutely had to automate it. Now, what was the other big challenge? As I said, it is a reimagining project, right? It is not just a technology upgrade. So in this process,

we optimized our entire case management process, cutting down almost 50% of our case types and case subtypes. And this was to make our entire customer care process, and the team, more efficient. That was the whole intent. Now

think about it. We are getting so many emails, and we are manually sending them to different shared email boxes, each of which represents a mapping to a different case type and case subtype. But in the new system, the target system, that's completely out of the window, right? Because we are not really using those existing case types and case subtypes. So the data

that we have, we cannot use as a training data set, because the way it needs to be classified is completely different; the taxonomy has been reduced by half. So that is where we also didn't have the proper training data to use a traditional mechanism to come up with a solution. And that's where, going back to your previous question, that is

where gen AI came in as a solution we could leverage to classify all of our emails into our target case types and case subtypes. Fantastic. So you had the journey, you had the mission, you knew the business impact you were looking to drive, and you'd done some work, I'd say manually or through processes. When you went down the gen AI path, what were some of the technical hurdles that occurred? Well, it was a very new technology, right, at the point in time when we started.

So some of the challenges were things like where to start; making sure, since we are very much a regulated industry, that all the data we were using was not violating any of our customers' policies or their privacy; and then getting through the proper approval processes of our application security and overall setup. Those were the initial starting points of new technology adoption that we also had to go through. And with this being a very, very new technology, evolving so fast, we always have to ask: we are trying something out, but what is next? Is there anything better? If we try something today, maybe next week there is a new release of another model that is much better. So it's a continuous evolution process that we have to go through. And

so I'm going to ask you: Rackspace and Convera were on this journey together. What were your observations or challenges along the way? The first comment I would make: I'll hold him responsible for the model changes; he made us change like three times. But that's just the natural progression of the technology over the last three or six months, right? A few other challenges that we have seen, and we closely collaborated with Convera on this: language has been the big thing, because some of the emails were in different languages. That was one big challenge to begin with. And then the other thing was that, with this modern technology of generative AI and large language models, getting the security sign-off from Convera was a big thing for us. Primarily, we had to sit down and explain, talk about what it is doing, how an LLM behaves, what the pros and cons are, and, more importantly, make sure that everyone was comfortable that none of the Convera data was actually going out. That has been the biggest challenge through that entire evolution of the journey.

So that's a great point. I'm sure you're a highly regulated, very sensitive organization. Vikram, when we were telling that story, choosing those technologies and building that confidence, how did we tell that story? How did we get that buy-in? I think, and I'll probably tell some of this in a future session as well, some of it is that when we define an AI solution, we try to create the boundary of responsible AI and secure AI to begin with. That has been the cornerstone of our architecture when we are defining any AI architecture, for that matter. So we have those guardrails defined to a certain extent, and then comes the curveball of PII data. Especially being in the fintech space, PII information is crucial. We cannot let any of the PII be exposed, whether through a security leak or even to internal users, right? Obviously, not all PII data can be freely available. So

taking that PII information, what do we do with it? How do we put in the guardrails and all of that? That has been another challenge, and I think it all started with having multiple conversations, enabling both the Convera side and our side as well, because the technology has been changing at a fast pace. That has been the key for this to be successful. OK. So along the way, at some point, it's time to determine the platform, right? I'm willing to bet we chose AWS, right? But why AWS? Sudip, how did you come to that conclusion? I would highlight two primary points in our reasoning for selecting AWS. One is really fast execution. That's

one part; the other is the pace of innovation. So let me break down what that really means. When we talk about speed of execution: we have a bias for action, we are on this reimagining journey, and we have to get things done really fast. And when we talk about any AI/ML model, or gen AI for that matter, the model itself is not the full answer to a business problem, right? In order to solve the business problem, the actual model is only part of the solution, and along with it we need a lot of other components. In this case, that means having an email receiving system or service, then the way we redact PII data, and then you have to call the actual ECM service, the enterprise case management service.

So it's multiple services that come together, and the integration between all these services is also very important. Now, without AWS, if we started building each of those services ourselves, we would have to spend a lot of time building that infrastructure and the integration code, right? That gets cut out when we leverage AWS native services. We have made a big investment in migrating to AWS, so it was a natural choice for us to see where we had native services we could leverage. That was one of the primary reasons for AWS. And then the second point was the

pace of innovation, right? As I was mentioning, things were moving so fast, especially in the gen AI space. And then Bedrock came along as a game-changing service for us. We could build that integration once, but we also had the flexibility to try out a lot of different underlying models without sacrificing time, with a very quick turnaround on each of them. So

it gives us a lot of flexibility in experimenting as the industry and the technology progress and mature. And then, being in the AWS ecosystem, with all the connected services, gives us that speed of execution. And, Vikram, as part of this, what factors did we incorporate into that journey to help evaluate which platform was the right destination, from the Rackspace perspective? Yeah, I think it goes back to our approach, right? How we build AI or gen AI solutions for our customers. It's not like the architecture was defined overnight. We actually sat down with Sudip and his extended team, and we had multiple conversations about the right thing to do. More importantly, we take a step back from technology and start by looking at the business problem first.

So we tackle the business problem first, understanding what the business challenge is that we are trying to solve, and then come to proving that technology can solve that problem. And as part of that, from the "why AWS" perspective, some of it Sudip has already covered. Because of the relationship that we have with AWS, there is additional help that we can bring to the table as well, from the AWS services side, whether it's funding options or additional SAs that are needed, specific to the fintech industry: what to do, what not to do. That is equally important for any solution to be successful. I want to ask you another question, Vikram. As technologists, oftentimes we solve problems with technology, losing sight maybe of the business reason why we're doing it. As we went down that journey, how did we stay oriented to make sure we weren't using technology for the sake of it, and that we were actually keeping the business challenge, the business outcome, on the horizon?

It's a tricky question. But what I would say is, especially with an AI solution, we are not trying to replace any human per se. What we are trying to do in building the solution is ask how AI can augment and make that person better: what tools can we provide to that person to make sure that, in Convera's case, effective classification and effective customer response is happening. I think that's the approach we have taken in general.

Yeah, that's actually a good point, because when we did this proof of concept, there were a lot of discussions about whether we should use gen AI, or whether it was just a buzzword and we were getting pushed into it, right? But it came down to the things I mentioned before: in typical AI/ML, where you have a lot of data volume and a training data set you can use, that would make a lot of sense. But in our case, it was a little bit different, because we really didn't have much of a training data set we could use for our future case types. So that's where gen AI actually fit really well into our overall solution. And then the overall AWS ecosystem as well, right? When the large language models came out, obviously with other providers first, AWS was a little bit behind. But the way AWS then made those services available to customers, that was a huge differentiator for any organization, to be honest.

All right. Fantastic. So, Sudip, you ended up choosing Rackspace as the partner to go on this journey with you. Can you tell us a little bit about why Rackspace, and what that decision, that evaluation, consisted of? Yeah, it goes back to one of the same points as "why AWS":

it was about the speed of execution, right? We were really looking for a partner with the approach of doing a really quick proof of concept, proving the value, and then going to the next step; a team with good AI experience and an AI practice area. But what is also very important, since this was a proof of concept, an experiment, is bringing together that three-party relationship: our own engineering team, the specialized skill set that Rackspace brings in, plus AWS, because it was a new product they had launched, so bringing them in and getting their support was also very important,

right? And Rackspace had been embedded in our team for quite some time. They know our environment, they know our business. So it was about bringing all three parties together, exploring this new technology, and having that approach of testing it out, doing a proof of concept rapidly. That

is what actually helped us go ahead with it. Got it, got it. So, kind of a three-organization team that had their own specializations, their own focuses, but operated as one organization. Yes. Yes, because it's a new technology and a different architecture. We had to review

that with the AWS solutions architect team, bring in that expertise, and tweak it for our actual production organization. That is where we really needed all the groups to come together and work as one common team, and that's why it was really important for us. All right. Fantastic. So one of the specializations that Rackspace provided was specific to AI, right? Yes. Vikram, I'm going to ask you to expand on

our approach to AI and the business unit we've designed to focus on this expertise. Yeah. So before I

talk about our approach and all of that, I'll take a step back and explain what FAIR is in general. About 18 months ago, when the gen AI hype started, we took a step back at Rackspace and created a spin-off within the Rackspace organization called FAIR. It stands for Foundry for AI by Rackspace. What we do as part of that organization is focus purely on building AI solutions and services for our customers.

That's what we do, and the three pillars that we abide by as part of FAIR are symbiotic, secure, and responsible AI. Those are the key things in every conversation, whether it's ideation or talking to customers about what you can do, what you cannot do, what to do and what not to do. Those are the three pillars. And our approach under the FAIR umbrella has been pretty simple, if you ask me. We don't go out and say, here is a use case specific to fintech, let's go implement that. We sit down with our customers: what are the business challenges, what could the potential solutions be, and, more importantly, how can a particular technology solve that business problem? That's exactly what we did with Convera as well. We had multiple sessions to sit down on the use cases specific to Convera, the business challenges, and then prioritized the particular use cases to begin with.

That's what we do as part of ideation. Once we do that, we get to the next step of incubation; that's where we do POCs or build something close to an MVP product. In Convera's example, we did that POC over two to three months; it was not a short two-week POC followed by saying, Bedrock can solve your problem. There were additional challenges we ran into, specific to the data set and especially around security, that we had to account for as part of the POC itself. And once the POC is successful, we get to the next stage: industrialize. Industrialize is where we build a production-grade environment, make sure that all the CI/CD principles are accounted for as part of the overall solution build, and then get to a production-ready state and a production deployment. And all of this is done within those three pillars of symbiotic, secure, and responsible AI.

Yeah, it's great. Yeah, that incubate phase is interesting, right, or the POC phase, whatever you want to call it, because a use case can be super valuable but not easy to do, or easy to do but not creating the value that you want. And we had that example at Convera as well. It's not like we did one POC and washed our hands, saying it's not possible.

We learned our lessons from the first iteration, got to the next one, fixed those issues, and ran into something else. I think we went through three iterations of the POC to get to that final stage. He'll still hold me responsible for some of the accuracy numbers, to be honest. But we have done all of that. So did you feel like that was an iterative process, a valuable process? Yes, of course. Without an iterative process, you just cannot do it, right? And when I say iterative process, it's not only about having our engineering team or the product team; it also means including our business team, the service delivery team, in the mix. That's very important.

And especially in the POC phase, it's an evolution. You just can't define an architecture as-is and say that's what we go with. As we learn, we evolve, we make some architectural changes, and we move on. That's why we started with one LLM to begin with and ended up three versions later in production. OK. And so

let's get into the nitty-gritty, Vikram. I'm going to ask you to walk through the architecture at a high level, and, Sudip, feel free to jump in as well. Yeah. So this one is

at a 10,000-foot level what we have done for Convera; we'll get into some of the details in the follow-on slides. At a high level, as Sudip explained the business challenge, they had a case management solution they were building, and part of this was to see how we could automate that process, make it simpler overall, and also improve their customer experience in general. As part of that, from a high-level architecture perspective, we review the emails as a first step. Once we review the emails, we classify them as support emails versus non-support emails. That's the first step; there are some non-support emails in there as well. And once that classification

has happened, anything that needs human review is sent back in a feedback loop for human review to get the ground truth. Once that initial filtering is done, we go to the next stage of categorizing, or classifying, those emails. For that, we went through multiple iterations, and that's where we classify each email into one of the 40 or 50 categories that we have for Convera. Some examples would be "reset my password" or "why didn't my transaction go through." There are specific criteria, specific classifications, that Convera has based on how they resolve those issues. So we classify those emails first, and anything that is not classifiable, for whatever reason, goes back into the feedback loop for human review. And once the email classification is done, we summarize the email: what exactly is the problem the user is describing, from an experience perspective, and what necessary action should be taken after that.

Some of those actions could be opening a secondary ticket automatically and making sure it gets resolved, or it could be a human intervention, where someone has to go and manually check why exactly a particular account is not receiving any money, and things like that. For that, we have used different AWS services. But the core important thing is that center piece you see, where all the Lambdas are: that's where the majority of the work happens.
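The action-routing step described above can be sketched as a simple dispatcher that decides whether the model's suggested action is automatable or needs an agent. This is an illustrative sketch only; the action names and queue labels are assumptions, not Convera's actual implementation.

```python
# Hypothetical set of actions the pipeline is allowed to execute unattended.
AUTOMATABLE = {"open_ticket", "resend_confirmation"}

def route_action(classification):
    """Decide whether a classified email is handled automatically or by a person."""
    action = classification.get("action")
    if action in AUTOMATABLE:
        # Safe to automate: hand off to the automation workflow.
        return {"queue": "automation", "action": action}
    # Anything the model can't resolve on its own goes to a human agent.
    return {"queue": "human_review", "action": action}
```

In a real deployment, the "queue" value would map to something like an SQS queue or a Step Functions branch, but the decision logic stays this simple.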

And some of those things, and I'll get into the details, involve PII information, which is critical for the fintech industry. And then, more importantly, we also had conversations around tokenization of the data, which is equally important for Convera. So at a high level, that's the architecture we had. Sudip, do you have anything to add there? Yeah, this is what I was just referring to: it's not only about the LLM or just the AI solution, it's the ecosystem of different AWS services that come together for the entire end-to-end solution. Fantastic. So, double-clicking a little bit into the email classification and summarization. Yeah, yeah. So

this is very similar to what we just discussed on the previous slide. One thing I mentioned before is that we had about 130 shared email boxes, all running out of Outlook. So the first thing we had to do was route all of this email into Amazon SES, so that the emails and email content could be picked up from there and fed into the process. It's a series of different functions that we have to orchestrate even before we call the actual gen AI or LLM model. And the first thing we do in that series of Step Functions steps, each of which is a Lambda function, is scrub, or redact, all the PII data. So we keep coming back

to the topic of PII data because, in our regulated industry, the privacy of the customer is very important. It is also about our internal governance of AI: do not use customer PII data if it is not really required, right? And this classification does not fall into the category where we have to use customer PII data. So that is the first thing we do in the classification process. Now, every time we run this classification process, there is also a cost, so we have to look at which emails we actually send through it.
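A minimal sketch of that redaction step might look like the Lambda below, using Amazon Comprehend's PII detection to find spans and masking them by character offset. This is an assumption about one way to implement it, not Convera's actual code; the event shape, the 0.8 score threshold, and the mask character are all illustrative.

```python
def redact(text, entities, mask="*"):
    """Replace each detected PII span with mask characters of equal length."""
    chars = list(text)
    for ent in entities:
        for i in range(ent["BeginOffset"], ent["EndOffset"]):
            chars[i] = mask
    return "".join(chars)

def lambda_handler(event, context):
    # Hypothetical event shape: the raw email body extracted from SES.
    import boto3  # imported lazily so the pure redact() helper has no AWS dependency
    body = event["email_body"]
    comprehend = boto3.client("comprehend")
    resp = comprehend.detect_pii_entities(Text=body, LanguageCode="en")
    # Keep only confident detections before scrubbing (threshold is illustrative).
    entities = [e for e in resp["Entities"] if e["Score"] > 0.8]
    return {"redacted_body": redact(body, entities)}
```

Masking by offset keeps the email's structure intact, so the downstream classifier still sees the shape of the request without seeing the customer's personal data.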

Now, think about the entire process: when a case gets created in our case management system, there is back-and-forth communication with the customer. When we send an email, the customer can reply, and for those replies we already know the case ID and the case type, so there is no point in sending them through the expensive classification process. That is why we embed a case ID when we send the email; when we receive the reply, we parse it out and automatically attach the reply as a customer response to the existing case. After filtering all of those out in the Step Functions workflow, what is left is the brand-new emails, the requests for a new case. Those go into a step function, and from that step function we invoke the Bedrock APIs to do the classification, and what we get back is the case type and the case subtype. Once we have that, a separate step function invokes our case management API and creates the case. In this loop, as Vikram was mentioning, there can be an error at every single step, and when a step errors out, the email goes into a handle-error queue and gets handled manually. And there are certain types of emails in this process. For example,

some of our customers send secure emails with no content in the email itself; it's more like, here is a link, go to this external website and view the content there. Those fall into our exception scenarios, because they can't be classified automatically; they require manually clicking through into a different application, and so on. So that's the overall classification process and how we execute it. A couple of challenges that we observed as part of this overall solution: we've obviously spoken about language, and one thing I mentioned earlier was the availability of a training data set.

So we went through multiple iterations where we created some synthetic data to mimic those emails and exercised the model using that. We didn't get to the level of fine-tuning the model; we used prompt engineering as much as possible. For the PII removal we used a model hosted on SageMaker, and for classification and summarization we used Bedrock. And we track all of this closely in DynamoDB in near real time: what got processed, what didn't, what got classified, what errored out.
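A Bedrock classification call of the kind described here might look like the following. The model ID, prompt wording, and label set are illustrative assumptions, not Convera's configuration; only the request shape follows the Anthropic Messages format that Bedrock's `invoke_model` expects, and the actual boto3 call is left as a comment.

```python
import json

# Illustrative label set; Convera's real case taxonomy is not public.
CASE_TYPES = ["payment_inquiry", "account_update", "password_reset", "other"]

def build_classification_request(email_text: str) -> str:
    """Build an Anthropic-messages request body for a Bedrock invoke_model call."""
    prompt = (
        "Classify this customer email into exactly one of: "
        + ", ".join(CASE_TYPES)
        + ". Reply with the label only.\n\n"
        + email_text
    )
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 20,
        "messages": [{"role": "user", "content": prompt}],
    })

# The Lambda would then call Bedrock (sketched; model ID is an assumption):
#   bedrock = boto3.client("bedrock-runtime")
#   resp = bedrock.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#       body=build_classification_request(redacted_email),
#   )
#   label = json.loads(resp["body"].read())["content"][0]["text"].strip()

body = json.loads(build_classification_request("My payment to a UK supplier is stuck."))
print(body["messages"][0]["role"])  # user
```

Keeping the request construction as a pure function makes it easy to log to DynamoDB and to swap model IDs as new Claude versions ship, which the speakers say they did repeatedly.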

We also put a visualization on top of this using QuickSight to show how many emails are getting processed on a daily or monthly basis, and what the trend looks like. Over time it gets better; as the model is exercised with more data, you should see more and more improvement. That's another piece of work we did. When it comes to specific classifications and the API integrations, it's not just about creating tickets. Some of the issue types and requests we see can be fixed easily and don't need manual intervention, so there are specific APIs we worked on closely with the organization to get those issues resolved automatically. That's where the customer experience has improved significantly: previously, if an email came in and a person wasn't available, you just waited an hour or two; now it gets handled in near real time. Got it, got it. So this is the gateway to your customers, and their gateway to you, so naturally availability and fault tolerance are critical. Vikram, can you talk a little bit about the architecture here and the decisions that were made to ensure it remained available and resilient where needed? Yeah. So overall, most of the services we have used here are serverless, highly available services, and that's the beauty of using serverless services within AWS. When it comes to fault tolerance in this particular solution, at any point in time an API invocation can fail for whatever reason: the API itself being unavailable, or a particular data element missing that we should have passed through that API. So we built in two to three iterations of fault tolerance, where we made it retry two or three times before it goes to a human for manual review and kickoff. That's what this architecture defines: the Lambda triggers, or the Step Function, retry the entire process until the case creation completes. There are also some instances where, once we have classified an email, we have to maintain two different versions. One, the PII data is redacted.
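The retry-then-escalate behavior Vikram describes could be sketched as follows. The retry count matches the "two or three times" from the talk, but the function names and the in-memory review queue are illustrative; in the real pipeline, Step Functions `Retry`/`Catch` fields and an SQS handle-error queue would play these roles.

```python
MAX_ATTEMPTS = 3  # "retry two or three times" before human review

def call_with_retries(api_call, payload, escalate):
    """Retry a flaky downstream API; hand off to human review on exhaustion."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        try:
            return api_call(payload)
        except Exception as err:
            if attempt == MAX_ATTEMPTS:
                # In the real pipeline this would publish to the
                # handle-error queue (e.g. SQS) for manual processing.
                escalate(payload, err)
                return None
            # A real implementation would back off between attempts here.

# Hypothetical usage with a stubbed API and an in-memory "queue":
review_queue = []
calls = {"n": 0}

def flaky_api(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return {"case_id": "C1042"}

result = call_with_retries(flaky_api, {"email_id": "m-1"},
                           lambda p, e: review_queue.append(p))
print(result)  # {'case_id': 'C1042'} after two transient failures
```

The important property is that exhaustion never drops the email: it always lands either in the case management system or in the human-review queue.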

That is, we maintain a version of the email with the PII redacted, and we also maintain a version with the actual PII elsewhere. When we send the ticket out, or when we call that API, we need to reference both of them, so that the person taking action on the other side knows which account number to look for and who it is they are helping. Things like that. Fantastic. All right, Sudip. So this has been a long journey, and it's interesting; we started it for a reason. What were the results on the other side? So, as I said, we had about 130 shared email boxes.

We cut it down to three email boxes, and those aren't Outlook mailboxes but three email addresses for our customers, plus two more for onboarding, to be honest. So we cut it down by about 95%. What that essentially means is that we eliminated the creative hack where, lacking a workflow-based system, we used shared email boxes as a proxy for a workflow. We have eliminated that entire process. We eliminated the manual processing of those emails. And what that really means is creating value for our customers.

What is very important in customer care is your response time to your customer. When all these emails were processed manually, someone was reading each one, comprehending it, and then manually typing in and creating the case. Now we have completely changed that process; it is all near real time. An email comes in, gets classified automatically, and the case gets created automatically. That is the significance of the 82% accuracy we have achieved with this model: 82% of the cases are getting created completely automatically in our case management system. As for the 18% where we don't, some of those are the secured or encrypted emails that fall into that category.

There are also certain emails where even the PII redaction reduces accuracy by some percentage points. Those are the areas we will keep working on; as I mentioned, it's a continuous journey. There will be evolution, and we will look at how we can improve that accuracy without using PII data. That's the next step in the progression. Got it, so wildly impactful. Now, the journey also probably rendered a few lessons, right? We learned some things along the way together.

Sudip, I'll ask you first: what are some of the lessons learned along the way? I'll highlight two points. One is, when you are working with gen AI, particularly in a regulated industry like ours, and you're doing a rapid MVP really fast, be prepared to sacrifice some accuracy in your initial release, because you have to redact some of the PII data, and that can reduce your accuracy by a few percentage points. That's one of our learnings. The other point, which I also mentioned before, is the fast-paced evolution of this technology. Be prepared: it is not that you build it now and it's good for the next six months or years. Be prepared for the evolution of this technology, and be prepared to change whatever you have built and improve the accuracy and efficiency of the model.

Yeah. Some of the challenges and lessons learned that we had through this journey: one is the PII data that Sudip mentioned. To put some numbers on it, without PII redaction we have seen above 90% accuracy. The second one is label noise; the noise in the labeled data was significant. What I mean by that, as a small example: there could be an email that says, reset my password, I'm not able to send money. When a human reviews that email, it could be classified into any one of two or three of the classifications that Convera has.

And depending on how the AI model classifies it, you could call it 100% success or 30% success. That's one big challenge, or lesson learned, that we have: label noise is a real thing when building an AI solution.

And then there's the rapid pace at which new LLMs were becoming available. Right around the start of the year, literally a new model was being released every three or four weeks, so by the time we showcased something, we would be challenged again: hey, will this improve if we use the latest model? We took that as a challenge, and over time it definitely improved; we started with one version of Claude and ended up with the Sonnet version of Claude when we actually put it into production. So that's the one big lesson: keeping up with the pace of the technology's evolution has been the biggest challenge. So I'll ask another question, specific to that rapid pace of change. How did you balance protecting what you had against incorporating new changes quickly enough not to miss out? How did you balance that as a business? Change is what we are adopting, right? We are on a journey of change; it is a reimagining, so change is baked into the process. But we take extra care that while we bring the change, we are not disrupting our current business and not degrading any of our customer services. Change is something we cannot avoid, and it is the journey we are on, but it's about keeping the right balance: make the change without disrupting any of our existing business. Fair enough. And as part of that change, we had instances where, when we tried a new model, our performance and accuracy actually went down, so we had to quickly pivot and use a different one. Like I said, we went through multiple iterations of that as part of this journey. Got it. So what's next? That's a really broad question, of course; we are on this journey of reimagining. But if you look at the payments industry, particularly on the consumer side, it has progressed a lot in the last two decades: it used to be cash-based, paper-based payment, and we have come a long way to where we can pay through our phones. The progress on the B2B payments side is not that much, particularly when it comes to international payments. So we are on a reimagining journey to rethink how B2B cross-border payments happen and bring more value to our customers.

So that's where our journey is. There is a lot happening in the industry, a lot of discussion about real-time payments and embedded payments. We are exploring all of those; we are researching with our customers what their current pain points and unmet needs are, and also what they don't yet see as a need but which will come for them in the future. That is what we are exploring as we develop our roadmap and blueprint. It's a continuous process of understanding customer needs and factoring them into the roadmap. Got it, got it. At the risk of making this a purely technology conversation: any emerging AI capabilities or use cases that you can see? Yeah. These days, whenever we do any reimagining, we cannot think of doing it without AI being part of the solution; it brings tremendous value and tremendous efficiency. We have structured our strategy around AI, or more specifically gen AI, in three themes. One is improving the productivity of the organization, and this project is a good example of that. Another example, where we are doing some experimental proofs of concept, is how we can improve efficiency in our SDLC, our product development process: not just having a copilot or helping our engineers code faster, but improving the entire product development life cycle.

Then there is another theme of how we can protect our business better using AI or gen AI. In our business, fraud is one thing we always have to invest in, finding ways to do fraud mitigation and build the solutions around it. So that's protecting the business.

Those are the two areas where we have our immediate focus when it comes to AI and gen AI. And then there is a third part, which is more about value creation. I would say value creation cuts across all three themes, because in this case you already see the value for our customers in that the resolution time is much faster, but there is explicit value creation for customers that will come through that third theme. That is fantastic. So, Vikram, expanding on some of the AI capabilities that Rackspace has, why don't you talk about some of our opportunities here? Yeah. In general, there are three broad areas we have been focusing on from a gen AI solutions perspective. One is creating SMEs within your organization, or within a sub-org, that are primarily focused on and know the enterprise data very well and respond clearly and crisply. Imagine an intelligent SME within a software development organization where you can go look at prior documentation: an agent with a knowledge base that pulls from Confluence, OneDrive, whatever your organization's standards are. Building those intelligent SMEs for an organization is one area.

The second one takes it a little further: creating assistants for an organization. Imagine a regular request that everyone makes, like sending an email to HR saying, give me my employment verification letter, and an assistant gets it done, automated end to end. It's basically intelligent automation: you put in a request or send an email, and five minutes later you have an email back saying, here is your employment verification.

So that's the second pillar, and the third and most interesting pillar, honestly, is where the future of generative AI is. You have obviously seen some of the announcements today on coworkers, or what I call agentic architecture. That is the future, and that's what we are working on: having coworkers within our organization to begin with, and also, as part of our service delivery, building multiple coworkers to make our lives easier from the delivery perspective. And if you look at the evolution from general-purpose AI to purpose-built AI, those SMEs primarily act as intelligent agents; that's where Bedrock Agents, or Amazon Q for that matter, fall in.

But the more modern, custom agentic AI is where some evolution still needs to happen; or rather, organizations need to build those agents themselves. AWS is giving you the foundation, whether through Bedrock or all the foundation models available there; you just have to build on top of it. You fine-tune it on your organization's data set, you run your inference, and, more importantly, you build an application on top that everyone can use. Imagine, as part of your software development life cycle, an agent that actively looks at the code you are writing and makes constant recommendations: hey, in this statement, writing the WHERE or the GROUP BY one way versus the other makes a fundamental difference in the SQL execution plan.

It's as simple as that, but an agent can actively review those things and provide recommendations; a second agent reviews the code and creates the test cases for you, so you don't have to build them from scratch; and then there could be an agent that runs the entire test suite end to end and delivers the results to you. I think that is the future of AI in engineering. Fantastic.

And we've talked a lot about Rackspace FAIR. We have very specific packaged offerings available on the marketplace. We mentioned the Ideate process, which helps you understand what's possible and how we can help you get there: brainstorming together and seeing what's out there and what's a possibility.

We also have the full accelerator, where we take you end to end through those three phases, and we have a tremendous library of reusable assets that can help you jump-start that journey as well. So if you're interested, there are QR codes, and we'd love to talk to you more. Gentlemen, let's wind down with closing remarks. Sudip, I'll start with you: any closing thoughts you'd like to share? I think it's still a very early stage of leveraging gen AI.

But the good thing is that, unlike many technologies we have seen, there are real business use cases that can be solved using gen AI. So it's about experimenting with those and bringing out the business value. It's going to be a really exciting journey with gen AI. Vikram?

I'm just thinking. In general, gen AI should not be an afterthought, is what I would say. If an organization doesn't have gen AI as part of its roadmap in 2024, it should prioritize it for 2025 for sure. In some cases, as Sudip mentioned, it's about making your internal life better first and then thinking about monetizing next; some organizations could actually monetize gen AI as part of their customer delivery as well. That's the next evolution more and more organizations are moving toward. And I'll touch again on the agentic architecture: that is the future, and every organization needs to think about building those agents internally for themselves.

All right. So in closing, I want to make sure everybody knows you're welcome to stop by our booth; we're booth 1428 in the expo hall, and we'd love to talk to you. This journey is unique but also very common, and we'd love to talk to you about what's possible. Additionally, we're having a really great party tonight; the details are on the QR code, so please register and join Rackspace. It's going to be a pretty good party, and we'd love to have you. And the last thing I'll say is thank you to these two gentlemen: thank you, of course, for your confidence in allowing us to partner with you, and thank you both for sharing your expertise and your time. And thank you all for joining us and listening to the story.

Thank you all. Thanks Travis. Thanks a lot. Thank you Travis.

2024-12-08 12:16
