Good morning, all — glad to be here in front of you. I'll start by introducing myself: I'm Kaushik from Accenture. At Accenture I'm the global asset engineering lead, which means I'm responsible for all of our software asset development and the tools and automation we take to our clients. I've been over 20 years in the firm, and over the last years I've been specializing in applying AI and automation to software development, application, and platform management. Over roughly the last 10 years I'm happy to say I've filed over 115 patents in this space. So that's what I do. I'm glad to have here with me my long-term collaborator of the last 10 years, Luke. Luke, do you want to quickly introduce yourself as well?

Hi everybody, I'm Luke Higgins. I'm the chief architect responsible for Accenture's global asset and automation deployment. I also work with Kaushik, a.k.a. "The Wizard," to build out the different capabilities, particularly around the AIOps stack, but I've also been contributing in some of the other areas we'll talk about today. I don't have quite as many patents as Kaushik — I think I'm only at about 21 — but this year we won Accenture's Innovation of the Year award for some of the work we'll explain today: how we're using generative AI and some of our AI models, particularly in the service delivery lifecycle approach, as well as how we enable it in modern operations.

Thanks. So he calls me The Wizard; I call him Tony Stark. A bit of background on what we're going to talk about: how we bring generative AI to power innovation across all things technology — software development, application management, migration, modernization, rationalization of software, managing applications and platforms — and how we do this powered by AWS services, specifically Amazon Q and the foundation models available from AWS.

A little bit of context on how we got here. Two years back we were in the world of automation and AI, and we thought we were on to the next big thing — you'll have heard of it — RPA. It looked really great, and then, almost exactly two years ago, OpenAI happened: ChatGPT went live, and suddenly the whole discussion became "do we even need people, do we need software engineers?" There were those who felt everything was going to go away, and there were many who were very skeptical — including me — about whether any of this was really going to work in a real production system.

Being Accenture, we put a method to it. The first thing we did was work with Stanford to create a scientific model for how we would approach applying generative AI across what we call the technology delivery lifecycle — everything related to technology. We put a methodology around it, and we had the benefit of data from over 50,000 projects of all kinds: managing applications, SAP implementations, Oracle implementations, cloud migration projects of different kinds, custom development, and so on. For each of these archetypes we looked at where we had applied automation in the past — we've been doing this for over 20 years, so we know what works well with traditional automation and AI.
From there we created a hypothesis on where we should apply generative AI. For example, test automation already worked very well, and ticket automation worked very well, but what we could never do was a meaningful root cause analysis of what exactly happened behind a ticket, and we couldn't create designs or generate code. Those were some of the areas we looked at.

We are primarily a services company and we have a number of partners like AWS, so we spent time with AWS to understand what they were doing in the space around foundation models — back then it was CodeWhisperer that AWS was coming out with. We understood the AWS roadmap and looked at how we could complement it: we run projects, and AWS complements us with capabilities around code generation, testing, the foundation models, and so on. That was the third piece — creating our build-versus-buy strategy.

We then ran around 70 of what we call controlled pilots. That means we took a project, ran it the usual way, then created a parallel team, made sure the two teams didn't talk to each other, and had the new team run the generative AI experiment: they would develop the same software with gen AI, solve the same tickets with gen AI, and so on. Then we compared results, learned from our mistakes, and documented all of it. These became our playbooks and trainings — everything from technical topics to legal and regulatory considerations, costing, and architecting — which we institutionalized. What we are now doing is scaling it: we apply this in over 1,200 projects today, and in every new project. We have two solutions: a platform-based solution, which we will show you, that uses AWS capabilities like Q and the foundation models, and a more open-source version that we can build together with clients.

So what have we learned, and how are we actually applying this? That's what we're going to show you. The first question we get asked is: is generative AI making us more productive? The answer is absolutely yes. We see significant additional productivity compared to traditional automation — over and above it, complementing it. But the real promise of generative AI is beyond that. The typical conversation I have starts with the left side of this picture; it should look familiar. Most CIOs or CTOs I talk to say their landscape looks like this, or uglier, or prettier, but something like the left side. The real promise is using traditional techniques and tools, adding generative AI, and getting to something like the right side: beautiful, modular systems, clean, without technical debt, with clean integrations and good usage of data — the ability to go from left to right with smaller risk, quicker results, and in a much more modular way.

The way we apply it is through the seven things you see at the top. The first, and most foundational, step is digitizing: taking all of the data from your systems and digitizing it using generative AI. That means two things. There is explicit knowledge — knowledge that we document — and there is tacit knowledge that lives in the minds of the people who know the applications and systems.
And then there are the unknowns — for example, what's lying in the code. A lot of our clients have mainframe systems written in COBOL, assembler, and so on. So we digitize what's known: we use multimodal models to, for example, record a session like this one (with consent), digitize that knowledge, synthesize it with all the other knowledge artifacts that exist, and use that to create system understanding documents, standard operating procedures, and so on.

The more interesting piece to me is then going after the code, because a lot of the knowledge in a system is actually lying in the code — using an agentic approach to reverse engineer it by running agents. One of the best examples I can think of is assembler: taking assembler code and reverse engineering it, we've been able to get about 50% accuracy, where you actually get documentation mapping code to English. I wrote assembler 20 years back, I've had to debug an assembler system, and I thought this was impossible; I've been proven wrong. My hypothesis is that if you can do assembler, you can do anything — then you only have the platforms left to deal with, where it's more configuration based, and I'll show you some of that.

The second part is managing applications and platforms. Here again, we've done ticket automation, monitoring, observability — all of that works very well. But what we couldn't do before is, for example, take a voice call — say a call being handled in Amazon Connect — and convert it into a ticket, combining information from the user with the knowledge in the system. In a customer call every minute is valuable, and that's something we've now been able to do. The second piece is something I would have wished for over the last 20 years: as you get a ticket, can you do a root cause analysis? Now you can. You use the knowledge base, the architecture and configuration of the current system, and the information in the user call and the ticket, synthesize all of it, point to what events happened, and explain those events using generative AI.
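As a rough illustration of that root-cause step, here is a minimal sketch of how ticket text, recent events, and retrieved knowledge-base snippets might be synthesized into one grounded prompt for a foundation model. The `retrieve_kb_snippets` and `call_model` helpers are hypothetical placeholders, not the platform's actual APIs.

```python
# Minimal sketch (not the production platform): synthesize a root-cause
# hypothesis from a ticket, recent events, and knowledge-base context.
# retrieve_kb_snippets() and call_model() are hypothetical placeholders.
from typing import List


def retrieve_kb_snippets(query: str, k: int = 5) -> List[str]:
    """Placeholder for a vector-store lookup over digitized system knowledge."""
    raise NotImplementedError


def call_model(prompt: str) -> str:
    """Placeholder for an invocation of a Bedrock foundation model."""
    raise NotImplementedError


def root_cause_hypothesis(ticket_text: str, recent_events: List[str]) -> str:
    """Combine the user's description, platform events, and system knowledge
    into one prompt and ask the model to explain what most likely happened."""
    kb_context = retrieve_kb_snippets(ticket_text)
    prompt = (
        "You are assisting with root cause analysis for a production incident.\n"
        f"Ticket description:\n{ticket_text}\n\n"
        "Recent events from monitoring (most recent first):\n"
        + "\n".join(f"- {e}" for e in recent_events)
        + "\n\nRelevant system knowledge:\n"
        + "\n".join(f"- {s}" for s in kb_context)
        + "\n\nExplain the most likely root cause, citing the events and "
          "knowledge entries that support it, and state what is uncertain."
    )
    return call_model(prompt)
```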
The third and fourth areas — I'll do the fourth first. Once you have the knowledge, everything across the software development lifecycle opens up: writing new features and new stories using generative AI, which is what we're doing now with a number of our clients to generate features fast. When I ask where most of the time in a product release goes, there are many different views, but one of the common answers is defining the right features and stories — and you can speed that up by providing context on what the current functionality is, with gen AI and the vector stores.

One of the very interesting things, which I'll show you, is architecture as code. Architecture objects existed even before generative AI; the problem was that they are very hard to read — as a practicing architect I find it very difficult to read an architecture that way. What we realized is that with gen AI you can generate an architecture object reliably, whereas you cannot generate a diagram reliably, and the object is something you can actually use in development. So we codify everything — a functional architecture, a technical architecture, an application architecture, development and deployment architectures, and so on — write it as code, and use that as a basis to generate new software. Similarly UI code generation, and that's where Q comes in to complement us, taking over the actual coding itself, the testing of it, and so on.

Similarly with platforms — SAP, Oracle, Salesforce, Workday, Adobe, and the like — we generate configurations along with code customizations. The other key thing we do there: one of the most common questions clients ask is when to use standard platform features versus when to customize. So reverse engineering the customizations and providing decisions and insights on when to keep a standard feature versus when to customize is the other thing we do with platforms.

The last three are, to me, the most promising, especially for large development and SI work: comparing different applications and overlaying that with insights and diagnostics to find out, for example, what's common. One of our clients had nine versions of a reservation system, each of which came from a different acquisition. We had to reverse engineer all of them, compare them, and find out what's common and what's different, so we could make a call on which version to keep. Of the nine versions, some were written in C, some in COBOL, some in Java, some in other exotic languages, and you had to make those calls to compare them. That's rationalization. Then modernization — rewriting software from one tech stack to another — and data engineering. Usually we lock people up for three days, provide food and water, and take them through all of this to train them; here we'll show you quick glimpses and take questions as we go.

Before I switch to the demo: how does this actually look? This is the architecture of the platform we have, and this is how we do it with AWS, powered by different AWS tools. I usually go bottom-up: the gen AI models from AWS — everything in Bedrock, Claude, Llama, and so on — plus the traditional AWS services. What we essentially do is create a data fabric that holds our knowledge graphs and vector stores, which codify everything we learn about the current system, with an AI layer on top. One of the key elements there is what we call the model switchboard, which provides the flexibility to pick different models within the same use case, within the same agent — the right model for the right use case and the right activity. There are also capabilities like building patterns and letting projects and users build their own use cases using the capabilities the platform has, the knowledge engine, and so on. On top of that sit all the different use cases you see here. For the user experience, we either plug into tools like Jira and ServiceNow, or there's a web interface, a chat interface, and an exploration interface where you can try out your own prompts.
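As a rough illustration of the switchboard idea just described, the sketch below routes each activity to one of the subscribed Bedrock models based on a declared quality/cost/speed preference. The model IDs, ratings, and activity thresholds are illustrative assumptions, not the platform's real catalogue.

```python
# Illustrative model switchboard: pick a subscribed model per activity based
# on a declared preference. Ratings and thresholds are assumptions for the
# sketch, not the platform's real configuration.
from dataclasses import dataclass


@dataclass
class ModelOption:
    model_id: str   # Bedrock model identifier
    quality: int    # relative reasoning strength (1-5)
    cost: int       # relative cost per token (1-5, lower is cheaper)
    speed: int      # relative latency (1-5, higher is faster)


SUBSCRIBED = [
    ModelOption("anthropic.claude-3-5-sonnet-20240620-v1:0", quality=5, cost=4, speed=3),
    ModelOption("anthropic.claude-3-haiku-20240307-v1:0", quality=3, cost=1, speed=5),
    ModelOption("meta.llama3-70b-instruct-v1:0", quality=4, cost=2, speed=3),
]

# Heavier activities (e.g. explaining assembler) demand quality; lighter ones
# (e.g. explaining Terraform) can trade quality for cost and speed.
ACTIVITY_MIN_QUALITY = {"explain_assembler": 5, "explain_terraform": 2, "summarize_ticket": 2}


def pick_model(activity: str, preference: str = "best_quality") -> ModelOption:
    candidates = [m for m in SUBSCRIBED
                  if m.quality >= ACTIVITY_MIN_QUALITY.get(activity, 3)]
    if not candidates:
        candidates = SUBSCRIBED  # fall back rather than fail
    if preference == "best_quality":
        return max(candidates, key=lambda m: m.quality)
    # "lowest_cost_highest_speed": cheapest first, then fastest
    return min(candidates, key=lambda m: (m.cost, -m.speed))


print(pick_model("explain_assembler").model_id)                              # strongest model
print(pick_model("explain_terraform", "lowest_cost_highest_speed").model_id)  # small, cheap model
```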
With that, I'll pause the slides and show you a glimpse of the live platform, which we call GenWizard. The first thing I'm going to show you is knowledge — how you actually build the knowledge base. For this I have an application from GitHub called PayPro. It's a C-based application written about ten years back and last modified seven years back. What I'm going to show you is how you take this and reverse engineer it, using the entire suite of foundation models from Bedrock, down to its functionality in simple English. When we started, we didn't know what this application actually did.

Here is PayPro loaded into the system. It takes about an hour or so to ingest, and we run several iterations. The first thing that happens is a reverse engineering agent goes through it: it scans the code, identifies what type of application it is, and picks the best pattern for reverse engineering it. Here you can see it calling out a pattern, and we've configured it to pick the AWS stack — for every activity it picks a specific foundation model and a series of trigger prompts to start the process.

To the question from the audience: yes, the agent determines the model, and you can always override it. What we do up front is say, "these are the models I've subscribed to, pick from these." You can also say "I want the best possible quality" or "I want the lowest cost and highest speed"; you specify those parameters and the agent picks the right model. For example, to explain assembler you need the most powerful model available, but to explain Terraform you can pick a smaller model. Those decisions are made for you.

Once you run this, you get something like what you see here. I picked a relatively simple application so I can show you everything; when Luke switches over, we'll show you what a Java application with a million lines of code looks like — you'll be able to relate it to Marvel Avengers and DC movie graphics when you see it. So here is the application: as you can see, it's all C, no pun intended. Let's take one of these, account management — a lot of C code. Now, I could take this code, put it straight into a foundation model, and try to get an explanation, but the problem is that you don't know whether what you're getting is right or wrong, and the larger the code, the harder that is.

What the agent actually does is chunk the code up and explain the different chunks. But it also takes that English explanation, converts it back into prompts, regenerates the original code using exactly the same model and exactly the same parameters, and then uses a deterministic model to compare the two sets of code — not the two sets of explanations, but the two sets of code, which is a very deterministic comparison — and scores it. You don't get 90% when you start; you probably get 15 to 20% if you're lucky. Then it repeats the chunking, changes the parameters, changes the model, and keeps redoing this until it gets it right. It knows when to stop, and if it's stuck it knows how to call a human friend to get more English — you're essentially reverse engineering a little bit of the human brain to explain what the application does. It keeps repeating this process until it reaches a desired score, which you can define: you can say I need 80% accuracy, or 70%, and so on.
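A minimal sketch of that verification loop follows. The explain and regenerate calls are placeholders for the foundation-model invocations, and the deterministic comparison here is a plain text-similarity ratio — a simplification of whatever scoring the platform actually applies.

```python
# Sketch of the explain -> regenerate -> compare loop. explain_chunk() and
# regenerate_code() stand in for foundation-model calls; the similarity score
# is a simplified stand-in for the platform's deterministic comparison.
from difflib import SequenceMatcher


def explain_chunk(code: str, model: str, params: dict) -> str:
    raise NotImplementedError  # placeholder: LLM explanation of a code chunk


def regenerate_code(explanation: str, model: str, params: dict) -> str:
    raise NotImplementedError  # placeholder: LLM regenerates code from English


def code_similarity(original: str, regenerated: str) -> float:
    """Deterministic comparison of the two code versions (0.0 - 1.0)."""
    return SequenceMatcher(None, original, regenerated).ratio()


def verified_explanation(chunk: str, attempts, target: float = 0.8):
    """Try (model, params) combinations until regenerated code matches the
    original well enough; otherwise return the best attempt for human review."""
    best_score, best_explanation = 0.0, None
    for model, params in attempts:
        explanation = explain_chunk(chunk, model, params)
        regenerated = regenerate_code(explanation, model, params)
        score = code_similarity(chunk, regenerated)
        if score >= target:
            return explanation, score
        if score > best_score:
            best_score, best_explanation = score, explanation
    # Stuck below the threshold: escalate to a human for more English context.
    return best_explanation, best_score
```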
As I do this, what I get is a near-functional explanation like this one, which tells you in business terms what the application is doing, with the technical references alongside. With that as a starting point, I can now do a lot of interesting things.

The first thing I can do is take this English explanation and convert it into stories: I can write prompts that say, "describe the same code through user stories." This tells you, in very simple functional terms, what the stories would have been if this application were written today. And it handles the things we usually don't like getting into the details of — the detailed description, the acceptance criteria, which almost never gets defined — you can make the LLM do that. Once you review this and confirm it's correct, you can use the stories as a base. You can of course abstract features from them, but the more interesting thing is to get a first draft of functional test cases that test those same stories — because your functional test cases are tied only to the stories, not to the actual code itself. So I generate those as well, along with descriptions and even test steps. From there I can generate test scripts in a language of choice — in this case Selenium. What you don't get is the data; the data bindings you do later. I've already done that, which is why you see it here, and you get the scripts for each of these. I do this for the entire application, one module at a time.

That's the technical view. The view I've always liked — it was on my wish list when I started my career — is something like this: a full graph of the entire application. It shows you: this is the application, these are the features, these are the stories, the test cases, the scripts, and all the code attached and tagged against them. I told you about rationalization: I can now take this functional map and map it to an enterprise architecture or a process hierarchy. What I'm doing here is taking a typical banking process hierarchy — this is a payment application — and mapping it to the processes: it has cash management, virtual accounts management, and so on, so you see the different processes present in it. Against each of these I can drill down and get the corresponding code back. This way, if there are many versions of a payment application, I can compare them, find out what's different and where the common functionality is, and that gives me a basis to rationalize.

That's what interests techies like us. For a business user, what excites them most is the Wikipedia of the application that we generate, like this. Here is a description; you've already seen the features and the stories, but you can also go in and generate more interesting artifacts — the UML diagrams, the sequence diagrams, the API specs, pseudo code, and so on — generate them once and have the team maintain them going forward. So for an unknown application, you get the documentation.

How am I doing on time? OK. So: code to English, but more importantly, mapping code to all of the functional descriptions.
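To illustrate what that kind of traceability graph enables, the sketch below uses networkx to link business processes, features, stories, tests, and code files, and then answers the two queries described above: "show me the code behind this process" and "what functionality sits above this code file." All node names are invented for the example.

```python
# Illustrative traceability graph (invented nodes): process -> feature ->
# story -> test case / code file, queried in both directions.
import networkx as nx

g = nx.DiGraph()
g.add_edges_from([
    ("process:Cash Management", "feature:Account Balances"),
    ("feature:Account Balances", "story:View account balance"),
    ("story:View account balance", "test:TC-101 balance shown after login"),
    ("story:View account balance", "code:account_mgmt.c"),
    ("process:Payments", "feature:Payment Initiation"),
    ("feature:Payment Initiation", "story:Initiate SEPA payment"),
    ("story:Initiate SEPA payment", "code:payment_engine.c"),
])

# Drill down: all code reachable from a business process node.
code_for_process = [n for n in nx.descendants(g, "process:Cash Management")
                    if n.startswith("code:")]
print(code_for_process)  # ['code:account_mgmt.c']

# Upward view: which processes, features, and stories sit above a code file.
impacted = [n for n in nx.ancestors(g, "code:payment_engine.c")
            if n.startswith(("process:", "feature:", "story:"))]
print(sorted(impacted))
```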
That was the difficult piece. The other piece is: how do I take what we already know about the application — the known documents, calls like this one — and get it digitized? What I'm showing you here is that you can take a session like this and have many different recordings; each of these sessions is something like a one-hour call that actually happened. We again use a multimodal model to index it — fairly standard stuff. What you're seeing on the right is documentation of that particular area of functionality, generated from the video and the associated documents.

From the synthesized description, users can then go in; I can prompt for the required level of detail from what I've already digitized, use the codebase, and so on. One of the biggest challenges we see in production systems is that we don't have enough standard operating procedures or system understanding documents that actually tell you what the application is doing. I can now get those generated. I'm prompting manually here, but the agent can also distinguish what looks like instructions from free-flowing information and generate an SOP like this. I've taken all the text here, and what you're seeing is a generated standard operating procedure, along with a process flow for how I would resolve different types of tickets or service requests in that process area.

What I can further do is take a standard operating procedure generated like this, get it properly verified, and once I confirm it, curate it and convert it into an automation workflow like this one — which tells me how I would go about resolving every incident of a particular type in that process area. That then goes to a ticket resolution engine you plug into ServiceNow; Luke will show you our auto ticket resolver, which helps you automate straight from the knowledge captured during transition.

So that was the first of those areas — the knowledge part and what you can do with it. Before I hand over, I'll show a couple more things. Having got this knowledge, what can we do with it? First, we can help product owners write features and stories a lot faster. What you're seeing here are all the features you already saw; if I have to write a new one, all I need to do is select an existing story or an existing piece of functionality and generate new features and stories, or make existing ones better.

I spoke about architecture as code. What I can now do is take all of the knowledge I've reverse engineered and codified and use it to start generating architecture deliverables. This is the sequence I usually follow, and you can configure your own. We take all of this and create a functional architecture artifact: here are the features we already reverse engineered; I can now pick a reference architecture for my system — you would have your own, or you can pick a standard one — and use it to generate a new architecture. This takes about 20 seconds: it uses your reference architecture, takes all the reverse-engineered features, and overlays them onto a functional architecture. In the interest of time I'll pause it here.
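To give a feel for what "architecture as code" can mean in practice, here is a minimal sketch: the reference architecture is a plain data structure, reverse-engineered features are assigned to its layers (by a trivial keyword rule standing in for the model-driven assignment), and the result is an architecture object a renderer could turn into a draw.io or Visio diagram. All names and the mapping rule are assumptions for illustration.

```python
# Minimal "architecture as code" sketch: a reference architecture as data,
# with reverse-engineered features mapped onto its layers. The keyword rule
# stands in for the model-driven assignment the platform performs.
import json

REFERENCE_ARCHITECTURE = {
    "name": "payments-reference",
    "layers": {
        "channels": [],
        "payment processing": [],
        "accounts & ledger": [],
        "integration": [],
    },
}

LAYER_KEYWORDS = {
    "channels": ["ui", "portal", "notification"],
    "payment processing": ["payment", "sepa", "clearing"],
    "accounts & ledger": ["account", "balance", "ledger"],
    "integration": ["api", "file transfer", "interface"],
}


def build_functional_architecture(features):
    arch = json.loads(json.dumps(REFERENCE_ARCHITECTURE))  # deep copy
    for feature in features:
        placed = False
        for layer, keywords in LAYER_KEYWORDS.items():
            if any(k in feature.lower() for k in keywords):
                arch["layers"][layer].append(feature)
                placed = True
                break
        if not placed:
            arch["layers"].setdefault("unassigned", []).append(feature)
    return arch  # an object a renderer could turn into an editable diagram


features = ["Account balance enquiry", "SEPA payment initiation", "Payment status API"]
print(json.dumps(build_functional_architecture(features), indent=2))
```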
What you get is a functional architecture like this. What you're really seeing is an architecture object, but since nobody wants to read an architecture object, we dumb it down and provide it as a draw.io or Visio diagram that you can edit right here. Anything inaccurate you can modify and change to what you want; you can add layers and components and confirm what the functional architecture is going to be.

In the very same way, you can get an application architecture, which is what you're seeing here: I can say I want a three-tier application with these layers, write a set of prompts, get the result, modify it, and confirm my application architecture. Another interesting thing: you saw that the whole application is in C, and suppose I want to rewrite the UI. I can say I want a React UI; it takes the current logic and starts the process of rewriting it on a new tech stack, telling you how to convert it — and similarly with every layer.

We then use this to specify a technical architecture pattern — say, an event-driven serverless architecture on Lambda — and I get a technical architecture as code. From there I can provision the entire development environment through Ansible and Terraform scripts — infrastructure as code, because I have architecture as code. I trigger the whole thing off and I get the environment provisioned, and I also test out the architecture to see whether it's one of those PowerPoint architectures or a real one. If I promote this into production, it becomes my production configuration. Similarly for designs, user interfaces, and so on.

On accuracy: we generate pseudo code and the rest at a defined accuracy threshold at each step, very similar to what I showed you. You set an accuracy per step, and it's not that you run it end to end — at every step you have a human verifying it; it's there to augment. In the world of gen AI I say 60 to 70% accuracy is great, and then you need the human to verify and do the rest.

Once you have the pseudo code and the code-related specifications, you can generate code on the basis of the architecture you saw. You choose a stack — in this case we're choosing Q among the components — create a code branch, publish it to the repository, and the developer gets an IDE with the required plugins and the required generated code, and they start coding with Q right away. Similarly for testing, which you've seen.

One last piece before I hand over, something I find exciting. The same reverse engineering you saw, we can now do right across the estate: we can take a CMDB, read everything in it, and enrich it with a process hierarchy. What you typically start with is a CMDB like this — relatively empty, which is how most production systems and configurations really look, with varying degrees of accuracy. What we're now able to do is ingest all of this knowledge and the documents, bring in the process information and which processes each item relates to, and use that to enrich the same CMDB. What you're seeing here is a much more accurate CMDB with most of the fields populated. With the enriched CMDB we then run your diagnostics, consumption analytics, and so on.
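A simplified sketch of that enrichment step: sparse CMDB configuration items get missing attributes filled in from the reverse-engineered knowledge, keyed here by application name. The record shapes and field names are assumptions for the example.

```python
# Simplified CMDB enrichment sketch (field names are assumptions): fill gaps
# in sparse configuration items from the reverse-engineered knowledge base.
cmdb_items = [
    {"ci_name": "paypro", "owner": None, "business_process": None, "depends_on": []},
    {"ci_name": "ledger-svc", "owner": "finance-it", "business_process": None, "depends_on": []},
]

# Facts extracted from code, documents, and recorded sessions.
knowledge = {
    "paypro": {"business_process": "Payments", "depends_on": ["ledger-svc"], "owner": "payments-it"},
    "ledger-svc": {"business_process": "Accounts & Ledger"},
}


def enrich(items, kb):
    for item in items:
        facts = kb.get(item["ci_name"], {})
        for field, value in facts.items():
            # Only fill gaps; never overwrite values already curated by humans.
            if not item.get(field):
                item[field] = value
    return items


for ci in enrich(cmdb_items, knowledge):
    print(ci)
```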
We overlay all of this and then take decisions on what to rationalize, what to modernize, how to simplify the IT estate, and how to start modernizing or rewriting the systems. We generate recommendations like these on what to do with the applications — what to retire, what to keep, where you have overlapping functionality — and then start the whole process of software development. So that was a very quick glimpse of the different things we do: establishing the knowledge, developing new software, rewriting software, simplifying the stack, and so on. Now I'll hand it over to Luke to show some of the magic on the live production side — incidents, monitoring, observability — and you'll see the million lines reverse engineered, which he triggered as well.

Great — thanks. For the last six months I've been fortunate enough to be at MIT, working through a course there, and our major team project is figuring out how we actually apply this technology in the enterprise. The first thing they taught us is that it's really important to understand the problem statement you're actually trying to solve, so I created these two views. For me, the picture on the right is the actual problem we're facing inside the IT enterprise. As Kaushik mentioned, we've layered our systems and encapsulated them — we layered them because we needed to get key features out to market quickly — and in doing so we layered over core legacy systems. Many of our larger clients now have, at the core of their estates, the core mainframes, plus legacy Java applications as old as my career, made up of many different iterations of the language and many other permutations. On the left-hand side, the executives look at the IT services and don't see the outcome they're investing in. Fundamentally, the opportunity I see generative AI unlocking — and what Kaushik has shown — is the ability to understand what's happening on the left, decompose it, and then enable essentially a rewrite of the system. So, are we rewriting a watermelon? Are we trying to fix the watermelon to actually make it 100% green?

When we look at the service delivery lifecycle, this is what we map out. On the left-hand side are the core monitoring systems, where we've been infusing AI into our EventOps layer so it can enable smarter correlation and start to use that living knowledge base Kaushik spoke about — we can infuse the intelligence coming out of the knowledge base we've been building to help enable the RCA. From there we move into the right-hand loop, which is essentially managed by workflow. When we first started this process we were triggering core pipelines to enable change in the environment, but we soon realized we needed the workflow to encapsulate automated scripts — Ansible or Terraform — as well as pipelines and potentially other sorts of automation, and then carry the issue through the lifecycle to resolution.
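A rough sketch of what encapsulating different kinds of automation behind workflow steps can look like: each node declares its kind (script, pipeline, or agent) and a runner dispatches accordingly. The runner functions are placeholders, not the workflow engine's real API.

```python
# Rough sketch of workflow nodes wrapping different automation types.
# run_ansible/run_pipeline/run_agent are placeholders, not real engine APIs.
from dataclasses import dataclass
from typing import Callable, Dict, List


def run_ansible(step): print(f"ansible-playbook {step.target}")    # placeholder
def run_pipeline(step): print(f"trigger pipeline {step.target}")   # placeholder
def run_agent(step): print(f"invoke agent '{step.target}'")        # placeholder


@dataclass
class Step:
    name: str
    kind: str     # "script" | "pipeline" | "agent"
    target: str   # playbook path, pipeline id, or agent task


RUNNERS: Dict[str, Callable[[Step], None]] = {
    "script": run_ansible,
    "pipeline": run_pipeline,
    "agent": run_agent,
}


def run_workflow(steps: List[Step]) -> None:
    for step in steps:
        RUNNERS[step.kind](step)  # carry the issue through to resolution


run_workflow([
    Step("restart service", "script", "playbooks/restart_app.yml"),
    Step("redeploy", "pipeline", "deploy-prod-42"),
    Step("draft RCA note", "agent", "summarize incident and likely root cause"),
])
```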
What we've been working on more recently is the resolution code generator. We can now take the information we get out of the core events, pass it through triage within the workflow itself, and then pass it into a first set of agents that can generate the resolutions — or at least partially generate them. If nothing else, we can at least start with a code impact assessment.

When we look at the lifecycle of the SDLC, we see three core primary use cases: the first is forward engineering, which Kaushik was showing; the second is reverse engineering; and the third is what we call CIA on top of that reverse engineering — doing a change impact assessment on existing code bases we have reverse engineered, so we can determine where we need to make changes in the code base and what those changes would touch. To be very clear: CIA here means change impact analysis — that's right, and nothing else; we're not in the CIA yet.

Let me briefly show you the blueprinted architecture so you can see how it comes together. Each of our core assets — the automation workflow as well as the EventOps layer — is built predominantly on core Amazon serverless. The automation layer actually runs on Kubernetes clusters, but the EventOps layer is completely serverless, and we're moving towards a serverless base architecture running on key Amazon components — components we assemble together and augment. We're finding this new architecture gives us improved resiliency and, fundamentally, the ability to scale, and we can augment that base architecture as we add new capabilities and features. We're also seeing a really large reduction in the security overheads we traditionally had to maintain with the larger stack across the organization.

Going back to the SDLC, this is how we see the flow working, from the reverse engineering into the business requirements through to the application artifacts. Some of the interesting learnings: Kaushik mentioned it briefly, but when we first started the reverse engineering we were relying on the generative AI itself to discover and understand the code. Since then we've incorporated algorithmic approaches to first parse out the basic code bases and structures, and then we layer on what he was demonstrating — the code explanations themselves — inside that graph, so we can map everything through an organized structure. We needed that structure to reduce the probabilistic error we would otherwise be introducing.
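To illustrate the "structure first, explanations second" idea, the sketch below uses Python's own ast module as a stand-in for a language-appropriate parser (the real code bases are Java, COBOL, and so on): the parser builds the deterministic structure, and a placeholder explain() call attaches a model-generated explanation to each node.

```python
# Structure-first sketch: a deterministic parse builds the index, then an LLM
# explanation is attached per node. Python's ast stands in for a Java/COBOL
# parser; explain() is a placeholder for the model call.
import ast


def explain(source_fragment: str) -> str:
    return "<model-generated explanation>"  # placeholder


def build_structural_index(source: str):
    tree = ast.parse(source)
    index = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            fragment = ast.get_source_segment(source, node) or ""
            index.append({
                "name": node.name,
                "kind": type(node).__name__,
                "lineno": node.lineno,
                "explanation": explain(fragment),  # probabilistic layer on top
            })
    return index


sample = '''
def post_payment(account, amount):
    """Apply a payment to an account balance."""
    account["balance"] -= amount
    return account
'''
for entry in build_structural_index(sample):
    print(entry["kind"], entry["name"], "->", entry["explanation"])
```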
The results we're seeing at the moment vary with the complexity of the code base, but we are seeing an improvement in effort of between 30 and 40 percent, and an improvement in speed in a similar range, from being able to generate large chunks of these artifacts. When we first started this journey we were really concentrating on code generation, but we quickly realized that the broader SDLC has many other opportunities, and you need to look at those other layers and introduce generative approaches there too — because the detail you need when you actually get to code generation (and when we do code generation we're generating the first tranches of the code outcomes) requires a lot more information in order to produce something that is much more complete.

In our newer SDLC workflows, this is the approach we're following, and we're working through how we incorporate agentic networks into our core workflow approaches. What we're finding is that using the agentic networks to first create an understanding of the code is critical — essentially a BA-type role. What we generate off the back of that depends on the outcome we're after. If we're looking to transform the code base into a new language, as Kaushik was describing, then when we do the reverse engineering we're really reverse engineering the behavioral understanding of the code base, because we do not want to carry over the existing application architecture. At one of our larger clients we were transforming an older Angular application into React, and initially we reverse engineered it including the core application architecture. Of course, when we then passed that on to the forward engineering agent, it started to create a React application, but it was a bit of a Frankenstein, because it was taking the Angular application architecture into consideration. So we started to curate the output of the agentic networks to make sure we carried through only the key functional flows — with the human in the loop to understand and curate them — and from there we pass it to the secondary agentic networks, which do the core generation of the code.

When we assessed the reverse engineering outcomes, we saw they were remarkably accurate when done from code — in the 80s in that particular scenario — which was fantastic, and we knew that because we reviewed it as humans, improved it, and curated it before passing it on to the secondary stage. The CIA, or rather the transformed forward engineering outcome, was about 60% complete, and we knew that because we benchmarked it and completed the code base ourselves.

You can see these flows here: within our workflow engine we've incorporated the particular agentic network steps as nodes, and then we pass the output, as we normally would, into our CI/CD loops. Off the back of the CI/CD loops, whatever isn't working — or whatever gets thrown out by the security scans, the SAST and DAST outcomes — passes back into the agentic networks and gets corrected before it goes on to the human in the loop, where our Q Developer assistance also helps the human complete what they're doing.
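A bounded sketch of that loop-back pattern: an agent drafts code, a CI/CD gate (a placeholder returning SAST/DAST-style findings) reports what failed, and the findings are fed back to the agent for a limited number of rounds before the work goes to the human in the loop. Both helpers are placeholders, not real pipeline APIs.

```python
# Sketch of the agentic loop with a CI/CD gate. generate_code() and run_gate()
# are placeholders; real runs would call the agent network and the pipeline.
from typing import List, Tuple


def generate_code(task: str, feedback: List[str]) -> str:
    raise NotImplementedError  # placeholder: agentic code generation


def run_gate(code: str) -> List[str]:
    raise NotImplementedError  # placeholder: build, tests, SAST/DAST findings


def generate_with_gate(task: str, max_rounds: int = 3) -> Tuple[str, List[str]]:
    feedback: List[str] = []
    code = ""
    for _ in range(max_rounds):
        code = generate_code(task, feedback)
        findings = run_gate(code)
        if not findings:
            return code, []      # clean: hand to human review, Q assisting
        feedback = findings      # loop the failures back to the agents
    return code, feedback        # still failing: human in the loop takes over
```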
From there it can pass back through the agentic networks again to fix issues. We also do an element of testing in this layer, in an automated way wherever possible, and then eventually the final human-in-the-loop test. We're seeing a huge amount of savings where we enable the agentic network to do the heavy lifting, particularly in that loop-back layer where we're fixing defects or issues coming out of the CI/CD.

Now let me switch over to my machine for the demo. That first flow I was showing you — what we're seeing here is our SaaS automation service, which automates many different workflows across the clients on our SaaS. We have assets deployed in our SaaS cloud in 21 different countries across different regions, but it also runs on client premises where they want it internally. Over the last couple of weeks we've automated somewhere in the vicinity of 5 million tickets, with the chatbots involved in that process as well, as we work through resolving the issues that come in.

One of the examples Kaushik spoke about was the call center. We're now also infusing this into our call centers, so this flow doesn't just receive tickets through the ticketing engines; they can also come in through Amazon Connect and the IVR. I made this call a little earlier today so you can see it. What we found when we started these use cases was that our call center teams were spending a lot of time transcribing information into some form of templated ticket before it would get lodged in the system. So initially, when we applied gen AI, we applied it to that core template, just to summarize. You can see here that I reported an issue around no longer receiving orders, the time it began, and no known workaround, and you can see how nicely the template gets cleaned up.

From there it gets lodged in our system and starts to flow through the loop we spoke about, where it gets enriched: the knowledge engine — that knowledge base — gets incorporated to work out what the likely causes of the issue are and what the resolution steps are. Off the back of this we also link it through our EventOps layer, which does the enrichment, updates the ticket, and starts to route it into the automation loop powered by the workflow. We're finding we can cut the MTTR down by somewhere in the vicinity of 70 to 80 percent of the time it used to take people to do that triage. Before, what we would see is huge conference calls: everyone would be invited, and they would then have to deliberate on the actual issue itself. By doing this enrichment and routing straight to the particular root cause, we avoid most of that.
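As a hedged illustration of the call-to-ticket step, the sketch below asks a Bedrock model (via the standard boto3 Converse API) to turn a call transcript into the fields of a templated ticket. The field list, prompt wording, and model ID are assumptions for the example, not the production template.

```python
# Illustrative call-to-ticket summarization using the Bedrock Converse API.
# The ticket fields, prompt, and model ID are assumptions for the sketch.
import json
import boto3

bedrock = boto3.client("bedrock-runtime")

TICKET_FIELDS = ["summary", "impacted_application", "severity",
                 "workaround", "suggested_assignment_group"]


def transcript_to_ticket(transcript: str,
                         model_id: str = "anthropic.claude-3-haiku-20240307-v1:0") -> dict:
    prompt = (
        "Convert this support call transcript into a ticket. Respond only with "
        f"JSON containing exactly these keys: {', '.join(TICKET_FIELDS)}. "
        "Use null where the caller gave no information.\n\nTranscript:\n" + transcript
    )
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0},
    )
    return json.loads(response["output"]["message"]["content"][0]["text"])


# Example (requires AWS credentials and Bedrock model access):
# print(transcript_to_ticket("Caller reports orders stopped arriving at 6am, no workaround."))
```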
We bring in the right teams, along with the information the system knows about, so we're able to reduce the audience of that call — the automation itself brings the teams together. The other benefit is that the information shared by the team on the ground is captured much faster, and we're not missing information, which is something we were also seeing happen before.

When it passes into our EventOps layer — and this is a view of EventOps — you can see that we look across all of the key events, and in this particular example you can see we start by deduplicating the events. A lot of monitoring and ticketing systems today push huge volumes of events into our ticketing systems, and there's a lot of noise. So the first job of EventOps is to rationalize down the noise by looking at the duplicates, then find the core master tickets that exist in the system, and from there enable a form of correlation across that layer. We also correlate against what exists inside the CMDB, and from there we bring it together so that, with the information from the core events in the system and from the CMDB, we can see what the likely relationships between issues are, and we infuse the particular resolution itself, pulled out of the knowledge base we've built. What we saw with our support teams was that they were getting spammed with so many different issues that it was very hard for them to determine the relationships between them; this way we can cluster the issues together, prioritize the master tickets, and provide the particular resolution, pulled straight out of the data store.

As we pull this together you'll also notice the view of problems. We're also looking for core pattern sets of problems within the events themselves, and some of our larger clients right now want us to eradicate the underlying problems. So EventOps looks for those repeating patterns to determine exactly when certain problems began, and then again uses the information from the knowledge base to associate the likely resolution of the issue.
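A simplified sketch of that deduplication and correlation step: events are fingerprinted on source, resource, and error signature, duplicates are collapsed with a count, and groups arriving close together on related CMDB items become candidates for one master ticket. The event fields and relationship data are assumptions for the example.

```python
# Simplified EventOps sketch (event fields are assumptions): fingerprint and
# deduplicate noisy events, then group candidates for a master ticket.
from collections import defaultdict

events = [
    {"source": "app-monitor", "resource": "paypro-api", "signature": "HTTP 500 /orders", "ts": 100},
    {"source": "app-monitor", "resource": "paypro-api", "signature": "HTTP 500 /orders", "ts": 101},
    {"source": "db-monitor", "resource": "paypro-db", "signature": "connection pool exhausted", "ts": 102},
]

# Known relationships pulled from the (enriched) CMDB.
cmdb_depends_on = {"paypro-api": {"paypro-db"}}


def deduplicate(raw):
    grouped = defaultdict(list)
    for e in raw:
        grouped[(e["source"], e["resource"], e["signature"])].append(e)
    return [{"fingerprint": k, "count": len(v), "first_ts": min(x["ts"] for x in v)}
            for k, v in grouped.items()]


def related(res_a: str, res_b: str) -> bool:
    return (res_b in cmdb_depends_on.get(res_a, set())
            or res_a in cmdb_depends_on.get(res_b, set()))


deduped = deduplicate(events)
master_candidates = [
    (a["fingerprint"], b["fingerprint"])
    for i, a in enumerate(deduped) for b in deduped[i + 1:]
    if related(a["fingerprint"][1], b["fingerprint"][1])
    and abs(a["first_ts"] - b["first_ts"]) < 300  # close together in time
]
print(deduped)
print(master_candidates)  # pairs worth correlating under one master ticket
```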
When you look across the different workflows we've built — you saw in my initial dashboard that we had hundreds of them — this is an example of what a workflow looks like. What we're now incorporating is the ability for our forward and reverse engineering components to be encapsulated in the workflow as well, so each of these nodes is becoming agentic. When we're generating certain code bases, the workflow encapsulates that process and passes it off into the CI/CD loops, like I was showing in the diagram, to resolve the particular issues that get generated, and from there we can loop it back to completion. Maybe I'll just pause here to see if there are any questions on what I've spoken about. OK, great.

I'll conclude with a view of one of our larger applications that we've had to reverse engineer. This is a million lines of Java code which we've reverse engineered and visualized, and you can see the complexity of some of these larger applications. Being able to understand those core relationships, and then filter via the core functional trees — the relationships of core functions — we use those relationships as filters through this reverse-engineered graph to understand exactly what is being impacted through the layers of code. With the simpler PayPro application you could see how these get mapped under the covers; here we're dealing with millions of lines of code across many different large applications, and we have to use more sophisticated structural graph models to understand them and then reverse engineer them. So this is just a view of them, so you can appreciate some of the complexity we're dealing with.