AI vs Generative AI
Hello, everybody. Welcome to today's podcast. We are excited to have Christopher LaCour, an esteemed leader in the information technology industry, joining us. Christopher has worked in technology for over three decades, ranging from small start-ups to global corporations. He has endeavoured to become a known and strategic voice in the generative AI space.
For the last two years, he has done this by being vocal in the LinkedIn community, consulting as a fractional chief and Gen AI strategist, and as a part of The Professional Speakers Bureau International. Today, we will explore his journey, the impact of AI and generative AI, and much more.
Welcome, Christopher. You have become a strategic voice in the generative AI space. With that in mind, would you like to shed some light on the difference between AI and generative AI, and how do you see this distinction influencing industries in the near future?
Well, yeah, it's a good idea to go over this with folks, especially when they're beginning to work in the AI field. AI has been around for a while; generative AI is the new thing that's come onto the scene within the last couple of years. The easiest way to differentiate between the two is that traditional AI works with items that already exist: transforming or working with pieces of data that currently exist to put them into a layout that you want to be able to do something with. Generative AI creates something new. You're creating new content or new information based on a training model, or an LLM, or something that you have specifically curated.
Right. You mentioned LLMs. Could you help us understand what large language models are? So this is something that people also don't understand a lot. LLM is the abbreviation for large language model. It's a specific type of model that uses machine learning to process and generate human-like text.
They are based on neural networks, so a machine learning model is made up of small mathematical functions, and that is what allows LLMs to perform tasks like writing, translating, and answering questions. The LLM is the thing that you train by feeding it data or information so that it provides a human-like response, a response based on the training information that you feed it. In its simplest form, when we use ChatGPT and create a new chat, you're working directly with an LLM: you're working with the thing that is generating text based on the training that it has already been given. That's correct.
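To make the "trained to predict a response" idea concrete, here is a deliberately tiny sketch. It is not a real LLM or a neural network, just a toy bigram model illustrating the core loop of next-token prediction from training text; the corpus and function names are invented for illustration.

```python
from collections import defaultdict, Counter

# Toy illustration of next-token prediction: a real LLM uses a neural
# network with billions of parameters, but the core loop is the same --
# predict the next token from what has been seen so far.
corpus = "the model reads text and the model predicts the next word".split()

# Count which word follows which (a bigram "language model").
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start, length=5):
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        # Pick the most frequent continuation seen in training.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is entirely determined by the training counts, which is the point Christopher makes: the model responds based on what it was fed, nothing more.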
I think this brings me to the next question. We all know how important it is to equip professionals with the right skills to leverage AI effectively. Building on the growing importance of this, are there any generic or universal training programs for generative AI? There's a lot of training material out there. New models and new certifications are popping up seemingly every day. The big one that has come on the scene lately is the OpenAI Training Academy, which was announced September 23rd of this year.
It primarily looks like it's going to be focused on development assistance, in an attempt to make generative AI a more ubiquitous part of our training, development, and deployment models for software engineering. But there are quite a few courses that exist on Coursera and Pluralsight; these are online training platforms available to anyone who wants to buy a subscription. The Google generative AI training systems have also been around for a very long time. They focus a lot on language models and translation, because Google Translate has been around for a good long while too and has dipped around the edges of this space. The most recent one that I like the best is the IBM training program for generative AI.
It takes you all the way through prompt engineering and into AI ethics. It's a very comprehensive system that can help you go from zero to hero in a very short amount of time. Depending on the track that you take, there are as few as 40 hours and as many as 260 hours' worth of training materials available to you. That sounds interesting, Christopher. If I recollect from our first conversations, you mentioned an initiative you are working on, the Centre of Excellence.
Could you tell us more about it and the impact it aims to achieve? Okay, so thank you for that. This is something that I have been wanting to work on for quite a long time, ever since generative AI came on the scene and I recognized it as an emergent technology.
That's something that I really wanted to focus on and work in. Something I have noticed over the course of my career is that, outside of high-tech companies or bleeding-edge technology and software development companies, research and development doesn't really exist anymore in the same sense that it did 20 or 30 years ago. We've kind of removed that function from regular service provider companies or retail companies that have technology departments. We know that because we've moved over to this agile development model that has become the standard across most organizations, especially in the US, and that is a very transactional system: a customer wants this, or the sales team wants this.
We're adding this to our functionality, or we're accommodating this specific piece of new methodology that has to be added to our application. So the developer team provides that, and they don't go any further. The Center of Excellence is another way of redefining how generative AI is going to fit inside companies that don't have traditional research and development right now.
I've been assisting with business readiness by analyzing companies' policies with their business analysts and data analysts to see if there is some type of application that could exist within the organization as it sits. It's a lengthy process; it goes from three months to nine months before you get a real ROI. But by the end of it, you do know a couple of things.
You know for certain if a generative AI solution is right for you and your company, and you also know that you're going to be able to use this for future applications. Your business analysts become the engine that drives the Center of Excellence, and they drive the R&D department, along with the advent of AI solutions architects, prompt engineers, and AI ethicists, as this grows and becomes another department inside your organization, if it's right for you. You can also find out that it's not: it might be a situation where you don't need generative AI right now, where you don't have that type of necessity.
So it's something that you can put off for a little while. But I think in about five years' time, just about every organization that has any type of repetitive business process oriented around software will have an AI Center of Excellence somewhere in their pipeline. Moving on to our next segment: Decoding Risk, How Generative AI Is Shaping Modern Risk Analysis. Christopher, I would like to ask you: risk analysis is one of the areas where generative AI is being explored for enhanced efficiency. Could you share some insights into how it's being utilized in this domain? Well, sure. Risk analysis is something that comes up all the time, particularly if you work as a CIO or CISO.
With anything that has to do with generative AI, there's always the worry of connecting it to a database that contains private information, because there are some unknowns that come along with generative AI that you have to make sure you account for. But you also want to be able to use it as a tool to analyze your inherent risk. So it's important to understand what generative AI can do and, more importantly, what it cannot do. Generative AI has a mandate.
The way that it functions, the way that it works, is: you ask a question or provide a prompt of information, and its mandate is to provide you a response. The response is going to be dictated by a series of filters that it creates based on the training material it has been given.
You can collect or create a trained model that is tailored specifically to your organization or your line of business, or you can use one of the generic ones that exist out there in the world. Generative AI is very good at editorializing, at summarizing, and at categorizing things. It is not good at understanding. So what you could do is feed a generative AI model your infrastructure, your network, and information about your application and what it does, then tell it to provide you a categorization of that information, or a summary, or some type of report, and then hand that to an experienced security professional, who can go through it to discern what the areas of risk are. Now, what you can't do is, instead of saying "please provide me a report," say "tell me what my risks are."
It will provide you an answer, but it doesn't actually understand the concept of risk. It produces what it thinks you need based on the training material it's been given, but it's not guaranteed to give you a correct answer. And in all likelihood, it's going to miss something, and it's going to be something large.
So you always want to have an experienced security professional vetting this information for risk analysis. Generative AI is a fantastic, phenomenal tool to add efficiency and completeness to a security professional's work. It does not replace anything.
It just augments what they're able to do and speeds it up a little bit. There are several different types of risk analysis that exist in the IT field already: things like the risk matrix and bowtie analysis, terms anyone who works as a CISO knows, as well as risk assessment and root cause analysis. I think everyone knows root cause analysis: you just keep going down, trying to find the initial root cause that was the catalyst for the entire problem.
Now, these methods are not subjective. They have been used for decades, and they are essential to proper planning and to responding to incidents correctly. Generative AI can assist you in those main capabilities: editorializing data, paraphrasing data, and aggregating data, as I mentioned before.
So it is a fantastic tool in this capacity, but it doesn't replace anything. That's fascinating, Christopher. Building on that, qualitative risk analysis often relies on subjective assessments to rate risk based on likelihood and impact. Could you explain to us how this method works, and how experts determine whether risk levels are high, medium, or low? Okay. So this risk analysis comes with a couple of factors that are universal. You are gauging two factors if you are a security professional.
The first is impact; the second is the likelihood of something bad happening. And that's really it. You have to be able to define those two factors, then you essentially give each of them a number, you multiply them, and then you decide what level of risk you have.
Now, the universal standard is one to five for those two factors, so the highest risk is something that rates at 25. If something has the highest possible impact, it's a five, and at the highest possible likelihood, it's a five.
That's a rating of 25. Those are the things that you have to respond to immediately, the things where you have to say: this could be a company-ender, or this could take down an application or do enough reputational damage that we have to address it today, right now. That's something that is well defined throughout the entirety of IT security training materials, and generative AI assists with this by finding the parts of the data and infrastructure that may get missed, because security analysts and security directors literally have to read hundreds of items per week. There are new items that come out every week from Microsoft, from Google, from the government, and from the OWASP Top Ten.
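The impact-times-likelihood scoring Christopher describes can be sketched in a few lines. The 1-to-5 scales and the maximum of 25 come from the conversation; the band labels and cutoffs below are illustrative assumptions that an organization would tune for itself.

```python
def risk_score(impact: int, likelihood: int) -> int:
    """Score = impact x likelihood, each rated 1 (lowest) to 5 (highest)."""
    if not (1 <= impact <= 5 and 1 <= likelihood <= 5):
        raise ValueError("impact and likelihood must each be between 1 and 5")
    return impact * likelihood

def risk_level(score: int) -> str:
    # Illustrative bands; real organizations set their own cutoffs.
    if score >= 20:
        return "critical"  # e.g. 5 x 5 = 25: respond immediately
    if score >= 12:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

print(risk_score(5, 5), risk_level(risk_score(5, 5)))  # 25 critical
```

A 5 x 5 item lands in the "respond today, right now" band; a 2 x 2 item can wait for the next planning cycle.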
These are articles that are produced consistently and constantly, and staying up to date on them is an incredibly difficult thing to do. So generative AI, with its aggregation capabilities, its ability to take all that information and provide editorializing and categorization, allows you to take your infrastructure and all of the changes that come up every week or every other week and make correlations.
And it really does help with improved efficiency, because it's so easy to get behind on these things. Staying on top of the OWASP Top Ten usually becomes one of the biggest things that you want to do; those are the top ten risks defined by OWASP, and they change periodically.
New updates come out to shift around the priorities, or to shift around what numbers the items are. These are the things that you have to watch out for the most, and being able to take your infrastructure and ask how these changes are going to impact it, what the correlation is going to be between the changes to this list and my infrastructure, is something generative AI can really help you with a lot. So, Christopher, I'm shifting the focus slightly here. Since we all know that quantitative analysis takes a more data-driven approach, what advantages does it offer compared to qualitative risk analysis? So, qualitative versus quantitative risk analysis is a distinction that you deal with a lot in any kind of IT security. Qualitative risk analysis is fast but subjective.
It's something that you really want to switch over from to quantitative risk analysis, because that is more objective and more detail-oriented, and you can have contingencies in place for the go/no-go decisions that have to be made, and made quickly and decisively. But it does take more time and more planning. Qualitative risk analysis is something you can use to take a look at an overall big-picture problem that is presenting itself and make a decision based on what you believe the likelihood is of an impact that's immediate, or an impact that is not just hugely damaging but is also something you don't want to leave in place.
It's your immediate response versus your long-term response. Very well said, Christopher! So looking ahead, what types of improvements do you think generative AI could bring to these traditional risk analysis methods? There have been improvements to qualitative and quantitative analysis that generative AI allows. They center around the speed and accuracy of decision making, and around the three main things generative AI can do, because qualitative and quantitative analysis have different time frames surrounding them: the faster you can get to a quantitative, objective solution, the better off you're going to be.
Right? So having a framework built into your model, a trained model or a framework you want to adhere to, is something you can do with an LLM you have trained, with the data going into it specifically curated. And that's something that is harder to do without a model like this, without a tool that can aggregate this data as quickly as generative AI can. Discerning risk, no matter how good you are at it, no matter how many years you have as a technology manager or an IT or security manager or CISO, is never finished: new risks happen every day, every week, every month. Something new is hitting the horizon consistently.
And constantly. I think in 2023 we had four zero-day problems pop up just with Google Chrome in one year, and they were all brand new. They all had to be accounted for, and every single one of them had to be gauged against the infrastructure that currently existed, to see whether or not it impacted a given organization. Each one has to be looked at on its own, independently. And the speed with which you can react to those with a quantitative analysis is going to really determine how safe your organization is, how secure your organization is, and, more importantly, how secure your data is.
Data is king in this world, whether for the creation of the model schema or for the training of the model, and the protection and privacy of that information is what the entirety of IT security is based on: protecting that data in the fastest, most complete way possible. That's very insightful, Christopher; I think the way you've articulated it is really good.
So here I would like to ask you: is there a message you would like to leave with the audience about the shift we are seeing in the tech world today, and in its processes, due to generative AI? I've been in the technology field for a very long time. Generative AI is an emerging technology on the order of the internet when it comes to the change it's going to bring. This is something that is going to eventually alter every single area of business process that exists.
In the same way that the internet did in the mid-90s, we are looking at another massive shift in how we're going to be able to work and in how we're going to employ people. This is going to generate a number of jobs that don't exist right now, things that we can't even fathom today.
In two to three years, we're going to be seeing more and more titles pop up in and around this area, around management, curation, and creation of data, all focused in and around generative AI, so much so that it doesn't even currently fit neatly under any single chief that exists in most companies. It crosses too many planes, too many barriers: finance, sales, technology, and obviously security and data. These are all things that can report up to different chiefs: the chief of operations, chief of information, chief of technology, chief of finance.
It fits most clearly right now under a COO, mainly because it crosses so many. But it is very, very likely that as you grow in your generative AI journey, as a company and as an engineer or an office worker, this is going to alter your job in such a way that you'll be able to do things you never thought possible today, so much so that we have to be on the ground floor of it. We have to be right there at the beginning. I feel like I'm lucky to be involved at this level, this early on in its development, because it gives me a chance, gives all of us a chance, to be amongst the people, the leaders, that get to shape the questions. Being able to shape the questions about how the next generation of knowledge workers and engineers are going to work inside of technology is a very exciting place to be. I don't think there are a whole lot of chances to do that: one, maybe two, in your lifetime.
So I'm excited to be here at this point, with my level of experience, in a space where I get to work in this field. Very exciting. So, coming to the human-AI partnership: how can generative AI improve decision-making processes in a service company? Service-based companies thrive on information and data. It's a question of: what are you? A data-driven company, or a company that just likes to believe it's a data-driven company but really just wants to do whatever it wants to do anyway? It's a question you run up against all the time.
Being truly data driven means that you have to listen to the data. You have to be able to understand it, read it, and follow where it tells you. And this is something that generative AI can assist people with at a very granular level. Going through and transforming data as you do your analysis, data mining to discover trends and correlations between factors, is something that's been done for 50 or 60 years.
That's not new. Generative AI can help you make decisions by pointing out correlations that you had not thought of, because it does more extensive data analysis and categorization, in ways that can help convince the appropriate people to focus on the future correctly. In short, it can help companies be better at being data driven and, by being better at that, be better companies. And that's something that is hard to get to if you work in a service business or any service provider company. Take, for example, a service provider that is a telecom company.
Well, in telecom, 80% of what you deal with is going to be data. The data are the connection rate, the throughput, the success of connections, the resolution of the video call, whatever it is. All that information being fed into just an Excel spreadsheet or a database gets transformed in a very regular way, providing dashboards or visuals that executives or leaders can use to determine what the next phase of their business is going to be. Generative AI can help take that data, along with your documentation, and make correlations and categorizations that you didn't think of.
At least, ones you're not trying to think of right now. We tend to think in a certain way because we've been doing things the same way for so long. This helps in an incredible way.
I've seen it; I've actually worked directly with this in three or four different places, where we were able to see new directions and see that certain areas of the business were thriving. We were focusing an inordinate amount of time on areas that were less than one tenth of one percent of overall revenue but were taking up 15% of overall time. We just didn't know where to look for that time, and generative AI helped us curate that time.
It's an incredible tool to have at your disposal. That brings me to the next question, Christopher. Once these AI systems are implemented, measuring their effectiveness becomes extremely crucial. What metrics do you think should be used to gauge the success of generative AI implementations? Well, there's the big one, obviously: revenue. If you're able to come up with a generative AI solution or application that helps the bottom line, fantastic.
But I'll be honest with you: most of the time, that's not where you see it. You're not creating a solution and selling that solution to people unless you're a technology or software company. If you're any kind of service provider or a regular corporation, what you're going to be looking for is savings and efficiency. And that's going to come in the form of time or utilization: utilization minutes, utilization time for professional services, the number of connections or applications you're able to handle and work with consecutively or concurrently, things that are going to be helped directly by AI implementations. That's actually a lot easier to measure than you might think, with the help of generative AI. The first generative AI solution that I worked with was actually a way to track improvements.
And it was a little bit meta, it's true. But we were able to feed our regular utilization models into this generative AI application, which was all internal.
We did before-and-after comparisons for every implementation and every application, and the results are staggering. In one case, one application, in its first iteration, was saving a company of 200 people 1,650 hours per month. That's massive.
With one application, that level of savings is absolutely monstrous. And it's something that improves over time, too, through iterations: in the first few iterations, you can multiply that savings two- or threefold. So those are the two things you start off with. Revenue is the big one, but time is a huge one for me in my field.
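The back-of-the-envelope math on that example is worth seeing. The 200 people and 1,650 hours per month come from the conversation; the $60 fully loaded hourly cost is an invented placeholder to show how time savings translate into money.

```python
# Figures from the conversation above.
people = 200
hours_saved_per_month = 1650

# Time savings per employee.
per_person = hours_saved_per_month / people

# Placeholder fully loaded cost of $60/hour (an assumption, not a
# figure from the conversation).
hourly_cost = 60
monthly_savings = hours_saved_per_month * hourly_cost

print(f"{per_person:.2f} hours saved per person per month")
print(f"~${monthly_savings:,} saved per month at ${hourly_cost}/hour")
```

That works out to roughly a full working day saved per person per month from a single application, which is why time is the metric Christopher watches first.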
That's the most important one. That's right; I believe you. As you rightly said, time and money are very, very crucial. Which brings me to the next question.
Christopher, what are the best practices for training employees to collaborate with generative AI systems? Freedom to fail. And I know it's a cliche, I understand that, but run proofs of concept and allow teams to work with generative AI solutions and figure them out on their own, because telling them beforehand how they're going to use a solution, when they're trying to come up with a new application, is counterproductive. I'll give you one example: a sales team of ten people that sold one particular application.
They were not technologists, right? As a proof of concept, we gave them Microsoft Copilot and just said: use this in any one of the ways that you see fit.
We attached it to their mailboxes and to their Microsoft Teams; it was an M365 company. And we told them: figure out how it's going to best serve you. Over the course of one month, they went from working the way they always had to completely altering how they did internal meetings, internal communications, and customer-facing communications. Instead of having to meet every day for 45 minutes to an hour, they could cut that down to once or twice a week for a half hour, and the Copilot system would aggregate their communications between each other.
It would also aggregate data from release notes coming from the engineering teams, and it was able to put that into the pipelines that were personal to each one of them, documented in the systems Copilot was connected to, make all sorts of correlations and cross-references, and then provide them with a summary. So by the end of the month, a sales team that had almost no experience in technology didn't know how to work without Microsoft Copilot. That training is something that we, or any technologists, would not have been able to provide for them, because that's their job; I don't understand how they do their job, just like they don't know how to do mine. But they were able to creatively use a system that is relatively new to generate an entirely new way to do business internally that they all love. And everyone got a lot more than the subscription price out of it.
Right. So despite AI's capabilities, we all agree that human involvement remains vital. What role does human oversight play in generative AI implementations, and why do you think it is so important? The easiest way to frame this is necessity. Human oversight is an absolute necessity. Even if you have a mandate from your board of directors to work in the generative AI space, you can't just take a chatbot, replace a customer service person, put it on your website, and say: okay, we're done here.
Oversight is an absolute necessity to ensure that what is being said and done by that bot or that service is correct and consistent, because it's also training itself as it goes. You've heard some of the terms starting to be created around the generative AI space, things like hallucination, coined to keep up with these new behaviors. Remember: a model has a mandate to respond. It does not have a mandate to follow the rules; it doesn't really understand what rules are. It has a mandate to respond based on the information it has been trained on.
So you cannot leave it on its own without any type of oversight. You can let it take care of repetitive work and massage things for you in a specific way so you can work more efficiently, but in the generative AI space, it cannot function without direct human oversight. So, Christopher, thank you so much for helping us understand the difference between AI and generative AI, its impact, and the risks attached to it.
This brings me to the last question of our podcast today. Can you throw some light on how companies can proactively mitigate the potential risks of implementing generative AI? I just mentioned one: a model can alter your perception of what is true. There's a good example of this from Air Canada; anyone can look it up if you want. It was a chatbot attached to a website for customer interactions, where customers were asking questions about whether or not they could be reimbursed for certain types of tickets, and the chatbot's response was: of course you can, why wouldn't you be able to? Here you go.
This is how you do it. None of it turned out to be true; it was just something the bot responded with. The risk of generative AI is that it provides information that, while it sounds like a human is saying it, might be factually incorrect. So you have to make sure you make your prompts very specific and very detailed.
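One common mitigation for the failure mode in the Air Canada story is a guardrail that only releases bot answers matching approved source material, escalating everything else to a human. The sketch below is a hypothetical illustration of that pattern; the topic names, policy text, and function names are all invented, and a production system would use retrieval and review workflows rather than exact string matching.

```python
# Hypothetical guardrail: only let a support bot state policies that
# appear verbatim in an approved knowledge base; otherwise escalate to
# a human agent. This is the kind of oversight discussed above.
APPROVED_POLICIES = {
    "refund": "Refund requests must be submitted within 24 hours of booking.",
}

ESCALATE = "Let me connect you with a human agent."

def answer(topic: str, bot_draft: str) -> str:
    approved = APPROVED_POLICIES.get(topic)
    # Release the draft only if it exactly matches approved wording.
    if approved is None or bot_draft.strip() != approved:
        return ESCALATE
    return bot_draft

# A hallucinated draft never reaches the customer.
print(answer("refund", "Of course you can get a refund anytime!"))
```

The point is architectural: the model keeps its mandate to respond, but a human-controlled layer decides which responses are allowed out.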
Prompt engineering is a term you hear a lot; I'm not sure that's going to be a job that sticks around for a very long time. I think it's just going to be a skill, a bullet point in a lot of people's jobs. So making sure that the information you're receiving from the model is factually correct is a risk that has to be mitigated.
And knowing when you are going to go with a model that you train yourself versus one that has been trained on an open-source corpus, and understanding the information that has been curated into it, is paramount to being able to mitigate that risk. Know whether what you're getting back is factually correct, usable, and functional. You also don't want it to run in autonomy for too long, because, like I said, the models are teaching themselves as they go along. So if a model is consistently training and retraining itself on AI-created material, which is sometimes what happens in this space, it gets closer and closer to sounding like another bot and further and further away from sounding like a human. And that's not something you want; there hasn't been a case where that has been a desirable outcome in this space at all.
As for the others, I haven't actually seen a case where they are real: they're fears, not risks. The fear of losing your job, of a person being replaced by AI: I can't see that happening anywhere.
All I see is augmentation. I don't see it happening in customer service, where the most basic form of application for this is a chatbot.
You still need the people at the queue level to curate the correct information. A customer success or customer service person, augmenting their skill set with generative AI, may be able to handle more, to handle multiple conversations at once, but there's no replacement that goes on there. There's not even a reduction. All you're doing is increasing the amount of work the business can handle, so the end result is not going to be cutting back on people; it's going to be growth of the company. The other fear is connecting a model to a database that contains private data, private information.
There's a general fear of that, because personal healthcare information, or personal health information, is one of the bigger concerns for any company that has to deal with HIPAA regulations or with ensuring that medical information is kept secret or kept private. Connecting a generative AI model to a database that contains that type of information is a fear, but it's not a problem or a risk that I have seen manifest yet. The bigger risk is what's going to be legal in six months, because the laws for generative AI are still being decided on; it's too new a technology.
And Washington moves at a pace that does not keep up with the rate at which we are changing and adding new tool sets and new capabilities to generative AI. So there are calls to slow down; while I don't agree with that, I understand it. But these are fears, not risks. The risks that do exist can all be mitigated by the appropriate level of education for the people working within these spaces and the appropriate level of human interaction with the bots. Well, that brings us to the end of this podcast.
Thank you, Christopher, for being a part of our White and Wolf network and sharing such valuable insights with us. We look forward to connecting and collaborating with you in the future. Thank you. Thank you very much. I appreciate the opportunity to speak with you all. It's been a great experience for me.
All right. Thank you. Thank you.