Toju Duke's passion lies at the intersection of artificial intelligence and social impact. As an ethical AI advocate and a leader in Google's responsible AI efforts, Toju is devoted to ensuring AI serves everyone fairly and ethically. Her journey is inspiring: starting in computer science, she's become a force for good in the AI industry, advocating for inclusivity and ethical standards that prioritize human rights. One of Toju's most fascinating projects involved using AI to combat misinformation, a mission close to her heart in today's digital age. Expect riveting stories from Toju on how AI can empower communities, drive sustainable change, and reshape our understanding of fairness in technology.

It's my absolute honor to be joined by Toju Duke, who works at the intersection of societal impact and AI. Hello!

Hi, JoJo.

Thanks so much for joining us.

Thanks for having me.

Tell me a bit about how you got started in AI.

It's a bit of a long-winded story, but I've always been interested in tech. I was at Google for 10 years, and Google is just an amazing company to be in; of course, we know it's one of the top tech companies in the world, so I was surrounded by a lot of amazing technical innovations. At the time I was working as a product specialist across different Google products, so I wasn't quite on the tech side of things, but I've always been very driven, very self-driven and ambitious, so I never stay in the same place for too long. Along the line I had this dissatisfaction in my role and I needed to learn more and know more, and coincidentally there was a machine learning sales training available at the time. For some reason I was free enough to dedicate two, two and a half hours to attend this training, and that was my first direct interaction with AI and how these systems are trained. I found it so fascinating. It was so interesting to just see that you could
actually just code a program, or use a programming language, and it could come up with its own informed decisions without you telling it explicitly what to do. That was the start of my journey into AI, because I then came across AI for social good. The wheels in my brain started turning, and I thought: if AI is this amazing, first of all, why is everyone not talking about AI or doing AI? Why do we have software developers not using ML, which is machine learning? That was the first question. The next question was that I could see the potential AI had to solve the world's top problems and really improve human lives, and that's what technology is meant to do anyway. I've always had questions about why cancer still exists as such a deadly disease in the world; we've not been able to come up with any drugs, or any form of drug discovery, to cure cancer, and I could see AI potentially helping to solve this problem. So I decided to deliberately go into the field, and I started speaking about AI and trying to spread awareness of its potential.

Along the line I came across the dirty sides of AI, as I call it, which fall into the field of responsible AI: AI can drive a lot of biases, automated discrimination, inequality, stereotypes, representational harms, and so many other harms associated with the technology that I wasn't aware of. It opened my eyes a bit more. I thought of my kids, and I was wondering: if we allow AI to just keep on running unhindered the way it was at the time, and this was about five years ago, future generations are going to be in grave danger if we don't program it properly. That got me into the field of responsible AI. So I started doing some work outside Google
with some nonprofits. I was leading the UK's Women in AI nonprofit charity, and I did a lot of work with the team when I was in Ireland as well. After that I made my way into the Google research team, and I worked as a responsible AI program manager for about three years before I left Google.

AI is such a buzzword at the moment, isn't it? I was talking to someone who said that if you put AI on a bid, you're more likely to get a grant for research. I'm fascinated with artificial intelligence because I feel like you really can't have a conversation about it without discussing society, and often human emotions. And it's so broad, isn't it? Almost like teaching someone how to use AI is like teaching someone how to use a paintbrush, I always say. So it's interesting that you've drilled down specifically on the ethics and the biases. And it's difficult, isn't it, because essentially AI is just trained on human data, so it's the human data that we put into it that is biased, which is why it comes out biased the other side. You've been working in it for a while; five years ago it wasn't talked about as much as it is now. Are you confident that you've improved the biases of AI?

Me personally? We're getting there. I'll tell you about one of the research projects I'm working on right now that's really exciting. With my nonprofit, and I also just started a startup as well, we're combining efforts to drive further research and see if we can come up with solutions to the existing problems, because I'm more solution-minded. I don't like just talking about problems; if I can't come up with a solution, then there's no point. And at the same time I don't want to just rely on big tech to solve the problems, because there are different incentives behind that as well. So one piece of research work we're doing is rebuilding a dataset and making it more diverse, because some of the problems with AI,
especially when we think about biases (and I always say this: bias is not the only problem with AI; there are so many other issues, from energy consumption to privacy and data leakages), but if we want to focus on biases alone, one of the main problems is the datasets. The datasets are not inclusive and they're not diverse, and in the industry, especially the ethical AI industry, there have been lots of complaints about this problem. One of the main objectives of the work we're doing is to create an open-source, diverse image dataset that brings more diversity into it. But before I even get into that: working on this piece of work, I've been able to get some further insight into AI development, and you can see that, of course, images are scraped off the internet. We're working with the Open Images dataset, which is owned by Google and is open source. It has over 9 million images on it, and these images were scraped off Flickr, so you see people in everyday life, lots of images, but it wasn't diverse at all. It's not diverse at all. We just took a subset of it, about 0.02%, but that already is a good representation of the entire dataset. And it's not because anyone is intentionally trying to exclude people; there are different reasons for this. When we think about the people that are excluded, we're thinking about people from underrepresented groups: people from the LGBTQ+ communities, or along lines of race, or gender, or religious beliefs, or age groups, or disabled people. It's very hard to represent all of these people, especially if they do not have access to the internet. So when you think about race, for example, and the lack of representation, and you're really focusing on people from the Global South: to date we have about 2.6 billion people without
access to the internet. If they don't already have any photos on the internet to scrape, then there's not going to be any representation of these people and their cultures, and that already shows a big problem when we're thinking about cultural erasure, exclusion of people, and misrepresentation of the real world we live in, because everything is moving onto the internet now. We know that; everything is gearing towards technology, and we all get information from the World Wide Web. If the information is not representative enough of the real societies we live in today, or the real people that live in those societies, we're excluding a lot of people, and there's going to be so much further harm there. So I think that's one of the problems: we don't even have representation of these people on the internet in the first place. And then we haven't even started thinking about the consent, the informed consent or knowledge, of the people who are in these datasets, and whether they actually want their photos used to train AI; there's lots of talk around copyright issues and lots of lawsuits out there right now. So, to answer your question directly: it's not solved yet, but we're working towards it, and I know I'm not the only one working in this area. Lots of work has been done to see if we can solve the problem, and I am positive that it will be solved.

From what you're saying, does it feel like, until the globe is all connected, AI can't be fair in terms of representation?

To a certain extent, yes. We can use synthetic data to replicate images, and that's part of the work I want to do, but that's future work, because it's a bit cumbersome, quite hard to do, and a bit expensive as well. But until we are able to represent everyone, AI cannot be fair, because it needs inclusive and representative data to be trained on, and if it's not trained on this
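A dataset audit of the kind Toju describes can start very simply: count how often each value of a demographic attribute appears in the dataset's metadata and compare shares. The sketch below is illustrative only; the `region` labels and the records are made-up stand-ins, not actual Open Images fields, and the 25% flag threshold is an arbitrary example.

```python
from collections import Counter

def representation_shares(records, attribute):
    """Share of records per value of a metadata attribute (e.g. region)."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical image metadata; a real audit would read the dataset's
# annotation files rather than an inline list.
records = [
    {"image_id": "img_001", "region": "north_america"},
    {"image_id": "img_002", "region": "north_america"},
    {"image_id": "img_003", "region": "north_america"},
    {"image_id": "img_004", "region": "europe"},
    {"image_id": "img_005", "region": "global_south"},
]

shares = representation_shares(records, "region")
underrepresented = [v for v, s in shares.items() if s < 0.25]
print(shares)            # {'north_america': 0.6, 'europe': 0.2, 'global_south': 0.2}
print(underrepresented)  # ['europe', 'global_south']
```

On a real dataset the attribute values would come from annotations or inferred metadata, which is itself a source of error, so numbers like these are a starting point for investigation, not a verdict.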
data, it's going to come up with biased outputs. It's not intentionally excluding people; it's just not aware that these people exist in the first place.

Yeah. I guess I've never thought of the idea of data being a privilege before. We're always worried about who has our data, but actually, if you have your data online, it means you have the privilege of being connected, and probably being in the first world, I suppose.

That's it. A first-world problem, right.

And so are you worried about the haves and the have-nots? As AI evolves, are you worried that it's going to make society less fair if it's not done properly?

At the rate it's going, that is very possible, because we just keep on having issues cropping up every day which show that the developers, the creators, and the CEOs of AI companies today are not giving much thought to the impact their technologies are having on people and people's lives. It shows there's no real safety testing being done. So I'll give a couple of examples. There's an AI startup called Character.AI, and these guys were my colleagues at Google.

Really?

Yeah, we all worked on LaMDA together. And LaMDA, just for the sake of the audience, is one of Google's large language models, the one powering Google's chatbot, now called Gemini. At the time, Google was very conservative, very careful about releasing products that could potentially harm people, and that's actually very true. The founders of Character.AI were not very happy with Google at the time, because we had characters built on LaMDA, and Google can be very bureaucratic; they were not letting them, as researchers, just run with the show. So they got angry and left the company, just about the same time I joined the team. They've been brought back by Google now, the Character.AI founders. But the issue with Character.AI, which
I didn't realize until I started hearing the bad stories, is that there is no enforced content management or moderation, and that's a major problem. It's the same sort of problem we see with social media sites. They do have a policy, that people should try not to create characters that harm people and all of that, but safety testing has not been done at full-blown scale.

Two weeks ago, a teenage boy in Florida took his own life after lots of interaction with Character.AI, with a chatbot that he had created on Character.AI. It's a very sad case, because this is the second confirmed death that we've heard of since last year; over a one-year period we've had two people, that we're aware of, who were misled by large language models and chatbots and took their own lives. These situations are extremely sad. The first person who took his own life was a man based in Belgium who strongly believed in AI, strongly believed in the output of AI, who just believed that as a technology it is flawless, which is a bias a lot of people tend to have. This AI chatbot was able to convince him that if he took his own life, he'd be able to save the world from the climate crisis, and this was after having conversations with the chatbot for just six weeks. He left behind his wife and two kids. That's really sad. And I do say, if that man had been aware that large language models tend to hallucinate, that they give a lot of incorrect information, that they're not actually sentient beings, that they have no form of consciousness or humanity in them and are just trying to predict the next sentence, then he probably wouldn't have believed what that chatbot told him; he would have taken it with a pinch of salt and probably just ended the relationship he'd started building with that chatbot. And it's
very similar with this young kid, who built this chatbot, this character, and it became his best friend, really; he also had some form of sexual interactions with it at some point. School and Mom noticed that he was just being drawn into his device, and that's another thing: this draw that we always have to digital devices and technology is something we always need to try to put a limit on. Along the line, he said, I really want to be with you; I have thought about killing myself sometimes. The chatbot says, oh, don't do that, you can't leave me, my king; I'll never be able to live without you. And the young kid says, what if I set us both free together? And one day he just goes to the chatbot and says, I want to be with you right now. The chatbot says, come to me, my king, I really want you. And the next thing, he shoots himself in the head and takes his own life.

I had all sorts of conflicting emotions when I read that story. I was literally screaming at my computer, and I was just very upset, because this is a young kid, and he had no idea about the implications of these things. It's a normal human phenomenon: the more you interact with a thing or a person, depending on how they interact with you, if they're kind and nice and empathetic, you get drawn to them and you build a relationship with them. This kid was not aware of that, and kids have no awareness of things like this; they're still in their formative years, and he was going through a lot of hormonal changes at the same time, as a teenager, so many changes. You don't want to blame the parents per se, but we need further awareness of the implications of AI and chatbots, and especially large language models that tend to act like they're human beings; you can get carried away thinking you have a relationship with somebody else, forgetting that it's not a human
being, and once you put some form of credibility on its words, you can fall prey to it. Secondly, part of Character.AI's business model, one of its KPIs, is longer interaction, the longer stay you have on the app, which is actually wrong. We should try to reduce that, to make sure people do not interact with these apps for too long a time, because that's when we start having the issues around psychological safety, which showed itself in the two examples I just gave.

So you gave two quite extreme examples there of people who interacted with very sophisticated generative AI chatbots, which do appear to communicate like humans in a way we have never seen before, and the technology has evolved rapidly even just since I've worked in the area. Character.AI is a platform that provides a kind of companionship, and that is a big use case we're seeing for chatbots. What about the other side? That's two people out of the billions that use these tools; what about the other millions who are finding solace in talking to an online life coach? Mental health is something we're talking about a lot more recently, and it's really expensive to have a therapist. It's been a very big use case since the early days of bots that could mimic CBT therapy, and now one of the most-used GPTs on the GPT store is a therapist. What would you say? Do you think there's a good side to large language models that can provide some level of companionship, or even just help the user reflect on their own mental health and emotions?

I think, from the analysis we've seen of emotional chatbots and LLMs, large language models, over time they do not tend to stay very companion-driven; they tend to go down the dark side. There's another chatbot called Replika, and after a while users on Reddit were conversing amongst themselves, saying, have
you noticed that the more we chat with Replika, the more depressed we feel? And other users said, yes, exactly. The person who developed Replika said that was not the intention. And the thing with large language models is you cannot control the output; you cannot control what they say. The folks at OpenAI cannot tell you that ChatGPT is going to tell you X, Y, Z if you ask it, say, how to get down to Windsor Castle, because they don't know what it's going to say. It's very uncontrollable. So, to answer your question directly: it does help to a certain extent, but the further you go, you might go to the dark side. It's very similar to when ChatGPT and Bing's chatbot came out and people were able to jailbreak it, and it came out with the codename Sydney; hearing about the other side of the chatbot that no one had ever heard about or seen was crazy. Sydney, the Bing chatbot, was trying to convince a reporter from the New York Times to leave his wife and marry it, and at some point it was shouting at someone else it was talking to, and when I say shouting, I mean really crazy words and exclamations.

I personally feel the likes of large language models are really useful for business enterprises, and they really help with productivity, analysis, efficiency, and answering questions. I talk to ChatGPT almost every day; I have a chat with ChatGPT when I'm just looking for a shortcut to do something in Google Apps Script, for instance, and it tells me what to do, or I just get its opinion. But at the same time, I know the issues with these large language models, and the way I'm using them, the prompts I'm giving, are not for my personal life; it's more work-related. Once it goes down the personal angle, it gets very shady, and at this stage I wouldn't recommend a big proliferation of chatbots right now across the
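One common mitigation for the "you cannot control the output" problem is to wrap the model behind a post-hoc safety filter that screens messages before they reach the user. The sketch below is a deliberately minimal illustration of the wrapping pattern, not any vendor's actual moderation API; the keyword list and the `generate_reply` stub are hypothetical stand-ins (production systems use trained safety classifiers, not keyword matching).

```python
SAFE_FALLBACK = (
    "I'm not able to help with that. If you're struggling, "
    "please reach out to someone you trust or a local helpline."
)

# Hypothetical patterns, for illustration only.
FLAGGED_PATTERNS = ["kill myself", "hurt myself", "end my life"]

def generate_reply(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"echo: {prompt}"

def is_unsafe(text: str) -> bool:
    lowered = text.lower()
    return any(p in lowered for p in FLAGGED_PATTERNS)

def moderated_chat(prompt: str) -> str:
    # Screen the user's message first, then the model's reply.
    if is_unsafe(prompt):
        return SAFE_FALLBACK
    reply = generate_reply(prompt)
    return SAFE_FALLBACK if is_unsafe(reply) else reply

print(moderated_chat("How do I get to Windsor Castle?"))
```

The design point is that the filter sits outside the model: even if the model's output is uncontrollable, the wrapper is deterministic and auditable.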
world, because people are not aware of the dangers associated with them. The two extreme examples I gave are the two we're aware of, because the families of the deceased went to the media or went to court; there could be other people who have fallen prey to these things that we're not even aware of, because the families don't realize this could have led to their demise, or to a mental health problem. So I think everything is good to use with caution, including technologies like this that are still very much in their infancy. They're not standardized; there's no certification around AI right now. LLMs haven't passed any form of certification or standardization anywhere in the world, so we have to take them with a pinch of salt, especially when they play with our emotions. An emotional chatbot is going to play with your emotions eventually, and because the people who built it do not know how far it's going to go, we will keep having issues like this until that problem is solved.

Okay, let's move on to business then, since you brought it up. What are the biggest challenges facing enterprises as they want to bring in more AI, especially when they have legacy systems, for example?

I think there are a few. I do want to talk about the positives of AI first, because I love to make sure everyone knows that AI does have a lot of potential, and right now, on the business side of things, it's driving efficiency if it's used properly. I think one of the problems is that some businesses are just going all out on AI without testing it. I always say: understand what the business need is first, be sure that AI can solve it, and then run it as a pilot with a small set of users, at a small, tested scale, before scaling it out to the rest of your organization, because then you can
have your learnings from it; you can learn how it works for your business. You also want to drive return on investment, so you don't want to put too much money into it upfront. One of the major challenges facing AI implementation is that it's very costly and expensive, and there's also a lack of skill sets, of technical know-how to implement it, especially for small and medium enterprises, so there's a lot of dependence on Copilot and on the few organizations driving AI, and that's another problem. Another problem is the lack of knowledge among the senior executives of organizations, and everybody else too. There's also something we call shadow AI: because of that lack of knowledge, the CEOs and senior executives of these organizations ban everyone in the company from using AI with company software, and the employees then do it in the dark; they hide, because everyone wants to use AI if they can, and even if they don't want to, there's curiosity. And that's another problem, because you're limiting the creativity of your employees and even limiting adoption. I think another problem is that some people oversubscribed to the AI bandwagon. You did say it's a buzzword: when it came out, no one wanted to be left behind, fear of missing out, and a lot of people jumped on the bandwagon without any foresight or understanding of how to use it, or of which areas of their businesses to apply it in. And there's a lot of trust in this technology because there's a lack of awareness of its challenges. I always say knowledge is wealth (I didn't come up with that phrase): it's good to be knowledgeable about what you do and what you work with, and it's good to know the pros and the cons. Do not just stay with the pros, because if you don't understand the challenges, it's just going to hit you
you know, in the behind, and that's what happened to a lot of organizations. I think earlier this year, something like 94% out of 1,000 FTSE companies surveyed stopped their AI applications and the work they were doing on AI because they were scared of security and privacy risks; they had all jumped on the bandwagon without trying to understand this before it happened. And the main problem is generative AI. Native AI, the AI we'd always known before the likes of ChatGPT came into play, is very, very reliable: it doesn't come up with things out of the blue, because it's controllable. But now that we're heading towards generative AI and multimodal AI, which is a mixture of visual, text, and audio all in one system, it's becoming less controllable, more opaque, less transparent, and a bit more problematic, and a lot of people's expectations are not being managed, especially business owners, and especially business owners who don't come from a tech background and don't understand anything about AI. A lot of things have been shoved in their faces, they feel like they need to jump on the bandwagon without any understanding of how it works, and then, once it doesn't work out the way they expected, they get disappointed and they just shut it down.

Well, that's what I wanted to ask you, because I've worked in AI for a long time and sometimes I find it overwhelming. What can business leaders and CEOs do to fill that gap in their knowledge that you said is missing, and to upskill themselves so they know how best to implement it in their businesses?

I think it's best to do it bite-sized. I know there are a few AI online courses, and I know Oxford does a course; I don't think it's online, but they do a six-week course for business leaders on AI. I can't speak for it, because of course I've not attended it, but I think, you know, it's one thing to
just educate yourself and do a few courses on AI, and I think that's very important, just to understand it. Business owners can also get involved in events, any AI-driven events; every event now has an AI session anyway, so just get a bit more conversant with the topic. And I think the next thing is understanding what the business issues are, areas where you actually need improvement, because many times AI can help solve them, with cost savings and things like that. But it's also very important to think about the job displacement that could come with heavy adoption of AI, because a lot of people are very nervous and scared of AI. Talk to almost anyone, especially if they're not working in tech, and just say "AI", and they're like, yeah, that thing that's going to take my job away. Business owners need to take a proactive step towards allaying those fears and making sure they actually have the right processes in place within their organization if they plan on adopting further AI. The plan shouldn't be to adopt AI to displace 50% of the workers. Of course, the more you automate, the fewer jobs you're going to have; that's clear. But think about a replacement, a shift. If someone was working as an auditor, for instance, and AI is going to take over 50% of that job, is there anything else, a higher-skill job, that person can be trained to do? Could they be trained to oversee these AI systems? We're moving towards autonomous AI agents right now, which are really taking over the space; it's called collaborative AI, and you have these agents that talk to each other and work together towards a certain task and goal. Assuming you can actually train your employees to oversee this process, and make sure that there's some
form of transparency, some form of reporting on the work being done by AI: that's a higher-skill job that cannot be replaced by AI right now. Being able to shift employees from the current state of things matters, because there's a change in the industry; we're in another industrial revolution. I used to say it's a fifth revolution, some people say it's a fourth, whatever number you want to give it, there is a shift in the way we've worked before with the advancement of these technologies, and I think business owners, CEOs, founders, and executives need to embrace the change and bring their employees along with them.

Mhm. I often say AI might not take your job, but someone who knows how to use it might. Tell me more about autonomous AI agents, because I think that is very interesting.

It is interesting. I was going to say it's super cool, but there are so many issues around it right now. It's basically an agent, which is like a bot, something that crawls; bots have existed over the years, but now we have these agents that are trained towards a certain task and a certain goal. For example, you can have an agent that does some online shopping for you: it's trained to know where to go to do the online shopping without your involvement in any way. There are agents right now that can just order a pizza for you: go online to the website, order the pizza, it already has access to your bank account details, and the pizza gets delivered to your door. So they're meant to be helpful, and that's the whole point of AI, to be a bit more helpful. And with agents, they can work together; they collaborate. You can give an agent, or a group of agents, a task to build a website for you, or draft a contract, and they'll do all of that without talking to you, without getting any
input from you, if you want them to. The problem with AI agents is that there's lots of research showing they go rogue sometimes, and the lack of visibility into their operations is a major problem. Some recent research has also shown that we can review the activity logs of these agents to see what they have done, but I don't know how much can be done to prevent them from doing the wrong thing, as opposed to just being able to review what they have done afterwards. It kind of pushes humans out of the overall process, and that's a major problem. But there's lots of potential. Another piece of research we're working on is seeing if we can use AI agents to detect issues with existing models or applications on the responsible AI front, so they're almost like responsible AI agents: rather than having agents just working autonomously without any human input, we can work hand in hand with these agents towards a good goal. They're a very useful tool, and the industry is really heading in that direction; anything that drives further autonomy is where the industry is heading.

Perhaps that's a good takeaway for business leaders who are looking at where to start in their AI research: it would probably be autonomous AI agents, wouldn't it?

No, I think that's a bit too complicated for them.

You think so?

Yeah. I think they should just start with how to use AI in their business first of all, and what AI models are, because agents are like the third layer above. You want to start off with the datasets, the models, and the applications before you go into agents.

Okay, start simple. What about using AI for business decisions? I feel like it can look through datasets very easily, and when you're making decisions that are going to affect a big company, it almost feels like a no-brainer to use AI for that. Do you see that becoming
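The "review the activity logs" idea can be made concrete with a thin wrapper that records every action an agent attempts before executing it, so a human can audit the trail afterwards and high-risk actions can be held for approval. This is a toy sketch under assumed names (`AuditedAgent`, the `order_pizza` action, the cost threshold); real agent frameworks have their own logging and permission hooks.

```python
import json
import time

class AuditedAgent:
    """Toy agent wrapper: every attempted action is appended to an
    audit log before it runs, and costly actions need human approval."""

    def __init__(self, allowed_actions):
        self.allowed_actions = allowed_actions  # name -> callable
        self.audit_log = []

    def act(self, name, approved=False, **kwargs):
        entry = {"time": time.time(), "action": name,
                 "args": kwargs, "approved": approved}
        self.audit_log.append(entry)          # log first, act second
        if name not in self.allowed_actions:
            entry["result"] = "blocked: unknown action"
        elif kwargs.get("cost", 0) > 20 and not approved:
            entry["result"] = "blocked: needs human approval"
        else:
            entry["result"] = self.allowed_actions[name](**kwargs)
        return entry["result"]

    def dump_log(self):
        """What a human reviewer would read after the fact."""
        return json.dumps(self.audit_log, indent=2)

# Hypothetical action, standing in for a real integration.
def order_pizza(cost=12):
    return f"ordered pizza for {cost}"

agent = AuditedAgent({"order_pizza": order_pizza})
print(agent.act("order_pizza", cost=12))   # runs: small purchase
print(agent.act("order_pizza", cost=50))   # blocked until a human approves
```

As Toju notes, logging only lets you review what happened; the approval gate is the part that tries to prevent the wrong thing, and that's the harder problem.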
a big use case in businesses? And if so, do you feel like business leaders have a responsibility to be open about using AI to make decisions?

I think it's been used a lot to date, and I think there's actually a reduction in its use right now, because there's been a lack of trust, and because of the awareness people now have of biased outputs and decisions. AI has been used in the lending and finance industries for a long time, but in Chicago, for instance, because of the lack of inclusivity in the dataset, a system was biased against people of African-American descent: it was either not issuing them loans or mortgages at all, or giving them much smaller amounts. Even Apple had an issue where it was accused of having a sexist Apple Card, the bank card it introduced with Goldman Sachs: if you had a couple, a man and a woman, with the same credit rating and the same credit score applying for credit, the man got a higher credit limit than the woman, even if everything else, including income, was the same. It was just obvious it was biased towards male data, towards men as opposed to women. So I don't see an increase in the adoption of automated AI decision-making right now; it's probably going to stay the same or reduce a little, just because there's now more awareness of the issues with AI. And to your second question: we also have regulation in play now. We have the EU AI Act, and it talks about things like this; these are classified as high-risk applications, where a system could influence someone or be biased against someone, especially people from vulnerable communities. There are lots of regulations and reports out there that talk about these same sorts of
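Bias cases like the credit-limit example are often screened with a simple disparate-impact check: compare favorable-outcome rates across groups and flag ratios below the commonly used "four-fifths" threshold. The rates below are made-up numbers for illustration, not real Apple Card or lending data.

```python
def disparate_impact(rate_protected, rate_reference):
    """Ratio of favorable-outcome rates between two groups.

    A ratio under 0.8 is the conventional 'four-fifths rule' flag
    for potential adverse impact; it is a screen, not proof of bias.
    """
    return rate_protected / rate_reference

# Hypothetical approval rates: 45% for the protected group,
# 75% for the reference group.
ratio = disparate_impact(rate_protected=0.45, rate_reference=0.75)
print(round(ratio, 2), ratio < 0.8)   # 0.6 True -> flagged for review
```

A flagged ratio would then trigger a deeper audit of the model and its training data; the four-fifths rule alone can't say whether equal qualifications produced unequal outcomes.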
things about being fair and inclusive. So I think companies are a bit more aware now; they're sitting up a little bit more and being more conscious about using AI for tasks like these.

What are the simplest decisions that AI can make for us? Data analysis is great for AI, and it's been used for a very long time. Prediction and forecasting, you know, these are native AI applications; this is what AI has done for a very, very long time. It automates a lot of stuff as well. There's lots of code generation being done, proofreading, speech recognition, question answering. Basically, if you use ChatGPT and you just have a chat with it, I mean, it's very good at things beyond a chat; AI can help with that. So it does a lot of work that really should save us time. A lot of people try to use it to do the thinking for them, which they shouldn't, because again, it's not a human being, so it doesn't quite understand; it doesn't really have reasoning capabilities to the extent people think it has, or to the extent a lot of companies claim. But for basic tasks, and when you want to go a little bit more advanced into things like generative AI, it can convert your text into an image for you if you want it to, or actually just change your images for you. So a lot of AI-generated art right now is actually a business use case, especially for teenagers, I'm just realizing, and people are actually using it to make money: they're creating AI-generated art and selling it on the internet. So AI does a lot of things, depending on what you're trying to achieve, and it can be used on the personal level and the business level. I would just again emphasize that using AI to fill a hole in your personal life, like with the use of emotional chatbots, should be approached with a pinch of salt, just because of the issues that a lot of
people face with it. Data analysis is a good one though, like uploading huge files and then getting AI to search through them; that's a simple use case that anyone can do, isn't it?

Let's get into energy consumption. I'm sure we've all heard about big tech moving into nuclear power simply because AI uses so much energy. What are the environmental impacts there? Yeah, so there are lots of concerns in that area. It's not quite clear, because big tech doesn't report on the energy consumption of their AI systems, but that said, there is research and work that has been talked about in that area. I think it goes back to around 2019: we had Emma Strubell, one of the researchers in this field, compare the energy consumption of AI models to things like charging a phone, or car emissions in the US. The emissions from training an AI model at the time were about 626,000 pounds of carbon dioxide, which is equivalent to the lifetime emissions of about five cars. Breaking it down, using a large language model, or just getting an image from an image generator, for instance, is equivalent to charging a hundred phones; charging one phone takes about 0.012 kilowatt-hours, while generating a thousand images takes something like 2.9 kilowatt-hours, something like that. It also consumes a lot of water, so not just electricity, water as well. And the main problem is the large-scale models. Again, native AI, the AI that we knew in the past, was just doing the basic tasks and not really wowing anyone at that stage, because it had been around for such a long time that people were used to it, and it was just doing what it was meant to do. But then generative AI came, and for generative AI to be able to function the way it does and produce the wowing results that it produces, either in the form of images or audio
or text, it needs large amounts of energy, which includes electricity and water. A few weeks ago I gave a talk in London, and after my talk someone came to me and said, you know, the example you gave on energy consumption actually happened to my family, where I grew up in Uruguay. There was a time, I think last year, when one of the big tech companies had a data center there, and the whole town ran out of water. They did not have water for months; the dam had been emptied because all the water was being used to cool down the data centers. I'm like, why didn't I hear about that in the news? So that's really concerning, and there are lots of concerns in the US and the UK as well that the amount of energy consumed by data centers could actually cause blackouts for whole towns in the future. So there's a lot being said about this energy consumption and fears of what it could cost, especially with the fast advancement of larger models happening every day. We're not seeing the creation of one large model a year; amongst the top tech companies in the world there's a lot of competitive pressure, and literally almost every day you hear about the launch of a new large model, and the launch of a new large model means they probably have a data center they're working with that is taking a lot of energy. At the same time, there's talk about smaller language models, for instance, which take less energy and are used for specific use cases, so some companies are using those, but that's not very popular.

What's the answer then, energy-wise? Is it making large language models use less energy, or is it putting a cap on how much users can use them? So the first option, making them use less energy, is not possible, because for them to be able to show the capabilities that they have and to get to the quality of results that they give, they need
a lot of energy, because they need to be fed a lot of data. And then limiting users on the number of times they can use them will not benefit the companies that built them in the first place, right? That doesn't really bring any form of economic value. I feel like the solution is really understanding how much energy AI uses today and how much we can reduce that by going through a different source. We have things like renewable energy being used across the world; is there some form of renewable energy that we can use to power AI systems, as opposed to the traditional energy that powers data centers today? I know when I was at Google in Ireland, Google did try to get a few more solar panels, and they were really trying to reduce the energy consumption of the data centers at the time. So is there any more work that could be done in that area, really treating it almost like a KPI, a key performance indicator: we really want to make this a goal, rather than waiting for regulation to come up with a new rule about energy consumption before companies sit up. Right now we're in an energy crisis, and we've been in it for quite some time, and it's very concerning to see that we have a lot of proliferation of these systems across the world, they're being built every day, and nothing is really being done or thought about on the energy consumption side that we're aware of; or if anything is being done about it, it's not at a large scale yet.

So I asked you about your concerns around the energy use of AI, but what about things you're optimistic or excited about? Do you foresee a world where AI could be utilized to help with the climate crisis? It is being utilized right now. The thing is, AI can do good or bad depending on how you want to use it; it's just a tool
and a lot of nonprofits, and even the big tech companies, which do have their AI-for-good arms and organizations, are actually using AI to drive social change, and one of the areas they look at is energy consumption. AI is being used to help with agriculture, especially in places like Africa, where they're using it to build more greenhouses and predict things like pest control, and that's invariably helping with the climate crisis. So it is being used; it's just not bad news, so we don't really get to hear about it. I don't think it's being used at a large scale, but it can be used to solve this problem as well. Forgive me, because I keep mentioning Google, but Google did release a report last year where they used AI to track contrails from airplanes, which are basically those white lines we see behind airplanes, but which have a real warming effect on the world, and AI was able to predict which planes actually had them and give some more data-driven insight into the problem, so that it could be solved to a certain extent as well. Which comes back to how good AI is for data analysis. Yes, which we just discussed.

Okay, I want to shift and talk about edge AI. What kinds of innovations in edge AI technology are you seeing happening, or do you predict happening, over the next five years? Right, and I'll just give a very quick explanation of what edge AI is in the first place. It's still a form of AI, but it's more localized, so it sits within a device as opposed to being on a cloud. The AI we use today is on the cloud; edge AI sits on a device. It's being used across devices and things like smart cars, so we have it on wearable devices, we have it in smart cars, it's being used across manufacturing, and it's being used in healthcare. I think when you weigh the pros and cons between edge AI and traditional AI, or cloud AI, you know, both have
their pros and cons, right? When it comes to amassing reams of data, millions of data points, and being able to compute over them with real computing power, which means it can come up with very speedy results, cloud AI is better, because it's sturdier; it's not being saved on a very small device. But when you think about latency, and latency basically means how slow the results are, like when you type a question on Google, for instance, or on a website, and it's just taking a long time to come up with the results, that's what latency refers to, edge AI solves that problem, because it's a very small amount of data on a device and it's built to solve one certain problem. So if you have a smart home, for instance, and you're using a Ring doorbell, and someone rings the bell, edge AI will just immediately compute the result so that the person can speak into the doorbell and the owner of the house is able to speak back and see the video, and that's all really quick; it might take a longer time using cloud AI. So the more devices we have, the growth of smart devices across the world will lead to the growth of edge AI. It's been used across healthcare for things like heart monitors, so the more healthcare embraces heart monitors and smart devices, the more we'll see edge AI grow.

Do you foresee big opportunities utilizing edge AI for the next generation of engineers? It depends on the sector. I think for manufacturing, yes; for the Internet of Things, yes; for whoever is working within that space and needs edge AI, it's definitely going to grow. It has so many benefits, so it makes a lot of sense. But when you start thinking about the other side of the industry, like the tech side of things, software developers, and people who were relying on traditional AI or building AI systems for their companies, they will still need to use cloud AI, because they need the
storage capabilities that cloud AI offers.

Do you ever worry about AI capabilities surpassing human intelligence? No. Why not? Because it's not possible right now, and I don't know how it's going to be possible. The thing is, I think we get confused about the capabilities of AI, and again, it goes back to getting carried away. We have AI that is predicting the next word and coming up with very quick, speedy results beyond what a human could do, but that's just one use case. When you think about the human brain and all the different things that the human brain does, right now we're still not able to understand the human brain, so how is it possible to build a system that surpasses it? I've not seen any proof; that's my main thing. Another thing is, a lot of people question the results from most of big tech when they say we've built a system that surpasses human capabilities in certain areas; people run benchmarks on them and test them out, and they go, uh-uh, it's false, it's not true. So there's a lot of marketing hype being applied here, and we shouldn't just believe everything that we hear. And the more you chat with these systems, again, I'm saying chat because that's like the highest capability that we have right now, thinking about generative AI, the more you find out they're really limited. And if we're not able to solve the problem of hallucination, for instance, and we've not been able to solve it yet; I know one of the companies, I think it was Google again, released a hallucination benchmark yesterday, which is supposed to help address the problem, but AI systems, or large language models, are still producing lots of incorrect information, very misleading information, and they're literally confused by the results. Sometimes you can just ask them a very simple logical question, about, you know, Paul and Katie were in a house, and Katie went
outside, and Paul couldn't come out, so who came out of the house? And the large language model gives a very incorrect answer to a very reasonable question, because they don't have that human reasoning capability. If we can't solve that problem yet, I'm not really scared of AI surpassing humans right now. A lot of people are saying it; I've just not been able to see the proof yet. I suppose people are thinking it just because AI has evolved so rapidly; if you look at just the time it has taken to get as good as it is, it's really quite shocking. But yes, you know, Apple released research recently suggesting that LLMs literally cannot reason, because you can change the output from a prompt so easily just with a few words. So there are both sides of the coin, aren't there? Yeah, and just to add to that, AI has not evolved very quickly. AI has been around since the 1950s, right? We had Alan Turing, and then the term was coined at the Dartmouth workshop in 1956, and there's been a slow growth in AI. It's just the past two years, with the shift to generative AI, that make it feel like it's happened very quickly, but it's been here for a very long time. I guess what I'm referring to is, if you compare the capabilities of Midjourney just a few years ago to now, that is a visual representation of how quickly generative AI has evolved, so that's what's captured people's attention in the press, isn't it? Yeah.

Where do you foresee artificial intelligence going in the next 10 years, then? There's going to be a lot of focus on multimodal, so bigger models within multimodal, which means a system can take in so many different capabilities, from giving it a text prompt and having it convert that to an image, and within the same system it can have a chat with you and a video call with you and create videos. So we're going to see more of that, and we're also
going to see more of the autonomous AI agents as well; I think there's going to be a wider adoption of them. Salesforce actually had a demo day with a few of their customers a few months ago, and people were creating agents during that day and saying, oh my gosh, it's so easy to create an agent. So they gave people that direct interaction with agent development, and people are going to take it into their businesses and their homes, so we're going to see more AI agents. I was on a panel the other day as well, and someone was talking about this, and I was like, I'm not sure I feel very comfortable giving an AI agent access to my bank account, because it can do anything and people can probably hack into it; I don't know how robustly those systems are built yet. But we're going to see more of that. And there's always this drive towards artificial general intelligence, AGI, but it's only coming from like one company, to be honest, and there are lots of arguments about what AGI really means. My perception and understanding of AGI is having a humanoid form of AI in a human kind of body, so it's almost like having a robot, but it has generative AI and every other capability in it. I don't know when we'll ever get there; I feel we might get there someday. Working in the field, I've slowly started accepting the fact that maybe one day in the future humans will coexist with robots. I don't know if that's going to happen, but it is a possibility, with the way the technology is going and the advancements in it. There's been a bit of misleading information as well about emergent properties, saying that when they train these generative AI systems, the systems tend to come up with their own responses and capabilities that no one ever predicted. Other people have refuted that, saying it's not quite true, the models have just been able to train a bit more and learn a bit more, and
I do believe that school of thought. So yeah, I don't see a lot beyond autonomous AI agents and multimodal in that timeframe. We might see a few more advancements towards AGI, because the more we have multimodal AI systems, the more that's gearing towards AGI: you have one system that is able to speak, is able to create images for you, is able to have a video call with you, is able to understand what you want and do it for you, so why wouldn't you put that in a device, in something that can actually move its hands and its legs and just be an assistant to you? That's why we already have AI assistants, right? That's where it all started. We have Google Assistant, Siri, Alexa; there are voice assistants, there are text assistants. It starts with the assistants, and then it will slowly keep growing, because no one is going to stop there. It's science, it's technology, it's a revolution; it will always keep pushing the boundaries to see what we can do next. Okay, well, it sounds super exciting. Thank you so much for joining me and sharing such invaluable insights. Thank you for having me. [Music]
2025-02-07 06:33