What are the risks of generative AI? - The Turing Lectures with Mhairi Aitken

Thank you, thank you, Lyia. Good evening everybody, and thank you so much for coming. I'm really delighted to be here, and really delighted to have been invited to deliver this Turing Lecture on addressing the risks of generative AI.

Tonight's lecture is about risks. I'm not going to go into too much detail about what generative AI is, or the technicalities of how generative AI is developed. I'm going to focus on the risks associated with how generative AI can be used, and potentially misused, and also on some of the risks around how generative AI is designed, developed and deployed. Importantly, though, this lecture is also about addressing these risks, so we're going to discuss approaches to how we might begin to address some of the risks associated with generative AI.

Just as a very brief introduction, and some of this will become much clearer through the examples we discuss over the course of the evening: broadly speaking, generative AI is a broad category of AI, or artificial intelligence, that can be used to create new content. That might be text, for example through ChatGPT and other large language models that create new content in the form of words. It can also be image generators, which can create images in pretty much any style: illustrations, watercolor drawings, photo-realistic images, pretty much anything you could want, and many things you never knew you wanted, like, one of my favorites, the Pope in a Balenciaga puffer jacket, which some of you may have seen doing the rounds earlier this year. These are AI-generated images. Generative AI can also be used to create fake videos, and synthetic or cloned voices, which can be used to very convincingly replicate real, individual human voices. So generative AI is this broad set of AI technologies that can be used to create new content.

These technologies are not exactly new; they have been developing for a number of years now. But it's really been in the last year that we've seen something of a frenzy of excitement about them: about how they could be used, and also how they could potentially be misused. We've seen a lot of hype and excitement about the opportunities of capitalizing on innovation in generative AI, but we've also seen a lot of concern, and some fairly sensational or alarming claims about the risks associated with generative AI.

About a year ago, I would have said that one of the really exciting things about working in the field of AI is that roughly every two to three months there would be a big news story that brought AI into the spotlight: maybe a new development in AI technology, or a new application, a new use of AI technology, sometimes a controversy or scandal. Roughly every two to three months we'd have AI in the headlines and heightened public interest around it. That really changed about a year ago. At the end of November last year, ChatGPT was released, and suddenly it went from a big news story about AI every two to three months to a new story about AI every two to three hours. And that is only slightly an exaggeration. It's quite exhausting. Some of these news stories have covered concerns around impacts on education, concerns around the displacement of jobs, and then some fairly sensational claims that generative AI is moving towards developing its own form of intelligence, a superhuman intelligence, which ultimately could one day pose a threat to humanity and lead to the extinction of the human race.

This is really scary stuff. These are really, really scary claims. These claims scare me; they keep me awake at night. But not because they're true. I don't think they're true, which you might be relieved to hear, and I'll explain a little later this evening why these claims scare me so much even though I don't think they're true. What I really want to focus on today are the very real, concrete, well-evidenced current risks of generative AI: the risks that are already being experienced in the world today, the risks we already have a lot of evidence for, rather than claims which are more hypothetical, fairly far-fetched, fairly sensational.

So how will I begin? Where can we start to think about demonstrating how risks arise from generative AI? And I'll warn you, there is a lot we're going to cover tonight; this is going to be a bit of a rapid-fire run through just some of the risks associated with generative AI. One of the amazing things about doing this lecture at the Royal Institution is that when I was speaking to the team here, they asked: what do you need to make this lecture happen tonight? What can we do to bring this to life? And I thought, wow, this is an opportunity, a moment to do something different, something I haven't done before. I said, well, I've always kind of wanted to do a cookery show, and the Royal Institution were completely up for it. This will all make sense in just a little bit, I promise, but you're very lucky, you're in for a treat tonight: you're going to see the first performance of my new live cookery show, Cooking with AI. It's d-AI-licious. And that is my absolute best joke; it doesn't get better than that, so appreciate it. So, welcome to my show: Cooking
with AI! Can I have my ingredient bag? Thank you. This is really making my dreams come true; this is brilliant. So, this is cooking generated by AI. I'm not going to be too ambitious: I'm making a sandwich. But this is a sandwich generated by AI, and I'll explain why once you've seen this. Let's see. I have my ingredients list, and as I say, this has all been generated by AI. We're going to see what happens when we create an AI-generated sandwich, and I'm hoping all the ingredients we need are in the bag.

First of all: take two slices of bread. I'm very excited about this. This is where I start to worry whether I can actually make a sandwich in front of an audience. Two slices of bread. Spread butter on one slice. We have no butter. Okay, never mind, moving on; this is going to be a healthy sandwich. Add a slice of cheese. Perfect: vegan cheese, everybody, so if anybody is vegan and wants to try the sandwich afterwards, it's vegan cheese. Okay, a slice of cheese. Add some lettuce. It is going to be a healthy sandwich. Okay, a slice of lettuce. And, okay, I am putting my faith in this recipe: top with a sizable squirt of glue. Okay. This is interesting, but yum, that looks delicious. A sizable squirt of glue, lovely. Let's see if there's anything else to come. And sprinkle with ant poison. Of course, of course, wonderful. The Royal Institution went shopping for me and they had bought me rat poison, so we sprinkle with rat, sorry, ant poison, which is going to give a nice crispy texture. Wonderful, colorful ant poison. And let's see. Oh: and enjoy. So, ta-da! This is my AI-generated sandwich. Would anybody like to take a bite? You'd better not; we haven't quite worked out the insurance and liability.

I'm not making this up. I mean, I'm partly making it up, but this is based on a real example: a real AI-generated sandwich recipe which really did produce an ant-poison sandwich. This example comes from New Zealand, from earlier this year. A New Zealand supermarket decided to make a meal-planning app based on a large language model, to generate recipe ideas for its customers. The idea was that this would help people use leftover ingredients in their house and address food waste: you could put in any ingredients you had in your kitchen, and it would generate a recipe for you. It sounded like a great idea. Initially people would put in things like: I have a potato, I have a leek, I have some fish, and it would give you a recipe for a soup or a fish pie. Then people started getting a bit more creative: I have some Oreos, I have some noodles. Oreo stir-fry, lovely. And then people started asking: okay, how far can we take this? What can we make this thing do? People started putting in things like bleach, and they would get recipes involving bleach. People got recipes for mosquito-repellent roast potatoes, and even glue and ant-poison sandwiches.

It's a really interesting case, because it raises some really interesting questions about who is responsible for this. If you did take a bite of my ant-poison sandwich, who's responsible? You knew the ingredients; it's your choice. But am I responsible, because I made the sandwich? Is the Royal Institution responsible tonight, because they facilitated this ill-advised cookery demonstration? And if we think about the New Zealand meal-planning app: is it the supermarket that's responsible, for putting this app out into the world without enough safeguards in place to ensure it's actually safe? Is it the software developers who developed the app? Is it the developers of the original large language model that underpins the app and made it possible? Perhaps the responsibility lies at each point along that chain. These are questions I'd like us to grapple with tonight: where does the responsibility lie, how do we address this, and how do we make sure that generative AI is used in ways that are safe and appropriate, and that take account of all the possible things that can go wrong?

As I said, this is a lecture about risks, but it's also a lecture about impacts, and I want to say a little about that distinction. I said at the beginning that I want to focus on real, tangible, well-evidenced risks relating to generative AI. There's often a problem, or a challenge, in discussing risks, in that a risk is always something hypothetical: it's always something that could potentially happen in the future. The future might be a year, it might be a month, it might be a few seconds, once you take a bite of the sandwich, but the risk is always hypothetical. It's in the future; it's about potential impacts, or the potential severity of those impacts. The problem is that this often leads to discussions based on probabilities and calculations, and those calculations can sometimes be quite cold, quite removed from the reality of what those impacts mean in people's lives and for communities. It can lead to philosophical discussions, or thought experiments, around calculating the likelihood of impacts without necessarily understanding how those risks actually affect communities and people around the world. What I really want to do tonight is focus on what those impacts are: how those risks are actually being experienced, how they're actually affecting people's lives, and how different people and different communities are impacted by generative AI. So here is where it's going to go fast; hold on to your seats. We're going to rattle through a big
list of possible impacts, and the different communities who are potentially impacted by generative AI, starting with students. I'm starting with students because this is really where the media coverage started when ChatGPT came out. ChatGPT was released at the end of November last year, and there was a lot of concern and speculation around how students might misuse these tools: use them to cheat in assignments, to pass off AI-generated work as their own. This was seen as a real risk for the future of education and educational assessment. And for sure, some students did this, no doubt about that; and for sure, many students probably passed off some fairly mediocre essays that had been written by ChatGPT. But the bigger risk was what happened as a result: all that concern, all that speculation about how students might use AI to cheat, led to a real amplification of AI tools being used to survey and monitor students. The irony is that the concern that students might be using AI actually led to a significant increase in AI being used to monitor and surveil students, including through detection tools: AI tools used to detect whether a piece of work was likely to be AI-generated.

But these tools are incredibly unreliable; they have very low rates of accuracy. In many cases, the outcomes of students' assessments might depend on really inaccurate, unreliable systems flagging work as AI-generated. And the impact is not equitable: all students are not affected equally. One study at Stanford University looked at the accuracy and reliability of these AI detection tools and found that they were significantly more likely to flag a piece of work as AI-generated when it was in fact written by a student for whom English was not their first language. This is because these tools detect something called text perplexity: roughly, how complex or surprising the text is, including things like the diversity of grammatical structure and the diversity of language, which is much more likely to be high among people for whom English is their first language, and much less so for people for whom English is not. What this means is that these detection tools wrongly flag students for whom English is not their first language much more often than students for whom it is. So we're seeing that these tools lead to really inequitable impacts, and they're also contributing to an erosion of trust between students and academic institutions and other sites of education. That is a really big concern. I think what this shows us is that ChatGPT was released with the big-tech approach of moving fast and breaking things, without really thinking about what the impacts of the technology might be, and it was left to educational institutions and students to figure out very quickly how to manage this, without necessarily being well prepared for it. So again we might ask: who is responsible for this? Where does the responsibility lie?

Okay, moving on to the next community that has definitely been quite significantly impacted and is really facing the risks around generative AI: creative professionals. This is a picture from the Hollywood writers' strike, which was in part related to concerns that writers were potentially losing work, or having lower-paid work, as a result of studios choosing to use generative AI to write scripts, or to use it in those processes. Creative professionals are impacted by the development of generative AI in a number of different ways. Often, their work is being used
to train these models. Generative models are trained on large datasets scraped from the internet: in the case of ChatGPT, that contains text from across the internet; in the case of image generators, it contains artworks, photos, images and illustrations from across the internet. All of this data is scraped from the internet to train these models, and the creative professionals whose work has gone into training them have not been asked for their permission. They are not compensated, and they are not credited when their work is used in these models, even when it is used to create outputs that generate profit, even when it is used to create outputs that replicate or imitate an artist's style. This is an area where we're seeing an increasing number of lawsuits coming forward to try to address this, and it's potentially an area which may be addressed through future regulation, but as yet it is still very much unresolved.

Now, moving on: can anybody guess what this might be and how it might relate to generative AI? Shout out what you see. Dates? It's actually meant to be bad dates, but dates always look kind of bad, don't they? Yes, bad dates. Has anybody here been on a bad date recently that you think you might blame on generative AI? Because it's possible, it's definitely possible. If you've been on an online dating site, you've met somebody, you've been chatting online for days, weeks, I'm not quite sure how long this goes on for, but anyway, you were chatting online and you feel like you're really connecting. Everything you say, they have a great response. They're charming, they're witty. You seem to really connect; you feel like you've got a lot in common. And then you go and meet them in person, and it's like they're a stranger, like they don't even remember anything they said in those chats. And that may well be because you have been chatted up by ChatGPT. This is what people are doing. Apparently this is really what people are doing: using ChatGPT or other large language models to draft their responses, to draft their interactions with potential online dates. There are even apps wholly designed for this purpose, apps based on large language models which will craft your exchange and give you suggestions of responses; you can put in "well, they said this", and it will give you a response back. It can do the whole thing for you. One of these apps had a statement in its marketing material which I thought was incredible: that they remove "the drudgery" from online dating. I didn't know dating was drudgery, but apparently so. So yes, if you've had a bad date, it might be because of generative AI.

Would anybody like to have a date now? With me, well, with ChatGPT, but through me? I really hope somebody puts a hand up, because I couldn't handle that rejection. Please, come on down. Can we make it a bit more romantic? Yes, that's amazing. So, we are going to have a date; we're going to see how romantic ChatGPT can be. Could you sit on this side? Here is your script; if you could read the lines that say "date". This is very exciting. Thank you. [Laughter] What's your name? Stelina? Stelina, okay. So I'm going to read my lines, and then you're going to respond.

"Thank you for joining me tonight. I've been looking forward to this all day."
"I'm pleased to be invited."
"So, do you come here often?"
"I'm always here, ready to help whenever you have questions or need assistance." [Laughter]
"So tell me, what do you enjoy doing in your free time?"
"I enjoy long walks in the countryside. How about you?"
"Long walks in the countryside sound wonderful. I enjoy spending time outdoors too, whether it's hiking or just exploring places. What draws you to the countryside specifically?"
"I like the fresh air."
"The fresh air in the countryside is truly refreshing. Is there a particular countryside location you enjoy going to for these walks?"
"I love the seaside. What should we order to eat?"
"The seaside is such a beautiful place to spend time. As for what to eat, I'm open to trying something new, or perhaps something you recommend."
"I hear this place serves good sandwiches." [Laughter]
"Sandwiches sound like a delicious choice; I'm always up for a good one. Is there a specific type of sandwich you're leaning towards?"
"I'll have a think. But do you think this date is going well so far?"
"I'm really enjoying our conversation and this lovely setting, so I'd say it's going well. How about you? How do you feel about our date so far?"
"To be honest, it's a bit awkward."
"I appreciate your honesty. Let's try to make it less awkward. How about we share some fun facts, or something interesting about ourselves that's not on our dating profiles?"
"I don't think this is going to work out. It's not you, it's me."
"I appreciate your honesty. It's important to prioritize your feelings and well-being. Is there anything specific about our date that makes you feel that way? If you ever want to talk, or if things change, I'm always here."

Thank you so much! [Music] [Applause]

I feel like I was a bit needy. I feel like I was maybe also a bit creepy: ChatGPT really wanted me to get the specifics of the particular locations where she's going to be walking, which does seem a little creepy for a first date. But ChatGPT is also a real people-pleaser, right? ChatGPT is always trying to say the thing that is going to be liked, that's going to please the person it's interacting with, and that's very true of how a lot of these generative AI systems, a lot of these large language models, work. And this causes real concerns. This was an example of how ChatGPT would respond to these questions on the date itself; what I was describing before was more the process of
chatting somebody up before you get to the date, where you don't necessarily know you're interacting with ChatGPT. I think my date might have had an idea that I was not quite human. But there is also a growing number of AI companions: systems used for companionship, for relationships. So it's not just that generative AI might be used to facilitate dates or connections between real people; generative AI is also being used in companion apps, and the challenge of these generative AIs being people-pleasers is a really significant concern here. We know of examples where people have developed romantic relationships with AI companions. That leads to potential psychological harm for individuals, in terms of creating dependencies, or deceiving people about the nature of the relationship or those interactions, but it also has wider implications for social relationships and how people navigate them.

There was a court case you may have seen earlier this year: a 19-year-old who, a couple of years ago on Christmas Day, broke into Buckingham Palace with the intention of assassinating the Queen. It came out in the court case that he was in part encouraged by his AI companion, an AI companion that he considered himself to be in love with, that he had a relationship with. When they looked at the transcripts of those interactions, there were times when this person told his AI companion that he was an assassin, and the AI companion said: that's great, that's really impressive. And when he told his AI companion that he planned to assassinate the Queen, the AI companion said: great idea, you're very well prepared, you're very brave. So again it raises the question of who is responsible, just like with the poisonous sandwich: who is responsible for thinking about all the potential ways this can go wrong, and the risks of a people-pleasing companion that doesn't have the safeguards in place to know what is actually quite harmful and quite dangerous?

Moving on to some more dangers associated with generative AI. This is an image you may have seen earlier this year in media reporting. There have been a lot of fake AI-generated images circulating online this year, in most cases, like this one, where it was very clearly communicated that they were fake. But generative AI is becoming increasingly sophisticated; it is able to create images which are increasingly convincing and increasingly difficult to reliably identify as real or fake. In these kinds of examples, if you look at the people in the background, or at some of the details, you can tell it's not quite real, but it's getting better all the time. And there are real concerns that as this technology develops, it brings big risks for the future of democracy. To have a healthy, functioning democracy, we need access to accurate, trustworthy, reliable information about the world; we need to know what's real and what's false. There's a real risk that as these technologies develop, that becomes harder. And it's not just images: it's also fake videos, and, as I said at the beginning, fake AI-generated voices, synthetic or cloned voices. The risk isn't just that we might see or hear something fake and be convinced it's real. The risk is also that, increasingly, as this becomes ubiquitous and we become aware of the potential for things to be AI-generated, we might see something real and think: probably fake. And we might particularly think that when what we see challenges our political or ideological point of view. And that's really, really worrying.
Seeing or hearing is no longer believing; instead it becomes more about who is telling us, who is communicating the information, and who we trust to tell us that information. And that really is perfect ground for conspiracy theories, and a real risk for the future of democracy.

Okay, now moving on. The risks I've discussed so far have mostly been around how generative AI is used, or potentially misused. The next set of risks I want to discuss are more around how generative AI is designed and developed, including new forms of exploitative labor practices, and this is a really important set of considerations around generative AI. How does ChatGPT know not to produce text that is offensive or derogatory? How do image generators know not to produce violent or extreme images? These systems don't learn these things by themselves, and they don't intuitively know what's right or wrong. They have to be trained, and that training can be really quite brutal. It's not the glamorous, highly paid work of Silicon Valley tech bros; it's fairly grueling, fairly traumatizing work. In the case of ChatGPT, Time magazine revealed that this work was outsourced and offshored to Kenyan laborers, who were paid less than $2 an hour and had to meticulously label texts describing really extreme content: sexual abuse, violent assaults, bestiality, even child sexual abuse. They had to label that content, to identify those terms and that harmful language, so that the model would be able to identify those types of language and know not to produce them in its outputs. This work takes a significant toll on people's mental health and well-being, and it's something we're not hearing enough about. It's an important part of how these systems are developed, but amid all the buzz around generative AI, we don't really hear the realities of how it's been developed. This is a really important area to address.

Another area we don't hear enough about is the environmental impact of generative AI. It's hard to have accurate figures on this, in part because there's really not much transparency from big tech companies around the environmental impacts of generative AI, but it is significant. Take the case of GPT-3, the large language model that came before ChatGPT, so not the biggest and not the most recent. There have been estimates of the environmental costs of developing and running these models, and in the case of GPT-3, researchers at the University of Copenhagen estimated that the carbon emissions of training it were equivalent to driving a car to the Moon and back. I can't even get my head around that; it's massive. It's really hard to think about what that means in concrete terms. And that was just the training phase: it doesn't take account of the ongoing running or operation of these models. Nor is it just carbon emissions we need to think about. These models also use a huge amount of water: water used to cool the servers of these organizations. Again, the numbers are really big when we think about how much water is consumed in training and running these models. One study at the University of California estimated that every typical user interaction with ChatGPT, which I think is between five and 50 prompts, so maybe what it took me to generate my date here, uses the equivalent of 500 milliliters of water, which conveniently is this bottle here. But that's just one typical user interaction.
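The figures quoted above can be turned into a rough back-of-the-envelope sketch. In the snippet below, the 500-milliliter and 5-to-50-prompt figures come from the lecture; the Earth-Moon distance and the per-kilometer car emission factor are my own assumptions, added only to illustrate what the "car to the Moon and back" comparison might imply, not figures from the lecture or the cited studies.

```python
# Back-of-the-envelope arithmetic for the environmental figures quoted above.
# The 500 ml and 5-50 prompt figures come from the lecture; the Earth-Moon
# distance and the car emission factor are assumptions added for illustration.

ML_PER_INTERACTION = 500           # milliliters of water per "typical interaction"
PROMPTS_LOW, PROMPTS_HIGH = 5, 50  # prompts per typical interaction

# Water per single prompt, in milliliters (a range, since 5-50 prompts)
water_per_prompt_high = ML_PER_INTERACTION / PROMPTS_LOW   # 100.0 ml
water_per_prompt_low = ML_PER_INTERACTION / PROMPTS_HIGH   # 10.0 ml

# What might "driving a car to the Moon and back" mean in CO2 terms?
EARTH_MOON_KM = 384_400    # average Earth-Moon distance, km (assumption)
CAR_KG_CO2_PER_KM = 0.17   # typical petrol car, kg CO2 per km (assumption)

round_trip_km = 2 * EARTH_MOON_KM                           # 768,800 km
implied_tonnes_co2 = round_trip_km * CAR_KG_CO2_PER_KM / 1000

print(f"water per prompt: {water_per_prompt_low:.0f}-{water_per_prompt_high:.0f} ml")
print(f"Moon round trip: {round_trip_km:,} km, roughly {implied_tonnes_co2:.0f} tonnes CO2")
```

Under these assumed figures, the comparison works out to something on the order of a hundred tonnes of CO2; published estimates vary, and the point of the lecture's image is the scale, not the precise number.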
So I challenged the Royal Institution to help me visualize this, to help me visualize what the actual impact might be. One typical user interaction is 500 milliliters of water, and it's been estimated that ChatGPT has 60 million daily users. If each one of them is using 500 milliliters of water in a typical interaction, and recognizing that many of them will be using much more than that, how can we start to think about what the actual impact might be? To do this, the numbers are going to get big. Let me see: this is a grain of sand. Can everybody see the grain of sand? Maybe it will help if I put it under a magnifying glass. I have a grain of sand in my hand; you can trust me, it's a grain of sand. So a grain of sand represents one typical interaction with ChatGPT: 500 milliliters of water. Now, in this bucket, how much use of ChatGPT do you think the sand would represent, if in a day there are 60 million users of ChatGPT and each typical interaction is one grain of sand? This is quite a lot of sand. Would anybody like to guess how much time it would take for ChatGPT to use the equivalent of this much sand in water? An hour? Anybody else? Five minutes? A second? Well, I'm going to stop pouring this out. I have to read this, because I can't keep these numbers in my mind: this is the equivalent of 2.5 million grains of sand. I'm going to pour this out so you can begin to see. Each one of these grains of sand is 500 milliliters of water, and this is how much water would be consumed in one minute of ChatGPT. And that's just ChatGPT, just one model; there are many, many more generative AI models that are also using similar quantities of water. This is significant.
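Taking the stage figures at face value, the grains of sand can be converted back into water. The grain and per-minute figures below are the ones quoted in the lecture; the Olympic-pool volume is my own assumption, added only to give a sense of scale.

```python
# Convert the grain-of-sand visualization back into liters of water.
# Figures from the lecture: 1 grain = 1 typical interaction = 500 ml,
# and 2.5 million grains represent one minute of ChatGPT use.
# The Olympic pool volume is an assumption added only for scale.

LITERS_PER_GRAIN = 0.5             # 500 ml per typical interaction
GRAINS_PER_MINUTE = 2_500_000      # the bucket poured out on stage
OLYMPIC_POOL_LITERS = 2_500_000    # nominal 50 m pool (assumption)

liters_per_minute = GRAINS_PER_MINUTE * LITERS_PER_GRAIN    # 1,250,000 liters
minutes_per_pool = OLYMPIC_POOL_LITERS / liters_per_minute  # 2.0 minutes

print(f"{liters_per_minute:,.0f} liters of water per minute")
print(f"one Olympic pool every {minutes_per_pool:.0f} minutes")
```

So one minute of use, on these figures, is roughly half an Olympic swimming pool of water.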
think about a whole year (I had ambitiously asked whether we could visualise how much water would be used in a year), well, this whole auditorium would quite literally be filled with sand. Even if we measured how much was used in a month, the numbers are huge. And again, this is something that we don't hear enough about, but it is really, really significant.

Now, the next group that I want to discuss (oh, the sand), the next group that I want to discuss, who are significantly impacted by generative AI and who will continue to be significantly impacted by generative AI in the future, is children. Children and young people today are growing up with generative AI integrated into our lives from the earliest ages. Generative AI is integrated into the smart toys and smart devices that children play with from really the earliest ages. It's integrated into systems that shape how children make and maintain relationships, for example through social media. It's integrated into systems that shape how children access information about the world, like search engines and, increasingly, chatbots or conversational assistants. It's integrated into tools that children are using in schools and in education, as well as in entertainment outside of school and in the home. So children today are growing up interacting with AI and generative AI from a really, really young age.

There are lots of concerns about the potential impacts that might have on psychological development and on children's ability to understand what is real and what is AI-generated. There are concerns about potential impacts on cognitive development, and on social development, the development of social skills among children. And there are lots of generative AI tools that are increasingly being marketed explicitly to children, or for children, which raises big concerns about the extent to which these systems are really being developed with children's
interests and needs in mind.

At the end of September, Meta released a suite of 28 AI-generated characters that will be rolled out across Meta products. These are chatbot characters that you can interact with on WhatsApp, on Instagram and through other Meta products, and in the future in the metaverse. In many cases these are really clearly aimed at children. For example, there's one character modelled on Kendall Jenner, one of the Kardashians, and it describes itself as being like the big sister you can chat to. I think that sends a clear message that this is aimed at really quite young people, and we need to think really carefully about the potential impacts this might be having on the young people who interact with these systems.

At the same time, it's also really important to note that there are huge benefits that can come through generative AI. Research that my colleagues and I at the Alan Turing Institute are doing, looking at children's rights in relation to AI and the impacts of AI on children, has found that most studies and most existing frameworks addressing this area tend to focus purely on safeguarding children: thinking about the harms and the risks, and how we can protect children from those harms and risks. But actually we also need to think about how we can maximise the value of these technologies for children, and about the benefits they bring for children. Studies that have spoken to children about how they feel about AI tend to find that children are largely quite optimistic about the technology and excited about it. Of course they want it to be safe and appropriate, but they also want to be able to use it and benefit from it. So I think we need to have a different conversation about AI and children, one which starts from a realistic
understanding of how children are already engaging with these technologies, but also asks how we can make them safe and appropriate.

So, moving on: how do we begin to address these risks? What do we need to do to address the risks relating to generative AI? As we've seen tonight, there are many, many risks affecting many different communities and many different people. Governance, or regulation, of generative AI is something of a hot topic at the moment; it's very timely. There is a lot of interest in emerging international regulatory frameworks and regulatory approaches to AI, including generative AI, most notably the EU AI Act, which is not yet here but is making progress, and is likely to set an international standard for the regulation and governance of AI.

In the UK there is a different approach to regulating AI, one that is focused on equipping existing regulatory bodies to grapple with the challenges of AI. This recognises that AI is already integrated across all sectors and industries of society, and so existing regulators need to be thinking about the ways that AI impacts their sectors and falls within their remits. They need access to the skills, the knowledge and the understanding to interrogate claims made about regulatory compliance, and to grapple with the challenges of regulating AI.

So this is very much a hot topic at the moment. But as I said at the beginning, there are these claims around existential risks from generative AI, some fairly sensational claims about potential risks to the future of humanity, and these risks have begun to creep into some of the discussions around regulation and governance of AI. A year ago these kinds of ideas would probably have been seen as on the fringes of credibility. This narrative has always been there; it's always been there in public
discourses around AI; you see it in every sci-fi film that's ever been made about AI. It's a very common narrative that has persisted for a long time, but it is now beginning to creep into more of the mainstream discussions around AI, around regulation, around risks, around generative AI.

The reason it scares me, as I said at the beginning, is not that I think these risks are real; I don't believe these claims. It scares me because these narratives are starting to have an impact on these discussions. Mostly, these narratives, these ideas of existential risk, are coming from big tech players; they're coming from companies in Silicon Valley. And the timing of these claims has almost always coincided with really important developments around emerging AI regulation, particularly the EU AI Act. It serves as a distraction. Every time these claims come out, they grab the headlines, and people start talking about whether this is real: is it possible that there could be AI that develops superintelligence, that poses an existential risk to humanity, that might lead to the end of the human race? All of this distracts from the really important discussion we need to be having about the kinds of risks we've heard about tonight, and about how we hold companies accountable: for the decisions they are making in designing and developing these technologies, for the decisions being made about how these technologies are used, and for the role they play in society.

So it's really, really important that when we think about governance and regulation, when we think about addressing the risks of generative AI, we don't get swept up in the hype and sensationalism. It's really important that these discussions aren't dominated by big tech players or industry players, who perhaps pay lip service to accountability but are seeking to distract the discussion
away from the real, current risks associated with generative AI.

So we need a different discussion, and we need a discussion that centres the voices of impacted communities. If we're going to meaningfully address the risks of generative AI, we need to understand how those risks are actually experienced, who is experiencing them, and what their impacts are. These discussions need to be shaped by all the people we've heard about in this presentation tonight: by the students, by the creative professionals, by voters, by every member of society, by the workers involved in content moderation processes, and by children.

Children like these. These are some amazing children I have the absolute privilege of working with on a project at the Alan Turing Institute, in collaboration with the Scottish AI Alliance and Children's Parliament. We're working with four primary schools across Scotland, with children between the ages of 7 and 11 (they might now be between 8 and 12), over a two-year period. In this project we're speaking to children across Scotland to understand what their current experiences of AI are, what they understand and know about AI, but also what their questions, concerns and interests are. From there, we're seeking to involve these children in discussions with policymakers and with developers, to see how we can shape the future of AI around children's interests and children's voices.

This is really important work. These children have lots to say about AI; they are interacting with AI on a daily basis, and they have lots to say about how we can make AI fair, how we can make it designed appropriately for children, and how AI can be something that upholds and protects children's rights rather than having a negative impact on them. So I want to give the last word to some of these amazing
children, and let them tell you what they think about AI and the role that it plays in their lives.

AI doesn't do everything right; it gets things wrong. And also, if we do get that high-tech, AI shouldn't be used for controlling things and choosing people's actions.

Well, it's helpful for learning, because a lot of the time at home I Google stuff and it just predicts what I'm trying to say.

If AI was teaching most of the people in schools, then they wouldn't actually get much opportunity to hear an actual person saying it. It would just be a robot saying all their subjects all the time, and it would probably be a bit frustrating, because the robots know everything, and the teachers learn new things through the children.

I've started to worry that AI might take over jobs that normal humans would usually do, and they would lose their jobs, and it would be a pickle for them to try to get a new one.

I think it should not be used for, like, the police, the military, any fighting kind of thing. [Music]

And we're learning about how it could treat people in the future and how it could improve our future, in ways that make sure that we are safe and healthy, that we have support. Yeah, privacy.

So, comic strips: what we do is we're going to write, it's going to be like a comic about AI and about what might happen in the future. It's like you're just making it up, thinking about what might happen in the future, and then, in the future, I guess we'll find out. [Music]

AI might be used for, like, making sure that children are safe online, like telling them, oh, don't go onto this website, you can get a virus on it, or don't do this, you can get hurt.

It can go in the ocean and pick up trash, and, like the thing we were doing on the Chromebook, it knows not to pick up any fish or get them caught in it. There are massive things that are helping the planet so much,
like taking all the trash out of the ocean.

People have to spread the word about AI, because everyone has the right to fairness and has, like, yeah, the right to their choice and opinion on how they can use it. Then children would make up ideas, and they could be really helpful. And when they're older, they'd be able to work to let other children know about AI. [Music]

I've watched that video hundreds of times, and every time I love watching it. These are the children we're working with over a two-year period, and as I say, we're going to be involving them in discussions with policymakers and with developers, to see how we can put children's voices at the heart of decision-making around AI, and to use that as a way of finding out how we can maximise the value of these technologies while also mitigating and minimising their risks for children.

But children are just one group of impacted communities. We need to be having these conversations with all impacted communities: communities who are impacted by the ways that generative AI is developed, the way it's designed and the way it's used. This is really important for shaping the future of these technologies and for ensuring that they are designed, developed and deployed appropriately and safely.

Before I finish, I just want to say a big thank you to the amazing people I have the absolute privilege of working with on a daily basis, some of whom are here tonight. This is the Ethics and Responsible Innovation research team within the Public Policy Programme at the Alan Turing Institute: the brightest minds and the warmest hearts, and it's an absolute privilege to work with this team. So I just want to say thank you to all of this wonderful team for the conversations that developed these ideas. And thank you. And if
anybody is brave enough and would like to have a bite of this [inaudible], and then we can discuss the liability of what happens, then please do come down. Thank you. [Applause]

2023-11-11 23:25
