Seminar recording with Professor Colin Gavaghan (17 May 2023)
Hello everybody, it's lovely to be here and a real pleasure to be able to introduce Colin. I'm going to start with this: "Too often we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data, the proliferation of disinformation, and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want." If you were listening from home, you might have thought that voice was mine and the words from me, but in fact that voice was not mine and the words were not mine: the audio was AI voice-cloning software trained on my floor speeches, and the remarks were written by ChatGPT when it was asked how I would open this hearing; you heard just now the result.

That was Richard — Senator Richard Blumenthal — opening up a Senate committee hearing yesterday, at which Sam Altman spoke. Those of you that read that news piece will know that Sam, who is the CEO of OpenAI, is calling for regulation of AI, and we're now seeing an increasing number of people saying we need to pause, we need to govern, we need to regulate. So I can't think of a more timely moment for Colin to come and talk to us about the subject of AI regulation, and how we balance innovation — the speed of innovation and the creation of opportunities — with precaution and ensuring that we do good in the world.

Colin has come to us from the University of Otago in New Zealand, which he told me is the most southern university in the world — so there you go — where he was at, now let me see if I can get this right, the Centre for Law and Policy in Emerging Technologies. Is that right? That's the name, and I think there's something about the New Zealand Law Foundation in there as well. Colin's research interests sit at the sort of intersection of reproductive technologies as well as AI and regulation, and he's just co-authored a book, a citizen's guide to AI, which I think sounds like mandatory reading, actually, for a lot of people. I'm really looking forward to hearing your talk today, Colin, and I'll be back to host some questions from those of us that are here in the room and online. And for those of you that are online: I am real, and I believe Colin is also real. Thank you.

Thanks very much for that introduction, Richard, and thanks as well, Healy, for the housekeeping — I'll remember the special code for the fire alarm if the questions get too tough. Okay, so I should say first of all, by way of a lawyerly caveat, that I've interpreted the "introduction" remit fairly literally, so don't expect anything paradigm-shifting in terms of insight into the law here today. I see this really as the start of a conversation, or I guess a bunch of conversations, with my new colleagues and with you, fellow citizens of Bristol, because there's an awful lot here that I still don't know the answer to.

The title of this talk — nope, that's not it — the title of this talk, as you possibly recognise, is a remix of Mark Zuckerberg's "move fast and break things", his original motto for Facebook, something I think that was taken up by the more disruptive elements in the tech industry thereafter. I should say, in fairness to Mark Zuckerberg, he changed that motto in 2014 or so, I think, to "move fast with stable infrastructure", which doesn't look quite so good on a t-shirt. But what I'm looking at today is how law and regulation — I'm going to use these terms somewhat mischievously, interchangeably —
how do they keep pace with fast-moving technology? How can we move fast and mend things, or maybe move fast and prevent them from getting broken in the first place?

I want to try and achieve a few things with this talk. I would like to introduce myself and the work I do. I want to talk a little bit about what tech lawyers do — some of this will be awfully familiar to some of my tech law colleagues here, maybe a wee bit less so to others — the kind of challenges they face and the kind of strategies available to them when trying to get to grips with, and keep to grips with, fast-moving technologies. And I also want to flag up a few areas where I've kind of hit a wall, I guess, in terms of my own research and my own disciplinary limits, where some kind of insight from other disciplines would, I think, be very useful.

But first, a little bit about me. I started my academic journey at this place, the University of Glasgow, where I was an undergraduate and then a postgraduate and then a lecturer. Having spent the early years of an academic career at Glasgow, about an hour from the town I grew up in, actually, I decided it was time for a change of scene, so my partner and I decamped about 20,000 kilometres around the world — as Rachel said, to the southernmost university campus in the world — for a complete change of scene, and ended up in this place. Dunedin is a lovely city and I totally recommend it. It's also the most Scottish place I've ever been, including Scotland — I heard more bagpipes there than I ever did here. As Rachel said, it was to take up the directorship of New Zealand's first law and technology research centre: the gloriously monikered New Zealand Law Foundation Centre for Law and Policy in Emerging Technologies. It's a bit of a mouthful, but luckily here they were blessed with a nice snappy acronym — that's one of the main draws, in fact, of working at the BDFI: it's got an acronym you can actually say without pausing for a breath halfway through. And then, obviously, on to Bristol.

The first part of my academic journey was actually as a medical lawyer. There's a famous saying in medical law about the relationship between law and medicine that comes from an Australian judge, Justice Windeyer, who described law as marching with medicine "but in the rear and limping a little". Well, if that was true with regard to medicine, how much more true is it with regard to technologies like artificial intelligence? These days it sometimes feels like law isn't so much in the rear and limping as being regularly lapped, and hoping to catch on to the coat-tails of AI as it zips past. To be honest, technology law — maybe technology academia generally — can feel about the same sometimes; I've been talking to some of you about that.

The relationship between technology and law is often expressed something like this — these are literally the very first two images for "law" and "technology" that I got in a Google image search, and putting them together: law is seen as being slow to make and slow to change; reactive to events; predictable, in the sense that once it's in place we should be able to predict more or less how it will apply to our actions; and of course law has to fit together, or really ought to fit together, with other laws. Technology, in contrast, is seen as being fast-moving, innovative, unpredictable in all kinds of ways, and disruptive of existing paradigms, technological and social. The truth, though, is that technology isn't always fast-moving. Sometimes there are long lulls
during which progress slows and early promise isn't fulfilled. Think of the famous AI winter, or all the yet-to-be-fulfilled promises of previous "change everything" technologies like nanotechnology or gene therapy. These might still change everything, but you couldn't really accuse them of getting out ahead of public and academic discourse, or of the opportunity to regulate.

At the same time, law isn't always slow to change. In the aftermath of the Christchurch terrorist massacre in New Zealand on the 15th of March 2019, and the online proliferation of footage taken by the shooter during that massacre, Australia fast-tracked a legislative response, the Sharing of Abhorrent Violent Material Act. And when I say they fast-tracked it — let's take a look at this timeline. The massacre was on the 15th of March; by the 3rd of April they had drafted and introduced legislation; and it had made its way through the whole of the legislative process and received royal assent by the 5th of April 2019. I'm fairly sure I've spent longer on hold to my bank than that. That is a remarkably fast process of law drafting and law making. Whether that's a good law, or a good way to make law, are valid questions, but it does show that lawmakers can respond very quickly indeed to new technologically mediated harms and risks.

So why don't they do that more often? Why has it taken two years for the European Union's AI Act to get from first draft to the stage of actually being voted on, and about the same time for the UK's Online Safety Bill to get only about halfway through the House of Lords part of the legislative process? Well, there's a difference, I guess, between this law and those other ones: this was a law addressing a very specific and discrete mischief, and that's a bit easier to deal with than a wide-ranging area like AI or online harm. Also, it just comes down to political realities — governments change, and so do legislative priorities. But in fairness to lawmakers, and rule-makers of other sorts, there are also some real challenges in trying to write good laws for fast-changing technologies.

This is a quote that I think is most plausibly attributed to Niels Bohr, but I think was made famous by Yogi Berra, the baseball coach — the one about how it's difficult to make predictions, especially about the future. It might seem trite, but it's really onto something when we're talking about some of the challenges of rule-making for emerging technologies. If you take the currently very topical example of AI regulation, an obvious problem is that there exists a wide array of highly disparate predictions about its likely limits, uses and eventual impacts. Take a look at this quote — tell me who you think said this: "AI — what will it mean? Helpful robots washing and caring for an ageing population, or pink-eyed terminators sent back from the future to cull the human race?" Any guesses? Do you remember this guy? I love this quote for various reasons. One of them is that Boris Johnson's spectrum of benign uses of AI starts with robots caring for the aged, which I know for a lot of people is kind of more towards the dystopian end of the spectrum. But that aside, time-travelling Terminators probably aren't at the top of most people's watch list when it comes to AI perils.

But there are some pretty serious people expressing some pretty serious concerns. There is an open letter, which Richard alluded to, calling for a six-month pause in the development of powerful AI systems. This gives a sense of what those concerns might be: should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the
fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation? And there are some pretty serious people, both from the AI industry and the wider tech industry, putting their names to that. Are these risks plausible? Will AI take more jobs than it creates? Does it pose an existential threat to our civilisations? And how on earth is law supposed to respond to such dire predictions?

A couple of lessons, I guess, that I start with when I'm introducing this subject to my students, or speaking to government people about it in New Zealand. We have to start with a little bit of humility from the lawyer's perspective: not every problem in the world can be fixed by a law, or by more law. If your only tool is a hammer, then every problem can look distinctly nail-like, but there are good reasons for caution when trying to resolve every social problem with a legal solution. Law is a very particular kind of tool. It's good at certain things, but there are many others available: education, social pressure, etiquette, non-legal rules of various sorts that govern particular places, market forces. A central tenet of liberal thought is that there is, or there ought to be, a realm of private life that's simply not the law's business. Now, precisely where that border should lie is contested, but I think most people would agree that there is a border. If nothing else, we'd presumably agree that there should be a de minimis threshold, so that the resources of law enforcement aren't wasted on trivia. And there are instances where the presence of more laws and rules, beyond giving the appearance of doing something, doesn't actually help those at most risk.

A few years before I left New Zealand I was contacted about this guy. This is Tony Ross, better known to New Zealanders as "Tauranga Tony". Mr Ross was a freelance sperm donor — what does this have to do with AI? Bear with me — a freelance sperm donor who'd been offering his services via a Facebook group. His clients were women and couples who were desperate for babies, of course, but who for a variety of reasons — time, cost — didn't want to go down the official clinic route. Several of the mothers had made complaints about Mr Ross, alleging, among other things, that he had lied to them about how many other couples, or how many other people, he was providing his services to. He had by this time fathered at least 20 children, possibly a good many more, which in a small, close community — I hardly need to spell it out.

At that time I had a role on New Zealand's assisted reproduction regulator, and I was contacted by the media in that capacity. I was asked whether what Tony Ross was doing was against any laws, and I told them that, as far as I could see, it actually probably wasn't, for reasons I could talk about if anyone is interested. The next question, obviously, was: is this a gap? Should we have laws to stop what he's doing? Well, that wasn't such a clear one. On the one hand, if he really was inducing people to get pregnant with his sperm on the basis of misrepresentation, that seemed morally wrong. But was it the sort of wrong that law could fix? After all, these women had been driven out of the regulated, official sector because it was too hard to access. Would more rules actually help them? Maybe — but we couldn't just assume that would be the case. By the way, when the programme was aired, Tony Ross made a complaint to the broadcasting regulator, a complaint to which I wasn't a party but in which I was certainly implicated. He didn't succeed. And when the
link to the programme went up on the website, this was the thing that really amused me: "Sunday investigates the unreal, unregulated world of DIY sperm donation in New Zealand. We tracked down the country's biggest donor and speak to the women who've had his babies — and who want him to stop." Picture the interesting chats in the Gavaghan household that night.

Even if we think that technology has given rise to a problem that could potentially be a valid target of law, it doesn't invariably follow that new law is needed. There's a lot of law around, and it's unlikely that many new technologies that emerge won't interact with some of it at some time or other. Take ChatGPT, and generative AI generally, as an example. What kinds of areas of law might that interact with? Well, defamation law, if it produces damaging lies about people and other people repeat them. Copyright law — we're seeing that already as being an issue. Privacy and data protection law, when it scrapes up data from other sources to inform its results. It was the presence of existing data protection and privacy law that led the Italian privacy regulator, the Garante, to ban ChatGPT for a while in Italy. They didn't need new laws for that; it was the existing laws that they relied upon. Most recently, the competition regulator in the UK is taking a look at ChatGPT, again using existing law. So I guess my lesson here is: before we rush to scratch-build a new regulatory or legal response, let's have a look at what's there already, and ask, if it's not working, why it's not working, and why we might think new law would work better.

The accretion of so much law — I recently came across this great expression, "legislative space junk": this idea of so many bits of law floating about in the environment that it's getting hard to get anything else through without bashing into some part of it — so much of it at least slightly likely to be relevant to, say, generative AI, is starting to present a problem for drafters of new law. The EU's AI Act, for instance, has to navigate its way around the GDPR, the new Digital Services Act, the AI Liability Directive and various other bits and pieces of EU law. Making sure that none of the Act's 85 articles or nine annexes contradicts any part of that is, to say the least, a bit of a challenge.

My next issue is this one. That's not to say there can't be gaps in the legal or regulatory coverage. But as with the maverick sperm donor case, we should pause before rushing to apply legislative Polyfilla — I get trapped in metaphors sometimes — and ask whether it will actually address the problem it's supposed to.

I was glad that Rachel started with that clip today, because this is a subject I've become quite interested in — I'm actually giving another talk about it next week. One of my closest friends and collaborators in New Zealand, a chap called Ali, does work for a company called Soul Machines, an AI company, and they're creating avatars like this. Now, this lady, to me, is still wandering right down the middle of the uncanny valley, but they're getting very good. And it's not just that they look good: they're getting very good at mimicking human responses when you speak to them, like responding in an emotionally appropriate way to a variety of things you say. Is this a worry? Daniel Dennett, in The Atlantic — today, actually; very topical, as you said at lunchtime — is saying this is one of the greatest threats facing democracy and civilisation. I don't know if I'd go that far, but I certainly agree with
Woodrow Hartzog, when he wrote in 2015 — he was talking about embodied robots, but I think AI avatars are at least as big a threat here — that they are uniquely situated to mentally manipulate people. Certainly some of them are being optimised specifically to do so: to manipulate us, not just in terms of our buying decisions but our political decisions, disclosing data, personal information and the like. The EU has responded to this in the AI Act with an obligation on providers of AI systems like this to ensure that it is disclosed to the natural persons interacting with them — they have to tell you that it's an AI you're dealing with and not a person. Is this the right legislative response? Is it going to make a difference? Is knowing that we're interacting with an AI system going to help insulate us from the manipulative capabilities for which these sorts of systems are being optimised? I literally do not know the answer. My intuitive feeling is that, as with much of our consumer protection law, which is predicated on deceptive and misleading practices, it may not be quite engaging with what's really going on here, which is effective manipulation. But I'd love to speak to someone in psychology, or wherever, who has some insight into that — I think it's a really interesting question.

Okay, so let's assume that we decide that new law is needed: law is the right vehicle and existing law isn't doing the job. There are a couple of other things we need to think about here. Roger Brownsword, the kind of guru of technology law in the UK, has coined these two expressions: "regulatory phase", which is about when we should regulate, and "regulatory connection", which is about what we regulate and how. Both of these present challenges for lawmakers and regulators in this area.

If we look at phase first — the question of when we ought to regulate — this has become really topical over the last month. The UK government last month published its AI white paper. It was very bullish about AI technology, it was very pro-innovation, and it takes this view: rushed attempts to regulate AI too early would risk stifling innovation. What are we to make of that claim? Well, by definition we shouldn't try to regulate too early — but when is too early, and, as importantly, when is too late? It isn't as simple as saying that industry doesn't want this. On the very same day as the UK government white paper was released, the Future of Life Institute's open letter was also published, calling for the six-month pause on developing powerful AI. Why? Because the signatories to that letter think that developers have to work with policymakers to dramatically accelerate development of robust AI governance systems. And bear in mind that the same letter was signed by people like Musk and Wozniak and Mostaque, so it wasn't just the usual suspects in law or media; industry bigwigs are also saying this. As for the EU's approach, with regard to the AI Act, one of the stated specific objectives of that Act is to ensure legal certainty to facilitate investment and innovation in AI. So even if we were only concerned about the impacts on innovation, it isn't clear that holding off on regulation is the right way to go. And of course, there are a great many areas where stifling innovation is precisely what we want to do — the areas that the EU is hiving off as unacceptable uses of AI, that's the kind of bad innovation that we don't want. What I can see as a lawyer, though, is that there are risks on both sides. Failing to regulate until something bad has happened has an obvious cost,
but there are risks with going too early as well. This is the one and only tax case that I could say anything about — and I can't say very much about it, because I don't even know what the rule of apportionment is, so please don't ask me that — but I love it because of this underlined bit here from Justice Frankfurter. He's talking about courts and judges here, but I think it applies to lawmaking more generally: we ought to be wary of raising questions that we ought not to anticipate, and we are not to embarrass the future with answers that can at best only be a guess at what's likely to come next.

This brings me to Brownsword's notion of connection: what to regulate. Obviously, law can sometimes struggle to keep pace with technologies that disrupt our prior understanding of how things had to be. A particular example here, driverless vehicles, is quite fun. There was a time in New Zealand when we noticed that some of our existing road traffic law probably didn't apply, or might at least not apply, to driverless vehicles. If you look at the rule against speeding, it starts with "a driver must not drive a vehicle at a speed exceeding the applicable speed limit". Ah — well, what if the car doesn't have a driver? Would a driverless car be able to speed all it likes? Maybe. It's easy to be sympathetic to the lawmakers in that situation: 2004 is not that long ago, but driverless cars weren't realistically on the horizon at that point. And honestly, this isn't the biggest problem in the world, because a very easy fix suggests itself before driverless cars will be allowed on the roads there.

Sometimes, though, the problem can arise even when lawmakers had a particular technological target in mind. Way back in 1990, the UK Parliament was concerned about the prospect of — does anybody know who this sheep is, by the way? Before I say anything else, that actually is a picture of Dolly the sheep, trust me, not just any sheep; when you've lived in New Zealand for a long time, you know they take their sheep seriously. The UK Parliament decided that they weren't going to allow human cloning, so they passed, as part of the Human Fertilisation and Embryology Act, a rule against cloning. Six years later, Dolly was born, and the government said, it's all right, this is in hand, we banned this years ago — there was no prospect of it being used to make a human. Except that the people who drafted that law against cloning didn't just say "cloning is banned". No, no — they defined very specifically what they meant by cloning, and the very particular form of cloning that they described wasn't the one that was used to create Dolly. Panic ensues: do we have a gap? Could someone go ahead and clone a human? Parliament rushed through a legislative patch to make sure they couldn't, before eventually the House of Lords, as it then was, took a kind of elastic interpretation of that rule, saying, well, actually, we think it was probably meant to cover all kinds of cloning, really, probably.

What's the lesson from that case? To draft laws with more elastic definitions, so they can stretch to new and unforeseen forms of technology? Well, maybe. But this gives rise to a regulatory tension, because if law is going to serve not only as a bottom-of-the-cliff sort of mechanism but as an action guide — we should, after all, be able to try to comport our behaviour so as to adhere to the law, not just wait to see what happens afterwards — then it can't be so elastic that we can't predict how it's going to be used and how it's going to
apply. This is a real, genuine regulatory tension.

Here's another one — I like these bumper-sticker-type titles for slides: "Nothing ages as quickly as yesterday's vision of the future," as Richard Corliss said. I love this cartoon as well, from Popular Mechanics in 1950. As best I've been able to find out, it was not meant to be a joke or satirical or anything; it was the cartoonist's vision of how the housewife of the year 2000 would be living her life. Leaving aside the aesthetic atrocity of a completely plastic wet room right in the middle of the living room, what really strikes me about this cartoon is the complete failure to consider that technology reducing the burden of housework might in some way change, or even obviate the need for, housewives. And certainly the notion that in fifty years' time they might no longer be dressed like Lucille Ball clearly hadn't occurred to them in the slightest. So when we're trying to predict enough of the future to make good laws for it, it's not just the form of the technology we need to think about: it's the uses of that technology, and the ways that technology might change our society in a variety of different ways.

I love this example. This is a sign from a lecture theatre at Otago University: no eating in the lecture theatre, no drinking, and no... whatever the hell this is — because no one under 30 has a clue what that's even meant to be. And when you tell them, well, that's actually a cell phone, and they get over the laugh, you ask, why do you think there was a rule against using cell phones in the lecture theatre? And you have to explain to them that there was a time — I know, right — there was a time when the predominant use for cell phones was to speak into them, or more plausibly to shout into them. Fast forward to an era where these are just very small, powerful computers. Should that rule still apply if people are using them to take notes, to look things up, to record lectures? Does the rationale for that rule still make sense when cell phones are being used in a very, very different kind of way?

This issue of regulatory disconnection is proving a problem for the EU with regard to the AI Act as well. The approach taken in this draft legislation is that there are red lines around things that are not going to be allowed at all, and then a series of a kind of stratified risk hierarchy, or perceived-risk hierarchy. That risk hierarchy, at least in early drafts, was based around the intended use of AI systems — there is a whole issue about how you define AI systems at all, but the biggest controversy, I think, has been around this risk classification hierarchy. So "unacceptable risk" covers the things that are banned altogether, and then you've got high-risk AI systems that are subject to a bunch of special rules if you want to introduce them into the EU jurisdiction. High risk is defined largely not by the form of the technology but by the intended use and the context in which AI systems would be used — so it's things like employment, access to services and benefits, border control, law enforcement: a whole bunch of fairly high-stakes kinds of stuff. Relatively late in the process, the Parliament thought, well, hang on a minute, we now have these general-purpose AI systems that could be put to a pretty much infinite range of different uses in different contexts; it's all but impossible, really, to imagine what they might all be. So they've rapidly — or
reasonably rapidly, as these things go — added on a bunch of new provisions that will deal with general-purpose AI systems as well as this kind of high-risk list.

So, almost over to you guys. This is my new-law checklist — the kind of checklist I give my students; this would be the introductory lecture if I were back in Otago. Here are the kinds of questions we should be asking when we think about the need for, or the utility of, new rules for new technologies. Is this even the sort of problem that law can plausibly address? Is it, like the maverick sperm donor case, a problem that's perhaps come about because things are a little bit over-regulated already and hard to access? If it is, what law already exists, and why isn't the existing law working? Why might we think that a new law would do a better job than the law we already have? These aren't question-begging questions — they genuinely are open to a number of different answers — but we ought to be asking them, I think. How will the new law fit with existing law? Remember the legislative space junk: how will it work its way between all of that? Is the new law just a regulatory placebo, something put in place to make us feel better but not actually make us any better off, or any safer? Do we have a clearly definable regulatory target — can we actually say what it is that we're trying to regulate? And can the law be elastic enough to cover new and unexpected forms and uses of a technology without losing its function as an action guide, without becoming so loose that it doesn't tell us anything actually useful?

As I say, I really hope this is the beginning of a conversation — not just in the Q&A in a minute, but over the next months and years with a lot of you. So thank you very much for now. Thank you.