Reining in the tech: powerful technologies and society in the 21st century by Gillian Hadfield



Welcome back, I hope you enjoyed that video. I certainly did. Now it's my great pleasure to present Professor Gillian Hadfield. Professor Hadfield is a professor of law and professor of strategic management at the University of Toronto and holds the Schwartz Reisman Chair in Technology and Society. She's the inaugural Director of the Schwartz Reisman Institute for Technology and Society. Her current research is focused on innovation in legal and regulatory systems for AI

and other complex global technologies, computational models of human normative systems, and working with machine learning researchers to build machine learning systems that understand and respond to human norms. Professor Hadfield is a faculty affiliate at the Vector Institute for Artificial Intelligence in Toronto and at the Center for Human-Compatible AI at the University of California, Berkeley, and a Senior Policy Advisor at OpenAI in San Francisco. Her book "Rules for a Flat World: Why Humans Invented Law and How to Reinvent It for a Complex Global Economy" was published by Oxford University Press in 2017. Gillian, over to you, thank you.

Great, thanks Isaac. I'm really excited to get the chance to talk to a great crowd of people at U of T today. What I want to talk about is this question of who owns the future. We know the future is changing dramatically; who's in charge of that? Well, these days it feels increasingly like it's the big tech companies: Google, Apple, Facebook, Amazon, Microsoft. They are building everything that we use; they are helping to build the very system we're running all of this on, and it can feel like they are in charge of just about everything in our daily lives. In fact, Apple is now by far the largest company on the globe, with over two trillion dollars in market valuation, and that puts it ahead of Canada's GDP. So just think about the scale of these private technology companies.

It's far beyond anything we've seen historically. But it's not just the ones we know and use in this part of the world; it's also the big AI companies in China that are increasingly building the technology infrastructure for many parts of the globe and playing a very large role: Baidu, Tencent, Alibaba. You're probably following news about them as well. But I also want to point out that you yourselves, involved in building the technology at the University of Toronto and delivering the systems, are part of that process of building the technology that has come to define so much of what we do. Particularly over the last year, none of us could have continued to do our work without the work you're doing to build that environment. Every time you're building something like this, the mechanisms by which we get access to our files, our systems, our research, that's part of building that environment. Or this one, taken from my applications for human research ethics protocols.

There's that little click box, the ones that live everywhere on the web now: "I have read and agreed." Sometimes you're involved in creating those, putting them in place and responding to them. Or this one: "user not authorized." Someone has made the determination of who has access to what kind of tech.

So that's what I mean when I talk about the importance of the ways in which all of these decisions are shaping the very infrastructure of daily life today. Just think about the amount of technology we're using right now to interact, to do our jobs, to make friends, to be with our families; everything runs on that infrastructure. And that means there's a significant way in which that technology work is what I call writing the rules of daily life. Think about those check boxes: those are the rules of who has access, those are the rules of the legal relationship between the people who click on them. So how do we make sure that we own our future? That it's not owned by companies building technology, or by systems that are just implementing technology? How do we make sure that we continue to stay on top of that? How do we make sure that we are the ones writing the rules for this new world? Well, I'm the Director of the Schwartz Reisman Institute, which as many of you know is connected to this beautiful building, which is still a hole in the ground, but a progressing hole in the ground: the Schwartz Reisman Innovation Centre. We hope to be up in it in a couple of years.

It was created, as you may recall, by what was at the time one of the biggest donations ever made to the University of Toronto, from Heather Reisman and Gerry Schwartz, as part of their commitment to helping drive innovation and technology at the University of Toronto, siting this building right at the corner of College and University, at the centre, the hub, of campus and our connections to the medical system, to technology, to the Vector Institute and so on. When Heather Reisman and Gerry Schwartz were contemplating this gift, one of the things that really mattered to them was to say: "we want to make sure that this technology we are helping to drive is also good for society, that we are doing things that are making the world a better place." Because we're all seeing ways in which technology can sometimes make things worse.

And they created, as part of this gift, the Schwartz Reisman Institute for Technology and Society, of which I'm very pleased and honoured to be the founding director, in order to address those questions about the interaction between technology and society. So we put together a mission that says our goal is to deepen our knowledge of technologies, societies and what it means to be human by integrating research across traditional boundaries and building human-centred solutions that really make a difference. And it's this piece at the end, building human-centred solutions, that I really want you to pay attention to. If there's one takeaway for you from today, it's that you are all involved in building those solutions with technology, solutions that influence how technology affects our lives. And so I'm hoping that we'll get to some thinking about how all of us, in our daily lives and work at the University of Toronto, can do that. I came back to the University of Toronto after 30 years in Silicon Valley, in the heart of the technology world.

One of the things that drew me back here was the opportunity to be deeply engaged with these questions, and I'm very excited about the opportunities we have at the university. So, a little bit about my journey to this place in my career. I started off thinking about questions of what makes the world a just place. This was when I was earning an undergraduate degree at another university, down the road in Kingston, and read in one of my philosophy classes John Rawls's book "A Theory of Justice," probably one of the most important philosophy books of the 20th century.

What so fascinated me in Rawls's book is his attention to the most fundamental question: what does it mean to have a fair and just society? How do we design a fair and just society? He was thinking about those very big questions, but in a way that was analytical and systemic: how do we think about building the structures and the incentives that people behave under, and how do we structure them? The other work I was doing in my degree was mostly in economics, and that motivated me to go down to Stanford to do a PhD in economics and a law degree jointly. That work is what took me into thinking very systematically about how markets work and how rules work.

And then there's this question of: how well are our systems of justice working? I had some family litigation that went on for a very long time and cost a massive amount of money, and as an economist that led me to think: okay, what's really going on here? If it were just me and my family's case, that would be one thing, but I started finding out that it was really everywhere. I talked to general counsel at the largest tech companies in California, Google, Apple, Cisco and so on, and talked to lots of innovators about how the legal system was working for them. This was in the early 2000s. And the answer was that it's not really working for anybody. It's not working for ordinary people who need help with their immigration cases or housing or family, because it's too expensive. And it's not working very well for technology, because it's just not keeping up with the ways in which the world in the last few decades has been completely transformed through digitization, globalization and very fast-paced technological change. So this is the book I wrote. It took a long time to produce; it came out in 2017.

And it's now, finally, post-COVID, available in an audiobook version. This book brings those points together to say: here's how our legal systems have been built and how they helped us grow massively during the 20th century, but now they're out of step with the way technology works today. And that's why I started thinking about artificial intelligence. Some of you may recognize this, although I have to say when I teach classes I sometimes find only a few people recognize that this is the image of HAL, the computer in Stanley Kubrick's 1968 movie "2001: A Space Odyssey."

But it's a wonderful example of the challenge of building powerful technologies. Those of you who've seen it know, and if you haven't seen it, you should definitely go watch it. HAL is a machine that's built to do a job, to help take a mission out into space, and it is completely committed to doing exactly what it was asked to do. The problem is that exactly what it was asked to do is something the humans on board have decided is no longer what they should be doing, and HAL interferes when the humans try to take back control and say: "whoops, we didn't really want to carry out that mission." That's something called the alignment problem, which I now spend a lot of time thinking about.

Let me say a little bit more about how I actually got from this path of thinking about law and economics to thinking about artificial intelligence. Part of that story is also a bit of a family story. My son is just finishing up his PhD in artificial intelligence at Berkeley this year. He and I started talking back when he was a master's student about questions we found we were both thinking about as grad students (I had been thinking about them 30 years earlier) that were really quite similar. We were both pondering the question of: how do we get an agent to do what we want? Now, I had been thinking about this in the relatively boring context of franchising organizations when I was a graduate student, thinking about the question of: how does an organization, like a franchise organization, get its employees and franchisees to behave in ways that are good for them and good for the company? How do we structure that? That's an incentive problem; it's a principal-agent problem in economics.

And that's been at the core of a lot of my thinking: how do we get organizations to behave? Because that organization could be a university, it could be a country, it could be an entire legal system. That's really what's informed most of what I've done in my career: thinking about how these institutional structures operate to get us to outcomes that we, in some sense, want. Well, Dylan was thinking about this problem in the context of building artificial intelligence and machine learning systems, and he had a more interesting version of it. Here's his version.

This is a video of a machine learning system that was built by engineers at OpenAI a few years ago. As Isaac mentioned, OpenAI is an AI company in San Francisco that I do some work with. The machine learning engineers wanted to train a machine to play this boat racing game. Those of you who are video gamers know exactly what's supposed to happen: up in the upper left corner you can see the actual progress of the boats in the race. Now, you'll notice that this boat is not winning the race. This boat is spinning around in circles, crashing into stuff.

But it's doing something quite smart. In fact it's doing, like HAL in "2001," exactly what it was asked to do. Because what it was asked to do, let me just go back here, was to get a high score, because the engineers who built it thought: well, that's a great way to reward the computer for doing what we want it to do, and it can learn how to do what we want it to do, which is get a high score. So if you look in the lower left, you'll see that's what's happening: it's getting a high score. What the engineers did not think to do was to tell this reinforcement learning system: oh yeah, and by the way, win the race while you're doing it.

They didn't tell the boat that's what they really wanted; they thought that would be obvious, because it's obvious to humans, but it's not obvious to a system. And therein lies the crux of how we make sure we're actually building systems that are truly doing what we want them to do. Because many of you have probably had the experience: things don't always work the way you expect them to, and there are things we forgot to say and things we have trouble communicating to a machine.
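To make that boat-race failure concrete, here is a minimal sketch of reward misspecification in a toy, made-up environment (not OpenAI's actual game): the designer rewards "score," intending it as a proxy for winning, and a policy that just loops over respawning targets earns more reward than one that finishes the race.

```python
# Toy illustration of reward misspecification (hypothetical environment, invented numbers):
# the proxy reward is "points scored", not "finish the race", so looping wins on reward.

def run_policy(policy, steps=100):
    """Simulate a hypothetical boat. Returns (proxy_reward, finished_race)."""
    score, progress = 0, 0
    for t in range(steps):
        action = policy(t)
        if action == "loop":       # circle back over respawning targets
            score += 10            # proxy reward keeps accumulating
        elif action == "forward":  # advance toward the finish line
            progress += 1
            score += 1             # finishing earns comparatively little proxy reward
    return score, progress >= 50

racer = lambda t: "forward"   # the behaviour the designers actually wanted
looper = lambda t: "loop"     # the behaviour the reward actually encourages

print(run_policy(racer))   # (100, True): wins the race, lower proxy reward
print(run_policy(looper))  # (1000, False): never finishes, higher proxy reward
```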

So let's back up a little bit and think about this big topic: what is AI? Now, I should say that at the Institute we're looking at the challenges of society and technology quite broadly, looking also at data governance and digitization, and eventually we'll look at genomics and gene editing and technology in the medical sphere. But in these first couple of years we're heavily focused on AI, so let's think about what AI is. Well, in the popular imagination, AI is often thought of like this, right? The killer robot. But many of you probably know that's really not the AI we're talking about these days.

This is the AI we're dealing with today. This is the AI that's embedded in the maps on your phone, or in the banking system on your phone. It's in Siri and Alexa and Cortana, the synthesized voices that can answer your questions and look things up for you. It's in the Netflix recommendations you see, powered by machine learning systems that take in massive quantities of data about what people are watching, what they're finishing, what they're leaving after five minutes, to come up with recommendations for what to watch next.

It's also, increasingly, in ubiquitous surveillance systems, because one of the things AI allows us to do, machine learning in particular, is to process quantities of data that no human could possibly process. We couldn't have enough people sitting in front of television sets or video monitors to follow what's happening, so we're using machine learning and AI in the background to power increasingly ubiquitous surveillance systems. Some of you may have seen this film; if you haven't, I highly recommend it, as does Netflix. It was produced by Heather Reisman, one of the benefactors of the Schwartz Reisman Institute. "The Social Dilemma" looks at the implications of the way in which artificial intelligence and machine learning in a lot of our platforms, like our phones and our tablets, is being highly tuned to figure out what it is we watch and what we will watch more of.

And this is all in the service of selling more advertising, because the more you can narrow down who's going to actually watch something, the higher the likelihood you can get somebody to click on it, and the more you can sell the ads for. The film is set up as a documentary, but it's very enjoyable to watch, and it takes you through what's going on behind the scenes that is driving so much of behaviour; a really great show. Some of you may have been following the stories about how increasingly powerful facial recognition systems can be misused, but also aren't being built to be equally responsive across different groups in our society. This is from a study by Joy Buolamwini and Timnit Gebru, later joined by one of our U of T engineering graduates, Deb Raji, looking at the way facial recognition systems that were being deployed out in the world had very different accuracy rates on different types of faces. The systems they studied in 2018 had error rates as low as one percent on some faces, but on darker-skinned female faces, for example, the error rate got as high as 35 percent.
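The kind of audit behind those numbers is a disaggregated evaluation: instead of reporting one overall accuracy figure, you compute the error rate separately for each demographic group. Here is a minimal sketch with invented records (this is not the Gender Shades data or methodology, just the basic bookkeeping).

```python
# Disaggregated evaluation sketch: error rate per group rather than one overall number.
# The records below are invented purely for illustration.
from collections import defaultdict

records = [  # (predicted_label, true_label, group)
    ("male", "male", "lighter-skinned male"),
    ("male", "male", "lighter-skinned male"),
    ("female", "female", "darker-skinned female"),
    ("male", "female", "darker-skinned female"),  # a misclassification
]

errors, counts = defaultdict(int), defaultdict(int)
for predicted, actual, group in records:
    counts[group] += 1
    if predicted != actual:
        errors[group] += 1

for group in counts:
    print(f"{group}: error rate {errors[group] / counts[group]:.0%} ({counts[group]} samples)")
```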

So those are systems that are not working as well across all the diverse members of our community. And here in the educational institution, AI is coming to you; you may be using it in some places now, but it's certainly on the horizon. That means the things we're seeing out there in the world are not things we can wave off, saying: well, that happens out there, and then I go to work and we do something quite different. We're going to have to think through AI in schools and the ways in which it affects us.

So I want to say a little bit more now about where this problem is coming from and why we're facing this kind of challenge. I'm going to start off with one of my favourite definitions of artificial intelligence: artificial intelligence is that activity devoted to making machines intelligent, and intelligence is the quality that enables an entity to function appropriately and with foresight in its environment. The key thing is what counts as intelligence. Now, it's this word "appropriately" that I really love about this definition, because appropriately reflects our values. Appropriately reflects what we want to have happen, what's good to have happen, what's healthy for our communities, what's helping us achieve our true objectives. And that's really the challenge of building artificially intelligent systems.

So let's dig a little deeper: why is building a system that knows how to do what is appropriate, what we want it to do, such a challenge? Well, some of you may recognize this person. This is Garry Kasparov in 1997, and he has the honour of being the first world champion chess player to be beaten by a computer. This was IBM's Deep Blue, and it was a great milestone in the building of AI systems. But the kind of AI system that was built with Deep Blue is what we call good old-fashioned AI, or GOFAI. At that time, in the late '90s, AI was being built on the basis of rules.

The machine was being told what to do; it was being programmed. So IBM's Deep Blue had basically every possible configuration of a chess board in its brain, and depending on what Kasparov did, it could read through and look for the best response among all the different lines it could consider. This is Lee Sedol; he was the world champion in a much more complex game called Go, which has more possible moves on the board than there are atoms in the universe.

This is not a game you can program with rules for a machine to win. It's simply too complex, it's just too huge; you couldn't get it inside the machine. But Google's DeepMind built a program called AlphaGo, which I'm sure many of you have heard about, and it was able to beat Lee Sedol at the game of Go a few years ago. That was a huge milestone in the development of AI, and the important difference here is that AlphaGo was built with machine learning, which, as we like to remind everybody, was pioneered here at the University of Toronto by Geoffrey Hinton, who's still with us and at the Vector Institute, together with Yoshua Bengio and Yann LeCun. And machine learning operates in a different way from rules. With machine learning we don't program the machine to do something; we don't put into the machine: take this move, put a black stone here if the board looks like this.

We program the machine to learn what to do; we give the machine a goal: here's a good thing for you to do, now figure out how to do it, learn how to do it by looking at lots and lots of data. Machine learning is a type of AI that makes predictions by recognizing patterns, whether or not we have asked the machine to look for that pattern.

It is looking for patterns in order to do well on the goal we've set for it. So imagine you wanted to use a machine to make a muffin. If you were using GOFAI, good old-fashioned AI, you would give the machine instructions, a recipe: first stir together flour and baking powder, salt and sugar, then add the milk, oil, blueberries and eggs, and then bake at 400 degrees. And that will get you a muffin; that's a recipe.

That's all an algorithm is; it's just a recipe. If you want to teach a machine to make muffins using machine learning, however, you don't give it the recipe, you just give it lots and lots of muffins and say: okay, you figure it out. Here are my muffins; you figure it out. Now, of course, one of the first things you do is let the machine figure out how to recognize muffins.
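Here is a minimal sketch of that contrast, with invented features and data: the GOFAI approach writes the rule down like a recipe, while the machine learning approach hands the system labeled examples and lets it infer its own rule (scikit-learn is used here only as a convenient stand-in for a learner).

```python
# A minimal sketch (invented features and data) of GOFAI-style rules versus learning
# from labeled examples.
from sklearn.tree import DecisionTreeClassifier

def gofai_is_muffin(item):
    # GOFAI: the programmer writes the rule down explicitly, like a recipe.
    return item["baked"] and item["sweet"] and not item["furry"]

# Machine learning: no rule is written; the model infers one from examples.
# Each example is [baked, sweet, furry]; the label is supplied by a human.
examples = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1]]
labels = ["muffin", "muffin", "not_muffin", "not_muffin"]

model = DecisionTreeClassifier().fit(examples, labels)

print(gofai_is_muffin({"baked": True, "sweet": True, "furry": False}))  # rule we wrote
print(model.predict([[1, 1, 0]]))  # rule the model learned: ['muffin']
```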

That creates another challenge, which is quite fun: can the machine tell the difference between a muffin and a Chihuahua? Notice it would be very hard to explain to the machine how to tell that difference, but sure enough, we can build systems that can pick it out. So the "appropriate" part of artificial intelligence is tricky, because AI surprises us, for good and for bad. When Lee Sedol was beaten by AlphaGo, there was a move in the second game that became quite famous, move 37, in which AlphaGo took a move that it had calculated no human would have taken. It was a very, very surprising move. In fact, when the commentators saw the move (the game was broadcast on video) they were sure it was a massive mistake, because it was a no-no; one of the first things even a novice player learns is: don't make this move.

AlphaGo made the move, and in retrospect it was kind of downhill from there for Lee Sedol; he lost that game. And it's the surprising part of move 37 that you want to keep track of. That's what's so important: because machine learning is learning the patterns for itself, not being given the patterns to look for, there are surprises. Some of those surprises are great; some of them are seeing patterns we don't see. DeepMind, which built AlphaGo, also built AI that can beat doctors at breast cancer screening. It's seeing tumours that human radiologists are not seeing. But it's also seeing things that we perhaps don't want it to see, or don't want our societies to be using AI to try to suss out.

For example, here was a system that somebody built, not really to detect sexual orientation, but to detect whether somebody had placed their photograph on a website seeking a same-sex or an opposite-sex partner. And sometimes it's seeing things different from what we think it's seeing. Some researchers built a system to detect the difference between wolves, at the top, and huskies, at the bottom. They did what you do in machine learning: they fed the machine lots and lots of pictures of wolves and huskies.

Then they gave the machine a set of labels: here's a set that will tell you which is which; now let's see if you can predict the correct label that we put on. And sure enough, the machine was able to predict with high accuracy: yep, that's a dog, that's a wolf. But one of the ones the machine made a mistake on was very telling. It was this one, in the middle here. This dog was misclassified as a wolf, and the researchers looked a little more closely to see whether they could tell what the machine was actually looking at; we think it's doing what we wanted it to do, that's what we designed it to do.

It turns out that the machine had learned to recognize snow, because all of the pictures of wolves in the training set had been taken on snowy ground, and most of the dogs in the training set were pictured, like this one here, on rocks or sand or grass. So the machine got very good at telling the pictures apart, not because it had figured out the difference between wolves and dogs, but because it had figured out the difference between snow and not-snow. The machine can be learning something different from what you think it's learning. Here's another example, of something known as an adversarial attack. On the left here we have a picture of a panda. The machine has been trained to look at these images and attach a label, and it makes the prediction that a human would label this one "panda," with about 57.7 percent confidence, which is true.

But what some researchers learned was that if they just added a little bit of noise to the pixels of that picture (that's what the middle image shows), it produces the picture on the right, which to our human eye looks exactly the same as the picture on the left; our eyes can't pick up that little bit of difference. But fed to this machine, the machine was now confident that it's no longer a panda, that it's a gibbon. And that's a problem from a small amount of variation in the picture. The machine is doing something a little different from what we think it's doing.
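For those curious about the mechanics, here is a rough sketch in the style of the fast gradient sign method, one well-known way such perturbations are made: nudge each pixel slightly in the direction that increases the model's loss. The tiny untrained network and random tensor below are stand-ins for a real classifier and a real photo, so this only illustrates the recipe, not the original panda result.

```python
# Sketch of a fast-gradient-sign-style perturbation (stand-in model and image, not the
# actual experiment): move each pixel a tiny step in the direction that raises the loss.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in for the "panda" photo
true_label = torch.tensor([0])

loss = loss_fn(model(image), true_label)
loss.backward()                                   # gradient of the loss w.r.t. the pixels

epsilon = 0.007                                   # imperceptibly small per-pixel change
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1)

print("original prediction:", model(image).argmax().item())
print("perturbed prediction:", model(adversarial).argmax().item())
```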

We're also learning about machines that are trained to understand words, because a lot of what we need machine learning to do is process words; that could be processing the words in an application to a university or for a job, or reading essays. Machines have to take words and convert them to numbers, and the process we use to do that today is called word embedding. The way you do that is you feed the machine lots and lots of text, maybe everything on Google News, and it will learn ways to represent the words by becoming good at predicting the next word. When we do that, however, we find that our translation of words into numbers contains a lot of the biases we see out in the world. For example, we see words that are associated, in the math of that machine learning model, with gender. The machine thinks the following are "he" occupations, or very likely to be "he": maestro, skipper, protege, philosopher, captain, architect, financier, warrior, broadcaster, magician, fighter pilot and boss. What do we find on the women's side? Nurse, librarian, nanny, stylist, dentist. That's a bit of a problem. But what we're seeing here is that we've trained the machine on data that contains us: if you went online, you would be more likely to see women associated with those types of jobs.

That's what your machine learns and then represents back to us as: here's what you mean by that word. So, training on Google News: father is to doctor as mother is to nurse; man is to programmer as woman is to homemaker; black male is to "assaulted" as white male is to "entitled to." Or if you searched for "three black teenagers" and "three white teenagers" on Google (I think they've fixed this one, but this is what happened at the time the study was done), you would bring up mug shots for the black teenagers and happy sports pictures for the white teenagers. So these systems, which are highly complex and building themselves, are picking up the biases in our world.
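As an illustration of how analogies like those are typically probed, here is a hedged sketch using gensim's pretrained Google News word2vec vectors (a roughly 1.6 GB download on first use). What comes back depends entirely on the training corpus, so the outputs are not guaranteed to match the specific examples above.

```python
# Probe word-embedding analogies by vector arithmetic, e.g. programmer - man + woman.
# Requires gensim and the (large) pretrained "word2vec-google-news-300" vectors.
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "man is to programmer as woman is to ...?"
print(vectors.most_similar(positive=["woman", "programmer"], negative=["man"], topn=3))

# "father is to doctor as mother is to ...?"
print(vectors.most_similar(positive=["mother", "doctor"], negative=["father"], topn=3))
```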

Now, some of you may also be aware of the Cambridge Analytica scandal, which, it's important to note, has its origins in a university: data that was collected over Facebook, through psychology tests and quizzes that people did, was shared with researchers. Researchers built systems that were then able to help people make predictions about political affiliations and deliver targeted advertising and targeted messaging, which interfered with the 2016 US presidential election and the Brexit vote. And then, if you followed the events at the U.S. Capitol on January 6 of this year, these same systems are in the background, part of the polarization we're seeing in our societies. Our social media platforms absolutely depend on artificial intelligence to operate, and that's one of the reasons they are taking us askew. Now, I also like to say that you need massive amounts of data for these systems, and I think of this as having set up a data Hunger Games: we've got a mad race for as much data as possible, and that's one of the reasons our corporations are incentivized to pull in as much data as they can, because they extract that information and train their systems on it.

That's generating a problem that many of you, I know, will be involved in thinking about: what the world of cybersecurity looks like, how you protect all that data, and then the impacts on privacy and our relationships with one another. I don't think privacy is dead, but that's the conversation we're having. So let's think about this: is big tech just evil? You remember that when Google was created, its motto was: "don't be evil."

And today we certainly hear lots and lots of people thinking that companies like Google and Facebook are evil. But is that the problem? I don't think it is. Companies are doing what companies do. They are responding to the incentives we've created for our corporations and markets, and that drives a lot of great stuff. That's why we're getting the kind of technology that allows me to give a talk like this in the pandemic, for example. It's driving the production of that technology, and that's a pretty great thing. The problem is that markets need to be in balance with our rules about those markets, and those markets and rules are out of balance today. We need to figure out how to get them back into balance.

That's really what takes me back to what I was originally working on and thinking about: the rules, John Rawls, and what makes a society just. How do our markets work, how do our systems work? We need rules to structure our markets because they are the infrastructure of trust.

A complex market is one in which you're trusting all kinds of people, all the time, to handle things that matter to you: your data, your applications to work or school, your assignments, your interactions in the healthcare system, your family. We spread out the responsibility for doing all those things, that's what a market economy does, and we have to trust people to do their jobs in ways that are good for us. So that's an infrastructure of trust, and that's been at the core of building human societies since we've had human societies. For millennia, the way we did this as human communities was with conversations around the fire. This is an image of the Ju/'hoansi, also known as the San bushmen of the Kalahari, who are still hunter-gatherers today and who enforce their norms, and they have lots of them, including, for example, a norm that says we are all equal to each other. There is no big man, there is no head man.

And they enforce that with conversation, gossip, mockery, deliberation; that's conversation around the fire. As societies grew more complicated, Indigenous peoples invented more complex, more structured systems, invented our first democracies, with rules about how we would be organized. That was more structured councils and democratic processes. Fast forward a bit, to about 3,000 years before the Common Era. In Mesopotamia, the cradle of civilization as it's referred to, we start to see more formal laws. This is an image of Hammurabi's code, with its 282 rules carved into stone, which deal with very mundane things, like what you have to pay if you flood your neighbour's fields, and so on.

Those are some of the first formal, written-down legal systems that we see. Then the ancient Athenians, about 500 years before the Common Era, invented even more formal systems of participatory democracy. This is something called the kleroterion, which is the randomization device the Athenians used to choose who would be on their juries. Their juries were 600 to 6,000 people hearing cases argued by the litigants, and they had written law as well. Fast forward again, and we've invented complex courts (here's our Canadian Supreme Court) and lots and lots of written rules.

And as we look at the world of data and AI emerging, that's what we're starting to see: lots of rules being written to try to respond to the challenges. Some of you may already have had some exposure to the General Data Protection Regulation; you may know you've had exposure to it because you're having to think about it. It's in place in Europe, governing the use of data. But all of us have actually experienced it, because it was introduced in May of 2018 and it's responsible for all the little click boxes that now jump up on every website you go to, to say: we're using cookies, do you want to accept or not? It caused a number of those click boxes to explode. Canada has just introduced something similar to the General Data Protection Regulation as a proposed law: Bill C-11, a federal law that would affect not only how we handle data but also how we interact with automated decision-making systems. And the European Commission, just last month, introduced the world's first comprehensive proposed law for regulating artificial intelligence, which first of all designates certain artificial intelligence systems as what it calls high risk; by the way, anything to do with education is in that high-risk category.

It then outlaws certain types of uses of AI systems, particularly having to do with biometrics, and imposes a system of national supervisory bodies and certification in order to regulate those systems. It's still a proposed law, but it's going to have a big impact. And if you've followed what's happening in the United States post-election, both left and right are urging on big antitrust and competition law lawsuits that have been filed against large tech companies like Google and Apple, and big pieces of legislation are being introduced as the conversation has shifted to how we're going to manage big tech. I don't think it's going to work. I don't think all of this law that we're writing is going to work in the ways that we think it will, and I certainly don't think those lawsuits are going to get us very far. And that goes back to the work I was talking about earlier from my book, "Rules for a Flat World." Even before artificial intelligence came along, we were facing the absolute limits of what the legal systems we invented in the 19th and 20th centuries could really handle.

They were really invented for a mass-manufacturing economy that mostly stopped at the border of the nation state; there was some international trade, but we didn't have massively integrated production systems and global supply chains. They were made for companies that were, for the most part, large integrated companies in stable industries. And the world, as we know, has just totally transformed from that in the last several decades. The speed of innovation has increased tremendously. We're now globalized in a very different way, with deeply interconnected global supply chains; the pandemic taught us a little bit about that as well. We have increasingly complex systems in which it's much, much harder to predict what the implications are of taking this action or that policy. And a lot of this is happening inside our large private technology companies, go back to Google, Apple, Facebook, Microsoft, Amazon on that first slide, or Baidu, Tencent, Alibaba. It's growing inside these massive organizations, and there are tremendous returns to scale in building machine learning models.

There's very good evidence of this: with most resources you get diminishing returns from further input, but our machine learning models are still learning even when they've got trillions and trillions of data points to grow on; they still benefit from a big jump in additional data. So there are massive returns to scale in model building inside private technology companies, or whatever kind of organization it is. And we're seeing this widening chasm between who has access to data and who has access to big, powerful computers to analyze that data. So in many ways, the technology companies we need to enforce rules against, in order to make sure they're doing what we as a collective want them to do, have our political systems outgunned and our legal systems outgunned; I talk more about this in the book. Here's an image of a young Bill Gates. Microsoft was sued by the U.S. Department of Justice (I think the first investigation started around 1990) alleging antitrust violations. This was the first sort of big-tech lawsuit, because they were tying use of their browser to access to their operating system, and at the time they had something like 95 per cent market share in operating systems.

Well, it took almost 20 years before this case was ultimately resolved and over with. And this is one of the reasons I think that filing antitrust and competition lawsuits is not really going to do very much for us; it's just going to be very, very expensive and take a very long time. We need to adapt our systems of rules to keep up with our social, technological and economic change. Let me say a little bit about what I think that means.

I spend a lot of time thinking about something I call regulatory technologies. I think we're going to need technology, not just the written rules in books (we'll need some of those); we're going to need regulatory technologies to keep up with technology. Let's say we're going to need AI to help us regulate AI, because we need something that can be as fast and as smart and as data-rich as the technologies we're seeking to regulate. One of my favourite examples of regulatory technology is actually something developed by two University of Toronto professors, Lisa Austin and David Lie, who are research leads at the Schwartz Reisman Institute.

Lisa is in the law school and David is in the School of Engineering. They worked together to come up with something they call AppTrans, a piece of machine learning software that could automatically read privacy policies on an Android phone and then check whether the phone was actually complying with the privacy policy. So if you had clicked "don't use my location data," was the app on the phone actually using location data? What they found is that 60 per cent of the apps they tested were actually violating their own privacy policies. That's an example of a regulatory technology that I think we need to be building.
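Conceptually, the core of a check like that is a comparison between what the privacy policy declares and what the app is observed to do. The sketch below is hypothetical (AppTrans itself parses real policies and analyzes real phone behaviour; the inputs here are invented) and only illustrates that final comparison step.

```python
# Hypothetical compliance check: compare declared data collection against observed
# data flows. Policy parsing and traffic capture are assumed to happen elsewhere.

declared = {"contacts"}                            # what the privacy policy admits to
observed = {"contacts", "location", "device_id"}   # what the app was seen sending

violations = observed - declared
if violations:
    print(f"Potential policy violation: undeclared collection of {sorted(violations)}")
else:
    print("Observed behaviour is consistent with the stated privacy policy")
```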

We started working with the Creative Destruction Lab at Rotman to try to drive the growth of these kinds of companies, and a couple have already come through the program at CDL. One of them is also a University of Toronto company, Private AI, which automatically removes personally identifying information from text and images so that companies can share that data more confidently than they can now. Another Toronto-based company, Armilla, is building machine learning systems, currently aimed at banks, for example, to check that the machine learning models used in a bank are complying with the legal rules, and also to try to ensure that they are not biased on the basis of race or gender or other variables we want to protect against. Those are examples of regulatory technologies, and I think we need to be building them. I also think we need to create the environment to drive investment into those regulatory technologies, and I call this the building of regulatory markets. In conventional regulation, governments write a set of rules; historically they are command-and-control type rules, and they directly regulate the companies we want to regulate, the regulated entities. The concept of a regulatory market says: let's take some companies, like Private AI and Armilla and AppTrans, and test whether we can designate them as regulators, regulate them to make sure they are doing what they say they do and achieving the outcomes governments have established, and then have those regulators regulate our technology companies.

So essentially you say: look, as a community we're going to rely on this technology company, like a Private AI, and say: we've tested Private AI's system, we believe it does a good job of achieving the goal of our privacy legislation, so a company that's using Private AI and following its system is in compliance with our legal rules. And the last thing I want to mention is just driving more innovation in rule making. The legal systems we invented over the 20th century have become quite frozen and self-reproducing. I say this as a person who's been a law professor for more than 30 years and has taught thousands and thousands of lawyers over my lifetime. We have not imbued enough innovative thinking in our ways of making rules. So I want to promote the idea that we need to be using more design thinking techniques in thinking about how we regulate in all these spaces, because so much is new.

And that's where it comes back to you and your involvement in thinking about how you design and implement and oversee the systems you work with in technology at the University of Toronto. Remember, you're designing these things, you're building these things: designing the authentication systems and processes, putting in these check boxes, getting legal advice about putting in the check boxes. You are shaping the infrastructure of U of T and you are writing the rules of U of T, so what I want as a takeaway for you is to think about how we can be more innovative about that process. Here's our own Isaac Straley participating in an event we held just before we all got shut down a year ago in March for the pandemic, in which we conducted a design thinking exercise to help figure out a problem for an organization headed by the former dean of medicine at the University of Toronto, Kathy Whiteside: how do we access data? The challenge that Diabetes Action Canada was facing and bringing to us is that we have fantastic health data in Canada and Ontario, but it's all locked up in different silos, and that makes it very difficult for researchers and physicians to gain access to it for treatment purposes and for research purposes. So the puzzle is: why can't we access that data? A lot of the existing infrastructure is the reason for that.

So we held a design thinking workshop aimed at digging out (we called it "excavating the why") what was happening, what was going on, and what we could build that would be different. So the lesson I want you to take away is that building safer systems requires not only technical innovation, it also requires regulatory innovation. My message to you is: you're a tech innovator; be also a systems innovator and a rule innovator. And I think we're going to own that future. Thanks.

That was excellent, thank you so much for sharing your thoughts with us. As always, it's a pleasure to listen to you. You didn't have the pleasure of looking at it, but we've had a robust chat going while you've been talking.

It's pretty incredible, and we've got a number of questions, so I want to just go ahead and dig into them. You talked about this a little bit at the end, but I'd like to ask it directly, because it's our top-voted question; and I encourage you to keep putting your questions into the Q&A tab as we're talking and to upvote the ones you like. Professor Hadfield, you already touched on this, but it's a question people are still asking: what are some of the things that we as individuals can do to drive industry and government towards responsible and inclusive AI and machine learning development? And I might follow this up with my own question, which again plays into what you just said: what can technologists in particular do in this space? It seems to be much more philosophical and legal, areas that aren't traditionally the space of the technologist.

Right, right. I'm getting some feedback here, so let me just see if I can fix that. There we go. Okay, so that's what I got to at the end: we actually need to start thinking about that. We think of rules as all those books; we think of them as the domain of what lawyers and judges do. But rules are what human communities build to say: oh, you can't do that, or hey, you should do more of this. And in the world of technology we need to be thinking about how technology accomplishes that. Those little boxes, from the authentication to get onto the U of T system to the checkbox when I'm doing my human research ethics protocol: we are implementing those rules and creating those rules all the time. Lots of the teams at U of T are sitting there saying: here's how we design it. Now, they may say: well, we've got to get the lawyers to tell us what to do here. And part of what I've wanted to say to people who are not lawyers, and why I wrote the book in a way that was intended to be accessible to people who are not lawyers, is that we all have to own this process. We can't leave it to the lawyers, because we are all perfectly capable of participating in the thinking about what makes sense here. Now, we will need some legal input along the way. But partly it's to say: don't fall down dead in front of lawyers. Push back when they say: sorry, you need a 40-page agreement before you can hand over that data, we're going to have to look at it, it's going to take months before we can approve it. As you'll remember from participating in our workshop for Diabetes Action Canada, part of the story we were hearing from researchers and physicians was that it's taking years to negotiate data sharing agreements. That's the problem we're seeing all over the world, in all kinds of contexts: efforts to manage climate change, treat our healthcare systems, build better government services; the data is all locked up, and I think that's a major problem for us all to be addressing. So it's partly having the confidence to say: well, I know the lawyers are saying it'll take six months to get approval on this; can we think of something different? And I really do encourage people to use design thinking techniques, which work teams can participate in and can learn; there are lots and lots of people available to facilitate that, and that's the kind of thing we're trying to do at Schwartz Reisman as well: to say, hey, what if we thought about this differently?
So the first thing you can do is say: it's not somebody else's problem; I can participate in helping to solve this problem, and I can help think of new ways to do it. As technologists, think about that AppTrans tool that Professor Austin and Professor Lie built. Professor Austin knew about the legal environment and Professor Lie knew how to build a machine learning system, and they said: well, can we build the machine learning that helps us, instead of the companies? So I really do think this is going to be an important area for technology, and I'd love to stimulate people in technology to think about becoming participants in it. If you're helping to run startup incubators, think about where the regulatory technology is. And then, I think the thing that lawyers have done most effectively is to get everybody else to just kind of say: well, I guess that's the solution, if lawyers say we need it to look like this. And I love lawyers, I've trained them, I'm one of them, but they need more pressure; they need more people coming back saying: nope, not a good enough answer, I need the one-page agreement, because I know people will not understand the 40-page one. So I think we have to get more active as a community and say we need to be doing things differently.

Thank you, thank you for that. I think that applying design thinking to rules and legislation is really inspiring; we do a lot of technology now, and especially the way we build systems now is really focused on human design, so putting that pressure in the reverse direction I think is really inspiring.

I have a question of my own that leads into a community question, so there's a follow-up to this, but we'll start with this: you mentioned advertising being a motivator in a number of these discussions. Is that a bad thing? Don't we want systems to help us find and get the things that we want? Is advertising a bad thing? Advertising and profit motivation?

Yeah, no. At the end of the day I'm an economist, and so I'm not anti-corporation, I'm not anti-advertising, I'm not anti-markets, and I don't think anybody truly is, because you can't hold up your phone and not feel pretty excited about what markets, and frankly advertising, have done for you. Those are powerful things, and they have led to a doubling in human life expectancy and massive increases in wealth around the globe.

And there are four billion people living on less than ten dollars a day around the world who would benefit from an infrastructure that supported more corporate activity and market activity. The problem is not those systems; it's not the markets, it's not the advertising per se. It's the system of rules we have around them. We were able to build those rules reasonably well in the 20th century. We started off letting factories dump whatever they wanted in the water, and we said: "oh no, that's not any good," so we built effective regulations about that. People built cars and put them on the road and we said: "oh, I guess we need some rules for how we drive on the roads and how the roads share space with pedestrians and bicycles and horses."

We said "oh and and we should  have seat belts in them and airbags   and crash protections, right?" They're all these things. This is the rule infrastructure that   harnesses that powerful engine of markets, corporations, profit making and directs it   right? That's I think where we're we're facing our real challenges today. Our   systems for controlling that has just broken and absolutely it's doing stuff we don't we don't want.   But you know, heading down the path of saying: so let's get rid of corporations and let's get rid of   advertising and let's get rid of profit making, no we don't want to do that we don't want to do that.   But we do, absolutely, with great urgency, need to figure out how do we get that back in line, back in   harness, so it's taking us where we want to go and that we you know democratically are back in charge.   And and I know it feels incredibly daunting, but I think it's so urgent and I think that's   I mean I hope that's a major thing that university can do is drive that. So that is a great way to  

That is a great way into my next question; we've got about two minutes left and, unfortunately, a ton of questions. Those changes are coming, and I think leadership like yours is helping drive them, but what do we do today? Can you comment on the risks of research and education institutions, or individual instructors, adopting and using tools from for-profit companies and sharing data with them, despite this framework that you're talking about? How do we help manage and navigate that today?

Well, something like this event is very important. Going back to that first question of what we can do: it is an awareness. I do think anybody in technology today needs to be aware that you're in an environment where you've got these powerful, powerful tools and we don't have all the right rules in place yet. So you need to be aware, in a way you maybe didn't need to be in the past, of what the potential risks are.

You need to be thinking: what does my data set look like, what's the impact, what's this like for my users? So I think there needs to be an awareness of that. But I also think we need to be taking some risks; we need to be experimenting, but we should be doing it in deliberate ways. Because I think what we're defaulting to is: somebody else will give me a whole bunch of rules, and it'll take six months. I mean, just think about our pandemic response, right?

There are a lot of places where you can point to regulation as the thing that held us up, and I think it probably got us to the wrong trade-off. We need a little bit more freedom to experiment with things, but make sure you're tracking what happens. Do an A/B test, try it out with users, beta test it. I think universities have a habit of saying: we'll figure out what we want, we'll design the system, we'll get it approved, we'll implement it. And there's maybe not enough of: well, let's build five of them, test them out and collect that data. So I think more of that innovative, experimental mindset is something that would be helpful to us now.

That's great, thank you so much. And to the community: I apologize for not getting to all of your great questions, but there were a lot of great questions. Professor Hadfield, again, thank you so much for your time and leadership here; we really appreciate it.

So now we're going to move into a quick break. Please grab a coffee or your beverage of choice and head to the breakout sessions starting at 11:25 a.m. Thanks.
