Shaping AI Technology for Social Good
Okay, thank you all very much. I'm incredibly excited about this second panel, which is about using AI for good, and I have the pleasure of introducing Dr. Sally Kornbluth, who is the 18th president of the Massachusetts Institute of Technology. She received a bachelor's degree in genetics from Cambridge University and then a PhD in molecular oncology from Rockefeller University. She had a distinguished research career at Duke University and then became a brilliant provost and administrator of that university, and we are very fortunate to have her here. What you didn't know, however, is that she also has a bachelor's in political science from Williams College, and this makes her especially ideal for this setting at the interface of technology, politics, and economics, and the set of issues we're grappling with today: how we meld technology with our moral objective, which is to produce an environment, a society, in which we all want to live. So I'm going to turn it over to President Kornbluth, who will introduce the other panelists.

Thank you, David, and good afternoon to everyone; thank you all for being here. In a moment I'll introduce you to this all-star lineup, but first I want to say a few words about the topic of our discussion, shaping technology for social good. Last year in my inaugural address, which I can't believe was only a year ago, I touched upon MIT's responsibility to help humanity come to grips with the tectonic forces of artificial intelligence. I used the word tectonic to try to convey the enormous scope of AI's impact and how fundamentally new technologies may shift the way we live, work, and understand ourselves as humans. But even knowing what's at stake, we're not always sure how we should proceed. AI has evolved so rapidly that many of us, including some of us at MIT, feel like we missed a lecture or two and need to catch up. AI is causing us to
rethink how we teach, how we learn, how we communicate, how we care for our health and the health of the planet; it's even making us rethink how we think. AI applications have so many potential benefits that it can be incredibly tempting to forge ahead before taking into account the potential risks. There seem to be countless factors to consider as we focus on how to develop policy to get us on the right track and keep us there, and it's not only the technology part that's immensely complex: defining what constitutes social good is no easy task. Fortunately, we have an incredibly insightful and accomplished group of panelists today to help us think about these kinds of questions, and I'm delighted to introduce them to you. Brian Deese, at the end, is the director of the White House National Economic Council and an MIT Innovation Fellow. He's focused on developing strategies to address climate change and promote sustainable economic growth. As an adviser to two US presidents, Brian was instrumental in rescuing the US auto industry and negotiating the Paris climate agreement. He also served as the global head of sustainable investing at BlackRock, where he helped drive greater focus on climate and sustainability risk. Dan Huttenlocher, who's known to many of you, is dean of the MIT Schwarzman College of Computing. His background is a wonderful mix of the academic and the industrial: he helped found Cornell Tech, a digital-technology-oriented graduate school; he's been a researcher at the Xerox Palo Alto Research Center; and he served as CTO of a fintech startup. Dan is an internationally recognized researcher in computer vision and the analysis of social media, and, saving the best for last, he holds a doctorate from MIT. We also have here Frank McCourt, the founder and executive chairman of McCourt Global, a family firm working across industries from real estate, media, and sports to technology and capital investment. Frank also founded Project Liberty to help build a better
internet, one where users have control over their data, a voice in how platforms operate, and greater access to the economic benefits of innovation. In 2021 Project Liberty released the Decentralized Social Networking Protocol, or DSNP, designed to help create social networks that foster more meaningful, constructive dialogue. And in between, I went out of order, we have Ziad Obermeyer, who trained as an emergency physician and now spends most of his time on research and teaching at UC Berkeley. He builds machine learning algorithms that help doctors make better decisions, and his work on algorithmic bias has influenced how organizations build and use algorithms and how lawmakers and regulators hold AI accountable. Ziad is a faculty research fellow at the National Bureau of Economic Research and co-founder of Nightingale Open Science, a platform that makes massive medical imaging datasets accessible for nonprofit research. All of our speakers have fascinating insights to share, so let's get started. Why don't we begin with Brian; we've asked the speakers to speak for five minutes, so let's see how we do.

Great. Thank you all for convening, and really for training a focus on this core question of how we actually, more effectively, harness technology for the social good, which is hugely important in lots of ways. I know we're going to spend a lot of our time on this panel talking about AI; I want to start one step back, at core technology. My reason for that is that I think there are broader lessons we need to learn here, but my underlying reason is that I know less about artificial intelligence than any of the other people on the panel, so it's better for me to go first and establish that before they all do. I just want to talk a little bit about the right way to think about the government's role in
this core question of training technology for social good, and start by making a distinction we often miss between two important governmental functions: the function to foster innovation, and the function to regulate innovation. I think we often, and I will put this on policymakers, make the mistake of thinking of the principal goal of the former as trying to promote good, and the principal goal of the latter as trying to discourage bad. What that has done, in both cases, is bring a policymaking approach to technology and innovation that largely brings these questions of the social good in ex post, after we have either tried to foster good or tried to mitigate bad; we start to think about these questions on the back end of the development of innovation. And when you get to that point, it's extraordinarily difficult for policymakers to play an effective and additive role. So if I could offer one very simple takeaway: if we are going to actually make progress on harnessing technology for social good, we have to figure out more ways to pull these questions forward, earlier in the governmental process, both as a fosterer of innovation and as a regulator of technologies. I want to offer one example in that context, not AI-related, about what's happening in real time with government policy today. Most government efforts historically to foster innovation, including through funding basic, curiosity-driven research, have actually been agnostic to this question of ultimate social outcomes; they have said, we want to create innovation, and then ultimately we want to see where that innovation goes. We have recently started to run a set of experiments to try to do it a bit differently, and to try to say, on the front
end of investments in research and development and deployment, how do we actually pull forward this focus on labor, on community, on society? We're doing that principally in the area of the government's efforts to foster clean-energy innovation. If you look over the last couple of years at government policy around clean-energy innovation, there's an explicit effort, one, to embed labor and give labor leverage in the context of government-funded research; two, to prioritize place- and community-based outcomes in the government's efforts to spur innovation; and three, to tie governmental efforts to foster innovation to positive public outcomes, some of it quite controversial to academic research communities, around limitations on the deployment of innovations funded with public dollars. I raise all of these because this is actually a place where a lot of novel policy is being undertaken that was designed to pull that question forward, earlier in the process, to say: if we bring communities, workers, and the public interest in earlier, can we actually produce better social outcomes? Early evidence is very promising in this respect, both in terms of the direction of investment, where it is flowing, and in terms of the benefit flowing to workers. So I think there is a lot to learn on that front. The other admonition I would offer, as we shift this into thinking about AI: every element of what I just said has been intensely controversial. My background is economics and economic policy, and if you use a static economic model, in almost all cases the things I describe are viewed as constraints on optimized growth or optimized efficiency. And so I think
we also need to broaden our lens, when it comes to policy, in thinking about what is a constraint versus what is an opportunity. Those are lessons we should think about transposing onto this question of how the government operates as an effective fosterer of innovation with respect to AI, but, more importantly and more timely, as a regulator as well. With that, and your effective timekeeping on the front end, I'll pause there and turn it back over to you.

Great, thank you, Brian. Dan?

Super. I also want to set some context, but maybe a different kind of context, one related to AI. When we think about AI and shaping the future of work, or, frankly, AI and shaping our future, period, in almost all human endeavor, it's really important for us to begin to understand how AI differs from pretty much any previous technology. And this difference is not necessarily the one you see in the everyday dialogue. It's not about AI taking over the world or AI subjugating humans, despite the science fiction and a lot of the public discussion; rather, it's about how AI fundamentally changes the nature of what it means to be human. AI does not fit our centuries-long view of reality. Since the Enlightenment, people's understanding of the world has been defined by two things: human reason and human faith in the divine. AI now brings to bear a new, non-human form of intelligence as a third means of understanding the world, one that is neither human reason nor faith. As AI outstrips human reason, it certainly seems quite plausible that this is going to be a big challenge to our view of ourselves and our role in the world. But at the same time, AI can stand to really elevate human reason and understanding of the world, as we bring AI and humans together and develop new pinnacles of understanding. Okay, that's pretty lofty; what are some more practical implications? Well, there
are, I think, three that are important in this context. The first is that it's going to be even harder to predict the effects of the AI revolution than of any previous technology revolution. We all know how the internet revolution went, and in fact Frank's been somebody really speaking about how to fix that, but this one's going to be even harder. Second, there are literally no norms for human interaction with AI, and if you think about trying to shape or govern or regulate something for which there are no norms, it's almost impossible to see how you do that effectively. Maybe there's an example to make that concrete, which comes out of some policy briefs we've been writing lately. Everyone knows what it means to shove a fork in a toaster, right? We've been referring to this as the fork-in-a-toaster problem: across cultures, across communities, you get it; you did it, it's your fault. That wasn't always true. Early toasters had exposed electric elements, and it was easy to electrocute yourself. What happened over time is that societal norms developed and the technology changed to make it difficult not to follow those norms. We're nowhere on AI, in terms of both the technological evolution and the set of norms that need to evolve with it, to define things like legal liability and effective regulation. So that's the second one. The third is that the fundamental nature of the changes wrought by AI pretty explicitly calls for a particular approach, one based on collaboration between humans and AI. This is not a new idea, bringing together human reason and this new form of intelligence rather than relying on either one alone; in fact, our own J.C.R. Licklider here at MIT wrote a paper about this in 1960. But with all of the recent advances in generative AI and machine learning lately,
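A minimal sketch of the statistical core of the collaboration idea Licklider envisioned, why a human-plus-AI combination can beat either alone. Everything here is an illustrative assumption, not anything from the panel: treat a human judgment and an AI prediction as two independently noisy reads of the same unobserved quality, and average them.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Unobserved ground truth, plus two independently noisy reads of it:
# a human judgment and an AI prediction (toy Gaussian noise model).
quality = rng.normal(size=n)
human_score = quality + rng.normal(scale=1.0, size=n)
ai_score = quality + rng.normal(scale=1.0, size=n)

# Simplest possible "collaboration": average the two independent signals.
combined = (human_score + ai_score) / 2

def rmse(score: np.ndarray) -> float:
    """Root-mean-square error of a score against the ground truth."""
    return float(np.sqrt(np.mean((score - quality) ** 2)))

print(f"human alone: {rmse(human_score):.2f}")  # ~1.0
print(f"AI alone:    {rmse(ai_score):.2f}")     # ~1.0
print(f"human + AI:  {rmse(combined):.2f}")     # ~0.71, i.e. 1/sqrt(2) of either
```

The independence of the two errors is what does the work here; the combination helps far less if the AI was trained merely to imitate the same human judgment it is being combined with.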
it's now imperative that we understand how to actually take this kind of approach, integrating humans and machines, and not just depending on either one alone. Now let me try to bring this back to the questions here around AI and shaping the future of work. Let's just think about the hiring process for a minute. Using AI screening tools on their own constitutes using AI by itself, not integrating it: you run it first, and then, whatever the AI says, you pick up where the AI left off. This is standalone AI, exactly the thing that Licklider argued against and that I think it's important we reject. The natural reaction has been to say, oh, we should ban the use of AI in hiring, or we should audit all of these tools in advance and delay their use until we're very sure they work well (and how we become sure they work well, when we don't have good norms, is another problem, but leave that aside). But think instead about approaches that use AI as a collaborator with humans in the hiring process. In fact, any human hiring process is people working together in some way; it's not one person making a decision alone. If you bring AI into that process, the combination can actually yield better results than people alone or AI alone. There's an approach there that can work and can actually help us do a better job of developing norms and understanding AI. With that, I'm also at time; thank you for your effective timekeeping.

So when I think about aligning AI with social good, my mind leaps to the politics and the regulation and the deployment challenges, but I want to go back one step and refocus on a very important technical part of AI that has huge implications for how it's regulated and how it's deployed, and that is the data from which the AI learns. This is very hard to think about, because there's an optical
illusion element to AI: when we're interacting with ChatGPT, it really feels like you're interacting with something, and it doesn't feel like this is just a next-word predictor. But understanding that it is, in fact, just a next-word predictor can be really helpful in trying to understand how we use it effectively and how we avoid some of the dangers. I have a parallel story to tell you from medicine, which is where I work, that I think illustrates those points. In medicine we see the same thing, because we often hear, and are even guilty of saying, things like, oh, the AI is reading the x-ray. Here's an example: an AI is trained to look for signs of arthritis in the knee, a large cause of pain globally, and we say that the AI reads the x-ray for signs of arthritis. But how does the AI learn to read? If you look at all of the literature, the AIs have one thing in common: they learn to read an x-ray by learning from doctors. The AI is shown a bunch of x-rays, and each x-ray is paired with a number, usually the Kellgren-Lawrence grade. This is an objective scoring system: radiologists look at the knee and grade it from 0 to 4, and the AI learns to predict that number from 0 to 4 based on the x-ray image it's seeing. That all sounds very reasonable; here are two major problems with it. Number one, if all an AI is learning to do is replicate a human's judgment, what is that AI going to be used for? It doesn't take a lot of imagination to see that it's going to substitute for human labor out of the box, rather than complement what humans are trying to do. Now, I know radiologists are expensive, and I know we spend a lot on healthcare, so maybe that's okay in this particular context. But here's the second problem, which I think is the bigger one: when an AI learns from humans, it also learns to replicate all of our errors and biases and
prejudices and problems. You might ask, okay, what is this Kellgren-Lawrence grade? It turns out there were two doctors, Kellgren and Lawrence, who were working on studies of coal miners in England in the 1950s, and if you look at table one of their study, there's no mention of demographics, because the subjects were all the same. So one problem with that score, which is still in wide use today, is that it misses a lot of causes of knee pain on x-rays that weren't present in the original population the Kellgren-Lawrence grading was developed on. How do we know it misses those causes of pain? Because we trained a different kind of AI to read those x-rays, not by learning from the doctor, but by having the AI listen to the patient, another important human whose opinion you might want to consult when you're dealing with an x-ray of someone's knee. That AI learned to correlate features of the pixels in the image with the patient's report of pain, which, I'll add, is actually what doctors Kellgren and Lawrence were trying to do in those initial studies in the 1950s, but without the benefit of thousands and thousands of x-rays from a very diverse set of people like we have today. And that AI has huge practical implications, because if doctors are missing causes of pain in the knee, that could account for the fact that even though Black patients have many times the incidence of knee pain, they have about half the likelihood of getting knee replacement surgery and other remedies for that pain: the doctor doesn't see a problem, so the patient is sent home with "take some Tylenol and call me in the morning" and does not get referred to the orthopedic surgeon. Doing that project really made me optimistic about the role AI can play in rebuilding the science of medicine, rebuilding a lot of other things that we do, and
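The label choice Ziad describes can be sketched in a few lines, purely illustrative numbers, not his actual knee model: fit the same "image features" against a doctor's grade, which by construction ignores one cause of pain, and then against patient-reported pain, which reflects both causes. Only the second model recovers the missed cause.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Toy stand-ins for x-ray features: one the grading rubric scores,
# and one cause of pain the rubric was never designed to capture.
graded_feature = rng.normal(size=n)
missed_feature = rng.normal(size=n)
X = np.column_stack([graded_feature, missed_feature])

doctor_grade = graded_feature                    # rubric sees only one cause
patient_pain = graded_feature + missed_feature   # the patient feels both

# Least-squares fit of each label on the same features.
w_doctor, *_ = np.linalg.lstsq(X, doctor_grade, rcond=None)
w_patient, *_ = np.linalg.lstsq(X, patient_pain, rcond=None)

print(w_doctor.round(2))   # ~[1, 0]: learns to ignore the missed cause
print(w_patient.round(2))  # ~[1, 1]: recovers it
```

Training on the rubric yields a cheaper copy of the rubric, blind spots included; training on what the patient reports lets the model find signal the rubric misses.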
doing it in an equitable way. But when you look around at how AI is built today, it's hard to be that optimistic, because almost all the AI we see, from ChatGPT to hiring to predictive policing, is fundamentally built on this edifice of human judgment, not on more enlightened and sophisticated measures of the ground truth. And once you see that problem, you can't unsee it. Our predictive policing algorithms are trained on arrests or convictions; our hiring algorithms, as Dan mentioned, are trained on what an interviewer thinks, which is why an early Amazon effort ended up with a pool that was 90% white and male. Everyone got mad at the AI, but that's just what the data looked like. So as we think about aligning AI, paying attention to that data, and making sure we're using data that speaks to the truth rather than to human judgment, is a really important part of aligning AI with social values.

Thank you. Frank?

First of all, thank you for having me, Sally; it's great to be back in Boston, and as a native Bostonian I'm happy to be here. I'd like to elevate the conversation for a moment to set some context, at least from my perspective, on what we're up against. I just finished writing a book, and I wanted a metaphor, a framing device, for where we're at right now, and I chose the American project. The reason was that I felt it was a moment, and, by the way, I was inspired by Thomas Paine's Common Sense, a relatively straightforward pamphlet that spoke in accessible language about what was at stake. And what was at stake at that time, he argued, was that the colonists had a choice: these early settlers could remain subjects of a monarchy, or they could choose citizenship and a set of human and property rights that up until that
time they didn't possess. It was framed as people's choice: one could choose to remain a subject or move off in a different direction. By framing it in such a simple way, and empowering individuals to actually make that choice, it set in motion a set of outcomes, and here we are nearly 250 years later having built something really remarkable, when you think of what was created in this country in just over a couple hundred years. And the reverse is now happening, in a way. As rapidly as the technology has advanced, we are regressing socially, returning to subjecthood. Through the way the technology is designed, and I'll get to that in a moment, we have been stripped of a fundamental set of rights that were the core premise of everything else that was built. We had simple, thin-layer protocols: a Declaration of Independence, a US Constitution, and a Bill of Rights that established the set of rights and the ideals, values, principles, call them what you will, that we built this great country on. Lots of things were built on them, not all predicted at the time, but enabled by these core, simple protocols. And what they were based on was individual agency, ownership, control, consent, permission; in other words, individuals bolted into, locked into, their society, their governance, their economy, their country. So let's slide over now to a different type of protocol: TCP/IP, for instance, which connected all these computing devices and created the internet, and then HTTP, which connected the data regardless of what device it was on, enabling this fantastic technology, both of which were intended to decentralize and democratize. The so-called web age's promise was, I think, short-lived, because we entered the app age, and in the app age everything was
centralized, and suddenly our data became very, very valuable to a few platforms that aggregated it and applied algorithms, which are, by the way, another word for AI, machine learning, call it what you will. The point is that we have, unwittingly perhaps, delegated this power to a few platforms, the only platforms with both the massive datasets and the compute power to actually benefit from this next era of AI. So my final point is: I think we need to reflect on where we are, return to the basics, and fix the internet, which is the source of the data, so that individuals once again have agency, ownership, consent, and control, and feel like we're part of the future, as opposed to being dragged into a future whose outcome we have zero ability to influence.

All right, thank you all; that was a great setup for some of the things we want to talk about, and your comments, Frank, lead right into my first question. If we think of social media in particular as a test case for a new technology that had great promise and also a lot of risk, the results now seem to indicate a lot more risk than we had originally anticipated. What do you think society, or the inventors of social media, got wrong, and what can we learn from this?

Well, what we've gotten wrong is, again, that we've delegated this power to a few platforms, and, first of all, we need to shift that power dynamic back to individuals. If we're going to be governable, and be a society that can actually pursue a policy initiative or any other social-good objective, we need to be able to understand what set of facts we're dealing with and what we're trying to optimize for. When you contaminate the information ecosystem so that there is no ability to discern, no ability to navigate forward, we're all
in this kind of Library of Babel, this space where, as I say, the information ecosystem is so contaminated that it's hard to know which way to go, and everybody can make up their own set of quote-unquote alternative facts. And, by the way, when we talk about social: the social media platforms are the poster child for what's gone wrong, but it's actually our social graphs that DSNP is focused on. It's not just social media applications; it's anything that scrapes our social graphs, aggregates that data, and then applies some algorithm to it to optimize for some purpose. When these social media platforms optimize for time online, for stickiness and selling ads, a surveillance type of technology and a surveillance type of capitalism, we end up in this situation where we're kind of broken as a society. And I would just conclude by saying: AI is a more powerful version of the same tech architecture we have now; the architecture hasn't changed, the compute power has. So why in the world, knowing that we're broken and can't function and aren't governable, would we allow a more powerful version of the same broken design to be enabled and unleashed into the world? Let's fix the design and then do massively positive things with AI and the technology, because the technology is awesome; it's how it's being used that is very destructive right now.

And as Ziad expressed, there's a huge human element in how it's deployed: in your example of scraping data, someone is deciding what data gets scraped and then how that's incorporated into the next iteration.

You know what I'd like everybody to do, every time we use the word data? Just think personhood. In the digital world, it's you; it's not your data
Data is unemotional; it's just an abstract term. It's our personhood that's been scraped, our personhood that's been stolen, and that's why democracy has no chance with autocratic technology. Autocracies will do awesomely well.

That's right; we've seen some of that. So, thinking about this panel, shaping technology for social good: part of it is, what is social good, and who gets to make the decisions? This comes back to who decides how an algorithm is going to be built, who decides what datasets are used, et cetera, and how we come to a consensus. For all of you, maybe starting with Brian: based on your own thinking about social good, what are the most important things AI should accomplish?

Well, I want to start answering that very profound question with a reflection back on Frank's points, because it goes to who is right to answer it. One of the things I was consistently surprised by, as a policymaker looking at a number of questions around autonomy and security when it comes to people's data, is that when people are asked the question, and then asked a set of revealed-preference behavioral questions, both in terms of their response to inquiry and their behavior on the back end, they are consistently willing to trade off security for convenience, again and again and again. Frank's admonition, I think, is the right one, and I wish for society that there was more of it: if people recognized how personal their data is, from an autonomy perspective, they would be more protective of it. But I think we also have to acknowledge that the desire for convenience and simplicity is powerful. And so that question is one we have
to take great care with, in terms of who answers it, because the risk, of course, is that we answer it in a way that actually constrains, on the front end, some of the potential outgrowth of this technology. But I think the core answer to that question is that AI should help enable and amplify more human flourishing. For that to happen, and I think you see this across a number of the examples people have put forward, we need to find ways of pulling in people, and the right people: the people whose data it is that is empowering the technological outcomes, the people whose perspective actually shapes the outcome, as opposed to being on the receiving end of its use. We have to create structures where those people are at the center of the project, and earlier on. The one thing I would say on that, which connects to things both Ziad and Frank said, is that given how far along we are in the process, the only way we can do that today is to take a set of steps that will themselves feel quite disruptive. I was thinking, as Frank was speaking, that even when we talk about the issue of AI and regulation, we go immediately to too narrow a universe. To talk about the issue without talking about antitrust in the United States misses a big part of it: a lot of the reason we ended up with what I would call asymmetric risk with respect to social media platforms is that we operated in an environment where the network benefits of scale were so great, and the antitrust paradigms so ill-enforced, that we ended up without the kind of competition that would have reflected the views of
users and consumers and patients, in lots of different examples. On the back end, we've thought a lot in the FTC context about what you do about algorithms on social media platforms, but doing it on the back end means you have to do some pretty disruptive things if you want to push back up and get workers, or patients, the end owners of the data, to actually be meaningful participants in the project.

Yeah, it's as if the companies captured the right to decide what social good is, or to ignore social good if they wanted to.

Or we created a legal and social contract that enabled that to happen.

That's right, and that was a policy choice associated with a set of laws and regulations.

You know, one thing while we're talking about all this, and it may be a bit of a tortured analogy, but maybe Ziad will appreciate it: as a cancer researcher, it makes me think about how patient advocacy groups for particular diseases have been very strong in trying to determine how clinical trials are run, how technology is developed for diseases, and so on. The question has always been how much you have to know about the process to be able to have input into it that's valuable. I think the black-box nature of AI, and people not really understanding it, makes some people feel like maybe they can't have valid input. So I'm wondering, Ziad: you mentioned AI making humans more effective and powerful rather than replacing them. How do we think about that? What kinds of policies are needed, particularly in the medical realm, where there are very strong emotions, patient advocacy groups, people who want to understand what AI is going to do to their healthcare but may not have the
knowledge to do that, to understand the capabilities, the trade-offs, and so on?

Yeah, I think it's very related to this last part of the conversation about defining, and who gets to define, social good. We can all agree that we want social good; we can all agree that liberty and freedom and the pursuit of happiness are good. The problem with AI is that AI doesn't operate on that level. AI operates on the level of: here is a data set, here is a variable, what is the objective function, what am I minimizing in this data set? And a lot of those concepts that we can all agree on at a high level turn out to be very slippery when you get down to the data-set level. Take health: there is no variable called health or sickness in our cancer data sets, or in any other data sets. So for me, both as a researcher and just as a person who engages with the healthcare system, there are many different measures you might want to use, and right now those measures are buried. They're buried online too. Facebook doesn't tell us, "we are deploying a lot of things to keep you on this platform as long as possible and to maximize the number of clicks," but it would actually be very useful if we had that level of transparency about what the algorithms are doing. So it's true that AI is a black box, and that there's a big complicated function in there doing stuff with a lot of variables, but usually it's predicting one thing. What's that thing? Let's talk about that thing. To use another life-science analogy, it's not so dissimilar to the process that leads to the definition of a primary outcome in a randomized trial. Pharmaceutical companies would love to define their primary outcomes as something super easy to measure in thirty seconds for everyone, and the FDA says, no, that's actually not adequate; we need you to do more. So I think there's a very
careful and context-specific negotiation process that has to happen, and it's very domain-specific. But transparency about exactly what the AI is doing is very important, and you don't need a PhD in computer science to say, "predicting that variable doesn't seem like a very good idea."

That's a segue, and Dan, I know you've thought about this too: the question, even at the governmental level, of how you legislate, regulate, and make laws for technology that you don't understand, and where you might not even understand the notion of that single outcome.

Yeah, a lot gets made in discussions of how we can regulate AI if we can't understand it, with pushes for explainable AI and so on. But I think there are practical things we can do in terms of looking at outcomes. In fact, if you think about much of regulation pre-AI, we regulate outcomes. If humans do some bad thing, there's some determined punishment for them. Once in a while we try to get inside their heads and figure out what their intent was, like premeditated murder versus manslaughter, but you still killed somebody. In both cases there was an outcome that was not permissible under the law, and we don't get inside people's heads. So I think there's too much focus in AI on trying to understand the system rather than trying to understand the outcomes, which I think is aligned with what Ziad was saying. And there are practical ways to move forward with that. In fact, in some of the policy briefs we've been working on for Congress with David Goldston in the DC office, and with Doug, who's sitting right over there, the point is this: because so much existing regulation and legislation is about the outcomes of what humans do and don't do, such as what constitutes discrimination in a hiring process, we have to at least hold algorithms and AI to those same
standards that we hold humans to, and in many cases today we're not actually doing that. So I think there really are practical things we can do. They probably won't be enough in and of themselves, but they also stop us from getting stuck. When there's a new technology we don't understand, we get very caught up in the idea that we need a whole new approach to thinking about it. I don't think we do, actually; I think we can do very sensible things. If you look at the draft EU AI Act, they've gotten themselves completely tied in knots trying to define the high-risk uses of AI. Let's forget about that: we already regulate most high-risk things that humans do, so that's a pretty good road map for where we want to pay attention to what AI is doing. It's outcome-based, and it covers the things we already view as risky for society, risky for our norms and our values.

The point you're making about Europe's policy objectives is a good proxy for the following. Europe is fairly clear on its policy objectives, and this is even prior to AI: GDPR, the DMA, the DSA, and so forth, all of which predate this wave of AI, and by the way, it's the same technology, just a more powerful version. But they don't have influence over the actual tech architecture. Let's face it, the choices today are primarily Chinese technology or American technology, and so they're focused on American technology, because that's what the citizens of the EU are using. There's a disconnect between the policy objective and the actual tech and how it's engineered, so there's no real ability to effect those policy objectives. What I'm arguing is that we have an engineering problem, first and foremost. Fix that: return some ownership and control of the data to individuals, get
them connected to what's going on. Fix that piece first, before we really turbocharge all this. And the second point: let's not think of social media platforms as the only ones optimizing in unhealthy ways for society. Social media platforms aggregate our social graphs and apply a social index; search platforms aggregate our data and apply a search index; shopping apps do the same, aggregating our data and applying a shopping index. It's all the same thing: data being aggregated, with AI algorithms optimizing for something. Until we regain control of that, reset, and rebuild, I think we're going to be chasing our tail, and I don't think any policy objective or regulation will ever keep up. We need to innovate our way out of this.

These are all really interesting points. I think we have ten minutes left, so why don't we open it up to the audience. Yes, back there.

Just a comment, and a little bit of a question that connects all these things. Licklider's 1960 paper is phenomenal, but there's also an amazing paper he wrote about policy in 1979, because in addition to being a great innovator, he was also one of the great practitioners of industrial policy in American history. It's called "Computers and Government," strongly recommended to everyone. It really connects all the things you're talking about, and in particular it highlights the way the US was, at that very moment, turning away from supporting the fundamental protocols that Frank was talking about and toward AI and cryptography and things like that. I wonder what you all think about how public policy can support directions that lift up those fundamental protocols, and thereby build the basis of the human-machine cooperation he was so focused on.

Anyone want to comment? Well, I'll use the occasion to make a quick point, which I wanted to
follow up on Dan's point. One important element is to not allow the reality of complexity to become a shield against practical policy steps being taken. I cannot remember a policy meeting I sat in in Washington on the topic of AI that didn't start with an admonition that this is all really complicated and most people in Washington don't understand it. That is true in one sense, but complexity is much more often used as an opportunity by those who have the most to gain economically from avoiding regulation, to create a veneer that only a few people can actually engage on the topic. So one big piece of what needs to be done here is to start with basics and build capability up from there. A very simple point: we don't, in the United States, have even basic protections around the privacy of children's data. The reason for that is not that it's a complex issue, and not that the American public is split on it; unlike some of the other convenience-versus-privacy issues, this is a 90-10 issue with the public. The reason is that there is a small set of companies with disproportionate market power and political influence, and so we can't get things done that are basic and simple. So one of the answers, in the political-economy vein, is that we need to not allow complexity to be a shield, and we need to start somewhere with some basic steps. And that's possible with respect to data: you don't have to go all the way to Frank's vision, which I like, you don't have to go all the way on day one to recapturing all of our control. There are things we can do that we could actually prove out, and if they're positive in
direction, they create more actual space to do more going forward.

A good example, I think, of how policy and innovation interact: in 1993 we started a telecom company called RCN. We built it in the major cities in America, including here in Boston. The internet as we know it today was nascent, but we saw the power of connecting your phone, your television, and your high-speed internet access; we were the first ever to bundle them. And it was interesting: people would come to sign up, and many would say, come back to me later, when I can have my phone number. In other words, they couldn't own their number. They would sign the contract and say, here's my phone number, and we had to say, sorry, we can't give it to you, because the oligarchs of the time, the seven Baby Bells, said they owned your phone number. But in 1996 the Telecommunications Act passed, phone numbers became portable, carriers became interoperable, and so forth. So let's just think about it today. Our data should be portable, just like our phone numbers became portable. The apps of the future should be interoperable, just like carriers are interoperable. We shouldn't be clicking on the terms of use of five big platforms; the new apps should be built so that they click on our terms of use for our data. It's very simple, it's very basic, and it's all quite doable. We actually have a use case now that's migrating to DSNP, and you're actually building one, or Deb Roy is building one, here at MIT, with another being built at Harvard, and they'll be interoperable, and we'll demonstrate all this.

I would add one more point: there's value here. Your social graph, this aggregated data, is worth far more than a telephone number. There's a whole new economy that's going to evolve, a data-sharing economy, which will be a great equalizer. This is a huge-impact project as well: if we can return to people what is rightfully theirs and restore
faith and trust in the system, because right now it's gone, we get the best of both worlds. We save democracy, we share value, and, I would say, we do the most important thing: we start protecting children, because we're not protecting children right now, and that is our most important responsibility as adults, as officials in government, as university presidents, whatever. What are we doing? This is just not right, and we can fix it, and we must, before the explosion of AI. AI can do so many wonderful things to solve problems; we could get to a point where our kids' kids live healthy lives to 150. But not in the direction we're going right now, because we're just not going to get there. And this comes back to a choice: are we subjects or citizens?

Thank you very much, wonderful panel. There's a theme that many people touched upon, and that Brian argued in a somewhat understated way; I want to amplify it and ask the question it implies. Something that traditional economics, and I think a lot of computer scientists and others, agree on is that you don't mess with the innovation process: invention goes wherever it goes, and if there's policy or regulation, it should come downstream. What you pointed out, Brian, is that for many of the problems we're facing, such as cybersecurity and climate change, that doesn't make sense; we have to change innovation itself. You don't need to preach to me, I'm the converted here, I've been arguing this for a very long time, but it's still a minority view. And I think part of the reason it's the minority view is that we don't have the infrastructure for having reasonable policy on the direction of innovation. We don't have the norms, we don't have the agencies, we don't have the expertise, and we don't have a notion of democratic control, or government control, of innovation that doesn't become stifling. So how do we
build that? How do we make sure that when you discuss redirecting innovation, which I think is what Frank is about, and which I think Dan has thought a lot about as well, it doesn't become a hijacking of innovation?

I have both an incremental and an existential view on that question. Existentially, what it actually requires is a shift in the viewpoint on what, fundamentally, is the right governmental role and function in innovation policy. You have to go back to the view that government plays an important role in fostering innovation, and an important role in regulating innovation, but only to the degree that you start with the question: innovation to what public end, to what public good? You've got to make that shift, and it is a generational shift, a shift in people and institutions and processes. The incremental piece is that we are running a lot of actual experiments in the US government right now that move us closer to this direction, and creating a system for assessing those experiments, amplifying them, and then expanding them radically when we see that they are succeeding is, I think, the most fruitful way forward in the short term. There also has to be a practical recognition that in almost all cases they are controversial, because they are upending an existing embedded structure where there's a lot of economic rent on the other side of the table. So there has to be a certain willingness to be disruptive when the disruption is at the expense of somebody who currently holds a significant economic rent, and an ability to identify and defend the idea that a lot of this is actually a public good
that needs to be approached differently. But I don't want to leave this group with pessimism. The number of experiments we're running today that I'm quite optimistic will prove out, and that we could scale quite significantly, is high, which means it's an important moment for us to be spending more time on how we can effectively and quickly analyze and scale them.

And on that optimistic note, we are out of time, so I want to thank all our panelists for this very interesting discussion. Thank you very much.
2024-02-08 14:17