Trillion Dollar Questions: Solving the Policy Puzzle of Generative AI

>> Please welcome Christopher Mims, Technology Columnist from the Wall Street Journal. He will moderate today's discussion. >> Morning. Thank you all for being here. I think this is an unusually timely panel given the events of the last 72 hours at OpenAI.

And while that news could be old news by the end of the day, I think it illustrates just how challenging the problem of governance for generative AI is, and how much concentration there is in the hands of a few individuals and a small number of technology companies. And so I hope you'll agree that that makes what we're going to discuss all the more salient. And personally I feel privileged to have these three panelists in one place at one time, and I want to start by framing the discussion as briefly as possible. Artificial intelligence is a difficult thing to define.

I've learned from years of reporting on the subject that the definition of AI is whatever someone thinks it should be. And so when we live in an age when any software that has even a smidge of machine learning in it qualifies as AI, the question becomes: is anything AI? Is everything AI? I like to use the metaphor of a steam engine. I think we're at the early days of artificial intelligence. We're at the primitive steam-engine stage, lifting water out of an underground mine shaft.

And if we follow that metaphor, eventually we're going to get steam turbines, and we're going to get internal combustion engines and jet engines and everything else, and they will all be called AI. So our panelists today are experts at these analogies and these mental models, and they are going to walk us through a number of pressing, practical questions, which I think is the best way to tackle a subject as abstract as AI today. I'm going to begin with Erik Brynjolfsson of Stanford. Erik. >> Hi, Chris. Thank you so much for inviting me here and thank you to everybody for gathering. As you said, this is a really interesting time.

There's so much confusion and hype about AI, even just this weekend, but my personal perspective is very clear. What's happening in AI is the biggest technology change of my lifetime, possibly ever in history. This weekend's events only underscore how billions, perhaps hundreds of billions, of dollars of value can be created or destroyed overnight, so the stakes are very high. As Chris said, this is in some ways like the steam engine.

Economists call that a General Purpose Technology, or GPT, although I understand the machine learning people have taken over that acronym from us. But unlike the steam engine, the core technology is now improving at a rapid exponential rate. The models are getting about 10 times bigger every 18 months or so, and if the scaling laws continue to hold as they have in the past, this portends significant improvements in capabilities very soon, and the capabilities are already very impressive. The field is aiming to solve intelligence, and that is perhaps the most fundamental technology.

If you solve intelligence, you can use that to solve so many other problems in the world. And unlike the hype around earlier technologies, this one is already affecting real work and jobs. There are a number of case studies of that, some of which I've done.

So let me not bury the lede: the first-order effect, I think, is much more productivity. As the panel's title suggests, this is a multi-trillion-dollar question. Productivity is defined as output per hour worked, and more of it is a very good thing.

We need that, we need more productivity, to solve challenges like finding more resources for the budget, health care, the environment, and poverty. For the United States, I predict the productivity growth rate will at least double compared to what the Congressional Budget Office is saying. They predict about 1.4-1.5%;

I think it's going to be over 3% growth in the coming decade and possibly much higher. We'll see similar effects in other nations driven by improvements for knowledge workers like financial analysts, general managers, customer service reps, doctors, and lawyers. A lot of people who previously were not so affected by technology. I put my money where my mouth is. I made a long bet with Bob Gordon, a great economist. And you can go look at longbets.com

where I wrote down the exact terms of what my prediction is. I wrote a piece with Martin Baily and Anton Korinek laying out the reasoning for why we think productivity is going to grow faster. I'm co-chairing a National Academies report with Tom Mitchell of Carnegie Mellon, which should come out in a month or two, laying out in much more detail what's happening with the technology. As I said, this is not just hypothetical; we're already seeing some real changes on the ground, remarkably fast I think.

In just the couple of minutes I think I have left, I'll mention briefly that I looked at a call center where they rolled out a large language model-based assistant to help with millions of conversations. We had all the data from those millions of conversations and 5,000 call center reps. Unlike a traditional system, this one captured the tacit knowledge from the data. And it did not just replace the workers, but really complemented them.

It worked alongside them and it would coach them in real time with hints on how to serve their customers better, and very quickly we saw massive improvements. Within a few months, just a couple of months, we saw that productivity improved by 14% on average. Interestingly, the least experienced workers improved the most, about 35% more productivity. Customers also benefited.

Their customer satisfaction was up. If you looked at the sentiment in the calls, customer sentiment went up: a lot more happy words and fewer angry words. Workers also seemed to be happier; they were less likely to quit. So it was really a win all around. Stockholders, customers, and employees all benefited. There are many other case studies like that emerging, some by other folks and some that we're doing. So I think on the ground we're beginning to see some changes, and if you roll that up to the economy as a whole, this is certainly a trillion-dollar question.

Managers, policymakers need a roadmap for capturing this knowledge. I hope we'll have a chance to talk about that later, but let me stop there and I'm looking forward to all your questions and comments. Thanks. >> Next up we have Kaye Husbands Fealing from Georgia Tech. >> Hi. Good afternoon everyone.

Good to see you. I wanted to frame my comments around the public sector. Erik just gave you a really terrific overview of how large this economy around artificial intelligence is, but I wanted to drill down into how artificial intelligence is being used in the government sector, in the public sector. And also think a little bit about, well, what does all of this do for us in the economy? What does it do for our residents in this economy? So artificial intelligence has been around for a very long time. Many of us don't really think of it that way because of so much new news that's going on, as Erik mentioned, but there are decades within which various aspects of artificial intelligence have been used. The second point I want to make is that we really need to think about risk management and communication with the public.

We're going to drill down on that a little bit more later on, but with all that is going on, as large as the sector is, as many sectors as it connects to, how do you mitigate risk? How does the public feel more comfortable about what we're using? And again, it has been in use for a very long time. The third point that I really want to stress here, as we talk later on in this conversation, is that we're focused on technological solutions, but oftentimes we also really need to think about the social sciences, the humanities, and the arts in relationship to artificial intelligence. Not in terms of how artificial intelligence benefits those areas, but what are the lenses, what are the capabilities that can be drawn from other areas, in addition to technologists, to be able to solve some of the issues that we see on the horizon?

Now let me just put a few more points on the table. We often think about artificial intelligence in terms of routine tasks and also in terms of original and creative thought. And I think a lot of times we get really worried when we think that routine task automation may displace workers.

And we're also worried if we're thinking that, oh my goodness, we're building something that could replace creative thinking. And so again, I want to put those as topics on the table that we'll discuss and drill into soon enough. The last point I want to make, and I just want to put some more things on the table here.

In fiscal year 2022, non-defense US government agencies allocated a total of $1.7 billion to artificial intelligence research and development, and that is expected to go up closer to two billion dollars. So even in terms of our own spending in the federal government on prime contracts, grants, and other transactions, we're spending quite a bit in this area.

In the public sector, where are we seeing that expenditure? I just want to lightly touch on some ideas here and drill down on these a little later. National defense; health and precision medicine, including forecasting what's happening with infectious diseases; energy and environment, those are areas where quite a bit of artificial intelligence is being used. Cybersecurity, where, as you can imagine, cyber forensic specialists and other parts of the field are using AI to detect attacks.

Transportation, the building of bridges, law enforcement, which is again an area where we really think about, well, how is AI being used and is it used equitably? AI is also being used in basic research. So the last point here that I want to make is that I want us to think carefully about two things. Does AI create inequality? And what are the lenses that we need, in addition to technologists, to better understand the impacts of artificial intelligence on our society and our communities? What's the value? What are the risks? And how do we mitigate those risks? Thank you. Back to you, Christopher. >> Thank you, Kaye. And next up, we have Krishnan from Carnegie Mellon. >> Thank you, Christopher.

Good afternoon, everybody. It's such a pleasure to be with all of you, and in particular with Christopher, Erik, and Kaye. I will offer a set of framing comments related in particular to AI and policy, but let me start by reiterating the rate and pace of change of AI technology.

For those of you who got a chance to either attend OpenAI's dev day or see what was released on November the 8th, just about ten days ago, you got to see not only the technology as many people think of it, in terms of chatbots, but a number of other advances related to how to connect the GenAI technology to other applications through APIs, through plugins, and so on. The economics of this have changed also. The cost, on both the input token side and the output token side, is dropping rapidly, thereby setting the stage for greater application of this technology in consequential applications, which is really the set of applications I'd like to focus on. As we think about this, I'd like to highlight that across all jurisdictions there is broad agreement around: let's not regulate the technology itself; let's regulate, or think about what governance needs to be in place for, the use of the technology in particular contexts.

Context matters. The use of AI in autonomous vehicles is very different from the use of AI in, say, hiring or recruiting, or AI in health care. So it's important to think of use cases as we think about governance of AI in consequential settings. The second is to be clear about the distinction between model developers, the OpenAIs, Anthropics, and Googles of the world, versus model deployers. And there are a very large number of model deployers, but a very small number of model developers.

And that presents issues, not only for the reasons that Christopher outlined at the very outset, but also because there's a potential for correlated risks and systemic error, which have been highlighted by a number of leaders, including SEC Chair Gensler, in terms of the correlated risks that might arise, for instance, in financial markets. Now, in terms of three top-of-mind policy issues that arise across multiple jurisdictions, and that you could see in President Biden's executive order, in the Bletchley Declaration, in the EU AI Act, and in the Chinese rules passed in September: one is the sense of thinking about applications as having risk tiers, which are unacceptable, which are high risk, which are medium risk, which are low risk. The second is the issue of content authenticity, which is a big concern, particularly in light of the fact that a very large number of people are going to elections, both here in the United States and elsewhere: what is the synthetic media code of conduct, and what technology could potentially be used to enforce that code of conduct, with a particular concern about deepfakes? And then there is a set of questions related to how we provide governance prior to the deployment of AI models in consequential settings. This is called pre-deployment. And the NIST AI Safety Institute that was just stood up a few days ago is very focused on this question of what measurement science, what evaluation, and what assurance we can do of AIs in these very consequential use cases prior to deployment, and then post-deployment. To build on something that Gensler said: in information security in 1988, with the arrival of the Morris Worm, we stood up something called the CERT, the Computer Emergency Response Team.

What is the equivalent capability required for AI? Even with all the due diligence that we might do pre-deployment, we need a capability to respond to AI failures and vulnerabilities, to do forensic analysis of them, and to ensure that model developers patch those vulnerabilities and implement those patches to create a more trusted AI ecosystem. This collection of issues is going to be quite essential to have in hand for the societally consequential use cases, to get the benefits of productivity that Erik spoke about. And I'll close with another data point. The AI economy today in the United States is $100 billion.

The US non-defense AI R&D spend, as Kaye mentioned, is $1.7 billion, from NITRD data. The NSCAI report called for the US to invest up to $8 billion in AI to be competitive. Given the importance of this technology for both economic and national security, I believe that a significant investment has to be made in the technology innovation components, but equally in the policy innovation components, to ensure safe and reliable AI.

I look forward to the conversation. Thank you, Christopher. >> Thank you, Krishnan. So if I can ask all of our panelists to turn on your video right now.

This is the part I always enjoy, which is the scrum of conversation that comes next. I'd like to start with you, Erik. I'm sure we're going to get lots of follow up comments from everyone else.

But let's talk about applications for generative AI that are on the near horizon, and then we can talk about impacts and get to the policy portion of this discussion. So you mentioned your own research on a call center, which of course is very impressive. I've got to say, given your long bet about how generative AI specifically, or AI in general, is going to impact productivity, I think I might have taken the other side of that bet. Not because I disagree with you about the technology, but because I think that people can be slow to adopt this technology or any new technology. I want to ask you: so often when executives come to me, or when people have questions about the coverage that we do on AI, their first question is, how can I use it? What is it good for? And of course, that's particular to each business, but is there a structured approach to looking at how generative AI specifically can impact a business? Can it impact anyone in the private sector? And then we can talk about the public sector. >> Sure. Well, that's a great question.

The call center is one narrow use case, but there are many others. And since I wrote that paper with Danielle Li and Lindsey Raymond, a bunch of other papers have come out with other specific use cases. I think one of the biggest wins has been in coding and software, where some people see gains of up to 40%. A team recently came out with one looking at management consulting, where they also had gains very similar to what I saw. There have been some that have been looking at writing.

And one thing that's striking to me, especially since I agree that usually these things take years, and I've written about how electricity took decades to pay off, is how quickly the results are being realized. That's in part because I think we have an infrastructure now where people can just roll these tools out, incorporate them into Microsoft Office, or ship them as apps. But they're not good for everything. They're good for some tasks and not for other tasks. And so managers and policymakers need a structured approach to understanding where the tools are most useful.

And what we've been developing is something called the task-based approach. It started with a paper that I wrote with Tom Mitchell back in 2017 that came out in Science, which basically took every occupation and broke it down into about 20-30 distinct tasks. For radiologists, there were about 30 distinct tasks; for insurance adjusters, about 19 tasks; economists and journalists each have a certain number of tasks. And it doesn't make sense to look at generative AI for the whole occupation, but it does for specific tasks. It can help with writing, but not with loading boxes onto a platform. One of my students who actually helped with the first paper, Daniel Rock, wrote a paper with a team at OpenAI, I don't know if those guys are still there, and they applied it specifically to generative AI.

My paper with Tom Mitchell was looking at machine learning more broadly. And now we can look at about 18,000 tasks in any given company and, one by one, prioritize whether or not generative AI can help with them, roll that up, and basically give the CEO or managers a priority list that says: here's the biggest benefit, here's the next biggest, here's the low-hanging fruit. Because I see everyone is overwhelmed, there are so many opportunities, but this gives them a structured way to have a game plan for where to roll it out.
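To make the task-level roll-up Erik describes a little more concrete, here is a minimal sketch of how such a prioritization might be computed. The task lists, suitability scores, and weighting scheme are illustrative assumptions, not the actual Brynjolfsson-Mitchell rubric or the WorkHelix product.

```python
# A minimal sketch of a task-based prioritization, assuming made-up task
# breakdowns and suitability ratings. Not the actual rubric used by Erik's team.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_share: float        # fraction of the occupation's time spent on this task
    genai_suitability: float  # 0-1 rating of how well generative AI can assist

@dataclass
class Occupation:
    title: str
    headcount: int
    tasks: list[Task]

def priority_list(occupations: list[Occupation]) -> list[tuple[str, float]]:
    """Rank occupations by estimated exposure: suitability weighted by time
    share and headcount, so the 'low-hanging fruit' sorts to the top."""
    scores = []
    for occ in occupations:
        exposure = sum(t.hours_share * t.genai_suitability for t in occ.tasks)
        scores.append((occ.title, exposure * occ.headcount))
    return sorted(scores, key=lambda item: item[1], reverse=True)

# Hypothetical example: two occupations with invented task breakdowns.
call_center = Occupation("Customer service rep", 5000, [
    Task("Draft responses to customer queries", 0.5, 0.8),
    Task("Escalate complex complaints", 0.3, 0.2),
    Task("Update account records", 0.2, 0.4),
])
warehouse = Occupation("Warehouse associate", 2000, [
    Task("Load boxes onto pallets", 0.8, 0.0),
    Task("Log inventory discrepancies", 0.2, 0.5),
])
print(priority_list([call_center, warehouse]))
```

Weighting by time share, suitability, and headcount is just one plausible way to surface where the biggest benefits might be; a real analysis would rest on validated task taxonomies and measured productivity effects rather than these placeholder numbers.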

And I would like to accelerate that timeline, because I do want to win my long bet. I have a stake in this, I have a dinner bet on it. If companies can take these amazing technologies and bring them to the bottom line by rolling them out without making the mistake of doing it in the wrong places, it's going to be better for the company and better for society. Daniel Rock, Andy McAfee, James Milin, and I have started a company called WorkHelix that makes it easier for people to do these plans, but you can also read our papers and implement the task-based approach on your own. The sooner that happens, the sooner we'll start realizing these productivity gains. >> Christopher, can I just add something to what Erik said? So two things come to mind.

One relates to the deployment of generative AI, or at least these transformer-based models. There's a paper in Nature that described how NYU Langone used transformer-based models to support operational and clinical decision-making tasks that doctors are faced with. For instance: will insurance pay for this patient, or will this patient be back in the hospital within 30 days? Those kinds of questions.

It turned out that, by virtue of the fact that the transformers were using documents already in the workflow, this allowed for easier integration of the generative AI, and in this case transformer-based architectures, into decision-making workflows in the hospital system, in contrast to the more traditional predictive AI models that were not as easy to integrate. So that is one point. The second point I quickly wanted to make was that, in talking to boards of companies and government agencies, one of the questions that keeps coming up is: we hear about the exposure of our tasks to generative AI, so how do we identify what skills individuals have, what skills a given occupation demands, how that changes in light of what Erik just said, and how we fill the gap for workers as they try to determine how they might best contribute to the changes brought about by AI? We have a project here called the Workforce Supply Chain Initiative at the Block Center.

And folks can go to the Block Center website and get more information about it. But this can also be done analytically. And supporting workers, firms, and government organizations in finding and filling gaps in skills is going to be a really important issue. >> So to add to all of that, I wanted to put a couple of things on the table. One, at the Canadian Science Policy Conference in Ottawa about a week ago, I happened to slip into a session on medicine and AI.

You've probably heard all about that; folks online have heard quite a bit about what's happening there. But the one salient point that made us all take a breath was this: if we're using artificial intelligence in the diagnostic process, or even to prescribe a certain medicine or to develop a medicine that's tailor-made for a given individual, and it works, hurrah; if it does not work, who has the liability? And we all took a breath because we weren't really sure of the answer to that question. It probably isn't different from some of the other conundrums that we face when we're thinking about intellectual property and also risk. But something Krishnan said in following up on Erik reminded me of an area where we need to drill down. The larger question that I heard was also about workers: who is doing the work in the public sector, the sector I'm thinking about, and how are artificial intelligence and the variety of ways in which it's being deployed going to be used by the worker? Not displacing the worker, but what is the process by which existing workers, and folks that are going to come on stream, are going to be skilled and reskilled to be able to do this work? Erik mentioned almost a ladder, a stackable way of trying to understand how AI is deployed throughout an organization. That's really important.

I wanted to say that I noticed also, in preparation, that OMB, the Office of Management and Budget, has a directive on the types of safeguards that need to be put in place. I don't want to list all of them, it's pretty extensive, but the first off the block is conducting AI impact assessments and independent evaluations. The evaluative process is necessary, and I think we also need to very much communicate that these evaluations are happening, because workers want to know that they are going to be part of the organization, that they're going to be able to adapt to the use of various types of mechanisms that will enhance work and not necessarily replace work. And I don't have rose-colored glasses on, so I'm not going to say everything is going to be enhancement rather than replacement. That's for our discussion today. But I also want to make sure that we understand that conducting these impact assessment processes is a really important piece.

And we also very much need to communicate how we are doing so, so that workers feel more secure in the work that they're doing and their contribution to society. >> Kaye, let me ask you a follow-up question there. The public sector, one area where it seems like there's an opportunity for it to move ahead of the private sector, is in evaluating the impact of AI on work, work processes, and output. As you know, a lot of the impacts of AI in the public sector up to now have been disastrous, especially as they've been applied in sentencing, in the legal field, in law enforcement. Do you think we've learned from those lessons? Do you think that there's an opportunity for the GAO or others to head off some of these potential negative impacts? I mean, we're talking about AI like it's going to automatically displace workers.

I mean, a lot of the early results I've seen are that when it's misapplied, which is often because it's a new technology, it's just as likely to slow people down, to generate more work, or to generate a worse work product. So could you talk a little bit about how the public sector has an opportunity to evaluate the impacts of deploying generative and other AI in a huge variety of types of work? >> Sure, I'll start off, but I think Krishnan also may have something to say based on what you just saw at NIST last week. I want to just put it in a slightly different frame. The things that we hear about that catch your eye in the news, the negatives, those are not necessarily the preponderance of what's happening. What I don't know is how much of what we call artificial intelligence, but also big data, neural networks, hierarchical generative models, natural language processing, I can go through the long list of various aspects of artificial intelligence and automation, how much of that is being used in the public sector.

And if we had that sense, I do not think we would have the sense that it's so wholesale negative. You can see my eyes darting around: I'm putting in the chat, and I sent to everyone, the use cases for AI in the government sector, so people can pull that up and see. I have this massive spreadsheet of all the different ways in which it is integrated already, not just new things that have been happening. So I think we first have to go back and answer the question, not pose it as an outcome. How much of this is already happening that's actually going well? That we take for granted, that actually is making work more productive or easier, so to speak.

And then tackle those areas where we know, wait a minute, it's a disruptor that is benefiting neither the worker nor the output. And then dig in on those areas to be much more clear about where the solutions need to come from. I'm sitting at a university, of course.

I have to say that there are aspects of this where we really need more research. Erik listed a number of papers he's worked on. Krishnan has as well. We have folks here at Georgia Tech who are working on three of the AI institutes from the National Science Foundation, as well as ethicists who are working along with those AI institutes.

So there's work being done to try to understand how to do those assessments better. And for many folks that may seem like, well, wait a minute, I want to know where the rubber hits the road, I want an answer. But I think we have to frame the question better than we have been framing it, and remove the assumption that this is always a negative thing, that it's always a problem. Because sometimes it really truly isn't. I will say, though, that the problem that I mentioned earlier, that really gets at the crux of some of the things that I worry about, is: does AI create inequality? And there are three different directions I've seen, in a Harvard Business Review article on AI and machine learning, the title of which is Eliminating AI Bias Is Just the Beginning of Equitable AI.

And it goes in three different directions. What are the technological, algorithmic biases? What are the supply-side forces that are issues? And what are the demand-side issues? The demand side is basically that there are people who are uncomfortable if something is developed using AI. Health care was the example that was given.

On the supply-side forces, Brookings has another article that asks: are we really seeing the displacement of workers, in what areas are we seeing that, and can we get a better sense of what is happening there? >> Let me just interrupt you, because I want to focus on exactly what you're saying right now and get feedback from others as well. You're talking about workforce impacts. And are there any? And I would love to throw that to our two other panelists. Are we seeing workforce impacts yet? Do you think we will soon, in other words, an impact on either the quantity of jobs or the tasks that they consist of? >> I can start. So we're not seeing an effect on the quantity of jobs right now.

Unemployment is close to a record low, although, as always, there are ups and downs in different sectors. And I'm not sure we will anytime soon, because there's still an awful lot of work that only humans can do. And depending on how well the economy adjusts, people will redeploy as some sectors need more employment and others need less. That said, there's evidence that we are beginning to see some effects on wages.

And certainly that happened over the past decade with earlier waves of technology. We saw some big changes in wages as AI and other technologies affected routine skills disproportionately: we saw a drop in the demand for those skills, and wages fell. >> But Erik, sorry, let me interrupt you right there. We've seen a record appreciation in wages, especially for so-called blue-collar work, since the pandemic.

>> Yes. >> Sorry. What effect on wages are you seeing already? >> So there's been some spot evidence from studies, for instance on Upwork, where for the tasks that can be done by gen AI, by ChatGPT, there was a fall in demand for people doing some of those tasks, such as translation.

So those are little spot areas. If you believe, as I do, that this is a very powerful technology that's going to affect perhaps 50% or more of the tasks in the economy, just based on what LLMs and generative AI are able to do when you match them up to the tasks that I talked about earlier, then there's going to be a lot more. It's barely been deployed yet, so this is still speculation, but I guess that's what you asked me to do. We're seeing a few spots here and there. The short answer is, if wages adjust, then we probably won't see it show up in employment, but we will see it in wages.

And that underscores what Kaye and Krishnan said earlier, that we need to think about how to reskill and redeploy people. One of the discouraging things is that US labor markets have become much less flexible over the past 10 or 20 years. Much less dynamic. There's a lot more, for instance, occupational licensing. For some reason that people don't totally understand, people are not moving to new jobs as much as they used to, and they're not moving across geography as much as they used to.

And that makes it harder to redeploy, because the way to address these technology shocks is not to try to freeze everything into the way it was last year or 50 years ago, to try to just lock in place the existing job structure. The only way that America, or any country, has succeeded is by embracing dynamism, creative destruction, and having people redeploy into the new areas that are going to be growing. And I don't think we're at a stage where we're at AGI that will be able to do all the tasks, though maybe Ilya Sutskever disagrees.

And therefore there'll be plenty of tasks for the foreseeable future, at least say the next decade, that only humans can do, and we just need to be able to train and redeploy people into those. The key is going to be maintaining flexibility, dynamism, and training so that people will find those new opportunities, not trying to freeze the existing labor structure. >> So if I might add to what Erik said, there are two things that I hear a lot in talking to policymakers here, and I should say this is not just a federal issue. I think Governor Shapiro of Pennsylvania signed an executive order for the deployment of gen AI in Pennsylvania government decision-making processes. So this question of what the potential impact is on the workforce certainly arises in state-level decisions and policymaking as it does at the federal level. And as Erik just noted, there are a number of papers that talk about the extent of exposure of tasks to gen AI.

In fact, Daniel Rock, along with OpenAI colleagues, has written this paper showing there's going to be considerable exposure of tasks. >> 80% of the jobs have at least 10% of their tasks exposed. >> 10%. And 20% of them have 80%. >> 20% have at least 50%. Sorry, I read that paper a lot, so I know it well. >> Yeah, 20% have 50%.

>> It's a great paper. I'd encourage everyone to take a look at it. And our whole task-based approach is very much inspired by that and the earlier work that Tom Mitchell and I did in that spirit. >> And I think the main question there is not whether you agree or disagree with what Daniel Rock said in that paper, but rather irrespective of that, what should the government or a firm do when such change happens or comes about? And I think that's really where you need this quick response capacity to gain real-time situational awareness of what's actually changing.

These are predictions of what might happen, but what firms need, and what society and policymakers need, is situational awareness and the capacity to find and fill gaps in skills as they arise. And so a skill-based approach, which identifies workers and their skills, occupations and their skills, how they change, and where the skill gaps lie, really tells you what kind of upskilling programs need to be put in place. >> I agree with that, Krishnan, and I really want to double-click on this concept that it has to be specific. The general outcomes we need to understand at the level of the general economy, the macroeconomy.

That's the question that we usually see discussed and posed. But I think the worker, the citizen, even the organizations, really have to get that granular understanding as to how this is helping, and where we need to augment the capacity for workers to do either the current job they were doing or the next thing that's coming. The other point I want to put on the table, and see what you all think about this: when we're looking at the benefits structure, I was looking at an article about how AI is being used to develop new highway walls and bridges and those types of things. And one of the things it's being used to do, even though it's building infrastructure, is also to reduce carbon emissions. The design is not only about fixing something that's crumbling, fixing our infrastructure in the US using AI; one of the expected dividends is also that broader benefit.

So I want to understand better what the connection is between some of the areas where we're seeing advances and the bigger picture of what the benefits are. That's in addition to really digging down and understanding the workforce elements. What are these other elements as well? I don't know if you've seen any work that can really give that broader picture.

So I went down granularly and came back up again. I'm just curious. >> I'd love to return us to the most basic question that I think this panel is trying to answer, which is: how do you address the Gordian knot of regulation and policy toward generative AI in general? And I just want to ask this as a provocation. If the current generation of AI is as big as I think everyone here believes it's going to be, and its impacts are as broad as we agree they're going to be, are we actually too early in terms of trying to implement any kind of umbrella regulatory structure on AI? Because if its impacts are so broad, it's going to have to be piecemeal, because the impacts on individual fields, on safety, etc.,

are going to be specific to each one and idiosyncratic, and will happen within the existing regulatory structure. And I'll just give you one example and then you guys can go to town. In the past, when people have talked about what happens when you bring the black box of AI to insurance, whether catastrophic, health, whatever, the response of regulators, which I think has been very clever, is: guess what? The old rules still apply. It doesn't matter if humans are making the decisions or AI is.

If you're discriminating in violation of our rules, then that's discrimination. It doesn't matter how you arrived at that decision or the technology used to do it. So that to me is a great example of where we didn't need a specific AI regulation. The existing ones were sufficient.

Even when they're insufficient, you have experts in each field or each domain that are maybe best equipped to say, well, this is how we should regulate this now that the ground is shifting beneath our feet. So is it too early to talk about broad-based AI regulation? Is that ever going to be appropriate? I'd love to hear from each of you. >> Should I go first? So Christopher, I think there are a couple of things I wanted to add to what you just said.

I think if you took a step back, there's a need, from a societal standpoint, for citizens at large to gain trust in what AI can bring to the table. And I think that's really important. If you've seen the Khan Academy Khanmigo demo, of Sal Khan demonstrating how GPT combined with Khan Academy does an amazing job of being a personalized tutor to a kid who can actually acquire learning objectives and capabilities, I think we need things of that nature. I don't want to use the overused term moonshot, but I think we need something that ensures people really get to see what this technology can bring to the table. And I'm a big proponent and believer in what I think it can actually bring, because the rest of it, with regard to what guardrails we put in place and how we ensure we get all the benefits while mitigating the risks, all stems from our view as a society of what this technology brings to the table.

So I'll put that up front. The second is that you're absolutely right: as I said at the outset, AI regulation has to be done in context and in use. I serve on the NAIAC, the National AI Advisory Committee to the President, and if you look at what it and the EU AI Act are trying to do, they're saying you could take a look at what the EEOC is doing, look at what HHS has, look at what the CFPB has. So we have existing regulation, and if AI is just a means to an end, then we have those mechanisms in place. But that said, I think the reason you have an AI Safety Institute being stood up is that you have this technology, and I'll reiterate this, that I don't think we fully understand how to make safe and reliable. Because remember, we are not talking just about chatbots, and even with the chatbots, there's work from Carnegie Mellon that showed how you could jailbreak the guardrails. It's called a suffix attack.

You have this gobbledygook string of characters you append after "tell me how to build a bomb" or something toxic like that, and you can actually break the guardrail. This is the work of Matt Fredrikson, Zico Kolter, and others here at Carnegie Mellon, and this is a systemic risk that has been shown to occur in every GPT that is out there. So I think there is a need for us to understand how to document this: there's measurement science that needs to evolve, there's evaluation science that needs to evolve, and we need to figure out the equivalent of what we have for financial statements, the generally accepted accounting principles that we assure financial statements against. What is the GAAP for AI, so you can actually say, I'm able to assure these models against those principles in these use cases? So I think there's work to be done. It's not to say that the existing regulations don't get us off to a good start, but there's work to be done.

>> Well, Kaye, go ahead, please. >> I want to just say I agree with that last part especially. We don't have the evidence and the knowledge right now to put something in place that's generally broad-based and very high level. We just don't have that. So when Krishnan said there's work to be done, and I was saying earlier that the research needs to come first, we need the evidence of what we're trying to fix.

I think that's important. I also really worry that this term AI is just being spread around like peanut butter. There are very specific aspects of what we mean by artificial intelligence in all these different areas that we talked about. It's not going to be a one size fits all possibility.

But one thing that we do need to do, and again, just echoing what Krishnan just said, is the communication about how this rolls out and about the protections that our citizenry will have from the government or whomever; that's really important. And I think that part of the design of what those policies will be will have to integrate what the private sector is seeing as well. It's not going to just be something that government does on its own. >> Chris, you asked whether it's too early. It's definitely not too early to be all hands on deck trying to understand the regulatory needs.

As you mentioned, there are some things you can do with existing structures, but there are a lot of things that also have to be updated. The technology, as we talked about earlier, is improving exponentially. A lot of new possibilities, and possibilities that we hadn't thought of before, are emerging.

And one thing I'm encouraged by: I've met with folks on Capitol Hill and at the White House and the agencies, and they are taking this very seriously. They are reading the research literature, unlike a couple of years ago, or five or 10 years ago, when they honestly weren't paying much attention. Now there's a real urgency around trying to understand it. Huge resources are being put in.

Microsoft is alleged to be putting $50 billion into new data centers and AI research. And now I guess they're acquiring OpenAI for next to nothing. So there are some real efforts to push the technology.

We don't have anything comparable yet, in terms of investment, on the economic, social, and societal side, on ethics and policy. We need it, because the effects of this are going to be very consequential for employment and the economy, for misinformation, politics, and democracy, and for national security, and the rest of us are going to have to hustle to start thinking about what the right policy and other implications are. It's not something we can put off and say, well, let's see how it plays out. >> I'd love to frame this next question specifically for you in light of what Erik just said. You mentioned that the federal R&D budget for AI might crack $2 billion soon. But frankly, that's a drop in the bucket compared to what even some of these startups are able to raise to work on this. One thing I've heard repeatedly from AI researchers and from people trying to work on AI governance is this:

Well, if I don't work at OpenAI or maybe two other companies, I don't have access. We seem to have this very strange situation right now where the folks who are thinking about this the hardest, it's not that they're not practitioners, but they may not even have access to the AI that's potentially the most transformative. I know that your specialty is the public sector, but do you think that places a special burden on these companies, that it in any way transforms their role in terms of what they need to be doing or thinking about now, including how they interact with regulators? I know Sam Altman has frequently said, my industry should be more regulated, I'm happy to help with that. What does that interface look like? What are the obligations of these companies, or do they not really have any other than their obligation to shareholders? And should we worry about that? >> There is always a continuum, from basic research and the early stage of discovery all the way to where the rubber hits the road, where commercialization exists.

>> I just want to put a fine point on the fact that now that continuum is not the usual continuum: all of the R&D is being done in the private sector. This isn't government-funded research that's then being transferred, like aerospace was. >> Well, some of it is government-funded research. If you look early on at where some of these ideas were generated, we see what's commercialized and we understand how that affects us every day.

But a good bit of this was done, early stage, in labs that were funded by the federal government. I don't have the data on me to say what that proportion was. But we've all seen the ways in which those things that we take for granted today started that way. The iPhone is the example everyone uses, and there's a lot of research that was funded back then. But where I was going with that is to say that there's a need for both. And I don't think it's an either-or.

I don't think it's something where there's a gap, the "valley of death" we used to talk about, where you throw it over from basic or fundamental research to development; I don't really think that we're in that position anymore. It has to be a collaborative process. It has to be a process where we're looking at what, yes, the federal government or possibly others can fund, but it has to be in concert with industry. It's not something where we're going to be throwing something over the transom; that won't work.

Your point there about not having access, there can be innovation centers. I know that NSF is working on Regional Innovation Engines and AI-focused innovation centers and things like that, where that gets much closer to where the action is in terms of working with companies. The TIP Directorate was stood up for this type of purpose as well. So there's a lot that we can see. Not just NSF but also the Department of Commerce.

I think Krishnan was talking about NIST. This is integration of what's happening between the public sector and the private sector. It's not the throwing over the transom that I see happening, especially in this case, to get to any reasonable solutions.

>> That's comforting. We have, I think, just about four minutes left. So I just want to do a quick lightning round, because we had a lot of audience questions about copyright, which we just didn't have time to get to at all. But we all know it's a mess; it's currently getting litigated in the courts.

I would love to hear everybody's pet solution, if you have one. LLMs and art-generating AIs and the rest: should they do some fractional attribution, should they not be allowed to train on copyrighted data, should they be paying out to the creators of these copyrighted works, or anything else you want to talk about related to solving the copyright problem? Erik, let's start with you. In about a minute, what's your prescription for this? >> Real quick, look, it's definitely a mess right now, but I'm cautiously optimistic.

I had a great conversation with Senator Warner a little while back, and there's legislation coming to clarify things, because part of it is just the confusion of people not knowing exactly what the rules are, and so that will help. Secondly, OpenAI a couple of weeks ago said that they would indemnify people who license their technology, which will give comfort to the people who are doing that. Thirdly, there's been remarkable progress in using curated data and synthetic data. So it may be that they can keep track of the provenance for the next models.

I don't know what's happening with GPT-5 and Gemini, etc., but it seems likely that the big companies are paying a lot of attention to this, and we'll be in a position to be more confident about it. And last but not least, you mentioned finding ways to attribute.

I'm working with another researcher on a Shapley value way of dividing up the benefits to folks, and I think there will be some possibilities to attribute and to share the benefits going forward. So the short answer is, it's a mess right now, but I see a lot of promising solutions on the relatively near horizon.
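As a rough illustration of the Shapley value idea Erik mentions, here is a toy sketch of splitting a benefit among contributors by their average marginal contribution. The contributor names and the value function are made-up placeholders, not anything from his actual research.

```python
# A toy Shapley-value attribution sketch; the quality numbers below are invented.
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Split value(all players) among players by their average marginal
    contribution over all coalitions (the standard Shapley formula)."""
    n = len(players)
    shares = {p: 0.0 for p in players}
    for p in players:
        others = [q for q in players if q != p]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                shares[p] += weight * (value(set(subset) | {p}) - value(set(subset)))
    return shares

# Hypothetical example: three creators whose works contribute, with some
# redundancy, to a model's overall quality score.
quality = {
    frozenset(): 0.0,
    frozenset({"A"}): 0.5, frozenset({"B"}): 0.4, frozenset({"C"}): 0.1,
    frozenset({"A", "B"}): 0.7, frozenset({"A", "C"}): 0.55,
    frozenset({"B", "C"}): 0.45, frozenset({"A", "B", "C"}): 0.75,
}
print(shapley_values(["A", "B", "C"], lambda s: quality[frozenset(s)]))
```

The shares always sum to the total value created, which is what makes Shapley-style attribution attractive for dividing benefits, though exact computation gets expensive as the number of contributors grows, so practical schemes would need approximations.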

>> Krishnan, go for it. >> In 30 seconds or less. So I think the thing that came up in my hearing at the Commerce Committee was the need to balance, on the one hand, the artists' right to advertise what it is that they've created, while at the same time gaining protection over their authorship, to ensure that it doesn't get scraped and end up as input to an LLM that can reproduce content in the style of that artist, be it audio, video, or image. There's some interesting work from the University of Chicago which actually allows you to produce content that to the human eye looks unchanged. You see it as art, but the pixels have been modified through adversarial training so that an LLM can't scrape it and then use it as input into training. So it strikes a balance by giving rights to authors. And I think, in addition to everything Erik said, this might also be a type of technology that is needed as we strike the right balance between the author's right to publicize their works and protection over their authorship. >> So apologies to everyone on the call. I think we're going to go over by a minute, but Kaye, I definitely want to hear from you.

>> It's really quick. I think Krishnan just nailed it, so I just want to put something on top of that that I think companies really care about as well, and that's cybersecurity. A lot of things that we didn't get a chance to talk about today relate to what we're focused on right now. But at the end of the day, cyber threats are another layer of the same type of issue where we need to find solutions. So I didn't answer the question, but I put another thing on the top of the heap that really needs to be addressed.

>> I agree with you. I think the biggest short term impacts from generative AI are going to be more and better cybercrime. We're definitely already seeing it in terms of more sophisticated social engineering and phishing attacks.

So Kaye, I think that's exactly the right place to wind up. So I'm actually not sure who's taking us out. I'm guessing it's me, but I want to thank all of our panelists. We could have had an extra hour and a half with each of you. I certainly took a lot of notes and I'm going to be following up with you individually.

I just appreciate that. If nothing else, this panel really emphasized the breadth of the impacts of AI: that it's not just a new generation of stochastic parrots, that there's tremendous potential, but that it comes with a great deal of responsibility to make sure it's used in a way that does more good than harm. Let's hope we can figure this technology out in advance, since we don't have the greatest track record with every other technology in history, but I believe in us.

>> The stakes are really high; it's a trillion-dollar question. >> Thank you, Christopher. Thank you, Kaye. >> Thank you, everyone. >> Thank you.
