The AI Revolution: A Deep Look into Innovation and Equity

Hello, I'm Corinne Peek-Asa, the Vice Chancellor for Research and Innovation at UC San Diego. Here at UC San Diego, experts are piloting innovations and technological advancements in the artificial intelligence revolution. These equitable, community-centered solutions are laying the groundwork for a better future. Public higher education institutions must be a driving force to ensure that ethical, people-centered technology is at the center of AI advancements. That is why the United States must invest in infrastructure that ensures universities are able to work alongside industry and government to develop the next generation of AI technology and advocate for responsible development, regulation, and investment.

Joining me today are Data Science and Philosophy Professor David Danks; Chief Health Information Officer for UC San Diego, Amy Sitapati; and the Dean of the UC San Diego School of Biological Sciences, Kit Pogliano. Thank you for being here today. I want to start by asking each of you to talk a little bit about, from your perspective, what is the key role that universities need to play? Why do we need to be at the table? David, I'll start with you. Thanks. I think one of the real keys with universities is that we are not bound by the profit motive.

We don't have shareholders that we should be responsive to. We have citizens and residents of the state of California and the US that we are supposed to be responsive to. That frees us to ask certain questions and to pursue certain lines of inquiry that might not ordinarily come up if you're in a corporate setting, and to do so in a way that is backed by an enormous amount of resources. On the one hand, we can pursue inquiry wherever it would take us.

On the other hand, we're able to do it with some support, unlike, say certain non-profits. I think balancing those two means that we can really be at the forefront of producing technology that provides public benefit. Great. Good, and Amy. Great. Well, I'm going to

take the health perspective and say, I think academic centers are really phenomenal at partnering with their university to bring all of those incredible ideas around ethical considerations, around cutting-edge technology, and actually get them to the bedside and see how our communities feel about artificial intelligence, how they are experiencing it, and really have a feedback loop that's very inclusive of the community and researchers, to advance health. I think we're in a unique state to be able to do that.

Great, Kit, your turn. I'll take the educational lens as the Dean of Biological Sciences. I think one of our roles is also to educate the next generation, our students and trainees, to use AI and machine learning in responsible ways, and also to make sure that we're deploying it in a way that accelerates research and discovery in our efforts. Again, to just emphasize previous points, in a really human-centered way that maximizes the benefits to society, not all of which are going to be profit. Thank you. AI success

is going to require so many different elements. We'll need to work together with government and industry. On your wish list of how universities can work with either government or industry, what are some of the things that you hope for? Kit, I'll let you start with this one. Thank you, it's a wonderful question. Among my wish list is that we can have more ready access to the computing power that we need to really build new models, leverage them, and deploy them.

And access for scientists in both academia and industry who can help expand and accelerate these discoveries. I'll go next. I think partnerships are incredibly helpful, and we want to create a laboratory where really it's the industry sponsor, perhaps, and an academic partner, trainees, community, all coming together and learning how to shorten the life cycle of this artificial intelligence, but do it in such a way that it's ethical, it's safe, it's reliable, and hopefully, also, it's affordable, so that the lessons that we learn can be shared across the country to help advance healthcare and advance other domains as well. But I think that if we don't engage a group to do this, then we may leave people out.

We really want to have a very inclusive new laboratory when we're doing this work. I think Kit talked about the need for compute and computational resources and these partnerships. I think the third piece in all of this is really the data. Nowadays, modern AI systems are constrained as much by lack of data or poor data as they are by lack of computational power. Universities, I think, are well placed to take advantage of broad classes of data. The problem is we don't necessarily have access to them.

We know how to protect data. We have an impartial role that we can play. But at the same time, we need to partner with folks, whether it's a medical center here on campus or a government entity, to be able to take advantage of those data, to be able to bring them to bear on the problems that we're trying to solve with AI systems.

Industry is already out the door with so many developments. Universities have the unique niche that you brought up: we contribute to the whole breadth, from building the computing infrastructure to the workforce, to the applications, and to the algorithms we need. Where do we need government to make sure those are in step? Where do we not need government, so that we still have a fast evolution of ideas? Well, maybe I'll just talk a little bit. I have the good fortune of using artificial intelligence in my practice. I'm a practicing physician. When I'm seeing my patients and taking care of them, artificial intelligence is actually in the room and part of the experience.

What I mean by that is it's integrated in a way to help me respond back to my patients. In that experience, what's happening is the patient might ask me a question: doctor, I'm concerned, I'm having chest pain, I'm not sure what to do. Well, what's amazing about the artificial intelligence is that, first of all, we can help get the nurse on the line to call that patient.

But if we decide, hey, the message that was sent to the clinician doesn't really represent an urgent scenario, maybe there was a fall and a rib fracture, and there are other things, then my patient can actually get a lot of additional information that they wouldn't have otherwise ever gotten: rib fractures can take some time to heal. It might be painful when you take a breath.

It's very important to keep breathing. All of this great detail really means my patient is better helped at the end of all of this. I just think it's incredible that today as universities, we have the opportunity to define what that looks like. The issue is that we need the universities to help refine the data, the algorithms, the prompts so the right information gets back to people.

Because sometimes, if we haven't done all that work, it looks like a misfit. Maybe the prompt and the response have nothing to do with what the patient has asked; it's not accurate, it's not reliable. It actually erodes the trust of the clinician in the artificial intelligence, and of the patient in their health care delivery team, so there are all of these things. Universities occupy a really unique space in trying to refine the rigor of the artificial intelligence that feeds back to all systems and all people involved. Historically, when we think about issues of trust, regulation has played a pretty big role.

In terms of governments assuring that a technology is doing what it's supposed to do. That's been a real challenge with AI because AI is changing so fast and it's being used across so many different sectors. Healthcare is a great example: what are the rules? We all know about HIPAA as a data protection rule.

But the healthcare AI rules are not always as clear. I think one of the myths here in the United States is that we aren't regulating AI, it's just the Wild West. That's actually not true. Here in the US, we're doing quite a lot of AI regulation, but it's regulation of the outcomes that might be generated by AI. It's asking questions like, is AI making it easier for a violation of HIPAA to occur? Is AI making it easier for a company to discriminate in hiring? I think one of the roles the government has is helping to ensure not so much that the technology develops in one way or another, not so much that we constrain innovation.

But rather that we ensure that the innovation that does occur is actually benefiting people, is actually working to the benefit of the public. That's, I think, a really valuable role for government. It's the way that we've thought about it historically, for other technologies. Now we just need to update it to the AI present.

Thank you. Those are great answers. One other role for government in this regulation is enabling the right experiments, so we can get the robust data generated to build the right models.

This brings together interdisciplinary teams that would include biologists or doctors, as well as computational folks. We need to bring these interdisciplinary teams together. That's a new thing for research support, but the public benefits and the potential payoffs are huge. If you can bring together the right teams, you can go from the foundational discoveries to the clinic much more rapidly in a lot of the work. You had brought up workforce. Yes.

Let's talk a little bit about workforce and the partners we're going to need to develop the AI workforce of the future. As the dean of the School of Biological Sciences, what are you doing to prepare that workforce? We're integrating AI and machine learning into our data analytics classes, teaching students how to use the tools appropriately in their work.

We're also using it behind the scenes in other ways in our research efforts. One of the things we're looking forward to doing is building something that's very much like what Amy talked about: a mixing place where researchers, students, computer scientists, and doctors or biologists can come together to take the data and find the best way to leverage it in building new models, accelerating discovery and translational impacts. We need new frameworks for bringing interdisciplinary teams together and for instilling interdisciplinarity in our students, so that a biologist and future doctor can talk to a computer scientist, because we need deep expertise in all of these domains in order to really maximize the benefits. That's a challenge when we think about how many processes are going to be impacted, hopefully in positive ways, in how things are done.

About health care in particular, how do we prepare the health care workforce, which already has so much to learn just to take care of a patient, to also be AI savvy? That's a great question. Some parts of artificial intelligence are fancy forms of analytics. What I mean by that is we've been using data to power health for decades, and we look at outcomes, and then we adjust.

Then we had quality improvement teams that stand up new processes and new outcomes, and try again. Then we discovered that we can figure out, with these five variables, who's at risk for a heart attack, and put them on aspirin ahead of time. We've been using risk models for years. Now, what we need is to cultivate a new understanding as it relates to artificial intelligence, machine learning models, and other things.

There's a new language to learn. There is a new, how do I apply this to my work? We have to bring everybody along. That means nursing, pharmacy, physicians, social workers; everyone will be using these tools. How do they ask the questions to work with that data and have the understanding that this is giving me a prompt, but it doesn't feel quite right? I'm not sure if this algorithm is performing how I thought it would. That's good to know that it's saying that, but maybe I won't listen.

Let me give an example: when I'm on the inpatient ward, there is a risk profile in the schedule. It's artificial intelligence, and it says, here's the patient's risk of becoming suddenly quite ill today, with a score from zero to 100. When I come into work and I have a long list of patients, I have to decide: who am I going to go see first? I know these people on my list are very sick, but a surprise, number three, came up high on the list, and I didn't expect that. Then I can dig into that patient: what happened overnight? Go talk to the nurse. Are you noticing anything different about that patient?

I will get a sense of whether the artificial intelligence is giving me knowledge that I wouldn't have seen, or whether it's a mismatch. Maybe it fell into an acuity category, but it's just something to keep an eye on, and there's nothing to act on right away. In this way, it helps us prioritize our list, but what it means is it probably won't be perfect. The whole team needs to have an awareness of how to work with the models.

Excellent. I will add that we already have a working example of this in an AI tool that helps identify early sepsis, which was developed at UC San Diego Health. In our conversation, access and equity impact every element of our economic success and our leadership as an AI country. Talk about how we make sure we are transforming our educational process in a way that is equitable for all of our students to participate. One big key there is that education with AI is in large part about how we learn to use these tools.

As both Amy and Kit were saying, and recognizing that our students are coming to the classroom with different backgrounds in using these tools. Some of our students already walk in having used generative AI systems for two years now, even before they were widely publicly available. How do we make sure that we are bringing everybody up to the same level? How do we make sure it's not only the wealthy students who have access to these tools, because they do cost money to use?

At a deeper level, it's about recognizing that we, the professors, don't necessarily know what problems our students are facing. We have to have a certain humility about meeting them where they are, with the challenges that they face, and that is a much harder challenge.

As we talk about equity, we also have to think about how we can work with industry. Many of our students are going to work in industry, be it the health care industry, the computer industry, or any type of small business that will be transformed. How can we partner with industry to transform our education to be more responsive to the needs of our students when they get into the workforce? The biggest problem is that industry doesn't know what their needs are, either.

Right now, we're chasing a moving target. I spend half of my time teaching and working in the Halıcıoğlu Data Science Institute here at UCSD. The role of a data scientist in companies is changing literally day by day. We're doing what we can to prepare our students, but the job is going to look different in two years than it does right now. What that's leading us to do from an educational perspective is, in many ways, focus back on the basics.

Focus on issues of critical thinking, understanding the needs of your stakeholders, the people that are leading you to engage in this effort in the first place, understanding the risks of what you're doing, and the possibilities that have gone unexplored in the past. The students sometimes want to just know, how do I program this piece of code? But I think in the long run, to the extent that we can teach them how to think critically and responsibly about the problems that they're going to be tackling, we're preparing them better for the rapidly changing needs.

Thank you, and Kit, as an academic leader, you think about how industry can partner with us so students can learn hands-on in their experience. Tell us about that. Absolutely, we have partnerships in the School of Biological Sciences and at UC San Diego with Thermo Fisher and other instrument providers that are bringing cutting-edge technologies into the classroom and our labs, making them more accessible to more people. This is part of our effort to democratize advanced instrumentation, and that also applies to machine learning and artificial intelligence. We want everyone to have access to these tools and a fundamental understanding of how to deploy them in responsible and productive ways. One thing that we are doing in the school to level the playing field for all of our students is instituting a new data analytics and experimental design class that includes computer programming, machine learning, and AI experiments, to give all of our incoming students a foundation in these key quantitative approaches.

It is clear that in the life sciences space, we need more and more people who have a working knowledge of machine learning, AI, and other computational approaches, and who can marry that with the biological sciences and work in teams. Let's get back to the theme of equity and access. We are very proud of being an innovation-focused campus, and we talk a lot about equity and access at the individual level. We also need to think about how this is going to transform every business.

How can the university help be part of our innovation ecosystem to make sure we are a powerful positive force in that evolution? Well, I'll start. I think students are amazing as glue here. Not only does it give industry an opportunity to work with someone who has advanced, cutting-edge skills and has just been trained, but the student also learns from them: what are the problems they're trying to solve, what are the constraints, how do you deliver a project in industry on time? I think there are opportunities for more smaller-scale industry-student projects that come together, and if we just think about our local ecosystem in San Diego, who are some of the companies that we might partner with, that maybe we haven't historically worked with, that could serve this role, that are seeking advancement in data science to solve some of our communities' biggest problems? That's where we should tuck in and put the solutions in. I think that would be great.

I think one of the things that has helped to level the playing field between the large and the small companies in the field of AI has been the creation of open-source software: tools that can be used freely by anybody, as long as they have enough compute power and the right data. So one of the things we can certainly do as a university is to continue to build innovative open-source software, put it out into the world, and help to train and teach small and medium-sized companies, organizations, and non-profits as well, how to use these tools. The tricky part with that is that we don't necessarily know what tools those organizations need. We need to be willing to get off of this amazing, beautiful campus and go out to the parts of San Diego County, and the parts of Mexico right across the border, where there are organizations wrestling with problems that might not ever occur to us, to be frank, up here on this campus, and that requires us to move out of our comfort zones.

We can't expect these organizations to come to us because they're busy trying to survive and benefit the people in their communities. We really need to be reaching out and giving back in a deep way, by respecting and listening to what it is that they need rather than assuming we know the solutions. I think it's clear from our conversation thus far that as academics, we really see a lot of promise and opportunity in technological advancement, computing power, AI, and applications, but we also know there is maybe a less good side to AI as a tool. What are some of the things you're worried about that we have to be attentive to, some of the missteps that we might take, and some of the risks that are out there? I guess the thing that keeps me up at night as biological sciences dean is whether or not we will have enough computing power, and this is probably not exactly what you're asking about.

But do we have the infrastructure to really maximize access to these tools? Do we have the compute power available to everyone to build the models, based on the great data, that can then be deployed to maximize discovery and have positive impacts on society? Do we have the energy to sustain that? Can we run it locally, or how are we going to handle all of this? I love that this brings us back to where universities can play a role, which is that we need the workforce that is working on the materials, the infrastructure, and the hardware for computing. How do we do it in an energy-sustainable way? Then how do we translate it? So much comes back to universities and workforce. You mentioned the difference between large and small companies. I also worry about the difference between new faculty and established faculty who have much bigger grant portfolios.

Who can afford to pay to do the research? That's a huge question. As you said, there are many reasons to worry. I'll just quickly try and say three that keep me up at night.

The first is people building and deploying technology without understanding the real needs of the people that they're hoping to benefit. Many of the harms that have resulted from AI technologies are not because somebody set out maliciously to hurt people. They were roads paved with the best of intentions, but it was done out of ignorance rather than any sort of hatred, and so I worry about the ignorance, sometimes, quite frankly, verging on arrogance, of the people building these systems. The belief that we know what's best, and I say 'we' as somebody who builds AI systems.

The second thing I worry a lot about is the divides that are arising within our region and within our world: between those who have access to the compute and those who don't, those who have access to the data and those who don't, those who have access to the Internet and those who don't. We build technologies that run on the smartphones that are in the pockets of probably so many of the people watching this, and there are millions of people in the world who don't have cell phones, don't have smartphones. What are we doing for them? How is AI benefiting them? How are we engaging with countries and communities around the world? The third worry I have is the ways in which AI gets used to, to use a phrase, normalize the abnormal. Educational technology is becoming very widespread in many classrooms. I have a friend who helps to build those technologies and was on a sales call, and the principal of the school district said, this is great. If we use your technology, it's okay that we have 50 students per elementary school teacher, and that's normalizing the abnormal.

It's not normal to have 50 students per teacher in an elementary school, and yet the AI is viewed as a way to make it okay, and I think that we're seeing this. We're seeing that there are inequities in health care access, and it becomes, well, we can just deploy an AI system to do the diagnosis without realizing the deep need to have a human doctor that people can trust and talk to, and we just see it across sectors that AI is enlarging divides and normalizing those divides, I think in really problematic ways. I'd like to pick up on some of those themes and just echo.

What keeps me up at night is thinking about hospitals across our country in a scenario where a third of the hospitals have access to Cloud computing and artificial intelligence and two-thirds don't, and that designated public hospitals across the country will be in a scenario where certain people just won't have access to the best care possible. The systems that need to monitor and help give all those prompts to the nurses and doctors won't be present, so we need to think really carefully about the cost, but also about the common good, so that everyone can have access. Once we trust a system, feel it's reliable and improving care, how do we more ubiquitously enable everyone to have that? I think it's really important, or we're just propagating some of the structural barriers that have been here thus far. That's the first thing, and then the second thing is, really, in this equity lane, thinking about, again, how do we train this? Who's the cohort it got trained on? Is it the same cohort as the group that we're applying it to? Then what impact is it going to have on the outcome? Let me give you an example. What if we have a model that helps to identify people who need a ride to the hospital, and we think, wow, that's great. This artificial intelligence is going to help us get more rides for people who need it.

But what if we didn't look carefully, and we didn't recognize that we're actually preferentially excluding a group of persons that has a differential in access and outcome? Now we're actually propagating a disparity. We thought we would adopt the model, the model would work great, but one of the features in there is actually tied to a propagation of a disparity. We have to be so careful, even when we have the best intent. This DEI lens requires a constant look so that you're making sure you're helping to support and reduce inequities where feasible, and then the other part is really the drift over time. We all get attached to this: we pull out our phone, we expect an app to work. Everything's fine, and then one day, it doesn't. What I worry about is that I have 100 AI tools and I show up to work, and I didn't realize that three are broken, and so the care I was expecting to give, the algorithms I thought I was following, they weren't working, and I actually didn't know it for a quarter, for months.

How do we make systems that have enough reliability, enough systems to know it's down, it's off track, turn it off, double check it? And that costs time, energy, training. I don't know. I'm curious how we're going to do that. I hear that we really need to build learning systems, and that's the only way that things will work. One of the concerns that I think is out there with AI is that AI will replace jobs that people need to support their families.

How big a concern is that? How can we help shape the development of AI so that that doesn't happen? The studies that have been done so far, I think, mostly point in the same direction, which is that AI doesn't replace jobs; AI replaces tasks within jobs. What you are likely to see is a consolidation of certain job roles. Companies may not need as many people, but it won't be that they can, say, fire an entire section or an entire department. The challenge, I think, is that we've seen this happen before, and there's a complacency that I worry is setting in that says, well, sure, technology has economic disruptions.

The car put horseshoe manufacturers out of business, but that's okay, people found new jobs. It is important that government provide support, retraining, and ways of smoothing the transitions, because transitions, even if globally beneficial, are harmful to some individuals, of course. The problem is that AI is moving so much faster. The car didn't put all of the horseshoe manufacturers out of business in six months. But we're seeing, especially with generative AI, advertising companies that are releasing 50-plus percent of their employees. We're seeing this even in technology, where people who write code are being released from their jobs at a disproportionate rate.

I think the acceleration that we're seeing of the economic disruptions is of a different scale than what we've had before. I think it's even more important, I would argue, that governments step in, that universities be thinking ahead, not to how we train somebody for the job that exists now, but to how we help people have the skills that will enable them to have the jobs two years from now, four years from now, because this disruption is going to happen much faster than our historical examples would suggest. I will say demography might be on our side in this a little bit, in that with our aging workforce and with our emerging workers, we may be able to have them working less but smarter, and not have to worry so much about the changing profile of how many people are working and how many people are retired. There are definitely some opportunities.

We have talked about how AI is going to transform every workplace. How do you see your work changing? Amy, you've talked a little bit about this; are there other ways? And Kit and David, how do you see your work changing? In biological sciences, this will have a huge impact on the research mission, and it already is, in terms of allowing us to analyze our data in more comprehensive ways. It's even being used by instrument manufacturers to allow instruments to collect better data faster. That's having a transformative change.

It's allowing us to link different data sets and experimental sets together, and we can stitch it all the way to the clinic. Looking forward, I see a much more rapid movement from foundational discoveries about how cells work to the clinic and to better clinical outcomes, new therapeutics, new treatments, and better diagnostics. That will have a huge impact. Again, I love the generative AI for tasks that I used to do at night. It's making my day, as you pointed out, more efficient.

Instead of working such long hours, artificial intelligence is helping to reduce some of the burden of work in the generative AI space, and that's not just for me; all my physician colleagues have the ability to really use GenAI in this space. That's going to be phenomenal. The work is different. I don't come from a background of large language models and artificial intelligence historically, and yet I've dug in and learned more about that. What it means is, there are a number of projects in any given week and any given month that are embracing this change, this transformation, and what artificial intelligence can be. It's engaging medical students, residents, computer scientists, and multidisciplinary teams in ways we haven't historically done.

What I mean by that as well is that the health system and the health side of the work is reaching more tentacles out across the campus, to bring more people into the room, to bring together different datasets that didn't exist together before. That is a new science at scale that I don't think we embarked on in the past. Kit and Amy talked about the research and practice side; I guess I'll talk about the education side and how AI is changing that.

I spend half my time teaching data science students and half my time teaching philosophy students. What's shared across them is the realization that AI can be an important tool in learning and doing data science and philosophy. The challenge I think we have right now is that, actually, as educators, we don't know how to teach with it.

I'm reminded of, I'm old enough to remember when word processors were this scary new technology that made its way into schools, and I had teachers who wouldn't allow me to use a word processor. I had to write it out by hand, because otherwise I might revise and not really think about what I was saying; it let me go too fast. Of course, now we don't have those worries.

We want our students to use word processors. We know how to teach with them. We know how to teach with graphing calculators in mathematics classes in high schools. What we're trying to figure out in real time here is how we teach with these generative AI technologies. How do we make it so that the students are actually accelerating the learning we care about, by offloading things like the simpler parts of writing, but still retaining understanding, retaining the growth in their conceptual knowledge?

That's tricky because we know we're making missteps, but I think there's a spirit of innovation and experimentation that is really widespread here on campus, and we're going to get there sooner rather than later. I have a question related to that, one that dives into some researchy, nitpicky things, but I think it's really analogous to any use of AI to generate new information. In science, AI is becoming really good at developing figures and creating text, and that can lead to both falsification of data but also easier algorithms to detect falsification of data. How do we help create learning systems and structures that have the right checks and balances? One part of it is, of course, building things into code to make it harder to do the problematic things. But as you note, many of the things that we want to do could also be used for problematic cases. The technology alone can't be the solution.

I think the real solution is actually to go to a different form of governance that we haven't discussed, which is social norms: having an understanding of, we as a community just don't do this. I think we often underplay the importance of social norms when we're thinking about the ways the technologies get used. But we've seen it, for example, in biological sciences, which you mentioned, with the immediate response to the CRISPR technology.

What are the things we just will not use this for? Those norms can be very powerful, but they have to be built out through teaching, through projects, through working with students, and through being explicit. I think the real key is to not ignore the social and human piece of it, but rather lean in on that as the best way to have more ethical and responsible use. Which is exciting, and it brings in our philosophers, but also our social scientists, our policy experts, again, coming back to the interdisciplinary campus.

Other ideas about the checks and balances? I think David's suggestions were great. I really think normalizing the discussion of it, discussing what it can be used for, what it can't be used for, and diving in when you find it makes mistakes, and emphasizing as instructors, as leaders, that it is not okay to use it in these certain manners. We really need to have well-articulated norms. Yes, it's fine to do this, or if you use it, you need to disclose it and you need to verify the references, or whatever it is that it generated, to make sure that it doesn't contain made-up things.

It is still your responsibility as a human being to check what AI is spitting out. We can't normalize just accepting it at face value, no matter what that result is. Now, let's talk for just a minute about data in, data out, because that, of course, is a big concern that everyone has brought up. In health specifically, as we think about the biobanks and the genome sequence banks, we have tended to gather those from, as you described, cohorts that have healthcare access and access to the academic health centers that are running these studies. We can run algorithms on them again and again that will never translate to patients who haven't had that access to resources. How do we, in our roles, help make sure that the data in is what we need to get the right answers? Maybe I'll just take two parts of this question, and it's not all parts of the question.

The first thing I'll talk about is just data cleanliness. It's hard to know you got a good result when the base data is not trustworthy. I think we're taking a leap when we develop machine learning models and these algorithms.

If we don't trust the underlying data, then how can we build a model on that data that is trustworthy? We build better models when we groom our data first as part of that training process and understand how all the data pieces interact. That is quite a job. That is quite an opportunity for academia.

But I think that good little data can spark lots of great big data in the AI space. Data governance and curation are really important. That's one part; that's just building models overall. But on the other side, the inclusion piece is a great question, and I don't pretend to have all the answers to that. I'd ask, how are we building further opportunities for communities that have been left behind in the journey of all of this: technology, education, artificial intelligence, broadband? Where do we need to create new financial assets for communities, who then decide how they want to participate? That way we enable the country to better serve those persons, for instance.

But it's from their perspective outward, rather than us inviting them in. I think we can do better. I think it's absolutely critical, as you were saying, that we avoid the data harvesting that has occurred too often, where people will go into a community, harvest up the data, and then say thanks and leave, and the community is left with no benefit. Nothing is changed or improved for them.

Whether that's through models of data ownership by communities: there's a whole movement around issues of indigenous data governance, especially around languages where companies would love to just suck up all the data about, say, the Maori language in New Zealand. Instead, those communities have stepped up and said, we own these data, we are the caretakers and the curators of our language. That has led to those communities benefiting.

I think we as academics have a special responsibility not to impede those efforts, but in fact to accelerate them in various ways. I think the other big thing is to recognize that, no matter how much we try to do to be more inclusive in our data and in our models, inevitably we've also got to make sure to monitor. We've got to know when they're going wrong.

We can't assume that we've done enough, wash our hands, and just deploy the model and forget. As you said, everything drifts over time. I think a key complement to improving the quality of the data is to improve the quality of the monitoring we're doing on the back end, to make sure that we really know how well our models are performing. I hear underlying all of those comments that we as academics need to keep these questions and these challenges in the conversation, and that we need to make sure we're advocating for these discussions to happen. I want to make sure that each of you has an opportunity to highlight something maybe we didn't get to, or a theme that you want to develop a little bit more. Amy, let me turn to you and give you free rein over the floor.

Well, in the health space, especially, government investment is just critical across the board, and this comes in many different areas. This includes NIH, the basic research that we do. But it's beyond just NIH research. What about the trainees? Who are the workforce that are going to power the new health? Who are the data scientists that we're going to develop? We don't have robust funding lines at the moment to build that workforce of the future. Of course, there are other partners, FDA, NSF, and others.

We're seeing some pipelines come open. But not at the speed and the breadth that we need to really grow the field in the way that we need to for the future. Yeah, that's what I would like to see.

Wonderful, and Kit. Academic institutions have a really special role to play in this, not only in training the next generation of interdisciplinary scholars that we need, but also in applying machine learning and AI technologies across the broad spectrum of disciplines and ensuring that we have a positive impact on human society, on health, you name it. What we need to realize our abilities here is indeed more access to computing technologies, more funding for dramatically interdisciplinary training and research, as well as just access to the raw compute power and the computational infrastructure we need.

I think when we think about the ways that AI is transforming our lives, it's really tempting to think about how we get the good AI and not the bad AI. There's an enormous emphasis these days on trustworthy AI or responsible AI. I think one of the places where we as academics have a really special role to play is in getting the conversation to shift to: there's just AI that is helpful and done well.

Or you can do AI badly, and that results in harming people. If you build an AI system that disempowers people, that perpetuates existing inequities, it's not that, oops, I messed up. It's that you're bad at your job. You're bad at what you've done. I think that we as academics can lead the way on this change in the ways that we think about AI. But it requires us to be resourced, to have the time and the connections to really be at the cutting edge of what's happening in the creation of AI systems, to show that there are better ways of doing it than perhaps we've done in the past.

I just hope that the ways in which government is starting to regulate, the ways that governments fund, the ways that private foundations fund, and the ways that companies are crowding out some of these projects don't leave us as academics looking in and saying, but we've got something to contribute, and we're locked out of the house. I think we have to recognize that we need to be positive advocates for the real role that we can play in bringing about this future where AI benefits everybody rather than just a few. Absolutely. Thank you so much for all of your expertise, all of your insights.

As I think about some of the themes, it's clear that AI is ours to develop as we so choose, and that doing that successfully is going to require partnerships first and foremost. We need industry, academics, students, leaders, and government to contribute to those decisions as they move forward. We need to make sure we are building the path forward in a way that we're constantly learning, that we have checks and balances built in, and that we have investment built in.

Government investment and industry investment are both spurring growth and acceleration, while at the same time shaping that acceleration onto a path for good. We have a lot ahead of us. But I think that the energy is really good around being positive contributors.

2024-08-17
