The Impact of Responsible AI: Insights from David Ellison and Lenovo | Intel


Hello, everyone, and welcome to Intel on AI. I'm Ryan Carson from Intel, and I'm your host. I'm so excited today to welcome David Ellison to the show. He's the Chief Data Scientist for Infrastructure Solutions Group at Lenovo.

Through Lenovo's U.S. and European AI Discover Centers, he leads a team that uses cutting-edge AI techniques to deliver solutions to external customers, while internally supporting the overall AI strategy for the Worldwide Infrastructure Solutions Group. Before joining Lenovo, he ran an international scientific analysis and equipment company and worked as a data scientist for the U.S. Postal Service. Before that, he received a PhD in biomedical engineering from Johns Hopkins. He has numerous publications in top-tier journals, including two in the Proceedings of the National Academy of Sciences.

Welcome to the show, David. How are you doing? I'm doing great and I'm happy to be here. Good to have you here. So let's get started and talk about where the rubber meets the road.

How are you actually implementing AI at Lenovo? Okay, well, this has a little bit of backstory to it, since before I made some changes, I was making all the decisions about AI and responsible AI as the chief data scientist, and I figured that was problematic for many reasons. So a few years ago we established the Responsible AI Committee, a group of about 20 to 30 people that gets together and makes all the decisions, for internal projects and external projects, about what is fair and what is just. And we established six principles that we're going to go through in this podcast, I'm sure, principles covering things like privacy and security, diversity and inclusion, accountability, and environmental and social impact. And across all these principles, we make sure this is a diverse group of people, right? Not just a bunch of data scientists. All right.

So there's nothing wrong with data scientists, but it's good that you've got a representative group. So, let's go on talking a little bit about AI systems prioritizing privacy. How can we ensure that AI systems do prioritize privacy and security, especially when dealing with sensitive personal data? I think everyone's tuned in to the fact that we need to think a lot about this right now. There are a number of principles here. The first one is data minimization.

There's this tendency to just collect as much data as you can and figure out how to use it later. And that's not a good way of going about it. That's just collecting a lot of personal information that you may or may not use.

So if you do have to collect personal data, you know, anonymization and pseudonymization are also important. You want to hide the data that has personal information in it, anonymize it. There are many techniques for that.

What's the practical way that's actually being done? So the practical way that's being done is there are certain open source programs out there that actually do it. Differential privacy is a mechanism for doing this, too; it's actually used by the Census Bureau.

If you ever need to get census data, it's used in that. It's like introducing a little bit of noise into the data, and that helps protect privacy. Got it. Awesome.
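To make that concrete, here is a minimal sketch of the two techniques David mentions: pseudonymizing an identifier with a salted hash, and publishing a count with Laplace noise in the style of differential privacy. This is not the Census Bureau's actual implementation; the column names, salt handling, and epsilon value are illustrative assumptions.

```python
import hashlib
import numpy as np

def pseudonymize(user_id: str, salt: str) -> str:
    """Replace a direct identifier with a salted hash so records stay linkable
    without exposing the original ID (pseudonymization, not full anonymization)."""
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

def laplace_count(true_count: int, epsilon: float = 1.0) -> float:
    """Differential-privacy-style noisy count: Laplace noise scaled by 1/epsilon
    bounds how much any single person's presence can change the published number."""
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative usage; in practice the salt is stored separately and kept secret.
record = {"user_id": pseudonymize("jane.doe@example.com", salt="s3cret"), "age": 42}
published = laplace_count(true_count=1280, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the right value is a policy decision, not just a technical one.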

So transparency is obviously a big subject here, right? Folks want to understand how things work and how AI is making decisions. So what role does transparency play in building trust between AI systems and their users? And can you provide examples of how transparency can be achieved in AI decision-making processes? Sure. Transparency: you want to know that you're interacting with an AI system and what data is being used in that interaction. So, you know, model documentation is a big part of this one. Now, I'm not saying that everyone reads the ML documentation, but at least having it written out there is important.

Yep. Having user-friendly interfaces that tell you you're interacting with an AI system. You know, it'll get more and more frustrating to be interacting with systems that you think are human, where you think it's a person you're interacting with and you're just talking to a bot. Right. So we want to avoid that. So I assume that we're going to quickly move into a world where most of the systems you interact with will be driven by AI, whether it's machine learning or deep learning.

So how do you think, practically, that's going to roll out in the future? I think it's going to be just like the acceptance of cookies became a big thing on the web, right, where you have to accept cookies: you're going to have to accept the fact that you're dealing with, you know, a chatbot, and you have to be informed of it. You know, this is important, especially in European law. America hasn't taken this to the forefront as much, but in European law it's very important to represent that you are interacting with an automated system. Right.
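As a rough illustration of the model documentation and up-front disclosure David describes, a lightweight model card can be as simple as a structured record that ships with the system and is surfaced to users. The fields below are illustrative assumptions, not Lenovo's or any official template; published formats such as model cards on Hugging Face carry many more fields.

```python
# A minimal, illustrative "model card" record for an AI-driven assistant.
model_card = {
    "name": "customer-support-chatbot",
    "model_type": "fine-tuned small language model",
    "intended_use": "Answer order-status and warranty questions.",
    "out_of_scope": ["medical, legal, or financial advice"],
    "training_data": "Anonymized support transcripts, 2022-2023.",
    "known_limitations": ["May hallucinate policy details; escalate to a human agent."],
    "user_disclosure": "You are chatting with an automated assistant.",
    "contact": "responsible-ai@example.com",  # hypothetical address
}

def greeting(card: dict) -> str:
    # Surface the disclosure up front so users know they are talking to a bot.
    return f"{card['user_disclosure']} How can I help you today?"

print(greeting(model_card))
```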

So to me, it feels like we'll have this choice in the future where we can have much better results, much better interactions, more valuable responses as we give more and more of our data to an AI system. But we'd want to understand, you know, how much we're doing that. Almost like a dial, and some transparency: okay, how much of my data does this system have? Are we already doing this, or is this something

that a lot of companies are going to have to figure out? I think a lot of companies will have to figure this out. You know, we have all these new chatbots that just burst onto the scene; we are just seeing these really advanced chatbots come out, and I think we need to learn how to use them, and regulate how we use them. I mean, we have to come to a consensus as a community on what is fair and what is the right way to be using these things, versus just allowing every company to decide whatever it feels like.

Yeah, absolutely. You know, we're all using ChatGPT now. It's got this feature of memories, as we all know, and it's fascinating to see the user experience when it says "memory updated." Right. And you start to get this sense of, oh, okay, it's starting to remember pieces of me, and then you can go and delete certain memories. So I wonder if we'll start seeing that user experience paradigm rolled out further, or is it going to be even more complex than that? Do you think we'll see something like that, or will it be a different user experience? I think we're going to see something like that.

There is, in European law, a right to be forgotten, and this gets back into the privacy and security concerns that we started talking about. You know, you have a right to know what data is being used and to control what data is being used in your automated system. Absolutely.

So it kind of leads to the next question, which is about explainability. How do we balance the need for explainability in AI decision making with the complexity of modern data sets? Yeah. I mean, there are two things: the complexity of the modern data set and the complexity of the models. There's this trade-off, you know: do you want more complex models, or do you want more interpretable models? If you're using linear regression and decision trees, you can figure out what's going on pretty easily.

When you start using neural networks, it becomes much, much harder. Right. And, you know, it depends on the use case. If you're doing something like ad targeting, you know, like "show me cat videos,"

maybe you don't care that you're never told exactly why that decision was arrived at. But on the other hand, if there's a medical diagnosis of, hey, we think you have cancer, you really want to know why it thinks you have cancer, right? That becomes extremely important very fast. So we've seen some of the labs release this idea of feature control, where you can start to understand which features are being activated in the deep learning neural network. Do you think that's leading us somewhere helpful? Or how do you see us actually implementing explainability in neural networks in the future? Well, a shout-out here: we're local to the Raleigh area, and we have a professor here, Cynthia Rudin, who has produced some interesting work in this area. One of her papers, "This Looks Like That," takes AI models and, like, you get a picture of a bird and it says, well, this wing looks like this bird's wing.

So we think that this is this bird, because this wing and this beak look like this. Right. So it's a very detailed explainability paradigm, and I think we're going to see more and more of those developed. Got it. It just takes a little bit of time for the field to catch up, because right now we're just making bigger and bigger models. It's going to be more and more important for us to make explainable models.

Right. Like you said, if you go to the doctor and you get an MRI and it gives you a scary result, you want to understand: okay, well, tell me a little bit more about that.
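To illustrate the trade-off discussed above between interpretable and complex models, here is a small scikit-learn sketch on synthetic data (the feature names are purely illustrative) showing why the reasons behind a linear model's or a shallow tree's prediction can be read off directly, in contrast to a neural network whose weights don't map to human-readable explanations.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in data; in practice these would be real, documented features.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["age", "income", "tenure", "prior_claims"]  # illustrative names

# A linear model: each coefficient is a direct, signed statement about a feature.
linear = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(feature_names, linear.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# A shallow tree: the decision path itself is the explanation.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

Prototype-based approaches like the "This Looks Like That" paper aim to bring a similar kind of readable justification to deep networks.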

Especially since I assume we all know now that a lot of the scanning, explaining, and diagnosing is going to be happening through models in the future. It will be more accurate, but we also want to understand why, because it's not a human talking to us, on the other hand. So let's keep moving on to talking about diversity and inclusion.

So what are some of the strategies for promoting diversity and inclusion in AI development to ensure that these systems better reflect the needs of a massively diverse user group? Sure. First is assessing the current state of your team. I mean, having a diverse team helps you build diverse products.

So, you know, make sure you analyze your team's demographics, backgrounds, and skill sets and see where you stand. And if you do have deficiencies in that area, recruitment, onboarding, and development can play a whole role. And then when you do have a system that is working, get diverse perspectives involved in user experience testing. I mean, you have people with all different technology backgrounds, all different backgrounds, who have to use the technology, right? And, you know, it's frankly unfair to have it go after just one demographic so that it works for just a small percentage of the people out there. It needs to work for everybody. Absolutely. One of the things that we did at a previous edtech company that I ran is that we created apprenticeships, where we would actually go and source talent from groups that didn't look like us and train them up. And I can see a lot more companies doing that in the future, specifically with AI and machine learning.

There's a lot of opportunity there. It's really good to hear that Lenovo, Intel, and others are thinking actively about this and how important it is. So, all right, moving on to the next question. How can data scientists effectively mitigate potential environmental and social impacts associated with AI system deployment? Sure. There are lots of things you can do, starting with sustainable models.

You know, right now there's a race to make the biggest model possible, and some of these models use as much energy as New York City uses in a day, you know, just to train. It's crazy how much energy these things are using.

So using domain-specific AI models over ever-growing foundation models really decreases the energy utilization, so you can get a much more optimized output. You can also do technical things like energy-efficient algorithms, model compression, and sparse computation, all of which allow you to use energy more effectively. And, you know, depending on how much energy costs, if you're in Germany versus Texas,

that matters quite a bit to the overall bill that comes when you start dealing with AI. Absolutely. Yeah. And then you also have the social side of things.

And I think that involves involving all your stakeholders, getting feedback from them, getting them reviewing the process and how AI is affecting everybody in the company and outside the company. It's been interesting to see the excitement around small language models and how we've actually started to see that you can have a really effective model running on the edge, maybe with 2 billion parameters, that does a very good job at a specific task. So it's exciting to see that innovation on the edge. And obviously that's one of the reasons we're excited about partnering with Lenovo. We've got a lot of compute now on the edge that can run these SLMs very effectively.
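As one concrete example of the model compression mentioned above for cutting energy use and fitting models onto edge hardware, here is a minimal PyTorch sketch of post-training dynamic quantization. The network is a toy stand-in, not an actual Lenovo or Intel workload; dedicated toolchains such as Intel's OpenVINO cover this kind of optimization in production.

```python
import io
import torch
import torch.nn as nn

# Toy stand-in network; a real edge workload would be an SLM or a vision model.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 256),
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of the model's weights, in megabytes."""
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

# Post-training dynamic quantization: Linear weights are stored as int8, so the
# model is smaller and cheaper to run on CPU/edge hardware, at a small accuracy cost.
quantized = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```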

And then, on the other side, we've got Llama just dropping, you know, 300, 400 billion parameter models, so you get these massive models dropping. How do you see the future of very large models living in the cloud versus small language models running on the edge? How's that going to shake out, in your opinion? I think it's all about using the right tool for the job. And, you know, if you need a highly complex, PhD-level response, maybe you have to go with the largest models out there. But if you are just looking for a chatbot to do customer service engagements, something running closer to the edge that's quicker, more responsive, lighter, and, again, can be run at the edge is more your model. I mean, let's not overengineer every solution.

But I think there's a place for the small language models and the large language models together. We just have to figure out what that balance is. Absolutely. Yeah. If you're just walking down the street, you probably don't need to drive a Ferrari. Right? Right.

It's exciting to see these things be innovative, and the improvement that we are seeing, like you said, through actual architecture innovation. The next question I have is around accountability. What measures can be taken to ensure accountability and reliability in AI decision-making processes, particularly when dealing with high-stakes or mission-critical apps? Absolutely. This is important. Somebody has to be responsible for when

AI goes awry, and if there's no accountability for that, it's everybody pointing to, well, you know, maybe a developer did something wrong somewhere, and there's no accountability and no method for redress if something goes wrong. So, you know, transparency in the development process, documentation, and explainability are important, along with robust testing and validation, and then accountability mechanisms: establishing clear accountability structures so the right people have the right roles and responsibilities for the AI system's development.

And somebody is responsible, and there's some human oversight if a decision is being made. So if your loan gets denied, there's some human oversight.

It's not just an automated process making these decisions; there has to be some human oversight so that there's some responsibility involved. Absolutely. So, as we see AI systems rolled out across probably every company in the world, for folks listening to this show who are in an executive role, who should they be thinking about putting in charge of that accountability? Is that the CTO, or is this a new chief AI officer? What do you think is going to be standard practice for making sure that these systems are accountable and explainable? I wish I knew standard practices in AI; AI in general is nonstandard practices. I do encourage the development of the new chief AI officer role.

I think that's important, to have this kind of holistic, executive-level presence to make these decisions. When you deal with the CIO and the CTO, they may not have that holistic view of how AI interacts in a responsible manner and in a manner that protects your clients. So I think having somebody where the buck stops is important. Yeah. It also feels like it kind of bleeds into the CSO territory as well. It's going to be fascinating to see how this shakes out.

And then, of course, you know, give it five, ten years and AI is going to be just assumed. And, you know, we don't have chief cloud officers. So it'll be kind of fascinating to see how that role then gets reintegrated back into the standard exec C-suite. So. Well, when AI is successful, it's no longer AI, right?

Right. Like Google Maps: that used to be AI. Now it's just an app on your phone.

Yeah. so, like when AI is successful, it doesn't seem like AI, but it's still there. Yeah, absolutely. So a couple of fun questions to close out the show.

What kind of books or podcasts or content are you absorbing about AI that's interesting to you? So, I subscribe to every newsletter out there. Do you have the time? How do you do that? I must say, I don't read every one; I read a selected amount. But I subscribe to them, and, you know, I post the interesting stories on my LinkedIn.

As for what's happening in AI, I allow other people to do that search for me. Also, there's a service out there, Perplexity, that searches the web; it's really great at searching the web for topics. So if you want an update on a particular topic, Perplexity is a great place to go.

Yeah, Perplexity is great at the simple user experience of kind of multi-step reasoning and then documenting the sources. It just gives you that feeling that you want. I totally agree. I'm a big Perplexity user.

I'm also enjoying the new Artifacts feature on Claude 3.5 as well. So it's fun to see all that moving forward. Okay. So, any particular podcasts that you're really enjoying in the AI space, or even non-AI related? Yeah. Yeah, I like a lot of podcasts.

Let me check my podcasts here. You know, Data Skeptic and DataFramed are ones that I definitely listen to. Gradient Dissent and The Gradient are probably other ones that I listen to, along with everything else, you know, all the other podcasts out there. Yes, so many. One that I particularly enjoy is called Latent Space, and the founders of that came up with this idea of the AI engineer, and there's been a lot of work around that.

So it's kind of fascinating to see how it all shakes out. Well, David, I really enjoyed this chat, and I appreciate you bringing all of your expertise and your years of knowledge to the show to share. What's the best place for people to go to find out a little bit more about you and what Lenovo is doing in AI? You can reach out to AIdiscover@lenovo.com; that'll lead to the center that I'm in charge of. Or you can reach out to me directly at dellison@lenovo.com, that's D-E-L-L-I-S-O-N at lenovo.com.

Happy to answer your questions, and happy to help you figure out how AI can help you in your journey. Awesome. Well, thanks for your time, and hopefully we'll see you somewhere on the internet. Take care. Thank you. Bye-bye.
