Ethics in AI: Who Decides? | Intel Technology



(upbeat music) - [Announcer] Welcome to CyberSecurity Inside Live from the Green Room, backstage interviews with speakers who are shaping today's discussions on cyber security. (people chattering) Now here is your host, Camille Morhardt. - Hi, and welcome to today's episode of CyberSecurity Inside Live from the Green Room.

This is Intel's Innovation Summit, and I've got with me Ria Cheruvu, who was just on stage with Pat Gelsinger, Intel's CEO. She is Intel's AI Ethics Lead Architect. She graduated Harvard University at 14 years old, joined Intel, and a year or two after that went back and got her Master's degree. She is now 18 and also Intel's AI Ethics Lead Architect.

I'm really interested in talking with her about all things ethics, as they relate to artificial intelligence. So let's listen closely and find out what insight she has to give us. Welcome to the show, Ria. I have a number of different places to start, but why don't I just toss that to you and say, where would you start in a conversation about artificial intelligence and ethics? - I would definitely start off with what is AI ethics in terms of, you know, what is the definition that we start to care about, and what does it really mean to develop technologies that are responsible and trustworthy, and why it matters? - Okay, so what I've heard is over time there's been a concern around privacy and surveillance, if you wanna put those in a category together, concerns around bias, and that even unintended bias can lead to discriminatory policies or frameworks or actions taken. This is all within the context of artificial intelligence and what people worry about.

And then inexplicability, so if we call it unexplainable output from artificial intelligence, I think people are kind of familiar with those at a high level. So are those still the major concerns, or have we moved on and there are new worries? - Definitely still the same concerns that are applicable on a wide variety of levels, and the ways in which these emerge are starting to become a lot clearer. For example, with privacy and surveillance, this started to come up when we were thinking about situations like, okay, surveillance in malls or in airports, but now we're starting to see that there might be some more downstream implications that we had previously not thought of, which are opening up interesting new use cases for AI systems as well as considerations.

So a great example of this is, you know, policing workers. When you're using or applying these algorithms that are detecting hard hats or other types of objects, can they unnecessarily, or in an unanticipated way, be used to police workers' time breaks? Because you're able to figure out when a worker is in a particular location and when they're not. So these types of downstream implications, where you now start to have multiple AI principles and AI ethics principles intersecting with each other, that's the current focus as I see it today, Camille. - So it's not so much that there's new categories that we're worried about, it's that we hadn't even thought of things that we could track now, even if we're not attempting to track them. Because when you think about privacy and then you think about precision medicine, and you think about even down to gene editing, now you're talking about a corporation, maybe, even having access to your own personal genome, that's pretty detailed. - Exactly. And it is a little bit of both. So there are also some new categories that are emerging, like sustainability for AI systems and energy efficiency, and a lot of the conversations around there are fresh: new tooling ecosystems, new discovery given the nascent nature of all of the different directions that we can take, as well as the regulations and restrictions that we want to apply.

But yes, it does boil down to these main categories and then all of the use cases that we can start to see this applied in. For example, just taking transparency in the healthcare domain as well, where we start to get into contentions with security, which we've always known, right? Transparency and security are always in contention with each other, but the way that we navigate that in relation now to user experience, and making sure users are comfortable with the way that their data is being used, they know what's going on, but we're also able to differentiate between malicious users or attackers of the system. All of these very complex questions are really coming under the AI ethics domain and are starting to be addressed for different use cases. - I wanna talk about some other kind of new, fun, troubling, depending on your opinion and perspective, use cases that we're seeing.

Like one thing is we have, I guess we would call it generative AI, where we're creating, you know, artwork or music or all kinds of other things, videos, and, well, tell us the difference between that and deep fakes. Is it one and the same, or is it how you're applying it? - Not the same. So it can be different methodologies that are applied for them. The concept generally behind these can be the same, but yeah, I would definitely put it under different categories, because the models composing them are different. So with generative AI and with, you know, different types of networks that are used to generate images, it is definitely the same fundamentals that we're starting to see as part of text generation and image generation models like DALL-E and CLIP that are out there.

It has kind of spun into, in and of itself, its own motivations that we're starting to see with these models as well. But the fundamentals are shared between different disciplines and are continuing to grow. - But can you say more about generative AI? What is it generating? - Absolutely. So it can generate text, images and, you know, audio in certain cases. It is essentially being able to digest a large database of different types of samples, of the data modality of your choice, and then is able to perform essentially a reconstruction of that and then output, or spit out in a sense, you know, a reconstructed version of that data point that is, you know, matching the type of input.
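As a rough illustration of the reconstruction idea described above, here is a minimal autoencoder sketch in Python. It is not the architecture behind DALL-E, CLIP, or any specific generative model mentioned here; the layer sizes and the random input batch are placeholders, and the point is only that the model learns to output a reconstructed version of what it takes in.

```python
# Minimal sketch: a tiny autoencoder that learns to reconstruct its input.
# Assumptions: PyTorch is installed; inputs are flattened 28x28 images (MNIST-like).
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, input_dim=28 * 28, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                     nn.Linear(128, input_dim), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

batch = torch.rand(64, 28 * 28)        # stand-in for a real batch of images
reconstruction = model(batch)          # the "spit out a reconstructed version" step
loss = loss_fn(reconstruction, batch)  # reconstruction error drives the training
loss.backward()
optimizer.step()
```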

And again, the methodologies differ based on the exact model that you're using. The overall fundamentals of this reconstruction element generally stay the same, but you know, the more that we change it, and in most cases the bigger we make the models and the more iterations we train them for, the better results we get. - So in that context, AI can generate, say, news articles that could be posted that are made up by artificial intelligence. - Yes. And a colleague and I were thinking about this a couple of years ago, just thinking about, okay, you know, what are the implications of an AI model generating news content? Especially when human users and readers can't really distinguish between what AI models and what a human is writing, and what are the implications of that? There's definitely the opportunity for misinformation to be propagated.

I know that there is some recent rise in, for example, students using AI models to generate their essays as well, which is starting to bring into play questions like, is that a good thing? You know, using an AI system to handle a lot of the grunt work that you would need to compile all of these references, or is it taking away from the learning process as well? But overall, there's a lot of concerns around the indistinguishability of the text that's generated, or the images, but also, you know, a lot of insight into the potential of these systems. So it's a very interesting balance that we need to strike here between the two. - I know that AI can generate like paintings of, let's say, long since deceased artists, you know, so we're seeing what would, you know, what would Van Gogh be painting now if he were...

And generating, and it's sort of fun and interesting to look and see how good it is, but this can raise a different sort of question when it's generating artwork from living artists who are also generating artwork at the same time. How do we handle that kind of thing? - Yeah, and I think it's, you know, it's tied to things like licenses, copyrights, and how do we really navigate all of that. We had a couple of very interesting backstage conversations at Innovation on this as well.

And so, Camille, today, I think the way that we are looking at it from an AI perspective is when you're distributing these models, there are certain obligations that you may need to follow when it comes to transparency and documentation-related items. And I think the legal perspective to that is still nascent. It does need to be integrated fairly quickly. It is amazing to be able to understand the failures and the weaknesses of the model, the provenance of the training data and related elements, but we do also need to have some guidelines around what data the AI model can draw from.

It's not always limited to that, but I think that's a really good first step. If we're able to understand where the data the AI model is being trained on is coming from, we're able to constrain that database to the images that we know the AI model can use, or, you know, the developers of the system are able to create a model that uses it responsibly and track that over time. I think that's a great start, or a first step, but that is contingent on, you know, not releasing models that are trained on these huge, you know, corpora of datasets that are all mangled together, where we're not really able to track where they came from, and, you know, how do we provide that credibility back to the original source? - Can you talk about deep fakes, what are they and, you know, how do we distinguish them? Or can we? - Deep fakes are a pretty challenging topic. So previously, and I feel like they've lost a little bit of their hype right now, which is kind of strange.

You know, the stable diffusion and generative AI part has taken the focus, I think, as part of the media, but deep fakes are always a constant concern, because you have AI systems that are able to generate images, video and audio that's pretty much indistinguishable from what you would expect to see in the real world, you know, narrated by a human or, you know, created by nature. And the interesting thing with deep fakes is the number of different use cases that they could be used for. I feel like with the technology behind these elements, as well as the elements that we might start to see with, for example, image reconstruction, where you're actually bringing old images, you know, in like black and white style, to life, you're able to color them and then create 3D reconstructions, there's a lot of interesting potential there. And for me, I feel like deep fakes are kind of the opposite side of that, where there may be some potentially good use cases in education, but we're seeing a lot of the opportunities to promote misinformation with this technology.

How do we control it? Are we creating AI models to be able to detect deep fakes that are produced by other AI models? Is it some sort of a competition in order to figure out, you know, who is generating what and then be able to thwart that immediately? Are there ways that we can use traditional computer science and cryptography and hashing in order to figure that out? I think those are conversations that are still ongoing. Again, the focus, I feel, has waned a little bit, but you know, it's going to rise back up very quickly as we see more efficient creation of deep fakes and hopefully more efficient creation of detectors and thwarters in that space. - The other phrase you brought up is stable diffusion.

Can you describe what that is? - So stable diffusion is one of these, you know, image generation models that's very big out there. It's getting integrated into a bunch of different applications, and I believe it is what is powering a lot of the current kind of websites and services that are out there. And I'm still getting a grasp of the technology and reading through the articles. I mean, can you believe it? It's popped up within like two months, you know, all over the place. Now we see the words like stable diffusion, or 'we use it as part of our application to generate images,' and stuff like that.

So definitely learning a lot more about the technology in relation to generative AI. But the fundamentals are, again, you know, around reconstruction and being able to get back to an image that you'd previously not seen, really pulling together elements from different workflows. - Do certain kinds of models pose greater concern on an ethics front? So I mean, artificial intelligence is so incredibly broad.

It's like, okay, you know, we have deep learning, we have neuromorphic computing, we have, you know, federated learning. We have so much different stuff within it, so do we worry more about one versus another? - It's a great question, and from all of the colleagues that I've spoken to, I think the consensus is yes, we do want to have hierarchies of prioritization and levels which we use to decide what AI model needs to have more stringent ethical AI guardrails, as we can put it, compared to another. And that is really based off of risks and harms, in terms of analyzing the ethical implications of the system on society. And then, in and of themselves, those methodologies and the definitions and frameworks that we use to figure that out, that's still under debate. If you use one metric or definition in order to, for example, identify fairness or bias of a system, you could, if you're optimizing for that fairness metric, accidentally exacerbate another. So you start to see, you know, a lot of different metrics that you need to look at, some of which may not be relevant at all, and you have to tailor it accordingly.
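To make the point about one fairness metric masking another concrete, here is a small, hedged sketch with entirely made-up numbers: the same predictions satisfy demographic parity exactly while leaving a large gap in true-positive rates (equal opportunity), so optimizing only for the first metric can hide, or even worsen, the second.

```python
# Toy illustration (all data invented): two fairness metrics can disagree on the
# same predictions, so a model tuned for one can still fail the other.
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Gap in positive-prediction rates between the two groups.
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_diff(y_true, y_pred, group):
    # Gap in true-positive rates between the two groups.
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

y_true = np.array([1, 1, 0, 0, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(demographic_parity_diff(y_pred, group))          # 0.0 -> looks "fair"
print(equal_opportunity_diff(y_true, y_pred, group))   # 0.5 -> large gap remains
```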

But putting aside those problems or, you know, rising concerns with the methodologies that we are working to solve, yes, there is definitely a prioritization level or a risk level, and I think, personally, the European Commission's proposal on AI does a great job of doing the categorization. There's a lot more work and refinement to be done in terms of what is general purpose AI and what isn't. But you know, having that delineation based on the use case of AI systems and how it builds up over time, that's definitely very useful.

For example, AI being used for determining access to employment or to education definitely has very, very big ethical implications and probably should be constrained very much. Whereas if we see the use case of AI in games or for, like, you know, Instagram filters, those probably don't need that much of a constraint. AI in healthcare, you know, we can start to think about the different obligations that we might need for chatbots or similar types of use cases. And definitely they have their own risks and harms associated with them. We wanna treat them differently depending on the types of implications and harms that they can bring up.

- What is the thing that you would worry about the most in this space? - I think it may be not necessarily conflicting definitions and frameworks, but the applicability of a lot of these methodologies, I think, may be a concern today, Camille, and I say this from the perspective that we have a lot of amazing teams across the industry, academia, governments, and so many different organizations that are now recognizing the problem. We understand that something needs to be done and we are trying to handle it or attack it essentially from different phases of the AI life cycle. There are a lot of tools out there. I think the objective right now is to evaluate what exactly each of us is adding to that space so that we're able to come up with some sort of a holistic solution. And I see that done as part of the traditional, you know, computer science disciplines, and I think that is something that could definitely be reflected as part of AI ethics. And again, it's very nascent.

It's gonna take us some time to get to the place where we kind of currently are with AI. You know, we have a sense of the different overarching disciplines within AI, reinforcement learning, supervised learning, unsupervised, right? We're able to categorize it fairly nicely. For responsible AI and AI ethics, we're just getting there. We know there's a lot that needs to be done. We have these principles like transparency, privacy, energy efficiency that we know something needs to be done about, but as the tooling ecosystem continues to grow, I think these conflicts and these overlaps are definitely gonna iron themselves out. - I wanna pick on one thing you said, because I had a question about it and you just kind of articulated it perfectly.

You said energy efficiency as a responsibility, which I think, you know, might allude to just greater sort of concerns for the planet, or potentially sustainability, or people might call it climate awareness. How broad do ethics extend when we're talking about artificial intelligence? I mean, this is a compute tool, so are we talking social movements? Are we talking environmental? - It's a great question and I think there are two questions hidden in that initial prompt. So for the first one, in terms of the extent to which energy efficiency is part of this and the overall societal context around it, and also kind of getting into the second question around how far ethics really goes into technology: my personal opinion is, it's very far-reaching, because when we are mentioning ethics, it is a very loaded and important term. And I think many of us in the AI ethics space, and myself included, we use that word in order to signify that there are implications for the greater context beyond just the technical mechanisms and the infrastructure we're creating towards explainability and privacy. For me, it represents that unification of the siloed elements, where we are actually taking into account the bigger picture around, okay, you know, societal debates that are surrounding the technology, for example, pulling on energy efficiency. Definitely climate awareness is a key part of this.

And to provide some context into this, there are two different, you know, main categories of AI and sustainability. There's sustainability for AI systems, where we're looking at optimizing AI models so they don't have such a large carbon footprint, or are consuming less compute, et cetera. There's also AI for sustainability, where we're seeing the rise of numerous different types of use cases for AI systems to help improve the climate, whether that's, you know, detection at a larger scale of trends and patterns, or even as simple as, you know, at a local level, helping users better understand their footprint and what they can do to optimize or refine that. So within this, I believe that that social aspect is very critical. It's why we need so many perspectives at the table.
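For the "sustainability for AI" side, a rough back-of-envelope estimate of a training run's footprint can look like the sketch below. The power draw, utilization, and grid-intensity figures are assumptions chosen for illustration, not measurements from any real system.

```python
# Rough sketch: estimate energy use and CO2 for a training run.
# All numbers below (GPU power, utilization, grid intensity) are assumed values.
def training_footprint(num_gpus, gpu_power_watts, hours,
                       utilization=0.8, grid_kg_co2_per_kwh=0.4):
    energy_kwh = num_gpus * gpu_power_watts * utilization * hours / 1000.0
    co2_kg = energy_kwh * grid_kg_co2_per_kwh
    return energy_kwh, co2_kg

# Hypothetical run: 8 GPUs drawing ~300 W each for 72 hours.
energy, co2 = training_footprint(num_gpus=8, gpu_power_watts=300, hours=72)
print(f"~{energy:.0f} kWh, ~{co2:.0f} kg CO2e")   # ~138 kWh, ~55 kg CO2e
```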

I often say it's like, I don't know, maybe 10 or more different disciplines and domains that are kind of integrating into this. At least that's what I've learned from all of the colleagues that are working in this space. So ethics is a very loaded term. It's a term that requires a lot of responsibility and a lot of different people.

But again, that bigger picture always needs to, in my opinion, be there, even though we may, in a siloed way, work on technical mechanisms that enable each of the elements that are composing ethical AI. - With AI now, you know, tackling sort of, as you say, greater social movements, is there a risk that there's kind of this consensus among, I'll just say, the tech community of what is social good or what is the right thing to do for the climate, and that does not represent the opinions of people who maybe disagree? Or, you know, don't have anything to do with technology and so aren't even following. Help me out here. Do you understand what I'm asking? - Yes, I do. It's a beautiful question, because the answers to that can be somewhat controversial, but my opinion, from what I've seen, to be completely honest, is that when an AI engineer is starting to become a philosopher, or is starting to get out of their domain, there are a lot of problems that can happen.

It is a wonderful thing because, you know, a lot of the engineers who pioneered and created the technology, they had a vision for where it should go. And I'm sure you may have seen, you know, there's a lot of debate around, you know, founders of certain AI disciplines and the way that they wanted to see the technology progress. There's, you know, sometimes disapproval or, you know, disagreement with where it should have gone and where it is now.

You know, that speaks to the space of AI overall, with like the whole artificial general intelligence route. That's completely one way. And then there's also this route of deep learning and just building better models that are faster and more performant, sometimes large, sometimes small, and going that way. But throughout these different disciplines there is, in a sense, a mixture of what you had shared.

So the first is, you know, we don't have a lot of team members who believe that the other disciplines are important, but that is changing with responsible AI, because as soon as you mention ethics, that societal concept automatically comes in. And I've had many colleagues, you know, they'll immediately bring up like trolley problem types of situations where, you know, you have autonomous vehicles and you wanna decide what they're doing, or, you know, AI in the workforce, and there are very specific disciplines that come to mind to different teams of individuals as soon as you mention responsible AI. So I think that even the mention of it and the discussions are changing, somewhat, the idea that, you know, technology is the only thing that really matters and we're not really thinking about the broader picture. But when it comes to engineers or even specific teams, I'm also speaking to like legal and philosophy.

If they don't have integrated disciplines or perspectives, definitely there is an amount of siloing. We don't get the type of alignment that we need. And we definitely see this as part of the regulatory conversations as well. If you have more representation of stakeholders as part of these committees that belong to one group over the other, you're gonna start to see a couple of trends that are popping up. For example, if you have a lot of technical community members, you might find these definitions that may be hard to follow for lay people or for regulators.

And if you have more of, you know, the regulatory folks that are involved as part of making standards or, you know, just guidelines and documents, you have things that don't have a lot of practical applicability back to AI models. For example, some of my colleagues have commented on this, saying, you know, oh, you know, there's a regulation out there that says this is what needs to be done at this point in time, but from the technical side, it really can't be done. You can't just make that statement. We don't even have the testing infrastructure to validate that statement if somebody makes it.

So for that type of discrepancy, we definitely need the different teams looking at it. So it's a balance, you know, we need our teams thinking about other perspectives, but in the end we need multidisciplinary teams, stakeholders, boards in order to attack the problems. So no one-size-fits-all solution.

- What are we doing about this data collection, this massive asymmetry of information? Those who are collecting the data, a lot of times people become concerned that data collection, and even storage and access, is consolidated within some subset of corporations, not necessarily even governments, and that we don't yet have, you know, regulations or laws or policies, at least not global ones, in effect. So how should we look at that? - And I like the word that you used as well, in terms of, you know, the asymmetric kind of nature of the problem itself. So as part of this, and it's very interesting, I think, to start this off by saying, you know, there are specific instances where, even if, let's say, you know, you did have access to unbiased data, or you had access to the type of data you would need, you can also start to see label bias directly during the annotation stage itself, when you're cleaning your data, when you're removing outliers.

And if something doesn't conform to the data scientist's worldview and they take it out, or maybe it doesn't even work as part of their problem statement or analysis, it can have a lot of impact on other populations as well. Whether, you know, you're cleaning out missing data, or again, you know, stripping outliers from the data, all of these data science operations, you know, they can contribute to label bias and many other types of biases as well. But I think overall, when it comes to the data collection and annotation stage, it is a key problem in a number of different ways, which, again, have their own problem statements attached to them and potential solutions. For example, to date, a lot of the problems related to data sourcing have had solutions like, okay, can we use data augmentation? Do we take kind of the same data points that we have and then flip them around, turn 'em around, resize them, rotate them, you know, can we do something to change them up and then add them to our dataset? And I wanna take that as an example, because I came across a very interesting case study or use case, you know, a couple months ago that was illustrating something interesting here. So if we just plug into data augmentation for a moment, how could ethical AI influence this type of operation to potentially lead to better quality datasets? Assuming that you don't have a lot of these datasets available that you're able to find under the license that you'd like them to be under.
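The classic flip-rotate-resize augmentation described here looks roughly like the sketch below; assuming torchvision is available, each call to the pipeline turns one image into a slightly different training example. The sizes and the random test image are placeholders.

```python
# Minimal sketch of classic data augmentation: flips, rotations, and resized crops
# used purely to boost dataset size and variety. Assumes torchvision and Pillow.
import numpy as np
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),   # flip them around
    transforms.RandomRotation(degrees=15),    # rotate them a little
    transforms.RandomResizedCrop(size=32),    # resize / crop
    transforms.ToTensor(),
])

# Stand-in for one real sample (e.g., a CIFAR image): a random 32x32 RGB image.
img = Image.fromarray(np.random.randint(0, 255, (32, 32, 3), dtype=np.uint8))
new_sample = augment(img)   # a fresh, slightly different variant each call
```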

So one of the examples that I saw is, in a textual dataset, being able to replace words that are gender-specific with something that's gender neutral. And for me, although it seems like a very simple example, this tool was taking a look at doing it automatically and, you know, being able to create that nicer dataset and give you, not a very detailed visualization, but some sort of a report of what's going on and what changes were being made to the text. And I think that's just such an interesting example of how we can start to implement these types of concepts at lower levels, that data scientists and AI engineers can use hands-on. If you don't have a lot of great data out there, and you are forced to use data augmentation, maybe there are methodologies that you can incorporate at these levels that can help boost, or at least enable, some sense of data quality and ethical AI, and improve the diversity and representation of your datasets.
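A minimal sketch of the kind of tool described here, swapping gender-specific words for neutral ones and reporting what changed, might look like this. The word map is a tiny, hypothetical example rather than a complete lexicon, and a production tool would handle grammar and capitalization far more carefully.

```python
# Sketch: replace gender-specific terms with neutral ones and report the changes.
import re

NEUTRAL_MAP = {   # illustrative only; a real mapping would be much larger
    "he": "they", "she": "they",
    "his": "their", "her": "their",
    "chairman": "chairperson", "policeman": "police officer",
}

def neutralize(text):
    changes = []
    def swap(match):
        word = match.group(0)
        replacement = NEUTRAL_MAP[word.lower()]
        changes.append((word, replacement))
        return replacement
    pattern = re.compile(r"\b(" + "|".join(NEUTRAL_MAP) + r")\b", re.IGNORECASE)
    return pattern.sub(swap, text), changes

augmented, report = neutralize("She gave her report to the chairman.")
print(augmented)   # "they gave their report to the chairperson."
print(report)      # the simple "what changed" report: [('She', 'they'), ...]
```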

So this is a poor example in the sense that, is that it? You know, that's all we're gonna do, just a quick fix to data augmentation? How do you know what impact it has on the system? But at each of these levels, we definitely need to have solutions, and my argument is maybe we start small and then we build up from there. And when it comes to datasets, you know, being constrained to certain organizations, that's a very big problem, because reproducibility has already been a severe problem in the data science space. When folks publish papers and models, sometimes they don't share their datasets. I'm guilty of doing the same, because we don't have time, maybe, or we don't wanna go through a review process that's very lengthy because we wanna get our results out there before someone else does. There are a lot of factors in there that can contribute to that. We do definitely need to see more data sharing efforts.

- Can you just explain what data augmentation is? I'm getting that it means you don't have enough of one kind of representative within your dataset, so you're maybe sort of multiplying or extending that to assume like there's more, you're making some kind of an assumption that may or may not be true. - Right, it's a great summary, but the first part of that premise is a little bit different. So with data augmentation, we are looking at representation, but not from the ethical AI perspective. So for example, if you have your very typical CIFAR or MNIST dataset, and you, you know, want your model to perform better on fours or nines, or you just wanna boost the size of the dataset by 100 to 100,000 more samples, or something like that, that's when you would start to apply these rotations, fixes, flips, you know, resizing, basically to boost the size of your dataset. In terms of dataset augmentation for ethical AI, though, that's the nascent space. - So what are the things that we're worried about with data augmentation with regard to ethics? - It's an interesting question.

So when it comes to data augmentation and ethics, there is one key problem that pops up all the time, and this is the same problem that's exacerbated with synthetic data. If you are generating additional data points for your dataset, first of all, do they actually make sense? We do this all the time because we wanna boost the amount of data, because better data or larger data equals better models, most of the time. It's starting to become the case that you can't always say that, but for now the equation is still pretty much better data or larger data equals better models. So when we are kind of trying to boost the size of the dataset, and potentially in some cases even the quality, is it actually representative or reflective of the real-world data when you're creating these samples of synthetic data? And data augmentation comes under synthetic data generation in many cases.

Does it make sense, given the types of inputs and the trends that you would really see as part of the real world? That, I think, Camille, is the main question that comes into play with data augmentation. It starts very simple, in terms of what is the real-world applicability of the data, but then it starts to evolve into something that's a lot harder to tackle, which is when you are dealing with potentially sensitive data. Let's say you're generating data related to, you know, race or gender, or even, you know, proxy variables, like, you know, not license plates necessarily, but handwriting or something similar; it starts to evolve into something a lot harder to handle.

- So would we do something like disclose the percentage of the data that was synthetic or augmented, so that we could take some assessment of the risk of the assumption there? - Exactly. That is what we want to be able to do. And with that, we wanna be able to track, and also validate, the provenance of the image data. And as we can see as we're having this conversation, we can start to think that, you know, in addition to just having the percentage of the synthetic data in the training dataset, or even, you know, in the evaluation dataset, wouldn't it be cool to have a little bit more information about (drowned out by Camille) - Which sets of data. (laughs) Which sets did you augment? - Exactly right, right? - Yeah.
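One lightweight way to make that disclosure concrete is a small dataset card that records the synthetic fraction, which subsets were augmented, and where each source came from. The field names and values below are purely illustrative, not a standard format.

```python
# Sketch: a minimal "dataset card" recording synthetic fraction and provenance.
from dataclasses import dataclass, field

@dataclass
class DatasetCard:
    name: str
    total_samples: int
    synthetic_samples: int
    augmented_subsets: list = field(default_factory=list)  # which slices were augmented
    provenance: dict = field(default_factory=dict)          # where each source came from

    @property
    def synthetic_fraction(self):
        return self.synthetic_samples / self.total_samples

card = DatasetCard(
    name="worker-safety-hardhats-v1",        # hypothetical dataset
    total_samples=120_000,
    synthetic_samples=30_000,
    augmented_subsets=["low-light images", "rare hardhat colors"],
    provenance={"real": "site cameras A-C (licensed)", "synthetic": "simulator v2"},
)
print(f"{card.synthetic_fraction:.0%} synthetic")   # 25% synthetic
```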

- And then we run into security problems, which is, you know, okay, if there's a malicious actor who has a lot more information about the data samples that the model was trained on, because you're giving them information about the synthetic data, what couldn't they do with that information? They can engineer inputs that are matching that, or, you know, they could try to play around with those inputs and see what happens. So it's so interesting, you know, it's a lot to take on, because when we, for example, present these types of fairness and bias, or even just basic representativeness questions, we're not really even talking about sensitive demographic data here. We're just talking about, does it actually match the real world? And then we immediately have our security teams that come up and raise these challenges as well. So it's a lot to take on for one person or one team, that's for sure. That's why we definitely need multiple teams that are challenging us when we say, "Oh, you know, we have problems with our dataset, let's use data augmentation." Okay, we have problems with data augmentation, let's use synthetic data.

We have problems with ethical AI, let's, you know, change our gender-specific terms to gender neutral. We have problems with security, we put in access control mechanisms. Something like that type of workflow needs to be established. We definitely need to do that, so. - And it's kind of the, it almost goes back to the unintended consequences, or unintended biases, now we're sort of talking about another unintended scenario. While we're trying to fix one problem, we may be creating another. - Exactly, right.

And the weirdest part about all of this, that I've seen so many different folks in the AI and AI ethics space raise, is that AI systems are fairly, well, it's wrong to say that they're dumb, in the sense that if you put an AI model in an Internet of Things environment, you're empowering it, because it's able to consume all of the data that you're getting from sensing, and it is able to actuate in so many different ways, depending on how we structure it. But, you know, AI systems fail on, you know, ridiculously strange real-world inputs. Like you can give it something that... My team has personally experimented with this so many times, you know, we'll give it an image of a cat in a blanket, like wrapped in a blanket, and it'll start to classify it as like a kite or a (indistinct) (laughs), and this is one of the state-of-the-art deep learning models that, you know, has seen so many cats and dogs and blankets. It's trained on some of the largest databases, you know, ImageNet and others, with like millions of images.

But it just can't figure out these types of problems. And we see the same thing with autonomous cars as well, which is rapidly changing, I think in the self-driving space. But you know, there's always been, up until like last year or even continuing to now, this question of, okay, we have a lot of self-driving cars out there, but how will they behave at like very busy intersections? But it's changing. We're seeing solutions pop up to that a lot.

But AI models are still, you know, fairly strange. They're weird, and we are already starting to see a lot of these capabilities pop up. So as we refine AI models' outputs, because we do want to get AI models to a point where they are able to contribute to society with better performance, we're gonna start to see a lot of these issues increase. It's a strange kind of correlation, but it's true, because the better that AI models get, sometimes the worse the issues are. So that's something we need to handle. - I'd like to get your perspective on, because robots are now joining us in our homes and on the street, (laughs) and I'd like to get your perspective on continual learning and the ethics around that.

- I haven't been asked this before. So continual learning is very interesting, because it offers a lot of challenges, I believe, and this is also applicable for offline training and a couple of very early stage conversations that I've been having with some of our Intel robotics teams as well, and externally, which is, when you're developing your model, you have access to a plethora of data. You have access to your ground truth labels, that's the main part of it, because that's how you're able to monitor the performance of your robot, your device, your AI model. You have that access for this type of evaluation and training.

When it comes to deployment, though, you don't have access to the ground truth, in the sense that you need to have this offline, you know, human evaluation team, or, you know, maybe they're online, depending on the speed of the review. They're actually reviewing the AI model's outputs, they're checking what's going on. Maybe they're telling it, okay, you did wrong here, you're doing right here, you know, rejecting or accepting the outputs of the model, and they're providing that feedback back into the system. You know, there are different models for doing it that way. That's the offline training way, but it applies for online training as well, or online learning, and continual learning, where you need to have, at some point in time, ground truth labels in order to assess the performance of the algorithms.
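The feedback loop described here, a deployed model learning on the fly while ground-truth labels arrive only periodically through a human review step, can be sketched as below. The stream, the one-parameter "model", and the update rule are all stand-ins chosen to keep the pattern visible, not a real continual-learning algorithm.

```python
# Sketch of the continual/online-learning pattern: predict continuously, receive
# ground truth only at periodic human-review points, and update from that feedback.
import random

def data_stream():
    while True:
        x = random.random()
        yield x, 1 if x > 0.5 else 0   # the "true" label, revealed only on review

threshold = 0.3                         # toy one-parameter "model"
reviewed, correct = 0, 0

for step, (x, true_label) in enumerate(data_stream()):
    prediction = 1 if x > threshold else 0

    if step % 10 == 0:                  # periodic human review supplies ground truth
        reviewed += 1
        correct += (prediction == true_label)
        if prediction != true_label:    # simple online update from the feedback
            threshold += 0.01 if prediction == 1 else -0.01

    if step >= 200:
        break

print(f"accuracy on reviewed samples: {correct / reviewed:.0%}, threshold={threshold:.2f}")
```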

There's something else that's added to this, which is qualitative behavior analysis, which is a term I'm just mentioning here right now. It's not an actual term, but just, you know, if we take the example of a robot in a restaurant, if it's navigating over and it's bumping into a lot of tables, or it's just completely, you know, bumping into a person as it's, you know, serving different folks, that's a problem. We're able to actually see that. Maybe the metrics that we've used, our quantitative metrics, are not capturing that. The performance of the model may look superb, but something's going wrong. Is it accuracy drift? Or, you know, is the sensor damaged? What's going on with the model? So these two kind of quantitative and qualitative measures of evaluation are very interesting to consider.

And ethical AI, we are hoping to actually bring from qualitative to quantitative. The reason why is the following. There's a great example of this with soap dispensers. I'm gonna just take a tangent, but you know, soap dispensers, computer-vision based, no AI included. If they are dispensing soap for folks with a lighter skin tone but not for folks with a darker skin tone, there is an immediate problem that we start to see here. And we're seeing the same type of technology and biases reflected in AI models, for example. And this is just a generic example that might be incorporated into robots and other applications as well.

So you need to be able to perform that type of testing. And in the case of continual learning, you want some of this to be done real-time, because you can't always tell the model, "All right, pause," you know, "I wanna do this test." You do want to do that, because at some point in time you wanna have a very thorough examination of what the model is doing and give it that performance feedback, but you can't do it real-time in continual learning all the time. So you want some sort of a quantitative metric that you can anchor this on.

And putting AI ethics into quantitative metrics, very challenging. That's why we boil it down to the individual elements and start from there. - What are the soap dispensers doing? - Oh yeah. So right now, well, essentially the soap dispensers are just dispensing soap, but what they're supposed to do is, if you've got a hand, right, you know, right underneath them, they're supposed to be able to just recognize that and, you know, dispense the soap, but in the case of, you know, lighter-skinned folks, they do it perfectly, it's all fine. In the case of darker-skinned folks, it completely ignores them.

- Oh, it's not seeing the hand or not recognizing that there's a hand? - It's not recognizing them. Yeah, and a little bit more insight into this, at least from my current understanding, because, you know... I think the term used was like refraction, but you know, essentially because of the light, they were not able to detect darker-skinned folks' hands. And so it wasn't dispensing it, no matter how much you wave your hand around the dispenser, so it's a key problem. Definitely it should be identified as part of your pre-design and development phases, not during deployment, after you've released your product and it's working in the real world. But I feel like that's a good example for AI models as well.

You don't really anticipate these problems, but if you actually think about it, about benchmarking, evaluating on different groups of populations, then maybe you can catch some of these issues early on. But if it's a case like continual learning, you need to be able to do some of that in real time. - Continual learning. Maybe just give a very quick definition of what it is, now that we've had the conversation about (laughs) the implications. - Yeah, sure.

So for me, the way I see it very simply is, what would happen if you could keep training, maybe not forever, but for a given duration of time where the model is constantly consuming this data and it is able to predict, and it's able to learn on the fly? And the different mechanisms that we would use in order to learn on the fly, they can vary based on the discipline, but that's the overall premise with continual learning. I just, I think of it very simplistically as training that's being done over a very large time period, you know, training forever. (laughs) But yeah, that's the general gist of it. - Got it. So are we ever gonna see ethical implications flip on their head, and we as humans are going to have to worry about how we're treating machines? (laughs) Okay. - Yes.

- Say more. - Absolutely. - I was not expecting you to say yes to that. - Oh, definitely, because, and I did end up writing a kind of short thesis, an opinionated paper on this, and I got a couple of very interesting pieces of feedback (drowned out by Camille) - Oh, I'm gonna read that now. I should have already read it, but. (laughs) - Oh, no problem.

I'll provide the summary here, because it's just so interesting. So there was an initial research experiment, I forget when, but it was a couple of years ago, where there were these researchers who created a robot in a mall that was supposed to do some navigation, and they encountered, in my opinion, what I like to phrase as an unanticipated concern, or an unconventional concern, where they had a lot of kids that were trying to kick the robot, punch it, move it around, shift it. Some of them were curious, some of them were violent, so they had to deal with this newer problem. So what the researchers did, they built, I believe, if I recall this correctly, an attack estimation algorithm and also a trajectory planning algorithm, so that the robot was able to avoid kids that were coming its way and trying to attack it, and either maneuver toward the parents, who were taller, so they were judging based on height, or, you know, maneuver somewhere else. So it's interesting.

Oh, I believe they called it like an attack evasion model. So it's interesting, because it raises a lot of questions around how humans act with AI systems. And one of my peers in this space was also telling me, based on a recent study and a survey that they were working on, that a lot of us, users of AI technology, we do want to design our actions, or the way that we act, in a way that helps AI systems. Again, it depends on the types of communities that you're serving as well, but you know, the majority, you know, we want AI to work, and we're willing to change, somewhat, in simple ways, the way that we help AI systems. And let me provide an example of that. For example, if, you know, we have an AI system that is helping us in a healthcare setting to report that we've taken our medications or, you know, do some sort of processing there, if it's just as simple as angling your, you know, medication bottle a certain way, or, you know, doing something, maybe turning on the light, I think we're willing to do that to help the AI algorithm better detect it.

And when it comes to that, we start to get some expectations as users. It'd be nice to have a little manual that tells us, you know, what is this AI model doing? Where does it, you know, typically fail? So that's one thing that we might wanna see some information on. And again, we may also want to figure out, okay, how do we actually use the system? Where is my data going with the system? We have a set of requirements and expectations that we wanna know, what exactly is this model doing? Same thing with an interactive kiosk. If it's, you know, taking my voice commands and you know, it's using that information, it would be great to know where exactly is that gonna be used? Is it gonna be used against me? We need to answer these types of questions, and preferably real-time. So that's why I think definitely we need to think about the way that humans interact with the AI systems and the ethical implications towards AI systems. Maybe not because AI has consciousness and emotions, which is a debate in and of itself, but I personally believe we're not near that.

But because of the effect that we have on AI systems. AI systems are amplifiers. So in general, we are impacting the people around us and the people who are trusting AI systems by being untrustworthy towards AI. - Right. Well I do, I know you brought it up, so now I do wanna ask you your opinion about machine consciousness, because it's such a debate right now. We're nowhere close, in your opinion.

So what would it take to know that we are close, or could there ever be a scenario where we were close to that? - I believe the consensus is we are not close to machine consciousness when it comes to a timeline. If we were to get to that point, it's probably only going to happen in the sense that you can get some sort of a simulation or a model of it, and I know quite a few people in the philosophy space who sometimes argue that we can't even get to that type of approximation, which is a good point, and I definitely take their word, because they're the experts in terms of what really is consciousness. But personally, I believe that we can get to some sort of a simulation or modeling. And the reason why I say this is, if we're able to create maps of different human emotions or morality, which is work in progress, you know, different ways that humans react in different situations, we can get to simulations of emotion.

We see it a little bit today in chatbots, where actually, if you look at the technical advancements in chatbot technology, they're really targeted around how do you best respond to phrases like, "Hey, how was your day?" rather than sounding robotic, in a sense, you know, actually having that type of nice little interaction. So I believe we're gonna get to simulations of it; we're possibly not gonna get to the consciousness, or maybe the thought uploading, but you never know. So I'm totally open to the prospect of things changing over time. But when it comes to extended consciousness as a whole, I do definitely see AI as an enabler or an amplifier, or some sort of a process that you can use to, from a philosophical standpoint, kind of extend your extended mind.

And this actually takes me back to a great point that Pat made during the Innovation 2022 Keynote, which was that everything is a computer. And that is very interesting to think about, because in a sense, you know, everything around you is a device that you can use, that computes, or something like that. So for me, I see AI as kind of an enabler and amplifier of that, and it goes back to the original research on the extended mind, and other stuff from the philosophy space. So we'll see. For now, short answer, probably not, around machine consciousness, but I'm open to the possibilities, and I'm not biased towards it, so we'll see what happens. - How do you see it as extending, an extension of consciousness? - Based on the definitions of this, I mean, the way that we currently interact with AI systems is, can you get this task done for me? Or, you know, can you listen to my command? And I'm thinking of voice recognition or assistants.

I've got my phone right next to me. If I'm telling Siri, you know, please send out this email, or something like that, that's the extent to which we have it today. But as we start to see more human-centered AI systems, like robots and others that are interacting more closely, for example, robots that are interacting with the elderly, we're starting to see a lot more emotions and connections that are being created there. And eventually I feel like we may start to see AI agents that are really improving productivity by being this type of second backup that you can rely on. We're seeing it slowly, not there yet, but for example, with generative AI, you know, this content creating machine that generates really weird images and text, but something that you could use as part of your creative process, right? That kind of backup thought generator, or something like that. So I feel like, as part of that, it's really adding to the thoughts, or, you know, the persona that we currently are.

Maybe it's, for example, personalizing our feed of the clothes that we're buying and the food that we're eating. So it is adding to who we are as a whole, at individual and group and societal levels. So from that perspective, yes, I believe it can kind of extend our consciousness. Again, it depends on the definition, but yeah. - Do you think, I'm gonna try to make this my last question.

It's incredibly difficult to hang up with you. (laughs) So you're 18, so this is relatively young in the scheme of humans. Do you think...

And the other day the Intel CEO said you'd be CEO in 30 years, so I guess in that kind of a time horizon, if you can think forward, if time moves linearly, (laughs) what do you think we'll look back on from this time and say, "Oh my God, you know, we were such, sort of, we were such babies in this space," or we didn't see or we moved, you know, we stepped left and we should have been thinking about right? Like as humanity dealing with artificial intelligence and its rise? - I think we shouldn't, because we tried our best. And that is definitely reflected in the current attempts that we have. And the reason why I say this, Camille, as well, going back to kind of those controversial debates around, you know, what was AI planned to be, and what has it come to now? I think it may be, it's not an incorrect question to ask.

You know, opinions are always welcome about this, but from what I see today with AI and technology, we've tried our best and ventured into so many different domains. So I know that when we get to the future, we're gonna converge to something that's amazing. We're gonna get to technology that we probably had not even thought about before, but all of these directions have definitely led up to this. All of the thinking, the failures, you know, they are building up to this. Now, there is an element here that this type of optimistic point of view doesn't apply to, and that is ethical problems. So for example, if we're seeing the introduction of facial recognition into, you know, challenging environments or use cases, in the sense that, you know, it just doesn't sit right morally to see certain applications introduced, to many of us, not just at an individual level, but we're actually seeing that at a shared level.

That's something we do wanna be able to prevent ahead of time. And again, to be completely transparent, I'll mention it directly, one of the key debates here is around do we ban facial recognition altogether or do we keep it? And there's folks on many sides of the arguments as well. We all see the potential for it, but we all see the capabilities for destruction as well. So I think having conversations about that early on, and definitely acting based on those conversations, that is what we definitely need to do. I don't see it as something we'll regret in the future, because again, we have multiple teams, and different folks that are working on it, talking about it and raising it. But it is something that we do need to get alarmed about and get alerted to.

And I think all of us are, again, working towards that type of outcome. So I'm very optimistic for the future. I'm happy. I know that we're gonna be able to solve it, and yeah, we're working on it, so. - Well, Ria, thank you so much. It's Ria Cheruvu, who is AI Ethics Lead Architect at Intel, and just very recently on stage with Intel CEO Pat Gelsinger, talking about Gaudi and other AI chips.

I don't know if there's anything you wanna say about Gaudi? - About Gaudi? Oh, it's a fantastic technology, and I did wanna say it was an honor to share the stage with Pat and to demonstrate some of the wonderful technologies out there, Geti, Gaudi, and our Intel Scotland team's amazing work to get the optical innovations out there, and a ton of other work with OpenVINO, with game development as well. So there are definitely challenges that we face today, and we're gonna continue to face those from an AI perspective, but with AI quality and a lot of these other key capabilities and performance-related items that we're looking at, when it comes to technology, we're solving it. So I'm very, very excited for the future and what we're gonna do. - Thanks again, Ria, appreciate it. - Thank you, Camille. It was wonderful speaking with you.

- [Announcer] Thanks for joining us for CyberSecurity Inside. You can find more episodes including our companion series "What That Means with Camille" wherever you get your podcasts. - [Announcer] The views and opinions expressed are those of the guests and author, and do not necessarily reflect the official policy or position of Intel Corporation.

(gentle uplifting music)


