Responsible AI (Cloud Next ‘19 UK)
Hi, how's everybody doing? I know I'm standing between you and dinner, so I appreciate everybody being here. If you're interested in coming in, please come on in. I'm really excited to be talking to you today about responsible AI.

I just want to give you a quick sense of the setup for today. I want to give a quick overview of some of the things that we've done at Google, and then we actually have two different conversations: Eva is going to come up and moderate a panel with some customers and partners, and then we're going to come back and have another conversation. So we've got a lot of exciting things to talk about today, and of course we really want your feedback. If you don't already have the app, please download it and give us your feedback, because we always want to make sure we're improving.

When you step back and think about where we are with AI today, it really feels like, and I think we can all attest to this by now, that we're in the midst of what many people call a second machine age, characterized by massive advances in compute and things like artificial intelligence. Look at what this means in practice: there are over 90 machine learning papers published every single day. That's now outpacing Moore's law, so the amount of innovation we've seen over a relatively short period of time is quite dramatic. As an example, think about computer vision. In 2011, computer vision had a 26 percent error rate; humans had about a 5 percent error rate. Five years later, by 2016, computer vision had about a 3 percent error rate. That's an incredible amount of change in such a short period of time, and it really leads to the space where we are today.

This change has been transformative for Google. When I started at Google about nine years ago, we never heard the words "machine learning"; it wasn't a thing that was talked about. But the number of products at Google that are built on machine learning has since skyrocketed, and this will continue. Having undergone that transformation ourselves, we are now working with a number of you as you undergo the same kinds of transformations.

There are a tremendous number of examples of where AI can be incredibly beneficial for the world. One of my favorite examples is this one in the upper middle: some very rural farmers in Japan who use a TensorFlow model to sort cucumbers on their little mini production line inside their greenhouse. We've also been able to build models that can find manatees in aerial photographs taken from very high up. When you look at those pictures, there's no way a human eye would be able to identify a manatee, but a machine learning model can, which can help tremendously with things like endangered species.
And there are places where we've seen AI can really help to create more fairness in the world. This was work that we did jointly with the Geena Davis Institute on Gender in Media. We took all of the films released in 2015, 200 films, and analyzed them for both screen time and speaking time, and the results are fascinating. In 2015, women-led films made 16 percent more at the box office, and yet men were both seen and heard twice as much on screen as women were. Think about what could have been made by the media industry if there had been better balance. This is not only about creating examples and opportunities for young girls and women to see themselves on screen; it's also very much about the bottom line for an organization.

And you can start to see places where those kinds of biases are pervasive in ways we might not appreciate or understand. There was a fascinating study of three million English words from public news sites, set up around he-she analogy statements. If I said to you, "king is to man as queen is to...", you would expect "woman"; or if I said, "Paris is to France as Tokyo is to...", you would expect the answer to be "Japan". You can measure the distance between words and then create those same analogies, and it comes up with things like "she is to registered nurse as he is to physician". Perhaps that's not a surprising bias; we know it exists in society. But some of these are quite surprising: "she is to cupcakes as he is to pizza", "she is to interior designer as he is to architect", all based on just the distance between words. These are biases that are pervasive across all of society as we know it today, and machine learning can both help bring them to light and, if not done carefully and responsibly, help propagate them in unintended ways.
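The "distance between words" mechanism described above can be sketched in a few lines. This is a toy illustration only: the two-dimensional vectors below are invented for demonstration, whereas the study used embeddings trained on millions of words of news text.

```python
import numpy as np

# Toy embedding table with two hand-made dimensions ("royalty", "gender").
# All numbers are invented purely for illustration.
vectors = {
    "king":    np.array([1.0,  1.0]),
    "queen":   np.array([1.0, -1.0]),
    "man":     np.array([0.0,  1.0]),
    "woman":   np.array([0.0, -1.0]),
    "pizza":   np.array([-1.0,  0.8]),
    "cupcake": np.array([-1.0, -0.8]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' by the vector offset b - a + c."""
    target = vectors[b] - vectors[a] + vectors[c]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("king", "man", "queen"))  # prints: woman
```

With real embeddings, exactly this offset test surfaces both the expected analogies and the biased ones the speaker quotes, because the vectors absorb whatever word co-occurrence patterns exist in the training text.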
For that reason, we really believe in Cloud AI that all of our efforts around responsible AI are part and parcel of the way we think about what is going to make AI successful. It's critically important that all organizations think about what it means to deploy AI: how do I think about all of the possible impacts of the work that I'm doing? Because the reality is, if organizations don't think about that, the potential for trust to be broken can really stop the progress and stop the benefits of AI from being realized around the world. That would be a terrible outcome, because the benefits are so big and so powerful, and we all really want to make sure they are experienced in the world.

So within Google we've done a couple of things about this. The first is that we wrote AI Principles that stretch across the entirety of Google. These are seven things that we believe AI should do, and four things that we said we would not directly pursue. But as you can see, these are quite broad. It would be hard, as a product area, to make decisions when evaluating an individual product or an individual use of a product based on this alone. What does it mean to be socially beneficial? How do you measure that? How do you know you're not propagating unfair bias? Is that even possible to fully achieve if there's no real definition of what it might mean?

So in order to support these principles, we've put into practice two processes within Cloud that help us operationalize them. The first is that we evaluate the customer engagements that use the machine learning tools we've created, and look at how those engagements align with the AI Principles. The second is that we evaluate every product that we build in a really deep, robust way, with a cross-functional, cross-Google group of people who come together. We're intentionally multi-level, from quite junior in the organization to very senior. We pull in the relevant stakeholders, we pull in external voices, we pull in folks from the human rights community and the civil rights community, to make sure that we're continually opening our frame of reference and evaluating each product in a unique way, so that we are doing everything we can to ensure we're aligning with our own principles.

It's important to us that this process is rigorous and engaging. I can say that in my nine years at Google, it's the most inspiring thing I personally take part in. Those conversations, while sometimes really challenging, always leave me having learned something I didn't know before, and I find them incredibly inspiring and helpful.
They've really opened that frame of reference for our entire organization. We want these processes to be efficient, that's important; they have to be effective, which means we actually put these things into practice and measure them to the best of our abilities; and they have to be balanced, because we have to look at the needs of the business as well as the responsibility.

We've launched a few things today that I want to touch on briefly, because they are really exciting efforts to help organizations with this. The first is around explainable AI. This gives a human observer insight into why a model behaves the way it does: it can surface which factors led to a particular outcome, and in what proportion. That information is really hard to get today. Explainable AI is a way of building trust, because it's important to design interpretable AI and then be able to deploy it with confidence. That is incredibly challenging when you don't know exactly how a model was built or how it's making its decisions, and you have to think across the whole range of stakeholders, all the way out to the business user or consumer on the other side of that model, to help them understand as well.

This can explain why an individual data point received a prediction; it can help creators debug and refine a model, verify that its behavior is acceptable, and build a general understanding of the model. We're really excited about this, and I hope everybody gets a chance to try these tools as part of our AI Platform. It gives analysis with every prediction, and you can choose an explanation method. We've also integrated this with our What-If Tool for model inspection, which can help you test: what if the input looked like this? What if I try it this way? What if I take on this persona? And it gives you more information that way.
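The idea of "which factors led to a particular outcome, and in what proportion" can be illustrated with the simplest possible case, a linear model, where each feature's contribution relative to a baseline input can be read off exactly. This is a hand-rolled sketch with invented weights and feature names, not the Cloud explainability API; real tooling extends the same baseline-relative idea to arbitrary models via methods such as integrated gradients or sampled Shapley.

```python
import numpy as np

# Invented toy model: score = w . x + b over three made-up features.
feature_names = ["income", "debt_ratio", "years_employed"]
w = np.array([0.8, -1.5, 0.3])
b = 0.1

def score(x):
    return float(w @ x + b)

def attributions(x, baseline):
    """Per-feature contribution to score(x) - score(baseline).

    For a linear model this is exactly w_i * (x_i - baseline_i), and the
    contributions sum to the total change in score relative to the baseline.
    """
    return dict(zip(feature_names, w * (x - baseline)))

x = np.array([0.9, 0.4, 0.5])          # the instance we want explained
baseline = np.array([0.5, 0.5, 0.5])   # a reference "typical" input

print("score:", round(score(x), 3))
for name, contrib in attributions(x, baseline).items():
    print(f"{name:>15}: {contrib:+.3f}")
```

The output tells the human observer not just the score but which inputs pushed it up or down, and by how much, relative to the chosen baseline; choosing a sensible baseline is itself one of the design decisions explainability tooling asks you to make.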
The second thing we've released is something called model cards. This builds on research we released last January, a proposed framework for better understanding how a model performs. Model cards are short documents that accompany trained machine learning models, and we've released two, for components of our Vision API: face detection and object detection. Our goal with these first model cards is to provide practical information about the models' performance and limitations, in order to help developers make better decisions about which models to use in which context. This is a really exciting area for us. Not all model cards will look the same; we have these two available today, and we're looking at how we might create more for other parts of our technology, as well as where this will go in the future. You can think of these like nutrition labels for machine learning. They have incredible potential to help address issues of fairness and bias, along with general explainability and transparency, and we're really excited about this.

We often talk about how technology is most powerful when everyone can use it, and really, at the end of the day, it goes even beyond that: technology is most powerful when everyone can benefit from it. That's what we're aiming to do with our Cloud AI products. I wanted to give you that short overview as a setup for the conversations that are going to happen, and with that, I'd like to invite Eva, Sims, Ulli, and Andreas up to the stage to have a conversation. Thank you very much.

So, good afternoon, everybody. My name is Roger, and I work in product for Cloud Artificial Intelligence in California, and I have the pleasure of welcoming our guests. Over the past years, we at Google have worked with numerous partners and clients on deploying and fostering the adoption of AI in a business context.
Everything Tracy talked about around responsible AI wouldn't be possible without our partners and clients, so that's why we've invited some of them to our panel today, to share their experience of working with us and with our clients.

So, welcoming Sims, who is a program manager at DeepMind. She has dedicated her career to the beneficial use of artificial intelligence, and believes it is the key to enabling the next generation of knowledge for humanity. Then we have Ulli Waltinger, who is head of the Machine Intelligence research group at Siemens. He has deployed numerous projects at Siemens on deep learning and artificial intelligence, and in 2017 founded the Siemens AI Lab in Munich. Welcome, and thank you for joining us. And last but not least, we have Andreas, who is a managing director at UnternehmerTUM and co-founder of the appliedAI Initiative in Munich, which fosters and enables the application and adoption of AI across industry, government, and the startup ecosystem. So thank you very much for joining this session.

I will maybe kick off with a simple question: in your organizations, what is the role of AI, how beneficial is it, and in the context of responsible AI, how are you deploying it? Ladies first, maybe?

All right, I'll go first. Hi, everyone, thank you for being here. DeepMind is a research organization. Our mission is to solve intelligence and then use that intelligence to make the world a better place. We have teams devoted to core research, science, engineering, ethics and society, and applied artificial intelligence, as well as an amazing operational structure to help all of those teams work efficiently together, and we also take part in many partnerships and collaborations across the industry. My team specifically works on the application of machine learning and AI to challenges in the energy sector that contribute to climate change. Right now we have two projects. One is working and operational in Google data centers, improving energy efficiency by about 30 percent. The other runs on Google-contracted wind farms, where we are seeing about a 20 percent improvement in the value of wind power, which we think is really important to making it more competitive with fossil fuels on energy grids. So the goal of my team is that our success translates into carbon reduction, which I think is really important for the use of AI.

Continuing with Siemens?

Yeah, hi, everybody. I work for one of the oldest German startups, I would say, which happens to have scaled up to 380,000 people around the world.
We obviously have a multitude of sub-companies and domains. We're in mobility, healthcare, and the energy sector, in the grid space and the energy management space, in marvelous application areas where digital technologies have a vital impact. With that, we're undergoing a huge transformation, as you may have heard throughout the day, and one aspect of that is seeing what the role of AI technology is in making a meaningful impact and enabling our transition through that journey. Our management decided, and we're very happy to see it, that AI is a so-called company core technology, and with that it's seen as the backbone of the transformation of Siemens. We're organized and funded from the top down, and my team focuses on applied research in AI, machine learning, and deep learning, together with partners and ecosystems, while at the same time making a difference in individual products, whether in mobility, computer vision, healthcare, or internal processes. With that, our team does applied research, contributes to the open source movement, and also tries to make a difference across a range of industrial domains.

And Andreas, you have been bridging between industry, working with numerous corporations, governments, and the startup environment. What role is AI playing in those corporations, and where do you see the biggest impact?

Yeah. What we see in German industry, but also in Europe and worldwide, is that there are two or three streams. One is that companies really need to think, or are thinking, about what AI will mean for them to differentiate themselves against the competition. That's really at the core of what they're doing, and it's mainly research-driven right now, so we're not talking about applications here but about industrial AI use cases and how they will make a change for their core business. Then we see AI applications in support functions, quite a bit in sales and sales support, in customer service with things like virtual assistants, but also predictive maintenance. These are areas where there are products out on the market, and companies can readily apply them internally, as a product or as a service from someone else, which is helpful for scaling. Whereas for the topics they do for themselves, it's really challenging for them to get to a global scale; that's mainly research on their core functions.

You mentioned challenges.

Exactly, it's challenging.

What are the challenges that you're particularly seeing in the context of responsible AI?

Maybe I'll just continue, since I was just speaking.
What we see, and why we started, is that AI is actually not a technology topic. When it comes to adoption of AI in society and in industry, it's a culture topic, a trust topic, a transformation topic, for really every one of us. This explainability topic is so important when it comes to adoption at scale in companies. There is so much time spent on getting people to really use these tools and to accept using them: liability issues, competence and capacity and capability issues. All these very human topics around the interaction between AI and humans are the most challenging part in companies, and that's what keeps them from really scaling it inside the company. It's very easy to have a small prototype, but when it comes to real adoption of the use cases, it's a trust topic and an acceptance topic for the employees.

Yeah, and you know, we've talked a lot and interacted a lot on this. We need to differentiate a bit between industrialization, which was always about gaining efficiency and productivity, and the consumer space, which in current times pushes hard on attention and predictable behavior, let's say personalization. The main idea of AI is turning input into an action in the real world, and these actions in the real world have failure modes. We know that, and we can measure them: accuracy, precision, recall. So if we can measure them, and we know there are failure modes, how do we make sure that these systems we deploy at large scale nowadays reflect the kinds of values that we represent and want to represent, as an ecosystem, as a company, as a digital persona? This is really tricky. It's not that it's a bad thing; it's genuinely tricky, because if you're in the domain, for some processes and applications it's clear, but for others it's not that clear. If you only see the input and the output and there's a black box in the middle, how do you explain the model? We know that we all have failure modes and blind spots, and so do our algorithms. And we know from the innovation side that diversity, in terms of people and domains, and being more inclusive in a broad sense, helps not only push innovation but also reduce bias in these kinds of systems. That is why it's so important to open a discourse on responsibility and on bias and fairness in a broad sense, not only on techniques, because AI plays such a vital part across the whole product lifecycle. It's not only about training models but about how to deploy, how to update, how to organize, how to measure impact, and that's a tricky, tricky question. That's why you have to be more inclusive and more cautious about the entire lifecycle of machine learning workflows.

I absolutely agree with your points on diversity and inclusion. I also think a really important aspect of responsibility is reliability. For AI, obviously, the recommendations and the output you get are only as good as the data you put in, and one of the things we've seen in the application of AI in the real world is that real-world data is quite messy. It takes a lot of time to clean that data and get it ready for training. One of the things we can do to address challenges like data quality and data quantity is actually doing this pre-work ahead of time, having conversations about standards. There are some folks in the UK who are doing excellent work on this in the energy sector specifically: the Energy Data Taskforce, for anyone who's in the energy sector and is interested. Their recent report in July was fantastic.
But I think it's a really good example, regardless of the focus on the energy industry or any vertical really, that this discourse needs to be happening more and more often. So that's how we're thinking about addressing those challenges.

Even going beyond that: Andreas, I know you have been engaging with policymakers on regulation, on how to make governance structures adaptable to the changes we're now encountering in responsible AI. Can you comment on that a bit?

Yeah. If you talk about responsible AI from a national or international perspective, you need to look at this whole topic a bit differently. On one side, there are areas where companies are intrinsically motivated to work on their own. The whole trust part, the transparency part, is something they want to work on. There are legal reasons for that: you need it for insurance, in banking it's auditability, so you need to be transparent about what you're doing. As we saw before, it's for testing purposes and for making systems more robust, but also for gaining the trust of your customers; ultimately you need to be transparent about and able to explain what you're doing. So that's something where, from my perspective, we may need some push from government or policymakers, but overall it's a topic where companies really push forward themselves.

And then we have topics like fairness, which we heard about before. Fairness is a very interesting topic because it's a cultural topic. What is fair? Is it fair to give a loan to someone who is most likely to pay it back? That would motivate us to have very individual solutions, where maybe each person gets what he or she deserves. But from a societal perspective, is that something we want, or should everyone get the same loan on the same conditions, to make the conditions fair for everyone? That's a cultural question, and there is no clear answer: very individualistic societies versus a society with a larger social or community sense. Giving direction there is a policy topic, because it is impossible for each company to decide on its own and to find the reasoning on its own. There need to be at least clear guidelines on which way you should go. The same goes for liability issues and the whole question of responsibility in a legal sense: there too, we need certain standards and structures to be given, in a way that we can work with. So we have these two aspects of responsible AI.
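The loan example in the fairness discussion above can be made concrete with a few lines of code. This toy sketch, with invented groups and repayment scores, shows why the question is cultural rather than purely technical: a score-based rule applies one identical threshold to every applicant, yet can still produce very different approval rates across groups.

```python
# All applicants and repayment scores below are invented for illustration.
applicants = [
    # (group, predicted_repayment_score)
    ("A", 0.9), ("A", 0.8), ("A", 0.7), ("A", 0.4),
    ("B", 0.9), ("B", 0.6), ("B", 0.5), ("B", 0.3),
]

def approval_rates(threshold):
    """Share of each group approved under one shared score threshold."""
    rates = {}
    for group in sorted({g for g, _ in applicants}):
        scores = [s for g, s in applicants if g == group]
        rates[group] = sum(s >= threshold for s in scores) / len(scores)
    return rates

# "Individually fair" rule: the identical threshold for every applicant...
print(approval_rates(0.65))  # prints: {'A': 0.75, 'B': 0.25}
# ...yet group A is approved three times as often as group B.
```

A group-fairness rule such as demographic parity would instead equalize these rates, typically via per-group thresholds, which is exactly the kind of trade-off the panel argues individual companies cannot settle on their own.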
Ultimately, both help us all to be better and to really use these types of systems, but we need both sides.

For policymakers it's really hard, because we're suddenly starting a discussion that we only had implicitly for the last 20, 30, 40, a hundred years. We had these questions of fairness, and they were personal decisions made by one person, by a judge, saying this person is more likely to commit more crimes and therefore it's more justified to arrest this person with a specific background or race or gender or whatever. Now this is data-driven, and that gives us more responsibility in doing it. Suddenly we face these discussions, which we should have had a couple of decades ago, but now we have to answer them with full force, because we have the power in these systems to give us recommendations, and we have to start answering these questions.

And that's exactly what our next panelists will touch upon. But I want to finish with this: you've talked about the past and the present. What about the future? What recommendations would you give, as panelists experienced in responsible AI, to the audience sitting here, from Siemens, from DeepMind, from the appliedAI perspective? And then we hand over to the next panelists.

So from our side, we're currently working on a framework called Siemens Industrial Responsible AI. It has mainly three pillars. One is, as you mentioned already, policy-making: what kinds of best practices and guidelines are there for using the capabilities of AI? We differentiate between three impact factors. One is the transparent world: profiling, computer vision technologies, sensing the world. The second is human augmentation: tools that help our customers as well as our employees improve on certain tasks and cope with the complexity of the world. And the third is autonomous aspects: how we ship more autonomy in our machines, in controllers, in robotic systems. With those capabilities come risks and responsibilities, and here we believe policy-making on a governmental level is sometimes a bit behind, so we want to focus on guidance and best practices for how to use these capabilities. The second pillar is technology. Also with partners, there's marvelous technology out there: federated learning, robust AI, explainable AI, the What-If tools you have seen. This is about what's right, and also about pushing how technology itself can make an impact. And the third pillar focuses on what we call co-creation. Trust is not necessarily about transparency; it's about interaction, and interaction means that we jointly create certain products and solutions, being aware of what kinds of capabilities we're introducing and what kinds of needs we're meeting. With scale of people and scale of assets comes scale of responsibility, and this needs to be reflected inside the organization.

And I think DeepMind has implemented similar things; we have been partnering on those as well.

Yeah, I think from our perspective, one of the most important things we've done, and what I would recommend anyone in this room does, is center the voices of those individuals who are impacted by the technologies that everyone is building. We've learned so much from engaging with the wider community on ethics and responsible AI, through things like citizens' juries and even surveys. That has been incredibly important in helping us understand how algorithmic technologies are shaping the lives of the people who use our technologies. So I would say: please center those voices. I think that's the best piece of advice I could give.

And Andreas, who has been deploying that knowledge to industry and to policymakers, can you comment on that?

Yeah, I couldn't agree more. I think if we accept that this is a topic that everyone here is responsible for, then it's about sharing this information, really exchanging and spreading best practices overall, because ultimately it helps us all if we find good standards and best practices here. It's not a topic for one single company; it's a topic for all of us, to create general trust in AI. And it can be ruined by single companies, by single individuals, and therefore it's so much more important to work together on these topics.
I think that's the main point from my side.

Excellent, thank you very much. And that's an excellent bridge to our next panelists, who are working on exactly those topics and applying them in real life. I'll ask Tracy to come back on stage. Thank you so much.

Thank you so much. I just want to call out something that Sims said, because I think it's so important to ensure that you're bringing in the voices of people who might be impacted by the technology. It's important to recognize that we don't necessarily have that information ourselves; even us on this panel, we're not a representative group, and I want to recognize that when you look at this group. So we make sure we engage in that conversation, recognize the limitations of our own knowledge, and invite those voices in. With that, I'd like to introduce Kirsty Everett, who is the compliance chief of staff and head of digital at HSBC, and we're so excited that you could join us today, so thank you. And, you know, I would love to start by asking you a bit about HSBC's approach here, in particular in terms of developing and accessing these new technologies like AI.

Great to be here; it's been a fantastic day. So look, HSBC, like a lot of companies, has been thinking about how we can apply AI to improve our business, across a really wide range of areas, from how we can improve our customer experience to how we can get better at fighting financial crime. And as we go on that journey, we keep coming back to that underlying principle: it really isn't about whether we can do something, whether it's legal. For us it's really about whether we should do it. Where does it stand from that ethical perspective? How does it stack up?
I, Think you know we're almost inundated, at the moment with think. Tanks and. Agencies. And governments. And peers releasing. Ideas. On frameworks, and what we should be thinking about and you, know we're taking, those all on board and and and thinking about what it means for us and, I echo. Some of the comments from earlier because, this is this. Isn't a one-off tickler. Boxercise, this. Is an entire, framework, from, you. Know we have principles, but what does that mean in, real life you, know how do we what. Are the controls, we put in place what, are the questions we, ask ourselves how. Do we. Check. That, we're making the right decisions, how do we train, our staff what. Governance, did we put around whenever, we're thinking about using these technologies what, approvals, do we want to get so. It's it's, very broad, and it's something that you can't just say oh ethics, done tick it's, in every single part of that journey and every, time you're thinking about using, big, data or AI thinking. About really what it means in every stage of the journey. And. I, think we, look at you know all the things we've already talked about today so you. Know really important one for us is. How. Do we protect our customers data, and. And. What does it mean from a data privacy perspective. How. Do we think about explain. Ability, and, transparency. You. Know are. We really sure that we understand, why we're doing it and and what the benefit is that, we're looking for and then, lastly around the, point on fairness and and and you know unintended. Bias how do we manage that so. It's. Not a short answer I'm afraid well, it's not a short problem. You. Know new technologies, new challenges, I think just, being. Open and understanding that, it's, very holistic and, it's not something. You can take and say done move on yeah yeah. I can't stress that enough I think you. Know from. Our own experience and what I've seen in other journeys, it's very. Desirable. 
to want to create that big list of "here are all the things that are fine, and here are all the things that are not fine," and at the end of the day, that list is not possible to create.
That can be a hard realization for an organization to have: that it's actually much more complex. But at the end of the day, that leads to much better outcomes. And it isn't something that you tack on at the end; that won't lead to success. It's something that you have to think about at the beginning and at every stage. So I'd love to hear a little bit more: as an organization like HSBC, with the global footprint that HSBC has, what kinds of issues around unfair bias in particular do you feel like you spend the most time thinking about, if there's anything you could tease out? Yeah. I mean, unfair or unintended bias: we've pretty much seen all of them, so we don't have a favorite. You know, for us, we are obviously a customer-driven organization, and we need that trust with our customers; we're accountable to them. So we do spend a lot of time thinking about this, and we are conscious that we may have bias in our existing datasets, which can then lead to us encoding bias into some of these solutions that we're looking at. So we really do challenge ourselves, and the point around thinking about the data that you're starting with, not just how you may bring it in through the journey, is key. But you're right: as a global organization, we have customers and staff all over the world, and they have different characteristics, different perspectives, different experiences. One of our main corporate values is diversity and inclusion, so anything that puts that at risk, anything that starts to suggest that we may be looking at bias in that space, is a big problem. It's a complex challenge, for sure. So, you know, obviously we think a lot about explainability; I know you do. It's top of mind for many, particularly
as AI systems start to be more commonly deployed. So can you talk a little bit about what that means for HSBC in particular, and how you would describe the needs of the bank with regard to explainable AI? Yeah. I mean, I think Andreas said it earlier: for financial institutions and banks, it's absolutely key. We're accountable to our customers, and any time we make a decision, or there's an outcome that impacts them, we have to be able to explain that. It's pretty black and white. Having said that, I think there's always a balance, right? As banks, we're used to thinking about things on a balance: we take risk, so what is the risk offset here? And I think when it comes to explainability, quite often we see there's an offset between performance and explainability, and that's really interesting for us, because then you're back into the ethics of where on that line we should be. And there are some situations; you know, I'll use a non-banking example, but if you had a life-threatening disease and I said to you, "well, I can give you an AI treatment, and I've got no idea why or how it works, but I guarantee it will save you," you would probably say, "okay, I'm going to take it." We don't deal with things quite like that, but there is that balance around how much we need to be able to explain. Do we need to be able to explain every single working of the model, or do we need to just be able to explain the key drivers for an outcome or a decision that we've made? You know, if we've turned you down for a bank account, you deserve the right to know why. So it's not just about what we think is appropriate from an explainability perspective; it's also what we think our customers will think is appropriate, and what our regulators will think. So it is very key for us. One of the things that, you know, I've been thinking about a lot in this: we
often talk about the creation of AI, from the beginning all the way out to deployment, as a team sport, because there are so many different roles and functions that need to participate in that as you move from creation and development all the way out through deployment, into an application or into a workflow, and into the impact of that out in the world. And I think explainability needs to also be thought of along that scale, because it is important to think about what kind of information is actually useful or helpful to the different stakeholders who are impacted by, or have to understand, a model. Because it isn't the same: what a data scientist needs in terms of understanding while they're building is not the same as what
a doctor would need in terms of understanding: does this eye scan show diabetic retinopathy, and how did the model make that decision, what factors led to that? That's very different information. Yeah, and that's the same for us. You know, our data scientists need one thing, as you say, but our frontline staff, who need to have a conversation with the customer and say "this is your credit decision," or "you've had a payment stopped for financial crime purposes," they need very clear, black-and-white English: this is exactly what has happened, and why. Yeah, so we have to get to that level. That's right, and that's about trust as well, because that trust has to be within your employees as well as with your customers. I mean, the last thing we can have is anyone at the bank saying to a customer, "yeah, I don't know, the computer said so." We need to own the message. Yeah. So, whether it's about explainability or anything else, actually: if you could direct our engineers to do anything, what would it be? What would you need? Maybe scope it to responsible AI, but just in general. Yeah, so I'll bring it back to a business context, not just me personally. I think what you were announcing earlier in terms of explainability is key, and I'm fascinated to get under the bonnet a little bit, because I think if you'd asked me that question before this had come out, really that would be it: how can we, and I don't want to end up with an AI pyramid here, but how can we use AI to help us explain AI? Yes, yes, that's how AutoML works: it's machine learning models that create machine learning models. Building it up. So explainability is key. And then one of the other points that we've touched on a little bit today is around monitoring. And I don't just mean after deployment, but almost through the entire spectrum
of development and deployment: how do we check, from an ethical perspective, that we're not bringing in unintended consequences or bias drift, that what we thought was fit for purpose when we went through that process isn't moving away from where we were? So tools that really start to help with that, continually watching how things are evolving and moving and changing over time. Good. Well, I'm glad we were able to help with the first one already; that's great, and I'll be excited to hear your feedback. So I want to spend the last part of our time talking about an experience that we had recently together. As I spoke about earlier, we have this process that we use to evaluate all of our work, and we recently had what I thought was a great experience of actually doing
that jointly with HSBC. That was the first time we've done that, and it was really exciting for us to be able to have that kind of conversation with a customer, which in many ways is a very vulnerable space to be in, because you are having a conversation where you're really asking yourself what the possible intended or unintended, and in particular the unintended, impacts of your work could be. Having that conversation with a customer can feel a little unnerving, but it was quite exciting. So I'd love to hear from you: what did you expect when you joined that conversation, and was the outcome different from what you expected? It was fascinating, actually. We were invited down to your office to talk about some of the work that we're partnering on, and how we all thought about it from an ethics and responsibility perspective, and I'm not sure I went in with any specific expectations. But firstly, it was really great for us, because we work with Google on a number of projects, innovating and trying to make life better for our customers, and we only really want to partner with people that have a similar kind of focus on responsibility and ethics in this space. So getting involved in your machinery and seeing a little bit more about how you work on it was really fascinating. I think if I had to take away one point, it would be that we had a number of different people who were all talking about the same project yet actually coming from such different places: you know, we had the bankers, and then we had you guys, but even within those teams we had different specialisms. And what I found was that the range of potential issues that we needed to think about just grew and grew and grew, more than I thought it would. That makes it sound bad, but it was the point mentioned earlier about having diversity
of thought in this space: getting everything out on the table and then being able to say, "right, now let's address it, let's think about which of these are serious." That was really invaluable, and there were lots of things that came out in that debate that I don't think either of us expected to see. So that was the one thing that I took away: you know, sitting in your own little world, you're not necessarily going to get all the answers, and engaging different people and getting more thoughts on the table was fascinating. That's great to hear. And truly, you know, we've been doing these for well over a year and a half, and I can't think of one of those conversations where we didn't encounter new things to think about, or have new ideas of how to address them, right there in the room, because of that group. And so for us, the experience of expanding that to have you there was really wonderful, so I hope we get to do it again. Yeah. And I thought one of our most interesting points of conversation was actually this:
We spend a lot of time talking about unfair bias and potential pitfalls, but actually we had a great conversation about a positive. Yep. Because we were saying, you know, it's not all doom and gloom with some of the things we're working on. We may actually be able to tackle unfair bias that exists at the moment and turn it around, and we were talking about that in the context of financial inclusion: saying that if we can build these systems right, so that we can really identify legitimate customers and legitimate customer needs, we might be able to offer products which are outside our risk tolerance at the moment, and actually that starts dealing with a really serious issue of financial exclusion. So when you start throwing the ideas around: we hadn't necessarily thought about that from an ethical perspective, we'd just seen it as a benefit, but when you start throwing the ideas around, we were like, you know, this is key as well. We mustn't always look for the cons; let's look at the pros as well. Yes. And I also think that's such an incredible outcome of these kinds of conversations: you start to realize that there are ways you can tackle these challenges that can provide incredible benefit. And the idea of being able to work with customers like HSBC, so that you can then really transform your industry, is such an exciting opportunity, and you know, it's one that we're just endlessly excited about. I know this is something that HSBC has been thinking about for quite some time, and we've had lots of conversations, but it was really exciting for us, and I hope we get to have lots more. So we are at the end of our time. I hope everybody has enjoyed this conversation; I certainly have. Thank you to all of our panelists for joining us today on this incredibly important set of topics, and I hope
everybody has a great evening, and we'll see you tomorrow. Thank you so much.