AI for Good

Good morning, and welcome, everybody. This is day two of the Tech and Society Conference. As many of you know, we are pioneering and trying to innovate on a new format, a hybrid format, both in person and remote. We hope that will open up attendance to people who aren't in the area locally, while still supporting a good-quality discussion here.

As for the goals of the conference: we're trying to highlight where technology can be used for good, and at the same time to think about the techlash issues, data privacy issues, antitrust issues, future-of-work issues, and how you navigate the balance between the two. We had a great kickoff yesterday with Juan Enriquez, and hopefully most of you were there: a lot of interesting thoughts about the life sciences, about the genome project, and then a much broader discussion about technology and society and how you create better opportunities, along with some good reminders from him about how to think about problems and be really thoughtful about collecting the context and the information before moving ahead.

Today's conference has three topics. AI for Good is the first, which couldn't be a more timely one given the explosion of data and the ability to analyze it. The second topic is global innovation, innovations from around the world, and a reminder that a lot of leapfrog innovation is happening outside the U.S., which is why looking at business models, products, services, and so on outside the U.S. matters so much. The third topic is green tech: how can you use technology to help with one of the biggest areas of concern in society today, climate change and sustainability?

If I were to give you advice, or make a request, as you listen to today's sessions, I'd ask two things. One is to get in the ring on the issues. What I mean by that is to hear people out on all sides, thinking about what technology really can do in areas like health care and transportation, and then think about what some of the challenges are. It's very easy to get absolutist, to get extreme on one side or the other, which starts to raise the question of whether you really have all of the facts. The second request is to put yourself in the role of a leader. This is one of the defining things about being at a business school and focusing on leadership: leaders have to take decisions, they have to get a group of people together, and they have to go execute. Just understanding an issue and taking a position on it isn't enough; it's one of the necessary parts, but it's not all of leadership. So think about what the leadership imperative would be as you listen to these sessions. I always find there are two questions that define leadership. What would you do? Not what's your point of view, but what would you actually go do if you were in the position of a leader. And what does success look like? What are you ultimately trying to achieve? I find those are reconciling questions for a lot of the disparate views that exist.

The first session up is AI for Good, and we couldn't have a better moderator than Heather Caruso. Heather is our Assistant Dean for Diversity, Equity and Inclusion.
She's a faculty member here, an adjunct assistant professor in the Management and Organizations area, and her research is focused very much on collaboration, on team dynamics, and on leadership. One of the frameworks that she has brought to the school, which I think is super helpful and which I use in my class, is the ECHO framework, which really gets people to think: are you really taking on information, are you reflective about it, are you humble in how you execute on things? It's been a huge help. She's also got a great academic background on top of everything: a Stanford undergrad and a PhD at Harvard. So Heather, let me turn it over to you, and welcome.

Thank you so much. Hi, everybody. It's a real pleasure to be with you, and I really want to thank you for attending this session of the Tech and Society Conference. I also want to take a moment to thank Terry Kramer and the whole Easton Center team for giving us this space to have an open and courageous conversation about these issues, and in particular about how we go about aligning tech-based innovation with the ultimate well-being of our society, and how we might sometimes get pulled off track.

It will come as no surprise to many in our audience that these are particularly hot, particularly focal issues in the domain of artificial intelligence, or AI. I think it's important to recognize that we are in a difficult position as everyday users and consumers of AI. It's often easier to start to use it, start to rely on it, and become enmeshed in the world it creates without really understanding the details of how it works, which puts everyday users in a difficult position when it comes to informed decision making about how well AI is actually aligning with their individual goals and with underlying societal goals. When we're browsing the web and getting product recommendations, when we're trying to buy a ticket from virtual customer service agents, when we're getting driving directions or fully automated driving assistance from our cars, it's difficult to know whether the reference information and the reasoning being used to guide us is actually aligned with our preferences, with our underlying individual and societal objectives. And that can be a challenge.

There's an added difficulty: in contrast to situations where we're dealing with guidance from real-world human beings, with AI systems we often cannot just turn to the system and ask it to tell us where it's coming from and why it's doing what it's doing, and that adds to some of the mistrust. It means that producers of AI have to do a lot to earn trust and build confidence among consumers. They have to uphold the responsibility to do good with the technology while making sure they're still pushing the boundaries, still exploring, still doing the work of innovation, which is fundamentally unpredictable: you're developing technologies whose capabilities and impacts can never be fully mapped out in advance. And we know that the firms doing this are going to be up against a bit of skepticism. According to 2020 Financial Times data, only about a third of consumers these days
are comfortable interacting with businesses that engage with them through AI. We know that fully 53 percent of consumers feel that AI systems will always be making decisions and providing guidance that bear the biases of their original creators. And we know that only about 12 percent of consumers in those data feel that AI systems can reliably distinguish good from bad. When those are the sorts of feelings your consumers have, to be a producer, a developer of AI is an enormous challenge and an enormous responsibility.

It's also notable, though, that there's quite a lot of enthusiasm for AI. If you look at everyday consumer behavior, we clearly enjoy the fruits of all this technology. We're using recommendation engines for everything from where to eat and what restaurants to go to, to what romantic partners to try out. We're enjoying the efficiencies we get from AI-driven search and navigation, and we're certainly appreciating the efficiency and effectiveness of AI-driven healthcare systems. So I think the imperative for leaders, for firms in this space, is to figure out how to maximize and sustain the advantages of artificial intelligence while making sure to identify and minimize any aspects of these systems that can do us harm. That's the context in which we set this conversation.

To help us reflect on these important issues and considerations, I could not be more delighted to have our special guest for today, Dr. John Kelly, a renowned leader in the tech space and a driver of innovation in almost any area you can think of when it comes to information technology. He recently retired as Executive Vice President of Cognitive Solutions and IBM Research after over four decades of leadership there. He is perhaps most widely known as the father of Watson, the famed AI-driven, I think, two-time Jeopardy! champion that IBM produced, but Dr. Kelly actually put IBM at the forefront of innovation in everything from semiconductors to supercomputers to many other technologies. He's done so well for IBM that they have been able to maintain their status as the U.S. patent leader for the last 28 years running. You can also see his impact in driving transformative collaborations that span the globe: he created a network of IBM labs that engaged 3,000 scientists and technical workers across, I believe, 12 labs in 10 countries, and more recently, in a really uniquely boundary-spanning collaboration, he brought IBM together with Microsoft and the Roman Catholic Church to become initial signatories to the Rome Call for AI Ethics. Unsurprisingly, Dr. Kelly has received numerous honors and awards, including three honorary doctorates to join his initial PhD in mechanical engineering from Rensselaer Polytechnic Institute; he also won that institution's Lifetime Achievement Award, and he's received the National Academy of Engineering's Arthur M. Bueche Award for driving excellence through corporate, government, and university collaboration. He's an excellent guest to help us think through these issues, and I want to welcome him. John, thank you for joining us.

Good morning, Heather. Great to be with you all. Thank you.

I want to get started, John, by thinking about your position,
having seen multiple transformative technologies rise and mature over the course of your career. In the context of that experience, I want to know whether you think about the ethical conversation that's happening now around artificial intelligence as distinct from the ethical conversations that typically arise, and have arisen, with new technologies. Or is there something meaningfully different about what you're hearing today?

Sure, sure. Well, Heather, first, my thanks to you and the whole team at UCLA, and to Chris Lowe, who was very helpful in reaching out and bringing me in. Thank you all for this opportunity. As you'll see, this is a topic that I'm very, very passionate about. So, to your question: there are many similarities between the ethical implications of AI and what I've seen in other technologies, but there's a big difference. When you think back, all new technologies raised ethical questions, going back to the steam engine, atomic energy, the internet. All of those were technologies advancing at exponential rates, and they caused people to really pause and think about the good and the bad uses of the technology, and about its ethical implications. What I think is different about AI is that it gets so much at who we are as humans, because it's coexisting with and augmenting our cognitive capabilities, and whenever you're doing that, I think we're into a different level of ethical issues; it relates to who we are as humans.

I'll never forget a couple of conversations and instances. You referenced Watson winning at Jeopardy!, and of course it beat two champions, but Ken Jennings was the all-time winner. I later saw Ken in a TED Talk where he sat in front of an audience and said, when I lost that match, I went through a process where I really didn't know who I was, because that game and my championship were who I am, and now a computer has taken that. Another conversation I found very interesting was with Garry Kasparov. Before we did Watson, Heather, we built a machine, 10, 15 years before that, which became the grand master of chess and beat Garry Kasparov. Years and years later, I had lunch with Garry and a couple of other people. He was obviously very upset when a machine, called Deep Blue at the time, beat him. But someone at the lunch asked him: Garry, would you rather have been beaten by a machine or a person? He paused, and then he got very animated and said, no, you don't understand. It doesn't matter whether it was a machine or a person. I never lost a chess match. I never lost. And now I lost. Those two things just strike me: with AI, we're getting so close to who we are as humans and how we identify ourselves, and I think it raises things to a whole new level of ethical requirements, Heather.

Yeah, I think that's an excellent point, and it drives at something that is probably part of every innovative push and effort, but maybe has particular significance in this domain, which is that when you're driving that innovation, you have to balance two imperatives. You have to balance the imperative to disrupt, to teach everybody what we didn't know before, to surprise everyone, to see what role these technologies can play in our lives, recognizing we can't always
foretell that. That's the one imperative, on that side. But then there's the imperative, obviously, to do no harm, and when we're talking about technologies that get so close to who we are as human beings, understanding the various kinds of harm that people can feel, the visible and the invisible, the internal, the emotional harm, those things get to be quite complex. So I wonder if you could reflect, over the course of your career, on how you have balanced those two different imperatives, and how you would recommend that leaders in this space think about balancing them.

Sure, sure. You know, it's the old saying: there's a ditch on both sides of the road. You can almost become paralyzed and say, well, I shouldn't innovate, I shouldn't invent this, because someone might take it and do something wrong with it. On the flip side, and this is always how I've thought about it, if it can be done, it will be done by someone, and I think it's incumbent upon us, then, to have the most ethical leaders, the most ethical people, doing it: to recognize the ethical implications, have open discussions, and set a set of principles to guide one's decisions in this area. I'll probably talk more about how you set those principles and how you think about them. But think about a technology like AI. It's advancing so fast in its raw capability. You mentioned lots of the applications; it is everywhere, and it gets smarter every time it's used, and it never forgets. With a technology like that, it's not as if you can sit down and write five rules that cover every case of AI. So what I have always found best is to drop back to a set of principles that we can use in guiding every decision we make, every day, both in inventing the technology and in using the technology.

It's a really interesting point that these principles can be used as a sort of beacon toward which we aim when we're developing the technologies. I want to ask you, in some sense, to think about where those principles come from. As you know, the EU recently unveiled a draft set of regulations, top-down principles in a sense, aimed at regulating commercial and governmental use of artificial intelligence across a variety of applications, from self-driving cars to hiring processes, and banning some things altogether, like automatic facial recognition in public spaces. Do you think of that sort of regulation as effective or feasible in terms of really being the driver of ethics in AI?

Well, here's how we approached it at IBM. In the early days of Watson and AI, we could see clearly the ethical implications of the technology, and at IBM we have a history: not only do we invent things, you mentioned all the patents, but we also believe it's part of our job to introduce them ethically. I'm actually sitting today in the conference room at IBM Research, the Watson Research Center, where I first saw Watson in action. Sitting in this chair, I remember a chill went up my spine, Heather, and I said, my God, this is going to change the world. And that was about a year and a half before the rest of the world saw Watson. So I started to think, and leaders in IBM started thinking: well, how are we going to deal with this?
And so we decided to go principles-based. By principles I mean, for instance, a few of the ones we derived. It takes data to train these AI engines, these AI computers, and the first principle we set was: your data is your data. We have no right to take it, use it without your permission, or resell it, because your data is now part of who you are. The second principle became: if we use AI in a product, we'll tell you that we're using it. It's not some hidden thing in the background that you're not aware of; we'll tell you if and when we're using it. And a third example: since AI is a machine that's trained by humans and data, another principle we established is that we'll tell you who trained our AI machine. For instance, if it's in healthcare, we'll tell you that Memorial Sloan Kettering doctors trained it, and not someplace else. A lot of these are around transparency.

As we advanced those principles, though, it became obvious to us a few years ago that while they were wonderful, and they were helping the 400,000 IBM employees and developers of applications think things through, we also needed regulation. But we were very specific: we needed precision regulation, because each use case of AI is dramatically different, and it's very hard to just blanket-regulate. So we became very vocal proponents of precision regulation around specific uses of facial recognition or voice recognition or other kinds of AI. We believe the best balance is principles-based, plus very precise regulation, versus blanket regulation, which might actually harm the advance of the technology.
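Kelly doesn't say how these transparency principles are operationalized in code. Purely as a minimal illustrative sketch, assuming nothing about IBM's actual systems (every field name here is hypothetical), the disclosures he lists could be captured in a small provenance record attached to a model:

```python
from dataclasses import dataclass, field

@dataclass
class ModelDisclosure:
    """Hypothetical provenance record reflecting the transparency
    principles described in the talk: disclose that AI is in use,
    who trained it, and whose data it was trained on, with consent."""
    uses_ai: bool                     # "we'll tell you that we're using it"
    trained_by: str                   # e.g. which institution's experts
    data_owner: str                   # "your data is your data"
    data_use_consented: bool          # no use or resale without permission
    training_datasets: list[str] = field(default_factory=list)

# Example disclosure for an imagined healthcare model:
disclosure = ModelDisclosure(
    uses_ai=True,
    trained_by="Memorial Sloan Kettering clinicians",
    data_owner="patient",
    data_use_consented=True,
    training_datasets=["oncology-notes-v3"],  # hypothetical dataset name
)
```

Publishing a record like this alongside a product is one lightweight way to make "we'll tell you" the default rather than something a user has to ask for.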
It's a wonderful way to strike the balance, and to make sure you're incorporating individual dignity and some recognition of the importance of external stakeholders being part of the conversation. That brings me to a question about where the principles for any given leader within a firm should come from. Whose principles do they use? If you're the EVP, or if you're a developer on the ground, whose principles are you referencing? And how do you ensure that those principles are interpreted consistently throughout your organization, so that you don't have different sets of principles being used by the different people who touch the technology?

Well, in the end, Heather, the principles, I think, are based on an individual's or a group's or a company's moral values. That's the fundamental base, and so you always want the people who set the principles to be the most ethical and moral, particularly when you're dealing with these bleeding-edge technologies that are going to change the world, and are changing the world. I will admit, though, it gets complex, because as soon as you start to talk about morals, you're also into cultures, and different cultures have different moral standards and values. A company like IBM is in literally every country in the world, in large numbers, so we think carefully about whether a principle will translate across cultures. But we also believe that while we might be different in the U.S. versus Europe versus China versus India, the company of IBM has a constant moral standard that the company is built on. So we try to make those principles and values so bedrock that they do translate across. And then we, as a group of leaders, get together. I can't tell you how many rounds of discussions we had on those principles before we reached a conclusion on them. I will admit some wanted to become very specific, but I reminded them that we're in the early innings of AI, so the principles have to be durable and last over time, because you never want to put out a principle and have to reel it back in later. So it's dynamic, but it's really bedrock.

Yeah, I like that notion: any individual leader needs to think about the extent to which the principles they're using really reflect the principles of the organization, and to be in conversation with the rest of the organization to make sure they are in fact in touch with that, and that the articulation of those values and morals is flexible enough to be understood well across different cultures while still being bedrock, still being firm. That makes me want to check in on an implication of this. To your point, when you're talking about morals and ethics, it's very difficult to say, okay, we said we were committed to this today, and then next year, not so much. But to the earlier part of our conversation, about always being at the forefront, always pushing into a space that can't be completely predicted: how do you think about finding out that you were wrong about something, as an individual developer or as a firm? What's the process for learning from experience and updating, perhaps, the way in which we serve our ethical principles?

So, while it's bedrock for us, we also learn as the technology develops. Because we do so much very advanced research, our high-beam headlights into where the technology is going to go, for at least the next decade or two, are usually pretty good, and that allows us to establish principles that are enduring. That said, we do learn. As an example, take the topic of facial recognition, which you mentioned. As more and more data became available on the biases that were being developed in those AI machines, we went into high gear. We changed our AI engines to remove as much of the bias as we possibly could, and then ultimately we said: look, we're not going to continue to sell products based on facial recognition, because as good as our principles are, it's almost impossible to completely remove bias. Remember, the machine learns based on the data, and whatever sample data you give it is what it learns, not unlike humans, by the way; that's how we learn. We just realized that in that particular situation, the risk-reward for society wasn't worth it, and we changed.

Oh, that's a great story, and I think it's not told enough: the way in which learning experiences in this space can sometimes teach you that you don't want to go any further in a domain, that you actually want to pivot and focus elsewhere. So thank you for sharing that. I'd like to give our audience a chance to hear a little about one of the more recent, really extraordinary collaborations that you have put together to help further drive innovation and to center ethics in the development of
artificial intelligence. I'm talking here about the Rome Call for AI Ethics, and the way you brought IBM and Microsoft together with the Roman Catholic Church to become the initial signatories of that document. Can you tell us a little about what it is, your involvement in it, and what led you to think that IBM should become one of these initial signatories?

Sure. It's sort of an amazing story, and I'll keep it brief. As we were wrestling with all of the moral and ethical implications of AI just a few years ago, amazingly, the Pope was thinking about the same thing. He turned to his Pontifical Academy, which you can think of as the Pope's think tank, and asked them to really think about and study the implications of artificial intelligence. As I understand it, he did not quite understand the technology, but he understood the potential, and interestingly enough, he was interested in making sure that it was used ethically but was available globally, because he saw it as a tremendous tool to advance humanity. It could also become another have-and-have-not divide in the world, and he was very concerned about a new sort of AI divide emerging. So he charged his Pontifical Academy; they came to the United States and visited us for a couple of days, as well as Microsoft, and through those discussions we realized that we were tackling and thinking about the same challenges and problems through different lenses, but ultimately we were worried about exactly the same things. So we and Microsoft were invited to work with the Pontifical Academy to develop a set of principles that the Catholic Church could also endorse, and that became the Rome Call for AI Ethics, which myself and Brad Smith from Microsoft went to Rome and signed, just before COVID completely broke out, by the way, and put us all back on the ground. It was an amazing effort. Who would have believed that IBM and Microsoft, much less the Catholic Church, would come together and work on something? I think that in itself tells you how important this topic is.

Yeah, it's a really extraordinary step forward, and it's funny, I think it was Microsoft's Brad Smith who said that people might think of IBM, Microsoft, and the Roman Catholic Church as strange bedfellows in an endeavor like this. That makes me wonder what you think of the process. On such a complex and multifaceted issue, with three such powerful and independent entities, each with very strong principles of their own, walking into that space: how do you collaborate on something like this? How do you come, in the end, to a document that you can all agree on, coming from different perspectives, especially with IBM and Microsoft being competitive in some spaces? How do you balance all of that?

Yeah, that's a great question. I'll tell you what we did and how it worked; it's a process I've used in other international partnerships. In the first couple of days of meetings, we didn't sit down and try to write the rules or the principles. We didn't say, well, here's 20 rules, let's vote on 10, and that's what we'll sign. We started with an understanding of each other's values and morals. As an example, the Pontifical Academy was as interested in
learning about how IBM thinks about the future of computing, like quantum computing, and what that is going to mean to society, as they were in artificial intelligence. So we went off into these other dimensions and issues, and took them into different areas, and what we did, basically, was establish a trust that we were all seeking the same thing. We all had the same fundamental value system, and therefore it was possible that we could agree on a set of principles. And that's exactly what happened. After that couple of days together, we all realized that we're pretty much cut from the same cloth from a values standpoint and an ethics standpoint, and then our teams could get to work on what, specifically, the principles in the Rome Call were going to be. We didn't jump the gun to try to get to an answer, which, in tech and for Type A's, we generally try to do. We sat back and, as humans, made sure we were on the same page before we got started.

I appreciate your underscoring that. It's something we try to bring into a lot of our classrooms, because we do have a lot of Type A personalities. A lot of students want to jump in and solve problems, and there's a tendency, when coming into a room with other people, to start by figuring out, well, what do you want to do and what do I want to do, and to jump right into some kind of negotiation of your positions on the issue. What you're describing sounds like a really different, and also just a richer and more robust, way to build an actual relationship with people: first, find out what motivates each other, what gets you into the room to begin with, what you care about in the broader scheme of things. Then, once you understand the extent to which you are aligned on those broader principles, you can come together to talk about specific things you might collaborate on.

Yeah. I think if we had tried to just jump to the answer, we would have gone to the whiteboard and said, well, here's a list of good uses and bad uses, do we all agree? And, for instance, we would have completely missed the Pope's concern about equity of use, which we had a long discussion about. He was very concerned that areas of South America would get access to it, that the poor would get access to it, that particularly when it reached applications like healthcare or education, it would reach the young as well as the elderly, and that it wouldn't cause a further divide in any dimension of humanity. That's a dimension we would have completely lost if we had just gone to the whiteboard and started listing good and bad.

That's a great point. And I wonder, as you think about the future of the Rome Call, what kinds of things do you think will draw other firms, other religious institutions, other governmental entities into that collaboration as you think about its expansion? What do you think is going to pull people in, and are there any barriers that might create bumps in the road?

Yeah, there's been tremendous interest in follow-up. We obviously ran head-on into the COVID thing, which slowed us down a bit, but we've had meetings with many of the other great religions of the world that are very interested in this, and many big corporations are studying it carefully, because
many corporations have been busy writing rules, and they realize that the technology, again, is changing so fast that they can't keep up writing rules. So they're going back to basics and looking at this as a sort of foundational document and set of thoughts. Our intent is to follow up with additional signatories, but it's also viewed as a living document. We're going to learn a lot, and I'm sure we'll be back together as a team, looking at what has transpired in the last couple of years and what we need to update in that document, in the Rome Call.

This seems like a great place to acknowledge the enormity of the problem that faces every one of these institutions, and to highlight the power of collaboration as something that can actually help everybody get a little more of a handle on it. Thinking about that, I want to help us transition to your particular career journey. I know that collaboration has been at the center of a lot of what you have done. Could you speak a little about the role of collaboration for you as a kind of leadership tool, alongside, obviously, all the elements of leadership that have to do with making sure your firm is competitive? How do you think about the role of collaboration as part of that?

Sure, another fantastic question. I've always felt, Heather, that as big as IBM is, and as much as we spend in R&D, and for all the patents we generate, we don't have a corner on the market of smart people and ideas, much less the means to bring them to market. So I've always used collaboration, even with competitors, as a way of advancing our technology and our business. We did it at big scale in semiconductors with other companies, with the Samsungs and Toshibas of the world, companies that in some cases competed with us. A second dimension of collaboration is open innovation. I've been a huge supporter of open source software, where the software is not developed strictly in one of our labs someplace in the world; it's the community. We contribute to the community, and we take the open source software and build products above it and below it. But the nugget, say Linux as an example, is an open source community, and we nurture that community, we put code into it, and we do get the benefits back. So another dimension of collaboration is these open communities, and we see them as a force multiplier for our own R&D, as well as just a fantastic way to bring new technology into the company.

Wonderful. I want to zoom out a little and look at your incredibly long and impressive career. It's so rare these days to find people who have the benefit of long tenure in an organization, and you have four decades at the forefront of innovation at this incredible, pioneering firm. I want to ask you to reflect on the balance you struck during that four-decade tenure: making incredible, transformative change on the technological side, we talked about the semiconductors and the supercomputers and AI, but also driving incredible, transformative change on the business side of the house, in terms of partnerships, intellectual
property, security, and privacy. Can you tell our audience a little about how you transitioned your STEM background and your technical work into roles that unite technology and business functions?

Sure. When I joined IBM, I literally, Heather, defended my PhD thesis on a Friday and started at IBM on Monday, back in 1980. When I joined, to be honest and transparent, all I wanted to do was technology. Just put John in an R&D lab and I would be happy. But I quickly learned, and this is part of why I came to industry versus academia at the time, that to get things done and to have an impact at any kind of scale, I had to pull teams together and I had to understand the business side. That is what caused me to finally understand what I was best at, but also what was most effective for the company. I always describe myself as being on the diagonal. What I mean by that is, if you view business along one axis and technical along the other, I'm on the diagonal between them. I've wandered off the diagonal at different points in my career, pure business, pure technology, but by and large, for four decades, I stayed close to that diagonal, and I'm just very fortunate that it's what I enjoy most. If I'm doing just technical or just business for any extended period, I'm not very happy. And it turns out, I have found at a company like IBM, and at many of the other tech firms, that not only am I happier there, but you can get a lot more done and have a lot more impact on the world. I mean, think about it: if we hadn't brought AI to market, we wouldn't have changed the world. We're now bringing quantum computing to market. We could have left it in our labs and advanced qubits and things, but so what? We've got to bring it out to the world, and that's where business comes in.

Yeah, that's a great point, and it brings to mind the stereotype of those who have great technological skill, a great focus in this area: the stereotype of lone wolves who want to get all of their work done independently, who want to build their career success on the basis of their individual brilliance. I think the existence of that stereotype can often overshadow the importance of the ability to work within teams, to pull together teams, to build relationships that help maximize the value of the individual technical skills people have. I want to ask you to reflect on those kinds of relationships, maybe in particular mentorship relationships, which is something we talk a lot about here as important for our students to cultivate. Can you tell us about the role of mentorship in driving your own career success?

Sure. Segueing from the last discussion, I always believed, Heather, that it wasn't about just inventing a technology or leaving a piece of technology out there. As a leader, one of your most important legacies, if not the most important, is the team you leave, the team you build, and what that team then goes on to do. I'm very proud, whether it's the AI team or the semiconductor teams or the supercomputing teams, that those teams go on and on to do bigger and better things, and that, I think, is a contribution that goes way beyond the thing itself. On mentorship,
I've been extremely fortunate and blessed. I've had mentors both inside IBM and outside, but probably not traditional mentors, not mentors you sit down with for a chat who coach you: well, John, you've got to do more of this or more of that, or try that kind of job. I had very little of that. What I did have is a set of mentors and a network that would challenge me and give me experiences, throw me into something that was much bigger and much more difficult than I thought I could possibly handle. It was sort of sink or swim: throw him into the deep end of the pool and see if he comes out. But what you learn from that is incredible. You really learn how to get things done, how to work with people, and you become more confident in yourself: okay, throw me into a deeper pool and I'll find a way through it. The best mentors I've had are the ones who don't sit down with me one hour a month or whatever; they're always around, and they're like, okay, John, go do this, see if you can take that on. It could be technology, it could be business, it could be a people issue, a leadership issue, a government issue: just go try this one. And through those experiences you build up this incredible, well-rounded capability.

It's really interesting. I think a lot of our students tend to think of mentorship, initially, as something that's meant to make your life easier: you go find somebody who's done it before, they tell you how to do it, and then you're on easy street. What you're saying is, in some sense, the opposite: find people who are going to make your life harder.

Absolutely. For the students: if you enjoy sitting with people who don't challenge you, that's okay, but find people who will throw you into the deep end of the pool. The other thing I would say is, don't be afraid to take on really tough things. I'll admit that at first I was afraid to do it: could I really do it? And then I started to think, well, gee, people will start to associate me with, okay, there's a problem, John's always around, so which is the chicken and which is the egg? But I found that the biggest challenges, whether in technology or business, attract the best people and the best minds, so I always sought to go where the problems were, because I knew I'd be rubbing shoulders with brilliant people, and maybe some of that would sink in. So run to the problems, and if you can't run there, have a mentor who will throw you at the problems, and amazing things will happen.

I love that, I love that. I'm going to see if we can use the last little bit of time to make sure we get our audience questions in. Give me a second, I'm just going to try to pull those up.

Our first question: Dr. Kelly mentioned that IBM has a solid set of guiding principles. How do we ensure that companies without principles are held to the same standards?

Well, that's an area where precision regulation has to come in, and that's why we realized we had to cross that bridge: not everybody and every company will have the same set of ethics, so we do need precision regulation to protect against the corner cases and the most harmful cases to society. A second thing that can be done, and this is to help people abide by the principles, is that you can actually build into the technology things that will monitor and compare behavior to your principles. As an example, we built a set of software that all of our developers use in IBM, which is constantly monitoring their code to see whether the code is structured in a way that develops a bias, or whether the data they use to train it is developing a bias. Think of it as a sort of bias meter. As you're writing code, it's going: okay, you're getting much more biased, you'd better come back. What we find is that most people are not conscious that they're doing it, but if you show them what they're doing, most people will mid-course correct. So I think precision regulation, but also giving people the tools, because by far most people are ethical, and if they realize what they're doing, they'll react accordingly.
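The talk doesn't reveal how IBM's "bias meter" works internally. As a minimal sketch of the general idea, assuming nothing about IBM's actual tooling, one standard building block is a disparate-impact check over training data or model outputs; the 0.8 threshold below is the conventional "four-fifths rule," and all names and data here are illustrative:

```python
from collections import Counter

def disparate_impact(outcomes, groups, positive=1):
    """Ratio of positive-outcome rates between the least- and
    most-favored groups. Values below 0.8 are commonly flagged.
    A toy stand-in for continuous bias monitoring, not IBM's tool."""
    totals, positives = Counter(), Counter()
    for y, g in zip(outcomes, groups):
        totals[g] += 1
        positives[g] += (y == positive)
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

# Toy example: loan approvals (1 = approved) across two groups.
score = disparate_impact(
    outcomes=[1, 1, 0, 1, 0, 0, 1, 0],
    groups=["a", "a", "a", "a", "b", "b", "b", "b"],
)
if score < 0.8:
    print(f"bias meter warning: disparate impact = {score:.2f}")
```

IBM's open-source AI Fairness 360 toolkit implements many metrics in this family; a real monitor would run checks like this continuously as the code and training data change, producing exactly the kind of early warning signal discussed next.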
Yeah, and it highlights the opportunity to put yourself in an environment where you're getting early warning signals about the alignment of your principles and your behaviors, rather than only being subject to something like regulation from external entities.

That's right. Another example: one of our principles is that we don't leave holes in our code, what are called back doors in software, that somebody, or some government or whatever, can come in through. That's a fundamental principle, and so our people who write our software and our code are very conscious not to be sloppy, not to leave a door slightly open, if you will, or a window slightly open; to take the time to make sure every door and every window is closed and locked, so that the code is protected from people who might want to do harm to it.

That's excellent. There's a question about whether or not you see an arms race in artificial intelligence, and how that might be influencing the moral foundations of AI.

Well, unfortunately, like most technologies, these are all multiple-use technologies, and so there is a bit of an arms race, both from an economic standpoint and a military one. Most of the major military powers of the world, including the U.S., have openly talked about leveraging AI in defense systems and offensive systems, so in that sense it is a bit of an arms race. And this is where, again, I think we need, like we did with the internet and cybersecurity, and we still need it there, more international discussion around these topics: what is acceptable use of the technology? I'll give you a very specific example. One of our principles is that in potentially life-threatening conditions, we will always make sure there's a human in the loop. We will not allow an IBM AI system to take data, use AI, and then cause something to happen without having a human in that loop, either on the intake or in the decision process. We think that's a really fundamental, important principle in an arms race, if you will. So again, I think if we get back to the principles and have more international discussion, that would be wonderful. And I go back to the Rome Call: if Microsoft, IBM, and the Catholic Church can come together, then I think anyone can come together on the right principles.
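Again as a hedged sketch rather than anything IBM has published, the human-in-the-loop principle Kelly describes reduces to a gate between a model's output and any consequential action; every callable below is a hypothetical placeholder:

```python
def act_with_human_in_loop(model, inputs, is_high_stakes,
                           request_human_review, execute):
    """Never let the model trigger a consequential action on its own:
    high-stakes decisions are routed to a human reviewer first."""
    decision = model(inputs)
    if is_high_stakes(decision):
        # Block until a person reviews; the machine never acts alone here.
        if not request_human_review(inputs, decision):
            return None  # the human overrode the machine
    return execute(decision)

# Toy usage: a human must approve anything the model labels "critical".
result = act_with_human_in_loop(
    model=lambda x: "critical" if x > 0.9 else "routine",
    inputs=0.95,
    is_high_stakes=lambda d: d == "critical",
    request_human_review=lambda i, d: input(f"approve '{d}'? [y/n] ") == "y",
    execute=lambda d: f"executed {d}",
)
```

The design point is that the gate sits outside the model: whatever the AI concludes, nothing consequential happens until a person has been in the loop on either the intake or the decision.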
That's certainly encouraging. I actually want to jump to a question that helps us dig into how humans can be in the loop, or how humans and AI can complement each other and drive one another to better outcomes. The question is: are there any advances in using AI to help humans learn differently, so that humans can compete with AI, at least in decision making?

Yes. So, I would strike the last part of that question, which is "to compete with." I can't emphasize enough that this is human plus machine. There have been countless studies in all sorts of domains, Heather, that basically, in a challenge, pit a human plus a machine against either a machine or a human, and in every case, in every field, human plus machine always wins; it can always beat one or the other. That has stood true since the early days of Watson, to the point now where Garry Kasparov, when he plays chess, as an example, wants his matches to include a machine on his side. I wouldn't want to play chess against Garry plus a machine; that would be an impossible thing to win. So think of it as human plus machine, as opposed to how we can cause one to beat the other. And I have to admit that I don't know if it was right for me to use Deep Blue or Watson in demonstrations of human versus machine, because in a sense it drove that kind of thinking, which leads to this kind of question. If I had it to do over again, I think I would do a human plus a machine versus a machine, and versus a human, to show that power. And I guess I would end this particular question with this: that puts so much burden on the human-machine interface, and on optimizing it. We now have so much work going into how humans best interact with machines, and how machines read and interact with us. The natural language processing of these machines is actually better than humans' now, but how does the machine understand your tone, your position, your face, your reaction? How does it, as an engineer would say, impedance-match with the human? That whole field is just open for great research.

I really love that point, and I'll finish up there, because I think that notion of addition, of human plus machine, the idea that we're talking about things that come together for the common good to make everything better on all sides, has come up multiple times throughout our conversation. It helps to explain, I think, the principles-based approach, and the principles you were able to come together around when you joined with Microsoft and the Roman Catholic Church: getting down to that root. What brings us all here? What is the common benefit for humanity that we're all striving toward? If we can get there, then I think the specific differences we might have, or the discussions we need to work through in order to drive AI and technology forward, become much more manageable and much more fruitful. So with that, I want to give you one last chance: is there anything you want to add, to share with our community, any upgrades on what I've said?

Well, I think I would end with one thought, which is that these machines are just a reflection of who we are as humans. I was once asked: John, does Watson have a soul, will Watson choose a religion, etc., etc. And I
reminded the person that this machine was built by humans, it's trained by humans, it's programmed by humans, it's corrected by humans. So whatever set of ethics it develops, or whatever religion, is totally up to us, and we need to be very conscious of what we're doing with these machines, because in a sense, when we look at these AI machines, we're looking in the mirror. We're really looking in the mirror.

Thank you so much, John Kelly, for joining us. It's been an incredible pleasure to spend this time with you. Thank you to our audience, and thank you to Terry and the Easton Center for putting this on. Have a great day.

Thank you, Heather. Thank you all. Bye-bye.

Excellent. Let me just add my own thanks to John and to Heather. What a great way to start. I want to share a couple of my "so whats," and I don't want to wait until the very end, because I think these are cumulative in nature and matter a lot. I'm going to start with when John talked about personalization and what created problems: people feel like who they are is being exposed, maybe not in a good way. Put that aside for a second and think about what the technology does: it allows for personalization. Think about recommendation engines: in areas like media, they try to get a better understanding of what you like in order to make relevant recommendations, and the same goes for health care. So there's a dichotomy: personalization is a core benefit of AI, yet it's also something that can make people feel not so good, so you'd better design it well to get the real benefit. If personalization were a nothing issue, you could forget about all of it, but it's one of the big benefits.

The second comment John made that I thought was a very important call to action: if you don't develop the innovation, somebody else is going to do it. To think that the answer is to stop all your development, to not develop technology-based products and services, is not a good answer, because somebody else is probably going to develop it, and society and companies will face the same issue anyway.

The third thing is: what are the imperatives of leaders here? I always find this interesting, because it gets us past what could be a theoretical discussion. John talked about a variety of things IBM uses as tools to make sure the technology is wisely managed: he talked about transparency principles, he talked about precision regulation, and he talked about bias meters. All his examples are heavily oriented around what you as a leader and you as a company own, as opposed to what I call punting, saying "bring regulation on," when it's not clear what exactly somebody means by that. So I thought those points were excellent.

A couple of last things. This idea that human plus machine is better than human alone or machine alone: I have to admit, when I cover this in my class, we talk about the growing capabilities of machines, and that with lots of data they will eclipse humans at some point. John is making a very important point: when you're talking about pushback and concern, even at the basic level of adoption of AI, if you're pitting the machine against a human, you're going to get real pushback on that opportunity. So think about human plus machine as where the real benefits are.

His final comment was, again, a real commentary about individuals and society: the machines
are just a reflection of who we are. That can be either a good thing or a bad thing, and I think what he's saying is: be a good person, develop the right intentions, and you'll get good products. Don't do that, and you're going to have a big problem on your hands. I think he's indirectly calling out that there have been some companies that have not managed this stuff very well; shame on them.

Let me just thank Heather again for eliciting a great conversation that is truly interdisciplinary, a living example of transcending boundaries as you think about business and leadership. A huge thank you, and well done.
