How to Lead Enterprise AI, with Harvard Business School (CXOTalk Episode #803)


Today on Episode #803 of CXOTalk, we're discussing how to lead and manage enterprise AI projects. We're speaking with Iavor Bojinov from Harvard Business School, and my esteemed guest co-host is QuHarrison Terry.

My research is on AI strategy and operations. What that basically means is I work with organizations to help them overcome the operational and methodological challenges they face in implementing AI across the whole organization. So, pretty much everything we're going to be talking about today.

Iavor, when we talk about an AI project, what do we mean, and how is this any different from other kinds of technology projects?

AI projects fall into two buckets. One is internal-facing and the other is external-facing. Internal-facing projects are ones designed to help employees do their jobs better. For example, this could be lead recommendations.

It could be supply chain optimization or really anything that a company's employees interact with.

External-facing projects are AI projects deployed where the end user is actually the company's customer. This is like Netflix's recommendation engine, ChatGPT, and pretty much everything you see around you that's leveraging AI. That's, at a high level, what an AI project is.

There was a second part to your question: what's different about AI projects? Why is this not your traditional IT project that we've been dealing with for 20 or 30+ years? Well, there's one big fundamental difference, and that is that AI is random. What that means is that, Michael, if you open up your ChatGPT and ask it a question, and I open up my ChatGPT and ask the same question, and Qu opens up his ChatGPT and asks the same question, we're going to get three different answers.

Even though that seems like a small change, it has major implications for the whole project. This inherent randomness makes things much, much harder to deal with.
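A minimal sketch of what that randomness looks like in practice, assuming the OpenAI Python SDK (v1+); the model name and prompt are illustrative, and the only point is that sampling the same prompt twice with a nonzero temperature usually returns different completions:

```python
# Minimal sketch: the same prompt, sampled twice, usually yields different text.
# Assumes the OpenAI Python SDK (v1+) and an API key in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",          # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,              # nonzero temperature -> sampled, non-deterministic output
    )
    return response.choices[0].message.content

question = "Suggest a name for an internal lead-recommendation tool."
print(ask(question))
print(ask(question))  # very likely a different answer to the same question
```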

AI today is very reminiscent of cloud computing around 2010. Which shift should I prioritize? It seems like if I'm still implementing my cloud strategy or that migration, and then AI pops up and I've got AI projects to maintain, one of the things I'm wondering is: do you run them on-prem, where the data is really good, or should we finish the transition to the cloud and then begin with AI (because data is such an integral part of it)?

This is really a question about the overall AI strategy that companies have. I think what's become standard is that you need the cloud infrastructure in order to leverage AI because, to use AI, you need data.

What companies are doing now is transitioning to the cloud. That's where all of their data is going to be stored. I think of this as part of the big digital transformation that many companies are undergoing. One of the motivations for that, actually, is to be able to develop and deploy AI, because AI is the thing that removes the human bottleneck, which leads to efficiencies and really allows you to optimize your operations. But at

the same time, in the future, it's going  to enable completely new business models.  This is the big thing that is, I think, quite  different from cloud. Cloud is a tool. It allows   you to really sort of scale your operations.  And of course, if you're a cloud provider,   you have your new business model. But for  most companies, it's not really redesigning  

their whole value proposition, whereas that's what AI can essentially do for you. I see it as: cloud is necessary and it's part of the digital transformation; that's going to become the foundation you need to have. But AI is the thing that's really going to transform your organization.

This comes right on the heels of Nvidia's recent earnings call, which I'm sure you're familiar with: $13.5 billion in revenue, $6 billion of that in profit, largely driven by the cloud computing providers (AWS, Azure, GCP) and their shift from general computing to accelerated computing, where they can supply their customers with AI.

You bring up a good point. As that shift happens, what's the mindset I need to have and maintain as an enterprise leader? This isn't an instance where we go to the cloud and we're just moving one file that was traditionally stored offline to online. If I get this right, I literally can have a 24% increase in performance at my company or organization almost instantly. That's absurd, and we have records of that already.

It's really easy to look at this at a high level and say, "I need to have a successful AI strategy," but it all begins with AI projects. What I would advise leaders is, of course, have your high-level strategy: you're going to improve your operations, you're going to redesign your business model. That's going to come down the line.

But what you need to do right now is ensure that every single one of your AI projects succeeds from inception. It gets built successfully. It gets evaluated. It's shown to

add real value. Then it gets adopted. Then it moves into the steady state of management.

The big takeaway is to have that big strategy but really focus on each and every single project. This is one of the things we wanted to talk about today. A lot of these AI projects fail because it's not as simple as just taking something that's offline and putting it online. We can do the cloud transition. It's really hard. It's really costly. But when it comes to AI projects, most of them tend to fail. What I would encourage leaders to do is focus on each project and ensure they can implement it successfully. That would be my big difference here.

What you're just describing, the issue of AI projects and succeeding with these projects, there is an inherent marketing problem, which is that everybody loves talking about AI and the broad strategy, and as Qu was saying, we're going to be in the cloud. Now you're kind of

raining on our parade because you're saying, "Well, we need to focus on this as a project." We're strategists, and that's not what we want to do.

Here's the thing. Around 80% of AI projects fail. That's a shocking number. If you compare this to big IT projects, the failure rate there was somewhere in the 40% range, so roughly twice as many AI initiatives fail.

This is extremely costly because, if you think about it, if you want to deploy AI, you need to invest in the data. It needs to be on the cloud. That's not cheap. You need to hire the right team; AI engineers are very expensive. You need to have the right computation. We were speaking about Nvidia. Those chips are not cheap, right? If you want to do large-scale computing, it's extremely expensive.

If you have a failure rate where the vast majority of the AI projects you're working on are not going to be successful, they're not going to pay off, well, not only do you lose the money. You lose the momentum. You lose the organizational trust. Then

companies start to doubt whether they can even do this. Of course, the strategy is super important and you want to think about that big picture. But if you can't deliver on the project, you're not going to get anywhere.

Please subscribe to our YouTube channel. Hit the subscribe button on the CXOTalk website.

I'm hearing two distinct differences between AI projects and traditional IT projects. Number one is the uncertain nature of the results that AI produces. And number two is that the infrastructure requires tremendous and specialized

compute.

Yes.

With IT projects, if you're running business process software, you need fast computers and a database. But with AI, you need GPUs and all of that data infrastructure at a very high level. Is

that an accurate way of looking at it, Iavor? There's one extra piece of this puzzle that's   quite different between IT and AI projects,  and that's really how you drive adoption.   This becomes really challenging because people  don't really understand AI. They don't understand   the uncertainty. When it comes to getting people  to use it, that can be another big failure point.  Of course, we've learned a lot from the  IT industry in terms of how you do change   management, and that's all really, really useful  information. But it still doesn't tackle how you  

build trust between a human and an AI. That's a whole new area of research that's emerging right now. In addition to those two, I think that's the third big difference.

You also have this knowledge that 80% of AI projects fail.

Yes.

Eighty percent is a large number,

and yet we're seeing every organization in various industries (whether it be healthcare, media and entertainment, logistics, you name it) throwing AI at its problems, and it's being looked at as the savior. But with such a large number – 80% is large – why should I be running towards this AI revolution or transformation?

Absolutely. I mean, it's that 20% that's adding tremendous value. That 20% is completely transformative because it completely removes the human bottleneck, so it allows you to have scale that's completely unprecedented. If you look at companies like Ant Financial in China, they have more customers than any of the

big banks in the U.S. and Europe, and they have  a fraction of the employees. Yet they're still   able to serve those customers effectively,  and they do that because of algorithms.  Even though you have this really high  failure rate... Maybe I should just  

quantify this a little bit. I'm speaking of failure in really broad terms here. Of course, there's the failure where you don't have the right data and you can't even build the product. But I'm also including projects which just fail to deliver on the promised value.

Maybe you built it, you deployed it, and only 10% of your employees ended up using this product. It's not really adding that much value to the whole organization. Or you really overestimated how useful the product would be, and you don't really get the ROI on it. That goes into that 80% number. That's why you are seeing it.

One thing I've noticed is a lot of companies are happy to talk about their AI initiatives, and very few of them are able to really quantify it and say, "This is how much value it actually added." A lot of them are just like, "Hey, we have this. Trust us; it adds value." But if you dig deeper, they can't give you a specific number. That's, yeah, the 80%.

This intersection of value and failure is quite interesting to me. In order to accomplish the value that QuHarrison was referencing earlier, this real leap of value, not just incremental, it seems to me you need to have a new infrastructure, as we were just discussing. Okay, you need to have the right infrastructure. Here's the hard part: you need to have a new culture.

Yeah. Most organizations are   not optimized for great leaps in value. We're  going to innovate at a rapid clip, and we're   going to change everything we're going to do. And  we're going to disrupt what we're doing. Right?  Yeah. Organizations are focused on  

process. You, as the operations expert, know that more than anybody else. How do we handle that?

The first thing is you have to think really hard about which projects you're going after. That means you have to think about both the feasibility and the impact of each project.

This is where you need leaders who are versed in both the technical know-how around AI and data science and the business know-how.

Let me try to make this a little more concrete. If the leader heading your AI organization is really good at the technical aspects, they're going to focus on projects which are very feasible, that can be done, but they might not be impactful because they don't really understand the nature of the business. They don't understand the processes the company has

in place and how they're going to plug into that. On the flip side, if you have a leader that's only   versed in the domain knowledge, they understand  the business through and through, they're going   to identify those really high impact projects,  but they're just not going to be feasible.   That will lead to a failure because they'll pick  the best project, they'll start working on it, and   then six months down the line, they will  have nothing to show because they didn't   understand that maybe this isn't even an AI  project. Maybe they don't have the right data. 

I think the culture piece begins with the leadership. You need to have a leader who deeply understands both the domain knowledge and the technical side. This is why I'm starting to see a lot more companies creating the role of chief AI officer: that's a person who bridges the gap, and they can be a big part of transforming the organization's culture.

Then there's one other piece I think often gets forgotten, and I want to call it out here, which is the processes around developing AI. Right now, when you go to most organizations,

their AI development process very much looks like the pre-industrial age. You have these amazing engineers and experts running around doing everything from data collection and data storage to building the thing to even delivering it to you. That process is extremely inefficient, extremely prone to mistakes, and just not the right way to develop algorithms.

What you're starting to see now is that a few of the tech leaders, and a few of the companies that have undergone a digital transformation, have started to build what my colleagues Marco Iansiti and Karim Lakhani (who you had as guests here a little while ago) call the AI Factory. This is basically a representation of the company's operating model where data is at the heart of it, which makes it really easy to develop and scale AI for every use case.

Just to summarize, you have the culture piece, but then you also have the actual processes for how you're going to build that AI. That is absolutely critical.

You made a lot of great points. Let's say I'm at the enterprise. I understand that culture eats strategy for breakfast. I've still got to deliver results quarter after quarter.

Yes.

I've spent a lot of time and resources on building my culture. We just got out of a pandemic, and we're trying to return to profits. Then this thing, AI, comes into the fray just eight months ago. Largely, the things that people are excited about in AI have existed for a while. But in the last eight months, things have really taken a turn.

Yeah.

With that, I'm looking at it. If I'm looking at the generative AI side, a lot of it is party tricks to me, as an organization. I'm seeing it. I'm asking my team, "Can it do this?" To your point on the 80% failure rate, it's just not happening. It's not clicking. I, as an executive, don't want to bring that into the boardroom just yet. I know it's a priority,

but what are some of the areas for us to really understand it? To be quite frank, in this moment of AI and generative AI, it seems like it's a startup game. It doesn't feel like it's an enterprise game, or we haven't seen that wave of enterprises really catching their stride with AI. Again, it's very early, but I'm curious about your take on that.

I think this is a matter for the board. I was recently at a board of directors summit which was all about what the board strategy should be when it comes to AI and generative AI. It's very much top of mind for most board members right now because it really is mainstream.

When ChatGPT came out, it became the  fastest-growing app of all time. It is   something you are seeing. It's something you  can touch. It's something you can play with.  What I've been saying to leaders is you have  to be curious; you have to interact with this   technology, and you have to start experimenting. I agree with you. I think right now we are at the   beginning of it. We haven't really figured out  how we're going to use this technology and where  

it's going to be transformative. But if you're  not experimenting, you're going to fall behind.  It's similar with the Internet. If  you think of the companies who started   to experiment in the early days with  the Internet, they were so far ahead. 

Sure, it wasn't delivering value for the first few years. But they were ready when that technology matured. They could actually bring it on board. They knew about it. It was integrated into different parts of the organization. They weren't playing catch-up.

My advice is this is a matter for the board to be discussing, and it's something the CFO needs to think hard about, like, "How much of the budget can I put on this?"

If I were implementing generative AI in its current state, you can see some really large, transformative efforts occurring on the marketing side.

Yeah. All the categories that are marketed, it's very   easy to come up with a plan for that. I don't know  every single company, but one of the strategies   that I would recommend would be to take... You  have to build that AI team, as you mentioned. 

Yes.

But on that AI team, the critical component is the data, like the data scientists or the data engineers or someone in a data role. Would it make sense for you to just have a data person go to each department, look at where the congruence is, and start to say, "Here are some of the experiments"? Or do you think each team should run its own experiments? That's kind of where you're going to start to build the culture and the strategy that you would take to the board, in my opinion.

It actually comes down to what Michael was saying earlier around the culture piece.

Essentially, it comes down to how data-driven your culture is. If you have an extremely data-driven culture, then you can go for this embedded model; look at places like Amazon. Meta recently did a major restructuring where they basically moved to this embedded model where all their data scientists and AI experts actually sit within each of the business units. They go through and find those problems, and then you can create a task force across the different business units to find those applications of generative AI.

But if you don't have a culture that's very data-driven, that understands AI, having a centralized team can be really helpful because they can work with each other. You have enough people who can go from problem to problem.

The one thing I would add to that, again coming back to what I was saying earlier, is that you need a business leader who understands the data science, the AI, and the business problem, so they can help do that translation and really find those opportunities.

I would encourage every team right now. This is actually something I've been hearing from many of the companies I've been speaking with. They're all trying to find those experiments that they can run with generative AI. Most of them tend to be internal-facing,

really trying to help their employees do a better job. Because of hallucinations and other things, it's a little bit too risky to expose your customers to conversations with versions of ChatGPT, so that's something companies are staying away from.

We have a really interesting question from Twitter from Arsalan Khan, who is hitting on this exact point. He's saying, as you just mentioned, Iavor, that many organizations

(or most) are looking at generative AI internally. He's saying, "What about partnering with external, narrowly focused companies?"

Yes.

For example, AI for cybersecurity by partnering with ISPs. As an organization looking to implement AI, how should they be

thinking about their partnership strategy?

The first pillar is your timeframe. Are you hoping to deliver something within the next few weeks, or are you okay taking your time? If you want to deliver something quickly, then you absolutely need to partner with another organization.

The second pillar is really the technical expertise and the know-how. There aren't that many people who are experts on generative AI. There are a lot of people right now who have watched a couple of YouTube videos and are calling themselves gen-AI experts, but they're not. So, you have to be really careful about that.

Here, for most organizations, it's really hard to recruit enough experts to do this internally. I think, in the short term, you're going to find that partnering might be the best way to go when it comes to the skills portion of it.

Then the third pillar is really the complexity of the problem you're trying to solve. Can an external organization provide a solution that's compatible with what you're doing so you can just plug into it, or do you have a really complex, nuanced problem? If you have a really complex, nuanced problem, then an external partner might not be the way to go (unless they're willing to give you an amazing white-glove service where they're going to rewrite everything for you).

Those are the three pillars. I don't think there's anything new here. This is the typical buy-versus-build debate that strategy classes have been having and companies have been thinking about for many years. It's the same idea here.

Given the nature of these projects, especially

generative AI, again coming back to your initial point that the outputs are indeterminate.

Yeah.

How do you recommend that organizations evaluate the success of these projects? It seems much trickier than with traditional business process software.

This comes back to understanding the

impact or the potential impact of a project. One of the things I always encourage people to do   is to think of the if-then-by-because hypothesis,  which basically says, "If I have this project,   then this outcome will be improved by X  percent because," and then here is my evidence.   In that moment, you think really carefully  about what you're trying to transform.  When it comes to AI, there are, broadly speaking,  two big outcomes that people are tracking.  The first one is just pure engagement and  usage. Are people actually using the AI  

solution? Are they going to it to help them  improve their job, to help them (whatever   it's supposed to do)? Are they using it? Then the second one is the financial one.   This could be revenue. It could be cutting  costs. It could be increasing sales.   Whatever it is supposed to do, is it doing it? First, you want to identify how you're going to   measure success. You have the engagement part,  and then you have the financial aspects of it. 

Then the next part is you have to run some sort of experiment.

Now, if you're doing an external-facing project, if you're someone like Netflix, you can very easily experiment because you've got a million or a billion customers. You give some of them the new tool, and some of them you don't give access to the new tool. Then you see: is it driving your KPIs and your business metrics?

When it comes to internal-facing projects on your employees, that experimentation becomes a little trickier, so you may not be able to run it quite like a traditional clinical trial. But again, you want to have that experimentation mindset and that desire to really measure the impact. It's tricky, but it is doable.
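A minimal sketch of the kind of A/B comparison described here, using made-up engagement data for a treatment group (given the new AI tool) and a control group (not given access); the metric, sample sizes, and the choice of a two-sample t-test are illustrative assumptions, not anything prescribed in the conversation:

```python
# Illustrative A/B test: did the new AI tool move an engagement KPI?
# Data here is simulated; in practice these would be per-user metrics
# logged for randomly assigned treatment and control groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical metric: tasks completed per user per week.
control = rng.normal(loc=20.0, scale=5.0, size=5_000)    # no access to the tool
treatment = rng.normal(loc=21.0, scale=5.0, size=5_000)  # given the new AI tool

lift = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)

print(f"Control mean:   {control.mean():.2f}")
print(f"Treatment mean: {treatment.mean():.2f}")
print(f"Estimated lift: {lift:.2f} tasks/user/week (p = {p_value:.4f})")
```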

I want to go back to Arsalan Khan's question because it's a very interesting question. I think that when you consider partnering in the traditional sense (like if I have a company and I partner with someone on cybersecurity), I'm obviously offloading some of those problems because I have a partner who can share in that responsibility. But if things were to go wrong, there are things that I can also blame on that partner, like liability, or we can have a postmortem and figure that out.

The thing that is fascinating here with AI is there is a lot of regulation that is yet to happen. Some of that regulation is very scary

when you think about how fast this is moving and the exponential growth of AI, especially generative AI (as it stands today). And a lot of the partners, a lot of the companies that you would partner with, they're very young. Some of their biggest deals happened just yesterday. Even today, as we're recording this, Hugging Face announced their raise just this morning. It's like, "Okay," and how many people are using Hugging Face to train models on their datasets and things of that nature?

These are the players that you have to work with. When it comes to sharing the responsibility,

it's going to be very hard if I'm at a very large enterprise to say, "Oh, we had this unfortunate calamity occur. We were working with this partner. They're going to take some of that blame." I don't think that's going to cut it, so what's your take there? Where does the responsibility fall when we know that 80% of these projects fail, and failure in the enterprise often leads to lawsuits?

This is very deeply connected to the notion of trust that I mentioned earlier, because another one of the challenges (if you are partnering) is that it can be quite tricky, especially for an internal-facing product, to get your employees to adopt it, because people don't trust AI. They're worried that it's going to take their jobs, so they just don't want to use it. This is where my framework for trust

in AI is really helpful and is really connected to your question. The way to think about trust here is that it has three elements.

First, you have trust between the human and the algorithm. Is the algorithm interpretable? Is it transparent? Is it privacy-preserving? All the typical ethical considerations you have around AI.

The second one is: do you trust the developer? Do you trust the person, team, or organization that built what you're going to be using? Qu, that's kind of what you were getting at here, which is, when you're using this external partner, maybe you don't really trust them.

Maybe they're not following best practices. Maybe they're not going to preserve your data in a way that's suitable for you. This is something that's really important and that you have to think about.

Then the third piece of trust is trust in the processes, which is the organizational trust. This is essentially how you handle things going wrong. Who is to blame? All of these things need to be agreed upfront, and it becomes really tricky when you're partnering with an external organization to figure out, "Okay. The algorithm recommended X. That was the wrong

thing. But none of the employees overruled it, so we just lost $10 million. Who is to blame?" You can't do a postmortem on that. You have to figure it out before you've even deployed this; otherwise you don't really have trust. That's how I would think about it.

That's difficult, right? Some of the negative sides of these partners: look at OpenAI. I believe in Sam Altman and his ability to fundraise, but we also have to realize that some of the reported costs of just maintaining ChatGPT could potentially lead them to bankruptcy or insolvency.

Yes.

When you know that, and that is a fundamental question, how do I justify it to my board, saying that they're the leading experts? But the leading experts are a startup, and it's not uncommon for startups to exist today and not exist tomorrow, even leading, prominent startups. Just look at crypto for a reference point there. We had some incredible companies that have since ceased to exist. How do I know when to get on this AI hype train

and partner with some of these people so I don't end up with the problem you just described?

It's something we've faced in previous waves, not with AI but with cloud, for example. Right now, the big cloud providers are really big tech companies. But in the beginning, when you were trying to convince people to go to the cloud, it was a bit like, "What is this cloud thing? Should we really be investing in it?"

One of the big pieces of advice I would give organizations (and this is something I've seen companies having to backtrack on a lot) is, when you do end up partnering, partner in a way that makes you agnostic to the company you're working with, and in a way that, if that company fails, you've built your system in a modular way where you can just pick another company.

For every problem you're looking at, right now there are about 20 startups trying to solve it. The high-level solution I would encourage organizations to adopt is to own all of the individual pieces of infrastructure, then just plug in that little piece of generative AI that you need in a way that lets you swap it out.

That also protects you because one of the things we've seen with cloud providers is that if they lock you in, the price goes up. A lot of companies are now backtracking and trying to be cloud-agnostic in a lot of their offerings. But it's really hard because, if you moved everything to one of the

cloud providers, it's hard to get off them. That's one way I would mitigate the risk: basically, "Yes, we're using them, but it's this tiny little piece of it, and if they fail, we get someone else to do it."
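A minimal sketch of what that modular, vendor-agnostic wiring can look like in code; the interface, class names, and the config-driven swap are illustrative assumptions rather than anything specified in the conversation, and the vendor SDK calls are left as stubs:

```python
# Illustrative provider-agnostic design: the application depends only on a
# small interface, and each vendor lives behind its own adapter, so a failed
# or locked-in provider can be swapped out with a one-line config change.
from abc import ABC, abstractmethod


class TextGenerator(ABC):
    """The only surface the rest of the system is allowed to depend on."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class VendorAProvider(TextGenerator):
    def generate(self, prompt: str) -> str:
        # Call vendor A's SDK here (omitted); the name is hypothetical.
        raise NotImplementedError


class VendorBProvider(TextGenerator):
    def generate(self, prompt: str) -> str:
        # Call vendor B's SDK here (omitted); the name is hypothetical.
        raise NotImplementedError


def build_generator(vendor: str) -> TextGenerator:
    """Swap providers via configuration instead of rewriting the application."""
    providers = {"vendor_a": VendorAProvider, "vendor_b": VendorBProvider}
    return providers[vendor]()


# Application code never mentions a specific vendor:
# generator = build_generator(config["llm_vendor"])
# summary = generator.generate("Summarize this support ticket: ...")
```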

This issue of trust is so important and complicated. In the past, trust with enterprise or business process software basically boiled down to, "Do we have confidence that the vendor is keeping our data safe in the cloud?"

Yes.

"And no one is stealing our money," really is what it ultimately comes down to. Now we have all of these questions about ethical use and bias, and we have a question from Arsalan Khan on exactly this point. He says, "If organizations

are using AI to make important financial and insurance decisions, how can consumers make sure that the algorithms aren't biased and the data is not skewed in these AI systems?"

Yes.

"Organizations don't share this yet."

Yes.

"It becomes another layer of lack of trust or distrust for AI systems and the implications that flow from that."

Financial regulation does, to some extent, require you to be able to explain things if you're using algorithms for, say, lending decisions. You need to be able to explain why a person was accepted or rejected for a particular loan. There is a little bit of regulation there. The European AI regulations that are coming out (or are already out) are going to require this type of transparency and explainability for important decisions.

I think what's going to happen is – and Qu was speaking about this – there are going to be more and more regulations around this. We're going to need that transparency. We're going to need to have AI audits, and there are companies that are starting to do this already.

They basically go in and, in addition to auditing a company's finances, they audit its algorithms. They're checking for bias. They're checking for fairness. They're checking for privacy leaks and all of these things. You're going to see more and more of that.
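A minimal sketch of one check such an audit might run, a comparison of approval rates across a protected attribute (a rough demographic parity check); the data, group labels, and the specific metric are illustrative assumptions, not a description of how any particular audit works:

```python
# Illustrative bias check an algorithm audit might include: compare a lending
# model's approval rates across groups. Data here is made up; real audits look
# at many metrics (fairness, privacy, robustness), not just this one.
import numpy as np

# Hypothetical audit log: one row per applicant.
group = np.array(["A", "A", "A", "B", "B", "B", "B", "A", "B", "A"])
approved = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 1])  # model decisions

rates = {g: approved[group == g].mean() for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())

for g, r in rates.items():
    print(f"Group {g}: approval rate {r:.0%}")
print(f"Demographic parity gap: {parity_gap:.0%}")

# A large gap doesn't prove discrimination on its own, but it's the kind of
# red flag that triggers a deeper review of features, data, and outcomes.
```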

You essentially need these types of audits because algorithms have unintended consequences, even the best-designed ones, and I'll give you a really fun example. I was working with LinkedIn, and we have a paper that came out about a year ago in Science. One of the things we leveraged to write this paper was the fact that LinkedIn's algorithm for recommending people you should connect with had a long-term, unintended consequence: the people using it who expanded their networks were applying to more jobs and getting more jobs. The algorithm was designed to grow your network; it was actually helping you get more jobs.

Here, this was really beneficial. But you can imagine these types of knock-on effects. Because everything is really connected online, they can actually be huge, and they could be really, really negative. So, you need these types of audits to capture unintended consequences.

Now we have a very interesting question from Twitter from Lisbeth Shaw, who says, "How should organizations choose an appropriate type of AI initiative, whether it's 'regular' or generative AI? And then, how should they begin?" I'll qualify this by saying we're just about out of time, and we could spend an hour talking about this. But I'll

have to ask you to keep it really fast.

It comes back to thinking about impact and feasibility. You have to make sure that, for the impact, it's aligned with your strategy, it's going to deliver real value, and you expect it to deliver real value. Then the feasibility is whether you're going to be able to do it.

One thing I will say about impact and feasibility is that researchers, including some of my colleagues (Jackie Lane and Karim Lakhani), have shown that we're really bad at keeping the two separate. We tend to think the things that are high impact are high feasibility, and vice versa, which is not true. What I would encourage you to do is to disentangle those and then find the projects which are truly high impact and high feasibility. Then go after those ones.
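A minimal sketch of what scoring the two dimensions independently might look like; the projects, scores, and thresholds are entirely made up and only illustrate the idea of disentangling impact from feasibility before choosing what to pursue:

```python
# Illustrative portfolio triage: score impact and feasibility separately,
# then pick projects that clear a bar on both dimensions. Scores are made up.
projects = {
    "Lead recommendations":         {"impact": 8, "feasibility": 7},
    "Supply chain optimization":    {"impact": 9, "feasibility": 3},
    "Support-ticket summarizer":    {"impact": 5, "feasibility": 9},
    "Fully automated underwriting": {"impact": 9, "feasibility": 2},
}

IMPACT_BAR, FEASIBILITY_BAR = 6, 6

shortlist = sorted(
    (name for name, s in projects.items()
     if s["impact"] >= IMPACT_BAR and s["feasibility"] >= FEASIBILITY_BAR),
    key=lambda name: -(projects[name]["impact"] + projects[name]["feasibility"]),
)

print("Go after:", shortlist)
# High impact but low feasibility (and vice versa) stays off the shortlist
# until the missing ingredient, whether data, skills, or a real business case, exists.
```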

Just as a very quick follow-up, between generative AI and traditional AI, how do you choose?

The big difference is that generative AI is a more general tool. I think you have to look at the problem. There's no single right answer or clear framework for when one is going to be far superior to the other.

Traditional ML is really,  really good at predictions.   Generative AI is really good at having human-like  conversations. If the problem you're working on is   this type of human-like conversation, generative  AI is the way to go. If you want pure predictions   on whether Michael is going to buy this product  today, you probably want to use traditional AI. 

But the other interesting thing that's happening is that generative AI can help you build traditional AI. Generative AI is becoming a tool for developing traditional AI, which I think is also really fascinating.

I want to ask you some questions about two enterprise companies that I find super fascinating in this space, which have deployed both traditional ML and generative AI. The first company I'm going to touch on is Nvidia. We talked about them earlier, but they've done a great job of being very nimble. Their chips have

been used in everything from automotive to gaming to now AI more broadly (both generative and ML).

One of the things that really shocked me in the enterprise (and I'm curious about your takeaways) is Meta. They've been on the press circuit for the last five or six years, and it hasn't been that great. But if you look at the last year, and you look at what they're doing at Meta AI, they're actually doing phenomenally well as far as some of the announcements, even their most recent one. They did one this morning. I haven't read the paper, but they released SeamlessM4T the other day, 100 languages that we can translate across seamlessly. Now they've got a coding LLM that they've adapted Llama for. Why is Meta winning in AI right now? I'm curious about your take on that.

One of the things that we've seen in previous waves of technology (and we saw it with Google open-sourcing Chrome and multiple other technologies) is that they sort of became the foundation that everyone built on. What happened with generative AI is that a lot of companies were doing this behind closed doors, because I think they believed the ability to train these models is what was going to give them the competitive edge, and they were worried about open-sourcing them.

Meta was actually a little bit on the flip side of it. They've always open-sourced some things, but they were a little bit behind on generative AI. And they had the strategy of, we're going to just open-source everything in the hope that everyone is going to build on top of that.

Right now, we don't really know if that's what's  going to happen. We're in this stage where it   could or it might be just too costly to really  train these models any further. You just kind of   use the default, so it doesn't really matter. I don't know. I think time will tell. 

Zuckerberg acting more like an entrepreneur in the startup sense is helping them right now. That's what you're gleaning?

I think they have amazing AI leadership. If you look at some of their leaders in AI, they are the world experts in that area. And they have a strong commitment to open source. I think you're seeing that influence.

I think Meta also realized that they were going to fall behind if they didn't try this open-sourcing, because they're not really set up in a way that lets them monetize this easily. If you look at someone like Google,

they can monetize this pretty easily through their search. They're basically redesigning how search works around it. If you look at OpenAI, that's essentially what they're designed to do. Those companies are like, "Oh, this is going to be my core."

For Meta, right now, it's unclear. Having a chatbot in WhatsApp is not really going to add that much value to WhatsApp, so they're going for the open-source route.

Maybe they'll become the foundation that  everyone builds on. Then they can figure   out how to really monetize that later. I think  that's the strategy that they're going after.  It's a really interesting space, and  it's great to see how it's evolving. 

You're an operations professor at Harvard Business School, so give us your advice on the patterns you've seen that make AI projects succeed or fail. Give us the top three, and really fast.

At a high level, what you have to remember is that a project is not just a single entity. It goes through five distinct stages: selection, development, evaluation, adoption, and management. You have to look at each of these

pieces individually and really try to optimize each one. That's really how you'll be successful. You can't just think of a project from start to finish. Break it down. Focus on each of its parts, and really try to optimize each individual process. Don't forget the management and auditing piece; that often gets neglected.

Qu, you know, it strikes me that Iavor's advice to optimize each piece of course makes sense. But without the expertise of how to optimize, it's like, "Okay, what do we do?"

There's a lot of research happening in each of these stages to try to help companies figure out how to do it optimally. In the development stage, we talked about the

AI Factory. In evaluation, we talked about experimentation. In adoption, we talked about my framework for trust: trust in the algorithm, in the developer, and in the processes. And then in management, there's a whole slew of tools out there to help with this.

We're out of time, Iavor, but we'd love to have you back. There's a lot to be discussed here. With that, a huge thank you to Iavor Bojinov. He is a professor at Harvard Business School. Iavor, thank you for being here. As Qu says, we hope you'll come back.

Thank you.

Of course, a huge thank you to my co-host for this

episode, QuHarrison Terry. Qu, it's always awesome to see you. Thank you for joining us today.

Thank you. Thank you.

Now, before you go, please subscribe to our YouTube channel. Hit the subscribe button on the CXOTalk website. Check

out our newsletter. We'll send you our upcoming  shows. We want you to be part of this community.  Thank you so much, everybody. I hope you have  a great day, and we'll see you again next time.

2023-09-15
