Generative AI and Leadership, with Accenture CTO (CXOTalk #795)
Today, we're talking about generative AI and leadership with Paul Daugherty, Global Chief Executive for Accenture Technology. My guest cohost is QuHarrison Terry, Chief Growth Officer for the Mark Cuban Companies. Thanks for having me, Mike. It's exciting to be able to talk with Paul on AI today. Paul, why don't we begin by asking you to tell us about your work as the chief executive for technology at Accenture? Accenture is a large organization. We're about 740,000 people, over $60 billion in revenue. We help companies do amazing things with technology. That's what we're all about.
Do you want to give us, to start, just kind of a brief overview of generative AI? I think everybody in the audience knows what it is. But in the context of business and in our world, where does it fit today? To talk about generative AI, you have to talk about AI first. AI has been around for a long time, and all of us use AI continuously. The three of us talking here, and anybody listening, have used AI dozens if not hundreds of times today. AI has become a pervasive part of our lives through the advances in machine learning and deep learning that have come before. AI, as I'm sure most of the audience knows,
it's an old field. The term was coined, I believe, in 1956 at a conference at Dartmouth, nearly 70 years ago. And it's gone through a lot of iterations over the years. I like to think about three forms of AI: • Diagnostic AI, which is using AI to diagnose things, often with deep learning and the like: for example, using machine vision to look for manufacturing defects (a thing we do commonly), face recognition to unlock our phones (as we do every few minutes of every day), or assistive driving features in cars. That's diagnostic. • Then there's predictive AI, such as the retail forecasting we do for companies, often with machine learning and optimization models. Those are well-established
techniques. We have lots of people doing that work for lots of clients around the world, and many companies use it. • Generative AI is the new thing on the scene, and it really is a massive breakthrough, probably the biggest breakthrough in AI to date. And what we're really talking about with generative AI is foundation models,
which are really powerful models that can be reused across many different use cases. That's why they're called foundation models. Large language models are a type of foundation model that understand language and have allowed us to master language through artificial intelligence. The transformer technology underneath them allows us to generate things. GPT (generative pre-trained transformer) models are these large, transformer-based models that can create new content. That's really the breakthrough of generative AI: foundation models, which have tremendous reuse and power, rather than bespoke data science projects, combined with this creative capability to produce content, whether it be language, graphics, video, et cetera. It really is transformational in terms of what it allows us to do as individuals and what it allows companies to do. But we're at the very early stages still.
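To make that concrete, here is a minimal sketch (illustrative only, not anything discussed on the show) of reusing a pre-trained generative model to produce new text. It uses the Hugging Face transformers library, with the small public gpt2 model standing in for the much larger foundation models Paul describes:

```python
# Minimal sketch: reuse a pre-trained generative model to create new text.
# "gpt2" is a small public stand-in; production systems would call a much
# larger hosted foundation model, but the reuse pattern is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Generative AI lets companies", max_new_tokens=40)
print(result[0]["generated_text"])
```

The point of the pattern is the reuse: the same pre-trained model serves many use cases with no bespoke data-science project behind each one.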
Hey, Paul. One of the things that I want to talk to you about today is the whole concept of you thinking about this stuff almost a decade ago. In your book Human + Machine: Reimagining Work in the Age of AI – sorry, I don't have it in front of me, but I did read it a while ago – one of the things that you talked about was how AI would ultimately become the ultimate innovation machine. It's fascinating that it's 2023, almost five years since you published that book. What's your take? It seems like you're
spot-on, but what things happened in generative AI that you didn't envision or forecast back in 2018? I think the premise and the precepts in Human + Machine have stood the test of time well: the human-plus-machine concept, and the idea that AI gives humans superpowers to do new things. We see generative AI as an even bigger step forward in terms of the augmentation and enhancement of what technology can do for all of us, giving us greater tools and productivity to do new things. We did talk about all this technology in that book and in the next book that my co-author Jim Wilson and I wrote, Radically Human. The pace of the advance is what surprised us, more so than the capability. We were anticipating that some of these capabilities would come along. But the pace of development of the foundation models, the rapid growth, the size and complexity of the parameters and weightings, and the breakthroughs that came with that were probably the biggest surprise, Qu, in terms of what we saw. Then one last thing on that. When you talk about the timing and how fast everything is coming
together, it's fascinating to think that even OpenAI's ChatGPT is still less than a year old (as we stand today). Yes. Then fast-forward to just yesterday, when Elon Musk announced xAI, which is another fascinating AI company. As a business leader and executive, how should I think about AI? It's happening fast, but does that mean I take the "move fast and break things" approach, or should I wait and see where things settle? On the flip side of that, an organization might be behind. How should I think about that? Our belief is that generative AI is a participant sport. You have to jump in and start using it, experience it, and do some experimentation. We're
encouraging companies to do that, and that's the approach we're taking in our own organization. It's very early with the models. You just highlighted that with how young the GPT and ChatGPT models are. A lot of companies have not reached GA (general availability) status with their models and products, so it's early and still evolving. Elon's company was announced recently, and there are new companies sprouting up continuously. And so, I think the key for companies is, first,
look across your business and decide where it's applicable. Second, pick some use cases where you can jump in and experiment with the technology and manage some of the complexity and risk. Then third, develop the foundational capabilities that you need to scale it faster. Those capabilities include technology capabilities like understanding the models: the prompt engineering, the pre-training, and other things you might need to do, and how to integrate these models back into your business. They also include the business skills of understanding how and where you apply it, how you develop a business case for it, and how much it costs to apply these models. Those three steps—looking across the landscape, experimenting, and laying the foundation—are what we're helping a lot of companies do today. Be sure to subscribe to our newsletter. Subscribe
to our YouTube channel. Check out CXOTalk.com. Paul, you're describing this kind of open field of innovation that's going to be happening. But everything around generative AI right now seems so ambiguous. The technology is changing. The implications for business are apparently amazing but unclear. And so, how should business leaders navigate this intense ambiguity?
I think generative AI is just a new ingredient in the mix. I've been talking for a while about the exponential advance in technology that we're living through, and about how organizations need to develop the ability to innovate and to recognize and adopt technology faster. The three key technologies that I think will define a company's success in the next several years and decade are cloud, artificial intelligence, and the metaverse. Those are the three themes, and I can talk more.
We're talking about AI today, but I'm happy to go in other directions as well. As you look at the AI piece of it, those things build on each other. To be successful with AI, we're finding, and companies are finding, they need to get to the cloud. Those that have an advanced foundation in the cloud are better prepared to utilize AI.
Most of these models run in the cloud, and you need to have your data foundation in place and have the data to drive the AI models successfully. A lot of organizations have struggled with this over the years. We did a recent survey, and only 5% to 10% of companies really have maturity in terms of how they manage their data and the corresponding AI capabilities. That means at least 90% have a long way to go. You need to start with the cloud foundation. You need to look at your data, the governance around your data and your metadata, and how you pull all that together so that you can support AI in the right way. And then it's the AI capability
and skills that you build on top of that. It's a journey that we're on, and it's going to continue. Generative AI is amazing, but it's not the last big breakthrough and it's probably not the biggest breakthrough we'll see in technology as this exponential advance continues. This is kind of the muscle, so to speak, that organizations need to develop to continuously anticipate and have the flexibility in their systems, their architecture, their business and their business processes, and their talent to continue to adapt as technology advances.
From your perspective, AI is essentially another (in a chain of technologies) that's not necessarily all that different from what's come before. What's different about AI... It is the latest in a chain, and these things all build on each other. It's this combinatorial effect of the technologies coming together that really creates the power. But what's different about AI is it allows us to create more human-like capabilities. I can communicate with large language models using natural language, using voice interaction, et cetera. I can get output that is easier for me to interpret. That's the powerful breakthrough with generative AI. What I advocate is the more human-like
the technology, the more powerful and the more exciting it is for us. We shouldn't view it as a threat as technology acquires this capability. It allows us to really leverage the technology, giving us superpowers (as we talked about in our book) in the form of new capabilities. For example, I can be a customer service rep and, rather than relying on just what I know from memory and experience, I can have at my fingertips every aspect of every technical manual on the product I'm answering questions about, brought to the forefront and prioritized so that I can answer the questions the right way.
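As a rough, hypothetical sketch of that customer-service pattern (not the actual system Paul describes): retrieve the most relevant manual passage for a question, then hand it to a language model as grounding context. The passages, question, and prompt below are all made up for illustration.

```python
# Hypothetical sketch: surface the most relevant manual passage for a
# question, then prepend it to the prompt sent to a language model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

manual_passages = [
    "To reset the router, hold the power button for ten seconds.",
    "Firmware updates are applied automatically overnight.",
    "The warranty covers hardware defects for two years.",
]
question = "How do I reset the device?"

# Score each passage against the question with simple TF-IDF similarity.
matrix = TfidfVectorizer().fit_transform(manual_passages + [question])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
best_passage = manual_passages[scores.argmax()]

prompt = f"Answer using this manual excerpt:\n{best_passage}\n\nQuestion: {question}"
print(prompt)  # this prompt would then go to the generative model
```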
This is the type of power that the technology gives us. To dig into that a little further: while cloud changed technology a lot – how we build and support technology – AI is changing work and the way we work because of this capability. One of the research studies we did recently showed that 40% of working hours across companies globally are impacted by generative AI. Forty percent. That doesn't mean 40% of jobs go away. Far from it. We actually see it enhancing jobs and enhancing the productivity and capabilities that people have in many ways. I'm happy to go into that in more detail. QuHarrison mentioned that your book was called Human + Machine, and we have a really interesting question from LinkedIn. This is from Milena Z. She says, "How would you describe the significance of incorporating human values into the development of generative AI technology?" It's incredibly important. If you don't have a really strong, responsible AI program in your organization, you're simply being irresponsible. At the core of responsible AI is accounting
for human values in the way that you do it. Responsible AI, in our view, is about things like accuracy: coming up with the right answers and avoiding hallucinations. It's about the ethical issues that you need to think about in terms of how you're applying the AI. It's about bias and ensuring fair outcomes and fair use of the technology and, in certain cases, the transparency and explainability that you need around the technology. We encourage every organization using AI – we've been talking about this for six years, but especially with the advance of generative AI – to have a responsible AI program in place. If you can't inventory every use of AI in your company, understand the risk of it, and know how you're mitigating those risks, then you're simply going to get yourself in trouble with improper uses of AI. That's the way we think about responsible AI. It's not just mushy values and principles. It's execution, operations, and compliance
in terms of how you're applying the technology. Paul, it's a great point. But the question I have is that the theoretical version of that and the actual application of it often look entirely different. For example, say I'm in a company and we're experimenting with generative AI just in our R&D department. Then we quickly realize that this could actually scale. We apply it to another sector of our company, or maybe to the whole company. At what point do I actually stop and say, "Okay, there's a legal component here"? That's the big debate in AI today, even at the congressional level: what do we do? How do we regulate this stuff? If I stop now, aren't I hindering my innovation? And if I'm in charge of innovation and acceleration of technological development within the company, I'm caught in a catch-22, if you understand what I'm saying. I am not one of those who supports stopping,
banning, or pausing on the technology. I think it's about putting in place the right framework and the right guidelines so that you know what you're doing and can evaluate the risk of it. I would say, not just at the end but every step along the way, and before you even get started, you should do an assessment of the risk. There are a lot of guidelines and ways you can do that.
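As one hypothetical illustration of that kind of ongoing assessment (my sketch, not Accenture's framework), an AI use-case inventory with risk tiers and documented mitigations might start as simply as this:

```python
# Illustrative only: track every AI use case with an owner, a risk tier,
# and documented mitigations, then flag gaps for review.
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):  # tiers loosely echo regulatory risk categories
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

@dataclass
class AIUseCase:
    name: str
    owner: str
    risk: RiskTier
    mitigations: list[str] = field(default_factory=list)

inventory = [
    AIUseCase("support-chat summarizer", "cx-team", RiskTier.LIMITED,
              ["human review of outbound replies"]),
    AIUseCase("resume screening", "hr-team", RiskTier.HIGH),
]

# Flag anything high-risk with no documented mitigation.
for case in inventory:
    if case.risk is RiskTier.HIGH and not case.mitigations:
        print(f"REVIEW NEEDED: {case.name} ({case.owner}) has no mitigations")
```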
The EU is going through the stages of approval on the AI Act, which identifies different risk categories of AI, including high-risk. Does your team understand those and, for any application of AI, are you assessing which risk category you fit into? And then how do you mitigate or deal with that to make sure you're handling it? That's just one example, with respect to the EU. There are also the White House guidelines. There's NIST and other things that are out there. And there'll be more coming because
of the interest in setting some guardrails around this, which I think is a good thing. But the teams need to be trained, and organizations need to have tools in place, so that you are assessing the use of AI and, again, understanding the risk of it and making decisions accordingly. There are things we won't do: applications we've decided not to pursue because of their risk profile, or because we felt they weren't aligned with the right values. That's an important consideration to build into your process. It can't just be after the fact. It's got to be
as you're considering use cases and starting out. AI brings out either the best in people or the worst in people. On the latter, when it brings out the worst in people, traditionally what I'm seeing is that people will try to hinder the AI's abilities or slow it down for fear of losing their job or seeing other calamities ensue within their industry. One of the questions I have for you is, how do we get better at communicating about AI? Technology is neutral. Technology isn't
good or bad. AI, generative AI, fits into that description. Generative AI isn't good or bad. It's exactly as you said, Qu. It's how you use it. It can be used for bad purposes. It can be used to spread misinformation at scale,
deepfakes, and all sorts of things. But that's people using the technology badly. I think part of the communication we need to do around generative AI is that the thing we really need to be looking out for and preventing is bad uses of AI: people using AI in bad ways. We need to educate the general population on what that means so that they can recognize and understand when something has been generated and propagated artificially at scale using generative AI for some illicit purpose, whatever it might be. I think there is a broad education that needs to happen there. We're doing a lot of work
on that. We're working with a lot of different organizations on that, governments and other bodies, to look at how we can better educate the general population as well as business leaders, technologists, and decision-makers around using the technology in the right way. I think that's an ongoing effort that we all need to work together on. We have a bunch of questions that are stacking up on LinkedIn and Twitter. I have to say
you guys in the audience, you are so intelligent, so smart and sophisticated, and your questions are absolutely great. Our next question comes from Florin Rotar. He is the chief technology officer at Avanade. I have to say that I did a video with Florin years and years ago in Seattle. Florin, it's great to see you pop up.
Here is Florin's question, and I think it gets right to the heart of some of the key issues. He says, "How will generative AI change the future of work? Can it also play a role to enable people to realize their full potential, to thrive and to grow, not just to drive productivity? Will it blur the lines between white collar and blue collar?" I'll just add to that. To me, this question is also getting to the point that QuHarrison just raised, which is, generative AI brings out the best and brings out the worst (in people). We talked, in Human + Machine, about the idea of no-collar jobs, and exactly what Florin highlights, eliminating this distinction between blue-collar and white-collar, as you look at it. Think about
a hands-on service technician. Think about a plumber or an electrician that now has access to large language models that give them tremendous amounts of additional information and potential. It can give them tools to run their business more effectively. Maybe they can be a service provider to others in their profession rather than just being the specialist at the physical trade that they have. I think that's the blurring capability that AI allows.
Think about a small business (or any part of a larger business) that wants to go international overnight. They can start communicating in dozens of languages seamlessly as they expand their business. It's these superpowers that give people more capability, and that leads to a lot of new entrepreneurial activity and ideas. Think of what GoDaddy did for the Internet in creating a generation of entrepreneurs in a lot of different ways, or the eBay marketplace, and such. We're going to see that to the
next exponential multiple with generative AI, creating all these new possibilities of what people can do. That's what we see happening there. To get more specific around it, we see the new opportunities for jobs, and the ways generative AI impacts them, falling into five categories. The first is advising. This is advisors, assistants, or copilots that help people
do their jobs more effectively. For example, there's a large European service organization where we're using generative AI in their customer service operation to let reps answer questions with a lot more accuracy and quality because, as I mentioned earlier, they can pull tremendous amounts of technical information together to answer customers' questions better, faster, and with higher quality. They can also cross-sell more effectively because they get ideas, prompts, and support on how to cross-sell. That's advising. Creating is a second category. A good example here is the work we're doing in the pharmaceutical industry, where we're able (in the drug discovery and clinical trials processes) to generate some of the regulatory and compliance documents they need, which then get reviewed at the final stage by humans in the loop. That avoids all the rote work a person would normally do and allows them to apply their judgment and expertise to the final product. That's creating (in addition to applying it in marketing and other areas I could talk about, which is super interesting right now). Then there's automating, where you can use generative AI to automate some of the transaction processing. An example here
is a multinational bank. We're using generative AI in their back-office processing to read and correlate the tens of thousands of emails that come in with transaction activity and that people would normally need to sort through, so they can reconcile and do their post-trade processing more effectively.
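A hypothetical sketch of that kind of email triage (the bank's actual system isn't described on the show): a public zero-shot classifier stands in for whatever fine-tuned in-house model a bank would really use, and the email and labels are invented for illustration.

```python
# Hypothetical sketch: route a back-office email to a transaction category
# with a zero-shot classifier standing in for a fine-tuned in-house model.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

email = "Please confirm settlement of trade 8842 executed on 12 July."
labels = ["trade confirmation", "settlement query", "complaint", "other"]

result = classifier(email, candidate_labels=labels)
print(result["labels"][0], round(result["scores"][0], 3))  # top category
```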
Again, you can do this with other technology, and you can do it with generative AI. You can make people's jobs more productive and effective and take out some of the drudgery. The fourth category is protecting, which I think is super interesting. An example here is a large energy company we're working with on a safety application so that workers can get, in real time, all the information on what's happening: real-time conditions, weather conditions,
and other things in a complex facility, say a refinery, combined with all the information they need to know from safety procedures, manuals, regulations, and such, so that they can operate more safely in real time. Again, you couldn't put all this together before generative AI. Then the final use case we're seeing a lot is in technology itself: using AI in software development and technology development. I'm sure we'll find more examples as we go. Those are the five that are standing out right now, just to drill into some of the ways it's transforming work (in response to Florin's question). We've got another question in from Twitter from Chris Peterson. The question is, "One of the opportunities mentioned in Human + Machine was the AI explainer role. Is that even possible
for something as complex as GPT-4 with billions of parameters and almost unlimited training data?" In some industries and some problems, if you can't explain it, you can't do it. That's part of that screening I talked about earlier with responsible AI. If you have a regulatory, ethical, or business need to explain exactly how something is happening, you need to use the right type of approach (where you can do that), and, to your point, you can't do that with some of the models that are out there. There's a lot of advancement happening in explainability. There are ways to create the models to understand how they're processing. There are areas like GANs (generative adversarial networks) that we can use in different ways to get some insight into how models are working. So, there are a lot of different advances there, and there are new fields, in addition.
New fields like prompt engineering are cropping up because of generative AI. We're also seeing demand in the market for explainability engineers or explainability specialists who can bring that understanding to bear. The other thing that's sometimes important is that, in some applications, you don't necessarily need to explain exactly how you got the answer. You need to provide transparency about what information you're using, what data you're using, and the process itself. You need to differentiate where you really need to explain exactly all the math you did and how you did it, so to speak, from where you just need to provide transparency into how you're doing it and show that you're using information in the right way. Distinguishing that can help organizations unlock some of the potential, too.
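To illustrate that distinction with a toy example (a hypothetical sketch, not something from the conversation): where full explainability is a hard requirement, one option is an inherently interpretable model that can expose its entire decision logic in a way a billion-parameter LLM cannot.

```python
# Toy sketch: an inherently interpretable model whose complete decision
# logic can be printed and audited, unlike a large language model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=3).fit(data.data, data.target)

# Every prediction path is visible as explicit if/else rules.
print(export_text(model, feature_names=data.feature_names))
```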
We have another question from Wayne Anderson. You can see we love taking questions from the audience. Again, the audience is amazing. This is from Wayne Anderson on Twitter. Wayne also has a question coming up on LinkedIn, so he's sort of a multi-tenanted— Multi-platform.
–multi-faceted social media presence happening here. Wayne says, "What is the litmus test? Is there one: a question, or set of questions, that you use to quickly evaluate a client's place on the operational maturity journey for AI and ML?" We have a maturity framework that we use to assess ourselves as well as our clients. There are steps of maturity that you go through in assessing it. There's assessing talent: where you are with the talent and expertise you have in the organization. That's about the technology talent as well as the
skills you have in the business and the kind of training programs you have around that. There's assessing data readiness in terms of (as we talked about earlier) your data maturity and the maturity of your data platforms to support what you need to do. There's then the maturity of how you need to use the models and your sophistication around that, which depends on the strategy you have. Is your strategy to use proprietary pre-trained models that are publicly available, or is your strategy to do some of your own pre-training or customization using your own data? Those require far different operational skills and, therefore, you need to evaluate where you are on that spectrum.
Then there are the operational skills around it: how do you put the AI in place, and how do you monitor it on an ongoing basis for the right outcomes? Then finally, there's the responsible AI dimension of it. Those are the main dimensions. There are more underneath that. But there's a process we use to go through it, and I think it's important for every organization to have an understanding of that and a way to evaluate their maturity, so they know how they're making progress. Wayne actually did ask another question on LinkedIn. I'm looking at it right here. He points out that the security and risk of AI is not something with an entirely technical solution. A lot of it is in the
humans and the innovation/development processes. What formal steps need to be in place for that innovation to have the right guardrails and talking points for the future of machine learning projects? The way I interpret that is: we've got a lot of groups working together; how do you make sure they're all working and their energies are going in the right direction? We think the right approach, given the state of the technology, is a center of excellence, where you have a centralized organization with those capabilities in it. That's what we've done for ourselves, and we're helping a lot of our clients do the same. In fact, we have something called a COE in a box that we're using to help clients set up these kinds of capabilities. It requires the technology capability, the business capability,
the legal and commercial capability, and the talent and HR capability around it. You need to bring all that together in a center of excellence where that capability is assembled, with representatives from all those different groups in your organization. You can then federate some of the experimentation. But it's really important to bring it together.
Security is a great angle. I don't know if that was the primary thrust of the question, but there are a lot of implications for security from generative AI, both in terms of new security challenges and considerations around data privacy, grounding of models, and use of sovereign data. Depending on the jurisdictions you're operating in, those become really critical considerations for companies. Having this built into a center of excellence, so you know you're channeling it in the right way, is critical for the stage of development we're at right now. Paul, given your purview and some of your theses around the future, one of the things I'm wondering about is this: when I look at technologies like the cloud, an enterprise corporation is probably best suited for that realization today, right? Personal cloud computing exists, and I think the strongest use case of that is probably video games today. But beyond that, it doesn't make that much sense for an individual
person or even a small startup to endeavor on a very complex cloud implementation. However, I think that might differ, given some of the comments you just made, when it comes to AI. On the AI front, one of the things we're seeing is corporations that have a lot of technical debt, or a lot of data that hasn't been digitized, or very complex teams and org charts. They're not well suited because it's going to take them some time to get all these things in place. On the flip side, they have the most data, so they'll probably have some of the stronger AI models. But the question I have is, would it make more sense for an organization to think about creating an internal startup and then going after it? That's what Google did with DeepMind, and we just saw some of the news related to DeepMind, where they're bringing in Demis Hassabis to lead AI at Google. There are countless examples where this
is also true in the AI industry. Is that the right approach, or do you think that ship sailed a long time ago? For some companies, yes. There's an example: a media organization we're working with sees an opportunity to create a whole new part of their business using generative AI. They can use generative AI to generate coverage of things they couldn't cover before. I can't get too specific about it. In that case, it's maybe more of a startup; you actually are using generative AI to branch out in a new direction. But we think a lot of the generative AI
potential is going to be in changing the core of how you work as a company. It's going to transform the way work is done. That's the phrase we used: "reimagining work." That's what this is about, which means I think you do need to have a lot of capability at the heart of your organization looking at how you drive the transformation. I think it could be a mix for different types of use cases. A company may spin out or have a
separate project to pursue some initiatives. But this gets to the core of how companies operate, which is why companies need to embrace it broadly. To another point you're raising: I do think generative AI offers a lot of potential for new startups and small companies, because they can access tremendous capability to build new businesses, in addition to the power it gives big businesses. I hear people ask, "Are the big companies only going to get stronger with this?" or "Are the new startups going to win out?" I think it's really a mix that we'll see going forward, because of the power of the models and the ability of new organizations to leverage them, as well as the power larger organizations have to move faster.
Paul, let's shift gears a little bit and talk about investment, technology investment. AI is changing so rapidly. The capabilities are changing. The models are changing. The implications for the enterprise, and for society at large, remain very unclear. Given this ambiguity, how do you recommend that organizations should be investing? I will mention that Accenture recently announced a $3 billion investment in this. Obviously, this is
something that you're giving a lot of thought to. As you said, we announced a $3 billion – billion with a B; we don't do that too often – investment in data and artificial intelligence. A good part of that is for generative AI, but it's across data and artificial intelligence, and we're doubling our workforce there. We have 40,000 people who work in data and AI today. We do a lot of work in that area, and we're going to double that over three years. We're developing a new tool called AI Navigator for Enterprise to help companies apply AI more quickly, including generative AI. The tool itself uses generative AI to help companies understand the roadmap they need to follow and, industry by industry, how they can drive value from AI. We're creating a center for advanced
AI where we're looking not just at generative AI but the next breakthroughs that will come as well. Yeah, we're excited about it. We're putting a lot of money and focus on it because we do believe this is transformational for business and this wave will build faster than cloud and faster than some of the other technology waves that we've seen before. Yeah, a big focus, and we see companies doing the same. We did a survey recently, and 97% of executives that we surveyed—this is just a couple of weeks ago—believe this is going to be strategic for their companies, and it's going to change their business or industry. Ninety-seven percent, that's basically everybody. Over 50% believe it's game-changing. Not just
some change, but game-changing for their industry or company. About 46% are going to invest a significant part of their budget in generative AI in the next two years. This is a fast build and maybe some of this is companies getting a little over-excited, but we believe that pattern will hold and companies will move and invest in this technology more quickly than we've seen with other waves of technology. But what about the risk associated with investing in something where the end trajectory is so unclear? You need to look at the horizon, but I think there are a lot of things that are clear. The key is to look at this from two dimensions: the business case dimension and the responsible AI dimension, which helps you balance the risk. The business case helps you look at the value. The responsible AI
helps you look at applying human values and the right risk profile. If you take those two lenses, I think you can find, at their intersection, the right things you can start on now with no regrets. Obviously, you have to make sure that the use case can be supported with the technology that's available today, which is moving super-fast. But, Michael, I think you can identify no-regrets things to do. We believe, in the near term, this is going to be human-in-the-loop types of solutions for the most part. It's going to be solutions that bring in tremendous new capabilities
for people. It's going to be new, exciting capabilities for consumers to use more directly. In one case, a retailer we're working with is using generative AI to create all sorts of new product configuration capability for their customers. It's going to create new capability for employees, et cetera. This is all stuff that's doable today, I think, with no regrets, without really worrying too much about the risk. You can apply the right
principles to do it in a responsible way. From an industry-specific standpoint, it seems like each industry is dealing with AI at its own speed. The two I want to bring up right now that have seen probably some of the most impact are, one, education and, two, the legal sector. The funny thing is that they've dealt with this in entirely different ways. In the education sector, everything is pretty much a chaotic mess. You have schools banning things, turning things off, then re-enabling them.
We could have a whole show on this but, on the legal side, you've got – and this surprises me the most as a technologist – lawyers really embracing this technology. There's obviously a little resentment, but there are legal LLMs, and there's a lot of adoption around how you can integrate it to make your law firm or practice move faster. I would never have predicted that in 100 years, but it's happening. Now, on the flip side, Lisbeth Shaw from Twitter has a really good question: a lot of organizations and individuals have begun using generative AI for work without any AI governance in place. She's wondering how you can apply governance once the horses are out of the barn and racing. The reason I brought up those points earlier is education. That whole sector is dealing with this dilemma right
now. I'm curious for your take because you're seeing it on the enterprise side where, if I input an email or the contents of a document, there is a true risk there, whether it be IP or trade secrets, whereas at school, if I put my quiz and test questions into the program, it really only impacts me and the knowledge that I retain and gain. We're seeing broad adoption across industries, unlike any other technology I've seen. Client-server, ERP, mobility, cloud, and SaaS all had very specific industry adoption patterns. Generative AI is super-broad in terms of the industry adoption we're seeing and the potential use cases across industries. The two you mentioned are super interesting, Qu. Education, I think,
will be literally transformed through generative AI. It enables truly personalized learning in ways that are significantly different than our current educational system. It'll take a while for that to work through but, yes, it's going to be pervasive and powerful.
Legal, I agree with you. The interesting thing about the legal profession is that it can help paralegals work more effectively and do higher-level work, and it can allow experienced lawyers to leverage themselves more effectively in terms of the work they get done. So, we're seeing it adopted across the different types of work in the legal profession. But to the horse-out-of-the-barn question,
you can still apply responsible AI. You can go back through and do it. It's a matter of being systematic and rigorous. It's about having C-suite and CEO support. We report on responsible AI to our board. It's part of our formal compliance responsibility
that we do. And we encourage organizations to do the same. If you already have AI out there, and most organizations do, and most organizations don't have enough responsible AI in place, we believe it's time to do that. Inventory the AI. Know where you're using it. Understand the risk level. Know the mitigation techniques and tools and have them at your disposal. Know if you've mitigated the
risks. You have to go back retroactively and do that if you haven't done it, so that you know what your baseline is as you start to apply more AI and generative AI going forward. We know the impact of AI is and will be profound. Where is this going and, more importantly, how should businesses position themselves to capitalize on this obvious sea change erupting all around us? The simple answer is you need to think big, start small, and scale fast. Think big: think about what the real potential is and where this could take your organization. Where are the big threats and the big opportunities? That's thinking big. Start small: experiment with the human
in the loop and the no-regrets use cases. Get some experience. Understand the models. Select the right partners (models and such) and do something. Then get ready to scale fast. This is the centers of excellence, the operational maturity (which one of the good questions came in on), the other capabilities, and the talent that you build around it so you can scale fast. "Think big, start small, scale fast" is the advice I'd give. Sci-fi has shown us what the future looks like. We see some of the gadgets and gizmos
that are now real-life objects from Star Trek. We see some of the unforeseen and uncomfortable futures from Black Mirror start to arise. One of the things I'm wondering about is your take on this: you wrote the book Human + Machine, and you've written another one since then, so I'm guessing you've been thinking about this whole concept of transhumanism and merging the brain-computer interfaces that Elon talks about with some of these AI models. How near do you think that is, or is that still fodder for science fiction novelists? First of all, I'm a massive fan of science fiction, and I believe most science fiction eventually becomes real. It's a matter of the timeline. If you want to read about where technology is going, pick up somebody like Neal Stephenson, who coined the term metaverse among other things, and whose book Fall previewed (a number of years ago) where we are with technology right now really well. Science fiction can be incredibly illuminating about where we're going.
In terms of transhumanism, I'm not a real expert per se in that field, but I talk to a lot of friends and colleagues who are, and I believe it's quite far away. Think about how blown away we are by large language models today, ChatGPT and everything. There is no intelligence inherent in these models. These are statistical models. People ask me how intelligent these models are; the models have no intelligence. They are a bunch of data with technology that can statistically create results from it. There is no inherent knowledge. Now, some of the breakthroughs we're looking for in AI, the next generation of things like common-sense AI and the way knowledge graphs can be combined with generative AI, start to create systems that have more intelligence inherent in the models, along with the generative capability. I think that's where you'll see some interesting advances. But truly getting to the human and
surpassing human level, I think we're quite far away from it. We're multiple breakthroughs away, I believe, from seeing that. I think that discussion distracts us a little bit from what we need to do today, which is some of the great questions that listeners have asked about human values and ethics. Let's prevent people from using today's technology in bad
ways and avoid getting a little bit too distracted by the things that are pretty far down the road. This is from Mike Prest. He's a chief information officer on LinkedIn. He says, "As a business leader managing the risks of AI, what advice can you offer on sharing information to become good stewards of the technology and dispel some of the dystopian conversations about generative AI?" Very quickly, please. I think we should share more. On that front, I'm happy to connect with anybody and share some ideas. There are various forums out there where there's a lot of this sharing happening (both in business communities and different technology forums). I think that's how we'll all get better. It's at the
early stages, and I have a lot of forums that I'm running with peers and colleagues at other companies to share a lot, because we're all learning together in this fast-moving technology. We have another question from Twitter, another really good one. Again, really quickly, please. This is from James McGovern, who says, "With Microsoft and Oracle holding layoffs, the talent pool of enterprise architecture and sales professionals must be huge. Who is hiring?"
Enterprise architecture: as much as you need generative AI skills, enterprise architecture is immensely important. Generative AI (along with the metaverse capabilities, which we didn't talk about on this call) really forces a rethink of your enterprise architecture and what you need to do. So, those skills, I think, are in tremendous demand as we look forward. A lot of companies are looking to hire the right talent to build this out. And enterprise architects, in particular, have been in short supply in the industry for a while and are even more in demand with every new technology like generative AI. Paul, let's shift gears here. You're an avid sailor. I've known you for many years,
and I see you sailing. Tell us why. Tell us about your sailing. Why do you like to sail so much? I've sailed my whole life, so it's something that's been a lifelong passion. I love the experience of it. When you're out on the water and you're seeing the sunset, you have a nice breeze behind you, and you're powered only by the wind, sailing along, and can hear the bubbling under the keel of your boat as you're moving through the water at a nice pace, there's not a better feeling in the world than that. There's a challenge aspect of it, which is
optimizing. How do you go a little faster? How do you get the sails tuned a little better? I love the intellectual challenge of that. There's a learning aspect, too. I've been sailing my whole life, yet I learn something new
either by making a mistake or just encountering something new every single time I'm on the boat, so it's a continual learning experience. Finally, I'd just say it's my happy place. It's the one place where I really don't think about anything else because, from a safety perspective, I have to focus on what I'm doing. When I'm on my boat, my whole focus and my mind are on the boat and the guests and passengers I have on it. As an author, I'm sure some of your pastime includes reading. What books are you reading these days, and what's keeping you sane? One of my favorite authors and heroes is Neal Stephenson, who has written so many great science fiction books, so I'd put him out there. A great book that I read recently is Cloud Atlas,
which is a fantastic story that gets into some of the topics we talked about. It's a prize-winning novel that covers everything from the nineteenth century to space travel in the future (through a series of interlocking parallel stories). It's a very interesting read. There's also a book called Reality+, which I'd recommend to anyone interested in the transhumanism topic you mentioned, the metaverse, or related topics. Reality+ is by David Chalmers, a philosopher at NYU, who explores the question of whether we are living in a real world or a simulation, and how you would know the difference between the two. It's a fascinating book and super well-written. I read a lot, and those give you a sense of the
realm from fiction to science fiction to philosophy as well as technology. You're the senior person for technology at Accenture, which employs about 740,000 people. Just that number in and of itself is almost incomprehensible. How do you spread yourself over 740,000 people and manage the pressure and the expectations? It's an amazing privilege to have a role like this. Our mission is to deliver on
the promise of technology and human ingenuity. The human ingenuity that we have in those 740,000 people is just amazing. What I like most about my job is the ability to learn from 740,000 people. I don't talk to each of them individually, but the work that we do
for clients and the innovative ideas they come up with are just super inspiring, as are the projects we do to improve communities and society. It's really a privilege to do it. I'm honored to have the role and to represent the amazing group of people and the amazing leadership that we have. It is a big company. It's a lot of people. But it's a lot of
small communities that come together with a common culture; that's the way to think about it. We have the systems, so we know how to hire people in volume if we need to. We know how to build community and culture in our organization in a lot of different ways. As you scale up and get bigger, some things aren't that much harder to do at a bigger scale, and they scale very well as you grow. That's what I've found as we've grown the organization.
It's a lot of fun and, again, it's just a privilege to be in an organization like this and have the role that I have. What's the hardest part? I don't know all the 740,000 names, but I'm working my way through as best I can. Hey, Paul, a question for you regarding just being a techie. What's your favorite device? Probably it's the apps that I use, but one device I'm really getting a kick out of is my Oura Ring. Not to do any marketing for a specific product, but it's a simple device.
The ring is connected to the app on the phone, and I'm finding it's really helping me understand some patterns and how I can be a little healthier and happier and get better sleep and such. I can track and correlate my heart rate,
my oxygenation, my breathing patterns, all sorts of things, against my sleep cycle and my activity cycle. We're data-driven, and if you get better data, you can improve your patterns and such. That's one of the things I'm playing around with right now that I'm getting a lot of value out of. One of the things that's interesting about the Oura Ring is that it represents the whole quantified-self movement. Right. You now have your own personal database of data that you can do whatever you want with. Are you
going to build anything using your health data, or is it just a personal experience? I don't know, but I'm on that exact journey you mentioned. I'm now starting with the personal biome, understanding my biome more using self-diagnostics, which has another big impact on health and wellness. I've been trying to get more and more data-driven and understand what makes me work and what makes me healthy or not. Yeah,
that is something I'm going to continue doing. It's funny because that's the big data that comes off of your body, and then you could take that, what works for you, and implement that at the enterprise at scale. I see what you're doing. [Laughter] Exactly. [Laughter] Okay. With that, we are out of time. A huge thank you to Paul Daugherty. He is the chief executive for Accenture Technology. Paul, thank you for coming back again to CXOTalk. We really, really do appreciate it. It was a pleasure, Michael, and it's great to do this with Qu as well. Thanks to you both
and to the audience. Those were amazing questions. I wish I could be there and ask the audience a lot of questions as well, but it's been a great experience. Thank you. QuHarrison, it's great to see you. Thank you for being such a great co-host. That was a lot of fun, wasn't it, Qu? Indeed, man. Thank you for having me.
Everybody, thank you for watching. And as Paul said, you guys are an amazing audience. Before you go, be sure to subscribe to our newsletter. Subscribe to our YouTube channel. Check out CXOTalk.com, and we will see you again next time. We have amazing, really great shows coming up. Have a great day, everybody. Bye-bye.