Lessons in Leadership | Let's Talk Tech - The Generative AI Revolution
We are so excited. And appreciative to give this master class on the generative AI revolution and what it means to you. And I want to briefly 1st introduce myself, then give you a little bit of context about why this matters so much and how you can continue your learning. The Brown School of Professional Studies. And then I will introduce our incredible speaker today. And turn things over to him. So 1st I want to ground us a little bit in why this topic matters. So much right now at this point in time.
So 1st of all, the statistics on the engagement of the US workforce are unfortunately really bleak. According to Gallup survey that was done just last November, 70% of US workers report being disengaged. At work and statistics like this actually fuel the work of the Brown School of Professional Studies because the mission of the Brown School of Professional Studies is to transform the way the world works for good by partnering with you to give you the skills, the networks, and the tools that you need to be the leaders to do that. And we are inspired and also disappointed to see statistics like this because it shows us that the change that each of you has stepped up just by being here today in order to be part of. Is so, so needed. So thank you for being here because right now the way the world works is not working for 70% of US workers but we can change that. Interestingly, there has been a report that just came out this year from Microsoft. I really recommend it the 2,024 work trend index report that finds at 75% of global knowledge workers are using generative AI and had doubled within just the 6 months prior to the publishing of this report.
And the initial impact has been really positive. 90% of users say AI helped save them time and 85% say that they can focus on their most important work. However, and this is why we're all here today, 60% say that they worried that their leaders do not have a vision for incorporating AI into their work and actually other surveys I've seen of leaders are somewhat similar. Where they are also. Feeling under prepared for how AI is going to transform the way the world works and how they can help set both parameters, expectations, and also guidelines for how to use it in optimal way while also reducing risk. So I'm excited that we get to talk about that a little bit more today. And I also want to invite you to consider the Brown School of Professional Studies as your partner in gaining the in-demand skills that you need to both thrive in your own careers to unlock the people potential to help them thrive in their careers. And to make a difference to transform the way the world works for good. We offer in-demand programs through master's degrees, non-credit certificates and short courses and free master classes like the ones you're at today.
On the most in-demand topics through flexible formats. We concentrate our portfolio in 3 areas, healthcare, leadership, and data and technology. Because you're here today, you may be most interested in our data and technology, our portfolio programs include our Masters of Technology Leadership. Our applied AI and data science short course and upcoming it will launch this fall course on AI ethics and governance. We also as you could see on the slide have a portfolio of offerings in health and leadership as well. Are applied to DI Strategy Certificate and our leader as coach program in our effective communication program, which is all recruiting now.
So we'd love for you to be a part of any of those programs. And we are constantly adding new programs based off of demand. So one of the reasons we are excited that you are here today is because we'd love for you to shape the future of our portfolio. At the end of today's session, I'm going to have some homework for you. I'm going to launch a couple of polls asking you what you would like to see if we were to launch a short course on AI and leadership, which we would love to do. So be start thinking about what your goals and needs would be for that program. My last housekeeping item before I introduced and turn things over to today's incredible speaker. Is that if you have a question at any point in today's program, please don't wait until the end to ask. I expect to get a lot of questions at the end. And so we'll do our best to get to as many as possible.
So please click on your bottom bar of Zoom, click on the Q&A button, and type and submit your questions there. I'll be monitoring the QA throughout and I'll be ready to answer any questions you have. So with that I am really, really excited to turn things over to our incredible speaker today. I'm so grateful. Took time out of his busy schedule to be a part of this program. Ryan Mcmanus is an entrepreneur, executive, investor, board director, and advisor.
He has worked in the AI space and led it for the past 20 years. And is currently on the board of several AI startup companies. He has advised Fortune 500 companies. And was recognized as one of the top 100 most influential corporate directors by the National Association of Corporate Directors in 2,023. He is truly considered to be one of the leading voices on AI and its impact on the workforce and we are really lucky to have him here today to share his knowledge. Welcome, Ryan. Thank you so much for joining us today. Thanks very much, Allison, and thanks to Brown for inviting me to join you all today. And thank you all for joining.
Let me go ahead and bring up the slides and we can Get started. So I'm delighted to share with you some of our thinking around. The leadership impact of this massive wave that we're all living and leading through. Obviously all things AI but more specifically generative AI. And what I'd like to discuss today is really 4 main areas. Number one. Is what this represents in terms of the next phase of the digital economy, which we've all been building out for the last couple of decades. I'd like to also. Explore what I personally think is the long tail of this new platform wave and really differentiate between what's happening today and where we see things going. And I'll illustrate that with some very specific leading edge case examples. We'll talk about the acceleration.
Because really on day one, if you will, of a multi-year, sort of evolution of this technology, so much more to come and it's accelerating incredibly quickly as you're sure you're all feeling in your daily life and then we'll close out with some perspectives on the leadership impact both at a C-suite level but also at a board level because it's a real incredible opportunity for even further alignment between Those 2 leadership levels, but also the rest of the enterprise. Before I go into those areas, I'd like to share with you 2 questions that I've had the opportunity to ask several 1,000 leaders around the world over the last few years. The 1st question is this. How many of your organizations were actively discussing the impact of artificial intelligence in your C suite in your business strategy in your board rooms. Before any of you ever heard of Open AI or Chat GPT 3. And when I asked that question, relatively few hands go up, certainly some hands go up, but relatively few hands go up. And my second question. Is how many of you are comfortable that your organizations can keep pace? With the unrelenting speed of change that again we're all dealing with and they're nearly 0 hands.
Ever go up in the room. Those are the same question. They're just different formulations of exactly the same question. And the point that I'm trying to make here. Is that leadership in whatever the technology domain is and critically with GenI. Leadership does not happen by accident. It happens through an explicit instrumentation. Of our strategy of our Exploration of where things are going but also an explicit instrumentation of how we invest and how we engage the broader Enterprise. So we'll come back to that as we get to the end of the formal remarks. Let's put this into context.
In my view, there have been 3 and now we're on to the 4.th Sort of major platform evolution across. The digital economy and many of you will have lived through these. The 1st phase was web one that was based on client server was largely a read-only information access only. In environment. It was relatively quickly superseded by web to the platform economy. Based on cloud and now we're in an environment with read write collaborative services based. Technology capabilities you can already see that if you were a web one leader but you missed the jump to web 2 as many organizations did that you really started to struggle very very quickly.
Web 3 and my estimation is going to be. Around different transaction models where it's still fairly early innings on this one. This will be enabled by all things blockchain, not exclusively crypto, but all things blockchain. So this is really an ownership economy model. Again, different ways of transacting across markets. And what we're going to talk about today. Web for call it a generative economy or an augmented economy, obviously powered by all things artificial intelligence and in particular the new GenI networks.
The reason that I take you through this very quickly is because there's a specific pattern here that continues to apply. And that pattern is that the organizations that look at technology as a pure automation or productivity driver tend to fall behind and in many cases fall behind pretty quickly. The organizations that win in every single phase and you can break these phases down into more detail. The organizations that win are those that bring new business models to market, new value propositions to market. Based on these new platform. Technologies. It's a very basic lesson, but it's 1 that is very also overlooked.
In many boardrooms and C suites because we have a tendency to Ask ourselves a question, how can technology help me do what we're already doing, maybe better, faster, cheaper, which is fine. But we missed the question about transformation and new value propositions and we need both. And the second question is, the second question is actually much more challenging than the 1st because we have to challenge a lot of our assumptions and we have to. Critically think and lead very differently. Let me break this down a little bit. Before generative AI. We were all living in a world that Gartner described as having 4 levels of data capability. Ranging from descriptive to diagnostic to predictive, ranging from descriptive to diagnostic to predictive to prescriptive. My strong thought is that most organizations were somewhere in the 1st 2 levels. And suddenly we have a generative level which allows us to ask a fundamentally different question of our machines, which is, what can we create? And at an atomic unit. This is the most important thing that I suggest leaders start asking. We've never had machines that can create before.
We've had incredible machines for pattern analysis and those kinds of, you know, processing. Areas. But we've never had machines that can create. And again, this is the fundamental change that we're starting to see. Roll through the economy. And again, it requires us to think in a very different way. Now, productivity is an immediate application of GenI. And it's not only the big LLMs that all of us can access for either free or a few dollars a month. Those are pretty amazing in terms of what they're doing in basic content or image or even video areas. Moving forward. The next tick up for enterprises, of course, is to bring in specific sector and or functional.
Domains obviously I'm not going to drain all of this here today but there are hundreds and hundreds of these already off the shelf of available to us. This is just one portfolio that describes the cross enterprise. Applications immediately available, for example, in banking. And so we start to move up the productivity ladder very quickly. And again, We have incredible stories of different gains that organizations are realizing. Through this kind of immediate and tactical strategy. And so we do need to be looking at that with some priority. What we also need to be looking at, however, is where this can go. And my suggestion to you is that the long tail of generative AI is going to be both in productivity. But specifically in productivity in things like innovation. Idea generation prototyping, market testing, market analysis, user experience design, and those kinds of more creative areas that enterprise are Enterprises are engaged in.
This is still relatively young in terms of the evolution of the capability and the visibility of it in the market. But I'll share with you some examples here in just a moment. And again, it gets back to that atomic unit of what this technology does for us. The implications at a strategy level here could be as profound. As flipping our innovation. Methodologies on their heads. And I'll explain what I mean by that. As we get into some of the examples. To it.
Just a random selection of segments. In food, Nestle is using generative AI for trend spotting trend analysis for ingredient exploration and generation. For cracking health benefit insights for rapid prototyping of new products. Bridgewater is publicly discussing their death of AI, which is artificial investment associate. This is both sort of a co-pilot for their portfolio. Managers, but also. Directionally an interest in creating Gen. Established and designed portfolios. New Vasive is a company that uses GenI to create 3D printed titanium spinal implants. That both make that implant more flexible but also reduces the sort of health risks. Of subsidence from 20% to nearly 1% which is an incredible generational shift in the safety levels of that kind of capability. Arctic blue AI. I uses GenI to actually prototype Enterprises GenII portfolio investments.
To address the fact that many of these experiments are not delivering the intended ROI yet. Limbic uses JI to both proactively identify information threats but also optimize responses. Which of course is a very important need, especially in annual election cycles, but across different industries, brand risks and the rest. So there's lots happening already at that frontier. Which leads me into What I consider to be the real super powers here. And I think that there's 4 areas. That leaders need to be thinking more creatively about. We have the ability to do things like problem solving.
Increase productivity and innovation. Discovery and Complex Solution Engineering. At levels that we have not seen before. Let me explain what I mean. In terms of problem solving. Tesla used GenI. As part of their design methodology for their new hair PIN electric vehicle motor. So it's an extraordinarily more powerful engine. It manages heat. Much better has an advanced range, all of those things very important and very differentiated. But they also used the machine to remove rare earth minerals from the design. And that has not typically been.
Part of the architecture of electric vehicle motors. Why is that important to Tesla? Well, on the one hand, of course, rare earth minerals have a significant sustainability consequence. And secondly, they're quite expensive in terms of input costs. And so there's a lot of wins that Tesla gets out from solving these kinds of problems. That had not really been addressed previously. In video that you all know. Has their own internal large language model called Chip NEMO. This is a copilot for their chip design engineers. Based on the decades of chip design data and information that they have. Really accelerating the design of incredibly new chips. We'll see a couple of examples of that in just a moment.
And so incredibly enhanced productivity tools. Even in things like advanced engineering. Some of you may have heard about these headlines just in the last couple of months. Oxford reveal that AI prostate cancer, sorry, that prostate cancer is not just one disease. Using AI. Think about the thousands of researchers who have worked on this problem. Historically, the billions of dollars. That have been spent on diagnosing this. Horrible disease. It was AI that actually started to break it apart and look at new patterns and you can immediately see the implications for both identification as well as new therapies now that we understand it better. Similar kind of model but a very different sector we're starting to see a lot of new research coming out in terms of equity performance and critically what are some of the most important drivers of stock market returns.
Many of them are not the kinds of things that our research has led us to before but again we can look at these new patterns and shine the light on them and so the discovery capability of these new tools is quite, again, extraordinary. And that takes me into What personally I'm most excited about. This idea of complex solution, design, and engineering. And obviously we're moving. Beyond the content. Level here into really, sort of industrial approaches, orbital material uses GenI to design entirely new chemical based materials to solve different industrial problems. And a couple of years ago, we saw this triumphant of organizations partner together. To design a new therapy for the most common form of liver cancer. As well as predict the outcome. It took them 30 days to get to that result. We've never had.
Technology capabilities that allow us to move this quickly and solve these kinds of problems or discover these kinds of truths, that we have at our fingertips today. And this is again very early stages, but I think that this is what We're all we can all look forward to. In terms of where things are going. Now there's both an offense and a defense here of course and we'll talk about the risks. Now. Obviously, most organizations today already have their policies and understanding of the cybersecurity threats and evolving cybersecurity posture so we don't spend too much time on that. Right now, one of the things that leaders need to get their heads around is that it's not only the traditional technology risks that we're looking at here.
There's an entire litany of new risks that we have to get onto the board agenda and certainly into our CIO and CTOs agendas. One of them just to illustrate the portfolio are things like prompt injection attacks where someone comes in drops a prompt in into our system and can really hijack a lot of our capability. But there's again whole litany of these that we need to understand and explore. The differentiated risks that we're looking at include things like the risks of bad actors. There's been a 700% jump only in banking between 2,022 and 2,023 in terms of deep fake attacks and that's just a single sector over the course of a single year. And so this is exponentially growing as a threat. You're all aware of how this is translating into global and geopolitical risks.
This is big game strategy kinds of stuff. May have an important implication for where we do business in different ways and how we do business. There's rapidly emerging global and local regulations. There's also just a strategic risk of are we moving too fast or too slow? And in certain cases if an organization has moved too fast and overstated their AI capabilities and seen a resulting equity pop because of that the SCC is actually cranking down on those kinds of activities. And then there are entirely new challenges that we have to get our heads around. Such as the entire sustainability and energy footprint and requirement for powering these models. And more and more the question leaders can ask themselves is what's the difference? Data and energy usage and that answer is getting closer and closer to it being exactly the same. Conversation. In terms of talent impacts, we know a number of things, of course, are happening here.
In different domains and different sectors, we're seeing Delta's productivity delta's between 30 and even 300% in some cases. Those are enormous opportunities for our teams. But there's some consequences to that that we have to also consider if we are able to more More automate some of the initial early jobs that our people do, we need to put in place other ways of training the next generation of managers to make sure that we have people who can continue to lead the organization. Even if they didn't necessarily go through the same kind of formulation, even if they didn't necessarily go through the same kind of formulation early in their career. There's very important conversations. To be had around job displacement. And as all of this place through. We can imagine. Teams where we have some people who are very good with the tools and some people who are not How do you balance the KPIs and incentives there?
And make sure that the culture continues to adhere. In terms of the overall. Enterprise. All of that being said, we also know that our top talent wants access to these tools. They want the productivity. They want the kinds of learning that the tools can give us. And to that end, we're seeing some larger organizations. Describing the number of applications and use cases that they see, JP Morgan Chase. Is actively talking about having 400 use cases for jENII or Dana already has 750 of these in-house GPTs. Which takes us to a very expansive. Market of these capabilities. But one of the things that we can ask ourselves is It's not only about individual Jenny. It's actually becoming a question such as. How much work productivity design? What have you? Can one person do?
If we have hundreds or thousands of these autonomous agents here to help us out again. Fundamentally. Different kinds of ways of thinking about the capability. So how do we bring all of this together as leaders? This is a largely different environment that we have to get our heads around. And the question I come back to is how do we keep pace with the development? My friend and colleague David Bach who is the CEO at Optios, gave me this statement before an article that I wrote, even last year where he said if we used even 6 month old machine learning systems We would be so far behind the curve it would be embarrassing. The need to bring speed into our enterprises. Has never been greater and speed is not something that most organizations or industries really needed to have.
Before this kind of acceleration. And the acceleration is. Profound. So just this year what you're going to see if you haven't noticed it already is AI at the edge. That's what the Apple announcement represents. Okay. AI native hardware across personal but also industrial devices. An explosion of small language capabilities, which means we can train the models much more quickly and much more cheaply. An explosion of open source models, which is a democratization. Of the capability and that's what leads us to this idea of billions of agents. Surrounding us and really changing the nature of work. Furthermore, GPT 4 is estimated to have an IQ equivalent of about 155.
Many of you will know Einstein was 160. So what we can anticipate as and when GPT 5 comes out or other leading models is that we're going to start exceeding. Anything that we've really had access to before. And the Deepmind CEO in his TED Talk. Described the incredible Explosion of computational power over the last 10 years. 5 billion times more computation. Then 10 years ago that's a 10 X jump every year for 10. Yes. Nvidia just a few months ago announced its new chip, the Blackwell B.
200. Their explicit intention here is to democratize trillion parameter. Hey, I. Brillions of inputs coming into these models to make them more refined and more powerful. Incidentally, this chip is 30 times faster, but also 25 times more efficient. In terms of energy consumption than their previous chip. These are not minor games. When we look at the acceleration. And by the way, in video wasn't done with this announcement.
They're already talking about new AI chips just earlier this month. Because of the acceleration of The design and of course. The market demand. This is moving as many of you are, I'm sure aware, at speeds that we've really never seen before. With all of that said, how can we imagine a long tail of this capability? So I like to think about where this is going in terms For example, user experience. But also a simplicity of. Generation. So let me invite you to think about a few different use cases. Let's say you're about to board a flight and there's something that you really want to learn. You can spend your time downloading different courses. Many of them are going to be terrific or let's say that there's a new kind of content that you're interested in looking at.
You can design the nature of the experience that you want. Perhaps you design a new chapter in a video game that you're playing and you ask the machine to generate it and you start to play. Or you start to watch. You start to consume the content. You don't need to be a designer, you don't need to be any of those things. There are very important copyright issues here. I'm not going to deny those. But the capability is already being. Developed. Here's another one. It's 7 o'clock in the morning, maybe 5 o'clock in the morning for some people.
I'm about to go ahead my breakfast. I'm about to go to the gym. Generate exactly the optimal workout for me today. Generate the optimal eating plan for me today. So that I can be as healthy as possible. Or in a more pedestrian sense. Think about all of the icons on your phone, all of the software that you have to work with. And all of the complexity that you have to do your job. And now imagine that coming down to a single kind of interface like this. Generate the 3rd quarter cash flow forecast. Based on an assumption of different kinds of inputs. And the machine will go back and hit the enterprise and bring it back for you.
This is not science fiction. We're building all of these capabilities. Today. And so if you're paying attention to the market and some of the leaders in the space, This is the reason that you'll hear things like. We've never. Seen anything like this before, right? And Dreessen saying that this might be the biggest change that any of us have ever seen. Or the CEO Goldman Sachs saying that AI is driving more companies to reinvent themselves. And that we have candidly never seen. Anything like it. This is a response to the understanding that this is a platform shift. And that atomic unit of what can we generate, what can we build. Is going to drive an enormous amount of opportunity as well as risk. So let's bring it back a little bit. To a leadership consideration. In my view, there's really 5 things. That leaders can do to help their enterprise along the way.
Number one. Is we need to embody this change. We need to In embrace it and have a strong point of view in terms of what the implications are here for our industry as well as for our specific organization and markets. Number 2, we enable our organization with some of the more easily accessible tools in as far as that's appropriate for us. Number 3, we experiment with some of the more leading edge tools. Things that can actually start comfortably hitting our own data sets.
And our core business. Number 4, we start to execute. Which could include a portfolio. Again, think keep in mind the hundreds of these kinds of things we can bring into the enterprise. How do we do that strategically and safely? And then number 5. We expand with some of the real edge capabilities. That effectively turn our organizations. Into the type that can use those more single kinds of interfaces. And result not only in massive productivity gains but some of that complex solution engineering and design capabilities. Without compromising.
The business or a privacy or a data or a market. There's levels to this implementation. It's not a 1 size. Fits all. So to close. Even though we're in a new sort of platform stage with GenI. There are some things that we've observed over the last couple of decades that continue to apply here. And these are what I call the keys to success. In the digital economy. These are the patterns. That continue to surface and continue to distinguish between the winners and the losers across sectors.
The 1st one is that digital business models win. We talked about that earlier. If our company, if we're automating and our competition is transforming, If we're looking at cost efficiency. And our competition is bringing new value propositions to market. We are going to be in for a very rude awakening. Number 2, automation and transformation are not the same thing. I've reviewed hundreds and hundreds of digital transformation and AI strategies. This is the single biggest mistake I see in those. We talked about transformation, but we only invest and create incentives for automation.
They're not the same thing. We need both. Number 3, and this is. Very much accelerated by GenI. This technology allows us not only to imagine things we could't imagine before. But to deliver things that we couldn't. Delivered before. This is what personally I'm most excited about. In terms of growth opportunities. In order to address all of these previous areas we need to be actively experimenting with what's next. It's not enough to go to conferences. It's not enough to read about it. You actually get have to get your hands dirty and be. Playing with the emerging capabilities. Experimenting with those, which means resource allocation and support from leadership.
And then number 5 again. We have to explicitly organize the Enterprise and our leadership capability. For these changes. We don't win in these. In the digital economy by mistake. We win because we understand the implications. We understand that we need to have a combination of a bold vision. But a way to learn our way experiment our way towards that bold vision. As well as the requisite talent incentives. And other kinds of things that make the machine go. And keep in mind the atomic unit of strategy here as we move forward. It's not only about how do we bring in technology. To support what we already do. It's not only about increasing our productivity as critically important as that is especially in the short term.
But over time, this question of what can we create, what value can we create? What new solutions can we create. Is I think where you're going to see a real ship between winners and losers. In this sector. Incidentally, The National Association of Corporate Directors will have a report coming out on the implications of technology and governance in September. I'm 1 of the commissioners of that blue ribbon report. And so this will, there's a lot of places that we're developing much more explicit instructions both for boards as well as management teams and leadership.
Terms of how to get this right. It's actually very clear, but it does require. A specific instrumentation by the leaders. So with that, I will close my formal remarks and welcome any questions, Allison. Thanks very much. Thank you so much. This was absolutely incredible and thank you everyone who has already submitted great questions. So just a reminder that if you do have a question, we are going to move into the Q&A portion in just one more moment. So please click on your bottom bar of Zoom, click on Q&A and type and submit your questions there. And before we get to your questions, I actually have a few questions for this audience. So I am going to launch a poll now.
You should see it on your screen now. It is a for question poll that will actually help us to build a course on this. So many of you have already asked us if we have a existing AI courses. We have one existing AI course as a technical course and we'd love for you to be a part of it if you are looking to get in the weeds and in the data. But we do not yet have a would love to have is of course it's more on leadership and AI and the leadership competencies needed to lead. In the time of AI like what Ryan gave kind of a sneak preview of today. And so we would love for you to help us to influence what that could look like. Question one is which leadership skill do you think is the most crucial? For effectively managing AI projects? You can only select one. And the choices are strategic vision and planning. Technical proficiency and understanding, ethical decision-making and responsibility.
Effective communication and collaboration or change management and adaptability. Question 2 is what aspect of AI is impact on leadership? Would you like to explore further? And that is also single choice. All these are single choice. So you have to choose the one you most would like to do. And your choices are leveraging AI for data driven decision making. Managing AI driven organizational transformation. Developing policies for AI governance and ethics.
Building and leading AI focused teams. Or in dancing personal leadership skills with AI tools. The 3rd question is, which technical topic related to generative AI interests you the most? The choices are advanced neural network architects, architectures. Training and tuning large language model models. Reinforcement Learning for Creative Applications. Ethical considerations and bias medication in AI. Or generative AI for data augmentation and synthesis. And then our last question is, which area of generative AI are you most interested in exploring further?
And your answers are generative adversarial networks. Natural language processing advancements. Deep fake technology and ethics. Creative AI in art and music or AI generated content and marketing and advertising. Looks like we have about 60% of your responses in. So I'll leave it open for just a couple of more moments. But let's move on to the answering their questions as they answer ours.
We need both at once. And I'll go over to the QA. Ryan, the 1st question for you is how do you think generative AI will make the most impact in physical product design in the next 5 years? No, great question. And so it's already in use in a lot of different areas. I'll give you, just one example. And it's a small example, but I think it illustrates the point. So we already, well 1st of all, we already talked about the Tesla engine example, 1st of all we already talked about the Tesla engine example but NASA has a great case where they showed how their engineers used GenI to design a different kind of component for one of their spacecraft. And it went from a very traditional kind of triangular, you know, very linear geometric. design.
To a much more fluid organic design that frankly I don't think many if any design engineers would already would have come up with. And the impact of that, they went through several iterations to get there. The impact of that was a very significant decline in in the cost of the component as well as a very significant expansion in terms of the structural integrity and the the force that that component can take. So I use that as a simple example that's sort of in plastics and metals. To illustrate where things are going. Depending on how you define define physical domains you know, we're seeing these kinds of experiments taking place in a lot of different fields and they can range from that basic kind of individual component all the way through to a complex system such as, such as an airframe such as a, an electric vehicle engine. And that's 1 of the things that we need to get our heads around. It's not only about specific discrete products.
It's actually about complex solution design. I see this happening, you know, more and more across different sectors. Including physical sectors. I haven't really found one yet that I, that I don't see some experiments taking place in, but be it climate via agitec, be it physical product design, be it industrial, be it transportation, be it aerospace and defense. There's lots of examples that you can already see. The implications of that capability are going to be faster innovation cycles, different kinds of supply chain requirements, hopefully increased supply chain efficiency. Very importantly, new kinds of competition. And part of the question we have to ask is, Are we locked into an assumption of design? That other potential competitors can compete with even groups that we've never heard about or, sort of sectors that have never been part of it.
Great question. Yeah, thank you. The next question is, how do we up skill staff in AI if it is always changing and we're always getting Yeah, again, you know, big challenge that a lot of organizations are dealing with. That's part of the reason that I wanted to have those several levels of. Sort of an execution or leadership framework. Because on the one hand, We need to understand. As Alison pointed out. People are using this whether or not we have. Approved it and that's a massive, sort of reality.
We have of course be careful that they're not using it in the wrong way that they're not putting IP or customer information into open source models. That's a very big risk. But people are using it and starting to learn. My suggestion is that an organization describes a roadmap for the capability. What are some of the basic things that we can bring in? A lot of organizations, for example, are just built just using the existing stack that they have with a Microsoft co-pilot or similar. With the right kinds of privacy and security. And that allows a little steam out of the kettle, if you will, people can start to experiment with. The capability. And then you go into a more strategic kind of discussion. What are the high priority areas?
Where could we, for example, use some of these? More refined or specific productivity tools that we discussed. What are the biggest bang for the buck? Is it marketing in my organization? Is it financial planning? Which of the sectors are actually the most relevant? Where could we do the most? Have the biggest bang in terms of the productivity. Obviously your computer, software engineers are already using these things and seeing extraordinary that benefits there that's that's a no-brainer in many cases. So there's that kind of analysis to look at. But I think at the same time, You also want to be describing why this is important.
Why is it important for your teams? To start getting their hands dirty to embrace this shift. How are you going to train them? How are you going to empower them? And what are the implications for the industry in the business model moving forward? It's actually relatively simple. To get people learning the basics of prompt engineering. And the basics of using these tools. That is not complicated. Culturally. There are some things to manage. And so Yeah, yeah, and to just sort of end my comment here.
Training, there's lots of different opportunities. The the most powerful and to keep people up to speed is to actually enable them with tools because that will pull them along as the technology develops. But my suggestion is not to do it in a vacuum, make sure that there is a sort of cultural incentive. You know, and safety. To the implementation that comes along with the training. Yeah. Thank you. So comprehensive and helpful. We have 2 questions about how innovative AI truly is. So the questions are that if AI is just, you know, bringing in sites from existing data sources, then how can it actually impact or innovate in areas like medical technology, pharmaceuticals, and design. Yeah, great question. So let me take the pharmaceuticals point. I've done a lot of work.
I mean, I've worked at the pleasure of working across industry so I can see a lot of these patterns that are that are sort of more global or generic. The forecast for pharmaceutical discovery is that I think it's 56%. Our pharmaceutical discovery is going to be genI driven moving forward. So the obvious question to me based on that number is What's the other 44%? I think the other 44%.
In some cases is going to be things that we haven't that we don't have the data for you. Right, where we still have to do some of the basic science level work in the original experimentation. So. It depends on how you define. Innovation in this very important question. Because obviously the machines are working off of existing data sets, but they are able to find patterns and find recombinant opportunities that we haven't seen. Through human activity before. And so that was part of my reason of giving you those several examples when I was talking about the superpowers. Right? We have had decades of research into into different cancer areas. We've had decades and decades, again, billions if not trillions of dollars of research into equity markets.
But we weren't able to actually. Join up that data and that background, those inputs into some of these. More innovative solutions. Or at minimum we haven't been able to do it with the speed that we're seeing. Happen now because the way that the models work Is you run tens of thousands, hundreds of thousands of experiments in the in the machine. And then the experts come back, right? And they refine and they rerun it and they refined and they're looking for the best possible outcome and solution based on how the machines work. So at minimum, you're looking at millions of human years if you were to do this. In the traditional way.
So it's a fair challenge. But we need to break apart. That question into the multitude of things that, comprise really an innovation capability. But it's a very important and a very, you know, appropriate question. Thank you. That's a great point. You had mentioned culture and there was a question about culture. So I think that's a great next one to go to, which is how can boards use culture to drive digital transformation and lever the potential for AI at the edge. Great. Okay. So, So let me talk. So there's a couple of levels here. Let me talk about board culture in the boardroom specifically.
And again, what we have some amazing material coming out here in September for those of you who are active board directors. And others. And then we'll talk about the relationship between the board and, and the, and the enterprise. In the boardroom itself, my very strong position is that boards need to be thinking about bringing on a different kind of capability into the boardroom, a different kind of experience.
That what I mean by that is not necessarily an AI expert per se, although that might be appropriate in some cases. But more relevant is people who have built new businesses based on emerging tech. And have gone through that kind of experience. Operating something that already exists and building something new are very, very different skill sets. And a big mistake that organizations make is to equate the 2.
Some people have done both. Many people have only done one or the other. And that's perfectly fine. But if you're looking at how to keep pace at the edge. You probably need somebody who's actually been successful or done that in their experience. That also brings us to a different, I think opportunity for boards, which is the following. How do we create space in the boardroom?
To ensure that we have these ongoing conversations. Who is responsible on the management team for keeping the board? Informed and for driving the kinds of experimentation and learning that we talked about. And the nature of that conversation is different than what many traditional board conversations were. A traditional board conversation would have been something like, you know, what is the ROI and the investment? How is the going to market work? Those are still relevant, but a new question that boards can ask is, what did we learn this quarter? Right, where do we see how do we see things evolving and management critically needs to understand that we know as directors that that's not a that's not a right or wrong answer necessarily. It's exploration and it's going to evolve. So there's actually a different kind of level of trust.
Between the board and the management team that has to be brought in here. Incidentally, this trust question is very big and it goes across Many levels. There's trust with our people. In terms of what are the implications for their job security and their advancement in the organization. There's trust around our data and our customer data. There's trust again between management and the board. There's trust within. With our markets and at a more societal level. And so culturally this is actually a very broad question. There's trust that even at the edge of it in terms of Are we doing this in an ethical, secure, regulatory compliant way? Are we doing it in a way that we're trying to minimize the energy bump in as far as possible? That's a whole deep dive that that you can look at. But I come back to this point about an explicit organizational instrumentation.
It doesn't happen by accident that we keep up. One final point here. We actually wrote a couple of pieces for the NHCD about science, technology and innovation committees. We've actually seen a 20% jump between 2021 and 2023. In terms of this additional 4th committee of the board. And that, information is available, it's public, but that's an indication of how important boards are seeing this as an ongoing topic.
The fact that we're actually changing the structure of the board. To make space for the conversation. It's a terrific question. Thank you. How is it possible to avoid hallucination and even recognize who we Yeah, again, really great questions. And so. That is most basic level the way the easiest way to avoid hallucination is to make sure that the person who's using the machine can recognize it. Right. And so who's the expert that's at the end of the process that's sort of QA in the machine? That challenge doesn't go away. Another way to And then you get into more sort of technology drivers of reducing hallucination.
Having high quality input going into the model is of course critical. The weakness of the big public models is that it's trained on everything. They're trained on nearly everything. The power of that is that they have these incredibly exponentially more powerful algorithmic capabilities because of the scale that they've addressed. However, if you can somehow start taking advantage of the of the compute and the algorithmic capability, but focused on a higher quality, more limited data set. And as far as you have the capability to do that, you will see hallucination, you know, go down. Go down, you know, pretty, pretty significantly. But again, there's a culture here. Who is responsible at the end of the model? What is our process and policy and requirements for using things that come out of these tools and making sure that our experts are actually validating.
There's a whole recipe set that you can use there. Thank you. Next question is there are major ethical and governance considerations with AI. How do we protect ourselves? And I would also add to that as leaders, how do we protect our customers, our stakeholders when using AI? Yeah, and so this is, you know, there's a lot happening here and historically as we've, you know, worked through any number of technology. Changes, of course. What we've seen is that regulatory has not really kept pace very well.
And that's not exactly the case here. Interestingly enough, as fast as Jenny I is moving in on the offense, the defense and the regulation is actually moving very, very quickly as well. And so Ethically, we need to have our, our, you know, we need to have the right kind of alignment and clarity in terms of what we want to do here from a regulatory perspective and I'll just bring ethics into the regulatory domain. The European Union, as many of you know, have already put their legislation on the books, it will start to come into effect in 2,025 china is doing very different is working on this united states directionally has some federal regulatory efforts in the works at a state level there are states many states actually that are both considering and or have written into law different regulatory environment so on the one hand it's actually kind of complex On the other hand, My colleague, Dominic Shelter, Leipzig, who's attorney in California, wrote a great book called Trust.
Where she talks about the 7 areas that leaders are going to have to pay attention to. 2, yeah, to get the point of your question to feel safe and to make sure that we're doing the right thing. And those 7 things are high quality data use. Which is part of what we just talked about in the previous question. A continuous testing monitoring and auditing capability. Making sure that we have an explicit risk assessment and keep in mind it's not the same risks that we had before. You actually have to expand your ERM lens to take into account the very new risks that generative AI pose for us. You have to have of course technical documentation. A degree of a very significant degree of transparency in terms of how the models work and as far as you can do that.
Human oversight, which we talked about in terms of hallucination as well. And then finally a failsip, right? So how do you turn off the machine? If it starts to go a little bit wild on us. And so there are pretty clear frameworks that you can start moving and that will that I think will serve you very well. Thank you. So much. It's so helpful. I think we'll have time for one more question, which is too bad.
You have all asked really incredible questions. Thank you all for these thoughtful questions. So the last question I'll ask is would you consider the advent of LLMs as a democratization of intelligence because now even non types. Can communicate with data and help to start interpreting the data. Sorry. That's a that's a very big question. I love the question actually in terms of a democratization of intelligence. Possibly. It's definitely a democratization of of some of these super powers because the point part of the point that the that the person asked the question is making is comes back to those 3, you know, those 3 sort of what's next, where things are going kinds of, graphics that I shared. And so in as far as we don't need to be. Software developers trained software developers to write code.
Now that's a bit of an exaggeration, but it's but it is directionally, directionally correct. Yes, it's a democratization of capability. In as far as I don't need to be a graphic designer to create those 3 images that I showed to you. It's a democratization of of capability.
The answer gets more complex. When you when you start to look at the segmentation of the industry because one could actually say that there's a bit of an oligopic structure, that's starting to take place with the very vast LLMs. Overstating it on purpose. Those LLMs are bringing new capability that can be, you know, extended into different. Questions. And so at an individual level, I think you're right, we can do vastly different things. At our own scale with these tools and more specifically with a portfolio of these tools. At an industrial economy wide level it's still playing out in terms of. You know, who holds the cards, how distributed is the capability a lot of what's happening today are models that are built on the very major LLMs which means that there is still, you know, certain kinds of power, structures that are in place. And one of the things to really pay attention to is, is how open and how different are the models going to be that we have access to.
There's there's a lot of complexity to it, but, but directionally that questions, is actually very important one for all of us to be asking. Thank you. So much. This was absolutely incredible. I learned so much in such a short period of time and I am sure everyone else here did as well. We're getting some positive feedback in the chat. And I want to thank everyone who joined us today as well.
We know you are busy and we are appreciative that you want to be on the cutting edge of what it will take to bring yourself and others into what's next. That you've chosen the Brown School of Professional Studies as your partner. In that. So I thank you for your input. Your amazing questions and shaping the future of our portfolio. And we do hope to have a feature course on AI and leadership. So please. If you are not already subscribed, subscribe to our newsletter and we will announce when that is ready to come into fruition. Thank you so much, Ryan, for being here today and for giving us such a condensed with such a rich insight into where things are now and what leaders need to be able to do to be able to lead us into the future.
2024-11-26 03:01