Today on Episode #810 of CXOTalk, we're discussing enterprise AI with two of the top CIO advisors in the business. Welcome to Tim Crawford and Isaac Sacolick. Boards and CEOs expect CIOs to have a strategy, to have a defined vision of where the organization can take advantage of these technologies. There's a lot of pressure top-down, but there's also a lot of bottom-up activity, and that's both people who are very excited about what they can do with AI and are spending a lot of their time testing out these different tools, and people who are really fearful, scared about their jobs. What are they going to be doing as a marketer or a software developer as prompting becomes a commonplace part of their technology acumen? What should CIOs be doing in terms of championing these initiatives and bringing the pieces together? That's the $64,000 question that every CIO I work with is asking. Where does it fit in? Where does it not fit in? How do we start to navigate some of the things that Isaac mentioned, starting with the innovation but not ignoring the people element, because there is a lot of concern about job loss and "this is going to replace me"? Most of that fear, I think, is misplaced. I don't think it'll have as dramatic an impact as people might think. But let's
not forget: we have a lot of history showing that change comes with innovation. The real issue for CIOs is how to navigate through that to ensure that you are embracing generative AI and AI more broadly. Granted, AI has been around for decades; generative AI is a little more recent. But at the same time, make sure that you put up the guardrails where appropriate and that you don't over-rotate, and I think we'll talk about what that really means. Then, at the same time, be a leader. Be a leader to your organization both up the
chain of command as well as parallel to your position and within your own organization. Help them understand and educate them on what those opportunities are today. Frankly, that'll change over time. As we get to know these
technologies more and more, it will change. We have to be adaptable. We have to be at the front end of the line and help guide that process as leaders. That's probably the single biggest opportunity for CIOs: to be at the front of the line and provide that guidance in a balanced way. I agree with you, Tim. Think about every introduction of a new technology, analytics, or AI capability now. We're moving up stack. We're taking things that we do today. We figure
out a workflow around it, a way of working. All of a sudden, a new technology comes into play and says, "You know what? I don't need to do keyword searching anymore. I'm doing prompting. And I can continue to re-prompt," and that's changing the workflow. It's changing the skill set of people and how they're interacting with these tools. That breeds some who just lose a ton of their time because they're just experimenting and playing with the latest gadget that's come out. Others are just fearful about what this all means. "I'm a developer. I know how to code. What's Copilot going to mean for me? I
already know how to do development very well." The same thing is happening on the marketing end, in terms of building campaigns or writing content. That's what's happening in the trenches, and when you talk about what CIOs should be doing, the first thing that I think about is coming up with the types of problems that you want people to be focused on. Where is this really going to have an impact in your organization? How do you register the types of value that a particular AI is bringing to an operation? How do you expose that? How do you make sure that when people are using a particular tool, they're using data in a sanctioned way? Are they using the right tools with the right data? That's the leadership role, bringing it all together and saying, "We're focused here. Here is our short-term vision. Here is the strategy around it. Here is the registry that we want
to use to capture ideas. And here is how you provide feedback to us about what's working." How is this different than any other technology initiative? What Isaac was just describing sounds like plain vanilla leadership for technology. We get lost in the details. We forget the big picture. What's our strategy? How is any of this different? If you look through the big tectonic shifts in technology, whether it was going from mainframes to distributed computing or the Internet or cloud, all of those kind of sat behind the scenes within IT, at least initially, and then eventually would get out to the public domain. One of the things we've seen with generative AI is it started in the public domain, and we have ChatGPT to thank for that. Now, everybody and their brother has access,
and most people today, the average person has used ChatGPT in some way, shape, or form. They've been exposed to this new innovation and seen what it potentially could do. Now, that's coming from a position of just experimentation. But that does filter into driving what enterprises then have to look at and have to think about and have to do.
Meaning, I used it to build this content. Isaac talked about marketing. Content creation is one of the three big areas that generative AI is being used for today. The content creation piece is what common folks are using generative AI for. Then they think, "Okay, well, I can do this in my personal life. How do I do it in my professional life?" to make that end of the spectrum easier to manipulate and work through. You have that coming about, which I think has just accelerated the adoption and the interest in generative AI specifically.
Some have said it's having its iPhone moment. We saw that when the iPhone came out; there was this mass adoption because people realized, "Oh, my gosh. I can take these 20 or 30 different functions, and it's all within one device now." Generative AI is kind of having that same moment. But we're going to learn. We're going to learn where it does fit and where it doesn't fit as we go through the upcoming months. We're now seeing more companies looking at how to leverage large language models behind the firewall, looking at their information, starting to look at everything from textbooks in higher ed to financial information that's coming in the form of news and other content, and saying, "How do we make people smarter with this content?" and also using the capability of natural language querying and prompting so that the skillset bar goes down, so that we don't necessarily have to have experts in every field. We have to have people who know how to ask the right questions and can challenge and interpret the results. I think
that's the bigger challenge here when you look at AI versus some of the other technologies. Everyone that's listening to this has heard the term "hallucinations." Generative AI will hallucinate. It'll make things up. I had someone just recently say, "Generative AI and ChatGPT can be the best liar because it builds confidence." There's actually a story out there now about an attorney who used generative AI to create a brief and then kind of doubled down on it. Generative AI ended up making up the cases [laughter] and the details in those cases, cases that didn't exist.
Is that a bad thing? Well, maybe. But maybe not, because it's optimized for confidence, and so it builds toward confidence. But as Isaac said, there is this big question mark here. When you do a search, you can tie the result back to the source. And so,
provenance of data and trust in data are really important. But with generative AI, because of the nature of how it works, there's a breaking point within the algorithm itself that we have to kind of get over and start to understand. We saw this in cloud with ephemeral workloads, workloads that come and go almost instantaneously. And so, I think we have to go through the same type of process of learning a different way to gain confidence in results and in data, and we haven't gotten there yet. We've got a ways to go. Please subscribe to our newsletter and subscribe to our YouTube channel. Just go to CXOTalk.com. We have amazing shows coming up, live shows where you can ask your questions,
and truly, genuinely, you are part of the conversation. So, check out CXOTalk.com. Where are CIOs in terms of understanding the opportunities, the various risks, and the learning curve that Tim was just talking about? Overall, what's the state of the CIO when it comes to these issues? There's a lot of learning happening, but at too slow a pace; that's how I would categorize it. Technology and the capabilities are moving too fast. The employees are experimenting, and the CEOs are trying to nudge the organization to be at the forefront, sensing that it's going to change business models and customer experiences. They're hoping that it makes their workflows and their employee environments a little bit more efficient than they were in the past. CIOs really need to have a documented vision and strategy around this. I've done some writing around what goes into that: getting into a sense
of what problems to focus on, what areas of the business are going to see a material impact from experimenting with this. What's the goal around an LLM capability that maybe they're trying to build up over a horizon two and a horizon three? I think there's a lot of learning, but the key here for CIOs is to get something out there on paper and start communicating, letting your business partners know that you are going to be the center point of putting a strategy together. There's a risk of shadow AI when a general manager says, "I need to go figure this out," and decides to go off and sign up for a new platform or test a new set of capabilities. Even if the strategy is wrong, it's going to evolve. What we are talking about now as a set of capabilities has changed from what it was two, three months ago. Being able to do blue sky planning with
business leaders, with technologists and data scientists on a very frequent basis to say, "Is our strategy aligned, or do we need to pivot, or do we need to add to it?" That's really the goal for a CIO now: to continually do that as this technology changes. Tim, Isaac just said that the learning that CIOs are undertaking right now is, frankly, too slow. Do you agree with that? And the important question, therefore, is what should CIOs be doing? Generative AI doesn't work in a vacuum, and the remit for the CIO is not just about generative AI. CIOs are still navigating how to make things like hybrid work work, how to make remote work work, how to think about technical debt, and how to deal with cybersecurity and ransomware. All of these other big, huge, monumental issues still exist for the CIO. And so, this is just
one more thing, one really big thing that has popped up and now sits on top of everything else. He's right. The learning is moving too slowly. But at the same time, this is where I think it's important to leverage your peers, leverage trusted sources. It doesn't have to be just the two of us, but it could be the two of us. It could be others. There are some really good sources out there that are actually looking at the bigger picture and starting to navigate and see some of these challenges that are coming up. Then make sure that you're putting the right guardrails in place to at least buy you some time. Don't hinder innovation. Don't hinder experimentation.
I've seen institutions try everything from blocking access to ChatGPT to blocking every .ai domain. I don't think that's necessarily the right way to go about it. I understand why those individual companies do it. But I think it's important to find ways to expand how your organization thinks about experimentation so it's not just the CIO learning; it's the executive team, it's the junior folks on the team bringing fresh, new ideas to the table. It's about leveraging your organization,
and I don't think we do enough of that as IT leaders. But this is a good time to explore it, and generative AI is a great place to test it out. How much emphasis right now should CIOs be placing on enterprise AI and generative AI? How many resources, compared to all the other things that are on the plate? Let's make this really practical right now. I would say a lot, and the reason for that
is that we know all of these big spaces, these big boulders that we have to contend with as IT leaders, are going to rely more and more on data. And so, when you think about data and the mounds of data that you have access to today, let alone what you have access to externally, you have to find a better way, a smarter way, to gain insights from that data. This is where things like generative AI can really help move that forward. We saw that recently with networking companies that are starting to build natural language processing into their processes. Isaac mentioned Copilot and code development. But again, let's balance that, too. Code development is a great example of where you need to ensure that you're not leaking intellectual property or inadvertently bringing someone else's intellectual property into your code base. Those are examples where you have to find ways
to leverage this thoughtfully across a myriad of different use cases. I think it's a total game changer for the enterprise across the suite of pillars within the company. But you have to make sure that you're using it in the right way so that you don't end up leaking confidential information or intellectual property or, vice versa, bringing that in inadvertently. CIOs are looking at their business strategies,
looking at where they are in technical modernization, where they are in terms of security. There are a lot of companies out there that are still catching up. When you're still catching up and you look at a shiny object like AI, your data isn't in order, you don't have a way of communicating with your employees about which experiments they should be working on and which they shouldn't, and you have known security issues. I look at the entire corpus of what the CIO is driving and ask, "Okay, where are the greatest opportunities and risks?" For some industries and some companies, AI is going to be in the top three right now, and it should be in the top three. For a lot of companies, the CIO's job is to
articulate back to boards and back into their executive group, "Look. It should be in the top three. But we have no data governance strategy. We're still struggling with hybrid work," some of the examples that you shared there. We need to catch up on that, regardless of where you are. When we talk about CIOs being too slow, I think the real issue
here is that CIOs need to come up with a point of view and a documented plan and strategy around it. It doesn't necessarily have to be elaborate or consume 20%, 30%, or 50% of their resources, but it needs to say, "Look. Here are the areas of the organization where we're going to experiment with these tools. If you have your own area where you want to experiment, come to us. Here's how you register an idea with us so that we can get the right data and tools in front of you." Being able to do basic, portfolio-like management of where you're experimenting is, I think, key. Then being able to take those experiments and market them back to the business
sponsors and the executives, saying, "Look. This is where we're experimenting. This is where we're starting from. These are the impacts that we want to see. And here are some of the learning activities that we're doing so that we can see what's happening in our industry or maybe learn from some other industries." I always say CIOs have to be champions of lifelong learning, and so going out there and seeing what other companies are doing is part of this equation as well. I say something very similar. I say it takes
a village. As a CIO, it takes a village. You can't do it by yourself. Your CIO network is immensely important, especially now with these new innovations coming about. But here's the risk I'll also put on the table if you don't follow Isaac's advice of putting that vision in place—you mentioned that earlier—and really thinking through your plan of attack. I'm already seeing this play out in organizations where edicts are starting to be discussed. What I mean by that is you get the CEO and the board issuing edicts about technology. That's
really bad. We saw this with cloud. We're still dealing with it with cloud today. But the last thing you want is a governance body – whether that's the ELT (executive leadership team) or the board of directors – dictating what technology should be used. The CIO needs to be, absolutely needs to be, in front of that and driving that conversation. If they're not driving that conversation, I think that opens a whole other conversation about where they are and where the organization is today. But I would encourage every CIO to get in front of this and have that plan of attack and communicate it and socialize it with the ELT. But how do we get there? Isaac, how do we get there? CIOs have to get used to putting out something with incomplete information and where the information is going to change. This is actually one of the easier areas because of that.
If Microsoft or Google or somebody else comes out with a big capability that's another game-changer, guess what, board, guess what, ELT: the rules of the game have changed again, and we can go modify our strategy. This is actually one of the easier areas for CIOs to put something out there that may end up being wrong. I think the problem here is that CIOs are afraid to be leaders around it, to go and bring all the general managers together, to bring the functional leaders together, and be that coordinator of where the focus should be. I think that's the skillset, that's the job here that's missing. The other side of it, again, is controlled experimentation. It's hard enough to go to a team of 100 Ph.D. data scientists and ask them to have a disciplined, scientific process about how
they choose which problems to focus on, when to pivot, and when something is delivering value. It's a constant churn to bring their models from early stages to something that's ready for production. Now you're doing it with the entire organization, like everything is out there. So,
is it worth experimenting in marketing around content creation? Well, if you only put out two blog posts a month, maybe not. Or maybe you give them a simple, licensed tool that they can go use. I think the other thing we haven't covered here, Tim, is that just about every platform any enterprise is using probably has a language model capability built into it. I'm not going to name names, but every one of them has it. It's a limited capability because it's often siloed to the data that it has access to. But it's an easy area to experiment in, whether it's customer service or CRM or coding or marketing automation. Go tell me what you learned
by experimenting with the technologies that we already have that have AI capabilities built into them. Then go back to your data strategies. Most of our data strategies have focused on structured information. How do we get our rows and columns in an organized way, well-documented, in a catalog so people can do BI and machine learning off of it? We have ignored our unstructured data because it's been hard. Now we have tons of this information. Now the question is, now that there is an LLM—and maybe it's a little hard to use today, but in a year or two, it's going to get easier and easier—where are we going to get strategic value by bringing in content and making it easier for people to ask questions around it? We actually have a question from Arsalan Khan, who always asks these great questions on CXOTalk. Tim, maybe I'll direct this one to you to start. He says, "As IT becomes a commodity—" That's
an interesting point in and of itself that we could argue about. Is IT becoming a commodity? "—and AI becomes used more and more in different areas of IT, will we have an IT department that just has a far diminished role and really is only asked to come into play when there's a functional business need from a user?" Really, he's calling into question the very role and existence of the IT department. Tim, any thoughts on that one? If that is the mindset of the organization, full stop, I think that's just the beginning of the end for that company. I strongly believe that IT and technology are the ultimate
differentiator between competitive companies, and it comes down to data and how you use data. I don't see the value in IT as a cost center or as just a response to a specific business need. In fact, if you were to play this out, I actually think the better organizations (broader organizations, not just IT) are the ones where the CEO and the CIO have an incredibly tight relationship. Now, that doesn't happen very often. That's actually fairly rare today. But in those organizations, what happens within IT is pretty magical, and it's very well ingrained in the business, the business strategy, and the direction the company is taking within its industry, and it can even start to impact economies based on their success. I don't look at it as IT potentially heading toward this kind of nonexistence—I know that wasn't the word that was used—toward this diminished capacity. I actually see
it going the other direction, and I think that we already have good examples of that in some large enterprises today. We just need to expand that leadership capability and learn. This is a bigger question than this show about generative AI, but CIOs need to learn how to be better leaders. I'm going to say it. That's one of the biggest challenges I think we have: historically, IT has actually been struggling with this.
I think the rubber is hitting the road now, and we've got to step up to the challenge. This is why you're starting to see some CIOs get sidestepped and diminished in their capability, because they're not able to step up to the challenges that companies need them to meet today. And we see other roles starting to come into play to augment that. I do think there is a bigger opportunity for those that are willing to A) understand it, B) learn from it, and C) really have the courage to go after it. We're using the word "CIO" here, but I think the real magic also happens when the CIO has a team of leaders underneath them. You talk about where CIOs are spending their time to really understand the impacts and to market the solutions. They're out there customer-facing. They're out there leadership-facing. They need to change the mindset and culture. That means they need
really good lieutenants who are learning how you connect problem and opportunity to solutions. The people doing this are what I call digital trailblazers, and these are people who, depending on the state of technology, are moving up stack and defining solutions at a much higher level than we have ever done before. We're not defining storage solutions anymore. We're using cloud capabilities to do that. What AI is going to do over the next three years is, maybe, shift us toward a lot less coding and a lot more prompting for code solutions and code examples: help me document this code, help me build out an API that's a little bit more robust.
We're still required to move up stack and continue to turn problems and opportunities into solutions. There really is one major change as these technologies get easier to use, and that is that we're taking capabilities we used to have to go to IT for, and they're becoming business capabilities. We're not just doing spreadsheets and Word documents and Office 365. We're now doing analytics. We're doing small AI out there. We're doing code development with low-code platforms. We're taking things that we didn't have enough people in IT to do over long periods of time, and we're bringing them out to our marketers, our finance people, and our operations teams, and saying, "You have a ton more capability to go do over here."
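To make the "less coding, more prompting" shift concrete, here is a minimal sketch of asking a model to draft documentation for a function rather than writing it by hand. It assumes the OpenAI Python SDK and an API key in the environment; the model name, prompt wording, and helper name are hypothetical placeholders, and a developer would still review and edit whatever comes back.

```python
# A minimal sketch of "prompting for code documentation" instead of writing it by hand.
# Assumes the OpenAI Python SDK (v1.x) and OPENAI_API_KEY set in the environment;
# the model name and prompts are illustrative placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()

def draft_docstring(source_code: str) -> str:
    """Ask a chat model to propose a docstring for the given function source."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; swap in whatever your organization sanctions
        messages=[
            {"role": "system", "content": "You write concise, accurate Python docstrings."},
            {"role": "user", "content": f"Write a docstring for this function:\n\n{source_code}"},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    snippet = "def total(prices, tax_rate):\n    return sum(prices) * (1 + tax_rate)\n"
    print(draft_docstring(snippet))  # a human still reviews and edits the suggestion
```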
I don't think IT is going anywhere any time soon. I think the challenge is learning these new capabilities, and it's hard. We're not just coding anymore. We're connecting an idea to dozens, if not hundreds, of different ways to stitch together a solution. And now we're trying to figure out what's a good or optimal way to start experimenting and then see which solutions will help us get to market with a technology and a capability a lot faster. We have a question from Twitter. The question is, "What about data privacy? How do you prevent or mitigate the leak or misuse of intellectual property and confidential information?" This is a risk. This is a concern. It's already happening. We're already seeing examples of this.
I just wrote a blog post about an example of IROs (investor relations officers) that are using ChatGPT with pre-published financial information to build their press releases, their financial releases. That's a concern. Then, of course, you see it in Copilot with code creation. I think the biggest opportunity here—and it's going to sound like I'm kicking the can down the road a little bit, but I don't mean it that way—is education. We cannot put the technology guardrails in place to cover every permutation that we ever
are going to run across without inhibiting the ability to experiment and really explore. This is where technology does not replace the human. We need the humans to use that gray matter between the ears and think about what they're doing. And we need to educate them and help them understand where the risks are so that they can make good, educated, valued decisions from their seat. I think that's incredibly important. I think too often we think of IT people (or others think of IT people) as machines that go to the freezer and get the box. We've way over-rotated on that, and we need to come back to "these are people." These are people that have a brain, that can think, and we need to give
them the trust and build that education so that they can help make great business decisions, whether they're the most senior person in the IT org or the most junior person in the IT org. The organizations that have gone down that path have had wild success at limiting the risks. Also, these people start to become the ones who identify issues and say, "You know what? I recognize this issue.
Maybe we should talk about that," and educate the rest of the team. They become lampposts out in the distance, too. That's another piece we often don't think about: the power of the organization. To me, that's where the guardrails start.
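As one small example of where a technical guardrail can complement that education, here is a minimal sketch of a pre-prompt check that flags obviously sensitive strings before text is sent to an external AI tool. The patterns, function names, and blocking policy are hypothetical illustrations, not a complete control; in practice this would sit alongside the training and trust Tim describes.

```python
# A minimal sketch of a pre-prompt guardrail: scan outbound text for obviously
# sensitive patterns before it is sent to an external AI service.
# The patterns and the blocking policy are hypothetical examples, not a full solution.
import re

SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b", re.IGNORECASE),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "internal_marker": re.compile(r"\b(confidential|internal only)\b", re.IGNORECASE),
}

def check_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]

def safe_to_send(text: str) -> bool:
    """Block the request (and, in practice, educate the user) if anything is flagged."""
    findings = check_prompt(text)
    if findings:
        print(f"Blocked: prompt appears to contain {', '.join(findings)}.")
        return False
    return True

if __name__ == "__main__":
    print(safe_to_send("Summarize this CONFIDENTIAL draft earnings release."))  # False
    print(safe_to_send("Draft a blog post about our public product launch."))   # True
```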
How are attitudes in the C-suite, in general, evolving towards AI, towards the consumption of AI, and the adoption of AI? Where are we today, Isaac? We have a spectrum. I think some believe it's science fiction and we're ready for Star Trek – you know, "Computer, help me with this problem." – and expect business leaders and IT leaders to collaborate to get an answer around that. I think you have a lot of naysayers who would prefer keeping business as usual, prefer using the tools as they exist.
I see that as a spectrum that we see with every new capability. The main difference here is it's moving a lot faster and a lot more people are experimenting with it, so it needs to come together much faster. I think how we consume generative AI will evolve. Isaac talked a lot about building LLMs within your organization. That's
really challenging and really expensive to do. One organization recently mentioned—this is a vendor—that they're paying $0.5 million a year for junior folks. And the more senior folks, they're paying $1 million a year. Now that's an expensive individual, and if you go back to when we were
trying to put data scientists within our IT org, we were having trouble finding that skillset. I think there's that. There's also the actual cost of running these models in the cloud and paying for that. I think it's just too complicated for most organizations to do it themselves. I do think that the way we consume this will be through those enterprise apps that we already are using because they understand the data, they characterize the data, they can map the data and market, and they can help put the right guardrails in place. It's
a much easier lift for the average enterprise. There will still be corner cases where folks are building their own LLMs and managing that. But those will be corner cases and very, very specific. It will not be the masses. I agree with you. It's really expensive.
The skillset to work with LLMs is hard to get. This is really about, number one, getting your data understood so that, when we start embedding it in LLMs, you know what that data looks like, and then, number two, training and educating everyone from employees to executives about planning for the art of the possible, where you can ask those questions and there may be an LLM in the middle that's able to answer them. It's not necessarily about going out and building it yourself. The other thing we haven't talked about, Michael, and we don't have time today, but I would encourage folks to just be aware of the whole regulatory space. There is a ton
of regulation on the books today, as well as in process at the state, federal, and global levels, that companies have to contend with and be aware of. It's only going to get messier as we go through time. Again, the problem gets harder over time (not easier) and more complicated. I think these are things that (going back to your point about education) we have to learn. We have to spend time and learn, and learn fast. A couple of comments from the audience. Doug Gillette on LinkedIn, responding to the question about the relationship between AI and employees, says, "You don't ban AI, but monitoring employees is where many should start." I know that can potentially be a provocative statement as well.
Then Arsalan Khan comes back on Twitter, and he says, "Do CIOs and CXOs, in general, actually understand the value, the financial value, of their data and, therefore, where to make investments in data?" Very quickly, any thoughts on this data valuation and investment question? To the first comment, about monitoring employee behavior, I think that's a massive hot button and something that most organizations should absolutely stay away from. There are specific ways that IT orgs are doing that, but they're incredibly transparent about how they're doing it. It comes back to trust in your employees. Then to the second question about the guardrails. I do think that there is more conversation to be had there and more understanding to be had there. We're in the very early innings of this. But the game is moving very, very fast.
I would add both a risk side and an opportunity side. On the risk side, we talk about chief privacy officers and chief data officers. We're now at a point where every business leader needs to understand the basics of this. Where are experiments happening? What data is being used? Is it being used in an appropriate way? I think we're taking that knowledge and that responsibility and saying, "We need to have a lot more people with that level of knowledge to make better decisions around this." Then on the flip side, they have to take
a stronger role in saying, "We're going to chase value with our experiments here. Here's what the value looks like when we want to have developers do something or run a test out in the field with some IoT technologies." Being able to capture that in a way that you can go and say, "We're heading down a path that's starting to deliver value," matters, because if we end up in a situation where someone asks, "What's this added data source worth to us? What's the ROI around an experiment?" we're not going to be able to calculate it upfront. We're only going to be able to say that the experiments we're doing week-to-week, month-to-month are leading us toward this value equation that we think will ultimately deliver financial returns. It's important to understand something that Isaac mentioned, and I want to underscore it because it's really important to focus on this. Focus on value, not cost – value, not cost. The reason I say that is, quite often, people are focused on, "Oh, well, the cost to do something is X," but they don't necessarily understand what the upside of doing that is. To the question about, "Do we understand the
value of data?" No, we don't. We're learning that as we go. I think that will get exposed as we go through time and more experimentation. Deep Khandelwal says, "How can you bridge the knowledge gap in employees to help them get ready for AI and new technologies?" Isaac? I want to market teams that are having success. I want to be able to bring that
to the forefront and show what they're doing and how they're going about doing it. I'm thinking of the old-school town hall picture, but we need to be able to show people examples. They went out there. They went above and beyond their job. They went after something that had a
real problem, an opportunity statement around it. It was defined upfront. They went through two or three pivots in their experimentation. And here's something. Maybe it's not ready for production, but here's something that proves that they're heading in the right direction.
I think CIOs, with their CHROs and their CMOs, need to have a program for bringing these success stories across the organization. We've talked about that for a long time, for decades, and I think we've kind of lost the marketing ability. But I agree with Isaac. We need to bring that back, finding those experiments that are successful and supporting the teams that have discovered that. But also, let's support the people who failed. They tried, they failed, and they moved on to the next thing. What did they learn from it? Let's not just focus on the successes. Let's also remember the failures, the things that we learned from, not just the successes. We have another quick question from LinkedIn. This is from Kash Mehdi, who says, "Since data is so critical for organizations worldwide, how does one go about creating a data balance sheet? Are you familiar with any particular frameworks or just guidance?" Data balance sheet. There are a couple of books around that, in terms of the exercise of turning data into a product. I think, for most organizations, it's about keeping this simple and starting with something most tools already have today: measuring data quality, data utilization, and the outputs from your data experiments. Where is it leading toward value? You get into quality. You get into being able to load in more data sets efficiently. You're starting to impact the gears that are going to
start impacting the results that come after that. The other thing you have to think about, especially if you're working at a global level: there are tools just emerging onto the market that are starting to consider this. But you also have to think about regulatory, governance, and privacy concerns at a global level, because that's getting considerably more complicated, and these tools are what will be required to help you through that.
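To illustrate the "keep it simple, start with what your tools already measure" point about data quality and utilization, here is a minimal sketch that computes a few basic quality metrics for a single table with pandas. The column names, metrics, and sample data are hypothetical; most catalog and quality tools report similar measures out of the box.

```python
# A minimal sketch of simple data-quality measurement: completeness, key uniqueness,
# and freshness for a single table. Column names and sample values are hypothetical.
import pandas as pd

def quality_report(df: pd.DataFrame, key_column: str, timestamp_column: str) -> dict:
    """Compute a few basic data-quality metrics for one dataset."""
    total_rows = len(df)
    return {
        "rows": total_rows,
        # Completeness: share of non-null cells across the whole table.
        "completeness": float(df.notna().to_numpy().mean()) if total_rows else 0.0,
        # Uniqueness: share of rows that are not repeat occurrences of an earlier key.
        "key_uniqueness": float(1 - df[key_column].duplicated().mean()) if total_rows else 0.0,
        # Freshness: most recent record timestamp, as a proxy for how current the data is.
        "latest_record": str(pd.to_datetime(df[timestamp_column]).max()) if total_rows else None,
    }

if __name__ == "__main__":
    sample = pd.DataFrame(
        {
            "customer_id": [1, 2, 2, 4],
            "email": ["a@x.com", None, "c@x.com", "d@x.com"],
            "updated_at": ["2023-11-01", "2023-11-05", "2023-11-05", "2023-11-10"],
        }
    )
    print(quality_report(sample, key_column="customer_id", timestamp_column="updated_at"))
```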
This is from Sylvia Macia. She says, "What is your opinion as to the state of entities using their own internal transformation managers, such as portfolio, program, or project managers, as ambassadors or, as McKinsey calls them, analytics translators to help bridge those knowledge gaps between the business functions and the CIO, CTO, and IT organizations?" There are a lot of people with different titles who play that business-IT alignment and business-to-IT translation role. Whatever titles are being used inside, I think the most important skill set there is really a product management skill set. Who is the customer? What value, what problem are we solving for them? How are they working today? What are their current tools around it? Why is this strategically important? Every organization has different skill sets around it or different titles around it. But I think the most important thing is to think about customer focus first and strategic intent second. The last point is from Doug Gillette, who just wants to add that it's a complicated tradeoff between AI adoption and monitoring employees. He says,
"Do you give employees free rein and, therefore, refrain from monitoring their activities on AI to maintain the trust? On the other hand, monitoring provides an opportunity to create policies and governance and address issues that might otherwise go unnoticed." Very quickly, Tim, you made a statement about the monitoring versus adoption. Thoughts on that? It's a tough problem. There's no one answer to that question. Each organization is going to be different. You have to think about the culture. You have to think about what you're trying to do, the sensitivity of the data, sensitivity of the risk that comes with it. There are a lot of factors that go into the answer to that, so I wouldn't say that you live at the extremes of whether you monitor or don't monitor. I think it's a more complicated
conversation to have, unfortunately. Isaac, Kash Mehdi comes back and says, "How do CIOs view chief data officers?" I love this. We're getting provocative. I like that. As an adversary or not as an adversary? And either way, how do you empower them as a CIO? You're getting into entire organizational structures. Does the CDO report to the CIO?
Are they peers? What are they really chartered with? Are they more data-governance focused, or do they have both data and analytics responsibilities even though the title says only data officer? I think it really comes down to sitting down in rooms (or virtual rooms), coming up with what the objectives are, agreeing on them, and then drawing that line and saying, "Regardless of the reporting structure, here's what the CIO's organization is going to lead and take ownership of, and here's what the CDO is going to take ownership of." If you try to make too many generalizations around that, you end up putting people in boxes, which I don't think actually works. Tim, do you want to have a last crack at this? Then you're going to have the last word. I've seen pretty much the whole spectrum, as Isaac has. I've seen the extreme where the CDO is a fill-in for a CIO who's incapable of taking on some of these more business-centric pieces. Okay, there, I said it. Unfortunately,
that's the reality of the world we live in. And so, now we're starting to see the chief AI officer. I think the thing that people forget is that the chief title is getting watered down to the point that it's irrelevant. A chief officer role is a very strategic and important role amongst the executive leadership team – or it should be. Unfortunately,
we don't necessarily play by those rules evenly. I think there's a conversation to be had about getting back to that. With that, we are out of time. A huge thank you to Tim Crawford and Isaac Sacolick. They are two of the very best CIO advisors out there. Gentlemen, thank you so much for being here. I'm very grateful to you both. Thanks, Michael. Thank you, Michael. A huge thank you to our audience. You guys are intelligent, bright, and sophisticated,
and you make CXOTalk. You guys are awesome. Now before you go, please subscribe to our newsletter, and subscribe to our YouTube channel. Just go to CXOTalk.com. We have amazing shows coming up, live shows where you can ask your questions, and truly, genuinely, you are part of the conversation. So, check out CXOTalk.com. Thanks so much, everybody. Have a great day, and we'll see you again next week. And I should mention, next week, we have the president of the consumer group at T-Mobile on CXOTalk talking about customer experience. The president of the consumer group at T-Mobile, you know he's pretty high up there, so check it out.
Thanks so much, everybody, and we'll see you again next time. Take care.
2023-11-17