Trust, Moats, and Regulation in the Business of AI feat. Jerry Chen | ASK MORE OF AI with Clara Shih

(air whooshing) The old moats are the new moats. So in a world where everyone has access via an API call to the super powerful model, Clara, brand matters, switching cost matters, trust matters, scale matters, network effects matter. So all the things that made the previous generation of companies great will still matter in 2025. (air whooshing) - Welcome to "Ask More of AI," a podcast at the intersection of AI and business. I'm Clara Shih, CEO of Salesforce AI, and I'm excited to be here with Jerry Chen. Jerry is a partner at the venture capital firm Greylock, where he works with entrepreneurs building companies in cloud infrastructure, data products, enterprise SaaS, and, of course, AI.

(air whooshing) So you've been investing for quite some time now. What changed in the last nine months? And what is going on in the world of AI through the lens of venture capital? - Well, obviously what changed is the large language models. And, you know, AI and machine learning was always a technology that we used. I mean, Google was using it for recommendations in your search and algorithms, and ad recommendations in Facebook were all machine learning, all some form of AI. And I would say three, four years ago, people were especially skeptical of the research on large models.

Then obviously Google wrote its paper, "Attention Is All You Need." They published the transformer model. And, you know, to quote "Seinfeld," "Yada, yada, yada," the rest is history.

But the core of large language models is basically predicting the next word, the next token, and no one thought it would work as well as it does today to anticipate and give you answers that are text driven, right? We'll talk about images later, but for text, it has this magic ability. And when I first started playing with, you know, these large language models, with GPT-3.5 and GPT-4, and then the image models, like DALL-E, you felt like you were touching the future, right? All of a sudden, it's like the early days when you ran your first Google search and Google was magic, or when the first iPhone was like this magic box in your hand.
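To make "predict the next token" concrete, here is a toy sketch of that same autoregressive loop, using a bigram word model built from raw counts. Real LLMs use transformer networks over subword tokens and billions of parameters, but the generation loop has the same shape; everything below is illustrative, not how any production model works.

```python
# Toy "predict the next word" model: bigram counts over a tiny corpus.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Greedy decoding: return the most frequent continuation."""
    return bigrams[prev].most_common(1)[0][0]

# Autoregressive generation: each prediction is fed back in as context.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)

print(" ".join(output))  # -> "the cat sat on the cat"
```

Scaling that loop up, with a transformer instead of a count table and sampling instead of greedy decoding, is essentially what GPT-style models do at inference time.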

Playing with these new AI models was wow, you could touch the future. And so I think these large models in particular have changed what is possible, but more importantly, what we think is possible, right? And that creativity of the imagination is really what makes founders great. - And how is that affecting your investments? Both the companies that are AI companies, as well as companies that may not have started off as AI companies and, of course, are trying to reinvent themselves as AI companies. - Actually, the second category is super interesting, we could go hours and hours on that.

But it's both. If you think about AI, there are multiple layers, right? There's core AI technology, like the foundation models, the infrastructure to build AI applications, things like LlamaIndex that we invested in at the seed stage, and then there's the applications built on top of it all. And, you know, I wrote this blog post where we talked about the system of intelligence, using AI to build intelligent applications.

It's changed a lot because the core technology to enable building these new applications has changed dramatically. Think about what it meant to build an app in the client-server era, you know, your database and your server, that's one thing. Building an app for the iPhone mobile era was a different stack. And then cloud, obviously a different stack to be SaaS and multi-tenant.

And they never replace each other, they add up. And so now we think about what it means to build these AI applications from foundation models, vector databases, you know, Retrieval-Augmented Generation, or RAG. We think about what it means to actually have memory for these apps, make recommendations, and understand trustworthy AI.
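As a rough illustration of the RAG pattern Jerry lists alongside vector databases: retrieve the stored documents closest to a query, then pack them into the model's prompt. A minimal sketch, where toy bag-of-words similarity stands in for a real vector database, and call_llm is a hypothetical placeholder for whatever model API you use:

```python
# Minimal RAG sketch: retrieve relevant docs, then prompt the model with them.
import math
from collections import Counter

docs = [
    "Our refund policy allows returns within 30 days.",
    "Support hours are 9am to 5pm Pacific, Monday to Friday.",
    "Enterprise plans include a dedicated account manager.",
]

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real systems use learned dense vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    # A vector database does this lookup at scale with approximate search.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

query = "When can I get a refund?"
context = "\n".join(retrieve(query))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
# answer = call_llm(prompt)  # hypothetical model call
print(prompt)
```

The "memory" Jerry mentions is a variation on the same trick: store past interactions, retrieve the relevant ones, and feed them back into the context window.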

There's all these buzzwords people throw around, because we're trying to reimagine what a full-stack AI app means today. So that's interesting, so we're investing in the new stack, if you will. But then, to the second category, we have existing companies that weren't AI native, and new startups that wanna be AI native. And for the first category, if you weren't using AI to begin with, you ask, "Okay, how can large language models, or AI, change how I do this workflow or this thing," right? And the thing is, for an application like SaaS, software is usually a business process in bits. Hire to fire, order to cash, right? And SaaS companies were about digitizing that workflow.

With AI, we think about, "How can I skip two or three of those steps in the process with this AI magic?" Or number two, "Using AI, how can I acquire users differently," right? And we call this in the blog, as do other people, the system of engagement, right? GPT, and chat especially, changed how we think about interacting with software. So all of a sudden, companies that were some normal workflow, like a bunch of screens on your phone, can now use chat or something else to communicate or anticipate your needs. And I think Salesforce acquired Slack a few years ago because they saw that chat in the enterprise was becoming a dominant system of engagement, right? It's how we interact, it's how we communicate.

And all of a sudden, that chat metaphor in the enterprise, powered by Slack and then Teams, and everything else, and obviously that was copied from iMessage, and WhatsApp, et cetera, instead of a human, you have an AI agent or a bot on the other side. - That interface question is so interesting, right? It's going from punch cards to using code syntax to interact with machines. And then, at Salesforce, I mean, we built our business on clicks, not code, the declarative workflow builder. And now it's conversations, not clicks. - I mean, the early Salesforce was built on what, SOQL, right? The Salesforce Object Query Language. And now it's conversations, not clicks.

And you never replace the system of engagement. It's always additive, right? - Right. - It's like now, clicks and touch didn't replace, you know, typing at your command line, the command lines are still there. Your voice assistant, Alexa, at home is still there, but you still have physical switches.

But I think the question is, if AI is really that intelligent and omnipresent, what does software look like in 2025, 2026, right? - And, of course, that's your job, as a venture capitalist, to predict that. So what is your view currently? - My job is to find the founders that predict that or create that. You know, my vision, I think, like I said, is that it's additive, so it doesn't replace everything. But I think chat as a system-of-engagement metaphor for sure will become more commonplace for the long, fat tail of tasks.

What I mean by that is, it's always still faster to flip a switch than say, "Hey, Siri," or something like that, right? Sorry for the folks at home whose devices I just activated. It's always faster to, you know, type in a query versus, like, you know, dictating something by voice. So if it's a defined workflow, it's quicker to hit a button.

But what AI and these chat agents let you do is, there's a long, fat tail of problems, questions, tasks that you wanna do, and oftentimes you can't anticipate those workflows as a developer. But if you, Clara, can ask the app, à la "Star Trek," you know, "Computer, do this, do this," or interactively ask a bunch of questions, or have the agent, the persona, ask you questions, which is even better, so you see these chatbot agents asking you questions. We're investors in a company called Inflection that has this bot called Pi, right? And Pi will ask you a bunch of questions, and by asking you those questions, it'll figure out what you want and how to help you. And so that's pretty cool, right? So instead of you being declarative, saying, "Do this," the bot will say, you know, "Well, what's your problem? How are you feeling? What are you trying to solve?" And based upon that thread, something like Pi from Inflection will say, "Oh, this is what you need." - It kind of reminds me of that saying, "You don't know what you don't know." - Sure.

- And here we have Pi or these other new agents that can really help us uncover what our intents and goals might be. - Oh, absolutely. I mean, you know, homage to Clippy, which back in the day was trying to guess what you were trying to do in Microsoft Word. You know, it had the right idea, probably way too early in execution. But, like I said, it's kind of cool to touch the future. And there's a million founders out there working on all levels of the stack, from the app, the interface, to the system of intelligence, that we're excited to back and invest in. - So you've invested in some of these, but you're hearing far more pitches than you're investing in.

What's the most common set of themes at each layer of the stack that you're hearing pitches on? And then, also, where do you think the most differentiated value is? - Well, I think that's the billion, $10 billion question, right? - Yes. - It's where in the stack does the value accrue, right? And we can talk about, I wrote this blog called "The New Moats" years ago, and when I revisited it, I called it "The New New Moats," which asks, "Okay, in this AI world, where does value accrue?" And I would say the pitches are up and down the stack. For sure, we invested in a few of the foundation models, like Inflection, Adept, building kind of the core technology that sits alongside OpenAI, PaLM from Google, Anthropic, et cetera.

We've invested in stuff in the middle layer, like tooling, like Snorkel, TruEra, LlamaIndex, that helps you build these applications. And then, we've invested at the app layer, right? So Tome is doing, like, next-generation PowerPoint using AI, and (indistinct) Coda, another one, is, like, productivity software using AI. I would say I, coming from a cloud data infrastructure background, spent a lot of time at kind of the infrastructure and the lower levels, but increasingly we've seen a rush of creativity at the app layer, right? And I'd say there's a category of apps that are, okay, just ChatGPT or Bard from Google wrapped. We've seen a bunch applying AI to different verticals, healthcare, legal, financial services.

And then, we've seen, you know, the creativity of imagining, once you have this kind of superpowered intern, if you will, what else can you do? And I would say it's been fun to see all the pitches, but like you said, we only invest in a handful of companies a year, and it's always hard to say no, because it's hard to guess what's gonna work or not work. But to the crucial question: where does the value accrue? And, you know, I think there'll be some value accrued to foundation models for sure. That's why they've raised billions and billions of dollars, partly because they had to for training costs. I do believe there will be some new infrastructure middleware tools, because how you build these apps is changing. But I do believe we are seeing a new generation of SaaS experiences, different things like CRM or ERP applications. But I think we're gonna see a bunch of new consumer experiences too, right? And I'm not a consumer investor, but you see at least AI avatars, videos, music, and the content created, so I think that's gonna be pretty fascinating going forward too.

- So you referenced your moats blog post. I read it, it's very interesting, and a lot of others have read it as well. Let's talk about those moats, and I'm curious to get your take on how they apply at each layer of the stack. - Sure, you know, it always feels weird to quote yourself. But yeah, you know, the first blog I wrote, in 2017, called "The New Moats," was about where value accrues in the world of cloud, and SaaS, and open source.

People talk a lot about moats in tech, right? Historically, the moats have been, you know, deep IP, network effects, right? Like Facebook, or even look at Twitter/X.com, I mean, network effects are so hard to counter. Brand, switching costs, those are the classic moats.

And then "The New New Moats," in 2023, revisited 2017, what it got right, what it got wrong, right? And the question was, you know, in this world of cloud, and SaaS, and AI, what happens? And arguably, you know, for a while it looked like OpenAI and a couple of the big companies would have the main advantage in this AI generation. And then, I think, you know, Llama 2 from Facebook, or Meta, open source, as well as a bunch of other open source models, kind of changed the game. And, you know, I revisited it, okay, now everyone has access to this powerful technology, what happens? And, you know, the TL;DR, the too-long-didn't-read response, was the old moats are the new moats.

So in a world where everyone has access via an API call to the super powerful model, Clara, brand matters, switching cost matters, trust matters, scale matters, network effects matter. So all the things that made the previous generation of companies great will still matter in 2025. - But maybe they'll manifest in different ways. - Absolutely, that's the whole concept. - Yeah, so let's talk about some of those. I mean, what does it mean to have deep tech? Do you have to have that underlying model? Or, like you said, because of the open source options that are out there, is it no longer deep tech because anybody can access it? - So, different horses for different courses.

So if you wanna fight in the foundation model space, yes, you need deep tech. Can you build a better foundation model? And there's a bunch of IP around that in terms of data quality, data curation, scale, et cetera, right? So there's definitely IP around building foundation models. But I think there's only gonna be a handful of those foundation models given size and scale constraints. So then if you look at some of the other applications, like building support or CRM, or a SaaS application for healthcare, where is the IP in the tech? It's not gonna be in the model itself, but it could be in proprietary data. Say you're in a vertical like healthcare, or oil and gas. How can you tune or use that data with the model together? It could be the same old deep tech in terms of workflow, right? You understand, in the healthcare landscape, or the oil and gas landscape, or the defense landscape, what the customer wants, right? And so the technology around workflow is still IP.

There are still aspects of technology, like scale, and speed, and security, that still matter and don't go away when you're using big models. - And arguably they're harder and more important. I mean, think about all these new security questions, trust questions that I hear all the time from customers. - Oh, well, that's why brand and trustworthiness may all play towards the incumbents, right? You know, Salesforce is now one of the known brands in the cloud.

And so when you think about brand, trustworthiness, security, that actually skews towards the incumbents versus the startups. And so for sure, as a startup, you know, we have to explain to all the customers why this model is not hallucinating, right? Why it's an accurate response, and, you know, that's always half the battle. - Well, one of the things that you have talked about before too is, you know, the foundation models are improving exponentially.

I mean, that's one of the amazing things about AI, is that they can learn so much faster than a human can. And one of the risks you've called out before is that the foundation models themselves become so capable that they start to compete against some of these applications that have been built on them. Can you talk more about that? - So the question is, with these foundation models, if you think of, like, the set of problems out there in the world, right? If you can imagine a graph of problems out there, you can argue that software today can solve what we call the fat head, like repeatable business processes, you know, ordering my coffee at Starbucks, wherever it is, over and over again.

And there's a SaaS application to do that, right? One job over and over again. Think of that like the robot arm putting a door on a car in a factory in Detroit. So all of a sudden, when these foundation models are so flexible, and they're problem-solving machines more than anything else, they can now, without any programming or guidance, start to solve this long, fat tail of problems. Multipurpose.

- Multipurpose, right? How to get from point A to point B. You know, order this inventory product from, you know, India through the Philippines and past customs. That's a software package today, but these models can do it. Now there's a question of speed and cost, right? Because these foundation models cost money for inference execution, and so today it's probably not practical for the long tail of problems. Like, you wouldn't run a huge model to automate, you know, the lights in your house.

But as you think about the models getting bigger, more capable, and if the cost curves come down, you can see some of these big models solving these multipurpose problems. - So interesting. And, you know, we've talked before about large models, but also small models. And, you know, we saw that Llama 2, the open source model from Facebook, comes in different sizes.

And we've been working on models of different sizes at Salesforce too. What are you seeing? What are your predictions around the role of smaller models versus these larger ones? - Yeah, and firstly, I don't know the answer to the future. I can tell you what, you know. - It's just fun for me to put you on the spot, you know? - But I reserve the right to change my mind in the future. For sure, there will be a world for both big models and small models, right? Because small models are gonna be faster and cheaper for a set of problems, right? And for a given task, a fine-tuned smaller model will outperform a bigger general-purpose model for a bunch of reasons, right? The data set's tuned, you don't need to use a giant context window so you're not, you know, burning a bunch of tokens, et cetera. And so the world will have small models, as well as big models.
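The "burning a bunch of tokens" point reduces to simple arithmetic: inference cost scales with price per token times tokens processed. A back-of-the-envelope sketch, where every price and token count is a made-up placeholder rather than any vendor's real pricing:

```python
# Illustrative only: all prices and token counts below are invented.
def monthly_cost(price_per_1k_tokens: float, tokens_per_request: int,
                 requests_per_month: int) -> float:
    return price_per_1k_tokens * (tokens_per_request / 1000) * requests_per_month

# A big general-purpose model fed a long, stuffed context window...
big = monthly_cost(price_per_1k_tokens=0.06, tokens_per_request=4000,
                   requests_per_month=1_000_000)

# ...versus a fine-tuned small model that needs far less context per request.
small = monthly_cost(price_per_1k_tokens=0.002, tokens_per_request=500,
                     requests_per_month=1_000_000)

print(f"big model:   ${big:,.0f}/month")    # big model:   $240,000/month
print(f"small model: ${small:,.0f}/month")  # small model: $1,000/month
```

That gap is why "different models for different problems" keeps coming up: at high request volumes, a small model's fine-tuning cost pays for itself quickly.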

The question, Clara, is if you split the pie of problems, what set of applications needs small models or big models? Is 80% of the world's problems and applications solved by big models, or 20%, right? And I think the debate is, you know, what percent of these software problems, or problems in general, should be solved by big models or small models, or medium, right? Because there are different models for different problems. And that, I don't know the future. If I had to bet right now, in the near term, I think a bunch of small and medium-sized models are super relevant for a bunch of reasons, right? Privacy, security, costs, et cetera. As these models get bigger and bigger, I think they will solve more and more problems. So these bigger models will just be more general-purpose problem solving.

But I don't think the pie is fixed, right? So if you think of what it is today, I think a bunch of small models and medium-sized models will solve today's problems. But there's a bunch of new problems, the market, the TAM, expands, like with all technology. The number of things you can address with AI expands, and big models will be driving that TAM expansion, if you will. - Yeah, we'll just have more tools in our toolkit. - Correct, absolutely. And you can't anticipate those things, like, you know, neither can I, no one could anticipate, like, calling a car from our phones eight years ago, right? And that became a default experience of ours, or ordering food online easily.

And so I think there will be new tools in our toolkit that will be enabled by these big models. And so that's how I see it going: big models solving these new problems, with small, purpose-built models filling in certain jobs. - I agree with you.

You know, as AI technology has become more powerful and pervasive, something we talk about all the time here at Salesforce is, what is the role and responsibility of us, as technology providers, investors, to ensure that, you know, we're doing this in a responsible and ethical way? And so what are your thoughts around, you know, bias, privacy, job displacement? How do we ensure the responsible development and deployment of these technologies? - Oh gosh, that's more than a curve ball. That's like a cannon of questions. I think, first and foremost, we're all global citizens, right? And I think as investors, and executives, and founders, technologists, we, two things, one, wanna change the world in a positive way, and I think me and my partners, we're optimists more than anything else, right? I wouldn't be a venture capitalist, I wouldn't be backing founders, if I didn't wanna always see the good and the potential. So I would say, in general, I think it's our responsibility to enable technology, because it would be criminal to slow it down, right? I think my partner Reid Hoffman says, "Look, you can have an AI doctor and AI tutor in every pocket.

It would be criminal to prevent that from happening." So I think in some ways, it's our responsibility to make sure that reality, that future, comes faster and becomes commonplace. But like you said, with all these changes, there are consequences. And I think when we work on development, when we work with these companies, we're conscious about it.

We were early backers of Airbnb, right? Understanding that it enabled a whole set of economic opportunity. But, you know, what it would do for risk and costs, there are always trade-offs. But on the whole, I do believe they're expanding the pie, the GDP or the market, and that that's overall a good thing. - Do you think that AI should be regulated? - I think regulation is a loaded question. I'd say the easy answer is yes, if you're a big company. But I think regulation generally benefits the incumbents, right? Because regulation normally means cost.

And if you're a startup, you don't have the capital to regulate yourself like the big companies do. So I would say the easy answer is yes, but it kind of depends. I hate the BS answer, but it depends on what the regulation is. I don't disagree with controls, or provenance, or trustworthy tools or guidelines on AI. Take security as a comparable category.

We don't say we gotta regulate software, we say we need secure software, right? We gotta patch it, we gotta update it, we gotta make sure we're not hacked by, you know, criminals or, you know, 12-year-old hackers out there. And so we don't say software has to be regulated, we say we have to secure it. And it is never 100% secure, right? I think someone tweeted, "Trustworthy AI, or defensible AI, is like trying to write a 100% secure operating system." Software's never 100% secure, but we get better at it by patching and updating it, right? Salesforce included, you know, Windows, Linux included. So AI, regulated or not, is never gonna be perfect from the get-go. But the whole idea is you use it, you put it into practice, and you fix it and improve it, right? Or get the AI to improve itself.

- And you go in with the expectation that it's not going to be 100%. - It never is, right? Nothing's ever done. Humans are never done. Software's never done. You know, from the first piece of software shipped, you know, until the last update of Salesforce, there are always patches and updates.

So AI's never done. And so does it need to be regulated? You have to tell me what the regulation is. Should it be protected? Should people care about the quality, and security, and the trustworthiness of AI? Absolutely.

Do we care about the security and quality of software? Yes, we have compliance audits, SOC 2. We do penetration testing on the software to make sure hackers can't get into it. You know, we publish, you know, I think Salesforce has salesforce.com/trust, right? In terms of, do you trust your cloud and your SaaS application staying up? You know, Salesforce pioneered trusted SaaS from the get-go, because people didn't believe in it back when they were running servers on-prem. So just like Salesforce kind of pioneered "trust me, running your application in the cloud with your data and everything, running, you know, 99.99% of the time,"

yeah, companies with untrustworthy AI will not do well in the market. And customers are gonna buy the AI that's always updated, patched, with provenance, and trust, and security. So I think, you know, as an investor and a capitalist, clearly I believe the market will demand it and the market will reward it. - I think, my view is, it's easier to do in the enterprise. - Okay. - Because you have companies that have the resources, the means, the alignment to verify their providers.

I think it's harder, it'll be harder, in the consumer space. - I think you're probably right, and I'm not quite as familiar, but yeah, there's motivation in the enterprise, right? Because the product you build is the product you sell, and I'm selling you trust, I'm selling you security. In consumer, potentially not, right? Because in consumer, the product you build is not necessarily the product you sell, right? You build photo sharing, you're selling ads, right? You build search, you're selling ads. Not all the time, but, you know, I'm being a little facetious here.

- But, basically, that's how the majority of the consumer internet is financed, through advertising. - So when those incentives are different, per se, I think that's the question, you know, how do you align the incentives to make sure your AI is, you know, trustworthy and secure? And you can argue in the enterprise, the alignment probably makes more sense. And on the consumer side, I'm sure you can align those incentives as well, right? I mean, on the consumer side, if you leak my data, or, you know, something bad happens to my information, my personal information, I care, right? And you leave the site, you go to, you know, Google always says, "The competition is one click away." You go from Google to Bing, or something else, right? So I'd say you can align incentives clearly, it's just making sure that's obvious, and then there's choice.

And part of our job as investors is to make sure there's choice. - So you've invested in a number of AI companies, and you've also invested in non-AI companies. And I'm curious, with the new wave of startups, whether they're in camp one or camp two, are they being built and scaled differently because of AI? - Yeah, I mean, I think everything is being built and scaled differently because of AI, for sure.

I mean, just look at how software's being built today. Things like GitHub Copilot, right, are helping you build software better, faster. AI tools make your software more secure. So even if you're not a, quote unquote, "AI company," AI's impacting how software's built, right? It's making you more productive, it's helping you communicate. Like I said, it's AI or die.

So if you're not an AI company, you're using AI in some form or fashion. And, you know, instead of searching on Stack Overflow or Reddit for articles, you're asking Bard or ChatGPT for answers. - And do you see the startups that you've invested in scale faster than previously because of this? - Yeah, for sure. I think some of these Copilot, or code completion, tools are making developers more productive.

It's amazing, people are skeptical, but, you know, once you try it, it's kind of amazing how well it works, right? And I think that's just the tip of the iceberg in terms of code tools, security tools. I think we're gonna see observability tools. We're in a company called Chronosphere, which is, you know, a Datadog competitor, and they're leveraging AI to make their applications better. And so I think you're gonna see that permeate everything, AI or not. And obviously the AI companies, it's interesting, right? They're moving very fast because the underlying substrate of the models is moving so fast.

I mean, maybe they'll be limited by GPU allocations or something like that, but we'll definitely see the iteration, the cycle time, pick up, which makes my life so much easier. - You and me both. I mean, it really is, it's hard to plan a software cycle when the underlying substrate, or various levels of the substrate, are constantly evolving. - Oh, for sure. I mean, just think about waterfall development in software, to agile development, to where we are today, we're just constantly pushing updates now. And, to your point, it's hard to plan, and in many ways people are just holding their breath and anticipating what's gonna happen.

And, you know, it makes my job super fun, or challenging, to try to anticipate, okay, there's new research on AI coming out every week, every month, right? And you can argue, for example, these large models are the thing right now, but there's, you know, other models, other technology coming out that could be different or better, right? And I don't know how that changes the game, and, you know, I don't know what that bears, but part of my job is to keep investing and looking for that next thing. - How did you become a venture capitalist, just for anyone who doesn't know Jerry Chen? - I think plenty of people do not know who I am. Look, venture capital is something you kind of happen into. I think the first time around, I worked at another VC firm before Greylock, in the dotcom days, you remember them.

I was an associate hired into Accel by Peter and Theresa, you know, two other VCs. And I saw the rise of the NASDAQ, and the dotcom companies, and the crash. After that, I was like, "I never wanna be a VC again," right? Those were tough days, because I was, like, 26 years old shutting down companies. I went to business school, then worked at a company called VMware for 10 years, from, like, employee 215 to 15,000.

Along the way, you know, I talked to VC firms and said, "I love shipping product, you know, I wanna be a product guy for the rest of my life." And then, Aneel Bhusri, who I think we both know, was a partner at Greylock and also co-founder and CEO of Workday, and he and I had a relationship going back to his early Greylock days and my Accel days. He's like, "Jerry, Greylock is the right place for you."

It's a bunch of folks like Reid Hoffman, Aneel, a bunch of operators, a bunch of founders. And, in 2013, he convinced me to finally be a VC, and that's how it happened. You know, you get comfortable, and get invited, and you find, I think Reid Hoffman says, "Find your group, your DNA, your people," and I feel very lucky to be at Greylock. - It's so interesting.

So you have had two different experiences in venture, in very different eras of technology. How would you compare and contrast, you know, those early days at Accel versus what's going on today? - I mean, both are great firms, full of great people. So I would say they're very smart, very great investors. But, to your point, the eras are very different. So I would say, with venture capital investing in startups, the game's always the same. It's find great founders in very good markets and just back them, you know, and help them as much as possible.

But, you know, in the dotcom days, it was a much smaller industry, right? Even though tech, for us, felt really big. If you look at the amount of capital, the number of startups, the market cap of the companies, even at the peak of the NASDAQ, it was much smaller than it is today. So now you have, you know, billions and billions and billions more dollars, more VC firms, accelerators like Y Combinator, you have seed funds, growth funds, international funds. So it went from kind of a very small community and industry to kind of a global asset class and global community, which also echoes where tech is, right? Think of where Salesforce was 10, 15 years ago, when it first started, to where it is today, you know, it defined cloud, defined SaaS. And if you look at the growth of SaaS and cloud over the past 10, 20 years, venture capital and technology have had an equal rise in both size and significance, right? Before, technology was a small sector of the economy, you know, or smaller, now it's the top of the headlines, right? It's the front page of "The Journal," it's the front page of "The New York Times." - And arguably every company in every industry is a technology company or wants to be.

- No, I mean, every company is a technology company. And as we're gonna talk about, every company now is an AI company, right? - Yes. - It's, you know, AI or die for both startups and big companies. And technology, to your point, Clara, is an ingredient, and AI is the atom, if you will, of all the future molecules we're building.

- How do you make the time to think about what's next? And what are your favorite sources for learning and inspiration? - Finding the time. And actually, the inspiration, it's gonna be cheesy to say, but it's really the founders and individuals that I have the privilege of working with and meeting. And so you meet with hundreds and hundreds of founders all the time, and you meet with founders and customers yourself. And I don't back every founder, but conversations like this one are where you find inspiration and the kernel of an idea. And so, you know, I had a couple of meetings this morning with founders doing some great research in AI. And they were ideas I hadn't thought about before, and I may or may not invest in them, but it was, one, inspiring to see the research, and, number two, thought-provoking in terms of, "Oh, that's an idea that I haven't thought about before," or you see this pattern repeat again.

Like using large language models for integrating with SaaS applications, right? It's like, "Okay, yeah, that kind of makes sense, but oh, this is how they're using these large models to integrate with Salesforce, or Workday, or something else." And you see that pattern repeat over and over again. And, you know, I don't think VCs per se can predict what the next turn of AI research is.

That's not our job, the researchers are better at that. But, you know, our job is to kind of see the patterns, and kind of see different founders, and, you know, what we call peripheral vision, of what's going on to the left and the right, and kind of see what's emerging. And so it takes a lot of time, but, you know, it's what I love and it's what I signed up for. Like, I just love being around these founders and the technologists and learning new things. And it's neat that, you know, after what, 25, almost 30 years working in technology, and we began the conversation with the dotcom days, now this AI wave, seeing all these new waves of technology keeps me energized, right? We've seen the cloud wave, the SaaS wave, the mobile wave, and now this AI wave, you know, and they all build on top of each other.

And yeah, I'm still an optimist, I'm still excited. - I am too. I mean, I think about the nineties, just how all of us had to reinvent ourselves. And everyone we knew had to change from, you know, the filing cabinet and paper and pencil to digitizing, and then the internet, and then the cloud. And here we are again, having to do the same thing, but we don't yet know all the ways that we have to change.

- Well, let me ask you this. I mean, you see a lot of customers, so I'll ask you a few questions. What gets you excited right now when you think about AI and the customers you're interacting with? Like, what are the top three problems? Is it security? Is it privacy? I mean, I'd be curious what piques your interest now. - The number one thing that I'm hearing from customers, and I'm sure for you too, any of your companies working with the enterprise, it really is around trust, right? And there's the data security and data privacy aspect of it.

There's also the ethical, responsible output side of it. And so we've put so much of our effort, as a company and as an entire ecosystem, into how do we build out this trust layer, which is a set of technologies, but also an ecosystem of partners, it's policy, it's working with lawmakers and regulators to really figure out how do we deliver this in a safe and responsible way. - Well, I think it's a worthy requirement from customers. - Well, Jerry, this has been so insightful. Thank you so much for coming on the show, I learned a ton.

And thank you for your partnership as always. - Of course, thank you for your support always. Salesforce is a great citizen of the tech ecosystem, one of those great companies that we admire, and it actually inspires me, inspires a bunch of founders out there. So I'm excited to be here. And let me know when you wanna turn the tables, I'll interview you next.

- Looking forward to it. All right, some amazing takeaways from Jerry. First, that the old moats of business are the new moats of business. And that, more than ever, trust is number one, it is absolutely imperative for getting AI and data right. That's all for this week on "Ask More of AI," the podcast at the intersection of AI and business. Follow us wherever you get your podcasts, and follow me on LinkedIn and Twitter.

To learn more about Salesforce AI, join our "Ask More of AI" newsletter on LinkedIn. See you next time. (gentle upbeat music)
