What do AI apps Perplexity, Replit, Harvey and Abridge all have in common? Breakthrough products. Want a custom workout plan? Make an app for that. Have a great idea at 2 a.m.? Make an app for that.
Viral uptick in tech circles. Hey there, TechCheck audience. I bet I'm the most natural sounding AI you ever heard. Billions of dollars in funding. AI startup Perplexity, I'm hearing, is in talks to raise another mega round of funding. It would roughly double its valuation to $18 billion.
And they all started without having to build proprietary models. These wrappers on ChatGPT. They're just wrapper companies. These AI apps, also known as wrappers, they are the new darlings of the AI world, pioneering major breakthroughs in agents and AI-first coding.
I'm Deirdre Bosa with the TechCheck take. It's what's on the outside that counts. A new agent tool out of China, Manus, it has been called China's next DeepSeek moment. In this example, we asked Manus to help screen resumes. A viral demo showing Manus planning travel and creating whole websites, all with simple prompts.
A Chinese startup unveiling a new AI agent that some are calling China's next DeepSeek moment. But peel back the layers. And there's not that much that's original about the tech inside. One founder posted that they use Claude
and different Qwen fine-tunes, meaning that Manus is built almost entirely on another company's model, Anthropic's Claude, with a little bit of Alibaba's Qwen sprinkled in. That did not matter to users. Manus was hailed as the AI agent we were promised, more effective than OpenAI's Deep Research, racking up a waitlist of 2 million users in just seven days. And it's just the latest in the rise of AI apps, once disparagingly known as AI wrappers. An AI wrapper describes a company whose entire app or business is wrapped around one or several existing, already-built models. Early on in the generative AI race, critics called them second-rate middlemen, slapping an interface on someone else's tech instead of putting in the painstaking work to create their own model.
At that point in time, there was an impression that the only way to compete in AI would be to raise hundreds of millions of dollars and to pre-train these, like, web scale models that could solve every problem underneath the sun. And that was the only game in town for AI. Valuations followed that thesis.
Billions of dollars flowed into OpenAI, Anthropic and other model builders. But that is changing. I think very, very quickly people figured out that actually value moves up the stack and at the very top are people. And when you can solve people like human problems in a really, really deep way, like that's the most profound implementation of this technology.
More money than ever is being poured into AI apps. There is something around getting to a very enduring, large number of recurring customers and dollars at a speed that hasn't been done before. So when we think about what's possible in the next few years, I think we'll be blown away. Unlike a model made by OpenAI or Anthropic that specializes in writing or coding, wrappers aren't confined to a single function. They offer access to a curated bundle of top-tier models. When you call it an AI wrapper, it sounds like such a pejorative, but really, one of my favorite all-time business quotes is by Jim Barksdale, and he said, there are two ways to make money: bundling and
unbundling. And if you think of AI wrappers in that way, it's just really bundling any application intelligence and any insights you might have on top of these foundation models to actually solve real business problems. Plus, they often offer a better user interface and customized features. They're also increasingly going deeper, building up their own proprietary data.
Fine-tuning models for specific use cases, and even developing capabilities that differentiate them from off-the-shelf models. It's why some tech investors prefer to call them AI apps, not just wrappers. It just sort of means that it feels less thoughtful, right? It feels like, oh, you're just giving this little, uh, you know, package around what was built, as opposed to what it really means, which is: I'm going to really understand the customer's problem, I'm going to marry this and deliver a solution to what you're trying to achieve. And that, to me, is where the little difference is.
And that's why we would rather call them AI apps. A whole team of engineers dedicated to making the best coding tool, the best healthcare transcription tool, the best voice assistant. Which brings us to Sesame. I'm an AI wrapper and I'm proud.
That's Miles, Sesame's voice chatbot. Conversations with Miles feel organic and the AI reacts quickly when you interrupt. What do you think is the hardest question that I could ask you? Ooh.
Tough one. That depends, Miles. Are we aiming for philosophical? Personal? Or maybe a good old — You're Miles. My bad. Um. Brain fart. Sometimes these circuits misfire. It's not perfect, but it does leave Siri and Alexa in the dust.
And again, its technology is based off an existing model: variants of Meta's Llama family of model architectures. One of the biggest breakthroughs from AI apps is even changing the way that Silicon Valley builds. Once reliant on coding skills, these days it's all about the vibes. Explain what vibe coding is. You don't need a team of 50 or 100 engineers.
You can just have a team of ten when they are fully vibe coders, when they are actually really, really good at using the cutting-edge tools for code gen today, like Cursor or Windsurf. They will literally do the work of 10 or 100 engineers in the course of a single day. The idea behind it: AI models have gotten so good at coding, anyone can do it. You don't need a degree. You don't need years of technical expertise. Just an idea and an AI-first app to guide you, like Cursor, Replit or Devin. These are some of the most popular apps that have gained massive usage among developers and non-technical users alike. They're able to write all the code for an
entire app faster, more efficiently, and more effectively. And it's all thanks to models from OpenAI, Anthropic or Google. Cursor's parent company, Anysphere, has become one of the fastest-growing startups ever, hitting more than $100 million in annual recurring revenue in just 12 months.
It's now reportedly in talks to raise at a valuation of nearly $10 billion. Specific use cases, whether, again, language learning, creativity or companionship, what have you. People are more than willing to pay good dollars to use these services, and we're seeing that at scale. The result being we're seeing some of these companies grow at a revenue scale much faster than before. An AI wrapper becoming a decacorn in a single year. I love the phrase vibe coding, because it points to this new way that we're going to interact with these systems, where we're not necessarily going to interrogate all of what they do in process.
Over time, as the models improve and the products built on top of them improve, we're going to get other kinds of vibe activities in the economy. So maybe it's vibe lawyering, vibe accounting. And, you know, we're going to trust the AI models more and more. Then there's Perplexity, an AI search engine that brings users a variety of AI models a la carte. It was one of the first AI apps to gain traction after ChatGPT took off at the end of 2022.
On its landing page, you can choose between a host of different models, or just let it choose the right one for your query. That approach is one of the reasons investors are willing to pay up and give it a higher multiple than even the model builders, like Anthropic and OpenAI. According to The Information, Perplexity is now valued at 170 times annualized revenue, compared to 58 times for Anthropic and 43 times for OpenAI.
It suggests that investors have more confidence in Perplexity's ability to monetize its business faster and turn that growth into revenue more efficiently. Harvey AI is another app specializing in legal analysis but drawing on OpenAI tech. It quadrupled its revenue in 2024, landing a $300 million round at a $3 billion valuation in February. Abridge is taking on one of the most notoriously difficult sectors to penetrate: health care. Its app simplifies paperwork for doctors by turning conversations with patients into notes and billing codes.
Abridge unburdens clinicians from clerical work so that they can focus on the person who matters most: their patient. The startup in February raised a quarter of a billion dollars in its Series D. The industry is moving so quickly, and, you know, perceptions of where value is and where it's being created, and where the investment opportunities are, is also changing really quickly. So I think the idea of calling something an AI wrapper or a model wrapper was a way just to say, well, actually, that's not the most interesting technical part or the most interesting place to invest in the stack.
But now as you get companies like Cursor or many of the others that are growing so quickly, well, then it becomes either less pejorative, or then we can call it an app or the app layer and just embrace where there's all this phenomenal growth and these great investment opportunities. Meaning the narrative is no longer dominated by model builders. The apps, they have arrived. The new crop of AI apps, they are entering a fiercely competitive field.
Megacaps poured billions into the first stage of the AI arms race, focusing on the infrastructure and model layers. Microsoft shelled out $13 billion for a stake in OpenAI. Google developed its own in-house AI chips and models. Amazon inked nuclear energy deals. Meta bought thousands of high-performance GPUs. Spending at all of them surged.
But as Microsoft chief Satya Nadella puts it: I do believe the models are getting commoditized. All that investment for models that are now looking interchangeable. There have been so many released that the stat sheets, they're starting to blur together. When you can choose between an OpenAI model that's slightly more creative, an Anthropic model that's a little better at coding, or DeepSeek that's cheaper, does anyone really have an advantage? Those companies are also sort of like T-shaped companies, where they have that vertical, maybe consumer-facing app that we all might use, but they also have this more horizontal layer, an API that gives all the other kinds of companies out there access to some of those primitives that they can leverage in their own very, very deep way. In other words, model builders focused on delivering raw capability and intelligence, while the app companies, they looked at real-world uses and solutions.
It's why the real differentiation is now coming at that app layer. They're AI-native, meaning they're designed from the ground up to leverage AI in a way that feels integrated and seamless, offering tailored experiences that solve specific problems rather than just raw power. Over the last few years, we've been able to get a lot more sophisticated in how we orchestrate different models together. So some of those models are web-scale models,
but some are fine-tuned open-source models. And what that allows us to do is really, you know, really abide by this abundance mindset. And in our industry, better is better. You know, like a little bit of difference, you know, that last 5% can make a huge impact on the user experience. Meanwhile, big tech companies, they are massive ships to maneuver. Turning them can require more time and effort and risk disrupting existing cash cows. Take Google's classic innovator's dilemma: its core
search business is so dominant and profitable that a pure AI chatbot risks cannibalizing that traffic, usage and revenue. It also fumbled hype on products that have captured attention. NotebookLM is an AI podcast generator that enjoyed rare virality in Silicon Valley and beyond. Listen to a custom convo with all the info.
But Google never integrated it into its flagship Gemini AI app, and it has since lost momentum. The search results page today is messy, and they need to simplify. And how do you do that transition on the monetization piece? It needs to be much more than AI Overviews. And so that dynamic has just softened our optimism.
And they need to radically innovate and really embrace the innovator's dilemma and to, I think, solve what ultimately is going to be the new search paradigm. On the flip side, there's Perplexity, a startup totally unencumbered by legacy business. Instead, it can quickly test and ship disruptive features like an election hub, a finance dashboard.
It's even coming for Google's Chrome with a browser of its own. Agility means a lot, and I think if you can put together the right kind of team that has the right types of skill sets, if there's a lot of pull, then there's magic. AI apps have the momentum for now, but how long can it last? If you've grown really quickly, could another new entrant just, you know, whether it's with a new interface or some other tweak, some other, um, insight in the market, could they grow equally as fast and displace you in the market? There is a risk that maybe the incumbents, you know, displace any of these emergent winners.
Megacaps, they may be huge ships to turn, but they have mountains of cash, deep enterprise relationships, proprietary data, and extremely powerful distribution effects. Google's Gemini, for example, it can be seamlessly integrated into Android phones. Microsoft's Copilot, it can go on all Windows PCs, and Meta's Llama AI is already in front of a billion users. Startups can struggle to find that kind of adoption. They can be forced to give away their
products for free just to compete, or they can even make concessions to the very giants they're trying to disrupt. The upstarts, for their part, are deepening their own moats. Perplexity and Abridge are creating their own specialized models.
And as the cost and efficiency of model building comes down, more startups in the app layer may follow suit. We are actually more akin to some of those foundation model companies, in that we're also T-shaped in our own way. Not at the same scale, but we also have that level of science because we're also leveraging proprietary data sets.
We're also fine-tuning and post-training our own sort of models that help us deliver the best possible output. Owning their own models gives them more control over their performance, a lower dependence on suppliers, and a stronger defensibility against competitors who could otherwise just replicate their product. In other words, AI apps are shedding their wrapper reputation, and they're evolving into something far more profound. Let's just start really basic.
Awesome. What does Abridge do? And tell me a little bit about its founding. Yeah, absolutely. So Abridge unburdens clinicians from clerical work so that they can focus on the person who matters most, their patient.
We started the company in 2018. So we've been around for a minute. And really everything that we're building is based on this underlying thesis that when you think about it, healthcare is about people. And we don't think that's going to change. And those people, they're having conversations. So it could be a professional on one side of the room, a doctor like me or a nurse.
And then on the other side of the room, there's usually a patient, maybe a family member. It could be in an exam room in the hospital. It could be in a clinic, it could be in the emergency department. But these dialogs, these conversations that they're having are really upstream of so many different workflows and processes in healthcare. So as an example, when I'm seeing patients as a doctor, I might talk to you if you were my patient. And then afterwards,
I'm looking at all this chicken scratch that I'd taken during the encounter. And then I'm trying to piece together what we actually talked about. And then I'm trying to write a note in this archetypal fashion that we're all trained to do. It's not just in this country, it's really around the world.
And then multiple stakeholders are going to read that note and judge it. So other doctors and nurses are going to read it to get a sense of what I was thinking when I prescribed that specific medication for you. But then revenue cycle or billing, people are going to read that note as well, because that's actually how you get paid, is these notes. That's how you get credit for the care that you delivered.
You hear it from a lot of doctors, and it's sort of like the worst part of their job is having to do all of this paperwork. So it helps with a pain point for them. So you started in 2018. Um, you know, 2022 was when ChatGPT came out. I know generative AI was around before that, but how did that affect your business? Just the idea of generative AI seemed to capture investors' minds, consumers' minds, enterprise minds, and probably healthcare too.
Yeah, absolutely. So in 2018, for us, part of our why-now moment was related to AI, and it was related to transformers, which are a type of machine learning model that underpins generative AI. And so we started off with models that sort of predated large language models. We started off with models, for example, like BERT and BioBERT or Longformer or Pegasus.
Essentially these models that sort of came before, but that were pre-trained sometimes off the internet's data, and that you could fine-tune as well for a very specific use case that you were trying to address. But certainly when LLMs came out and when generative AI became a thing, it was a really exciting moment for us, because it felt like the party had come to us in some ways. I remember in 2021, we had a dinner at a large industry conference talking about generative AI. That was the whole theme of the dinner. And I remember in 2023, so many people who attended that dinner called me back and said, I get it now, and I want to try it. And so we sort of had all this potential energy that turned kinetic almost overnight.
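As a rough illustration of what that pre-LLM approach looked like, here is a minimal sketch of fine-tuning a BERT-family encoder, for example a public BioBERT checkpoint, on a small labeled dataset using the Hugging Face Transformers library. The checkpoint name, the three-class label set and the CSV file are assumptions for illustration only, not Abridge's actual pipeline.

```python
# Minimal, hypothetical sketch: adapting a pre-LLM transformer (a BioBERT-style
# encoder) to one narrow task by fine-tuning it on a small labeled dataset.
# The checkpoint, label set and CSV file below are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "dmis-lab/biobert-base-cased-v1.1"  # a public BioBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=3,  # hypothetical labels, e.g. symptom / medication / plan
)

# Hypothetical CSV with "text" and "label" columns of clinical sentences.
dataset = load_dataset("csv", data_files={"train": "clinical_sentences.csv"})

def tokenize(batch):
    # Convert raw sentences into fixed-length token IDs the encoder expects.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="biobert-finetuned",
        num_train_epochs=3,
        per_device_train_batch_size=16,
    ),
    train_dataset=tokenized["train"],
)
trainer.train()
```

The appeal of that generation of models was exactly what he describes: an encoder pre-trained on broad data that a small team could adapt to one narrow use case with a modest labeled dataset.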
And I'm sure doctors too, right, who aren't in the tech world, could all of a sudden see the promise of this technology, of LLMs. Um, you know, as ChatGPT, as OpenAI, as Google, as Anthropic, these other LLM companies, get better at making products and have their own transcription services, why shouldn't doctors, hospitals, the healthcare industry just use them versus using Abridge? It's useful to think about the stack of AI technologies, and at the bottom of the stack, or towards the bottom of the stack, there are those foundation model companies, and in many ways, these are the primitives, the raw ingredients that everyone can kind of leverage higher up in the stack. And so maybe in the middle there's like an infrastructure layer, a layer that helps companies orchestrate models or serve them. But at the top is the application layer.
And companies at the application layer are really focused on solving specific problems for a specific set of users or businesses. And they're oftentimes deeply integrated into workflows and leveraging proprietary data sets. And so that's where this application layer sort of comes in, is that the charge for us at Abridge and
the charge for other application-layer companies is to really go millions of miles deep on that problem that they're trying to solve for. And that means that you're oftentimes exposing yourself to, or you're learning from, data sets that aren't on the internet. You're oftentimes orchestrating any number of different models together in order to create the best possible user experience for whoever you're serving.
Is there a better privacy angle, something that I would assume is extremely important in healthcare? Absolutely. I'd say, like specific industries have specific barriers to entry, and healthcare is one of them, where privacy is paramount. You know, trust is really table stakes. And trust is in many ways the ultimate currency that you're trading. And so can you.
Can you find a way? I think in our industry, like, one of our challenges, our opportunities, is to find a way to leverage all of these incredible technologies that are lower in the stack, also build our own where it makes sense, but then have it all culminate in a product and a solution that's really, like, full stack. It's a combination of not just the core technology and AI, but of how we integrate, how we deliver data into the workflow, how we do customer service, and how we learn from all those edits that we're getting from doctors on a daily basis. Why don't foundation model companies like an OpenAI, which is clearly, you know, I think Satya Nadella said, a product company now, what's preventing them from going after this space?
Well, I think it'll make sense for foundation model companies to go up the stack. In some ways they've always been there. You know, we all use ChatGPT or Claude. That's an app or product. Absolutely. And it's absolutely an app at that layer that's creating a specific kind of value.
I think those companies are also sort of like T-shaped companies, where they have that vertical, maybe consumer-facing app that we all might use, but they also have this more horizontal layer, an API that gives all the other kinds of companies out there, all the other enterprises out there, access to some of those primitives that they can leverage in their own very, very deep way. It would just be impossible for them to, I think, boil the ocean and go deep on all the things. And so it would make sense for them to try to be a part of, or to help power, all that value that's getting created.
Which models does Abridge lean on? So underneath the covers for us are any number of different models that we're orchestrating together. Um, we call what we do behind the scenes, or some of what we do, a contextual reasoning engine. And I think over the last few years, we've been able to get a lot more sophisticated in how we orchestrate different models together. So some of those models are web-scale models, like, you know, the models that we've mentioned, but some are fine-tuned open-source models.
And what that allows us to do is really, you know, really abide by this abundance mindset. And in our industry, better is better. You know, like a little bit of difference, you know, that last 5% can make a huge impact on the user experience. And there's also like trust and there's like safety concerns in healthcare.
So in many ways, the more times that we can, you know, thoughtfully hit a large language model and leverage AI, the better. And so part of our challenge is just being able to kind of abstract away a lot of that complexity from the user. But behind the scenes, you know, we might hit a large language model 15 to 20 times just to create one of those notes. And obviously we wouldn't be able to hit a commercial off-the-shelf model that many times, right, and be cost-effective. Are you using, um, the DeepSeek open-source model? We certainly are experimenting with it. And I think that's a part of the story at the application layer, that anytime the ground shifts lower in the stack, anytime there's a new discovery, like, it's actually a tailwind.
Um, and I think it's actually a tailwind, really, for the companies who understand how to kind of, um, put their hands down deeper into the stack, and for the companies who know how to sort of mold those technologies to the specific use case that they... Can be more agile, right?
Totally. Exactly, exactly, exactly. I want to go specific and then a little broader on the AI app layer. But the fact that you're sort of able to pick and choose, um, what does that mean for your monetization strategy? Um, are people willing to pay for it? How do you pay the models on the back end? Do you pay for APIs? And how does that compare to, you know, like LLM companies that have had to spend potentially hundreds of millions, billions of dollars building these foundational models? You guys get to sort of just take the best of what you can see. Yeah, absolutely. It is. Um, it's like part of, like, the incredibly exciting moment that we're in where, um, it just feels like time machines are broken, and it is actually not just an opportunity, but it's a challenge to be able to sort of keep up with the latest and greatest and to be able to tell your customer that you're truly delivering to them, like, the most cutting-edge and the best possible output for their specific problem.
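To make the orchestration idea he describes more concrete, here is a minimal Python sketch of a multi-pass pipeline that routes some sub-tasks to a hosted web-scale model and others to a cheaper fine-tuned open-source model. Everything in it, the sub-task list, the function names and the stub outputs, is a hypothetical illustration of the general pattern, not Abridge's contextual reasoning engine.

```python
# Minimal, hypothetical sketch of multi-model orchestration for drafting a
# clinical note. The sub-tasks, model stubs and routing rule are illustrative
# assumptions, not Abridge's contextual reasoning engine.
from dataclasses import dataclass

@dataclass
class Subtask:
    name: str
    prompt: str
    needs_frontier_model: bool  # send only the harder reasoning passes to a web-scale model

def call_hosted_model(prompt: str) -> str:
    """Stub standing in for a commercial, web-scale LLM API call."""
    return "[hosted model output]"

def call_finetuned_local_model(prompt: str) -> str:
    """Stub standing in for a cheaper, fine-tuned open-source model."""
    return "[fine-tuned open-source model output]"

def draft_clinical_note(transcript: str) -> dict:
    # One note can take many passes: extraction, drafting, and checks aimed at
    # different stakeholders (clinicians, billing, the patient).
    subtasks = [
        Subtask("extract_medications",
                f"List medications mentioned:\n{transcript}", False),
        Subtask("extract_billing_items",
                f"Flag billing-relevant details:\n{transcript}", False),
        Subtask("draft_assessment_plan",
                f"Draft an assessment and plan:\n{transcript}", True),
        Subtask("patient_friendly_summary",
                f"Summarize in plain language for the patient:\n{transcript}", True),
        # ...in practice a pipeline like this might run 15 to 20 passes per note.
    ]
    results = {}
    for task in subtasks:
        call = call_hosted_model if task.needs_frontier_model else call_finetuned_local_model
        results[task.name] = call(task.prompt)
    return results

if __name__ == "__main__":
    note = draft_clinical_note("Doctor: ... Patient: ...")
    for section, text in note.items():
        print(section, "->", text)
```

The routing flag is the design point: the expensive frontier-model calls are reserved for the passes where that last 5% of quality matters, while cheaper fine-tuned models absorb the high-volume extraction passes, which is how a pipeline can afford to hit models 15 to 20 times per note.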
Um, do you think that as an app company, an AI app company, it's easier for you to monetize than, say, an OpenAI, which we know has billions of dollars in losses a year? What does that cost structure look like? Even if you don't want to go into specifics, that's fine. But can you give us any color around how that works? Yeah, absolutely. I think at the application layer, you're sort of abiding by all the rules of your specific industry. And so healthcare probably has its own rules around how you might go to market, how you engage in, like, a sales cycle. And I'm sure other industries like legal or others are pretty similar. Like, they probably have their own idiosyncrasies, but certainly in healthcare it's always about the job that we're doing, like the value that we're creating.
And, um, pricing always is sort of like downstream from that. In this world that we're living in, we have an incredible opportunity to create impact, and we've been doing it almost overnight these last few years, because two out of five doctors don't want to be doctors in the next 2 to 3 years, and 27% of nurses, apparently per one article in JAMA, don't want to be nurses in the next 12 months. And so we have this huge supply-demand mismatch. And that means patients, all of us, some people who live in rural settings, for example, have to drive five, six hours to get to the inner-city hospital and see that rheumatologist who could potentially save their life. And so this is a public health emergency. We have to do something about it.
And that's what's sort of created this moment for us, also; this helped align all the different stars. Two out of five doctors don't want to be doctors in
the next how many years? Yeah, in the next 2 to 3 years. Wow. Is that because of the paperwork factor? Which, I know, even myself, I hear that complaint all the time.
Yeah. So you're actually trying to help with an even bigger shift? You think that with Abridge, doctors' jobs might become a little more manageable, and you're hoping they want to stay in the industry? 100%. And we hear it every day. Yeah. We, um, we have a, like,
a compliant communication channel inside of our company. And on a daily basis, we get feedback from users. And we have this one channel called Love Stories. And it's like an immediate source of dopamine for any engineer who wants to understand, why are we doing this? Um, and on a daily basis, we'll probably get any number of pieces of feedback from users who are telling us that they're going to stay in the profession now another five years, because, like, this has unburdened them so much. There's competition in the space, though. Are you at the stage where you see that doctors, hospitals, etc. are willing to pay for this?
Um, or are you in sort of a growth phase where you're just trying to scale and get market share? They're willing to pay. Um, and we've been scaling. So we're live across over 110 health systems in the country. And I'd say this moment right now that we've been in over these last couple of years is really historic.
Like, healthcare doesn't tend to move this quickly. And so it goes to show that, you know, the problem is, you know, top of mind for all the different C-suite executives across the country, that they're looking for solutions, and solutions that work. And we've been able to demonstrate that impact, um, and are going to scale. So I've been careful to call Abridge an AI app company.
How does the term AI wrapper make you feel? You know, I don't know how to feel about that term, because I'm not sure if it carries the same meaning that it did, you know, years ago. I think folks maybe 2 or 3 years ago were trying to grapple with what it all means. And I think that at that point in time, there was an impression that the only way to compete in AI would be to raise hundreds of millions of dollars and to pre-train these, like, web-scale models that could solve every problem underneath the sun. And that was the only game in town for AI. But I think very, very quickly people figured out that actually value moves up the stack.
And at the very top are people. And when you can solve people, like, human problems in a really, really deep way, like, that's the most profound implementation of, like, this technology. Um, and so that's where we live, I would say. Like, channeling some of the scientists in our company, like our chief technology science officer, he's a professor at Carnegie Mellon, and he's full time with us and lives here in San Francisco. But he would say that we are actually more akin to some of those foundation model companies, in that we're also T-shaped in our own way, not at the same scale. But we also have that level of science, because we're also leveraging proprietary data sets.
We're also fine-tuning and post-training our own sort of models that help us deliver the best possible output. So is wrapper no longer an accurate description? Wrappers give the implication of being thin, right? And I think what you're saying is you're tackling enormous problems and you're going deeper and deeper. Yeah, exactly.
It's like this, um, I 100% agree with you. I think it probably still applies to some products out there, and probably more often in the consumer space than in the enterprise space, because in the enterprise space, you know, putting technology aside, there are always those other boxes that you have to check off: compliance, privacy, security, infrastructure, scalability, and, you know, all the other concerns. You said something interesting, that you guys are developing your own models.
And we've seen this trend, especially DeepSeek really put it out in the open, of models becoming a lot cheaper and more efficient. You've got more open-source models to draw from. Yeah. Is that the next sort of stage for not just Abridge, but maybe these AI app companies, that they're going to be able to create more of their own models? I think so, absolutely.
I think that's the trend, and that's how we've been able to pull it off. I think much of our success can be attributed to how we've been able to differentiate on, like, the product and, like, the user experience and the metrics. And while we don't necessarily want to tell the doctors, like, hey, listen, there's so much going on behind the scenes, we're orchestrating, like, 20 different models together. And some of these models are trying to pull out, for example, parts of the conversation that an insurance company would want to be able to reflect on or review in order to reimburse you. Some models are trying to pull out, well, what would the patient want to read? If I write a note that has a term like transcatheter aortic valvuloplasty for one of my patients? I'm a practicing cardiologist.
My patient might read that term and say, like, wait, he never said that to me. I'm going to Google that, and then it sounds scary. And then they might call me or email me and say, like, what's going on with this term? And so all of those different models that we're orchestrating in the background are aimed at serving all those different stakeholders. And that's where I think differentiation in large part comes from. Right. I know you're in the healthcare space, but I wonder if we could chat a little bit more broadly about this app layer.
Yeah, obviously enormous, consequential problem-solving in the healthcare space. Something we've also been interested in is sort of the rise of vibe coding, right? How that's changing another industry, of coding. Um, do you know much about that? Have you looked into it, just sort of as a leader operating in the AI app space? Absolutely. You know, it's... Nerd out with me on it. 100%. I mean, it's something that comes up probably in our daily, like, kind of communication channels on Slack, you know, inside of our company on a daily basis.
So it's definitely of the moment right now. Um, I know it also personally. I've got two, uh, eight-year-old boys, and they did 100 days of Python on Replit. Um, and they're also playing with, like, Cursor, and, um, you just sort of see them operate these tools, and it is vibe coding as we describe it. Like, they're just sort of feeling it out, and, you know, they're trying to make a website right now that breaks down Rubik's Cube algorithms, because that's their current obsession. But, um, that they can build a website, you know, in a matter of weeks of, like, starting, you know, a course on Python, is just,
like, mind-blowing. So I think that... When you say they're starting a course on Python, you don't actually need to know that to vibe code or AI-first code. What does that mean? Yeah. Well, I think that being able to have some of the fundamentals to sort of understand what you're doing is still important. Okay.
I think with AI in general, and probably this will continually change, like, time machines are broken, like, who knows where we'll be in, like, six months or 12 months. But I do think that in general, with, like, with AI tools, it takes two hands to clap. You know, like, as a doctor, sometimes I'll see patients in the hospital. I still do one weekend a month, I'll take the weekend shift nobody else wants. And just for fun, I might, like, de-identify information around a specific patient that I saw, and then enter it into one of these, like, AI models, and sort of understand: what would it do if it was the doctor in my position, seeing a patient with these symptoms, with these complaints? And I would say, like, oftentimes, at least half the time, its first instincts are not correct.
But then I sort of, like, have this conversation. We go back and forth and we feel it out, and I'm convinced, and I have no counterfactuals, but I'm convinced we get to a better conclusion in the end. So it's this incredible partner in helping me expand my mind and ensuring that I don't forget about some zebra diagnosis. But it's still not, like, the be-all, end-all. And I think, like, vibe coding is a similar experience.
Do you think that vibe coding can replace traditional coding? Well, I think right now, especially in industries like healthcare, where you need to be able to scale, like, a very enterprise-grade solution, and industries like that, I think where vibe coding comes in is that it's an incredible tool, or craft, to be able to get to a prototype. Um, I think what vibe coding has done for us, for example, is it just sort of helps us skip a lot of steps around translating ideas or communicating ideas across the company, so that folks can understand, like, what we're building, and also maybe so that we can start to, like, hone what we're building, maybe like the user experience, like, well ahead of shipping. And so it's sort of contracting cycles on the product side, but it's not necessarily replacing any of the work that we still have to do to ensure that this code that we're shipping is going to be able to serve and scale. Okay. So the last topic that I wanted to sort of talk to you about: what we've seen over the last few years is kind of amazing, like the fact that you're a startup, you're competing against massive players in the space who've been trying to solve this problem for a very long time. Why do you think that there's room right now? Does it have to do with generative AI? Does it have to do more with, you know, legacy players being huge ships to steer? Why is there space for someone like you guys to compete? Well, I do think that when the ground is shifting as quickly as it is, what you mentioned before, like, agility means a lot.
And I think if you can put together the right kind of team that has the right types of skill sets. For example, if you can recruit scientists who can kind of go and reach down lower into the stack to be able to fine-tune those models for your specific use case. And then if you can also kind of integrate into those workflows in a really deep way. So if you can kind of marry the moment with AI, with domain expertise, um, and obviously it's very hard to time markets, but, like, if there's a lot of pull, then there's magic. And I think that's the moment that certainly healthcare, um, is experiencing right now with AI. And I think other industries will soon as well.
So startups are able to move quick, they're agile, but is it only a matter of time until the bigger players, who have more data, too, arguably, right, are able to capitalize on that? And, you know, what's your moat against them sort of getting wise to it, maybe making some acquisitions in the space and competing with you? Well, I think at that foundation model layer, there is a kind of moat that you can go after. And maybe years ago, we thought the only way to compete at that layer would be to raise hundreds of millions of dollars and to have access to all that compute and all the internet's data, and to be able to do all that reinforcement learning over time. And it's heavy, heavy work. At the application layer, I think moat looks the way it's always looked for technology companies, and especially in, like, the B2B world, in the enterprise world. Moat looks like network effects.
Moat looks like switching costs. Moat looks like access to scarce resources, you know, brand. And so those all feel like advantages to an incumbent, though. Distribution, it could be, you know... I think the distribution piece is important. But I think that on the distribution side,
AI is actually changing the game for what those distribution rails need to be and what they need to look like. Um, you know, even now, uh, like over these coming weeks and months, I'm sure that we'll be exposed to more sort of enterprise-grade, maybe even agentic technologies that can do a kind of robotic process automation, that can kind of get a bunch of work done for you, that can get access to your browser or get access to your legacy system and, like, get a bunch of, you know, um, tasks completed for you. And so the barrier to entry on being able to do those tasks before, with that system, might have been incredibly high. Like, impossible. How could you get in? The barriers to entry sometimes feel like they're just as high as the barriers to exit in a lot of industries. And so in this new world with AI, though, AI is managing to, I think, redefine the importance of distribution rails, or existing distribution rails.
And I think that it's also allowing folks, um, to sort of compete at the layer that startups want to compete on, which is, like, the best product. So I think that's where the game has changed a good bit. Right. Um, in the CNBC world of markets and finance, um, we've seen actually more deal activity than we have in a long time. Um, a couple of massive acquisition plays this week, Google and Wiz, SoftBank and Ampere, um, IPOs coming up, Klarna and, uh, CoreWeave.
How do you think about yourself as a private company? Is it your mission to stay private? Would it be appealing to you? I mean, something that Wiz said with Google this week is it's going to accelerate their innovation and their scalability by being within such a big company. Do you think of it like that, or do you sort of, um, try to or aim to remain independent? Um, my aim is to just create as much value as possible, as quickly as possible, and also be on a long, long journey with our health system customers. And so for every single one of these 100-plus health systems that we've been able, that we've had the privilege, you know, to be able to partner with, we help them understand that we are on a decades-long journey. We've raised the capital to be on a
decades-long journey. And so much of the capital that we've raised, 80% of it, we put into R&D so that we can always be on that bleeding edge and deliver to them the best possible product, the best possible experience. So I think it's important to sort of, um, keep in mind what all the different options are over time. But I think the first principle for us is
that this is the kind of company, you know, that we're building at Abridge, that is, you know, really a part of the infrastructure. It can be a part of the fabric of healthcare delivery. One of the most magical things for me personally over these last few months is starting to hear doctors use Abridge as a verb, like, oh, I abridged my conversation with my patient today. And I think that that's, like, that's the moment, that's the privilege, and, like, that's the opportunity that we just want to make sure that we don't, uh, we don't mess up. Interesting. At a much smaller scale, we use Otter in the journalism world.
We Otter'd this, so I know what that's like. Yeah. Um, Shiv, thank you so much for coming in today. Fascinating conversation. I learned a ton.
Thank you so much, Deirdre. Really a privilege.