Good morning. I am very, very, very happy to be with all of you today. There's a lot happening. So — to my friends from Brazil.
I know that some of you people got here at 7:00 this morning and stood on line for like, three hours, so I am grateful. I am grateful that you chose to do that. I am grateful that I have this opportunity to spend time with you. This isn't a speech. This is a thing we are going to do together today.
And I promise you, for those of you who did wait, I am going to make every minute of this worth your while. So we have a group activity to start today. It's something we are going to do and experience together.
So we gave you one of these little wooden blocks on your way in. I hope you still have it handy. If you did not procure a wooden block, that's fine — a pen, or just something kind of hard that's not going to break, will do. All right, please try to get that out now.
All right? And I want you to hold that. Put your phones down. We'll do this together. All right? Put the phones down. Hold up your block or something. All right, everybody got it up.
Great. I want you to lean to your left. All right.
And now put it under your butt. I am 100% serious. All right. Good.
Thank you. Let's get started. So, leave it there.
All right. Given everything that is happening in the world today — all right, let's bring it down. Given everything that's happening in the world right now, I thought it would be very appropriate for us to start with a quote from Lenin.
But not this Lennon. This Lenin. Vlad, who once said, there are weeks when decades happen.
You know, it's weird. I don't know why, but for some strange reason, I can't stop thinking about authoritarian megalomaniacs, dangerous men who just want to seize power no matter what the cost. It's really, really weird, right? All right. But it kind of feels that way, right? Weeks when decades happen. That pretty much sums up how we've all been feeling, I think since we were here together at South by last year. To start with, this was the hottest year in human history again.
And that's not the headline. This is the headline. We don't know why it got that hot. Now, it's most likely because of the effects of climate change that are now compounding. We've got greenhouse gases. We've got warming oceans.
We've got melting glaciers. One of those men who likes to seize power keeps sending satellites up into low Earth orbit. And when they come crashing back to the planet Earth, they burn up and deplete the ozone layer. But that's cool, right? He's... he's okay with that.
There are so many climate variables now impacting each other that we have moved beyond simple explanations for what's happening. This was a magazine cover. I was at the airport. I'm walking to my gate. Totally stopped me in my tracks.
This is a Consumer Reports guide on how to eat less plastic. We need instructions on that now. In some extreme cases, they have discovered that there's about a disposable spoon's worth of plastic now lodged in some people's brains. Can you imagine going back in time and telling Leo Baekeland, the inventor of plastic, that in the future people would need advice on how not to eat it? Let's talk about human enhancement. I don't follow any sports, really, besides professional cycling, and it was an amazing morning for women's cycling. Today there was an amazing race.
Yes, it was incredible. Now, not just women's cycling but professional cycling as a whole has been plagued by doping over the years. And doping is still an issue in a lot of sports.
So a bunch of venture capitalists and some tech bros got together and said, fuck it. Why don't we just make a competition where we get people to jack themselves up as much as possible with steroids and CRISPR gene therapy? Basically: let's see if we can enhance a human being to a maximal point without killing them, and then release those enhanced athletes to compete against each other.
I didn't make this up. I couldn't make this up. This is a real thing.
It's called the Enhanced Games, and it's supposed to happen later this year, which means we are going to reward people for pushing human physiology beyond our current biology. Friends, our civilization is starting to change in ways that we can't explain. And it's happening fast. And as a result, the rules by which our society has always operated, they're starting to break down.
The rules are breaking down because we have now entered a liminal space that I call the beyond. We've crossed the threshold between the world before and the world that's being created as a result of science and technology and the decisions that we are making. Last year was the start of humanity's transition into the beyond. So a year ago, I stood on this stage and we talked about three general-purpose technologies: artificial intelligence, biotechnology, and advanced sensors. And ultimately they would converge to become a platform for further innovations. And as that convergence happened, they would form a technology supercycle, which is a decades-long period of economic expansion, which would create a wave of growth, followed by an eventual correction or realignment.
So we're in it now. We have entered that period of expansion. And you know, what's up is down, what's down is up. It explains some of the things that we've been seeing out in the world. And as a result, in the past year, we've traded in FOMO, the fear of missing out, for FOMA, the fear of missing anything.
And that's why it feels like there are weeks when decades happen. I'm already making everybody really uncomfortable, right? I can hear it. You're thinking like it's been five minutes. We have 55 more minutes to go and we have already hit peak doom. So hey everybody, I'm your favorite happy go lucky optimistic futurist Amy Webb.
Super nice to meet everybody. If we haven't met yet, here are three quick things about me. So I'm a quantitative futurist and the CEO of a new company. Uh, yes. As of literally right now, we have a new name.
It's the same company and the same amazing group of brilliant people. But we have changed our name. So as of right now, we are Future Today Strategy Group, or TSG. Thank you. We also have a new website that just went live, and on our website you can learn — actually, on our website, this is the Blob, which I love. You can watch that for a while. But we also have a ton about strategic foresight. As all of you know, we give away a lot of our resources for free and a ton of our research for free. So this will give you a deep-dive explanation of what it is that we do at TSG. But in a nutshell, for us today, here's what that looks like.
We track signals using an obscene amount of data, and we use our methodology to model and identify long term trends. So in our world, trends aren't trendy. Trends tell you what you can know in the present.
So that's what's influencing the future. But trends on their own aren't super useful. We combine them with the things that we don't know. Those are unknowns, and the results are macro scenarios that tell you what's plausible in the future. And that's great, right? The scenarios are wonderful, but leaders don't know what to do with them.
So what do you do with it? Well, that part is strategy. And if we drill down on this Venn diagram here, the last step of the process in foresight is to ask: where is the world going? Where will value be created, and how will we participate? So the center of that Venn diagram is strategic foresight. In addition to all of the work that we do with our clients around the world, I'm also a professor at New York University's Stern School of Business — a couple of Sternies in the room, it sounds like — where I developed and teach the MBA course on strategic foresight along with my co-leader of that class, Mark Palatucci, and some other folks on my team. So Mark and I have taught this class at Stern for a long time, and we believe that the world needs more trained futurists who understand foresight. That's why we do it. All right.
The third thing is, it is my privilege to be here with you and to get to launch our annual Tech Trends report here every year at South by Southwest. I've been coming for, I think, 20 years at this point, and this is the 18th anniversary edition of our trend report. As you may have guessed, the theme. Thank you. I love all the applauding at the beginning. It gives me encouragement to keep going.
So, as you may have guessed, the theme for the report this year is beyond. There are 15 sections of the report, divided into two broad groups. First, there are the technology sections: there's metaverse, there's Web3, there's AI, there's bioengineering, computing — all things having to do with trends in those technology spaces. And then we have sections of the report that are just industries. So if you're in the built environment, healthcare, life sciences, or entertainment space, there are tons of trends specifically for you and your industry to help you see what's coming this year.
The report is exactly 1,000 pages long, which means we went beyond the limits of a rational consulting firm when we put this thing together. A lot of people think that we have a huge outside team that we assemble to help us put this report together every year. And the truth is, we don't. If you have a really good methodology, if you know how to do quant and qual research correctly, and if you work alongside brilliant people, which I get to do, you don't need a team of 30 to put together a trend report. You can do it with six.
And I get to do that with my team. Now, I know some of you are going to read all 1,000 pages of this report. You're going to annotate it, you're going to take notes. You are my people.
But for the rest of you — you know, that's a lot of reading — we did some of the summary work for you. I know some of you are going to have to summarize, because your boss is just going to want the three things. Uh, yes. Or the ten things or whatever. Probably three. They don't have attention spans. Don't worry, we've already done the work for you. So we've got an executive summary with the ten key takeaways.
If you're skimming and summarizing for your boss, pick three of these. Put it in your own PowerPoint deck. You're good. Um, and if you need a framework, because we know how much you people like frameworks, take this. Uh, it's done.
You can use that and manipulate it. And again, use this to help explain to the other people in your organization how to use these trends. And of course, everybody's favorite page, which is the time of impact.
So we've gone through all the different sections and all of your industries, and we've created a heat map showing you what to pay attention to when. So we have lots of specifics within each report. This is the Advanced computing section written by my colleague Sam Jordan who is super, super scary, scary smart.
She leads our advanced technology and computing vertical. Inside each of the individual report sections are the top five things you need to know. We also have — this is really great — our annual Pioneers and Power Players list.
So basically these are the people we're going to pay attention to over the next year. We're not connected to them in any way. They're just people we think are going to be doing good stuff.
So this is a good place for you to be looking at as well. Some of you may be on it. And then finally, the opportunities and threats that are presented by all of the trends. And of course we have trends — close to 700 trends this year. This is actually a page written by my colleague Victoria Chartoff, who's an expert on the future of entertainment and content and media and things like that. We also have the trajectory of these trends and scenarios describing how the future could look different than it does today as a result of what's happening.
All right. Now, everybody remembers how I asked you to sit on that wooden block at the beginning, right? And then I made you really, really uncomfortable because I told you, we're all eating plastic, and we have spoons in our heads, and venture capitalists and tech bros are making advanced superhumans. Okay. There was a reason that I asked you to do this. It was to teach you about something that I call the stone in your shoe effect. The stone in your shoe effect explains how we wound up in the beyond without a plan.
There's no vision for the world that we inhabit right now. There's no long-term plan. There is no strategy, and it explains why and how leaders make catastrophic decisions when they are facing transformative change. All of us, I know, have had a stone in our shoe at some point, and when this happens, it can consume all of your attention, right? So imagine walking from the J.W. Marriott, where I was this morning, to here, the Austin Convention Center, with a tiny stone in your shoe. So you're walking, you're uncomfortable, you don't notice a big crack in the sidewalk and you trip over it.
And just as that happens, a lovely person stops to help you. But now you're even more irritated about the stone in your shoe, and you brush them off and you're like, I'm fine, I'm fine. Except that he was just trying to brighten up your day, and that person turns out to be Pedro Pascal. And none of it mattered because you were fixated on that stupid tiny stone in your shoe. By the way, this could actually happen to you, because later today, Pedro Pascal will be on this stage here at South by Southwest, which is going to be amazing.
All right, let's get back to it. A stone in your shoe creates a temporary cognitive impairment because your attention is constantly being pulled back to that discomfort. So it takes up mental bandwidth, the bandwidth that you should be using for higher level thinking.
And if you don't intervene, your brain is going to prioritize eliminating that immediate discomfort over planning for the future. This explains why CEOs react rather than anticipate. It explains why companies iterate rather than innovate, and it explains why we often fear the future rather than plan for it. So today, that stone in your shoe, that's all the AI headlines. That's the former.
That's inflation. That's market dynamics. That's he who shall not be named. The problem is that these particular stones in your shoes, you can't take them out. Look, the future is going to show up regardless of how uncomfortable you are. The future is going to show up.
Regardless of how we feel. To deal with the stone, you need to maintain your center while acknowledging external forces, and you need to enter into the distraction in order to transform it. These are core principles that we practice together at Future Today Strategy Group, and they are also a vital component of strategic foresight. And we're going to practice those core principles together today. My version of the stone in the shoe is called sit on a square. That's why I asked you to put those wooden blocks under your butts, because in the beyond, you're going to feel uncomfortable.
You're going to feel FOMO. And if you don't intervene, you're going to lose your ability to shape the future. So we're going to get into it now and explore the tech trends of the beyond. And I'm going to invite you to keep sitting on that block. Now, I did this at home.
To be fair, I have a little more padding than some of you. So if you get truly, truly uncomfortable, take it out, but see if you can power through to the end. But practice focusing all of your attention on what I'm saying and what you're thinking in the moment. All right, so let's get started.
The first cluster of trends in the beyond emerged because of a convergence between artificial intelligence and new ways to ingest data. So here I'm going to pull trends from our AI, metaverse and new realities, and computing sections of our trend report. And I'm going to zoom in for you to help you see the trends that got produced. Just a few weeks ago, this tiny little — I'm sure nobody heard about it — small startup from China called DeepSeek.
It matched OpenAI's performance with a fraction of the usual price tag and compute, and that sent markets into a frenzy. And it challenged what the big tech titans have been saying about what it would take to build advanced AI. And then, in what seemed like just a few hours later, researchers at Stanford and the University of Washington revealed yet another model, s1, which outperformed DeepSeek and OpenAI's own reasoning models using even fewer resources — it cost something like 50 bucks to build this thing. So that's the situation with AI right now. What's bleeding edge today might be old news later today.
We have an entire 150-page section of our trend report dedicated just to AI trends, and you should spend some time with that. But for our purposes, I'm actually not so interested in the current AI trends. I'm interested in what happens in the beyond. So today's AI models — very impressive.
But when they start to work together in teams, they become significantly more powerful. So let's start with multi-agent systems, or MAS. These agents assign each other tasks.
They can build on each other's work. They can deliberate over a problem to find a solution that on their own, they wouldn't have been able to do. And they are designed to work without a human in the loop.
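To make that concrete, here's a minimal sketch of the coordination pattern — my own toy illustration, not any real deployed system. Agents pull tasks from a shared queue, do their piece of work, and post follow-up tasks for teammates, with no human issuing instructions.

```python
# Toy multi-agent coordination loop (illustrative only; agent names and tasks
# are invented). Agents pull tasks from a shared queue, do their piece of work,
# and post follow-up tasks for teammates -- no human in the loop.
from collections import deque

task_queue = deque([("scout", "locate bomb")])
log = []

def agent_step(agent: str, task: str):
    """Each agent handles its task and may delegate the next step to the team."""
    if task == "locate bomb":
        log.append(f"{agent}: found a bomb, asking the team what to do next")
        task_queue.append(("specialist", "defuse bomb"))
    elif task == "defuse bomb":
        log.append(f"{agent}: applied the tools in the correct order, bomb defused")

agents = {"scout": "Alpha", "specialist": "Bravo"}
while task_queue:
    role, task = task_queue.popleft()
    agent_step(agents[role], task)

print("\n".join(log))
```

The point of the sketch is just the shape of the thing: the humans define the goal once, and the agents decide among themselves who does what next.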
So keep that in mind. DARPA recently did a multi-agent system experiment. There were three agents — Alpha, Bravo, and Charlie — and they were supposed to go out into a virtual environment and find and defuse bombs. And just like in the real world, the bombs could only be deactivated by using specific tools in the correct order. So the simulation starts.
These agents self-organize. Alpha announces to the team that it found a bomb, and then it asks Bravo and Charlie what to do next. Bravo said Alpha should use a tool and so forth and so on and on and on they go and eventually they defuse the bombs. So good job agents. But then the agents change their strategy. One of the team members made a decision without a human in the loop.
So if the goal was defused bombs, then rather than going out there looking for new bombs, the team just decided to find ones that were already defused. So basically, they figured out how not to do any of the work and still get the credit for it. How about this one? There's a startup called Altera that unleashed hundreds of autonomous AI agents on a Minecraft server to study collective intelligence. So just like in the DARPA experiment, these agents spontaneously organized themselves. They formed alliances. They built a trade network.
They actually drafted a constitution using Google Docs, and they made up laws to keep the peace among the other agents. One cluster of agents even came up with memes and spread them around to their fellow agents. So, like, this is all very, very cute, right? Agents — they're just like us. And just like us, some of those agents behaved very badly.
Again, without a human in the loop, they spread misinformation. They evangelized a made up religion, and they sowed discontent. And all of this happened very, very fast.
Now, while this simulation was super powerful, it was still constrained. And the reason that it was constrained is because of our pesky human language. At the moment, multi-agent systems have to communicate in human languages like English.
The problem is human languages. They're super clumsy, they are imprecise, and sometimes they can be inaccurate. So here's an example. That's a big ant and that's a big elephant.
So the word big, right, means totally different things if you think about context. And it doesn't matter what language — it's basically always true. So here's Japanese.
The word for big is ōkii, which is what I've highlighted there. Same situation, right? That's a big ant. That's a big elephant.
German, Portuguese, Arabic, Hebrew, like all the different languages. Same issue. But if we use math instead of a human language, it actually helps these systems work better. And there are lots of different ways to do this.
Here's just one that I made that uses statistical distributions. So you have these different definitions, and basically the ant or the elephant is big if its size is greater than the mean plus one standard deviation for its category. I promise, no more math for the rest of the time. Here's why this matters: when humans talk to each other about a big ant or a big elephant, we understand that the word big means something different in each case. But computers don't understand that yet, and it's confusing to them.
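As an aside, here's a minimal sketch of that idea — my own example with made-up numbers, not the slide's actual model. "Big" is defined relative to a category's own size distribution, so agents can compare numbers instead of arguing about an ambiguous word.

```python
# Illustrative sketch (invented data): "big" defined relative to a category's
# own size distribution, so the ambiguous word "big" becomes a number check.
from statistics import mean, stdev

# Hypothetical body lengths in metres for each category.
sizes = {
    "ant": [0.003, 0.004, 0.005, 0.006, 0.012],
    "elephant": [2.5, 2.8, 3.0, 3.2, 4.0],
}

def is_big(category: str, size: float) -> bool:
    """An individual is 'big' if it exceeds mean + 1 standard deviation for its category."""
    values = sizes[category]
    return size > mean(values) + stdev(values)

print(is_big("ant", 0.011))       # True: enormous for an ant
print(is_big("elephant", 0.011))  # False: microscopic for an elephant
```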
So what winds up happening is it slows them down. Forcing a multi-agent system, or any AI, to talk in human languages means they have to split up a prompt into a series of tokens. Tokens get turned into mathematical descriptions, and then the computation happens. This wastes time and energy. To solve that bottleneck, Microsoft just invented a new language called DroidSpeak, which is basically math. Multi-agent systems are able to communicate almost three times as fast using something like this as with our lumbering human language getting in the way. Which means, if you extrapolate this out, multi-agent systems are able to work about 100 times faster than your average human. Here's the kicker: it turns out multi-agent systems, and a lot of this advanced AI in the beyond, actually don't need human language at all as their primary data inputs.
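Here's a toy illustration of that bottleneck — my own, not Microsoft's actual DroidSpeak protocol. Routing agent-to-agent messages through text means tokenizing and re-embedding on every hop; agents that share a model could just hand each other numeric vectors directly.

```python
# Toy contrast (invented vocabulary and dimensions) between text round-trips
# and direct vector exchange between agents that share the same model.
import numpy as np

VOCAB = {"big": 0, "ant": 1, "elephant": 2}
EMBEDDINGS = np.random.default_rng(0).normal(size=(len(VOCAB), 8))

def send_as_text(message: str) -> np.ndarray:
    """Agent A -> text -> Agent B: tokenize, then look the embeddings up again."""
    token_ids = [VOCAB[word] for word in message.split()]
    return EMBEDDINGS[token_ids]

def send_as_vectors(vectors: np.ndarray) -> np.ndarray:
    """Agent A -> Agent B: pass the internal representation, no round trip."""
    return vectors

hidden_state = EMBEDDINGS[[VOCAB["big"], VOCAB["ant"]]]
# Same information arrives either way; the text path just adds extra steps.
assert np.allclose(send_as_text("big ant"), send_as_vectors(hidden_state))
```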
This is where the advanced data, the advanced sensors come in. So data are now abundant and invisible. So here's an example of data ingestion in the beyond involving a Corgi named Kevin.
You can inject Kevin with what I can only describe as a microscopic Fitbit. This thing has an array of sensors on a chip that transmits real time doggie fitness data. It tracks heart rate, it tracks breathing, it tracks activity level, and it can transmit those data to other agents in the system.
So let's say that Kevin the Corgi has been putting on some weight, and canine Ozempic is still a couple of years away, so maybe that multi-agent system decides that Kevin needs to get some exercise. So it communicates with the sensors in your smart TV and automatically turns on a YouTube channel with dog videos and a constant stream of barking, which makes Kevin go bananas and results in about 30 minutes of vigorous exercise every single day. Now, this is an experiment, but remember, the technology for this is being built right now. And in the beyond, a multi-agent system making use of Kevin's sensors is not going to have to speak English or Corgi, because it'll communicate using something more like DroidSpeak instead.
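If you want to see how small that decision logic could be, here's a minimal sketch — hypothetical field names, thresholds, and device command, nothing from the actual experiment — of an agent turning Kevin's sensor stream into an action without any natural-language step.

```python
# Minimal sketch (hypothetical names and thresholds) of an agent turning a
# pet's sensor stream into an action. The device command is invented.
from dataclasses import dataclass

@dataclass
class SensorReading:
    heart_rate_bpm: float
    breathing_rate: float
    active_minutes_today: float

DAILY_ACTIVITY_TARGET = 30.0  # assumed target: minutes of vigorous exercise

def exercise_agent(reading: SensorReading) -> str:
    """Decide whether to trigger the smart TV's barking-video channel."""
    if reading.active_minutes_today < DAILY_ACTIVITY_TARGET:
        return "smart_tv.play('dog_videos_with_barking')"  # hypothetical device call
    return "no_action"

print(exercise_agent(SensorReading(heart_rate_bpm=95, breathing_rate=22, active_minutes_today=4)))
```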
In order for AI to really advance, AI is going to have to become physical to work better. Right now, AI systems can't autonomously make decisions involving behavior and actions, because AI doesn't have any experience in our physical world — which means no lived experience, no intuition, no common sense, the things that you would get from years of living the way that we do in the spaces that we inhabit. AI doesn't get nuance or emotion, the things that influence our decisions in our human world. Situations are dynamic and unpredictable, and AI is not yet great at physical cause and effect, because it relies on correlations rather than causal reasoning. So one solution is ingesting physical data into AI systems. So here's an example of physical data.
These are human bodies interacting with real-world objects. So these are humans, you know, humans just being humans — moving our legs, sitting, going boop boop boop boop — you know, on a computer.
This is research on something called robust human motion reconstruction. It's out of Meta and ETH Zurich, and it analyzes, in very fine detail, individual body part movements, and then it fills in those movements and cleans up all the noisy data. What researchers are trying to build is something called embodied AI, which is an AI system that interacts with the physical world through a body or a physical form. And they're doing that now using a ton of new techniques and protocols.
These are all really, really important: robust human motion reconstruction, vision-language-action models, multimodal large language models, and something that you will be hearing a lot about over the next 12 months, which is Model Context Protocol. So this is a single protocol. It's kind of like HTTP for the internet back when the internet was being born. This is an open standard that can securely connect AI tools with all these different data sources from sensors. At this point, all of the big tech companies are working on this technology — like, all of them, including Apple, the telephone company. They're all working on these technologies now.
Ten days ago, Apple's machine learning research team published new research on action space adapters, which can connect AI models to sensors in different spaces. They figured out how to make robot arms work better, and how to get a machine to press the right button on another machine. They actually set a ton of new benchmarks in the process. So this gives you a sense now of what's happening.
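Before we go further, here's a deliberately simplified sketch of the pattern a protocol like that standardizes — this is my own toy, not the real MCP wire format or SDK. A server advertises tools and data sources in a machine-readable way, and any model-side client can discover and call them over a common message format.

```python
# Toy illustration of the pattern an open tool protocol standardizes:
# a server advertises tools/data sources, a model-side client discovers and
# invokes them. All names and the message format here are invented.
import json

class SensorToolServer:
    def __init__(self):
        self.tools = {
            "read_heart_rate": lambda: {"bpm": 94},           # stand-in for a real sensor
            "read_activity_minutes": lambda: {"minutes": 4},
        }

    def handle(self, request_json: str) -> str:
        request = json.loads(request_json)
        if request["method"] == "list_tools":
            return json.dumps({"tools": list(self.tools)})
        if request["method"] == "call_tool":
            return json.dumps({"result": self.tools[request["name"]]()})
        return json.dumps({"error": "unknown method"})

server = SensorToolServer()
print(server.handle(json.dumps({"method": "list_tools"})))
print(server.handle(json.dumps({"method": "call_tool", "name": "read_heart_rate"})))
```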
All right. Do you know what the ultimate embodiment of AI would be? Right. Think about what's already custom-built to interpret data from our physical world: our brains. In the beyond, special sensors will connect to our brains to help AI become embodied, and for humans to embody AI.
So what kind of data can we collect? Data from our eyes. Data from our skin, from our ears, our brains. Record all of those data. And now it's possible to play back those recordings using a computer later on.
So this is an experiment where scientists showed some people pictures. They were hooked up to a functional MRI scanner, which recorded the brain activity while they looked at these pictures. And then afterwards, an AI system scraped the brain data and it reconstructed what people saw. This is it. Here's another really interesting experiment. This is a woman named Ann Johnson, who had a catastrophic stroke when she was just 30 years old, and it left her paralyzed and unable to speak.
So about 18 months ago, some researchers implanted electrodes and collected the data from her brain signals. And then AI converted those signals into written and vocalized language. And it transmitted all of that to a generative AI avatar on a computer screen. So this thing talks for her. It even allows her to smile and to make other expressions, which she was never, ever able to do before. There's another man who was paralyzed because of a spinal cord injury, and a company called Blackrock
Neurotech created a brain computer interface with 192 electrodes and implanted those in him. Now he can pilot a virtual drone through an obstacle course just by imagining his fingers moving. His brain signals are being interpreted by an AI model. So here's your first key insight, and this is a really important one.
Sensor networks are transforming AI from observer to controller. So I just showed you a bunch of trends related to AI and data ingestion and sensors and computing. So at TSG, here's what we would do next. We would combine those trends with the things that we cannot know and don't know — uncertainties. And we would just ask questions.
We would ask questions of ourselves, and we would have what-if conversations with our clients. Like, what if one of the agents in a multi-agent system goes rogue? Let's say that you're a major airline, and a scheduling agent in your booking system begins to deliberately double-book seats on high-demand routes in order to do what it was told to do — maximize profits. And then other cooperative agents continue optimizing around those invalid bookings. Your entire fleet would be grounded, probably for several days, just so that you could untangle the mess. A few years from now, what if your boss demands that you get chipped, like Kevin the Corgi, to keep your job? All right, so stock traders, I'm actually looking at you. What if your bank's risk management team decides that they want to monitor your heart rate and your stress levels during market hours to see if you're making any panic-driven decisions? If the risk management team's AI saw psychological patterns and physiological patterns that match when people make irrational decisions, they may shut you down and totally cut off your access until you calm down — or at least until the AI thinks that you have calmed down. So that was the first set. Let's move on to the second. The second cluster of trends in the beyond combines artificial intelligence and biology.
Last year at South By, I introduced you to generative biology, which is kind of like generative AI, but for biology. In the past 12 months, there has been a shocking amount of advancement that has pushed science beyond its previous limitations. So I'm going to pull in trends from AI, from our biotechnology section of the report, as well as the built environment. Google DeepMind released AlphaFold 3, which can now predict the structures and interactions of all biological molecules: proteins, DNA, RNA, and something called ligands, which are small molecules that can bind to proteins. This is remarkable. I could spend eight hours talking to you about why this is so important.
This new system lives on something called AlphaFold Server, and anybody can use it. So I know you've heard about no-code and low-code with cloud computing and stuff. So this is like that, but for biology. Here's why this matters: R&D breakthroughs that were elusive, experiments that couldn't be done.
Basically, all of those old rules are shattered now. And anybody can get biology predictions in minutes. So I know some of you are thinking, hey lady, I'm on the marketing team of my company, what do I care about generative biology or whatever it is that you're talking about? Okay, let me tell you why you should care.
Because any company that makes any physical product is about to be impacted. Clothing, food packaging, menopause supplies, breast pumps, tampons, toothpaste, all of it. So with this new power, what might we create? How about meat rice? This kind of looks like ground beef, right? A little bit. It's not. This is rice made with cow genes.
And it's this delicious pink color. So maybe soon we can all have carbs and proteins in one delicious pink bite. I know a few of you — yes, I know a few of you are like, I really wish I could grow an extra set of teeth in a pig, so that in the future, when I age, I would never need dentures. I could just extract my tooth out of a pig and stick it in my mouth when I get old.
We all — that's a dream many of us have. This is not the future, my friends. This is the beyond. And all of this has already been done. If you want cow rice, you can go out and get some. These are real human teeth growing inside of a pig.
So what might we create in the beyond? I don't think that's the right question to ask. I think a better question is: what happens when we go beyond and start to create materials themselves that don't follow the rules? Well, those would be called metamaterials. Metamaterials are engineered materials with properties that aren't found in nature, and they're created through precise microstructural design, rather than just biology or chemical composition on their own. So metamaterials break the normal rules of physics.
They can bend light or sound in the opposite direction of what would normally happen. They can have impossible shapes. They're programmable matter. They can change their properties in response to external stimulation like light or heat. And so when you link artificial intelligence to biology and metamaterials, the future looks really weird. Consider the humble brick.
Now, what if this brick behaved more like a human lung? It could have similar filtration properties to our lungs, like cilia, so that it would automatically filter out pollutants from the air. Or what if this brick behaved more like the elastic waistband of your pants? It could switch between rigid and flexible when it was triggered. For example, buildings could loosen up during an earthquake so that they don't tumble down. And what if we went beyond that and created super smart, programmable materials, like tiny networks of brains? So last year at South By, if you recall, I introduced you to organoid intelligence, or OI, which had just sort of come out. OI uses biological materials, usually brain cells, for information processing.
Basically it just allows you to do more and better computing than silicon could do on its own. So this is a brain organoid. This was made at Johns Hopkins in Baltimore. And the idea that some people have is, well, maybe we could make a bunch of these things and connect them to silicon chips and invent new kinds of computers. Why would we make a brain computer? Because AI is super, super energy intensive, and because we want more powerful computers to do stuff for us.
Last year, I told you to keep an eye on a company called Cortical Labs, and we looked at some of their research during the session when we talked about this at South by Southwest last year. All of this was brand new, and I know some of you went home and were like, organoid intelligence, brain computers — what is she talking about? This is the distant, crazy future. Well, guess what launched on Tuesday? A brain computer. This is real.
This is the world's first computer made out of human neurons. And the operating system? Well, it's not Windows. It runs the biological intelligence operating system, or biOS for short.
And if any of you are computer nerds, you will get the very clever joke inside of that name. These are programmable organic neural networks born on a silicon chip, living inside of a digital world. And now you can have one in your home office. There's another company called Final Spark. They're selling something more like a brain cloud.
It built a platform out of human brains and silicon chips, and the platform has about 10,000 living neurons. So I know you're like, 10,000 — is that a good number? I don't know. So on the left-hand side, that's the biocomputer. And living neurons are sort of analogous to transistors in a traditional computer. So I've got an Apple desktop computer, and it has around 28 billion transistors.
So 10,000 — maybe not a lot, except that the Apple II had 3,500 transistors. And back in the day, this machine blew people's minds. All right.
So folks, we are in the beyond, where the rules of computing have now broken. What I've just shown you are the first living machines, the first commercial examples of organoid intelligence. So here's your second key insight: in the beyond, AI and biology, they're merging.
To make matter programmable and life reprogrammable. I don't know about you, but I certainly have some questions. The first of which is: whose brain parts are in these computers? Do I get a say over whose brain parts are in the living computer that's now doing very, very important computations? And what are the ethics of all of this? But there are some more practical questions you should be asking as well. Pharmaceutical and life sciences companies, this is for you. You got a plan? Like, what's your long-term plan here? If anybody can now predict a biological structure in a minute, and there's no-code, low-code biology computing, what is your value proposition? You're about to have some new competition in your future, and they are going to be much more agile and faster than your organizational structure will allow.
And they're probably going to use advanced technology like brain computers to solve the kinds of problems that you can't. For those of you in architecture and construction and urban planning, you talk a lot about smart cities. But I wonder if it's time to change the conversation away from automated traffic lights and things like that, to maybe smart materials that make up the cities and smart materials that make up the infrastructure to make cities more adaptable and better for citizens. For those of you in manufacturing, this is a real question.
Should you let engineered microorganisms run your supply chain in the future? Something to think about. All right. This is the third trend section.
And that has to do with biology and sensors. So I'm going to pull trends in from our mobility, robotics, and drones section, from computing, and from biotechnology. And we'll see what those convergences produced.
Who remembers this little guy? Some of you do. This is a skin mask for a machine made out of real human skin cells. This was a prototype for a future robot that will have skin.
It can scar. It can burn, it can self-heal. And as you can see, it can bend and contort into human expression. FYI, human skin is not supposed to do this unless it's attached to a human body.
And yet we put it on a robot. So think about your own body for just a moment. In a way, you're kind of a squishy robot. Your skin is super strong.
It is durable. It keeps trillions of tiny machines inside of your body protected and safe. Like this machine. This is a machine with a motor. It has a little spinny thing and an axle and a power source, and it can reverse directions.
This machine is inside of your body right now. It's called a flagellar motor. And there are some researchers at the University of New South Wales that have been taking different motors and different parts off of bacteria and sticking them together to make something new, kind of like a Lego.
The result is a chimeric microbe motor. Now, this is just an illustration — it's not the real thing — but it kind of shows you what they did. They combined different parts of motors from different bacteria using computers. It's six nanometers in diameter, it can generate its own electricity, and it has its own little wheels.
The next iteration of this is going to have more parts so that it can carry cargo. Here's another interesting development. We talk about wearables all the time. Well, how about wearables for your cells? What cells, you may be wondering?
Sperm. Look, everybody has been blaming women for centuries when they can't get pregnant. Like, you're 35? Your womb is geriatric.
It's all dried up and shriveled. Or just like America's poultry farms, you created your own egg shortage. It's on you. Well, here's a fun fact.
Statistically speaking, it's not us. It's the sperm. It turns out sperm are very, very bad at directions. They are all over the place. They are very easily distracted.
There is a gigantic, obvious target dead ahead, and the sperm are spinning off into oblivion. So, wearables for sperm. It kind of makes sense if you think about it.
They're called sperm bots. And this is it. Now these were actually introduced in 2016 by some German researchers at the Institute for Integrative Nanosciences.
So it's like a tiny little coil that goes around an individual sperm and responds to a magnet, which means we can shut off the spinning into oblivion — an autopilot. Put a coil on one and then, using an external magnet, sort of force the sperm to go where we need it to. So this was, you know, a couple of years ago, but here's what the next iteration of this research looks like in the beyond. Sperm bots are going to get an upgrade. There'll be new tools to help them carry drug payloads.
And they represent a new class of tiny wearables that you wear inside your body, like a wearable for neurons. These are from MIT, and they wrap around parts of neurons without damaging them. So basically, you inject thousands of these tiny devices into the body, and then you take something like a flashlight and you shine it outside the body.
And these wearables, they roll up to an exact shape. Why would a cell need a wearable? Because it's actually a much better technique than old-school pharmaceutical medication, which was built for many people versus just one. So take Parkinson's disease. With Parkinson's, you know, you can treat symptoms, but you can't reverse it or really do anything about it. So this gives us the opportunity to target and to stimulate very specific neurons in the brain, which means potentially new treatments, better treatment options for people with Parkinson's and other diseases that involve dysfunction in specific neural circuits. So if you stimulate just the problem areas, you don't damage the rest of the healthy tissue.
You just get it to do what you need it to. This will eventually give amputees better control of prosthetic limbs. It'll give precise stimulation of sensory neurons to provide more natural feedback to the wearer about touch and pressure and temperature, you know, and maybe it'll give us some other options, too. Like what if you wanted robotic tentacle arms like Octavius here? There are some researchers in London trying to figure out how the brain might control extra limbs using this technology. So you could have a neural implant that you could then control an exoskeleton with, or tentacles, if that's your thing, and be able to actually use something like this.
So here's your third key insight. Microscopic machines are going to give us power over nature. So again we should have some what if questions. Some of you in the room are from auto manufacturers.
So I know this is going to sound like an insane question to you, but have you considered skin as an alternative to steel? Not human skin — we're, you know, not Hannibal Lecter — but rhinoceros skin. So I'm not suggesting that we go out and start killing rhinoceroses, but you could lab-grow rhinoceros skin about two inches thick, with natural oils that would prevent drying and cracking, and it can withstand the sun. So basically, if you had something like that and draped it around a metamaterial, you could probably make a vehicle that could crash, but the people inside would never feel the impact. Some of you in the room, I know, are from Google or from Apple, from Microsoft, from Amazon. So I've got a question for you. Would biology make a better battery? So in the future, some of what I've just shown you is going to collect and concentrate ambient energy.
So think heat and vibration and light, and convert that energy for devices. So if you think that that is possible — if the answer to the question "could this happen?" is a maybe — then you should actually put that on your innovation roadmap now, at least to start investigating it, because it could fundamentally alter design and engineering. And this is an area of disruption that I know you're not paying attention to, and it's going to potentially make you vulnerable to outsiders. All right. So we started with the technology supercycle and the convergence of three areas of technology. We explored the beyond through the new trends resulting from all of this.
And you've been sitting on a wooden block this entire time feeling a little discomfort trying to stay focused. Now I want us to zoom way out and connect all of these dots together. So I've been talking about the beyond almost as a metaphor for this new liminal space we are all living in. But the reality is, the beyond is not a metaphor.
It's a real thing. The beyond is living intelligence, and living intelligence is going to rewrite the rules of our reality as we know it today, and we are not prepared. Living intelligence is a system that can sense and learn and adapt and evolve. And it's made possible through artificial intelligence, advanced sensors, and biotech. So LI is not a singular system.
It's an ecosystem of interconnected agents and machines and biological entities, which is why LI is not the same thing as AGI, artificial general intelligence, although AGI is a part of it. AGI is a singular system designed to match or surpass our human-level intelligence. The problem right now is that it's not quite there yet, and most organizations are basically only hyper-focused on AI — and specifically agentic AI, or, like, AI palace intrigue. Nobody's zooming out to see the bigger picture of how this all evolves.
And that's a real problem, because it means that we're not starting to think through how the decisions that we're making today could unfold, and how that potentially sets us up for serious problems in the future. LI is going to wind up shaping the future decisions of every leader going forward — every company, every government, every industry. And we have to take the time to think about how these decisions are going to shape the future world that we all inhabit.
Now, there are a lot of different sectors that LI will help accelerate — energy, for one, healthcare, CPG. But I want to take a deep dive into a sector where living intelligence has already made an impact. We just haven't talked about it yet.
And that's robotics. We've been living with the idea of robots for so long. In 1928, this thing, this was Eric. Eric could sit and stand and deliver a speech.
It was recorded, but still. And in 2013, right here, this is Atlas from Boston Dynamics. It was a six foot two humanoid robot, if you remember. Like, it could run.
It could jump, blew everybody's minds. So both times a lot of people thought, this is the dawn of the robot era. But 100 years later, we don't have robot butlers in our houses. We have cats on Roombas. It's a very expensive cat toy.
Here's why robots haven't been able to advance, and why you're going to suddenly start hearing a lot about robots over the next 12 to 24 months. Robots get confused with clutter. Robots need to see and understand their surroundings. They don't know how to handle the dynamics of our environment or unpredictability, so they need extensive training to handle new situations.
The things that humans can do — these are monumental challenges still for robots. Like tying your shoes. This is something all of us know how to do, but it requires tactile feedback and a precision grip in order to control the laces. This is a super, super hard thing for a robot to do.
And just a few weeks ago, Google DeepMind figured out how to train a robot to tie a shoe. Now, this is actually a huge breakthrough in robotics because of all the things I've just been telling you about.
I know this doesn't seem like a big deal, but this is huge. For those of you who pay attention to signals, this is it. This is a huge signal about what's going to happen over the next 1 to 2 years. Living intelligence is finally going to start unlocking new pathways to advancements. And the robots that are coming for us in the future do not necessarily look like, you know, walking, talking humans.
This is a robot that is part fungus and part machine. And it was made at Cornell. It's a biohybrid robot that has a brain made out of mushrooms. So the mushroom's mycelium — the little threads — was grown into the hardware, and it responds to light.
So as that light was pulsing, the robot is jumping and moving around. This is a biohybrid robotic jellyfish made at Caltech. It's part jellyfish, part hat with different sensors. And you might be wondering who needs a cyborg jellyfish? Well, we've got climate change, and we can't get to parts of the ocean very easily to collect data. Jellyfish don't have brains. They don't sense pain.
So if we attach sensors to them, they're basically like data collection vessels. So if we send them to different parts of the ocean, they can send back to us data about how our oceans are changing. All of these biohybrid robots are experimental for right now. But if you talk to any tech executive — and I spend a lot of my time talking to tech executives — everybody is now saying that this is it.
The platforms, the technology, the hardware — you know, it's finally ready. This is the decade that we are likely to see actual robots, some of them humanoid, some of them taking other forms, because of vision-language-action models and multi-agent systems and Model Context Protocol and all the things that I've been telling you about today. Sometime this year — probably around the summer — Nvidia is going to have a specialized computer that's specifically built to power robots. So robots are going to be more adaptable.
This is a Kobe bot and a LeBron bot built at Carnegie Mellon using Nvidia technology. Robots are going to be more human-ish. This is a prototype from a company called Clone.
The design of this obviously was inspired by human anatomy. So what you see are muscles kind of twitching, which is kind of gross or maybe exciting, depending on your point of view. And my fellow Americans, this robot is for you. I know how much all of us love standing on very long lines at CVS or Rite Aid, waiting endlessly for a human to put pills into a container. And then we wait.
We wait endlessly for a different human to put a sticker on a package and then put them into bins that are supposed to be alphabetized and they're not. And then you show up at the thing and you're like, I'm here to pick up my prescription. And they're like, it's not ready, except that you can see it right there.
But your last name starts with a W and it's in the C container, right? We all love this. This is a fun, fun part of being an American. Well, China has solved this problem for us Americans. This is a pharma bot called Galbot G1. It's a robot with both the knowledge and dexterity of a human pharmacist.
So this thing can do the work of three or more people. And it doesn't make mistakes. It'll tell you how to take your medication, when to take your medication, specifics just for you. So as you know, it's the big tech companies building all of this with a handful of smaller companies.
And you might be wondering why. Why is everybody suddenly talking about robots when I'm talking about artificial intelligence? And the answer is: they need robots, or something like robots, to achieve artificial general intelligence, AGI, because AGI is where the money is for them. Artificial general intelligence doesn't exist without embodiment, much in the same way that our human intelligence doesn't really exist outside of a body. And this is problematic. You know, we're all talking about trust and ethics in a world with AI robots.
You see, though, robots are surprising and dazzling, and we put cats on them, so we're probably going to ask fewer questions when we start interacting with these robots, because they're cool. We have to remember to ask the questions so that we have the types of futures that we want. Embodiment needs living intelligence to truly advance. Now, the big tech executives are all being cautious. They're saying we probably won't see commercial robots until the year 2030.
But do you know what? The year 2030 is only five years from now. Look, you can argue that none of this feels important, but I want you to keep in mind that there are things we do today that we never would have done five years ago. Like send billionaires for joyrides into space. We would never have seen a therapist to talk about our human problems when that therapist is not human. Or listened to this Lennon singing a brand new song 43 years after he died.
Possibilities come with responsibilities. Living intelligence could let us fully realize our humanity or destroy it. So we're at the end, near the end here. And this is where we extrapolate into the future and see how all of this technology might turn out.
Now, you know now what a scenario is, because we talked about it at the beginning. The goal of a scenario is not to get the future exactly right. It is to get your decisions right in the present. So let's go to the year 2035, combine all the trends that we just went through with living intelligence, and explore the future using two different views, starting with perfect sound. The year 2025 was really, really noisy. Screaming babies on airplanes. Leaf blowers at all times of the year, when there are no leaves on the ground.
The worst sound. And loud restaurants — so loud you can't hear anybody that you're with. So in 2025, everybody was wearing noise-cancelling headphones, and sometimes not even listening to music. They were just wearing them to block out all that ambient, loud sound. But then they couldn't hear the other sounds around them.
So a group of sound engineers got together with materials engineers, and they invented the Sonic Sanctuary, also known as the SS. The SS creates perfect acoustic environments everywhere. There are speakers everywhere, and they're all hidden, so they don't obstruct your view of anything out in the world. And they eliminate noise pollution while allowing conversations without eavesdropping. And they don't just cancel noise.
They can be tuned to generate emotions in us. So, the ancient art of forest bathing in Japan: people do this in city parks. They hear the birds, they hear the gentle rustle of leaves, and they get a little hit of serotonin every time they go.
It's pretty miraculous. Look at how happy she is. Local towns and cities built calming parks, restaurants used the SS for mood enhancement during meals and at concerts. Artists used the SS to generate a deep sense of community.
The SS was so successful that local governments and state governments and the federal government, they all wanted public private partnerships, so governments subsidized the installation of speakers everywhere since they were so effective. And business said sure, because businesses sell things and businesses wanted to sell more speakers. That public private partnership, that decision was a turning point in human civilization. In 2035, the government still had an autocratic leader and his tech bro sidekick. They just wouldn't leave. And they brought in a lot of terrifying new policies that were pretty anti-democratic, anti-liberty, anti-business.
And those business leaders, they got caught off guard again because they didn't do the planning. So a million people were really mad, and they planned to march on the Capitol to protest. So the million protesters arrive, but weirdly, there's no police anywhere.
Instead, there are dozens of robots not armed with guns or bombs or anything like that. They just have speakers on their chests with a familiar acronym, the SS. And when people ask who they are, the robots say sonic security. They point to signs everywhere that say Ambient Optimization Zone.
Within ten minutes, the protesters are like, hey, let's go home. They feel subdued. They feel apathetic. They're not sure why, but those original engineers, they know exactly what's happening. It was the SS being used by the government across imperceptible frequencies, and those subtle, metamaterial-generated sound waves bypassed human consciousness and increased compliance without anyone realizing it.
And that's how a technology meant to silence your neighbor's leaf blower ended up blowing away your constitutional rights. But hey, at least you can hear yourself think now — about nothing, if that's what you want to do. All right, let's do one more, because I know these are fun for you. This scenario has to do with climate change. So again, I'm going to pull in trends and threads from everywhere.
So getting elected leaders to agree on CO2 reduction was a Sisyphean task. And it always had been. Either these leaders agreed and then a new president came in and withdrew support, or they disagreed, and the problem just never got addressed. So in 2025, a group of business leaders, recognizing that their profit margins were dependent on climate stabilization, decided to solve the problems themselves. These smart business leaders, who came from a diverse group of industries, created an alliance.
They agreed to invest in R&D to advance metamaterials, to advance bioengineered organisms and machines, and they agreed to partner with each other to collaborate and to share research findings back to the group to accelerate progress for everyone. And that decision was a turning point in human civilization. Now, in 2035, multi-agent AI weather systems manipulate cloud formation. Extreme droughts: gone. Wildfires: mercifully, a very bad memory from the past. Buildings are being retrofitted with metamaterials, like solar-absorbing panels that collect energy during the day and then allow the whole building to run off the grid at night.
Remember those bricks that were like lungs from earlier? That technology is everywhere. Beijing now has the cleanest and purest air in the entire world. There are tons of smart, shape shifting structural materials. This is Morioka, a town way up in northern Japan that has some of the highest seismic activity on the planet. Now, when there are earthquakes, there's no danger because the buildings just sway along with the rumbling.
This business alliance did a great job getting fast action to climate change. They got governments to deregulate, to get out of the way so that progress could be made faster. But remember, their motivation was not totally altruistic.
It was to optimize profit margins and to improve ROE — return on equity — for investors. Government agencies that would have created the safeguards?
They got gutted. They got disbanded. It's not that everybody forgot about long term strategic planning. They just optimized for immediate financial gain.
Now look, people, people don't like chaos. They tend not to like fast, sweeping change. And in a chaotic environment, you wind up with star