>> Hello, hello, hello, South by. Thank you very much for having me. Um, someone much cooler than me last night said that you're meant to say "South by," not "South by Southwest." So I'm trying it out. Let me know if you think I can get away with it.
Um, so, yeah. My name is Niall Firth, and I'm executive editor at MIT Technology Review, where I manage our crack team of superstar editors and reporters.
Um, honestly, it's the best job in the world, because what it means is I pretty much get, like, a front row seat on how the future is being made and what the future is going to look like. Um, and the reason for me being here today is I'm hoping to give you a little bit of an insight into what that's like. So our newsroom covers emerging technologies: things like AI, obviously, climate and energy solutions, biotech, robotics, computing, space. But today I'm here to share with you one of our biggest editorial products of the year, which is our ten breakthrough technologies.
We do it every single year, and it's the technologies that we think are going to have the biggest impact on our lives in the years to come. The idea, again, is to give you a little bit of a glimpse at what's coming around the corner so you can be better prepared. So let's get started.
Firstly, I'd like to start by telling you a bit about who we are. So MIT Technology Review, and you might be able to guess this from the name, is owned by MIT and is based on MIT's campus in Cambridge, Massachusetts. We started almost 125 years ago, which makes us one of the world's oldest continuously published magazines. We're editorially independent, so MIT does not have any say in what we cover. While our main base is in Cambridge, we also have hubs in London.
And, surprise surprise, that's where I'm from. We're also in San Francisco and New York, and we have editors and reporters scattered around a lot of the US as well. So we're in the business of helping explain the future. That's what we do.
Turning facts into understanding. Um, we tell you what's going to happen now so you can plan for what's going to happen next. So to do this, our journalists go out every day, dig up scoops, spot trends, speak to researchers, go through scientific reports and find out the stuff that you really need to know about. So we skip the incremental things and we don't care about hype. Instead, we just try to focus on technologies that are going to have a really high impact.
And that word, impact, is something I'm going to come back to quite a few times while I'm chatting. We're always asking ourselves what really matters in the long run. So, for example, reporters will pitch me and the editors stories saying, you know, I'd like to write a story about this thing. And the questions we ask are: okay, why do I care? Why does it matter? Why is it important? Those are the questions that kind of cover all of our reporting.
It's also not just about what's cool, although we do cover a lot of cool stuff. We also think a lot about what it will take for a new technology to become commercially viable. What will make it become a reality.
We also think about the positive and negative effects that a technology might have. So we publish every day to technologyreview.com, and I recommend you all go and check it out. We also have a bunch of free newsletters that cover things like AI, climate, and biotech.
We also have a daily newsletter called The Download and I'll show you a link to that later on. And we also have a really cool print issue. Where is the picture? Oh yeah. Very cool.
Look at that cover. Amazing. So this is our newest issue.
They come out every two months and they're themed. This one came out last week. It's all about relationships.
So it's packed with some really cool features: relationships between employees and employers, to do with surveillance and automation. Um, a really great package on the people who are having relationships now with chatbots, and what that means for their lives and how they're using it in different ways. Also relationships with our past, looking at ancient DNA and what it tells us about not just our ancestors but also future humans as well. And then finally, there's a really thought-provoking piece in this issue all about frozen embryos. There are hundreds of thousands of frozen embryos kept on ice around the world, and no one can decide what to do with them. So the vast majority of our stuff is online, apart from the magazine.
So we also do big stories telling you about the future. Look at that one up on the right: What is AI? That's one of our big swings, which really tries to put your arms around everything that is AI and explain why it's actually such a difficult concept to nail down, and why no one actually really agrees about what it is.
We also do just some rip-roaring yarns that whizz you along with fantastic storytelling, like that story about the guy who's helping troops on the front line in Ukraine. We put a huge amount of attention into getting things right, and I'm giving you this preamble because I want to set up that you can trust us before we get into the list itself. All our stories go through at least three layers of editing, and we also have fact checkers on top of that, to make sure things are buttoned up right. And I think that's really important, particularly now. We have a great explainer series, MIT Technology Review Explains, and a What's Next series, which gives you a little look around the corner on future topics.
We also have our narrated podcast, where you can listen to our stories. Anyway, I'm giving you all of this preamble to say, yeah, it's a pretty cool place to work, and it's also an amazing place if you're just really, really insatiably nosy like I am. And I bet loads of you are too.
Anyway, I'm going to show you some details on how to read more of our stuff towards the end. So let's get on to the main event: ten Breakthrough Technologies. So this started in 2001. The editors then (it was before I joined) decided to make a list of what they called educated predictions about the future: which technologies are going to have the most profound impact on the economy and on people's lives in the future? For that list, they picked ten technologies.
So 24 years later, we're still doing it in pretty much the same way. Our annual list is our way of giving our audience some informed speculation about where we think the developments happening now will lead in the future. So here's a bit of background on the process of how we do this. We ask every reporter and editor in the newsroom to bring ideas for which technologies they think should make next year's list, based on their reporting and their insights.
The criteria we use are basically: is this a technology that's going to have a major effect on the world? That effect doesn't have to be good, by the way; we also look at technologies that can have a big impact in a negative way. We also ask, is there a reason to feature it this year in particular? Why not last year? Why not next year? Is there something we can point to, a factor that's going to really accelerate its development, or a reason that it's now going to become much more widely adopted, for example? So we take everyone's ideas, we chuck them all in a massive Google doc, and we organize them by category.
And then we have a lot of discussions and debates about which technologies should make it and why. So I guess that's kind of a euphemism for: we have some arguments about how this is going to work, but it's fun. That's what's fun about being in a newsroom. Then everyone votes for their favorites, and we whittle down the list. That knocks off a few of the things which didn't get any votes apart from the person who submitted them.
And then we try to pick a real mix of things: not just things which are coming out of the lab, but also consumer stuff, different fields, some really deep tech as well. The whole process takes quite a few months. And I've got to say, our track record is pretty good. Over the years, the annual exercise we go through has helped us spot a lot of major developments before they were widely known elsewhere. Here are a few examples.
So our very first list in 2001 had data mining on it, a practice which, as I'm sure you all know by now, for better or worse has underpinned online advertising and personalization across the web for decades. In 2004, we saw advances in natural language processing, we saw where it was going, and we put universal translation on the list.
Today, obviously, translation apps are widely available. Google, Apple, ChatGPT and elsewhere all use this technology, and we all use it, I'm sure you guys do as well, a lot, and it's almost taken for granted now. Then in 2009, we put intelligent software assistants on the list, really prompted by Siri, which at that time was being developed by a company of the same name and was later acquired by Apple. And obviously "intelligent software assistants" feels like another name for what we're going to get to later, which is AI agents.
And then five years ago, we put satellite megaconstellations on the list. Since then, the number of satellites orbiting Earth has nearly tripled, and two thirds of those are part of SpaceX's Starlink constellation. Amazing.
For reaching parts of the world which don't have internet, that is. Not so great for astronomers. But I'll be the first to tell you we don't always get it right. So in the spirit of radical transparency for you guys at South By: we got some things wrong over the years. Or maybe we picked the right direction, but the wrong application. So in 2005, we put airborne networks on the list.
This was a new kind of air traffic control, which would do away with ground-based radar systems and control towers and allow planes to transmit and receive information while they're in the air and use software to navigate around each other. Here we are 20 years later, and we still have air traffic controllers, thank goodness. Then in 2010, we really thought social TV was going to be a big thing. These were platforms that allowed you to watch a show with your friends from different locations and chat about it all in the same interface. It never really took off.
I mean, it still exists. There are still some watch party apps out there, but it never really took off in the way that we envisaged. But I would suggest that maybe the rise of things like Twitter and second-screening while you're watching a live sporting event or a season finale of something is sort of along the same lines.
In 2015 we went all in on Magic Leap, which you may remember was a company that was developing an augmented reality headset, and the promo videos were amazing. I remember just being blown away, like, oh my God, this is going to be the future. It got a lot of buzz. Magic Leap still exists, but really, companies like Snap, Microsoft, and Apple have taken that technology much further.
Apple was on our list last year for the Vision Pro; the jury, I think, is still out. And lastly, in 2016, a company called SolarCity was building one of the largest plants for solar panels in the world in New York and made all kinds of big promises and got loads of government money, and we put that Gigafactory, as it was called, on the list. But the company ended up racking up billions in debt and was later acquired by Tesla, which used the factory to make solar roof panels, which also didn't really pan out. Okay.
So we've been over what this list is all about, and some of our hits and misses over the years. So now you're probably thinking, come on, let's just get on with it. Let's get to the list. This is why I'm here. I'd just like to tease you as long as possible before we actually get to it. All right.
This is MIT Technology Review's ten breakthrough technologies. All right, starting with space. So the night sky here in Austin is absolutely awesome. But when you glance up at it later tonight, possibly after a couple of beers and a taco, consider this.
The particles inside everything you can see make up only 5% of what's out there in the universe. Dark energy and dark matter make up the rest. But what seems crazy to me is that we still don't really know what they are. We just know they're there. So the Vera Rubin Observatory is a massive new telescope in Chile.
It will explore this question and other cool cosmic unknowns. It's run by the US National Science Foundation. It's been completed now, and it's due to take its first images in the middle of this year if everything goes according to plan. The telescope it houses contains the largest digital camera ever built. It's about the size of a small car.
It's named after Vera Rubin, an astronomer who observed that stars on the distant edges of many galaxies were behaving in an unusual way. They were moving much faster than expected. In fact, they were moving so fast that you'd expect them to escape their own galaxies and fling off into space. But they weren't doing that, and we didn't know why; some kind of powerful gravitational force was holding them in place. That was the best evidence to date of the existence of dark matter.
So the observatory's first mission is to complete a ten-year study called the Legacy Survey of Space and Time. What a ridiculously cool name for a study. Every night it's going to take photos of the southern sky, capturing the same sections over and over and over, and covering the whole sky over the course of a few nights. Astronomers will use that information to make the most detailed 3D map ever of the Milky Way, and also create a time-lapse video of the night sky.
So in this picture I've got up, you see the back of the camera, the giant digital camera I mentioned, and it has six filters designed to pick up many different aspects of the electromagnetic spectrum.
So doing this 3D map is going to help them discover billions of new stars and galaxies. And getting back to Vera Rubin's work, perhaps by seeing how all these galaxies are arranged in what's known as the cosmic web, we'll have a better idea of how dark matter helped shape it and the influence it's having. As you can see, just ridiculously cool engineering. I'm a real sucker for massive bits of metal in labs.
Um, all right. Next, by the way, look at that art. Um, next, we have generative AI search. This is probably, I reckon, the one that's most familiar to all of you here.
And we picked this one because, obviously, all of us search the internet every single day, and this represents the biggest change in decades in how we navigate the internet and find information online. Traditional search companies like Google and Microsoft, which makes Bing, of course, are incorporating large language models into their search engines, which means you can type in a query and, in Google's case, what's called an AI Overview pops up. So I'm sure you've all seen this by now. You've all been using them.
So it's a large language model's summary of a topic, or an answer to your question, and it was generated for you just in the moment. It's not programmed or scripted.
It's actually really cool. The idea is it's conversational in style. And really, for me, it makes a lot more sense than the whole Google-fu thing you have to do to work out how to get your answer out of Google and then decide which of the links to click on to get your answer. If it can just be given to you in conversational style, surely that's much better. Well, obviously there are caveats to this, but it's happening not just in search engines but also in things like ChatGPT and apps and on people's phones.
And it could be a step towards useful AI agents as well, which will go out onto the internet and complete tasks on your behalf. But remember, these models don't actually know if that overview is wrong or right. They're just assembling it. They suck up the entirety of the internet and then guess what word should come next. Often they're right.
Sometimes they're wrong. You might remember the eating-rocks and glue-on-your-pizza incidents from AI Overviews. That's, we think, because Google's AI Overview had sucked up lots of Reddit joke entries about what you should put on pizza.
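To make that "guess the next word" idea a bit more concrete, here's a toy sketch of my own; it has nothing to do with how Google actually builds AI Overviews, it just shows a model picking the statistically most likely next word from whatever text it was fed, with no notion of whether the result is true.

```python
# Toy illustration only (not how any production system works): a tiny bigram
# "model" that guesses the next word purely from counts in its training text.
from collections import Counter, defaultdict

corpus = (
    "cheese sticks to pizza . glue sticks to paper . "
    "cheese melts on pizza . put cheese on pizza"
).split()

# Count which word tends to follow which.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def guess_next(word):
    """Return the most frequent next word seen in training, with no notion of truth."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(guess_next("sticks"))  # "to": the only word that ever follows "sticks" here
print(guess_next("cheese"))  # whichever continuation the counts happen to favour
```

If joke posts about glue and pizza end up in the training text, the same mechanism will happily reproduce them, which is the hedged point of the sketch.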
So next, small language models. So you've probably all heard about large language models, right? Of course you have. They're giant things trained on the entirety of the internet using trillions of parameters. These are the values that get calculated during training that the model then uses to produce its results.
And they do jaw dropping stuff. They've taken over the world, but they're big, unwieldy, impossible to understand. And let's be honest, an environmental nightmare. Now, though, there's growing interest in something called small language models, which I think you can probably guess what they are. But the idea is that they don't have as much training data or as many parameters, and they still do some pretty awesome things.
The key thing is they're focused. So these small language models might be more suited to the specific purpose or task you're trying to get them to do. So for example, if you're a law firm that writes lots of contracts, you don't need a large language model that's been trained on the entirety of every Disney script ever.
You probably just need a language model that really knows about contracts and is really well versed in that. The other thing about them is that because they're much smaller, they're much more private. They don't have to be censored. You don't have to send queries off to another company or to a big public cloud. You can host them on your own private server, and you can train them on your company's own data and fine-tune them on your data as well.
So even though they maybe can't chat about everything in the world like chatbots can, they can really get to the heart of the thing you're trying to get them to do. They're more efficient and less expensive, and they produce fewer emissions, which is becoming increasingly important. They can also work offline, and will soon be small enough to run on your phone.
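As a rough illustration of that "host it yourself" point, here's a minimal sketch using the open-source Hugging Face transformers library. The model name below is just an example of a small open-weights model, not a recommendation from the talk; any small model you can download would do, and a real deployment would add a chat template and fine-tuning on your own data.

```python
# Minimal sketch: run a small open-weights language model on your own machine,
# so no query ever leaves your server. The model id is an example placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # example small model, roughly 0.5B parameters
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Summarize the key obligations in a standard NDA in two sentences."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```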
And some of them are doing just as well as big ones on the benchmark tests that we test large language models on. So OpenAI offers GPT-4o and GPT-4o mini, Google DeepMind has Gemini Ultra and Gemini Nano, and Anthropic's Claude 3 comes in three flavors: outsized Opus, mid-sized Sonnet, and tiny Haiku. I've asked my colleague Will Douglas Heaven, our senior AI editor, to tell us more.
Over to you, Will. >> Hi everyone. I'm Will Douglas Heaven, and I'm the senior editor for AI here at MIT Technology Review. Now, if you've been following the giant leaps in AI over the last few years, you may have noticed that bigger typically means better. Step changes in the size of large language models were matched by step changes in performance.
Look at OpenAI, the company that first really doubled down on this pattern. Its large language model GPT-2 wasn't bad; of course, it was easier to impress people way back in 2019. But GPT-3, just a year later, was the true breakthrough, the wow moment that cemented OpenAI's dominance and laid the foundations for today's boom, a boom that's been sustained across the industry by bigger and bigger models. How big are we talking? GPT-3 was more than 100 times larger than GPT-2, and GPT-4 is thought to be ten times larger than GPT-3. But what do we mean by bigger? Size is measured by the number of parameters that the model has.
Think of these as the math values inside a model that get adjusted when it's trained, and that then determine its behavior. More parameters means more fine-grained pattern matching, helping the model capture more information from the data it learns from, across billions and billions of documents. More parameters also means more computation to train and then run these models. The correlation between size and performance became known as the scaling law. But there's a twist. The law has limits.
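For reference, the scaling laws Will is describing are usually written as a power law, roughly of the form reported by Kaplan et al. in 2020; the exponent below is their published figure, not a number from this talk.

```latex
% Rough form of the neural scaling law (after Kaplan et al., 2020): test loss L
% falls as a power law in parameter count N, so each constant-factor improvement
% in loss requires multiplying the model size by a large factor.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076
```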
The marginal gains between top-end models seem to be trailing off. When OpenAI released its latest model in the GPT lineup, GPT-4.5, in February, it claimed it was its biggest model yet, but it landed with an all-round "meh." It's good, of course it's good, but is it good enough to justify the eye-watering costs, both financial and environmental?
Many people are not too convinced, and so the last year has seen an explosion of smaller and smaller models that can do more with less. Large language models have become a tool for almost any job, but we don't need Thor's hammer for every little nail. Regular hammers will do just fine. We're finding that for certain tasks, smaller models that are trained on more focused datasets can now perform just as well as larger ones, if not better.
That's a boon for businesses eager to deploy AI in a handful of specific ways. You don't need the entire internet in your model if you're making the same kind of request again and again. Most big tech firms now boast fun-sized versions of their flagship models for this purpose, and a growing number of smaller companies offer small models as well. It's also not too hard to downsize a large model. One of the takeaways that got lost in the DeepSeek hype is that it took the large model it had trained and then squeezed many of its capabilities into a handful of smaller ones by training them on its output, a trick known as distillation.
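To make the distillation trick concrete, here's a minimal sketch of the idea: a small "student" network is trained to match a big "teacher" network's output probabilities. The tiny models and random data below are dummies purely for illustration, not anyone's actual setup.

```python
# Minimal sketch of knowledge distillation: train a small "student" to mimic a
# large "teacher" model's output distribution. Models and data are dummies.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, hidden_big, hidden_small = 1000, 512, 64
teacher = nn.Sequential(nn.Embedding(vocab, hidden_big), nn.Flatten(), nn.Linear(hidden_big, vocab))
student = nn.Sequential(nn.Embedding(vocab, hidden_small), nn.Flatten(), nn.Linear(hidden_small, vocab))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution so the student sees more signal

for step in range(100):
    tokens = torch.randint(0, vocab, (32, 1))       # dummy batch of single tokens
    with torch.no_grad():
        teacher_logits = teacher(tokens)            # "soft labels" from the big model
    student_logits = student(tokens)
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```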
This is happening more and more. Small models are more efficient, making them quicker to train and run. That's good news for anyone wanting a more affordable AI, and it could be good news for the climate, too, because smaller models work with a fraction of the computing power required by their giant cousins, and so they burn less energy. These small models also travel well. They can run right in our pockets without needing to send requests to the cloud, which is good for privacy and slow data connections. Even car companies are building small models into their vehicles, as customers come to expect that chatbot experience wherever they are and whatever they're doing. Small is the next big thing.
Thank you, Will. Um, yes.
As you may have guessed, he's also British. They're not all British, I swear. But Will is a bit of a superstar.
He's our most senior AI reporter, and he wrote that piece, "What is AI?", which is a really, really epic look at what even AI is to begin with. So thank you, Will. As he says, small is the next big thing. Or at least that is our educated prediction on the future.
So next we have a fun one, a completely different angle: cattle burping remedies. Look at that art. Isn't that cool? Floating cows. Anyway, I'll preface this one by telling you cows are evil. No.
Actually, cows are a nightmare, because when cows burp, and they do that a lot, they burp up methane.
It's a byproduct of their digestion, and it's also a very potent greenhouse gas. Methane is about 80 times as bad as carbon dioxide at trapping heat in the atmosphere over the short term. So we're looking at techniques to prevent the cows from giving off so much methane in the first place. Cow burps are also one of the largest sources of greenhouse gas emissions from agriculture.
Livestock emissions contribute up to 20% of the world's total climate pollution. Because of this, there's been a lot of effort over the years to get people to eat less meat. One of the things we had on our list a few years ago was plant-based meats, and then sort of cellular agriculture, lab-grown meat, to try and get people to reduce the amount of beef they eat. But honestly, it's probably not very realistic. The world is getting richer, and as the world gets richer, beef consumption rises.
So our next choice on the list are supplements that actually tackle the burping problem at its core. Oh, nice. So these supplements can be mixed into cows food or water. One called Bovaer is available in 55 countries already. And it inhibits an enzyme in the cow's gut that makes methane from hydrogen and carbon dioxide during digestion. It reduces emissions by about 30% in dairy cows.
Several other startups are working on different technologies which use a type of red seaweed that they hope will be even more effective, maybe even a 50 to 80% reduction in methane production. So I've asked my colleague James Temple, not a Brit, our senior climate editor, to tell us more. Tell us more, James. >> Thanks, Niall. Hello, South By.
I'm James Temple, I'm the climate editor at MIT Technology Review. There are still some lingering questions about how effective these supplements could turn out to be in terms of cutting livestock emissions, whether they can be cost-effectively administered in certain situations like grazing cattle, where farmers may only see the animals every few days or weeks, and whether or not more of them will ultimately earn the necessary regulatory approvals. But now that these cattle supplements have begun moving into the market, for me the big question is: are cattle ranchers and dairy farmers really going to widely adopt them? And that is most likely going to come down to whether or not they really pay off in some way. The promise among some of the companies is that they'll deliver a secondary benefit.
They should make digestion more efficient, which means the cattle will produce more meat and milk, which means higher sales for farmers. The company marketing Bovaer in the US, the supplement that has been approved in the most countries so far, highlights that farmers can also earn and sell carbon credits for using the products, so that should perhaps cover the costs or even provide, you know, an additional revenue stream.
I also wonder if there's going to be a niche of consumers who will pay a little more for a low methane milk and meat or climate friendly cheese, which would enable companies to differentiate their products in some ways and be able to charge a bit of a premium for them. Finally, there are some ways that we can see public policy pushing boundaries. For instance, last year Denmark announced plans to enact a tax on cattle emissions. But farmers, not surprisingly, are not a huge fan of that kind of an approach.
And my suspicion is we're not going to see a lot of laws like that passed in countries where cattle farming is a big, powerful industry. So at least in the immediate years ahead, the real test will be whether or not these pills prove to pay off for the farmers that give them a try. Thank you, James. Um, I believe James is based in Reno, and he's our senior climate editor. And yeah, maybe climate-friendly meat could be on the plate.
So, confession: the next one on our list is robotaxis, and, living in the UK, I've never actually been in one. I shouldn't feel too bad about that, though; most people around the world haven't been in one yet. It's still pretty niche. But we think they're coming, really coming. For years, companies such as Waymo have been out gathering data and testing their services.
It's now possible to hail a ride in dozens of cities around the world fully autonomously, with no safety driver. We think this is going to become increasingly common as the competition heats up, and many companies are expanding into new areas. Chinese firms including AutoX, WeRide, and Pony.ai plan to move into Singapore, the Middle East, and the US. And Waymo, as I'm sure many of you know, has just announced a partnership with Uber to provide robotaxis here in Austin.
From this week. I have to try and get in one before I leave. Um, Zoox, which is Amazon's project, will launch in Vegas.
And of course, Elon Musk has promised that Tesla will begin testing robotaxis in California and Texas this year. Companies are also getting smarter about training these vehicles, and I think that's what's interesting, because if you remember back in the days of autonomous vehicle research, like 10 or 15 years ago, companies had to map every single bit of every single road with various sensors and lasers. And if you took that car away from the roads it had been trained on and put it on a new road, it would be in real trouble. So now what car companies, and particularly a few new startups, are doing is using generative AI to make synthetic training data for scenarios that don't happen that often, so the cars can get up to speed faster on what to do in those cases.
Others are going even further and developing general-purpose robotaxi AI models that can be trained in one city and then transferred over to a new city and work straight away, without having to start learning that city from scratch. And that's the sort of thing that's going to turbocharge this area. I've asked our editor in chief, Matt Honan, to tell us about what to look for next. Matt, over to you. >> Hello from San Francisco.
My name is Matt Honan, and I'm the editor in chief at MIT Technology Review in San Francisco. These days, robo taxis like this Waymo I'm riding in right now are a common sight. It works just like an Uber. You summon a car with an app on your phone. It rolls up. You hop in, it takes you where you want to go.
It's price competitive with Uber, and I've found they often come faster than a human driver. In fact, these cars are now one of San Francisco's big tourist attractions. It's kind of weird. The company announced last week that it's at 200,000 paid trips per week. That's twice as many as there were last year.
That's not why we put them on the list. Robotaxis aren't just taking over San Francisco. They are taking over the world.
There are autonomous vehicles on the roads in San Francisco, LA, Phoenix, Las Vegas and Austin right where you are. And they're coming soon to cities like Atlanta and Miami. More to the point, they're in Beijing, Shanghai, Singapore and Abu Dhabi. In fact, it's a much bigger business in China than it is here in the United States. Baidu's Apollo Go, for example, just one of several robotaxi operators there announced it had completed more than 9 million passenger rides as of January this year.
And as you see, we just successfully dodged that bus. All right. Well, I got to go. I still get a little nervous in these things. But have fun there in Austin.
All right. Thank you, Matt. Imagine if that bus had just clipped him, how good that would have been for the show. No commitment to the bit. All right.
Um, all right. That was robotaxis. So next on our list:
Cleaner jet fuel. So again, this might be something you're aware of, but there are reasons why we think this is going to take off and become big. Take off: I did not intend that pun, but I'll keep it in. The aviation sector is another area, a bit like agriculture, where getting rid of the emissions is really tough. In addition to biofuels, which have been around for a while, there are new ways of creating alternative jet fuels that are now starting to scale up in production and, perhaps more importantly, new policies coming into play that will require that they are used.
Initially, these fuels will be blended with petroleum to reduce emissions in flights, but eventually we could have planes that run on 100% sustainable aviation fuels. Generally speaking, planes can do that without having to be substantially refitted, which is an important part of how you get the technology out into the real world. So there's a company called Twelve that's building a plant in Washington state. It has a reactor that takes in carbon dioxide captured from the air, adds water, and then uses electricity and a metal catalyst to split them both. The company can then recombine those elements and form jet fuel.
Another company called LanzaJet opened its first commercial-scale production facility in Georgia about a year ago to turn ethanol from corn or sugar into jet fuel, and LanzaJet can also use municipal solid waste or industrial waste gas to make its fuel. So this progress is great to see. It's really cool technology. But the reason we put it on the list this year is not only because these new production facilities are opening up, but also because of new policy taking effect in Europe that will really start to increase demand.
And I think this is one of those aspects where a big bloc like Europe puts a new policy in place and it has ripple effects elsewhere, because airlines don't want to have to do two things at once. It's an important part of the puzzle. So starting this year, sustainable aviation fuel has to make up at least 2% of all fuel used at airports in the EU and the UK. That gets ramped up gradually until it reaches 70% in 2050
in the EU. So the vast majority of jet fuel is going to have to switch over from fossil fuels to these low-emission alternatives by then. And this is one of those ones, sorry, just going back to it, this is one of those ones where, when it was brought to the pitching discussion in our newsroom, I was quite surprised.
I thought it was much further behind where it actually is, and I was really stunned by the compelling case that our climate reporter made for this one. All right.
Fast-learning robots. So I love robots. Like, I'm really into robots.
I've been into robots since I became a tech journalist more than 20 years ago, and I just love going to see robots do funny stuff. But roboticists have often struggled to get robots to actually do much that's useful. Probably the closest any of us have been to a robot is either in an industrial setting or with a Roomba. I've seen a lot of robot demos in my life as a tech journalist, and they always show a robot doing something really specific really, really well, like flipping a burger or making me a cocktail. Everyone's like, wow, and you video it.
But then if you took that robot and you asked it to flip something else, it'd be like, ah, I don't know. That's not my thing. I just do, I just do burgers.
So we're yet to see a robot that can be adapted really easily to different environments and different tasks. That's changing. We're getting closer now because of stuff that's coming out of the generative AI and large language model revolution: new data sets and new training techniques that have come from that area. The shift it's enabling is that instead of teaching robots what to do by carefully programming and scripting them, it's letting them learn how to do things on their own. What robots need to know is how to move safely in the world and how to get stuff done.
They need to move safely without bashing into things or swiping people. And to teach them about the environment, you need more than just the written text you would need for a large language model producing AI overviews, for example. You need spatial data, audio, 3D maps. One of the ways you teach them is to get a human, cover them in sensors, and get them to do the task a few times, and the robot watches and learns from that how to do it. So roboticists are combining all these different types of data into new data sets designed just for robots, and then using those special data sets to train robot-specific AI models that are tailored to them and their needs, and that can be adapted depending on which new environments you drop them into.
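Here's a minimal sketch of that "watch a human do it" approach, often called behavior cloning: a small network learns to map sensor observations to actions from recorded demonstrations. The dimensions and data below are placeholders for illustration, not any real robot's.

```python
# Minimal behavior-cloning sketch: learn a mapping from observations to actions
# using recorded human demonstrations. Shapes and data are made-up placeholders.
import torch
import torch.nn as nn

obs_dim, action_dim = 32, 7          # e.g. joint angles plus camera features -> a 7-DoF arm command
policy = nn.Sequential(
    nn.Linear(obs_dim, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Pretend these came from a person wearing sensors, doing the task a few times.
demo_obs = torch.randn(500, obs_dim)
demo_actions = torch.randn(500, action_dim)

for epoch in range(20):
    predicted = policy(demo_obs)
    loss = nn.functional.mse_loss(predicted, demo_actions)  # imitate the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```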
This cute little picture is an early prototype of a robotic butler from a company called Prosper, which can perform household tasks like cleaning a kitchen or throwing out the garbage. And it's got a nice sort of Jetsons-y retro vibe to it as well. So with these techniques, the idea is that it gets easier to train a robot on just a few examples of one task, have it pick up that task right away, and then even transfer it to a completely new task.
And it can also navigate terrain without having studied that particular terrain. This is very similar to the robotaxi idea: it's been really difficult to train robots to move around. If you remember, or go and watch, the videos of the DARPA challenge from a few years ago, you'll see really cool-looking robots start walking up a slight hill and then just fall over, or try to open a door and collapse. That kind of stuff.
It's because you had to train them to do every single movement individually. That may no longer be the case soon. So we don't have robot butlers yet, but we might do soon. Okay, next. We've only got three more.
Starting with long-acting HIV prevention meds. So more than 1 million people around the world still become infected with HIV every year. You probably know that there are preventative drugs out there, known as PrEP, which can be pills or injections, and these work really well. But the problem is you have to take them every day, or you have to take them ahead of possibly being exposed to the virus, which people don't always know they're going to do.
And also people can forget to take their tablets. Um, the new drug is a much more convenient and accessible way for people to protect themselves against HIV. It's called lenacapavir, and I wrote that out phonetically for myself here because I really struggle to say it. You get it as an injection once every six months.
And it's made by a company called Gilead. It targets a structure within the virus that holds the virus's genetic material, and either prevents the virus from replicating and entering a human cell, or it disables copies of the virus that have already been made and stops them from spreading further. Last year, Gilead announced results from a trial of more than 5,000 women and girls in Uganda and South Africa. It showed the drug was 100% effective in preventing HIV infection. I'd like to ask my colleague Jess Hamzelou, our senior biotech reporter, to tell you more. And yes, you will note she's also British.
Jess, over to you. >> Thanks, Niall. I'm Jess. I cover health and biotech at MIT Technology Review.
Yes, this was extremely exciting for a few reasons. First of all, this drug only needs to be injected once every six months. That's a huge deal, considering that one of the biggest challenges with HIV prevention medicines is making sure that people remember to take them. The most exciting thing, though, is just how effective it is. In that Uganda and South Africa trial, none of the women and girls who got the injections got HIV. The drug was 100% effective.
You don't ever hear "100% effective" in medicine, so that result was jaw-dropping. The big challenge will be making sure that once it's approved, this drug is accessible for those who need it. At the moment, approved forms of lenacapavir in the US can cost over $40,000 per person per year. That's just not going to work in low- and middle-income countries. To its credit, Gilead has already signed licensing agreements with six manufacturers who will make generic versions of the drug and sell it in 120 low-resource countries that have a high incidence of HIV.
Critics have pointed out that none of these manufacturers are based in sub-Saharan Africa, where the clinical trials were conducted, so countries with the highest HIV burden might still end up having to import the drug. And the manufacturers aren't allowed to sell the generic drugs in middle income countries, which probably won't be able to afford those US level prices. Still, it's very exciting, but it's not the end of the story. We still need to make sure that when it's approved, this drug is accessible to everyone who needs it.
Brilliant. Thanks, Jess. And again, Jess is an absolutely sensational reporter, and I definitely recommend you go and check out her work.
Um, next up we have green steel. Actually, before I get onto green steel, I just want to say I'm hoping we can have a bit of time for questions at the end, so if you can submit them via the Slido app, I'll try and get a couple in before the time runs out. Um, so next up, green steel. So I'm going to be honest with you: obviously, greener ways to make steel or concrete aren't as cool as robots and generative AI search and robotaxis, and I'm not going to try and pretend they are. But coming back to that word, impact: impact is what we're interested in.
And green steel has the potential to have a gigantic impact on emissions. Here's why. The world produces a lot of steel, about 1.9 billion tonnes
a year, and we use it to build all sorts of things. But it's an incredibly dirty industrial process. For every ton of steel you make, you create an extra two tons of carbon dioxide, adding up to nearly 10% of carbon emissions worldwide. And the global steel market is expected to grow by about 30% by 2050, so imagine what scaling up those emissions is going to do.
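As a back-of-the-envelope check on that "nearly 10%" figure: the roughly 37 billion tonnes of annual global CO2 emissions used below is my outside assumption for the sake of the arithmetic, not a number from the talk.

```python
# Rough sanity check on the "nearly 10%" claim, using round numbers.
# The ~37 Gt global CO2 figure is an outside assumption, not from the talk.
steel_per_year_gt = 1.9          # billion tonnes of steel produced annually
co2_per_tonne_steel = 2.0        # tonnes of CO2 per tonne of steel (the talk's figure)
global_co2_gt = 37.0             # rough annual global CO2 emissions, billion tonnes

steel_co2_gt = steel_per_year_gt * co2_per_tonne_steel
print(f"Steelmaking CO2: ~{steel_co2_gt:.1f} Gt, "
      f"about {steel_co2_gt / global_co2_gt:.0%} of the global total")
# -> roughly 3.8 Gt, about 10% of the total
```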
So now there's a company in Sweden called Stegra, which is building a new plant that will produce steel without creating any emissions. It's the first of its kind in the world, and the company hopes to start producing steel there next year. So how does it work? A key step in the steelmaking process is converting iron ore into iron, and this has historically required loads of coal. Stegra instead uses hydrogen gas to pull the oxygen out of iron ore to make iron, and it makes that hydrogen through electrolysis, using electricity produced from wind and hydropower to split water.
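In simplified chemical terms, and this is a textbook-level sketch rather than Stegra's exact process flow, the two key reactions look like this:

```latex
% Simplified chemistry of hydrogen-based direct reduction (a sketch, not the
% company's exact process). Renewable electricity splits water; the hydrogen
% then strips oxygen from iron ore, leaving water vapour instead of CO2.
2\,\mathrm{H_2O} \;\xrightarrow{\text{electrolysis}}\; 2\,\mathrm{H_2} + \mathrm{O_2}
\qquad
\mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \;\longrightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
```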
There are other approaches, too. There's a startup called Boston Metal, which spun out of MIT. It runs an electrical current through the iron ore to break up its bonds and separate out pure iron, and it also plans to use renewable energy to do that. All right. Last but not least, we have stem cell therapies that work.
And those two extra words on the end are pretty important. So we've been covering stem cell research for decades, and it's incredibly cool science.
The cells are really powerful because, just with a bit of tweaking in the lab, scientists can use them to make any other type of tissue in our bodies. It's pretty magical stuff. So we've known about them for quite a while. But scientists have also found ways to make them in the lab without needing embryos, which obviously had ethical issues. You no longer need to use those; you can just create stem cells from other bits of human tissue, skin cells, that kind of thing.
So really cool science. But all this knowledge has, for the most part, failed to translate into anything useful or practical. Lots of promise. I remember covering them like ten years ago, and we were all excited about what it might mean.
And it took a long time to get to where it is now. But the reason it's on the list this time is that we've got some really solid evidence that new treatments for type 1 diabetes and epilepsy based on stem cells are showing real promise, and could someday either help millions of people get off insulin for good or dramatically reduce the number of seizures that people have. So for an example, one of our reporters spoke with a patient who was in a trial by a company called Neurona Therapeutics.
He was having epileptic seizures every day. In the trial, he received neurons that were specifically made in a lab to fix the electrical signaling in his brain that was causing those seizures. He now has seizures about once a week, so they haven't gone completely, but it's a dramatic change in quality of life. People with type 1 diabetes have to monitor their blood sugar levels daily.
Now, there have been a few trials by different companies showing that scientists can take cells from someone, turn them into stem cells, and then turn them into the kind of cell that produces insulin in the pancreas. When researchers put those newly transformed cells back into patients, some of those patients started producing insulin on their own for the very first time. And it's really cool.
It's really dramatic. It's still quite early, but we're excited by it. Obviously, the treatments could have huge benefits for the patients, but this also marked for us a really important moment in stem cell research that we wanted to recognize on the list this year. So that's it. There's the list.
The 2025 ten Breakthrough Technologies from MIT Technology Review. And I feel like it'd be interesting to look back in five or ten years and see where we were right, where we were wrong, and where we got close but didn't quite catch what the actual impact was going to be. Perhaps there are some things there that you're familiar with, and others that hopefully are new to you. And of course, if you don't agree with what's on the list, that's totally fine.
I am prepared to fight people to the death if that's the case. No, sorry, I love to hear dissenting views. Everyone knows that about me. Um, even if you do agree with our picks, remember, these are our best educated projections about the future based on what we know right now.
Things can change. In fact, in our experience, we've found they almost certainly will. So in that spirit, I'd like to show you three technologies that were nominated for the list but didn't quite make the cut, and I'll tell you why we didn't pick them this time around. So first up, and this one was close:
We have virtual power plants, digital systems that utilities use to manage power across the grid. They allow utility companies to connect things like solar cells and wind turbines with grid batteries and EVs during times of peak electricity usage. Software linked to smart meters may one day automatically power someone's home by drawing electricity from a fully charged EV sitting in a neighbor's garage, reducing demand on the grid. The same software will then also work out how to compensate that EV owner accordingly.
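Here's a toy sketch of the kind of decision that software would be making; every threshold, price, and name in it is made up purely for illustration, not drawn from any real utility's system.

```python
# Toy virtual-power-plant decision: during a price peak, serve a home from a
# neighbour's fully charged EV instead of the grid, and log what the owner is
# owed. All thresholds, prices, and field names are invented for illustration.
def dispatch(home_demand_kw, grid_price_per_kwh, ev_charge_kwh, ev_reserve_kwh=20.0,
             peak_price_threshold=0.30, compensation_per_kwh=0.25):
    """Decide whether to serve the next hour of demand from the EV or the grid."""
    spare_kwh = ev_charge_kwh - ev_reserve_kwh      # never drain below the owner's reserve
    if grid_price_per_kwh >= peak_price_threshold and spare_kwh >= home_demand_kw:
        drawn = home_demand_kw                       # one hour at this load, in kWh
        return {"source": "neighbour_ev", "kwh": drawn,
                "owner_credit": round(drawn * compensation_per_kwh, 2)}
    return {"source": "grid", "kwh": home_demand_kw, "owner_credit": 0.0}

print(dispatch(home_demand_kw=3.0, grid_price_per_kwh=0.42, ev_charge_kwh=60.0))  # peak: use the EV
print(dispatch(home_demand_kw=3.0, grid_price_per_kwh=0.12, ev_charge_kwh=60.0))  # off-peak: use the grid
```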
So it's starting to happen. In the US, there are an estimated 500 virtual power plants already, providing up to 60 GW of capacity. That's about as much total capacity as the US grid will add this year. There are also systems up and running in China, Japan, Croatia, and Taiwan, and there's a project in Baltimore, Maryland, to use F-150 trucks to power homes, managed by a virtual power plant.
There's other stuff happening in China and Taiwan, as I said. But we just didn't have enough examples to feel like now was really the time to get it on the list. Also, you need a lot more virtual power plants before you really make a big difference to the grid. So we killed it.
Next up, AI agents. Now, this has been a controversial one, I'll tell you. These are all the rage right now. The idea is that AI agents will schedule your meetings and book your trips, and carry out all kinds of tasks for you online, perhaps interacting and coordinating with other people's agents along the way.
The grand vision for AI agents is kind of like a human PA: it'll help you book your vacation, but also maybe remember that you prefer swanky hotels, so it will only show you things which are rated four stars and above on TripAdvisor, and then go ahead and book the one that you like for you. It'll also suggest flights that work better with your calendar, plan the itinerary for your trip based on the weather forecast, and make a list of things to pack. It might even send your itinerary to your friends who live in the area, suggest you meet up, and add meetup times to your calendar.
Isn't that lovely? Nothing more personal than somebody's agent arranging a meet-up when they're in town. One vision for agents is that they are multimodal, meaning they can process language, audio, and video. And I think they're really cool. It's a really cool idea.
I'm really into it, and I think it's where we're headed if we want to get AI to be really useful and really interact with our lives. But the truth is, it's kind of early here.
OpenAI and Anthropic have both released early versions of AI agents, but they're very, very simple, very basic. They don't do all the things that we hope AI agents will be able to do down the road. And there are also loads of challenges around reliability. You know, if you're going to let an AI agent go off onto the internet and do stuff for you and book things for you, you're going to have to really trust that it knows exactly what to do. And we're not there yet with trusting AI.
There are also interoperability challenges: how is it going to interact with your Word doc, and your browser, and then also send you an email? There are lots of technical challenges. And we thought maybe it's just a little bit too early to get on the list. But this was a particularly heated discussion, which I did enjoy. In the end, though, we decided not this year.
If I come back here again next year, let's see if it's made it. And then the third one I wanted to mention that came up in our discussions was air taxis: electric vertical takeoff and landing aircraft, or eVTOLs. It's got a very sort of "dude, where's my jetpack" vibe about it, hasn't it? It's like, that's never going to happen. But they're kind of like electric helicopters or supersized drones. They're not personal vehicles like you might remember from sci-fi.
Instead, the idea is a pilot will take passengers from the suburbs downtown, or take you from the airport, in a little flying taxi. And there's some real progress here: interest from major players in the US and in China, and manufacturers doing test flights.
A Chinese company has received the first license there to mass-produce its vehicles and start taking customer orders, and other countries like South Korea and the UAE have put policies in place that will allow these things to fly. In the US, a company called Archer received its FAA certification last year to begin commercial operations, but none of these companies have actually started commercial operations yet, so we thought: not just yet for us. Ooh. All right.
So there's the 2025 list. You've heard about the things that didn't make it. I have one more thing to share with you before we wrap up, and I'm going to need your help on this one. This is a bit of audience interaction.
Every year when we put out the list, we also publish a poll where we ask our audience, which includes you lovely people: what would you like to see featured as the 11th breakthrough? It's kind of like a People's Choice award for emerging technologies.
So you have four options. If you go to that QR code, get your phone out, scan it, and vote for your favorite one. It'll take you to the landing page of the package, and if you scroll to the bottom or click on the number 11, that'll get you right to the poll. I'm going to quickly run through and describe them to you, and I'm going to try and do it in as neutral a way as possible so as not to influence any potential votes. I do not want to be accused of vote rigging. All right.
In the poll we have brain-computer interfaces. There are several companies working on these that are now starting to do really cool stuff with trials. Obviously, Neuralink is the most famous, for obvious reasons, and has placed its implant in a few volunteers, with plans for more.
Another company called Synchron is a bit further along and is working on a large clinical trial. Both companies are trying to use a chip in someone's brain to help people navigate the internet, communicate, and play games. There are some issues, in that the chip can move over time and the signal can degrade, but it's really cool and it's advanced a lot in the last few years. Next, methane-detecting satellites.
It's not just cows that are a nightmare. There are also leaks from oil and gas facilities. A number of projects are using satellites to spot the leaks, and they're making the data public and open source, so companies can use it to fix their equipment, or, more likely, advocacy groups can use it to hold them accountable and name and shame. Next, hyper-realistic deepfakes: it's suddenly become a lot easier to make a virtual version of yourself.
These digital avatars look like you, talk like you, and adopt your mannerisms. Um, there's a company called Synthesia that we've covered, and our reporter had an avatar made of herself. And honestly, for the people who know her really well, you could tell it wasn't quite her.
But for people who didn't really know her, it was basically impossible to tell it wasn't. And that's kind of amazing, but also a little bit scary.
Um, the big use is e-commerce. In China, it's already being used a lot. If you do e-commerce online, where you're selling things on the web, you can do your nine hours online and then, for the rest of the day, set your avatar up to take over seamlessly from you, and no one will be any the wiser. There are also, obviously, potential negative impacts in terms of using these avatars for non-consensual, revenge-type material as well.
The other thing about them is that they've suddenly become a lot easier to make. You don't have to go to a big studio and be scanned on camera like our reporter did; you can now pretty much do it with your laptop and a decent camera. And then finally, continuous glucose monitors. Not a new technology per se, but they've suddenly become much more widely available in the last year or so.
They are life-changing devices for people with diabetes. They monitor blood sugar 24/7 without a finger prick, and if you connect them with an insulin pump, you can get insulin delivered automatically whenever it detects that you need it. So do cast your vote and let us know what you're interested in. So that's all
I've got for you today. I've got a few minutes for questions, so start thinking and putting them through. Um, just, sorry, can I just go back one slide? Oh, no.
Okay. All right, so I'll go straight. Oh, yeah. Sorry. Just something I wanted to mention to you all. Um.
Oh, yeah. This is basically the slide where we have the QR code to subscribe to and follow our work. Um, we have The Download, which is our daily newsletter that covers everything you need to know in technology that day, and which is free and fantastic. And I'm honestly not just saying that: we are one of the best tech outlets out there, and we are trusted to get things right, not hype stuff, and give you a look at what's happening in the future. So do have a look at that and subscribe if you can.
All right. On to the questions. Thank you.
Oh, thank you. Okay. We've got a question: what technology on the list do you think will have the biggest impact on the most people? That's an interesting way of deciding how you work this out, because really, probably the HIV prevention meds will have the biggest impact across the widest number of people.
But there are a lot of potential obstacles, as Jess mentioned, around cost for that. So that one, I think, reaches the biggest number of people. But in terms of day-to-day lives, generative AI search and how it changes the way we interact with the internet feels like the most immediate one. I didn't mention it earlier when I was talking about it, but as well as the potential issues around getting things right or wrong, there are also issues with what it means for publishers like us. We're moving towards an era where there's such a thing as zero-click search. So if someone wants to know what the ten Breakthrough Technologies from MIT Technology Review are, they might search for that in Google and get an AI overview that gives them the information they need without actually having to come and visit us, which is obviously problematic in terms of how we pay for our journalism. Oh, sorry.
There's a question from Dakota: based on the top ten list from last year, what would you change? Oh, that's a good one. Um, well, I wouldn't say I would change it. There's also another question.
Yeah, I wouldn't say I would change it, but the Apple Vision Pro was on our list last year, and it's incredibly cool technology. We really thought, with Apple's track record, it was going to take off and change the way we interact with stuff in the augmented reality world, and it hasn't really done that. It's still really cool, it still works really well, but you don't see people around with them on all the time. And I think maybe it was too early, or maybe a future version of the Vision Pro is going to be the one that makes our list. Um, have you ever had to pull an item off the list before it publishes? Um, uh, I don't think we have. What we often do is end up with like 2 or 3 things right at the end which are vying for the 10th spot