Year in Review 2024


[ MUSIC ] CHRISTINA WARREN: Welcome to "Behind the Tech." I'm your co-host, Christina Warren, Senior Developer Advocate at GitHub. KEVIN SCOTT: And I'm Kevin Scott. CHRISTINA WARREN: And it is that time of year once again for our "Year in Review" episode. [ MUSIC ] So, obviously this past year, AI was a huge part of the conversations. Probably every single episode we did either touched on AI or dug in really deep into what's happening in AI right now, or some of the really exciting possibilities it's opening up now that we've seen it out in the world for a year and a half, maybe two years, almost two years.

But maybe the most exciting thing that we got to look at with AI this year was how it connected to creativity, especially in music and art. KEVIN SCOTT: I mean, this is of very high personal interest to me. I think there is this beautiful thread of creativity that goes through what we all do, whether we're an engineer, an artist, a teacher, just sort of pick your profession. Everybody has to be creative in their own unique way, and it's been really exciting for me to see how this new tool that we're building, Generative AI, and the infrastructure that surrounds it, is being used in such phenomenal ways to support that creative impulse that everybody has.

Just last month, we talked to Refik Anadol, who is creating these incredible, huge art installations using data sets of everything from weather patterns to heartbeats. [ MUSIC ] REFIK ANADOL: Digital artists like myself, the movement that's enjoying computation, games, and creativity with AI and so forth, we were the blind spots for the museums and galleries, super honestly. It's quantifiable for many reasons, because it's a new movement.

It's a new reaction to the field. The art world has a concrete base, like centuries-old techniques and tools, and it's this revolution. It's this renaissance happening right now that we are all living in. But as an artist, I was naturally saying this all the time: I witnessed the birth of the Internet, Web 1, Web 2, Web 3, AI, quantum, cloud. I mean, naturally, that's what I reflect back as a form of imagination. So I think there was this rejection for a while, I would say, or blind spot.

But then, the more we created works that bring people together -- like our project at the Walt Disney Concert Hall, which was also the Back-in-Time MFA project, thanks really to Lili Cheng, who was also one of the early mentors and advisors -- suddenly, that became a reality that brought 100,000 people together. That became a tangible idea.

Or the Casa Batlló project, in Gaudí's building in Barcelona, 65,000 people. Or at MoMA, our show received 3 million people, the largest audience in MoMA's history, with a 38-minute average viewing time. And all of that, I think, made these tangible results. And as soon as they become an experience in life, a memory, I think that really bound the context from a dream to reality. CHRISTINA WARREN: Refik's work is so incredible.

It lets you see what it can mean for AI and humans to truly collaborate. Really, the AI can process and interpret a volume of data that's just way beyond a human's capacity. But the human, I think as Refik puts it, is then getting to use that data as a paintbrush, which I think is really beautiful. KEVIN SCOTT: Yeah, and this is where I get so excited about this moment right now that we're living through, seeing this explosion of creativity that's happening with people like Refik, and I just love to hear the ways that artists are thinking about this, just fascinated by the creative process and, you know, obviously the engineering process of these things. Not everybody might know this about me, but I'm a huge classical piano nerd, and a couple of episodes ago, I got to really dig into this with a fellow piano nerd, Ben Laude. I want to talk about the instrument and the art as, like, you know, two different things, because I think when you think about something like AI, if you think about AI as both instrument and art together, you just start getting confused.

But if you think about AI as an instrument for an artist to use to go make something, it becomes altogether interesting. And so this morning I was watching Murray Perahia doing a masterclass in 2022 on Chopin's G minor Ballade, which is my favorite piece of music. And one of Murray's performances is my very favorite performance of it. And the part of that ballade that moves me the most is the lead-up to bar 106, when you release all of the tension, that double fortissimo. It's the big dramatic moment in the middle of the piece.

And what he was asking this student to do is like there's this chord in the lead up to 106. And he's like, what does death sound like? And he's like, this is death. And he's like you need to like have this passage, this line that you're playing be foreboding and as if death is chasing you.

And not everybody has that in their mind when they're playing that particular passage. BEN LAUDE: It's also interesting that there's almost no spoilers in classical music. It's almost the reverse. It's knowing what's coming that builds the anticipation and the goosebumps. KEVIN SCOTT: Yes.

BEN LAUDE: And for me, it's knowing a piece really well that makes, if it's a great piece, that makes repeated listenings so meaningful. But, yeah, it's interesting to compare what you're describing to AI, and you would know much more about this than I would. But at least from what I can tell, classical piano and the literature and the art of interpreting it, I mean, these are expressions of human consciousness. KEVIN SCOTT: Yes. BEN LAUDE: Chopin's "First Ballade" is the expression of his organized consciousness, and he's expressing something. And the only way we can agree about the piece is if we just speak in generalities.

Well, it's dramatic. It seems to tell a story, whatever. But the moment you get into details, and you want to talk about how this phrase should be rendered, it's death for Murray Perahia. It's life-affirming for somebody else. It's dark for this interpreter. It's bright.

I mean, it's dry for Glenn Gould. It's wet for Horowitz. There's just suddenly the interpreter's consciousness is then mixed with the composer's, and you get a new cocktail of whatever thing we can't describe. And, yeah, I mean, maybe one day we will be sort of comparing Horowitz and Perahia's "Chopin Ballade" to AI's different version of it, and we can input, well, I want to hear an AI sort of play it, play it with this kind of expression. I don't know.

I mean, I'm not -- you might have a comment on that. Is that coming? Should we be concerned? What do you think? KEVIN SCOTT: No, no. Look, I've had this conversation with people, and I don't think we should be, because I think the point of a thing like classical piano is you have something inside of you that you're trying to express that's difficult or impossible to express any other way, and it's like part of your humanity. It has meaning if you just play it for yourself, and it has a different meaning if you play it for an audience who are going to receive it in probably a different way than you are maybe even intending when you play it. BEN LAUDE: Right. KEVIN SCOTT: I'd heard Murray's performance, but I'd never thought of death until I saw him teach that master class.

So that's not the thing I'm thinking of. It's like, you know, just this incredible emotional response that I get to it that I can't really put words on. And I think that's a beautiful connection that you've made, even though maybe that's not even what he was intending to do. CHRISTINA WARREN: Okay, let's talk a little bit about another one of your passions, which is learning and education. And this is another place where I think AI is kind of breaking the field wide open and potentially really transforming the way we think about it.

And you know, of course, when we talk about AI in education, the first instinct for many folks is to worry about cheating. That's where everybody's mind always goes, but I think so many of our guests this year have really helped bring a different perspective to the conversation. KEVIN SCOTT: Yeah, absolutely. It was super cool that we got to talk with Sal Khan about this.

He's the founder of Khan Academy and definitely a person who's been leading the way in online education for many years now. But I think one of the interesting things about Generative AI in education is some people's knee-jerk reaction to it has been, oh, my God, this thing is bad. Let's get it out of the classrooms. It's just going to help students cheat, and you've got a very different take on it, I think, informed by all of the leverage work that you've been trying to do over the years with the core of Khan Academy. So talk about that a little bit.

SAL KHAN: Yeah, big-picture, even broader than education, technology just amplifies human intent. And if your intent is to be evil, you'll find ways to make the technology evil. If your intent is to be lazy, you'll find ways that technology can empower your laziness. But if you want to learn or if you want to help people learn, there's always ways that technology can be valuable. The same video technology that might have people watch not-so-great stuff, we can also use to teach them.

And so it's all about, how do you mitigate the harms and maximize the benefits? And I tell every well-intentioned person: just checking out and running the other way just means only the bad folks or the lazy folks are going to be using the technology, especially now with these very powerful technologies like Generative AI. So there are obvious things like cheating, and then there are issues sometimes with AI, potentially around bias, errors, hallucinations. What if students want to use it for unproductive ends? They want help making a bomb or something like that, or they want to harm themselves.

So what I told our team at Khan Academy is like, look, those aren't reasons not to work with Generative AI. Those are reasons to just put guardrails around it and turn those into features. Let's make it so the teacher can see what the students are doing if they're under 18. Let's make it so our AI doesn't cheat, but can Socratically nudge you in the right direction. Let's make it so that we can support students in, say, writing an essay -- making the student do the work, but acting as an ethical writing coach. And if the student goes to ChatGPT or someplace else to get their essay written for them and brings it into our system, then our system, when it talks to the teachers, is going to say, well, you know, Kevin and I didn't work on this essay together, and by the way, it's not consistent with his other writing.

We should double-click on whether Kevin really did this work. So I actually think AI can be used to undermine AI cheating itself. Any tool can be used for good or for bad, and that's kind of the big theme of the book. Here's all of the ways it can be used well. Here's all of the fears and risks that people have, but here's how we should mitigate those and actually turn them into benefits.

KEVIN SCOTT: What is your advice for people as they're sort of thinking about maybe not even just AI, but we have an interesting future headed our way because some technologies like AI are developing really, really quickly. You have done a really tremendous job in your career using technology to help yourself. So computer science was your gateway into big Silicon Valley companies and startups and eventually into a hedge fund. Technology is sitting at the center of this nonprofit that you've created, that's having a big impact.

What's your advice to people for the future? SAL KHAN: My advice is, and I write about this in the book, there's people who think, oh, well, calculators exist. Kids don't need to know arithmetic, or computers exist. There's one less thing that you have to learn how to do. The internet exists.

Search exists. You don't have to learn knowledge anymore. And now with AI, people are like, well, do people even need to learn how to write, et cetera? But I always point out, if you look at any of these inflection points of technology, it has accrued the most benefit to the people with the deepest skills. And so I think the answer is this is a reason to double down on, for sure, the traditional skills, the math, the reading, and the writing, but also now augment that so that you learn how to creatively use these tools that can really amplify you, giving you almost godlike powers to do things that would have looked like science fiction even 5, 10 years ago.

And I also write in the book, this isn't a nice-to-have, it's an imperative now, because with the status quo, unfortunately, most people aren't going to be in a position to leverage the AI, because the AI is better than... We're already seeing the AI operating at the 80th percentile on the LSAT. I would be worried about where this is going if I were a 50th-percentile lawyer. Now if I'm a one-percentile lawyer, I know that there are certain things -- yeah, the AI can help me draft a contract, et cetera -- but I have certain expertise. I've fought certain cases.

I know the nuances that no AI can have. You're going to be superpowered. You're going to be able to get the AI to write your contracts. Maybe you'll hire fewer paralegals or whatever, but your expertise is going to be magnified even more, while if all you could do is draft a boilerplate contract, you're going to be in trouble. So I think the job market is broadly going to become kind of bipolar. If you want to be in the knowledge economy, and that's probably where the bulk of the value of AI is going to accrue, you need to upskill even more, and hopefully maybe you can use AI to help you get there -- use Khanmigo, use Khan Academy.

I think there's also -- people shouldn't panic. I think even if you can't be a one-percent lawyer, I also think there's going to be a lot of, let's call it very human work, that as we have a more abundant society, we should have more resources so that we can have more caregivers, more people to fight loneliness, more people to provide help to the sick or to the elderly, whatever. So I think there's actually going to probably be work there, too. CHRISTINA WARREN: And here's maybe an interesting behind-the-scenes tidbit for listeners here.

We actually used Copilot as we were starting to put this "Year in Review" episode together, and it was able to identify some of the high-level themes that came up with our guests this year -- like the ways that educators like Sal Khan talk about using AI. One theme that certainly came up over and over again is this idea of co-intelligence, and it's something that you talked about with Ethan Mollick, who's a professor at the Wharton School. KEVIN SCOTT: Yeah, I really appreciate the way Ethan super clearly talks about this. ETHAN MOLLICK: So at least at the current stage, AI really works like a form of co-intelligence. It is a booster to your activities. It is a threat to some parts of your job, but not the parts you want to do, and it is something that is usable right now. And I think a lot of people and a lot of the books about AI have tended to focus on the future, and especially the sort of scary questions -- you know, are we all saved or all doomed? And I think that is an important conversation, but in some ways the least interesting conversation to have about an AI that's already here.

And it's fascinating because when you talk to people who are using it, they want to talk about how to use it. It feels like the '80s again, right? Like, people want to figure out, what are the tips? They're exchanging information. There's excitement in the air among users. And I think that I wanted to try and bring that conversation to people and give people ways of getting started. And also to realize, like, this is kind of a big deal, right? It's a big deal in lots of ways that we would never have expected AI to be a big deal. And it's a big deal right now.

It out-innovates most innovators. It out-writes most writers. Against elite consultants, it does a really good job. This is weird stuff that is going to have weird effects, and it is accessible, and that's part of why it's going to have such weird effects.

CHRISTINA WARREN: So let's bring this conversation back into the physical world for a moment here. Maybe we're just going through all of your hobbies and obsessions, which we love, Kevin, but the next one is the world of makers on YouTube, and earlier this year, we talked with Xyla Foxlin. KEVIN SCOTT: Oh, my God. I was so excited to talk with Xyla. She makes just the most incredible stuff, and I think she's also obsessed with learning and education and really breaking open the ways we traditionally think about how we learn. XYLA FOXLIN: I think that when I was a kid, it was sort of like you were interested in engineering or you were interested in art.

And it was a little bit gender segregated, but it was also just, like, those were separate categories. And the maker movement has kind of combined the two. I was never good at classroom learning.

I have to physically do something to really understand it, or I have to apply something to a project. And then I'll really understand why or how something works. KEVIN SCOTT: If a kid gets a chance -- whatever brings the chance -- to discover that they are interested in and good enough at something where they just want to go put the work in, like you were talking about before, where you get better and better and better, that virtuous cycle is sort of the most important thing in the world. And the thing that is so unfortunate is how often kids don't even get a chance to get on the trailhead.

Like they don't have your wonderful fifth grade teacher or, you know, like my bad parenting and Grey's Anatomy. And they just don't figure out that, hey, here's the hook. And hard things are hard, but you've just got to be interested enough to go do the work to get good. XYLA FOXLIN: Yeah, and they have to be in, like, a safe enough environment where they can try things and they can fail, and I think most school systems are not like that. KEVIN SCOTT: Yeah. XYLA FOXLIN: And so it takes, like, a family environment or a really special teacher, like you said, to create that environment, and it's just so hard to do en masse.

KEVIN SCOTT: Yeah. Yeah, I guess it is hard with math, because the way that we grade mathematics is there's a right and a wrong answer to a problem. And if you get it wrong, oftentimes you don't get an awful lot of feedback about what to do to get better, and then you just see the bad score at the end. And it's like, okay, well, I'm bad at this, which is -- XYLA FOXLIN: Right, it feels very black and white. Until you get up to calculus or even pre-calc, where now you're having -- or geometry, actually, is a great one.

Where you start -- you can get halfway through proving a theorem, or you can kind of get most of the way there, and then it feels like there's some kind of progress. It's not like a multiplication problem where you either got it right or you got it wrong. KEVIN SCOTT: With math, I get frustrated with how we teach it, because we very frequently introduce mathematical concepts absent any kind of motivation for why the thing is important. And that's not how mathematical concepts are invented. I mean, there are some things in, you know, pure mathematics that get invented just for the sake of the math.

But most math got invented because somebody was trying to solve a problem and they needed a way to model something in the real world or, you know? And we just don't do a good job sharing that with kids early unless you have an exceptional teacher. XYLA FOXLIN: Right, right. But even an exceptional teacher is working within the bounds of the fact that they're teaching math class, and then the students are going to leave class and go to English class and then they're going to leave class and go to, like, science class. And there's only so much, at least in my experience, there's only so much the math and the science teachers can do to make their curriculums match up, especially if you're trying to meet state regulations and state requirements for testing and stuff. Yeah, I think even in engineering school, when you would think, like, the whole curriculum is designed to be applied to the real world, the math was still so separate.

CHRISTINA WARREN: Okay, so while we're still here in the physical world, we also got to talk with someone who is deeply connected with the physical hardware that powers everything that we do in the virtual world. Your conversation with Lisa Su of AMD was fantastic, and it really got into some of those questions about the physical world and hardware versus software. KEVIN SCOTT: Yeah, Lisa was teasing me a little bit about the different paths she and I have taken and this classical, sometimes almost comical tension between hardware and software people. LISA SU: No offense, software is very interesting, but at the time, hardware was much more sexy to me, and I had the opportunity to see how you could build chips and build very -- they weren't the most advanced chips in the world, but to me it was amazing.

It was amazing that you could build some transistors on something the size of a coin. You could look at it in the microscope. You could see. You could measure it on a test system, and that's how I got into hardware and that's how I got into semiconductors, actually. It's so important to see the results of what you're doing, and I love the fact that I can build products that I can touch and feel and, you know, walk into Best Buy and see those products or walk into your data center and see those products.

So that's what I enjoy. KEVIN SCOTT: Yeah, so something that's just so honestly mind-boggling about the moment that we're in right now is how these tools are not just adding a little bit onto our past capabilities, but they're multiplying and transforming what's possible. We talked to Mike Volpi about this. MIKE VOLPI: Humans are, in some sense, the pinnacle of biological technology.

We are, as far as we know, at the top of the pyramid of biological technology. And I felt like a system that would not copy it, but attempt to emulate how humans worked, and did so via a computer system, had to have a really, really bright future. It was sort of like saying the horse is the best instrument we have to move around: if I could only build a mechanical system that mimicked a horse, I could actually improve productivity a lot, and somebody invented a car, right? And in the same way, a human is the best brain system that we know of, and if you try to build -- you know? A car is not a horse. They're very different instruments, but they serve a similar purpose, and that's at least how I think of AI, which is, actually, AI systems don't really work like the human brain.

They sort of do -- loosely -- but they serve the same benefit at the end. And because of the nature of scaling and computing, they can be much bigger. KEVIN SCOTT: What do you think is interesting going forward, either in AI or, you know, anything else that's happening in technology, that people ought to be paying more attention to than they are? MIKE VOLPI: Look, I think there are a couple of things on the AI side that I pay attention to. One is the physical embodiment of AI, which is interesting. The AI we experience through Bing Chat or ChatGPT or Cohere or whatever is a purely digital experience right now, and I think that we are analog beings, and at some level there needs to be a physical experience associated with that of some variety.

So whether it's robotics or it's devices or other things that allow a more physical embodiment of what we perceive to be AI, I think is super interesting. I do pay attention to technologies other than transformers. I think this is an investor bias.

I don't see how investors -- startup investors -- can win in transformers anymore, because of the capital requirements. And I would say, for now, there are no obvious scaling boundaries to transformers, but maybe there might be. And if so, might there be a different approach? Maybe not, but it's my job to explore. KEVIN SCOTT: Yeah, and for listeners who may not be AI people, when Mike says transformers, he's not talking about Optimus Prime. MIKE VOLPI: Oh, yes.

KEVIN SCOTT: This is the prevailing architecture for deep neural networks that is basically driving all of this crazy scale-based progress right now. MIKE VOLPI: Yeah, exactly. I mean, everything that everybody experiences today in AI is largely based on this technology called transformers.

And it has very good scaling characteristics, meaning that, you know, if you throw more computing power at it, it just gets better and better and better. That's not generally true with technology. Technology usually has a plateauing effect.

You can throw more resources at it, but the pace at which it improves flattens over time. KEVIN SCOTT: Yep. MIKE VOLPI: And this particular technology has not shown characteristics of flattening so far. But it also means that the resources required get massive.

So it means a lot of power, a lot of computers, a lot of data centers, all that stuff -- very hard for a small company. KEVIN SCOTT: It's really exciting to start to think about and imagine what all the creative people out there are going to do with this stuff in the next, say, five or ten years. CHRISTINA WARREN: Yeah, and that makes me think of something that Ethan Mollick said; I think he had a really great way of thinking about this. ETHAN MOLLICK: And the model I would think about is the industrial revolution.

And in a way people don't usually think about, which is: steam power came to a lot of factories in England at the same kind of time. The ones that won were not the ones that were like, hey, we could still make pots, but with fewer people. Those companies got destroyed, right? The ones that succeeded were the ones that said, we can now use the same number of people and make 10,000 more pots and ship them all over the world. KEVIN SCOTT: Yeah, absolutely. That's the metaphor we should be thinking about.

And let's bring in one more guest here, Mike Schroepfer. Schrep talks about technology as leverage. MIKE SCHROEPFER: I love the idea of leverage. I mean, technology as leverage. I always say that technology is one of those few things that removes constraints. So many problems in life are like the Economics 101 class you take in high school, where it's like, all right, you have a $100 city budget.

You can either fund the libraries or the police or the fire department, but you can't fund all three fully. A lot of people live in a world every day where our problems are tradeoffs. I can do this, or I can do that.

And technology is one of the only things that's like, oh, hey, it's now half the price. Okay. CHRISTINA WARREN: Okay, so Kevin, if you had to sum up all of these amazing conversations that we've had over the podcast this year and think about this moment in technology, what comes up for you? KEVIN SCOTT: I think it's really clear at this point that we're in a moment of platform shift. It's not just that technology is becoming incrementally faster or more efficient or cheaper.

It's very dramatically changing and breaking apart the ways we operate, whether that's in creative things, software, medicine, everyday life, education. And I think it presents some really exciting opportunities to address big, thorny, complicated challenges like climate change. Let's play one last clip from Schrep here, because I think he put it really well.

KEVIN SCOTT: Yeah, it's really interesting. I think there's a related thing with smart people, where sometimes it's very enjoyable to wallow in complexity, to take a very hard thing and even make it harder, and there's this joy you can get from spending cycles there. But those overly complicated things are almost never really useful.

MIKE SCHROEPFER: A hundred percent. I think -- I often describe, when I'm working with people -- and this took me a long time to figure out -- that there are complexifiers and there are simplifiers. There's someone you give a big, hard problem, and they go, like, here's 30 pages, but you really only need to understand three things.

Here are the three biggest things that matter here, and if you want to get into details, I've got it, but here's the thing. There are other people who come out with, "Here's 26 pages of detail. I've covered every base on this thing."

And you're like, that's not actually helpful. That's actually much worse. And I find simplifiers are a secret weapon of a lot of organizations. It's what we sought in our PMs at Meta. It's what I look for in the founders I back, and it has repeatedly been successful for me in finding people who take a big, complex, gnarly thing and say, but these are the only things that matter.

KEVIN SCOTT: Yeah. I mean, I feel like you're giving the listeners sage advice here. So you compound these things, and they get very interesting. Folks who have a high learning rate, who know how to experiment quickly, who are simplifiers -- you just sort of stack these together. MIKE SCHROEPFER: Yeah. KEVIN SCOTT: And really, the union of those things is just a superpower.

MIKE SCHROEPFER: Yeah, 100%. Climate is a platform problem. It's going to impact tens or hundreds of millions of people, and the people most impacted are the least equipped to deal with the impacts. And so it's like, here I am with a bunch of resources.

Isn't it just an obligation for me to go off and do this? [ MUSIC ] KEVIN SCOTT: Hearing that, and thinking about the things I'm seeing and learning, leaves me feeling hopeful about what's coming next. CHRISTINA WARREN: So I've got to ask you, are you going to make any New Year's resolutions, any predictions for what's coming in 2025? KEVIN SCOTT: Oh, God, that's so hard. I think one of the mistakes we all make -- maybe it's hubris -- is just being quick with the predictions.

I think the thing that's just super clear is the AI platform shift will keep rolling, and I think, if anything, will pick up speed. At the time we're recording this, we've got brand-new models that have just hit the hands of developers, like o1, that are extraordinary leaps forward in capability, and I'm already seeing the amazing things that people are doing as soon as they get access. And so in the year ahead, we will have even more capable models coming, and we will have more exciting things that people are doing with them. And I think, given that some of the AI applications are kind of trailing indicators of where the model capability actually is, next year we're going to start seeing some really incredible things happen that have just taken a while, a couple of years, to percolate through the system and get ready to launch and have wide-scale impact.

So I think it's going to be a super-exciting year. And my personal New Year's resolution is to try to continue every day to come in and learn something new about how people are using AI in creative ways and to try to do something just a little bit creative every day myself. CHRISTINA WARREN: I love that.

I think that's a great resolution, and I might take some cues from you and try to do something similar. I really like that. Okay, that's all the time we've got for today. Huge thanks to all of our guests on the podcast this year.

And you can check out all of their full episodes on YouTube or your favorite podcast platform, and if you have anything that you would like to share with us, please email us anytime at behindthetech@microsoft.com. Thank you so much for listening. KEVIN SCOTT: See you next year.

[ MUSIC ]
