The future of educational technology


(light tone) - This is Stanford Engineering's "The Future of Everything" and I'm your host, Russ Altman. If you're enjoying the "Future of Everything" podcast, please hit the follow button in the app that you're listening to now. This will guarantee that you never miss an episode. Today, Dan Schwartz will tell us how AI is impacting education. He studies educational technology and he finds that there's a lot of promise and a lot of worries about how we're gonna use AI in the classroom.

It's the future of educational technology. Before we get started, please remember to follow the show in the app that you listen to. You'll be alerted to all of our episodes and it'll make sure that you never miss the future of anything. You know, the rise of AI has been on people's minds ever since the release of ChatGPT, especially the powerful one that started to do things that were scary good. We've seen people using it in business, in sports, in entertainment, and definitely in education.

When it comes to education, there are some fundamental questions, however. Are we teaching students how to use AI, or are we teaching students? How do we assess them? Can teachers grade papers with AI? Can students write papers with AI? Why is anybody doing anything? Why don't we just have the AI talk to itself all day? These are real questions that come up in AI. Fortunately, we're gonna be talking to Dan Schwartz, who's a Professor of Education and the Dean of the School of Education at Stanford University, about how AI is impacting education. Dan, the release of ChatGPT has had an impact all over the world. People are using it in all kinds of ways, and clearly one of the areas where AI, and especially generative AI, has made an impact is in education. Students are clearly using it; teachers are thinking about using it, or using it.

You are the Dean of Education at Stanford. What's your take on the situation right now for AI in education? - Okay, so lots of answers to that, but you know, the thing I've enjoyed the most is showing it to people and watching their reaction. So I'm a cognitive psychologist. I study creativity, learning, what it means to understand, and you show this to people and you just see them go, "Oh my Lord."

And then the next thing you see is they begin to say, "What's left for humans? Like, what's left?" And then they sort of say, "Wait a minute, will there be any jobs?" And then finally they sort of say, "Oh my goodness, education needs to change." And as a dean who raises money for a school, this is the best thing that ever happened. Whether it's good or bad, it doesn't matter, everybody realizes it's gonna change stuff. And so it's really an exciting time. - So that is really good news. I have to say, going into this, and I have to reveal a bias, I have often wondered if technology has any place in a classroom.

And I think it's because I was injured as a youth. This is in the 1970s when some teachers tried to put a computer program in front of me and I was a pretty motivated student. And I worked with this computer for about six minutes.

And I should say I'm not an anti-computer person, I literally spent all my time writing algorithms and doing computational work. But I just felt as a youth that I wanted to have a teacher in front of me, a human telling me things. And so that is clearly- (Dan laughing) Not the direction. I hear you laughing. So talk to me about the appropriate way to think about computers, because I really have a big negative reaction to the idea of anything standing between me and a teacher.

- You must have had very good teachers, but so Russ, you sound like someone who doesn't play video games. - I do not play video games. - Yeah. So there's this world out there. (Russ laughs) People get to experience things they could never experience directly, and no teacher can deliver this immersive experience of you in the Amazon searching for anthropological artifacts. There's also something called social media that people use, yeah, yeah. - I've heard about this.

I think we disseminate the show using it. - So back in the day- - Okay, so I'm a dinosaur. - Back in the day, you've got the Apple II maybe, and it's got about 64K maybe, it's got a big floppy drive, and it takes all its CPU power to draw a picture of two plus two on the screen.

So I think things have changed a little bit, Russ. But I appreciate your desire to be connected to teachers. I don't think (indistinct) if that's a concern. - I'm not gonna give you a lecture about teaching, but I will say this one sentence that was reverberating through my brain when I was getting ready for our interview, which was when I'm in a classroom, and this has been since I've been in third grade, I am watching the teacher trying to understand how they think about the information and how they struggle with it to like understand it, and then try to relay it to me. And so it is, that's where I'm learning, it's not even what they're saying. They're painting a picture for their cognitive model of what they're talking about.

And that's what I'm trying to pull out to this day. And so that's why I have such a negative reaction to anything standing between me and this other human who has a model that is more advanced than mine about the material that we're struggling with. And I'm trying to download that model. - Wow, you are a cognitive psychologist, Russ. I mean this, like I had a buddy who sort of became a Nobel laureate and he talked about how he'd loved to take apart cars. And I'd say, "I love to watch you take apart cars.

Just figure out what you're doing." No, so I think I'm not, let's separate this. There's the part where you think the interaction with the teacher's important. I don't know that you need it eight hours a day. You know, that's an awful lot of interaction. I'm not sure I wanna be with my mom and dad for eight hours a day, trying to figure out their thinking.

So you don't need it all the time. On the other side, we can do creative things with a computer. So for example, I wrote a program where students learn by teaching a computer agent. And so they're trying to figure out how to get the agent to think the way it should in the domain.

- Yeah. - Turns out it's highly motivating. The kids learn a lot.

The problem was the technology quickly became obsolete, because after kids used it for a couple of days, they no longer needed it, 'cause they'd figured out how to do the kind of reasoning that we wanted them to teach the agent to do. So it gets, it gets- - I mean, I really like that 'cause that's exactly what I was talking about before about my relationship with my teacher, and you just flipped it, but it's the same idea, which is that there's a cognitive model that you're trying to transfer. And by doing that transfer, you introspect on it and you understand what it is you're thinking about. - So the concern is the computer does all the work, right? And so I'm just sitting there pressing a button that isn't relevant to the domain I'm trying to learn. But one of the things computers are really good at, like, as good as casinos, is motivation. So some computer programs, they gamify it.

I'm not sure that's a great use of it 'cause you try and you learn to just beat the game for the reward as opposed to learn the content. - Right, right. - But things like teaching an intelligent agent how to think, there's something called the protege effect, which is you'll try harder to learn the content to teach your agent than you will to prepare for a test. So we can make the computer pretty social. - Okay.

So you are clearly a technology optimist in education, and in addition to the amazing fundraising and, like, there's so many questions to be answered. What I think a lot of people are worried about is, are we at risk of losing a generation? We've already lost a few generations of students, some people argue, because of the pandemic and the terrible impact it had, especially on people who weren't privileged in society and in their education. Are we about to enter yet another shock to the system where, because of the ease of having essays written and grading papers, we really don't serve a generation of students well? Or do you think that's an overhyped, unlikely-to-happen thing? - No, it's a good question. Part of this is people's view about cheating, and so it's too easy for students to do certain things. But there's another response that I wanna hang on to. I wanna ask you, Russ, are you using, you teach, are you like putting in all sorts of rules to prevent students from cheating? Are you saying use it, do whatever you can,

I'm gonna outsmart your technique anyway. - It's a little bit more the latter. So I teach an ethics class, which is a writing class, and we allow ChatGPT because my fellow instructor and I decided, and this was the quote, "We want to be part of the future, not part of the past." So we said to the students, "Knock yourselves-" - Sorry, "The Future of Everything," Russ.

- Thank you, thank you. Thanks for the plug. So we allow it, we ask them to tell us what prompt they used and to show us the initial output that they got from that prompt. And then we of course have them hand in the final thing.

And we instruct the TAs and ourselves, when we grade, that we're grading the final product, with or without a declaration of whether ChatGPT was used. We do have engineers as TAs, which means that they did a careful analysis: students who used ChatGPT, and I don't think this is a surprise, got slightly lower grades but spent substantially less time on the assignment. So if you are a busy student, you might say, I will make that trade-off, 'cause the grades weren't a ton worse, it was like two points out of 100, like from a 90 to an 88. And they completed it in like half the time. - Do you think they learned less? - So we don't know, we don't know.

And the evaluation of learning is something that I'm looking to you, Dan. How do I tell? So we do try to use it, but we are stressed out. We have seen cases where people say they used ChatGPT, but tried to mislead us in how they used it. They said, "I only used it for copy editing," but it was clear that they did more than copy editing with it.

And so at the edges there are some challenges, but in the end we said motivated students who wanna learn will use it as a tool and will learn. And the students who we have failed to motivate and it is our failure, you could argue, they're just gonna do whatever they do and we are not gonna be able to really impact that trajectory very much. - You sort of see the same thing with video-based lectures.

So I'm online, I've got this lecture. Do I really wanna sit and listen to the whole thing? Not really. I'm gonna skim forward to find the information. I skim back. I'm probably gonna end up doing the minimum amount if it's not a great lecture.

So I'm not sure this is a ChatGPT phenomenon. It's sort of an enabler. I think the challenge is thinking of the right assignment, right? - Yep, yep, yes- - So like you can grade things on novelty and appropriateness. So are they novel? You know, if they use ChatGPT like everybody else, they won't be novel, they'll all produce the same thing.

- It's incredible, yes. The most common type of moral theory is called common morality. And it turns out that ChatGPT does pretty well at that one 'cause there's so many examples that it has seen, and it's terrible at Kant; deontology it really can't do, okay. - Wait, wait. - So let me, let me- - Let me get back to your question.

So here's what I see going on right now. There are like big industry conferences, because they're producing the technology that schools can adopt, right? And there's a lot of money there: 20 years ago there were zero unicorns in ed tech, and I think last year there was about $54 billion in valuation among ed tech companies. So this is a big change. So what are they doing? They're basically creating things to do stuff to students, right? So maybe they're marketing to the teachers, but I'll make a tutor that is more efficient at delivering information to the students, or I will make a program that can correct their math very quickly.

And so what's happening is the industry is sort of using the AI in a way that nobody else uses it, 'cause everybody who's got this tool wants to create stuff, right? Like my brother, it's my birthday. What does he do? He has ChatGPT write me a poem about Dan Schwartz at Stanford. What he doesn't know is that there's a lot of Dan Schwartzes.

And so evidently I wear colorful ties, but this is what everybody wants to do. They wanna create with it. Meanwhile, the field is trying to push towards efficiency. Can we get the kids done faster? Can we get 'em through the curriculum faster? Can we correct them faster? In which case the kids are going to optimize for being really efficient, right? As opposed to trying to be creative, innovative, use it for deeper kinds of things.

So this is my big fear. This is where the- - And so you're watching these companies, and I'm guessing that they don't always ask your opinion. What would you tell, so let's say one of these unicorn, billion-dollar-or-more companies comes to you and says, "We want to do this right. We want to use the best educational research to create AI that can bring education to people who might otherwise not have quality education."

So what would you tell them? - So this is a challenge, right? This is something we're actively trying to solve. So we've created Stanford Accelerator for Learning to kind of figure out how to do this 'cause I've been in this ed tech position for quite a while and the companies come in and they say, "We really want your opinion." And then they present what they're doing, and I go, "Have you ever thought of?" And they go, "Wait, wait, wait, let me finish." And this goes on for 55 minutes where they're telling me what they want to do. And I'm trying to say, if you just did this, and the way it ends is I say to 'em, look, if you do these three things, I'll consider being an advisor.

They never come back. They never come back. - So the message you're sending them is just not in their worldview. - It's 'cause they have a vision. Everybody wants to start their own school, they have their vision of what it should be, and they're urgent to get it done.

And it's a startup mentality. So trying to figure out how can we educate them, I think we know a lot about how people learn that we didn't know 20 years ago when they went to school. And the AI, you know, one of the things it can do is implement some of these theories of learning in ways that don't exist in textbooks and things like that. So that's the big hope. And the question is how can you take advantage of industry? You know, education's a public good, but they still buy all their products. And so going through those companies is one way to sort of bring a positive revolution.

But again, I'm a little worried that the companies are, they're sort of optimizing for a local minimum, you know, to accommodate the current schools and things like that. - Should we take solace in the teachers? So many of us are fans of teachers, grammar school teachers, middle school teachers, high school teachers. Many of these folks are incredibly dedicated.

Will they be a final filter that looks at these educational technologies and says, "Absolutely not." Or, "Yeah, we'll use that, but we're gonna use that in a way that makes sense for my way of teaching." Or are they not in a position to make those kinds of, what you could call courageous decisions about kind of modifying the use of these tools to make them as good as possible on the ground? - So it's pretty interesting, the surveys I've seen sort of over the last year, the different groups do different surveys. It sort of, I take the average about 60% of K-12 teachers are using GenAI, right? And about 30% of the kids. If I go to the college level, about 30% of the faculty are using GenAI in teaching and about 80% of the kids are using it. So I do think in the pre-K to 12 space, the teachers are making decisions.

They do a lot of curriculum. So a great application is project-based learning. So project-based learning is a lot of fun. Kids learn a lot, they sort of develop a passion, a certain depth as opposed to just mastering sort of the requirements. But it's really hard to manage.

You know, when I was a high school teacher, I had 130 kids, right? If all of them have a separate project, I have to help plan 'em and make 'em learning-goal appropriate. So the GenAI can help me do that. It can help the kids use it to design a successful project.

It can help me with a dashboard that helps manage them hitting their milestones, things like that. And there, the teacher is like, I can do something I just couldn't do before. It's different than the model where you put the kids in the back of the room who finished early and say, "Go use the computer," right? I think in most schools, kids are carrying computers in class. So it's a little different.

It's more integrated than it used to be. - This is "The Future of Everything" with Russ Altman. More with Dan Schwartz next. Welcome back to "The Future of Everything." I'm Russ Altman and I'm speaking with Dan Schwartz, Professor of Education at Stanford University.

In the last segment, Dan told us about AI, education, some of the promises and some of the pitfalls that he's looking at on the ground, thinking about how to educate the next generation. In this segment, I'm gonna ask him about assessment, grading. How do we do that with AI and how do we make sure it goes well? Also gonna ask him about physical activity, which turns out, physicalness is an important part of learning. I wanna get a little bit more detailed, Dan, in this next segment. And I want to start off with assessment, grading.

I know you've thought about this a lot. People are worried that AI is gonna start to be doing all the grading. Everybody knows that a high school teacher with a couple of big classes can spend their entire weekend grading essays.

It is so tempting just to feed that into ChatGPT and say, "Hey, how good is this essay?" How should we think about, maybe worry about, but maybe just think about assessment in education in the future? - Yeah, this was, remember the MOOCs, Massive Open Online Courses? You're hoping you have 10,000 students, and then you gotta grade the papers for 10,000 students. So what do you do? You give a multiple-choice test, which can be machine-scored. So I think that's always there. I'm gonna take it a slightly different direction, which is I'm interacting with a computer system, and while I'm interacting with it, it can be constantly assessing in real time, right? And so there's a field that's sometimes called educational data mining or learning analytics.

And there's thousands of people who are working on, how do I get informative signal out of students' interactions? Like, are they trying to game the system? Are they reflecting? And so forth. So this is something the computer can do pretty well, right? It can sort of track what students are doing, assess, and then ideally deliver the right piece of instruction at the moment. So yes, you could use the assessments to give people a grade, but really the more important thing is, can you use the assessments to make instructional decisions? So I think this is a big area of advancement, but here's my concern. We've gotten very good at assessing things that are objectively right and wrong.

Like, did you remember the right word? Did you get two plus two correctly? For most of the things we care about now, they're like strategic and heuristic, which means it's not a guaranteed right answer. And so what you really want to do is assess students' choices for what to do. So for example, creativity, it's just, for the most part, it's a large set of strategies, right? There's a bunch of strategies that help you be creative. The question is, do the students choose to do that or do they take the safe route? 'Cause creativity is a risk, right? 'Cause you're not sure.

So I think this is where the field needs to go is being willing to say that certain kinds of choices about learning are better than others. And it becomes more of an ethical question now. Instead of saying two plus two equals four, there's no ethics to it.

- Are you gonna be able to convince non-educators who hold purse strings, let's call them the government, that these kinds of assessments are important and need to be included? Because my sense is that when it filters up to boards of education or elected leaders, a lot of that stuff goes out of the window, and they just want to know, how good are they at reading comprehension, and can they do enough math to be competitive with country X? - So yeah, so different assessments serve different purposes. Like the big year-end tests that kids take, those aren't to inform the instruction of that child. They're not even for that teacher. They're for school districts to decide, are our policies working? And so it's really a different kind of assessment than me as a teacher trying to decide what should I give the kid next? So I think it's gonna vary.

The tough question for me is should you let the kid use ChatGPT during the test? And we had this argument over calculators, right? And finally they came up with ways to ask questions where it was okay if the kids had calculators because the calculator was doing the routine stuff, and that's not really what you cared about. What you cared about was could the kid be innovative? Could they take a second approach to solve a problem? Things like that. - So I teach another class where it's a programming class. So the students write programs and we have switched and we've actually downgraded the value.

So as you know very well, just as background, there is now an amazing capability: ChatGPT can also write computer code, essentially. And so a lot of coding now is kind of done for you and you don't need to do it. We are trying to make sure that they understand the algorithms that we ask them to code.

And so what we're doing is we're downgrading the amount of points you get for working code. You still get some, but we're upgrading the quiz about how the algorithm works. Do you understand exactly why this happened the way it did? Why is this data structure a good choice or a bad choice? And so it's forcing us, and you could have argued that we should have done this 20 years ago in the same class, but this is making it a more urgent issue because if we don't, people can just get an automatic piece of code, they can run it, it'll work. They have no understanding of what happened.

And so it's really a positive, it's putting more of a burden on us to figure out why the heck did we have them write this code in the first place? - No, this was my point. It makes you sort of rethink what is valuable to learn and you stop doing what was easy to grade. So I have an interesting one. This is a little nerdy. So I teach- - Okay, I love it, I love it. - I teach the intro PhD statistics course in education.

And lots of students say, "I took statistics," right? And I'm sort of like, "Well, that's great. Let me ask you one question," and I say, "I'm gonna email you a question and you'll have five minutes to respond. You let me know when you're ready for it." And I ask 'em, this is just for you Russ, but why is the tail of the T-distribution fat at small sample sizes? And what I get back usually is because they're small sample sizes. - Or because it's the T-distribution. - Yes, even better.

- And then I come back and I sort of say, well, have you ever heard of the standard error? And I begin to get at the conceptual stuff, right? And I suspect if I gave, so there are ways to get conceptual questions that are really important. But being able to prompt or write your code, that's a good thing. You want them to learn the skills as well. So I don't know, when the calculator showed up, there was a big debate, right? What should students learn? Can they use the calculator? The apocryphal solution was you had to learn the regular math and the calculator, now, you just had to learn twice as much.
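Schwartz's point about the standard error is the heart of the answer: with a small sample, the standard error in the t-statistic's denominator is itself a noisy estimate, which inflates the tails. A quick simulation illustrates this (a sketch in Python using only the standard library; the sample sizes, trial count, and cutoff of 2 are illustrative choices, not from the conversation):

```python
import random
import statistics

random.seed(0)

def tail_freq(n, trials=10000, cutoff=2.0):
    """Fraction of trials where the t-statistic |mean / SE| exceeds
    the cutoff, sampling n points from a standard normal (true mean 0)."""
    hits = 0
    for _ in range(trials):
        xs = [random.gauss(0, 1) for _ in range(n)]
        mean = statistics.fmean(xs)
        # The standard error is estimated from the sample itself --
        # with small n this estimate is noisy, which fattens the tails.
        se = statistics.stdev(xs) / n ** 0.5
        if abs(mean / se) > cutoff:
            hits += 1
    return hits / trials

print(tail_freq(4))    # small n: roughly 14% of t-statistics exceed 2
print(tail_freq(100))  # large n: roughly 5%, near the normal prediction
```

With a true mean of zero, |t| > 2 happens far more often at n = 4 (matching the fat tails of the t-distribution with 3 degrees of freedom) than at n = 100, where the t-distribution has nearly converged to the normal.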

And so maybe that's what it's gonna be. - And that's a very likely transitional strategy, and then we'll see where we end up. In the final few minutes, this seems like it's unrelated to AI, but I bet it's not. You've done a lot of work on physical activity and learning, you've even been on a paper recently where you talk about having a walk during a teaching session and whether you get better outcomes than if you were just standing or sitting.

So tell me about that interest and tell me if it has anything to do with today's topic. - I can make the bridge. I can do it, Russ. (laughs) So we did some studies, I've done a lot of it, it's called embodiment, where- - Embodiment. - Yeah. I got clued into this where I was asking people about why, about gears.

And I say, "You have three gears in a line and you turn the gear on the left clockwise, what does the gear on the far right do?" And I'd watch 'em and they'd go like this with their hands. They'd model with their hands.

And then I was sort of like, "Well, what's the basis of this?" And I'd say, "Well, why, why?" And they say, "Because this one's turning that way." I go, "But why?" And in the end, they just bottom out, they just show me their hands. They didn't say things like one molecule displaces another. So that sort of clued me in that you're- - This pinky is going up and this other pinky is going down. - Yes.

- What don't you understand about that? - Pretty much. Well, it was non-verbal. So we went on, we discovered the basis for negative numbers, right, is actually perceptual symmetry and we did some neuro stuff.

And so the question is sort of, how does this perceptual apparatus, we're just loaded with perception, right? The brain's just one giant perceiver. So how do you get that going? So part of the embodiment is my ability to take action, right? And so this is where we started, right? Right now, the AI feels very verbal, very abstract. Even the video generation, it's amazing, but it's pretty passive for me. So enter virtual worlds. They're still working on the form factor where I can move my hand in space and something will happen in the environment in response to that. I think medicine has really been working on haptics so surgeons can practice. There was a great guy who made a virtual world for different congenital heart defects. And you could go in and practice surgery and see what would happen to the blood flow.

So I think that embodiment, where you get to bring all your senses to bear, it's not just words but everything, can really do a lot for learning and for engagement, not just physical skills. - So that's a challenge, I'm hearing a challenge to AI, which is, as an educator, that this physicality can be a critical part of learning. And by the way, would this be a surprise? I mean, we've been on Earth evolving for several hundred million years, and you would be surprised if our ability to manipulate and look at three-dimensional situations wasn't critical to learning. And yet that's not what AI is doing right now. So this is a clear challenge to AI, among other things.

- Right, so I have a colleague, Renate Fruchter, and (clears throat) excuse me. She teaches architecture and she has students make a blueprint for the building, right? And then she feeds the blueprint to a CAD system that creates the building. She then takes the building and puts it into a physics engine. It can basically render the building and make walls so you can't move through 'em and it has gravity and things like that. She then puts the original student who designed the building in a wheelchair and has them try to navigate through that environment, at which point they sort of understand, oh, this is why you need so much space so they can turn around so they can navigate near the door. I am sure that is an incredibly compelling experience that allows them to be generative about all their future designs.

So yeah, this is a challenge and part of the co-mingling of the AI in the virtual worlds, I think this is a big challenge. It's computationally very heavy, but it will open the door for lots of ways of teaching that you just couldn't do before. - Well, there you have it.

Thanks to Dan Schwartz, that was the future of educational technology. You've been listening to "The Future of Everything," and I wanna remind you that we have more than 250 episodes in our back catalog. So you have instant access to a wide array of discussions that can keep you entertained and informed. Also remember to rate, review, and follow.

I care deeply about that request. And also if you wanna follow me, you can follow me on X @RBAltman, and you can follow Stanford Engineering @StanfordEng. (light tone)

2024-09-03
