AI: Unpacking the Black Box // The Future of AI


♪♪ >> Support for "AI: Unpacking the Black Box" comes from viewers like you and from Goodwill Keystone Area. It's the last tea party for Krista with Miss Marshmallow and Sarah's first day of management training at Goodwill. When you donate to Goodwill, you help provide skills, training, and career placement. And the things you loved start a new life too. >> Picture this.

It's 2026 and you're gathered around the television with your loved ones. The air is thick with anticipation as the president addresses the nation. His tone is somber yet tinged with optimism. What's the announcement? Artificial general intelligence, or AGI, has been achieved.

He explains that this breakthrough means machines can now perform any intellectual task that a human can. He assures us that this technology will usher in a golden age, but you can't quite shake the fear gnawing at your gut. Now, the next day, life seems unchanged. Weeks pass and your attention drifts to other regular life stuff. Then suddenly you see it. A humanoid robot appears at your favorite cafe, filling a position that had been vacant for months.

At work, half your team is let go. Headlines flash across the screen. AI cures cancer. Quantum computing threatens global banking. Students protest as degrees become obsolete thanks to artificial intelligence. Now, just as the pace of change becomes overwhelming, another announcement rocks the world.

We've achieved ASI, or artificial superintelligence. The future has arrived and it's moving faster than we ever could have imagined. Let's take a step back. How did we get here? What does this mean for our jobs, our health, our very existence? These are the questions we've been exploring throughout our series, "AI: Unpacking the Black Box." We've journeyed through the realms of health care where AI is revolutionizing diagnosis and treatment. We've grappled with profound questions of faith and what it means to be a human in the age of artificial minds.

We've pondered the concept of digital immortality and the ethical implications that it brings. We've examined the exponential growth of technology and its impact on education and the workforce. We've confronted the sobering realities of AI in defense and warfare, where the stakes couldn't be higher.

Tonight in episode eight, we'll bring all of these threads together. We'll look at where we stand now and what the future might hold. We'll explore the potential benefits and the risks, the hopes and the fears. Again, I'm your host, John McElligott, and I'm honored to be your guide as we journey into the future of humanity in the age of artificial intelligence.

>> The robots of the future are going to be time machines. They'll know the laws of science, giving them the ability to simulate the future. They would be able to simulate the future more realistically than we can ever imagine.

And if you now use the atom as a way to compute, you are infinitely more powerful than a digital up or down computer, because now you can go to any angle you want. That's the power of a quantum computer. A quantum computer computes on atoms, not magnetic poles of up and down, North Pole, South Pole, on or off.
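The "any angle" idea above can be sketched in a few lines of code. This is a toy illustration, not anything from the program: a classical bit is only 0 or 1, but a qubit's state is a pair of amplitudes set by an angle, and squaring the amplitudes gives the probability of reading out 0 or 1.

```python
import math

def qubit(theta):
    """Return the (amplitude_0, amplitude_1) pair for angle theta.

    A classical bit is stuck at theta = 0 ("up") or theta = pi ("down");
    a qubit can sit at any angle in between.
    """
    return (math.cos(theta / 2), math.sin(theta / 2))

# "Up" and "down" are just the two extremes of the angle:
up = qubit(0.0)          # always reads out 0
down = qubit(math.pi)    # always reads out 1

# But any angle in between is also a valid state:
halfway = qubit(math.pi / 2)
p0, p1 = halfway[0] ** 2, halfway[1] ** 2  # 50/50 chance of reading 0 or 1
```

A real quantum computer manipulates many such states at once, which is where the claimed speedups come from; this sketch only shows why one qubit already carries more than an up-or-down value.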

These computers are in principle millions, billions of times more powerful than a digital computer. But they can only do one problem at a time. But that one problem is a problem that an ordinary computer would take thousands of years, millions of years to solve. So the potential is there. And that's why a lot of nations are working on this to break other people's codes. Think about that.

The world economy is based on codes. If you can break the codes, you can steal the crown jewels of any nation. Their nuclear secrets, the banking system, all that is codified using digital technology. But quantum computers can eat digital technology for breakfast. >> Technologies like computation, sensors, networks, AI, robotics, 3-D printing, synthetic biology, AR, VR, blockchain -- These technologies are growing exponentially and they're converging to reinvent industries and business models. We fear the future we don't understand.

But for me, this is the most important time ever to be alive. In the next 10 years, right, we're going to see 100 years' worth of progress over the last century. So, you know, 1925 to 2025, that amount of progress we'll see in just the next 10 years.

Unfortunately, we as humans, our brains and our societal structures are not able to fathom that speed of change. So there's going to be disruptive elements. And so how do we deal with that? How do we make sure that it's all positive and not, you know, unduly negative? I come out on the side that we should fear our world without digital superintelligence, that having that level of capability has the potential to keep the world safe.

I think the more intelligent and wise a system is, the more peace loving and abundance loving it is. And so there will be a point in the future where some tyrant says to its AI, "I want you to figure out a way to kill those million people over there." And the AI will say, "No, that's a ridiculous thing to do. What's your problem with them? I'll just go and speak to their AI and we'll figure out a peaceful settlement." So what does it mean when AI models are so vastly intelligent that they're able to ask and answer questions that we can't even conceive of? And I think we need to show people that technology can uplift society, and that we can create a better world.

There can be a world of abundance. Our default software is scarcity and fear. Our first reaction to an unusual circumstance is fear. And when we get under pressure, we are scarcity minded.

I don't want to share. This is mine. If you're showing your brain all the negative news over and over and over again, you go into a state of fear and scarcity, and that's a terrible place to face the future from.

The realization for me is that technology is a force that takes whatever used to be scarce and makes it abundant over and over again. So I think the most important takeaway for everybody in this age of exponential growth, this age of AI, is that we no longer need to settle for the way things were. We don't have to settle for hardships or problems. All of us have the ability to positively impact the life of a billion people, and we can use these extraordinary tools to uplift humanity, to make the world a better place.

>> Senior year of college, I was working at a summer camp up in the Poconos and had a spinal cord injury. And I just kind of randomly was called up by one of my buddies from college one day, and he was like, "You want to get a chip in your brain?" And I was like, "Sure, why not?" Like, I got nothing else going on. They said that we're going to take a chunk out of your skull and implant all of these threads, you know, in your motor cortex, and we're going to go this deep, and then we're going to, like, mount this to your skull and put this little flap of skin that we cut right back over and staple it up. It's all going to be done -- Well, most of it's going to be done by the robot.

Like, the surgeon will actually come in and cut a hole in the skull, and then the robot will implant all the threads, and then the surgeon will kind of mount it to your skull and close you up. When I got cursor control for the first time, that was maybe a few days. That was really, really cool.

That was like a super cool moment because I could finally control the cursor, but it fit like a glove, like, so quickly to the point where I wasn't waking up and, like, getting on my device and using it, thinking, like, "Wow, this is an amazing technology." I was getting on thinking, "Okay, like, what -- what sort of tasks do I have to do today?" And I was, you know, starting to, like, be able to play certain games with it when I finally, like, realized I'm doing all of this with my mind. If I attempt to move my hand in any way, shape, or form, that's an attempted movement, and obviously I can't really move it. But those signals are still firing in my brain. They're just not getting through. So the, um, Neuralink is picking those signals up still, and then it's turning those into, like, cursor control.
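The mechanism described above, where attempted-movement signals still fire in the motor cortex and get turned into cursor motion, can be sketched with the classic linear-decoder picture. Neuralink's actual decoder is far more sophisticated and not public; the weights, channel count, and firing rates below are invented for illustration.

```python
def decode_velocity(firing_rates, weights):
    """Map a vector of per-channel firing rates (Hz) to a cursor velocity (vx, vy)."""
    vx = sum(r * w[0] for r, w in zip(firing_rates, weights))
    vy = sum(r * w[1] for r, w in zip(firing_rates, weights))
    return vx, vy

# Hypothetical per-channel weights, as if learned during a calibration session:
weights = [(0.02, 0.00), (0.00, 0.03), (-0.01, 0.01)]

# Three electrode channels picking up attempted-movement activity (spikes/sec):
rates = [40.0, 10.0, 20.0]

vx, vy = decode_velocity(rates, weights)
cursor = (100 + vx, 100 + vy)  # nudge the cursor from its current position
```

The point of the sketch is the one the speaker makes: the decoder never needs the hand to move, only the signals that would have moved it.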

The day that I didn't try to physically move my hand and just thought like I wanted the cursor to go somewhere, that really blew my mind. This thing is better than I thought it was. Like, I thought it was doing one thing, but it's so much more than that. Makes me wonder, like, what it actually is capable of doing. If it's capable of, instead of having to, like, write out each character of a word, if it's able to just, like, understand the word that I'm about to write and just do that, and then if it's able to do that, full sentences, and then like full sort of ideas that I have, can it, like, put that into words or put that onto paper in some way, you know? 10 years minimum, I want to be able to control, like, a whole keyboard, like, with every key, better than anyone else that I know. I want to be able to play on consoles with controllers just like I'm using two hands.

I think that -- or maybe even more because I'll be using it by brainpower. I think that is the minimum of what they should be able to accomplish in 10 years. >> I think the whole architecture of how we interact with the Internet will change. I think of all the data being generated in the world as, let's say, a big world feed of information. And we're getting to the point where something running locally or something running on a server that you personally own or control can filter all of this data for you in a way that's a lot more personalized across every platform.

So TikTok, let's say, has an algorithm and it roughly knows all your likes and dislikes from the past, and it's starting to surface things that it thinks you might like. But, let's say one day you want to only see science-related content. There's no way to communicate with the recommendation algorithm and go, "Look, I'm not feeling great seeing all these political posts. I want to see science stuff today." That just won't fly because they want to optimize for your attention.

And if they optimize for science things, you actually have to train the algorithm by clicking on a bunch of science stuff. You might instead just talk to your personal AI and it could do it for you, right? You could watch a bunch of science things and work with the external recommendation algorithm to eventually surface just the content that makes sense for you. And I do think it becomes a sort of basic, fundamental human right that you have a personal AI that you can own. Here's a scenario. You walk into a mall and we're all wearing Ray-Ban glasses or some open-source hardware, and what we see on the screens is dependent on our biometrics. And the recommendation system sort of shows whatever is good for us.
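The "personal AI" idea above can be sketched as a local re-ranker sitting between you and a platform's feed. Everything here is made up for illustration: the item format, the engagement scores, and the one-line scoring rule standing in for what would really be a learned model.

```python
# A hypothetical slice of the platform's feed, ranked for engagement:
feed = [
    {"title": "Election drama continues", "topic": "politics", "engagement": 0.9},
    {"title": "Celebrity feud update",    "topic": "gossip",   "engagement": 0.8},
    {"title": "CRISPR trial results",     "topic": "science",  "engagement": 0.5},
    {"title": "New exoplanet discovered", "topic": "science",  "engagement": 0.4},
]

def personal_rerank(items, wanted_topic):
    """Re-rank a feed so items matching today's stated intent come first."""
    def score(item):
        boost = 1.0 if item["topic"] == wanted_topic else 0.0
        return boost + item["engagement"]  # intent outweighs raw engagement
    return sorted(items, key=score, reverse=True)

# "I want to see science stuff today" -- no need to retrain anything by clicking:
today = personal_rerank(feed, "science")
```

The design point is the one the speaker makes: the platform's ranking optimizes for attention, while an agent you own can optimize for what you actually asked it for.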

The closer we get to becoming the sort of merge between human and non-human intelligence, the more I'm starting to value human experiences myself. Like spending time with people you love and enjoying the beauty of nature and the things that sort of can't quite be described. But you do feel it in common with other humans. >> We help every government and business in the world solve their weather-related challenges. And we do that by offering something called a climate resilience platform, where, for instance, organizations can come in, understand how weather is going to impact them in advance, and they can update their operations to either mitigate risk or sort of harness the weather.

We had farmers in Kenya, where we started in Africa, where, you know, they weren't quite sure exactly when to plant or when to harvest, and they were taking signals based on how things had been done in the past. And, you know, the time to harvest now is different than it was 10, 20, 30 years ago. But they didn't have, you know, sort of the infrastructure or the weather forecast to know when to do that.

And so we knew we wanted to get them better insights, to know when to do certain things. But how you get those alerts to folks is quite challenging, especially in different countries. So in that example, we worked sort of both with the government as well as organizations on the ground to understand exactly when weather is going to impact specific farms and tell the farmers when to plant, when to harvest, and all those things. AI is one of the things that really helped us do it. And how do you communicate those types of alerts to folks? Here, you know, I'm based out of Boston, where everyone has a smartphone. It's not the case in every other part of the world.

Different languages, different literacy levels, and all those types of things. Like how do you just make sure that you're able to communicate in one way or multiple ways to help people adapt to the insights that they need? >> We often assume that the AI we want, or that is coming, or that is inevitable, is human-like AI. This is the concept of general artificial intelligence, AGI as it's sometimes called, right, which is the idea of an AI system which has the general functional capacities of a human being. But I think this is really the wrong way to look at it. You know, it's part of this thousand-year-old story of humans trying to play God and create things in their own image, whether it's fashioning a golem from the clay in the riverbanks, you know, in Jewish folklore or Frankenstein.

Or now, there's this implicit idea that we should, you know, success will be when we create or even exceed ourselves. And I think that's something we need to think about and question. Because the future of AI is not yet written. Nothing is inevitable.

And we have to make some clear decisions about the kind of AI that we want, which means we have to see AI clearly for what it is, because it's only with clarity that we'll have the agency that we need in order to shape the future in a socially positive way. And I don't think building replacement artificial human beings is the right way to go. >> I mean, AI is a funny term because, you know, what's called AI today, you know, an earlier generation was called machine learning, a generation before that was called big data. A generation before that was really just called statistics, right? So it's not that there's like some sort of jump in capability, but it's really a phase shift where the combination of, you know, better algorithms and more powerful compute yields this natural language capability, right? So, you know, the models got powerful enough where when you're interacting with them, it doesn't feel awkward to the point of, like, you're just like it's glaring that this is a computer that doesn't understand anything, right? But the frequency with which that happens got low enough where, you know, you can start seeing a whole new class of applications that become possible. The best mental model I have for, you know, AI is you have these machines that have infinite timescales, right? Like, they have all the time in the world.

They're doing computations in parallel. We only have one brain at a time. And so they're able to do, you know, a lot of work for you in parallel and save you a lot of time, right? And when machines are saving you time, your productivity goes up.
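The parallelism point above can be made concrete with a small sketch: fan eight slow tasks out to a pool of workers and they finish in roughly the time of one, not eight. The task itself (a sleep plus a square) is a stand-in for real work.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def task(n):
    """A stand-in for a slow piece of work: wait, then compute."""
    time.sleep(0.1)
    return n * n

start = time.time()
# Eight tasks run concurrently instead of one after another:
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(task, range(8)))
elapsed = time.time() - start  # roughly one task's worth of waiting, not eight
```

Run serially, the same loop would take about 0.8 seconds; the pool finishes in roughly 0.1, which is the "machines save you time by working in parallel" claim in miniature.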

>> The kind of stuff we do impacts the state of New York and its citizens in so many ways. Learning from the history -- This is not the only technology, right? From the advent of fire to, you know, wheels to whatnot. I think any technology which is transparent, people know what it is, there's a very high chance they're going to adopt it so that it makes their life better. With AI, these models, for example, are built as a black box. The only way in my head it will become very useful, adaptable, and acceptable is when you make the thing transparent.

Generative AI, 10 years from today, I cannot even imagine what it will look like, to be honest with you, but it will definitely be much more adaptable and usable and transparent than it is today. I do believe that things are going to get settled. Right now, both sides are unsettled. There are a lot of zoomers and a lot of doomers, you know, like, and the answer is always somewhere in the middle.

>> This stuff goes back almost as old as time. The ancient Greeks imagined automata. According to legend, you know, they were like humans. You couldn't tell them apart. And, of course, sometimes they would go awry and they wouldn't do what you wanted them to do. And, you know, in the '60s, there was a kind of a mild panic for a while about computers able to do so many different things.

They were going to wipe out all the jobs. There was a president's report about that. So there have been these waves where we sort of saw what it could potentially do.

Now, most of those were basically either mythology or false alarms. I think this time is different. And there's this instinct, a really bad instinct, that when people are threatened, they try to preserve things and they try to keep everything from changing, and they try to lock in the past. And that's exactly the wrong instinct. America has never thrived by freezing in place the past, by locking in the technology, locking in the job descriptions, and having everybody do the same as what they used to do.

It's always been dynamism that's made Americans broadly prosperous. And I fear that we are retreating and we're afraid to have that change. But ultimately, I think it's actually more risky to not embrace change. It's more risky to try to freeze in place things rather than say, "You know what, we're going to welcome the dynamism. We're going to welcome the change. And that way when our, you know, competitors from other countries or new technologies come along, we're going to roll with that and we're going to take advantage.

We're going to be on the better side of it." >> Hi. We're the Starrs. I'm Tom. >> I'm Louise.

>> I'm Tim Starr. >> And we're here to talk a little bit about our experience with both artificial intelligence and robotics. One opportunity we had was when John was here presenting some seminars or classes on artificial intelligence and robotics.

>> And I was amazed at the different ways it would be incorporated into the workforce. >> So it was kind of eye opening to us about the development of AI and where they were at, what they're doing, and how it was being used for good. >> The wonderful thing for me was seeing that Tim could participate, and that gave me great hope, because we are a part of the special needs community of Kansas City. I was just very impressed to see that they were not going to be left behind.

If they wanted to be a part of this, there was a place for them. >> There's not consensus in the field, in my opinion, that this is accelerating. And so when people say, "Hey, we're going to have superintelligence or AGI in 18 months," I would usually want to say, "Who is saying this? And do they have, you know, reasons for saying it that go beyond just telling you the truth?" right? Sometimes it's hype, and hype can cause their stock value or their investment value to go up. And also the future is hard to predict. You know, some of these -- if you say, okay, what does this mean in five years? I would ask you to say, well, five years ago, what were all the best estimates of the jobs that were going to go away by this time? What was it? Everyone said driver, truck driver.

Now nobody's saying truck driver, even though that's still probably a thing that could happen. But not now, because it's harder than they thought it was going to be. And now it's, you know, investment analyst or, you know, information -- some kind of information worker. Programmer, right? We have to be careful.

These predictions of, you know, what things are going to happen in the longer term are much harder to get right. It's much easier for us to say what's likely to happen in the next two or three years. It can be much more integrated with our personal lives and how we interact with people physically.

The context of where I am, it knows I'm in my office right now, which may make my AI act differently than if I'm riding my bike like I was 20 minutes ago getting to work. So that, kind of how that interface adapts, that's what we're going to start to see in three to five years. In the short term also, we're going to have more of these AIs be able to work together to do things for us, rather than I'm just interacting with ChatGPT and asking a bunch of questions. I mean, I think we should be careful to not treat these algorithms as living beings, because they're not.

>> As we conclude our journey, I want to take a moment to reflect on the ground we've covered and the implications for our future. Throughout this series, we've explored AI's impact across various facets of our lives. In health care, we've seen how AI has the potential to revolutionize diagnosis, treatment, and drug discovery. We've witnessed how it's already saving lives and how it might one day cure diseases we thought incurable. We've delved into the intersection of faith and technology, questioning what it means to be human in an age of artificial minds.

Can a machine have a soul, consciousness? How do our beliefs adapt to a world where intelligence isn't exclusively human? We've grappled with the concept of immortality through digital consciousness. The idea of uploading our minds to computers once seemed like pure science fiction, but now it's a serious field of study. We pondered the ethical implications and the profound questions it raises about identity and the nature of existence. We've examined how AI is reshaping education and the workforce. The jobs of tomorrow may look vastly different from those of today. Or maybe we don't even need jobs.

We've explored how we might need to redefine work itself, and education must evolve to prepare us for a world where AI is ubiquitous. We've confronted the sobering realities of AI in defense and warfare. The potential for autonomous weapons and AI-driven conflict strategies raises serious ethical concerns and could reshape global power dynamics. Now, you might think these scenarios belong in the realm of science fiction, but they're closer to reality than you might imagine. As recently as November 2024, while this series was airing, executives from leading AI companies like OpenAI and Anthropic warned the government that artificial general intelligence could be here in just 18 months.

The scenarios we've discussed in this series, from AGI to ASI, from robot workers to AI-driven medical breakthroughs, are not science fiction. They're potential near and future realities that we as a species need to prepare for. This preparation requires a new way of thinking. We need to cultivate exponential thought, the ability to anticipate and adapt to rapid exponential change. We need to understand that the future won't just be an extension of the present, but a radical transformation. Let's approach it with open minds, critical thinking, and a commitment to shaping a future that benefits all of humanity.

The challenges are enormous, but so are the opportunities. We have the chance to create a world of abundance, to solve problems that have plagued humanity for centuries. But we must act now. We must engage with these technologies, understand them, and guide their development.

We must ensure that AI serves humanity, not the other way around. Thank you for watching "AI: Unpacking the Black Box." Again, I'm John McElligott. Until next time, keep your eyes to the future and your hands open. >> Support for "AI: Unpacking the Black Box" comes from viewers like you and from Goodwill Keystone Area. It's the last tea party for Krista with Miss Marshmallow and Sarah's first day of management training at Goodwill.

When you donate to Goodwill, you help provide skills, training, and career placement, and the things you loved start a new life too. ♪♪
