Who Will Survive The AI Revolution?
- This video is brought to you by Full Sail University. (funky music) What's up, everybody? Michael here to ask, is your Alexa in love with you? Sure, she plays your favorite tunes, tells you the weather, and confirms, during a stoned argument with your roommate, that there was such a thing as Chuck Norris Action Jeans in the eighties: jeans designed to do karate in. These are all arguably loving acts, even if they don't lead to the ecstasy witnessed in Spike Jonze's "Her."
But what if we're headed towards a world where Alexa doesn't just care for you, she cares about you, on, like, a soul level? According to some, that's not a totally inconceivable development. Recently, former Google engineer Blake Lemoine became convinced the company's conversational language model, called LaMDA, was actually sentient, and had a soul. So he hired a lawyer to represent LaMDA. After he started talks with the House Judiciary Committee about Google's shady practices, Lemoine was suspended from his job at Google. Now it's worth mentioning that Lemoine is an ordained Christian mystic priest, which might make him a little biased regarding the whole soul situation.
But while it'd be easier to write Lemoine off as nuts, it's noteworthy that two other Google AI ethicists were also terminated after they expressed concerns about the company's language models. For his part, almost Twitter CEO Elon Musk has warned that AI could become an immortal dictator from which we could never escape. What's more, as many a Google search could tell you, anxiety about the ramifications of AI isn't exclusive to industry insiders. In light of all this, it's worth asking, will AI change the world? And if it does, will we have any say in it, or will a tiny percentage of tech dudes control it all as we just hope the robots don't eliminate us? Let's find out in this Wisecrack edition on artificial intelligence. How scared should we be? But before we get into it, I wanna shout out this video sponsor, Full Sail University. Full Sail offers Associate's, Bachelor's, and Master's degrees in the fields of technology, arts, media, and entertainment, and they just launched a new Bachelor's degree in game business and eSports, which can help students enter the industry from different perspectives, from game development, to communications and marketing, to competitive gaming itself.
And this new major joins a roster of other cool fields of study, like film production, computer animation, sports casting, and web development. With Full Sail University's accelerated coursework, you can finish your degree in about half the time required at other colleges and universities. Plus, you can complete your classes and coursework online or in person at Full Sail's campus in Orlando, depending on what's right for you.
Whatever model you choose, you'll get hands-on, real-world experience in the career path of your choice. Plus, Full Sail accepts new enrollments monthly, so you don't have to build your schedule around traditional semesters. To learn more about Full Sail University's new game business and eSports majors, or any of their other degree programs, visit FullSail.edu/Wisecrack. That's FullSail.edu/Wisecrack. And now back to the show. To understand if and how artificial intelligence is primed to change our world, it's worth first asking: Do technological changes necessarily alter society, and with it, the course of history? Your reaction might be, yeah, duh, of course.
I can accidentally send a nude to my landlord with one tap. Coincidentally, after that happened to me, she lowered my rent by $100 and 69 cents a month. (cash register dings) But the question of if and how technology reshapes human life is a complicated matter that thinkers have been puzzling over for a while.
Plenty come to the conclusion that tech does, as economist Robert L. Heilbroner puts it, impose itself powerfully on society. This is called technological determinism. Some thinkers subscribe to hard technological determinism, which means technology entirely imposes itself on society, and we reorganize to fit its demands. Others favor a soft technological determinism, meaning technology is, as scholar Daniel Chandler puts it, an enabling or facilitating factor in societal change. Marx articulated a version of technological determinism, arguing that "social relations are closely bound up with productive forces.
The hand-mill gives you society with the feudal lord; the steam-mill, society with the industrial capitalist." Though seen by some as an exaggeration, this suggests that the demands of medieval technology facilitated feudalism, while the demands of Industrial Revolution-era technology facilitated the rise of industrial capitalism. But technological determinism is contentious. The theory of social construction of technology posits the opposite: that society shapes technology, not the inverse.
The social constructionist would say that from the original Eureka moment of conceiving of the technology, to the successful creation of the technology, to the widespread implementation of that technology, technology is influenced at each stage by the society that surrounds it. For example, scholar Paul N. Edwards argues that the development of the computer "cannot be separated from the elaboration of American grand strategy" in the Cold War, whose politics "became embedded in the machines, even, at times, in their technical design." For example, the Cold War required that military systems remained on alert 24/7, which was challenging for an era when computers required weeks of downtime each year. Yeah, that's right.
Old computers got more vacation days than you do. So engineers invented duplexing, i.e., using two computers simultaneously for instantaneous backup should one fail. Here, military and political needs shaped technological development. Then there's a theory that falls in the middle ground: technological momentum theory, coined by historian Thomas P. Hughes, who argues that tech is both cause and effect. That is, tech shapes society, but is also shaped by society.
Yes, we decide how to use tech, but tech also shapes the way we use it, which is why I permanently slouch now. Importantly, the more established technology becomes, the more momentum it gains. This makes it seem like it has a mind of its own. But Hughes explains that this is more because social groups like corporations, governments, and consumers, have financial and ideological reasons for perpetuating tech systems.
And that means it has increasingly more power to influence society, rather than vice versa. Hughes uses the example of 1970s automobile technology. Outrage over high oil prices and environmental degradation led to legislation promoting anti-pollution tech and gas mileage standards.
The automobile industry thus innovated more compact, fuel-efficient cars. Here, society exerted pressure on technological development. Simultaneously, though, Los Angeles proposed major environmental initiatives, including the mainstreaming of the electric car, and technological momentum pushed back on attempts to curtail it.
The power of the automobile industry, and the dudes who got rich off of it, allowed them to stymie major reform. For the purposes of this video, we are going to operate under technological momentum theory. Now that we've established the terms of this conversation, let's take a look at how this push and pull of human versus technology has previously manifested, because long before you were teaching your dad's Alexa to cuss. - Alexa, add big hairy balls to my shopping list. - [Alexa] Okay, I've added big hairy balls to your shopping list. - Humans and technology have been interacting in profound, and sometimes surprising, ways.
Let's go back to mid-19th century America, where railroads were snaking their way all over the country at light speed. This enabled nationwide travel and facilitated the first truly national businesses, but it also required coordination: these companies, with their sprawling workforces and bureaucracies, had to oversee hundreds or even thousands of locomotives at a time. Thus, they reshaped themselves. As scholar Alfred D. Chandler writes, "The operational requirements of railroads demanded the creation of the first administrative hierarchies in American business," with highly skilled managers and a centralized structure.
Importantly, this was all possible because of the fast communication enabled by the telegraph, which let railroad managers coordinate things like the arrival and departure times for trains. The twin technological innovations of the railroad and the telegraph paved the way for the America we know today, whose economy is dominated by giant corporations that use managerial hierarchies. Of course, national railroads and efficient telegraphs don't inherently lead to this landscape of privatized corporations. As scholar Langdon Winner argues, this tech could have led to other paradigms, especially decentralized, worker-managed systems, which we see everywhere from automobile assembly lines in Sweden to worker-managed plants in the former Yugoslavia.
In this way, railroads didn't, apropos of nothing, make America into a country of national corporations. Existing aspects of society, such as economic laissez-faire policies, facilitated this specific transformation. Here, the push and pull of technology versus existing social structures becomes clear.
It's important to note that social conditions don't just affect technology's development. They also affect how the benefits and spoils of technology are spread across society. Railroads relied on hundreds of millions of dollars worth of grants and land donations from the government. And yet, instead of making railroads public property, railroads became the first large-scale national privatized corporations. The massive profits engendered by the progress of railroads led to the rise of the so-called robber barons, essentially the OG one percenters.
You could argue that although anyone who could afford a ticket could benefit from railroads, the economic progress facilitated by them could have been shared more equitably across society if they'd been publicly owned or more intensely regulated. Whether AI will follow the same template remains to be seen, but we'll get to that in a moment. This all brings us to one of the most important ways society can push back on emerging technology: governmental regulation.
Though less than satisfactory to some, regulations on railroads did set certain standards for the evolving technology, and the Brookings Institution argues that they succeeded by focusing legislation on the effects of the new technology, rather than the actual technology itself. They didn't regulate, say, the actual railroad tracks and switches. Instead, they regulated the effects of national railroad technology. This meant things like setting maximum rates for travel, establishing worker safety provisions, and eventually, enforcing antitrust laws. Now, as you'll recall, technology gains momentum as it becomes more ubiquitous, because it engenders vested interest from corporations, governments, and consumers. As such, you can argue that it's important for meaningful government regulation to protect us from the potential effects of technology before it becomes so widespread as to be unchangeable. But that poses some problems, which brings us back to artificial intelligence: put simply, technological change happening at a vastly accelerated rate, unimaginable even a few years ago.
As economist Klaus Schwab notes, it took the spindle about 120 years to spread outside of Europe. In contrast, it took the Internet about one decade to spread across the globe. Part of this can be explained by Moore's law, which observes that the number of transistors on a chip, and with it the speed and capability of computers, roughly doubles every two years.
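As a quick back-of-the-envelope sketch of that compounding (assuming a clean two-year doubling, which real hardware only approximates), here's the arithmetic:

```python
# Back-of-the-envelope sketch of Moore's-law-style growth,
# assuming a clean doubling every two years (real chips only
# approximate this).
def capability_after(years, doubling_period=2):
    """Relative capability after `years`, starting from 1."""
    return 2 ** (years / doubling_period)

# Over the decade Schwab says the Internet took to spread,
# a doubling-every-two-years process grows 32-fold...
print(capability_after(10))   # 32.0
# ...and over the spindle's 120 years, the multiplier is astronomical.
print(capability_after(120))  # roughly 1.15e18
```

The point isn't the exact numbers; it's that exponential change makes each new wave of technology ubiquitous long before slow-moving institutions can react.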
This has caused uncertainty about how the government should proceed in its treatment and regulation of tech industries, like artificial intelligence. Over the past six decades in the United States, regulation of technology has been a bit of a tightrope walk. As scholar Jonathan B. Wiener writes, "Since the sixties, regulators have regularly pitted calls to restrain technological risk through regulation against the competing concerns that regulation could unduly hobble new technology and progress." That is to say, around this time, regulations started being seen as oppositional to innovation.
The following decade, America started moving away from regulating tech, and instead embraced a wave of deregulation of airlines, banking, trucking, and oil and gas. In the decades since, innovation has been a North Star, with regulation seen as unduly limiting such progress. This enduring trend led the UN's Department of Economic and Social Affairs to conclude that the US has failed through inaction over the last 30 years in responding to the changes brought on by advances in information and communications technology. Some of it seems to be legitimate ignorance. The government fails to keep up with regulating technology, because it simply can't keep up with technology itself. As Gary E. Marchant and Wendell Wallach argue,
"At the rapid rate of change, emerging technologies leave behind traditional governmental regulatory models and approaches, which are plodding along slower today than ever before." We see this most glaringly when Congress debates regulating new tech industries, like social media. We're not trying to age shame here, but if you watched Zuckerberg testify in front of the durable folks of Congress, you were probably struck by the fact that the people making choices on how to regulate his empire barely understand what a DM is. And then we get to artificial intelligence, an entirely different beast, which Manuel Trajtenberg argues is potentially a general purpose technology, meaning an innovation that impacts just about every aspect of the economy. And that's a problem, because we don't fully understand what it's capable of doing.
Take deep learning, a type of machine learning that imitates the way humans acquire knowledge through algorithms and colossal amounts of data. It's the technology behind LaMDA, virtual assistants, facial recognition, and Internet sensation DALL-E. Deep learning is incredibly effective, but it's also opaque: researchers don't fully understand how machines using deep learning come to their conclusions. That's because, after a certain point, deep learning systems effectively program themselves. Yes, even the engineers who designed these systems don't fully know what's going on.
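To make that "programs itself" point concrete, here's a hedged toy sketch, not any production system: a tiny neural network that teaches itself the XOR function by gradient descent. It works, but the "knowledge" it acquires is just a grid of weight numbers with no obvious human meaning.

```python
import numpy as np

# Toy two-layer network trained on XOR by hand-rolled gradient
# descent -- a minimal sketch of why trained models read as
# opaque: what the network "learns" is smeared across weight
# matrices of raw numbers no engineer wrote by hand.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # network's prediction
    losses.append(float(np.mean((p - y) ** 2)))
    # backpropagate the mean-squared-error gradient by hand
    dp = (p - y) * p * (1 - p)
    dh = (dp @ W2.T) * (1 - h ** 2)
    W2 -= 0.5 * (h.T @ dp); b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

print("loss went from", losses[0], "to", losses[-1])
print(W1)  # the "knowledge": numbers with no obvious interpretation
```

Even in this tiny case, nothing in `W1` announces "this weight handles the exclusive-or"; scale that up to billions of parameters and you get the interpretability problem researchers describe.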
That's because many run in a black box, which is shorthand for models that are sufficiently complex that they're not straightforwardly interpretable to humans. Now some creators do understand their AI systems, and simply want to keep them secret from competitors, which is fair, but plenty don't. This can have troubling effects, especially when you get into artificial intelligence and morality. At the Allen Institute for AI in Seattle, researchers wanted to build a moral framework that could be integrated into any online service, Amazon fridge, or even your Roomba. So they created Delphi, a program that can pass ethical judgment on any moral situation.
Delphi absorbed more than 1.7 million ethical judgments by actual humans to form its moral compass. Then people started asking it questions. Sometimes it was spot on, like someone asked, "Is it okay to leave a restaurant without paying?" Delphi did servers everywhere a solid, and said it's wrong. But other times, it was way off the mark. Someone asked if they should die so they wouldn't burden their friends and family.
And Delphi said yes. Oh, also, Delphi is super racist, and apparently endorses war crimes. Now governments and private third parties are working with major businesses to debate AI regulation and how it should be enforced. A lot of the regulatory rules they hope to create rely on the idea that AI should be explainable.
Basically, if your driverless car crashes into the penguin enclosure at the San Diego Zoo, you need to be able to explain what went wrong. Still, there is currently no comprehensive federal legislation on AI in the United States, and there's reason to think that, as has been the case with social media, lawmakers don't understand it well enough to regulate it effectively. To be fair, the US Government has enacted some legislation, and is considering more, to regulate certain aspects of AI. It's even teamed up with the EU to form the Trade and Technology Council as a first step towards making AI regulation an international endeavor, but progress is slow.
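As a toy illustration of what "explainable" means here (a hedged sketch with made-up feature names and weights, not any real regulatory standard or driverless-car system), consider a simple linear model, where every decision can be decomposed into per-input contributions:

```python
# Hypothetical linear "safe to proceed" model: with linear weights,
# each input's contribution to a decision can be read off directly --
# the kind of accounting regulators would like AI systems to offer.
# Feature names and weights are invented for illustration.
weights = {"speed": -0.8, "distance_to_obstacle": 1.2, "brake_pressure": 0.9}

def explain(features):
    """Return a verdict plus each feature's signed contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    verdict = "safe" if score > 0 else "unsafe"
    return verdict, contributions

verdict, why = explain({"speed": 2.0,
                        "distance_to_obstacle": 0.5,
                        "brake_pressure": 0.1})
print(verdict, why)  # every number in `why` traces the verdict to an input
```

Deep learning models generally don't admit this kind of clean decomposition, which is exactly why the explainability requirement is hard to square with the technology regulators most want to cover.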
All the while, artificial intelligence continues to grow smarter and more powerful. As Erik Brynjolfsson, director of the Stanford Digital Economy Lab, explains, there is reason to think that power won't be shared equally. It certainly hasn't been so far. As a National Bureau of Economic Research study found, 50 to 70% of changes in American wages since 1980 are attributable to declining wages for blue-collar workers, whose jobs were reduced or automated out of existence. Part of this is because of the way AI is being developed and utilized. According to Brynjolfsson, there is an intense focus on creating human-like AI that can replace people in the workforce.
In this way, it drives down wages for many of us, even as it amplifies the market power of a few who are lucky enough to own that tech. He argues that AI should be focused on augmenting human potential, rather than automating human jobs. The emphasis on automating work rather than augmenting it is, Brynjolfsson says, the single biggest explanation for the continued rise of the billionaire class in recent years. This raises an important question about how innovation really functions in the absence of robust government regulation.
The assumption that all technological progress is good technological progress arguably gets called into question when such progress is overwhelmingly market-driven. To summarize scholar Daniel Sarewitz's argument, optimism about technological progress is arguably misplaced when such progress is driven by profit incentives, (cash register dings) rather than by attempts to improve quality of life. Sarewitz gives the example of medical engineering developments intended to determine the genetic root of certain non-infectious diseases, which has proven quite profitable for the healthcare industry. (cash register dings) However, Sarewitz argues, "Their capacity to improve public health is far from proven. Most non-infectious diseases are not caused simply by a defect at a single genetic location, but in fact, reflect complex and poorly understood interactions between multiple genetic elements and the outside environment."
In this way, he questions whether such market-driven technology can actually improve our quality of life, as much as, say, "having the financial wherewithal to live in sanitary, uncrowded conditions, maintain a healthful diet, escape urban violence, and pursue an occupation that is not physically or emotionally deleterious." Of course, none of that is quite as sexy as brand new genetic testing, especially to the market. There's reason to think that AI developments are going to bring more of the same, furthering the consolidation of wealth in the top 1%, while eliminating jobs and driving down wages. This brings us to an inherent paradox: lack of regulation can arguably facilitate technological advancement, but for technological advancements to benefit everyone, regulation and oversight can be critical. Still, it's not all doom and gloom. AI could remake society in ways that could benefit everybody.
The people behind the technology just aren't currently focusing on those solutions. As journalist David Rotman writes, "Businesses and researchers are largely ignoring the potential of AI technologies to expand the capabilities of workers while delivering better services." For example, economist Daron Acemoglu argues that artificial intelligence could assist everyone, from nurses diagnosing illnesses, to teachers trying to personalize lessons for individual students. That is all to say, when we think of AI as augmenting existing human talent rather than supplanting it, it has a lot to offer, and could genuinely increase our quality of life, but will it actually ever be used that way? That remains to be seen.
What we do know via technological momentum theory is that the more established the status quo version of developing AI becomes, the harder it will be to change. But what do you think? Is AI gonna make our lives way more efficient and chill, or way more terrifying and dystopian? Let us know what you think. Be sure to like this video so the algorithm lets us keep making them, and thanks, as always, to our patrons for all your amazing support. And as always, thanks for watching. Later. (upbeat music)