(gentle music) - [Carl Sagan] Maybe it's a little early. Maybe the time is not quite yet, but those other worlds, promising untold opportunities, beckon. We can't help it. (gentle music continues) Life looks for life. There's a tingling in the spine, that catch in the voice, (birds chirping) a faint sensation, as if a distant memory, of falling from a great height. We know we are approaching the grandest of mysteries.
- Are we alone? Is there any other question that so relentlessly haunts our thoughts, that captures our imaginations, that gnaws at our very being, a question that speaks to the very meaning of our existence? (gentle music continues) In 1960, the late Frank Drake used the Green Bank Observatory to listen for artificial radio transmissions from two other stars, Tau Ceti and Epsilon Eridani. Nicknamed Project Ozma, it's generally recognized as the dawn of modern SETI, the search for extraterrestrial intelligence. Since then, the field has grown in scope and size with projects like Breakthrough Listen now committing $100 million to the endeavor. But 14 years after Project Ozma, Drake and others went a step further using the Arecibo telescope to switch from just listening to broadcasting.
Rather than just SETI, this was now METI, messaging extraterrestrial intelligence. They used Earth's most powerful transmitter to send 1,679 bits of data, a number which is the product of the primes 73 and 23. Unfolding that semiprime into 73 rows and 23 columns, a simple pictorial message appears, depicting humans, numbers, atomic numbers, and a graphic of the solar system. There's something profound about that day, humanity's first attempt to say hello to the cosmos, although it has to be said there's not much chance of anyone replying for a very long time, as the message was sent towards a distant globular cluster 25,000 light years away.
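To make that unfolding concrete, here's a minimal Python sketch of the decoding logic a recipient might apply. The real 1,679-bit payload isn't reproduced here; `bits` is just a placeholder standing in for it.

```python
# Minimal sketch of the Arecibo unfolding logic. The real 1,679-bit
# payload is not reproduced; `bits` is a placeholder standing in for it.

def semiprime_factors(n: int) -> tuple[int, int]:
    """Return (p, q) with p * q == n for a semiprime n."""
    for p in range(2, int(n ** 0.5) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError(f"{n} has no nontrivial factorization")

n_bits = 1679
p, q = semiprime_factors(n_bits)
print(f"{n_bits} = {p} x {q}")   # 1679 = 23 x 73

# A recipient who factors 1,679 faces only two plausible layouts:
# 23 rows x 73 columns, or 73 rows x 23 columns. Only the latter
# renders the intended pictogram.
bits = "0" * n_bits              # placeholder, NOT the real message
rows, cols = 73, 23
grid = [bits[r * cols:(r + 1) * cols] for r in range(rows)]
assert len(grid) == rows and all(len(row) == cols for row in grid)
```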
Truthfully, the Arecibo message was largely a technical demonstration, a proof of concept. Even so, it sparked considerable controversy. Soon after, astronomer and Nobel Laureate Martin Ryle published a protest, warning that any creatures out there may be malevolent or hungry, and even calling for an international ban on any future efforts. In 1989, the International Academy of Astronautics adopted a declaration stating, "No response to a signal or other evidence of extraterrestrial intelligence should be sent until appropriate international consultations have taken place." And in 1995, the SETI Permanent Study Group presented a draft declaration that any messaging should be approved by the United Nations General Assembly.
Clearly then, many scientists have no qualms with listening to the universe but object to broadcasting, and it's this dichotomous stance that underscores the subject of today's video, the SETI paradox. Not to be confused with the Fermi paradox, the SETI paradox was first formalized by Alexander Zaitsev in 2006, who concluded that searching is meaningless if no one feels the need to transmit. In essence, why do we expect anyone to be broadcasting if we don't? This is a hotly debated topic, and I wanted to summarize the arguments for you today but also to hear from you, because this should be a decision for the world, not just for a handful of astronomers. But before making up your mind, please do watch and listen to the arguments, and then see where you land.
Detractors of METI often invoke two core arguments. First, there's no prospect of short-term gain. It would take a decade or more to get a response even if the nearest stars were inhabited, more likely centuries or even millennia. SETI hasn't found anything yet, but that's hardly surprising either. It's studied a tiny fraction of the possible frequencies, bandwidths, times, and locations.
SETI scientist Jill Tarter describes the situation as akin to filling up a bathtub with ocean water, seeing no fish in it, and then concluding that the oceans must be devoid of life. So, the argument here would be patience, no need to start panicking and sending out messages into the dark when we've barely even listened yet. Just give it time. The second argument and the one most people might initially consider is existential risk. If other technological civilizations are out there, then they're unlikely to be at the same level as us and could be far more advanced. That raises the prospect that they could do something that we can't, eliminate a civilization residing on a distant planet.
Now, a typical knee-jerk response to that is, "Why the hell would they do that?" Surely, if they're more advanced than us, that means they're more enlightened, more benevolent, maybe something more like the Federation from Star Trek. I mean, when was the last time that you saw Picard destroy another planet? - Remarkable. - Of course, our science fiction is just that. It's fiction. It's meant for entertainment. In truth, we have no idea what the likely actions or motives of another civilization would be.
That's trying to guess xenopsychology. But we have to admit, at a bare minimum, that it's on the spectrum, on the range of possible behaviors. As a pointed example, consider that humans regularly eliminate entire colonies of ants without a second thought, nothing personal. They're just a pest that we don't want in our backyard in case they spread further into the house. Now, most of us don't really question the ethics of that. "They're a limited species," we might justify to ourselves.
Removing them is just pest control, or even more like pulling a weed from the flower bed. In the same way, it would be presumptuous for humanity to assume that alien civilizations must necessarily cherish, respect, or even recognize societies such as our own. In the end, we may be nothing but an ant hill to them.
Many scientists besides Martin Ryle have rallied against METI. Most famously, Stephen Hawking compared METI to inviting European colonizers onto First Nations' soil, a disastrous outcome for the Indigenous peoples. Indeed, rarely does one see instances in human history where interactions between peoples of asymmetric capabilities do not lead to gross exploitation or worse. And if you'll indulge me to extend Hawking's analogy beyond human-human interactions, human-animal interactions are, largely, even worse. Perhaps the most pugnacious METI detractor is John Gertz, who once compared METI scientists to someone who cultures and releases anthrax.
And scientists like Ryle, Hawking, and Gertz and their supporters have made it extremely difficult for METI scientists to send any messages out, with their proposals on radio telescopes across the world being frequently blocked, well, at least for now. Indeed, to date there have been only 16 distinct deliberate messages sent out from the Earth, transmitted to just 26 different targets and accumulating a few tens of hours of transmission time altogether. So, the METI detractors have been winning the argument thus far, but what say the METI proponents? A principal argument is the SETI paradox itself. Look, if we conclude that messaging is not worth it, then why would anyone else? If so, then there's no point to SETI.
And so, one might argue that, really, you can't do one without the other. In this way, one can think of METI as a kind of accelerator to SETI. You can go out fishing without any bait if you want, but you're gonna need a lot of patience; it's far quicker if you actually put something on the end of that line that might be interesting to a fish. Second, there is typically plenty of skepticism about the annihilation scenarios, peppered with sneering comments about reading too much sci-fi.
They're probably very far away from us, so it's probably not worth it, or they're too enlightened to wish such an act. We've already discussed this argument before, and as I said previously, I think it's awfully presumptuous about the limits of technology or the mindset of these other entities. I have no idea how likely or unlikely that scenario is, but I also don't think we can casually dismiss it either, especially given that the stakes are our very existence.
Third and final, METI proponents highlight that they probably know we're here already anyway, especially if they're capable of annihilating us from light years away. Look, directed transmissions are not the only game in town for detecting our technology. Our orbiting satellites, atmospheric pollution, city lights, and radio leakage all provide clues to our existence. We call these technosignatures, and many astronomers, including my own team, the Cool Worlds Lab, are developing new techniques to try and detect these from afar. Now, humanity has only been producing these signatures for the last century, and so only civilizations within 100 light years or so have a chance of seeing us.
But beyond that, the Earth would surely stick out as a pale blue dot in the middle of a quiescent star's liquid water zone. A bit of spectroscopy on their part would reveal oxygen in our atmosphere, betraying the fact that Earth harbors life, a fact that would be determinable from effectively anywhere within the galaxy. In this context, METI proponents argue that they probably already know that we're here anyway, but crucially, all of those signatures are mediums we can't control. At least with METI, we have a chance to write the narrative to control what kind of information we send out to tell them about what kind of a people we are.
And who knows? Perhaps aliens respect each other's privacy and don't make contact unless invited to via a message like this, a kind of communications prime directive. Thomas Cortellesi recently published a similar argument with what he calls the continuum of astrobiological signaling. At the lowest level, you have the basic properties of our planet, which would already be of astrobiological intrigue. At a higher level, you have our planetary spectrum, which is littered with biosignatures like the red edge from vegetation and the oxygen-rich atmosphere from photosynthesis.
Then, you have the technosignatures, satellites, pollution, artificial lighting, all the way up to the idea of directed messaging, METI. In this way, METI isn't distinct and thus particularly dangerous. It's just another signature of our existence. I think that Cortellesi makes a good argument here, but I'm gonna push back just a little bit, because I think an implicit assumption here is that our perception of METI as part of a continuum, part of a spectrum of technosignatures, would be similarly shared by this other civilization. And that's guessing their xenopsychology, something that we must always strive to be agnostic about.
Look, for all we know, our passive technosignatures, such as atmospheric pollution, might be perceived as being radically different from a deliberate and directed radio transmission sent towards their home. To a xenophobic people, it could be alarming, like receiving a letter in the mailbox from a stranger. "There's this planet called Earth. They're giant-size bipeds, and they know where we live and have just sent us a message telling us so." If they misinterpret that message, our symbols of benign communication could be translated as symbols of war or aggression. How likely is that? I have no idea, but I don't think Cortellesi does either, nor indeed anyone else.
When one is dealing with questions like this, one has to admit our enormous ignorance about predicting the reactions to our behavior, however fanciful they might seem by human norms. So, is there any hope of breaking the stalemate between these two camps of the METI debate? One way we can tackle the problem is with game theory because, really, that's what it is. It's a game. Player one is us, humanity, and player two is the aliens, either some single civilization or a collective ensemble.
It really doesn't matter. Each player has multiple different strategies they can pursue. First, they can just listen, L, a totally passive program. Next, they can listen and broadcast, LB, which is what METI proponents advocate for. Third, they can listen and reply, LR, only transmitting in response to a signal they've received.
Fourth, they could listen and annihilate, LA, wiping out anyone they receive a signal from, such that the reception of a directed artificial radio transmission acts as a kind of trigger for them. And fifth, they can kind of bait the universe by listening and broadcasting, hoping to get a reply signal whose sender they can then go ahead and annihilate, LBA. This is definitely the most pathological strategy on the board. Splitting up the possibilities, each square here represents a possible combination of two player strategies. So, for example, the top left box here represents the scenario where humanity pursues a listening strategy, and the aliens do the same.
Now, in each of these boxes, let's fill out the payoff or value of that scenario as perceived by each player. So, for example, again, in that top left box, neither player really benefits, and so we'd say the value is zero, where we've color-coded each of those values to distinguish between aliens and humanity. We can now go ahead and fill out all of the payoffs under each of the 25 possible scenarios. To break this down, V1C here represents the value of achieving one-way contact, whereas V2C is the value of achieving two-way contact. VX is kinda dark.
It's the value a civilization gets by exterminating a competitor. You could think of it as the deranged thrill that they get by killing someone off, but perhaps it's better to think of it as the value a corporation might gain by eliminating a competitor. And that leaves us just with our final term, VE. That is the value assigned to existence.
If you get exterminated, then you lose VE, so it enters the payoff with a negative sign, and one might presume that VE is going to be more extreme than the other values. So, now, we have our payoff matrix, as it's known in game theory, and the next step is to ask, "What kind of game are we even playing?" In a finite game, the purpose is to win and beat the competitor. In that case, we would tally up who obtains a higher payoff than their opponent under the various possible strategies. But really, I don't think that's what we care about.
I don't think we're driven here by trying to score more points than our opponent. Rather, we're just trying to maximize our outcomes, maximize our payoff. Really, in the end, we're trying to keep the game going, a so-called infinite game. - Why are you stalling, captain? - I don't want the game to end.
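Since the payoff matrix itself appears on screen rather than in the text, here's a minimal Python sketch of how it might be tabulated. The strategy labels follow the video (L, LB, LR, LA, LBA), but the specific payoff assignments and placeholder numbers below are my own illustrative assumptions, not the video's exact entries.

```python
# Illustrative payoff matrix for the game described above. Strategy
# labels follow the video: L (listen), LB (listen and broadcast),
# LR (listen and reply), LA (listen and annihilate on detection),
# LBA (listen, bait with a broadcast, and annihilate repliers).
# The payoff assignments below are assumptions for illustration only.

V1C = 1.0    # value of achieving one-way contact
V2C = 2.0    # value of achieving two-way contact
VX  = 0.5    # value gained by exterminating a competitor
VE  = 100.0  # value of existence, lost if exterminated

STRATEGIES = ["L", "LB", "LR", "LA", "LBA"]

def payoff(us: str, them: str) -> tuple[float, float]:
    """Return (our payoff, their payoff) for one strategy pairing."""
    ours = theirs = 0.0
    if them in ("LB", "LBA"):
        ours += V1C                 # we hear their broadcast
    if us == "LB":
        theirs += V1C               # they hear our broadcast
        if them == "LR":            # they reply: two-way contact
            ours, theirs = ours + V2C, theirs + V2C
        if them in ("LA", "LBA"):   # our signal triggers annihilation
            ours -= VE
            theirs += VX
    return ours, theirs

# Tabulate the rows relevant to humanity's choice between L and LB.
for us in ("L", "LB"):
    for them in STRATEGIES:
        print(f"{us:>2} vs {them:>3}: {payoff(us, them)}")
```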
(gentle music continues) - So, let's try to keep the game going and look at our options. To simplify things, we can actually discount the two annihilation strategies, LA and LBA, for humanity because, after all, we do not have the capability of pursuing them, at least not until someone invents a Death Star. Further, the SETI paradox really isn't about whether to reply to a message but rather whether to broadcast. We can argue and debate as much as we like about whether to reply to a message if and when the day comes that we actually receive something, but for now, let's just ignore it.
So, for now, let's just compare our strategies L and LB, representing SETI and METI. We can calculate the expected payoff of each of these two strategies by summing the individual payoffs multiplied by their respective probabilities. In conclusion, we find that the payoff of METI would exceed that of just listening if the expected gain from making contact outweighs the expected loss from being annihilated. In other words, we have the following inequality for METI proponents to win the debate. To simplify things, let's assume that V1C and V2C are approximately similar.
Let's just call it VC, the value of contact. With a bit of rearrangement, that gives us an inequality for METI to win. And we can even further simplify. Let's be generous to the METI proponents here and assume that the whole bait-and-annihilate scenario, LBA, is inherently unlikely. Putting words back onto the remaining symbols, we have the following.
METI proponents are correct to argue for messaging if the value we assign to our existence divided by the value we assign to making contact is less than the probability of them replying divided by the probability of them annihilating us. So, the left-hand side here contains terms related to the sender, which in the case of a METI program is us. And the right-hand side contains terms related to the receiver, which in this case would be the alien civilization. So, what does this all mean? Well, to see this, let's work through an example together.
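Written out symbolically, using the video's value terms and with P_reply and P_annihilate as my own shorthand for the receiver-side probabilities, that condition is:

```latex
% Condition for METI (LB) to beat pure listening (L), using the
% video's value terms; P_reply and P_annihilate are shorthand I've
% added for the receiver-side probabilities.
\[
  \frac{V_E}{V_C} \;<\; \frac{P_{\mathrm{reply}}}{P_{\mathrm{annihilate}}}
\]
```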
Let's say, hypothetically, that we valued our existence, VE, as being 100 times greater than the value of making contact, VC. In that case, we would only engage in METI if we deemed that the probability of them replying to our message was at least 100 times greater than the probability of them annihilating us. If we could not conclude that, then METI would not be worthwhile doing because the risk of annihilation would be too high.
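As a quick numeric check of that worked example, here's the same comparison in Python; the two probabilities are invented purely for illustration.

```python
# Numeric check of the worked example: VE is 100x VC, so METI only
# "wins" if a reply is deemed over 100x more likely than annihilation.
VE, VC = 100.0, 1.0           # our existence valued 100x contact
p_reply = 1e-3                # hypothetical chance they reply
p_annihilate = 5e-6           # hypothetical chance they annihilate us

lhs = VE / VC                 # sender-side ratio: 100
rhs = p_reply / p_annihilate  # receiver-side ratio: 200
print(f"METI worthwhile? {lhs < rhs}")  # True, since 100 < 200
```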
This equation really encapsulates the whole argument. If you're a METI proponent, you implicitly believe that the right-hand side is greater. If you're a METI detractor, you think the left-hand side is. In practice, these quantities can't really be assigned definitive values, but even so, I think the equation illuminates some important implications that have been omitted in the debate thus far, in particular, the nature of the kind of civilization that chooses to engage in METI.
There are really two ways in which we can argue for METI in this framework. One, the ratio of our existence value to contact value is decreased. Or two, the ratio of their reply probability to annihilation probability is increased.
Consider case one first, but rather than thinking of humanity as the sender, let's try to put ourselves in the shoes of a hypothetical alien civilization. Why would such a civilization decrease this ratio? Why would they deemphasize their own existential value, and why would they increase the value of making contact? To me, a depressed civilization seems to fit the bill. If things are dire and the end is on the horizon, a civilization's existential value will fall. If they believe they're gonna die anyway, then there's really not much risk in sending out messages. Things (laughing) can't get any worse.
Just like someone diagnosed with terminal cancer, they're more willing to go out and try those risky things that they previously never would've thought of doing. Now, the chance of them receiving a reply would surely diminish if the end were indeed imminent. But even so, they might be quite content with that.
They might be satisfied to know that somebody received their message even if they never pick up the reply. Why? Legacy. As our own lives end, we tend to think about that more. What was the point of our lives? Who will remember us after we're gone? What impact did we leave behind? Although their civilization might be doomed, they can at least be remembered. Someone out there would know what they did, who they were, and what it was to be them, an idea that we previously covered in our "Imins" video. I am certainly speculating here, no doubt, but I think there is a plausible argument that a tanking civilization would have less to lose by engaging in METI.
And if you accept that, then there's a profound consequence. It raises the prospect that, if we ever do receive a signal one day, it might be more likely to be a tragic swan song from a civilization, a message from their deathbed. Let's leave our depressed civilization and consider the other end of the spectrum, an extremely advanced civilization, an ancient one with immense capabilities that has already made contact with many, many other civilizations. In this case, the existential value is presumably still a large number, but the risk of external annihilation has surely diminished, either by spreading out to multiple worlds and colonies or simply through the ability to intercept threats. However, the value of making contact can also be argued to have diminished for them here as well.
First contact is undoubtedly a historic, society-changing event, but second contact, 10th contact, 1,000th contact? By the law of diminishing returns, one could argue that the value they gain in making contact declines as they advance, becoming almost mundane at a certain point. Yet more, their development may so far exceed our own that there's exceedingly little to learn from us through contact over simply remote observation. Maybe if they're a little more advanced than us, they could be motivated by philanthropy to help us out, but extreme disparity might dissolve even that. After all, we rarely try to converse with ants in order to help them or to learn about them. We would simply study their behavior remotely.
For me, this case is a little bit more debatable as to whether they'd want to engage in METI. Yes, they're less likely to lose, but there's also less to gain. You know, ultimately, it shows how this whole debate is really about the balancing act of loss aversion and prospecting, fear versus gain. Each day, we all essentially make the same kinda choice. We could stay in our homes, never step out for fear of being hit by a bus, or we could accept the risk and live a more meaningful, rich life. Why do I fly an airplane? (laughing) Why do I drive fast cars or do anything else risky? The risk is always there, but what is the point of being alive if you don't live? Maybe some civilizations conclude the same, that existence in isolation is too worthless, too empty, that they'd rather die risking contact than never try and reach out.
Or maybe the value associated with taking this risk, the very concept of it, never really materializes in our minds. If you had never seen the outside world, how could you miss it? And in the same way, how can those who have never experienced alien contact truly miss it either? As you can see, the SETI paradox is complicated, nuanced, and invariably touches on the motives and capabilities of other civilizations. I am personally a big supporter of SETI, and indeed, that is something that we work on in my team. But METI is something that I believe we all have to weigh in on. Today, we've covered a lot of hypotheticals and scenarios, and those have been fun to explore together, but now, I want to hear from you. Let me know down below where you sit on this debate.
Should we engage in METI? How real are the risks to such activity? And what kind of civilization do you think chooses to engage in METI? This is something that we genuinely need your input on. This is too big of a choice for any one person to govern because, in the end, the outcome of this decision may affect us all, for better or for worse. So, until next time, stay thoughtful and stay curious.
(pensive music) Thank you so much for watching, everybody. I hope you enjoyed this video. If you did, be sure to like, share, and subscribe, and if you really wanna help us out, you can become a donor to my research team, the Cool Worlds Lab at Columbia University, just like our latest two supporters, that is Emerson Garland and Alex Leishman. Thank you so much for your support, guys. So, see you in the next video, and have a cosmically awesome day out there.
(pensive music continues)