The Fermi Paradox Timebombs


We know the dangers of science and technology, and how they might mask doomsdays as hidden treasures, but could the very quest for knowledge, or the existence of the conscious mind itself, be a ticking time bomb waiting to wipe us out?

Once, long ago, some human first looked up at the stars and wondered what they were, how vast that celestial sphere was, and whether it might be home to other peoples or the gods themselves. As we grew in knowledge, we began to realize just how immense and ancient the cosmos truly is, and wondered if life might have arisen out on those distant and uncountable worlds. Surely many are like our own pale blue dot, and such planets must number into the untold billions, yet we see no ironclad signs of the mighty empires that should dwell among the stars and be impossible to mistake for any natural phenomenon.

This seeming paradox, an ancient and immense universe, and yet a quiet one, is known as the Great Silence or the Fermi Paradox. Many solutions are proposed, including that these aliens are indeed present and hide from us, or that we choose to ignore the evidence of them. But the largest collection of solutions revolves around the idea that civilizations like ours evolve very rarely, filtered out by the many challenges and conditions that make Earth rare: early filters on life emerging, or middle filters that prevent the rise of complex, intelligent, or technological life.

On the other side of those are what we call the late filters, those which our civilization has not yet encountered or fully passed, and which might explain the Great Silence not by civilizations being rare, but by them being short-lived or unable to engage in interstellar travel. There's not much paradox if it turns out every civilization self-destructs in a few centuries, or can't colonize much beyond its homeworld, at least so long as intelligent life doesn't develop so regularly that virtually every planet develops it, and redevelops it again and again after each self-destruction, assuming they don't utterly sterilize their whole planet in the process.

We discussed that scenario in Earth: After Humanity, and many of the other filter options, early, middle, or late, have their own focused episodes that detail the science and arguments for and against that filter. Indeed, we looked at examples of technological timebombs in their own episode a couple of years back, and while we'll touch on them today, including some we skipped or skimmed then, our focus is on different kinds, including those that might be inherent to all intelligent species simply by existing.

That perhaps intelligence itself is the timebomb, and not simply because it opens the door to inventing dangerous technologies. Today we'll be focusing on these theories, as opposed to technological timebombs specifically, or our other timebomb parallel of psychic poisons: mindsets a civilization might develop that bring on self-destruction, or a state of stasis and stagnation that they stay in or constantly return to, such as the concern that a civilization becoming nihilistic might doom it, which we examined some months back in Nihilistic Aliens.

But at a fundamental level, late filters tend to break down into doomsdays wrecking mankind, or our alien cousins, or reasons we or they can't or wouldn't spread out into space. Most of those doomsdays seem avoidable, and we'd tend to think at least some civilizations would navigate through them. Your personal cynicism and mileage may vary.

But to me, the majority of doomsdays on the horizon are the sorts of things that at most set us back for a generation, while most of the remainder would seem like a choice you could make. Let's say each of wrecking your environment, spawning a homicidal AI, and nuking yourselves until every beach was turned into glass that glowed in the dark was a simple 50/50 gamble as to whether a civilization hit the brakes or went off the cliff. With three of them, that is a 1 in 8 chance you avoid all three and survive playing with fire. Although some might say, as we'll discuss in a bit, that playing with fire in the first place already doomed us.

But 1 in 8 means that even if there were only a thousand civilizations in a galaxy that had to clear these hurdles or late filters, 125 would make it through. And even if the odds of surviving each were only 1 in 10, you'd still have 1 in a thousand clear all three, and there's still someone in the galaxy who beat us to the stars. That still only implies intelligent civilizations arose on less than 1 in a million of the plausible planets in this galaxy, and even that's assuming only a billion planets in this galaxy are decent Earth-parallels on which life might plausibly arise.

We're generally not talking about late filters that murder off millions of civilizations at a 99.99999% failure rate. Rather, the assumption is usually that the very long catalog of early and middle filters thins the numbers down a lot.

For practical purposes, once you get the odds of the typical star system producing a Fermi Paradox-ending event, colonizing an astronomically visible chunk of a galaxy basically, down to one in a sextillion, 10 to the 21st, you don't need anything beyond that, because barring faster-than-light travel, it would be very unlikely that any of those emergent civilizations arose at a place and time where the light from it would have reached us yet. Thus, no Fermi Paradox.

It is amazing how fast those odds can start stacking up. We had a video way back in the early days where we just went through 50 well-known candidates for filters that people generally thought were more likely than not to thin the numbers down, and at 50/50 odds the cumulative result is less than one in a quadrillion, 10 to the 15th.
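To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch; the function and every number in it are just the illustrative figures from above, assuming independent filters with identical odds, not measured values:

```python
# Sketch of how independent filters compound; all inputs are the
# illustrative figures from the narration, not measurements.

def expected_survivors(civilizations: int, filters: int, pass_chance: float) -> float:
    """Expected number of civilizations clearing every filter, assuming
    each filter is independent and has the same odds of being passed."""
    return civilizations * pass_chance ** filters

print(expected_survivors(1_000, 3, 0.5))  # 125.0 -> three 50/50 hurdles
print(expected_survivors(1_000, 3, 0.1))  # ~1.0  -> three 1-in-10 hurdles
print(0.5 ** 50)                          # ~8.9e-16, under one in a quadrillion
```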

It could be a lot lower than that from those filters too, as 50/50 is generally optimistic, but it could be way better as well. And yet some filters, if true, could individually drop the odds to flat-out zero, barring scenarios like a Boltzmann Brain. We assume those are wrong since we already passed them ourselves, but we haven't passed them all.

AI is sometimes thought to be one of them, but it's not a great Fermi Paradox filter, because we tend to assume an AI, if it wiped us out, would probably replace us and proceed to colonize space. Maybe not all AI does; maybe 90% of AIs that wipe out their makers just fall over in a depressed heap or existential crisis, but probably not all.

And yet, when we talk about time bombs in a Fermi Paradox context, while we typically envision technology like AI that's just so attractive you adopt it without even realizing you've cut your own throat, we can also mean entirely existential problems that cause civilizations to inevitably collapse. Indeed, some make the case that the inevitable self-destructive act civilizations commit is arising in the first place, and that the road to civilization and technology is one that always leads to ruin. Peter Watts, in his excellent novel Blindsight, even suggests sentient higher intelligence is an anomaly that rapidly goes away.

We also can't dismiss any danger we see now but haven't actually avoided yet as an example of a potential timebomb, ticking away and waiting to catch us. While nukes seem less an inevitable end of mankind than many felt in the 50s, 60s, and 70s, the threat is still there, and more nations have them than ever before.

So too, while we have made huge strides in creating technologies that cut down on waste, pollution, and emissions, we are still producing quite a lot of all three, so declaring victory is premature, and so is assuming we will or that anybody else did. Though for my part, I think we will and that most do; of course, I am rather notorious for being a techno-optimist, and an optimist in general, even though I have never regarded myself as either.

And AI is certainly a threat currently dominating the horizon, and not one anyone dismisses. However, it's a good example of a non-technological timebomb, in the sense that any civilization might create other intelligences which pose a threat to them but do not necessarily become a replacement species for the purposes of the Fermi Paradox. What can be done with AI can presumably also be done with selective breeding, and a civilization might take itself out simply by creating intelligent labor animals, or by breeding for simple-minded and obedient people, either intentionally, as in Brave New World, or by inevitable accident, as in the movie Idiocracy.

Often by time bombs, though, we mean ones that are so attractive you can't not use them once you find them, and to which no warning of danger is given.

Imagine a portal someone made in a lab that could bring energy in from some other universe, but that every twenty or so years flipped polarity and turned into explosive antimatter. Everyone has time to adapt to using it as casually as we do batteries, and then, boom, every single device running on one blows with the force of a nuke.

Now, we can never rule something like that out; as with the Zoo Hypothesis or Simulation Hypothesis of the Fermi Paradox, the very nature of the problem makes it not only impossible to disprove, but also limits the ability of any outsider to forewarn us. In this case, everyone dies the moment they find out.

There could be some incredibly attractive and harmless-seeming technology already in play or waiting for discovery, but discussing it serves little purpose; if it were something you could discover and avoid, it wouldn't be a great Fermi Paradox filter, or more especially a Great Filter of the Fermi Paradox, the type of filter we view as so constraining that your odds of making it through are similar to winning the lottery, if not worse.

All we can advise on these is that a civilization be mindful of caution signs: things too good to be true, or which might be so addictive that even if we later found out they were harmful, we might trick ourselves into disbelief. Again, we covered the more technological options in Technological Timebombs a couple of years back. But any technology that would instantly allow you to colonize huge chunks of the universe is rather suspect, since it implies no one else has it, either because no one else exists, or because they did exist, did discover that technology, and it ruined them.

But what about filters like intelligence or complexity being inherently harmful, or technology being a trap at a basic level? What's the reasoning there?

First, I should note that it's hard to say if these would qualify as late filters, since they involve steps we already took, but since a late filter is one that hits after our present date, which hasn't been reached yet, I would still say these all qualify as late filters, at least for when they'd become unavoidable. Some may be avoidable, and already avoided, and thus could be mid-stage filters.

Let's start with Peter Watts' case from his novel Blindsight. It's an excellent read, and this will include some spoilers, so be warned. Watts himself is a marine biologist, and his aliens in the novel are quite creative and believable while extraordinary, but in many ways it's his portrayal of human intelligence and its variants that really strikes home.

The novel's early focus is on an exploratory crew approaching a brown dwarf in the Oort Cloud, nicknamed Big Ben, that seems to be the origin of a wave of alien probes that reached Earth. They get hailed on radio as they approach, in a variety of human languages.

The book is from 2006, so the focus is on whether or not they're talking to a Chinese Room, something not conscious but mimicking consciousness well; in a modern context, they might as well be working out whether they're talking to ChatGPT. And for those who remember a lot of us referencing the novel when ChatGPT and its earlier incarnations were brand new, it was because the novel nailed it with creepy accuracy. But once they decide it's not conscious, just a sophisticated automated response system, they ignore its warnings not to approach further, and it lights them up, cursing at them and mocking them for thinking it wasn't conscious.

The aliens the crew soon encounters when they do land make it very ambiguous how smart they are, and discussion revolves around the difference between intelligence and consciousness, and philosophical zombies. A philosophical zombie, or p-zombie, is a person, or alien or computer, used in a thought experiment where it is identical to a normal person but has no conscious experiences. The usual example is that it can't feel pain, but if you jab it with a stick or hot needle, it reacts the same as you or I would.

The aliens they encounter frequently seem to do things that would indicate they're conscious, and indeed display speeds of thought far faster than the human norm, but they constantly leave telltales that they were created, or evolved, to be unconscious, and indeed the book makes the case that consciousness is an evolutionary fluke, and not necessarily a beneficial one.

Let's step through some of the reasoning presented, but first, the novel's name. Blindsight refers to the condition where vision is non-functional for the conscious brain, you are blind, but you can still react to visual stimuli at a nonconscious level, and indeed faster than your conscious reactions. Essentially, the relevant notion is that your conscious mind is a committee sitting down to discuss a problem in depth, and a lot of the time that is an expensive and dangerously slow approach to handling a problem. You don't want to consciously decide whether you should leap out of the way of a train, for instance, since thinking about it will take longer than your reflexes need to move you.

Decisions take time and can lock you down, and as an amusing example of that, I'm recording this right after a cleaning at my dentist's, and when I was asked to sign a receipt on my way out, the receptionist gestured to a set of pens on my left, and I started to reach for them; then she pointed to another set on my right, and it took me a few seconds to break out of the sudden hesitation this caused as I tried to decide what to do. I was already leaning and reaching to the left, but I am right-handed, so normally I would have reached with my right hand for a pen.

My brain had decided to call a committee and have a discussion and a vote.

We've discussed this in our episode on using robots in warfare, and how a lot of the time you want to program for very simple and fast, because complex is slower and will often lose; they're too slow on the draw, so to speak. The rebuttal is that a lot of dangers, especially those involving other intelligent actors, cannot be handled properly by reflexive action on a predetermined script, and while that may be true, and I tend to think it is, that doesn't dismiss the fact that a lot of problems can be handled by scripts, especially if you can load very large scripts back and forth between members of a species, or inherit them.
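To illustrate that speed argument, here is a toy sketch, entirely my own construction with invented stimuli and utilities rather than anything from the episode: a scripted agent answers with a constant-time lookup, while a deliberative one pays to weigh every option before acting.

```python
import time

# Toy comparison of scripted reflexes vs. conscious deliberation;
# the stimuli, options, and utilities below are all invented.

REFLEX_SCRIPT = {"incoming_object": "dodge", "loud_noise": "freeze"}

def reflex_agent(stimulus: str) -> str:
    # Prepared script: near-instant, but helpless against unscripted stimuli.
    return REFLEX_SCRIPT.get(stimulus, "no_response")

def deliberative_agent(stimulus: str) -> str:
    # Committee-style deliberation: weigh every option before acting.
    # A real deliberator would condition on the stimulus; omitted for brevity.
    options = {"dodge": 0.9, "freeze": 0.2, "inspect": 0.5, "ignore": 0.1}
    best_action, best_utility = "no_response", -1.0
    for action, utility in options.items():
        time.sleep(0.01)  # stand-in for the cost of simulating each outcome
        if utility > best_utility:
            best_action, best_utility = action, utility
    return best_action

for agent in (reflex_agent, deliberative_agent):
    start = time.perf_counter()
    action = agent("incoming_object")
    print(f"{agent.__name__}: {action!r} in {time.perf_counter() - start:.4f} s")
```

The deliberative agent can handle novel situations the script never anticipated, but it pays for that flexibility in latency, which is exactly the trade-off being described.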

And as I mentioned in one of our Shorts right after Dune 2 came out, wonderful movie incidentally, inherited or genetic memory doesn't work with our biology, but we do pass on instinctive knowledge, and a different biology might allow a much higher bandwidth, or total capacity, for passing information along.

The main survival advantage of intelligence, pre-technology, is that we can imagine ways to die so that we don't actually have to try them. We can think of potential ways to get injured, or the outcomes of actions, and build a script for them in advance.

Indeed, this is generally my approach to futurism: I try to think of ways to kill a civilization off, and once I do, I try to think of ways to navigate around them. The late Charles Munger, the well-known investor, apparently used this same tactic, first in the military with airplanes and later in handling companies: he began by trying to figure out ways to crash the airplane, then tried to avoid paths that could lead there. I've found this approach is good in normal life strategies too, from relationships to work: figure out what might kill a thing, then avoid going near it.

Intelligence's usefulness in terms of technology mostly came later. Other than fire and sharpened sticks, discovered a million years ago or more, for most of the time we've had huge brains compared to even our primate cousins, we had to justify that very expensive piece of equipment some other way, and it was mostly with that ability to imagine and ruminate. Indeed, even language is thought to be a relatively new innovation, maybe 200,000 years old, at least in terms of being significantly more sophisticated than what great apes have now, and either way, those were big advantages that helped us pay for that brain before metalworking, pottery, and agriculture came in.

But other than language, fire just for cooking and staying warm is no big deal to a critter with a good fur coat, or one from a more temperate climate or planet. So too, a sharp stick is no advantage to a creature with sharp claws. Indeed, we see tool usage even in non-mammals, so it doesn't require huge brains, and a much simpler but dedicated chunk of brain could allow a very simple organism to act with the accuracy of a sniper when it comes to tossing sharp rocks or sticks at predators or prey. So the only advantage of consciousness that would seem like it couldn't be evolved separately, without much in the way of brains, is that ability to contemplate and ruminate, prepare scripts for imagined dangers, and pass those on to each other through abstract language. But especially for something like a hive-mind or collective that can pass full details along, they can get away with a simple learning approach of "don't do what X did."
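As a toy illustration of that "don't do what X did" shortcut, here is a minimal sketch, with the scenario and names invented for illustration: no member needs to understand why an action is fatal, only that some member died doing it.

```python
# Minimal sketch of "don't do what X did" collective learning; the
# actions and scenario are invented. No comprehension is required,
# just a shared record of what proved fatal.

shared_blacklist: set[str] = set()

def report_death(action: str) -> None:
    # One member's fatal mistake becomes the whole collective's script.
    shared_blacklist.add(action)

def should_attempt(action: str) -> bool:
    return action not in shared_blacklist

report_death("eat_red_berries")
print(should_attempt("eat_red_berries"))   # False: now avoided collective-wide
print(should_attempt("eat_blue_berries"))  # True: never reported fatal
```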

To continue the argument, the novel suggests that non-sentient beings or entities, like the aliens we encounter in the book, called the Scramblers, can process information and make decisions more efficiently than conscious beings. Without the distractions of self-awareness, internal dialogue, and the complexities of emotion, these beings can react to their environment and make optimizations based purely on survival and functionality. The Scramblers, who embody this principle, demonstrate remarkable intelligence and problem-solving abilities without any indication of self-awareness or consciousness.

It further argues the evolutionary cost: that consciousness is an expensive trait in terms of evolutionary resources. It requires complex neural structures and significant energy consumption, and it can lead to decision-making that prioritizes the individual's immediate well-being or desires over long-term survival. In contrast, entities that operate on instinct or pre-programmed responses can allocate more resources toward reproduction, adaptation, and survival in hostile environments.

And again, this is all very believable in a modern context with the rise of AI like ChatGPT, but the book takes it a bit further and explores the idea that consciousness can lead to misinterpretation and misunderstanding, both within a species and in attempts to communicate with other life forms. The complexity of conscious thought and language can introduce ambiguity and error, whereas communication between non-conscious entities might be more direct, efficient, and less prone to misinterpretation. This will come back up in a moment, when I say what the second half of Watts' Fermi Paradox solution is.

Now, a lot of folks point to the novel's inclusion of vampires as a bit of a weird thing, and it kind of is; the captain of the exploratory ship is one, for instance, kind of. But that is the semi-appropriate nickname for an extinct species of humanity that, in the novel's setting, has been found and cloned up from recovered DNA, and which has a lot of strange mental traits. I view them as a vehicle both for exploring some neurologically atypical thinking and for pointing out that a dominant species is often going to diverge into having a sub-species or clade that develops to prey upon the others.

As a simple example, imagine a single species of bacteria, or of nanobot gray goo, taking over a planet. That means all competition for resources is now with each other, so if you can kill a competitor and eat it, you've killed two birds with one stone, and that's how predator cycles and divergence can occur even with something like AI down the road. But another part of it is the p-zombie notion, since a cannibal is typically going to be effectively a sociopath or psychopath, probably won't play well with others, even fellow cannibals, and to succeed might need to get very good at mimicking decent human behavior. We're given many other examples of the civilization in the novel using genetic, neurological, and cybernetic engineering to create very odd humans who are also incredibly useful to their civilization, or dangerous, or both. And we can argue that we ourselves tend to encourage non-survival traits like self-sacrifice or workaholic behavior.

This leads some to make the case that society currently breeds for sociopaths, which you can argue becomes a Fermi Paradox solution, and it parallels our discussions earlier this year in our looks at the Hermit Shoplifter Hypothesis, the Cronus scenarios, and the Interdiction Hypothesis.

Watts advances the idea that the universe might indeed be teeming with life, but that life may be predominantly non-conscious, operating in ways that are efficient and effective yet entirely alien to our understanding. Conscious life, especially technologically advanced and communicative life like humans, might be exceptionally rare, or even self-limiting on a cosmic scale. The universe might favor entities that are not burdened by the complexities of self-awareness, positing consciousness not as an evolutionary goal but as a peculiar deviation that might ultimately limit the potential of the life forms possessing it.

Now, for the book, Watts posits that the aliens have come to Earth because we're filling space with lots of garbage communication, or scrap code, and they're reacting automatically to silence that source of weird and damaged data. It's not a conscious decision by the aliens; it's just an immune-system-like response.

It's left mysterious in the end, always a good idea for concluding a suspenseful and thought-provoking work, but the basic idea is that consciousness doesn't evolve much, and when it does, it tends to self-destruct, either by a normal evolutionary path, or by developing technologies that let a species engineer life and different types of minds, particularly psychologies outside the evolutionary norm, or intentionally-made safe AIs or slave races given lots of brain but not much consciousness.

When this fails to take them out, the various very smart but non-conscious lifeforms in the galaxy come by and wipe them out, because they interfere with good communication.

I find that last part a stretch, but the idea of us engineering very smart but non-conscious minds definitely is not; that's essentially our main modern aim with artificial intelligence. Keep it simple, keep it dumb, the minimum brains for the task, and where you can't, keep it non-conscious or non-dangerous in its possible mentalities.

I do not find the idea that non-conscious minds could win an interstellar war very compelling, though, because in the absence of FTL, faster-than-light travel or communication, speed of thought and reaction becomes less decisive. I also don't see any evolutionary path to interstellar travel at anything better than interstellar comet-seeding speeds without technology, and evolving spaceship drives just seems implausible. Machines with blueprints for spaceship drives that they can replicate ad nauseam once their human, or alien, creators are extinct do seem decently plausible, though.

So too, naturally occurring Dyson swarms or Kardashev-2 ecologies strike me as a very plausible scenario if life can get going in space environments, and artificially created void ecologies are something we've discussed before too. This is why I sometimes reason that if alien intelligences are modestly common but either self-destruct or abandon galactic expansion, we might find space littered with quasi-biological and alien examples of systems where von Neumann probes or gray goo ran amok and then diverged into an ecology afterwards. If we start reaching other solar systems and find we constantly have to plow through a host of techno-organic hive minds nesting inside various asteroids, moons, and comets, we might determine that this was the Fermi Paradox solution in play too.

The reason I don't find this compelling, though, is that I can't see conscious minds voluntarily and universally surrendering consciousness, and I can't see any overwhelming compulsion for non-conscious entities to try to wipe out conscious ones, or their being terribly effective at it, beyond normal scenarios like gray goo. The weakness of the scripted response is that you can potentially be fooled by the same trick over and over; the weakness of the conscious mind is that it is slow, but if it has time, it can develop tricks.

We'll discuss that scenario and the Medea Hypothesis, or a gray goo version of it, in a moment, but let's discuss the Idiocracy options first.

As we discussed in our Devolution episode, there's a general concern that a side effect of technology is that selection pressures which might have favored good physical health or intelligence could go away in a highly automated and paradise-like society. But high-tech civilizations able to bring about paradises are also ones with vast access to resources and technology; they are not going to have a problem like that sneak up on them, and they have a host of tools available to prevent or reverse it. So it only works if we assume they don't care, or actively want to be dumber, and as we noted in that episode, the idea that ignorance is bliss is at odds with science, which finds smarter folks generally report being happier.

The other notion is that we don't have to wait for a distant paradise future, and that our own current society has lower birth rates among smart people. Idiocracy is a funny film, though its scientific accuracy and plausibility are not very high, and again, we discussed it more in the aforementioned episode. There's no reason to think we're currently breeding for dumber people, even ignoring the Flynn Effect, and I've lost track of the number of highly intelligent and successful people I know who have full-blooded siblings who would fit the criteria for the brains or behavior mocked in the film. Nature versus nurture is an ongoing debate when it comes to intelligence and other mental traits, but the biggest factor in a lot of people having lots of kids or not was mostly luck, influenced by how well they restrained their youthful hormones.

And an awful lot of what folks tend to view as success in life requires following a very simple formula: spend less money than you bring in, and stick the extra funds in something that earns compound interest. The earlier and more rigorously you do that, the better the effect, as the quick sketch below illustrates, and obviously having kids at a young age interferes with that. There are a million other factors that can help or hinder it, of course, but the key notion is that if two twins both head off to college, and one ends up with a kid in their freshman year and drops out to take care of them, or one develops too great a fondness for partying and alcohol, they will still be passing the same genetics on to their kids.
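Here is that early-start effect as a quick toy sketch; the 7% return, the $5,000 a year, and the ages are arbitrary illustrative numbers, not financial advice:

```python
# Toy compound-interest comparison; rate, contribution, and ages are
# arbitrary illustrative assumptions.

def balance_at_65(start_age: int, annual_savings: float, rate: float = 0.07) -> float:
    """Value at age 65 of saving the same amount at the start of every
    year from start_age, with returns compounding annually."""
    total = 0.0
    for _ in range(65 - start_age):
        total = (total + annual_savings) * (1 + rate)
    return total

print(f"Start at 25: ${balance_at_65(25, 5_000):,.0f}")  # ~$1,068,000
print(f"Start at 35: ${balance_at_65(35, 5_000):,.0f}")  # ~$505,000
```

A ten-year head start roughly doubles the final balance here, which is the point about rigor and timing.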

On the flip side, there is a worry that a lot of predatory narcissist types are pretty good at mimicking the characteristics people seek in mate selection, especially those younger folks tend to be able to assess, and that they're prone to short-term mating strategies. So if that trait is genetic, and there's a decent amount of literature supporting that it is at least partially hereditary, then a civilization might tend to see it become more common. That's some parallel reasoning to what we were discussing with the vampires from Blindsight: antisocial traits wouldn't generally be favored in mate selection, but if they're paired with a good ability to mimic or fit in, then they might suddenly become very likely to be passed on to future generations.

Same as before, I think it's a stretch for the Fermi Paradox as a strong late filter, especially as psychology and neurology ought to vary wildly among aliens, but the case for it being a decent middle or late filter is better. Not so much that intelligence fizzles out because paradise brings an absence of selection, but that civilization breeds itself to be better at fighting and preying on its only real competition, each other.

Again, though, this seems to be conditional on the idea that we voluntarily let it happen, given that we are aware of it and have improving technology for detecting and addressing mental health issues. We should never assume trends in genetics or culture will inevitably plow on, especially when we're aware of them, at least if we have the conscious ability to contemplate how they're bad and what actions we might take to handle them. You determine this is a way you could crash the metaphorical airplane, and now you can avoid the hazard.

So again, I'm not really feeling that the inevitable time bomb applies here. Now, the same applies to the Medea Hypothesis, which we looked at in our Gaia Hypothesis episode. The Gaia Hypothesis is the general idea that life moves toward ever greater intelligence, particularly toward unified world-minds, while the Medea Hypothesis argues that most life is very simple and unicellular, and that complex life isn't the fate of evolution but rather a temporary condition it eventually wipes out; our notion that conscious thought might be a temporary aberration runs along a similar line.

Again, with a greater understanding of biology and science in general, our susceptibility to any naturally derived plague or other catastrophe goes down, while our ability to detect such things in advance goes up. It seems ever more unlikely we would be caught unaware by some effect caused by anything that was itself unaware. Non-conscious minds or cycles shouldn't be good at outwitting conscious minds when the latter have time to think, and even more than interstellar travel, evolution is something where time is not in short supply.

Now, the exception to that is gray goo or semi-intelligent machine minds; call it the Grey Medea Hypothesis. Something like our own medical nanobots could act like any bacteria, but they can, and probably would, be designed for rapidly strategizing and communicating with their neighbors to combat ailments and diseases, and thus they could potentially get all those complex behavior scripts we would normally only associate with higher intelligence, without actually needing it. This is even more true given that we would like to build them to be very good at that, so as to avoid the need for an AGI rival, under the "keep it simple, keep it dumb, or else you'll end up under Skynet's thumb" rule. So they don't have to evolve the ability to rapidly communicate and share strategy with each other; we'll work our butts off to give it to them, and make it the best we can. And if that turned sour, then you get the usual gray goo scenario, only a lot nastier and harder to resist, and not some puddle of goo emerging from a lab, but rather 10 billion puddles exploding out from every human who had them.

Not a great situation for resistance, either, given that the folks least likely to have had nanobots would also be the ones least likely to have advanced technical knowledge of them. The good news is that, as before, while this scenario might be new to us now, we don't have such nanobots yet, and it seems inevitable that we would think the scenario up and contemplate how to deal with it before building them.

In the end, while intelligence and consciousness may be our undoing, I don't think it's intrinsic, that it's a time bomb which eventually and inevitably explodes to destroy us.

Rather, I think it's the ability that will save us, and that will let us explode out onto the galaxy one day.

Ever since last year's 3-hour-long Fermi Paradox Compendium, I've found myself revisiting a lot of our Fermi Paradox topics with a fresh eye, and today's is just one of several this year, out already or planned, starting with a look at the Hermit Shoplifter Hypothesis, which we released exclusively on Nebula back in December. One of the common threads since then has been the idea that worlds you settle far from Earth might be hard to control and a threat to you, making civilizations more hesitant to settle the galaxy, and a lot of that notion derives from how good those pioneers need to be at self-reliance and independence from the day they set sail from Earth.

Any attempt to settle a new world relies on being able to extract and use resources locally, what we call In-Situ Resource Utilization, or ISRU, and that is beyond difficult, but it should be possible, and it will be vital to setting up even a base on our proverbial doorstep, the Moon.

In this month's Nebula Exclusive, we will look at ISRU concepts and emerging technology, challenges, and suggested solutions.

In-Situ Resource Utilization is out now exclusively on Nebula, our streaming service with a newly designed category-based interface, where you can also see every regular episode of SFIA a few days early and ad-free, as well as our other bonus content, including extended editions of many episodes and more Nebula Exclusives, like last month's episode Machine Monitors, April's Galactic Beacons, Crystal Aliens from March, February's Topopolis: The Eternal River, January's Giant Space Monsters, December's episode The Fermi Paradox: Hermit Shoplifter Hypothesis, Ultra-Relativistic Spaceships, Dark Stars at the Beginning of Time, Life As An Asteroid Miner, Nomadic Miners on the Moon, Retrocausality, and more.

Nebula has tons of great content from an ever-growing community of creators. Using my link and discount, it's available now for just over $2.50 a month, less than the price of the drink or snack you might have been enjoying during the episode. Or even sign up for a lifetime membership to see all the amazing content yet to come from myself and other creators on Nebula.

When you sign up at my link, https://go.nebula.tv/isaacarthur, and use my code, isaacarthur, you not only get access to all of the great stuff Nebula offers, like In-Situ Resource Utilization, you'll also be directly supporting this show. Again, to see SFIA early, ad-free, and with all the exclusive bonus content, go to https://go.nebula.tv/isaacarthur.

Next week we'll be looking at the idea of habitable moons, with a discussion of moons with liquid water on their surfaces, heated by both sunlight and tidal heating, and how that could affect life developing there, or adapting to such an oceanic moon. Then, on Sunday, June 16th, we'll be looking at another place to live, and a very nice one at that, as we examine the idea of Paradise Planets, places even better than Earth for us to live.

Then, on June 20th, we'll ask what would have happened if dinosaurs had never died off, whether we might have seen a world with a mix of big mammals and dinosaurs, and even whether they might have developed a civilization one day. After that, we'll explore the idea of hollow planets, both the notion that Earth might be one, and that maybe it will become one in the future.

If you'd like to get alerts when those and other episodes come out, make sure to hit the like, subscribe, and notification buttons. You can also help support the show by becoming a member here on YouTube or Patreon, or check out other ways to help at IsaacArthur.net.

As always, thanks for watching, and have a great week!

