Émile P. Torres | Highway to Hell: The Dystopian Fantasies of Tech Billionaires

Transhumanism was an ideology developed in the early 20th century explicitly as a replacement for traditional religion. Christianity started to decline, and it was right around that time that you get a bunch of ideologies that offered basically the same promises that Christianity did, but are secular in nature. Transhumanism basically offers the possibility of heaven on earth by using science and technology to re-engineer humanity.

One way of understanding transhumanism is not just as a utopian possibility, not just as a solution to the problems that we now confront as a result of technology, but also as a means for capitalism to continue its growthist march. We're pushing up against all these limits of resources; well, there's another resource that hasn't really been tapped, and that's the human organism. So if you want to keep the engines of economic growth and productivity roaring, one way is to re-engineer humanity.

In this episode of the Overpopulation Podcast, we'll be talking with philosopher and historian Dr. Émile Torres about some of the more twisted visions of ecologically blind tech billionaires and their dreams of defying nature, transcending humanity, and colonizing the universe.

Welcome to the Overpopulation Podcast, where we tirelessly make ecological overshoot and overpopulation common knowledge. That's the first step in right-sizing the scale of our human footprint so that it is in balance with life on Earth, enabling all species to thrive. I'm Nandita Bajaj, co-host of the podcast and executive director of Population Balance.

I'm Alan Ware, co-host of the podcast and researcher with Population Balance, the first and only nonprofit organization globally that draws the connections between pronatalism, human supremacy, and ecological overshoot, and offers solutions to address their combined impacts on people, planet, and animals.

And now on to today's guest. Émile P. Torres is a philosopher and historian whose work focuses on existential threats to civilization and humanity. They have published on a wide range of topics, including machine superintelligence, emerging technologies, and religious eschatology, as well as the history and ethics of human extinction. Émile's work has appeared in academic journals such as Futures and Bioethics, and in popular media outlets such as the Washington Post, Big Think, Current Affairs, and many others. Their most recent book is Human Extinction: A History of the Science and Ethics of Annihilation.

Well, Émile, we have been following your work with great interest for a while and are so impressed with the breadth and depth of both your knowledge and your lived experience in the fields of human existential risk, longtermism, effective altruism, and so many other important philosophical issues of our time. We have listened to hours of your interviews and lectures and are excited that we get to chat with you in real time today. Thank you so much for taking the time to join us.

Thanks so much for having me. It's wonderful to be here.

Amazing. And though your philosophical study is so wide-ranging, given the limited time that we have with you today, we'd like to focus on your analysis of the effective altruism and longtermism movements. Effective altruism, as you've noted, comes with a large degree of utilitarianism, which is an ethical theory that posits that the best action is the one that maximizes utility, often simplified as the action that produces the greatest well-being for the greatest number of people. That sounds reasonable in theory, but as we'll discuss today, over the past
several years you've expressed deep concerns about effective altruists' approach to doing good in the world. So let's start with a brief description of effective altruism and look at some of its history. What would you describe as the main beliefs of the effective altruism, or EA, community, and the origins of this approach?

Yeah, great question. So effective altruism: the key idea is that we should use the tools of science and reason to figure out the best ways to do the most good in the world. And that sounds quite nice and compelling at first, until you sort of look at the details. Some of the metrics that they use to figure out the most effective interventions to increase the good, to increase value in the world, turn out to be somewhat ableist; some of those have been abandoned over time. But another example of an effective altruism idea that looks appealing to at least some people at first glance, but in practice has had some rather negative consequences, is their idea of earning to give. This ties into a critique of EA that they are insufficiently attentive to the possibility of systemic change. The idea is very individualistic: you as an individual are embedded in this system, our system being capitalism, neoliberalism, and so on, and the question is, within this system, how can you personally do the most good? Earning to give is an answer to that question. It says: imagine a scenario where you could go and work for an environmental nonprofit, and maybe you get paid $50,000 a year or something, and you have some kind of modest impact on the ability of that nonprofit to push through climate mitigation legislation and so on. Alternatively, you could go get a job at a petrochemical company or some company on Wall Street, maybe make a million dollars a year, and then if you take that money and donate it to that environmental nonprofit, let's just say they can hire 10 people. So rather than you going to work for them and them getting one extra employee, you pay them all this extra money that you're making on Wall Street and they hire 10 extra people, and ultimately you've done more good.

One of the criticisms of this is that working for petrochemical companies or companies on Wall Street, which the co-founder of EA, William MacAskill, himself describes as quote-unquote immoral organizations, means you compromise your moral integrity. But this is where the utilitarian element of EA comes into play: within utilitarianism, moral integrity only matters as a means; it only matters instrumentally, because for utilitarians the only thing that's important, ethically speaking, is the consequences. So there's nothing intrinsically bad about working for an immoral organization; there's nothing intrinsically bad about murder or lying or fraud.

Yeah, right. It all depends on the consequences.
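To make the arithmetic behind this comparison explicit, here is a minimal sketch in Python. The salary figures are the hypothetical ones from the conversation; the assumption that the donor keeps half the salary to live on is added here so that the quoted figure of ten extra hires comes out.

```python
# Toy version of the earn-to-give comparison described above.
# All figures are the hypothetical ones from the conversation, plus
# one added assumption: the donor keeps half the salary to live on.

nonprofit_salary = 50_000       # pay for one nonprofit employee
wall_street_salary = 1_000_000  # hypothetical earn-to-give salary
kept_for_living = 500_000       # assumed living costs for the donor

donation = wall_street_salary - kept_for_living
extra_hires = donation // nonprofit_salary  # 10, matching the example

print("Work at the nonprofit yourself: it gains 1 employee.")
print(f"Earn to give: a ${donation:,} donation funds {extra_hires} hires.")
```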
One of the great success stories of the earn-to-give idea was Sam Bankman-Fried. He sat down in 2012 with Will MacAskill, and MacAskill told him about EA and about this idea of earning to give. Bankman-Fried had either just graduated or was about to graduate from MIT, and he was, I believe, thinking of working for an animal rights organization; MacAskill convinced him to go work on Wall Street instead. So he went and did that, along with a bunch of other EAs like Matthew Wage. In fact, Bankman-Fried's brother Gabriel also worked on Wall Street for the same company, Jane Street Capital. After several years at Jane Street Capital, Bankman-Fried decided to, as one journalist put it, get filthy rich for charity's sake by going into crypto, and obviously he was very successful, at least for a little while, on paper. It's sort of funny to think about Bankman-Fried's biography, because if he hadn't met Will MacAskill and hadn't been introduced to the ideas of EA, he probably never would have gone into crypto. He would have been just an unusual, kind of nerdy, maybe interesting guy working at some nonprofit, rather than an individual in federal prison who's responsible for maybe the biggest case of US fraud in history. So the earn-to-give idea is, I think, an example of how trying to maximize the good in this utilitarian manner can, ironically enough, lead to some really bad consequences. The EA movement really goes back to around 2009; that's when the first EA organization, Giving What We Can, was founded, co-founded by Toby Ord and Will MacAskill, both at the University of Oxford.

Well, thank you. I became aware of the effective altruism movement after reading a book by Peter Singer about ten years ago, The Most Good You Can Do. As an animal rights advocate, I was trying to figure out how to be the most effective person I could be for the animals, and as I was reading the book there were a lot of things that just did not sit right with me. At the time I didn't know the difference between consequentialist and deontological philosophical worldviews; I just thought, how could someone be asking you to compromise your own moral integrity and work in a field that we have clear evidence is creating all sorts of exploitation of people and non-human beings, to make a lot of money, so that you can then give that money to whatever charity? And some of the things that you've brought up in terms of the blind spots of the movement: they get to define what "good" means and they get to define what "effective" means, and they've taken it upon themselves to decide how to combine those two to have the most positive impact. And to your point, it's so individualistic. It buys into that same neoliberal model: you are the captain of your own life, and not only that, you can single-handedly have this incredible impact by giving through philanthropy, without ever stopping to challenge any of the power hierarchies, any of the systems currently in place, that have allowed them to become that rich. Right?

Yeah, absolutely. Consistent with that, some EAs have described people like Bill Gates as among the greatest philanthropists in human history, and they seem to be completely unaware that the empires of a lot of these tech billionaires are built on exploitation. I think EAs in general, maybe dovetailing with their consequentialist, utilitarian tendencies, tend to focus entirely on the outcomes of certain interventions without asking questions about the causes of the situation in the first place. So maybe it's precisely the system that enabled Bill Gates to make billions and billions of dollars that is responsible, maybe in large part, for the plight of people around the world who are struggling.

Toby Ord published a book in 2020 discussing a central idea within EA, and especially longtermism, namely existential risk. These are threats that, if they were actualized, would erase what Ord himself refers to as our vast and glorious future in the universe, as we spread into space, become digital, and ultimately maximize value. The reason I mention this is that throughout the something like 300 pages of the
book, as I recall, there isn't a single mention of capitalism. I mean, some could argue quite compellingly that capitalism is an underlying, fundamental driver of the climate crisis, which according to a recent study threatens to kill a billion people this century; a billion people will die, according to this study, from direct effects of climate change. Capitalism is also very much at the heart of the current race to build AGI, artificial general intelligence, which is the explicit goal of companies like OpenAI, DeepMind, Anthropic, and xAI, recently founded by Elon Musk. AGI is supposed to be a system that is at least at the same level of intellectual capability as human beings. I think there are two main phenomena fueling this race to build AGI. One is these utopian ideologies, longtermism being part of that group. The other is capitalism: Microsoft, Google, Amazon, and so on are investing billions of dollars in these companies in hopes of making a massive profit. And Sam Altman, the CEO of OpenAI, himself said during an interview, I believe with the CEO of Airbnb, that the likely outcome of AGI will be human annihilation, but, he added, in the meantime there will be some really great companies.

Wow.

All of this is to say that capitalism is also driving this race to build what the CEOs of these AI companies themselves describe as an extremely dangerous technology. Capitalism is behind climate change and so on; capitalism has a big part to play in global poverty. And yet EAs have said virtually nothing about capitalism and its relation to this proliferation of global catastrophic risks that are completely unprecedented this century.

We've been touching on longtermism a little bit here and there, but it'd be great to go deeper into it. When you first hear the word "longtermism," it brings to mind a certain ethical value that we should be putting on the lives of future generations, kind of like the Seventh Generation principle based on the ancient Haudenosaunee philosophy that the decisions we make today should result in a sustainable world for seven generations into the future. I would say that, sadly, we don't even seem to be doing that at all within our current paradigm. But this longtermist view that has emerged out of the transhumanist view is quite different and, as you've alluded to, quite perverse. You've described some of the basic beliefs of the longtermists who are perpetuating this ideology, and you've named some of the prominent thinkers, MacAskill being one of them. But let's start with the basics of longtermism: what are its tenets, and who, other than MacAskill, are some of the thinkers in the movement?

Yeah. I think it might be useful to start off by distinguishing between long-term thinking and longtermism. I am a huge fan of long-term thinking and believe that we need a whole lot more of it in the world, especially given that climate change will affect Earth for roughly another 10,000 years, a longer period of time than civilization has existed so far. Longtermism goes way beyond long-term thinking. Part of the claim about how big the future could be is a claim about how big the future should be, and I think this is where longtermism diverges from a lot of people's intuitions about what it means to care about the long-term future, and also diverges from the Seventh Generation idea. So here is the claim: if
there is a possible future life that would be better than miserable, so it would contain a net positive amount of value (to simplify just a tiny bit, but not much), then if it could exist, it should exist. "Could exist" implies "should exist," on the condition that the life will have a net positive amount of value. I think this idea is very counterintuitive, and there are two reasons longtermists would give for it. One is this very strange and highly controversial idea within philosophy that there are two basic ways to benefit someone. There are ordinary benefits, which are just what would come to mind, like holding the door for somebody, or giving somebody who is unhoused $100 or something like that; that's a benefit in the ordinary sense. But then there are existential benefits: another way to benefit someone is that, if they will have a half-decent life, bringing them into existence in the first place benefits them. So you could imagine yourself in a situation where you have two choices: I can benefit someone by giving this person who already exists $100, or I could benefit somebody by not giving the $100 and instead having a child, or doing something that would bring someone into the world, and maybe the second option might be better in some circumstances. That's pretty counterintuitive, I think, to most people, but that's part of the claim. The other claim has to do with utilitarianism, because our sole moral obligation, according to utilitarianism, is to maximize the total amount of value that exists in the universe as a whole. There are two ways to maximize the total amount of value. One is that, within the population of individuals who currently exist, I increase their well-being, make them happier, whatever that requires: giving them money, better living circumstances, and so on. The other option is just to create new people, insofar as those people are going to be quote-unquote happy, meaning they don't have miserable lives. Utilitarianism says you should do both, and this is why longtermists are so obsessed with calculating how many future people there could be: if there could be an enormous number of future people compared to current-day people, then maybe the best way to maximize the total amount of good is to not worry so much about current-day people and instead focus on how your actions today are going to ensure the realization of these future people.

These ideas were articulated by a philosopher named Nick Bostrom in the early 2000s, drawing from modern cosmology, basically the fact that Earth will remain habitable for a really long period of time. We have about another billion years: life has been around for 3.8 billion years, we have another billion years to go, and our species has been around for only 300,000 years. So that's a huge future. The first person to calculate how many people could exist in the future was, I think, Carl Sagan, back in 1983. He said that if we survive for another 10 million years, there could be 500 trillion people. That's just an enormous number of future people; by comparison, about 117 billion humans have existed so far. So the number of future people is much, much larger than the number of past or present people. But Bostrom pointed out: what if
we spread beyond Earth? Then there could be a much, much greater number than 500 trillion. He calculated that if we spread into space and become digital beings living in computer simulations, since computer simulations could house a much larger population than if we just live on terraformed exoplanets (other planets out there that we make like Earth), then within the universe as a whole there could be 10^58 people. That's a one followed by 58 zeros, again much, much larger than 117 billion, and much larger than 500 trillion.

All of that is to say: if you combine those claims about how big the future could be with the effective altruist imperative to do the most good, you get the following line of reasoning. If your goal is to positively influence the greatest number of people, and if by far most people who could exist will exist in the far future, then maybe what you should be doing, as a good, rational altruist, is focusing on the very far future, not on the present. And it's by virtue of how huge the future could be, if we colonize space, become digital beings, and so on, that all contemporary problems that do not threaten the realization of this huge future just fade into basically nothingness. So this is the idea of longtermism, which emerged out of the EA movement, and there's a great tension between this cause area within EA and the initial cause area of alleviating global poverty. Alleviating global poverty, yes, that's going to help; I think there are an estimated 1.3 billion people in multidimensional poverty, which is a huge number in absolute terms. But in relative terms it is just a tiny fraction of the total number of people who could exist if you take this grand cosmic view across not just space but time, into the future. And that is where it's just very counterintuitive, and, like I mentioned before, it diverges from these other notions like the Seventh Generation, which generally presupposes the existence of future people while also acknowledging that we can't really anticipate what the far future is going to look like, or what people in the far future are going to want. One thing that's really nice about the Seventh Generation idea is that it renews every generation: every generation is thinking about seven generations ahead, and you have this chain, this link, that extends into the future, whereas the longtermists have a vision about what ought to be millions or billions or trillions of years from now. So there's a big tension between those two views. And so that's the history of EA and how longtermism emerged out of it.
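The scale comparison doing the work in this argument is easy to make concrete. Here is a minimal sketch using only the figures quoted above (Sagan's 500 trillion, the 117 billion humans born so far, Bostrom's 10^58); the probability in the last step is an arbitrary illustrative value, not a number from the conversation.

```python
# The population figures quoted in this part of the conversation,
# side by side. Only the magnitude comparison is computed here.

past_humans = 117e9      # roughly 117 billion humans ever born
sagan_future = 500e12    # Sagan (1983): 500 trillion over 10M years
bostrom_future = 1e58    # Bostrom: digital beings, whole universe

print(f"Future/past ratio (Sagan):   {sagan_future / past_humans:,.0f}x")
print(f"Future/past ratio (Bostrom): {bostrom_future / past_humans:.1e}x")

# The longtermist expected-value move: multiply a huge population by
# even a tiny shift in the probability that it comes to exist.
tiny_probability_shift = 1e-20  # arbitrary, for illustration only
print(f"Expected future lives from a 1e-20 shift: "
      f"{tiny_probability_shift * bostrom_future:.1e}")
# ~1e38 "expected lives": on this logic, present-day problems that
# don't threaten that future fade into basically nothingness.
```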
Yeah. I think that techno-optimist view of theirs assumes that history and the future are fairly linear and will play out in a fairly orderly, rational process. Meanwhile, their ignorance of the social inequalities, the potential revolutions brewing below them, and the ecological damage potentially leading to collapse makes them blind to the nonlinear possibility of the collapse of the whole system that is feeding them through earning to give: they go to Wall Street or Silicon Valley, learn a lot about mergers and acquisitions and algorithms, and come to think that they know more than the NGOs themselves. But really they just have the power, the money power, and that gives them the right to set the table and to be blind to these churning collapses underneath their linear view, collapses that very much threaten a return to a more cyclical view of human history, one that isn't onward and upward. Artificial intelligence itself could falter: a great depression or a stock market collapse could suck a lot of the capital out of tech; the Magnificent Seven on the stock market right now could evaporate overnight. So there's an assumption that money will just keep flowing and cultural power will just keep flowing. No doubt AI is doing a lot, it is quite powerful, but underneath them is a biophysical substrate of energy and material. Reality is not all information, as a lot of them seem to think, as if we can live entirely in a world of ideas and concepts, without appreciating that the biophysical substrate that supports all of it is eroding and in danger of collapse. It just feels like a real form of blindness, an ecological, material, energy blindness, and a hubris and arrogance that you can maintain for quite a while, because you are the money power, you are the cultural power, but at some point a lot of that can be pulled out from under you. Any power, I suppose, is often blind to its own weaknesses.

Yeah, I think that's right. The linear view of history is very prominent. I would maybe describe it as a component of a kind of colonialist mindset, which I think is really influential, although not explicitly recognized as such, at least not normally, within the general EA community. But definitely this linear view of history: we started in this quote-unquote primitive state of hunting and gathering, then we advanced to an agricultural state, then we advanced further through industrialization, and so on. One of the valuable contributions that the co-authored book The Dawn of Everything makes is calling this linear view into question, pointing out that there were actually people who experimented with agriculture and then decided agriculture was much worse, a much worse life. What the EAs and longtermists do is take this very one-dimensional, I would say impoverished, way of thinking about the history of human development and extend it into the future. The next obvious step, again consistent with the colonial mindset, is spreading beyond Earth and colonizing space. There are all these vast resources out there that longtermists like Nick Bostrom complain are being wasted every day: stars out there burning up their reserves of hydrogen, all that energy quote-unquote wasted because it's not going to fuel valuable enterprises like creating or sustaining huge, literally planet-sized computers running virtual-reality worlds full of trillions and trillions of people.

That's interesting, because that fits one of the theorists of ecological overshoot, William Catton, a sociologist who talks about takeover, which was the colonial process of mainly Europeans taking over land, taking over people, taking over materials, and then drawdown, taking out the minerals, the stuff under the crust, in our case fossil fuels. But on Mars: the colonial takeover of planets and then drawing them down, sucking the energy and materials out of them. It's an interesting analogy.

So I would say that EAs and longtermists in general have pretty habitually underestimated the significance of environmental destruction. They don't really see it as an existential risk; at most it's going to contribute to existential risk, and that's why we should be concerned about it. I think part of it is that their view, in practice, is pretty anthropocentric, though they would say
that their ethics, utilitarianism, is not anthropocentric but what they would call sentiocentric: it centers any kind of sentient entity. The thing is that humans are, as it were, more sentient than other beings, and so we end up mattering more than other creatures. Fitting together with this in a certain way, you find Will MacAskill writing in his 2022 book What We Owe the Future that our destruction of the environment might actually be net positive. The reasoning is that because we care about sentience, we care about the experience of suffering, not just in humans but in other creatures, and because there is a lot of suffering in the wild among wild animals (this is debatable, but it is a premise of the claim), the fewer wild animals there are, the less wild animals will suffer.

Gosh. Wow.

So consequently, going out and obliterating ecosystems, razing forests, polluting the oceans, and so on, all of that sounds bad, but it just means there are fewer wild animals to suffer. And obviously the limit of that is: well, in the future maybe the best thing to do is just get rid of the biosphere altogether. Maybe we all become digital beings, maybe there are no animals, maybe we simulate animals in the virtual reality that we all live in. But that's one issue that's kind of shocking, and relevant to what you were saying, I think.

Well, you had a great example in one of your essays. You quote MacAskill from What We Owe the Future, where he mentions that with even 15 degrees of warming, the heat would not pass lethal limits for crops in most regions. And then you consulted several experts in agriculture and climate change, who said that's just pure nonsense.

Fifteen degrees: is that really Celsius?

Celsius. Just to be clear: Celsius.

Oh my God.

That leapt off the page and smacked me in the face when I read it, because it is absolutely wild. Again, the most recent studies say that with just the warming that's expected this century, two degrees, three, maybe four, a billion people will die and two billion will be displaced; maybe those two numbers will overlap, and a lot of the people displaced will die. That is just unfathomable. I spoke to a climate expert yesterday, a friend of mine, who pointed out that if there were an event, maybe a climate-related event, or maybe an event resulting from some of these AI systems, that killed 2 million people in a relatively short time, would civilization survive? Just really reflect on the mayhem, the shock, the psycho-cultural trauma of something like that. It would be extraordinary, and that's a small number compared to the one billion who are expected to perish by the end of the century.

So you've talked about how there is a religious type of strength to the ideology and belief system of both effective altruism and longtermism: that they emerged basically out of the hole that religion left and were trying to fill that same hole with this godlike, messianic, techno-utopian, grandiose vision of the world, with themselves as the gods who will bring us all there. Well, not us, but the future trillions of disembodied people living on servers, as you said, where there's no life and no suffering: a very anti-life, anti-natalist view of the universe. I just wanted to make that comment about how many parallels one can draw between this completely ecologically blind, human supremacist ideology and many of the religions that also seem to share that
ecological blindness and human supremacist worldview.

Yeah, you're exactly right. I think one way of understanding the development of EA and longtermism is with respect to the transhumanist movement, which predates both. EA's roots go back to two different phenomena. One is transhumanism: Toby Ord, for example, was co-authoring papers with Bostrom back in 2006, several years before EA was created, papers that defended the transhumanist worldview. The other strand of EA goes back to Peter Singer's work, and I think a lot of Singer's work is really quite problematic. He's a utilitarian who takes his utilitarian ethics very seriously and follows the premises of utilitarianism to their logical ends, which leads him to say things like: if there are infants with certain kinds of disabilities, then the morally right thing to do might be to kill them. So those are the two strands.

With respect to transhumanism, the reason I mention it is that it was an ideology developed in the early 20th century explicitly as a replacement for traditional religion. Christianity was absolutely dominant within the West from roughly the fourth or fifth century of the Common Era, a few centuries after Jesus lived and died, which is when it became really widespread in the Roman Empire, and it dominated all the way up until the 19th century. That's when it started to decline. It was in that century that Karl Marx denigrated religion as the opium of the masses, and Friedrich Nietzsche famously said God is dead. Why is he dead? Because we have killed him, through science. Branches of theology started looking into the historical reliability of the Bible and so on, and the results weren't very good. So Christianity declined, and it is extraordinary if you read the literature at the time; this is around the time the term "agnostic" was coined as well. There were a lot of agnostics and atheists who were just reeling from the loss of religion: all of the meaning and purpose, and the eschatological hope, the hope for the future of humanity, that traditional religion, that Christianity, provided, all of that was gone. The atheists were wondering: what is the point of any of this? How do we live our lives? What is the purpose? Darwinism says that we emerged just by happenstance, through contingent evolution, and physics tells us that the universe will become uninhabitable in the future. So why? Why are we here?

I mention that because it was right around that time that you get a bunch of ideologies offering basically the same promises that Christianity did, but secular in nature. This is when Marxism emerged, with its promise of a kind of utopian future once we get the worldwide communist state; the parallels between Marxism and Christian eschatology, their narratives of the end of the world, are very striking. Transhumanism also emerged then, and the first book that really developed the transhumanist idea, although it didn't use that term (I believe the term it used was "evolutionary humanism"), was by Julian Huxley, and it was revealingly titled Religion Without Revelation. So transhumanism was presented as a new religion: you don't need faith; what we need is just to rely on science and technology. Transhumanism basically offers the possibility of heaven on earth by using science and technology to re-engineer humanity. So there's the promise of immortality, the
abolition of, or at least the significant reduction of, suffering in the world. And if you fast-forward to the rise of modern transhumanism, as opposed to this early transhumanism from the earlier 20th century (modern transhumanism really emerged in the 1980s and 1990s), right around that time you also get the promise of resurrection. The first people to articulate the modern transhumanist ideology were involved in cryonics: if you don't live long enough to live forever, as Ray Kurzweil says, if you don't live long enough to get access to these technologies, then you can always have your body cryogenically frozen so it can be resurrected at some point, maybe 2150, or whenever we have the technologies available. Now, a very common way of knocking an ideology is to describe it as a religion: lots of people do that (wokeism is a religion, conservatism is a religion, and so on), but in this case, transhumanism really is very much a religion. And that really is the foundation, I think, of EA, and definitely of longtermism, which basically subsumes transhumanism within it; which is another way of saying that longtermism builds on transhumanism.

Right. Well, the one thought I keep having is that it doesn't take very much to understand how perverse and self-aggrandizing the movement is, and yet, in the animal rights movement that I'm part of, really sophisticated-minded people have really caught on to the EA philosophy. In fact, a lot of animal charities are based on EA principles, and MacAskill has become a real hero in a lot of these movements. You might even know that the data analysis site Our World in Data, which gets upwards of 80 million website visits each month, is very much influenced by the effective altruism movement; they get a ton of funding from EA, from the Gates Foundation, from the Musk Foundation. And yet they are seen as the go-to data analysis and interpretation site. Of course they're getting their data from reliable sources, but it's the interpretation of the data that really reveals their biases, and they have the same kind of techno-fundamentalist worldview: things are getting better and better and better, and we just need more technology to help us get out of these catastrophic predicaments. How is it that so many people have fallen into this trap? What is so attractive about it? Is it the godlike qualities, the promise of a utopian future, that so many young, intelligent, sophisticated, educated people are buying into?

That may be part of it. On the one hand, this kind of techno-solutionist approach has an obvious appeal to people in the tech world: it tells them that they're the answer to all the world's problems. But also, tying this back to the idea of transhumanism and longtermism as a sort of religious worldview, another parallel that I didn't mention is that many people in the community see AGI, or superintelligence, a version of AGI that is not just human-equivalent in its capabilities but far superior to us, as, to borrow their term, godlike AI. So one way of reconstructing this is: if God doesn't exist, why not just create him? Alternatively, why not become him? And the reason I mention that is, once you have God, then if God loves, not his children, but his parents (again using "he" because most people in this community are men)...

Right, we noticed that.

Yes, it's overwhelmingly white and overwhelmingly male, which definitely ties into one of my other
critiques of this community. But if we create this God that loves us, then it will do whatever we tell it to do. A crucial idea here is that, on the longtermist, EA, transhumanist view, pretty much every problem in the world is understood as an engineering problem. And what makes a good engineer? They would say intelligence: a more intelligent person will be better at engineering than a less intelligent person. I think this notion of intelligence is deeply problematic, but I'm just going to go with their premises here. Consequently, if you have a superintelligence, then it will be capable of super-engineering feats, and since everything is an engineering problem, including climate change, wars, social upheaval, religious conflicts around the world, and so on, once you have superintelligence you can just task it with solving all these problems, and it will do that. Maybe it'll take five or ten seconds to think and go, okay, I have a solution to climate change, as if we don't already know how to solve climate change; it's really just a political-will problem and a coordination problem. So yes, I think this techno-solutionism is appealing to tech billionaires for that reason. I think there's also a widespread notion, which ties into this linear view of human development and is techno-deterministic, to use a technical term: the notion that there are no brakes on the technological enterprise, that there's only one way forward. If you believe that technology got us into this mess, or at least enabled us to get into this mess, but that more technology is going to help us get out of it, that fits very nicely with the view that we can't stop technology anyway. As opposed to holding the opposite view, like my view, which is that building more and more technology is probably just going to make things worse: it complicates our threat environment even more, makes it even more unmanageable and intractable, and so on, and this is basically a dead-end road.

Yeah. We've had many guests on talking about deeper ecological knowledge, whether indigenous or Western science, and about having more of an ignorance-based worldview as we learn the complexity of how plants talk to each other through fungi, of animal behavior, all the things we're still learning, because we've really only had Western ecological science for maybe a hundred years. Meanwhile, this technological, rationalistic, technocratic, engineering mind just blunders forward, creating a trail of problems in its path, and they still have this enormous arrogance and hubris that isn't grounded in any ecological humility. There's very little humility. It reminds me of Marc Andreessen's "Techno-Optimist Manifesto," where he says things like: we believe societies, like sharks, grow or die; we believe everything good is downstream of growth. It's all about growth and technology and moving forward and not dying; anything else is stagnation. And it's very linear that way too: progress, onward and upward, with no life cycle.

Yeah, absolutely. There is a great irony in the longtermist literature, something I was very aware of when I was part of the community and contributing to this literature, which is that there is widespread acknowledgement that technology is overwhelmingly responsible for our unprecedented predicament these days with respect to the global-scale risks we're facing. But the idea is that more technology will fix it: this much technology is bad, it gets us into
all sorts of problems, but a bit more technology is going to save us. In fact, this also ties into the idea of everything being an engineering problem. One way a lot of longtermists couch or frame our current situation is that we have all of this technological power without sufficient wisdom: we're just not wise enough as a species to wield this technology in a safe way, so as to realize all the great, wonderful, utopian benefits while neutralizing all the risks. Okay, so if that's the problem, and if all problems are engineering problems, then that too is an engineering problem. How do we solve this mismatch between our wisdom and our technology? We just re-engineer humanity. So one of the main solutions put forward is that we should develop human enhancement technologies, particularly cognitive enhancement technologies, so that we can radically enhance our capacity for wisdom, thereby putting us in a position to use these technologies responsibly. Eliezer Yudkowsky would be one of many examples: he's very worried about AGI being developed in the near future and causing our extinction, and he has literally said in podcast interviews that if we were a sane society, as he puts it, we would ban all work on AGI right now, take a lot of those resources, and reallocate them toward developing these technologies to create a new, cognitively and intellectually superior, post-human species.

This also ties into capitalism again. You have these utopian ideologies, and then this capitalist ideology, and one way of understanding transhumanism is not just as a utopian possibility, not just as a solution to the problems that we now confront as a result of technology, but also as a means for capitalism to continue its growthist march. We're pushing up against all these limits of resources; well, there's another resource that hasn't really been tapped, and that's the human organism. If you want to keep the engines of economic growth and productivity roaring, one way is to ensure that the individuals who are part of that engine, part of that system, are more productive. By re-engineering humanity, maybe you could create organisms that are even better little capitalist agents: even more productive, even more efficient, better at optimizing tasks, and so on. Will MacAskill references this in his book What We Owe the Future. Global depopulation is something he's very worried about, same with Elon Musk and others; global population decline is a big concern because it could result in economic stagnation. Well, what could we do then? We could just re-engineer humanity: we create designer babies and ensure that they are all as quote-unquote intelligent as Einstein, or, he says, if that doesn't work, then we create new AGIs, artificial general intelligences at the human level, to replace workers in the economic system. So I hope all of this ties together: re-engineering humanity is one way to solve the problem of global catastrophic risks that technology itself is overwhelmingly responsible for, but transhumanism could also be understood as just an extension of techno-capitalism. This is an argument that a friend of mine, Alexander Thomas, makes in a forthcoming book, and it's really compelling. We're just another, as Heidegger would say, standing reserve, reserves to be exploited in order to keep this juggernaut of capitalism moving forward.

So a couple of
things that have emerged for me: people are now really starting to buy into this depopulation panic that has resulted from fertility rates declining because of greater gender equality, women finally having the autonomy to decide for themselves whether or not they want children, and if so, how many. And we see reproductive rights and environmental rights as completely intertwined. When reproductive rights are under attack through patriarchal oppression, it's the same patriarchal oppression that is extended toward the Earth, in the form of, as you've said, neocolonialism and capitalism and the extractivism of the planet. For the longest time, we were just concerned about the patriarchal, conservative control of reproduction in order to create bigger empires, bigger states, bigger capital, more conservative tribes, and we thought that with feminism and with liberal values we could push against these, and that's what we needed to do. And now we've got this new branch of people emerging, who apparently call themselves the secularists or liberals, who are also feeding into the same depopulation panic that a lot of nationalists and conservatives are feeding into. So in both cases, whether it's the far right or the longtermists, they're looking at women's reproduction as vessels through which these ghastly futures will be realized, and that's really scary. They're both extremely pronatalist groups.

It is really interesting to see the alignment of these different groups; it's fascinating, and also a bit alarming. With respect to the more politically right-wing groups that are anxious about depopulation, a lot of the worry is about the great replacement. But I think a lot of longtermists would say that part of the reason population decline is so unfortunate is that there is a positive correlation between population size and innovation, and since climate change and environmental destruction are, ultimately, engineering problems, if you have less innovation then you're less likely to stumble upon, or to create, a solution to these problems of environmental degradation. Consequently the claim is that if you want to solve the climate crisis, et cetera, you should actually want there to be a larger population, because a larger population means more innovation, and more innovation means a greater chance of actually solving it.

The tenets of free-market fundamentalism.

Yeah, absolutely, there's a lot of that sort of fundamentalism in this crowd, I think. Libertarianism is very influential and has been since the origins of modern transhumanism in the 1980s and 1990s, the first organized transhumanist movement being the Extropian movement, and they were explicitly libertarian; Ayn Rand's Atlas Shrugged was on their official reading list, and so on. I think that libertarian tradition extends through EA to longtermism today, which is not to say that every longtermist is a libertarian, but many are. So ultimately, even if there are different reasons for being concerned about population decline, there is this fascinating alignment of different groups arriving at the same general conclusion: that we should be really worried about population decline, at exactly the moment when climate scientists, many of whom are starting to scream that we have too many people, say we need to rein in these growthist tendencies that are at the root of our socioeconomic system.

Yeah. They're concerned with technological innovation, but just in terms of being an effective altruist, you'd think they would
care enough about the education of the billions who could be educated so much better in this world to further innovation. There are so many children getting virtually no education that if you just poured your effective altruism into those children, truly poured it into them in a significant way, you wouldn't need to play a numbers game with technical innovation. So it's interesting, the disconnect they have there, just counting up humans; and yet as EAs or longtermists they presumably care about the utility of all the humans on the planet, and here are these humans, existing now, who could be so much improved through education, and who could help along the technological progress they're so worried about.

On the EA account, and I think this largely goes for transhumanism and longtermism as well, you could really see ethics as a branch of economics. It really is just about crunching the numbers, and the fact that there's such a focus on number-crunching means there's a very strong quantitative bias. Consequently, when you consider interventions like improving education in certain impoverished parts of the world, it becomes really difficult to put numbers on the outcomes of those interventions, and so those interventions get deprioritized, or they just don't fit into the quantitative, metric-driven framework that EA embraces. Ultimately you might, as an EA, conclude that maybe focusing on education is not the best way to go, because there's a lot of uncertainty.

Well, it's interesting, with the Gates malaria bed nets example, where the unintended consequence was a lot of people using those nets to overfish (was it Lake Chad?), and using the nets for other things, because they thought, wow, this is a great net, now I don't have to sew a net together to catch fish. You maximize these certain metrics and then you're blind to all these unintended consequences.

Yeah, I think it's a problem with a simplistic, one-size-fits-all approach, and this has definitely been a criticism of Western, Global North-based philanthropy: people come in with this notion that if this program worked in region A of some part of the world, then it's going to work in regions B through Z. This is one argument for why this whole top-down approach to philanthropy is not good, and why maybe the best thing you could do is fund grassroots organizations that have a ground-level understanding of the particulars of their predicament and of why individuals are trapped in the cycle of poverty.

And just another thing to add, because it's shocking but relevant here: this numbers game is precisely what leads one of the founders of longtermism, Nick Beckstead, to argue in his PhD dissertation, which is widely regarded as one of the founding documents of the longtermist ideology, that if you are in a situation where you have to choose between saving the life of somebody in a rich country and saving the life of somebody in a poor country, you should definitely save the life of the person in the rich country. From a longtermist perspective, lives lived in rich countries are much more valuable: rich countries are more economically productive, and ultimately they're just better positioned to influence the very far future than lives in a poor country. So to tie this into what you were saying: if you are an EA longtermist, and consequently what you care about most is that things go well in the very long-run future, which
means not just ensuring that people in the future have a decent life but that they exist in the first place, because, again, "could exist" implies "should exist," assuming they have a half-decent life, then taking your finite resources and spending them on programs that would improve the education of people in an impoverished region of the world is maybe just not the best way to go about things. Again, a life saved in a rich country should be prioritized over a life saved in a poor country. I hope that makes sense.

Yes. And of course a reflection of an extreme version of this is the Collinses, who have taken it upon themselves to repopulate the world with their own genetic material, given that it is the most superior and rich and intelligent and all that. They've talked about how, as long as each of their descendants can commit to having at least eight or ten children for just 11 generations, their bloodline will eventually outnumber the current human population. Again, I'm so shocked that so many news outlets have given them a platform to share their ideas, and Elon Musk has retweeted what a great thing they are doing in terms of service to humanity. It feeds very much into what we were talking about earlier, the conservative population panic of the great replacement: the fear of being overtaken by the wrong kind, the wrong color, the wrong religion of people. And basically taking matters into your own hands: instead of educating people, raising people out of poverty, and proliferating more rights-based, justice-based values, they're saying, no, we know how to create the right kind of people, and we're going to do it. Talk about hubris.
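The generational claim quoted here can be checked with idealized compounding. A minimal sketch, ignoring mortality, intermarriage, and generation overlap, and taking today's world population as roughly 8 billion:

```python
# Checking the quoted claim: if every descendant has 8-10 children for
# 11 generations, does the bloodline outnumber today's population?
# Idealized compounding only: no mortality, no intermarriage, and the
# 8-billion world population figure is an approximation.

world_population = 8e9

for children in (8, 10):
    descendants = children ** 11  # size of generation 11
    verdict = "exceeds" if descendants > world_population else "below"
    print(f"{children} children/generation for 11 generations: "
          f"{descendants:.2e} descendants ({verdict} today's ~8 billion)")
```

With 8 children per generation, generation 11 is about 8.6 billion, just over the current population; with 10, it is 100 billion, which is the arithmetic behind the claim.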
Yes: more of me is what's going to save the world.

Yes, and so hubristic. I mean, you mentioned Elon Musk: as I understand it, he has supported the Collinses, and of course he has a bunch of children and sees himself as playing a role in this. But the Collinses' organization, Pronatalist.org, has also received, if I remember correctly, hundreds of thousands of dollars from leading figures within the longtermist, EA, transhumanist community, like Jaan Tallinn. Jaan Tallinn is a co-founder of Skype, a multi-millionaire (I believe he has just under a billion dollars, so almost a billionaire), and he's been a major funder in the AGI world. So it's a very bizarre moment: we need more people, or rather more of a certain type of people. And the Collinses have this very strange, extreme, what scientists would call hereditarian view: that a lot of our traits as individuals are based in our genes. There's a trivial sense in which that's true (knocking out a single gene might have all sorts of consequences), but the claim is stronger than that. It's that particular traits are determined by our genes, and I remember reading an interview with the Collinses where they said they believe that even ideology is genetically based, at least to a significant degree. So if you see that there are a bunch of Nazis in the world who are reproducing, well, that's bad, because their children have a good chance of being Nazis for genetic reasons, not just cultural reasons. That is just a really extreme view. I lived in Germany for three years recently, and there are lots of people my age whose grandparents were Nazis; they are not Nazis. This is not genetically determined. And even a trait like intelligence, is that genetically determined? I don't know. Intelligence is such a complex phenomenon; there are so many different types of intelligence, and so many genes interacting in complex ways, that it's just deeply problematic to say: oh well, I scored high on this very impoverished, narrow, one-dimensional test called the IQ test, which measures intelligence in some meaningful way (which I strongly disagree with), but I scored high on it, so I should have more children, because they're going to be high-IQ, and the higher the average IQ of our society, the better the society will become. It's all just deeply problematic, from the scientific perspective through to this higher-level, sociological point of view. All of it is just really bizarre.

We're in such a bizarre moment. After Sam Bankman-Fried and the humiliation of that, and the exit of him and his billions, where are the EA movement and longtermism now, do you think?

Great question. Beginning in the summer of 2022, when MacAskill published his book on longtermism, there was a big push to evangelize for this ideology among the public, and I think for several months it was, in general, very successful. MacAskill, the effective altruist movement, and this spin-off ideology of longtermism were getting coverage on the front page of Time magazine; the New York Times and the Guardian were running articles on it; MacAskill himself made an appearance on The Daily Show, and Trevor Noah seemed to be very enamored with his longtermist views. So this outreach effort was very successful, and then, at exactly the worst moment for this whole project of trying to convert members of the general public to the longtermist ideology, FTX collapsed. I think that undid all of the progress that MacAskill and other EA longtermists had made, perhaps in a way that was irreversible; it tarnished the image of EA and longtermism. All of that is to say that I think EA is now tightly linked with Bankman-Fried and arguably the worst case of fraud in US history, and the same goes for longtermism. Consequently, I think the general public has, for the most part, lost interest in EA, and I know there were a lot of people who initially found EA and longtermism very compelling and appealing who now want nothing to do with them. That being said, EA still remains a really powerful, influential force within the political arena, and especially, I think, within the tech world. There are a lot of powerful individuals, like AI researchers at companies such as Anthropic, which is mostly just an EA organization, who still very much subscribe to the EA worldview. And before FTX collapsed, the EA community had $46.1 billion in committed funding just for its own projects. When FTX collapsed, a good chunk of that money was lost, but there still remained billions and billions of dollars; this was made explicit by leaders of the EA community to community members, that there is still just lots and lots of money for research projects and so on.

So, with our little bit of time left, we wanted to ask you about the essay you wrote on Medium last year, titled "Why I Won't Have Children." Could you share with us some of the reasoning behind that decision?

The way that I couch that article is this: when I was young, I had this assumption that the world was, in general, a good place. Then somebody told me about starvation around the world, and about certain diseases, like brain tumors, that young children, basically my age at the time, would get, and that was an occasion for me to reflect on the possibility that maybe the world is actually a bit more menacing than I had thought. The article could basically be seen as a progress report on my thinking about this issue. For decades I maintained this belief that, in general, the world is a good place, but by the time I got to my early 40s (I'm 41 now), having seen a bunch of close friends of mine die young, some as a result of suicide, and then taking a broader view of the situation of humanity in the 21st century, recognizing the extraordinary, unprecedented perils of climate change and the sixth mass extinction event, and the risks associated with the development of emerging technologies, all of this is to say that when I pivot from thinking about experiences that I
