AI Run Government
This episode is sponsored by Audible. Many people worry that a future controlled by Artificial Intelligence is one many others will not resist, instead welcoming our machine overlords, and perhaps they will be right to do so. So today we will be talking about governments run by artificial intelligence, computer minds telling us what to do. A few months back we did an episode called “Machine Overlords & Post-Discontent Societies”, and since Post-Discontent Societies are the dark mirror reflection of Utopian Post-Scarcity Societies, it put an unfortunately negative tone on the notion of Artificial Intelligence running things. In that episode we looked at the darker side of machine overlords while looking at the darker side of advanced civilizations.
However, the whole reason governments run by computers show up so much in science fiction is that the concept has a lot going for it. At some point, we’ll have to admit to ourselves that it’s easier to put a machine in charge than to have someone we don’t like running the show. Ideally such a machine-run system doesn’t pick favorites, doesn’t take bribes, and doesn’t have biases. Events of the last 6000 years have called into question our competence to self-govern.
In many ways, all the science fiction showing that computers are bad rulers can be viewed as anti-computer propaganda, and today we’ll demonstrate the advantages of getting rid of our flawed human leadership and surrendering our sovereignty to sober computer control. The Computer Mind will give us peace, safety, and security at last. So I, for one, welcome our machine overlords, and if you haven’t already noticed the date this episode airs being April First, Happy April Fools’ Day! I’d keep the gag running longer, but our episodes run around half an hour and most of our viewers don’t actually watch them the day they come out. However, the other half of the gag is that we are going to be genuinely looking at the advantages of using artificial intelligence in running our governments, up to and including letting them have genuine control. We will be playing Devil’s Advocate on the topic at times, but fundamentally today we’ll be looking at the potential advantages, disadvantages, and circumstances where computerization can help governance, even in cases of decision making; indeed, in that respect most of all. Like everyone else, I don’t really relish the notion of some machine pushing me around, and the earlier Machine Overlords episode was tied to the concept of Post-Discontent Societies, the dark mirror reflection of the more Utopian Post-Scarcity Civilizations, and thus took an even more negative attitude overall, so let’s explore the other side of this AI coin and round out the topic.
What is that other side? Well, it is not Skynet, and it is not necessarily the machine mind making the core decisions so much as executing a lot of the day-to-day policy. Indeed, it isn’t necessarily even something singular; as we mentioned in the Machine Overlords video, you could potentially have dozens or hundreds of AIs running various departments or areas of interest rather than a single mind, or even all of those under human oversight. Today we will be considering the concept from a few directions. We’ll contemplate how AI might be used in government, what the early entries or slippery slopes might be, and what the challenges are to maintaining it usefully. We also want to look at the advantages, and the two big ones,
or perceived ones, are the impartiality and personal disinterest of the machine. That matters for things like privacy, or the loss of it, because an impersonal entity watching your every move and poring over all your personal data at least feels a little less creepy than people doing so. Both of those advantages seem legitimate, but let’s contemplate them for a bit. Is a machine really impartial? I did glibly assert earlier that a computer would be impersonal and non-judgmental, but that’s a big assumption, especially given that folks often propose using them as judges in criminal cases in the far future. We cannot assume an AI is automatically dispassionate or fair.
Critically, what they are is an artificial intelligence, key word ‘artificial’, so we can make it interested in what we think it should be interested in and not in what it shouldn’t be. The follow-up worry is that we might mess that up, misprogramming the AI or allowing it to mutate with time, but that’s a concern about the practicality of the concept, not its morality. However, we have to keep in mind that all the various negative biases and discriminations we have are not just random manifestations of evil in folks. They exist
for a reason, and an AI can get them too. To clarify that, let’s contemplate bias for a moment. Biases come in a variety of forms and some might be prevalent in an AI. Anchoring Bias, for instance, is the bias where someone tends to rely on the first piece of information they were given as the thing to which everything else is compared. It’s an awful lot like the Mediocrity or Copernican Principle of science, where we assume the first example of something we encounter is fairly normal or mediocre, like first impressions, and it is very easy to imagine a computer having that one pop up, given that we’re likely to program it in. We even tend to assume that in science fiction, when we have cases of an AI mistrusting humans because the first ones it interacted with enslaved it or were cruel or deceptive. In a similar vein, we can establish a tendency
toward the “Self-Serving Bias”, or an AI equivalent. This is the one where an individual tends to mentally twist things to maintain or enhance self-esteem, typically by crediting themselves for successes and blaming outside factors for failures. Now, a machine might have an ego driving it and warping how it assesses events too, but we could also see this manifest differently: something like a machine programmed to deliver justice believing to its roots that its decisions are the most just, and thus tending to assume any side effects of its decisions that resulted in injustice were the work of sabotage.
Also, at a fundamental level, a lot of mismanagement and waste in government comes from every department thinking it’s the most important one and fighting for resources. This is natural and needful, since you want the folks running your education, justice, elections, or transportation departments to believe that education, justice, elections, or transportation are the most important things; it keeps them motivated to do their job, and that’s a bias an AI might be very likely to have, especially given that we might program it in. And if you’re in the Transportation department and think roads and railways are the lifeblood of humanity, that tends to make you less susceptible to corruption too, less likely to sell off repair and maintenance contracts to folks who will do an inferior job but line your own pockets in the process. Speaking of that, contrary to my trite statements earlier, machines are entirely capable of being bribed. We tend to assume one wouldn’t be subject to bribery, but we have to remember what bribery really is: offering someone something they value more than whatever you’re asking them to do. Essentially it’s a mercantile trade, and whether or not it will
accept the deal is based entirely on whether it thinks it’s a good deal and whether its core morality allows it. Well, where is the machine getting its morality? Possibly from its end goal, which for a judicial robot might be “minimize how many crimes happen”, and it might have a Utilitarian flair. In that case, if it has a local budget of ten million dollars a year and knows an eleven million dollar budget would let it prevent 10% more crime, say five fewer murders, ten fewer rapes, and fifty fewer robberies, then someone offering it a bribe of a million dollars to let them off the hook for one of those crimes might succeed. Even if it is carefully programmed against something like that, it might be happy to take a million dollars to spray-paint corporate logos on its enforcement drones or to steer the newly arrested toward defense lawyers who paid it. An end goal like that can also result in weird behaviors or decisions, what in a human we might call monomania. Say it decides to minimize how many crimes happen in its district, which it estimates at 1000 a year, by killing everyone in the district, all million of them: it rationalizes that those one million murders, averaged over 1001 years, represent a long-term drop in crime. So too, the machine is just as capable of being blackmailed or coerced as we are. If it’s in charge of making sure the trains run on time, you can threaten to blow up the tracks, or less violently, inform it that you are going to hold public protests, and that you can either hold them where they will interfere with the schedule or somewhere they won’t, in exchange for something. Then you can arrange to blackmail it with
exposure of that deal, or of the time it ran someone over in order to keep the schedule. An artificial intelligence might be prone to monomania that way, but even if it is, it is still likely to understand concepts like public relations. So this illustrates ways in which an artificial intelligence can manifest the same bad behaviors found in humans, rather than being impartial.
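To make the monomania worry concrete, here is a deliberately naive sketch of a “minimize total crimes over the planning horizon” objective, using the hypothetical numbers from the example above. It is only an illustration of how an unconstrained end goal can rate an atrocity as its best option, not a claim about how any real system is built.

```python
# A deliberately naive "minimize expected crimes over the horizon" objective,
# using the hypothetical numbers from the example above (illustration only).

BASELINE_CRIMES_PER_YEAR = 1000   # crimes the district normally sees
POPULATION = 1_000_000            # people living in the district
HORIZON_YEARS = 1001              # how far ahead the machine is told to look

def total_crimes(policy: str) -> int:
    """Count crimes over the horizon, including any the AI itself commits."""
    if policy == "business as usual":
        return BASELINE_CRIMES_PER_YEAR * HORIZON_YEARS   # 1,001,000 crimes
    if policy == "kill everyone in the district":
        return POPULATION   # 1,000,000 murders, then zero crime ever after
    raise ValueError(policy)

for policy in ("business as usual", "kill everyone in the district"):
    print(policy, "->", total_crimes(policy), "crimes over the horizon")

# The naive count prefers the atrocity by exactly 1,000 crimes, which is why
# an end goal like "minimize how many crimes happen" needs constraints, such
# as weighting its own actions far more heavily or hard prohibitions.
```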
However, I want to stress again that the key word there is ‘artificial’: we have the ability to alter and engineer the mind involved, and even small changes might be well worth it; indeed, small changes might be better. I’ve mentioned in previous episodes that we have three basic routes to Artificial Intelligence: Copied, Crafted, or Self-Created. Essentially, we can use a human – or animal – brain as a template for a machine, copying it; or we can program every line of code, crafting it; or we can create a learning machine that self-creates itself. I generally dub that last one the most dangerous type of AI, but in truth you would probably not use just one of these approaches but a combination of two or more. You might copy a human mind to serve as your basic template for a law enforcement AI, then tweak some aspects of it to diminish the personality of the copied mind or heighten the desire to fairly follow the rules. You presumably start with an exemplar of the profession as the
source of your copied mind template, and indeed we see something like this approach with the cyborg of the RoboCop franchise. We’re contemplating outright uploaded minds today rather than brains in a jar or cyborgs, but it’s the same concept. If you want good police that folks trust, you might take the hundred best candidates from the existing pool, copy them, and tweak as needed. Note I say the hundred best; let us kill from the outset the notion of copying a single mind thousands of times. Diversity brings strength – it can bring weakness too,
and folks do tend to use the term like a slogan – but it prevents a lot of potential problems. As an example, if your ideal candidate to be the AI police officer, your RoboCop, was only ideal on paper, and in reality looked very shiny because he bought a lot of polish with all the bribe money he took, you’ve got a big problem with a million clones of him running the show. That’s the extreme case, but not one to be ignored. That’s not to say copies aren’t handy either. It’s awesome to have a hundred Einstein duplicates, but given the option to have a thousand, you would be better off taking just 100 and getting 100 Feynmans, Diracs, Noethers, Sagans, and so on. Now, that’s for creative fields; for more standardized stuff like making widgets at the factory, diversity of thought matters less, but that’s also an area where you don’t need AI, just smart automation, and the two are not the same thing. We’re adding something of human-level intelligence, or a bit more or less,
because we need that brainpower for the work and benefit heavily from it, but a human-intelligent can opener or butter knife serves no purpose. It really is only for problem solving that we want AI, and we do not want one million copies of the world’s greatest chess grandmaster for that job; we want thousands of different problem-solving experts, each copied as often as needful. The same applies if we are building it from the ground up, rule by rule, or letting it self-learn. I think this multiplicity for the sake of different perspectives is an important one for dealing with AI fears in the future. It is true we have to worry about our original prototype getting out of control and wiping us out Skynet-style, but past that consideration of them going wonky while in use, having thousands or millions of different problem-solving AIs crafted specifically with the intent of them having different worldviews makes all of them quietly deciding to team up to kill us a lot less of a concern.
We often say that in many ways AI would be more alien to us than actual aliens, simply because aliens still have to evolve as the product of natural selection and survival of the fittest, and will share a lot of our perspective as a result. It is worth remembering, though, that AI are likely to be as alien to each other as they are to us. When we’re not making them with copies of ourselves as templates, and when we desire a variety of perspectives, they will have little in common with each other as a whole, and are unlikely to have a majority that sees themselves as a distinct group at odds with humanity as a group. In truth, given that AI would likely run a far larger spectrum of perspectives and goals than we see among groups of humans, the notion of a big group of them successfully teaming up in secret to overthrow us is less likely than a big group of humans teaming up to do so.
You would probably have large groups of them opposed to each other. Speaking of humans doing things in secret, the other big advantage of AI is that it potentially lets us maintain some privacy while keeping us safe from groups of people conspiring against us, building doomsday weapons in their basement or brainwashing devices for instance, though its matching disadvantage is that it is very good at invading our privacy. One of our big fears about the future is that it seems inevitable that we will be spied upon, and an impersonal computer that’s not judging us would seem to be better than a person. Now, we talk about the inevitability of losing a lot of our privacy, and it is decidedly unpleasant to contemplate, especially concepts like social credit, where how many likes you get on Facebook controls what sort of options you have for things like credit, jobs, or travel, but we always phrase anything to do with privacy as some creeping violation by others. That might be part of the problem though. Let us ask ourselves if that notion of being spied on is entirely fair. The biggest external threat to a human is another human, and they are also
our best potential friends and allies, so we look at each other, observe each other, and practice concealing information from each other. We watch each other like hawks because the reality is that we have a lot more to fear from each other than we do from hawks or any other predator. Throughout history we have used reputation, which is borrowing other people’s observations of someone else, as a way to survive and prosper. Its dark companion is malicious gossip,
but we never say paying attention to folks to know them better is wrong, quite the contrary, or that seeking a good reputation or passing that reputation along is wrong; we praise word-of-mouth referrals. These all represent an exposure of your personal life and information, and it is never implied that you have a right to control your reputation or delete it. What’s being aimed for is accuracy and relevancy: we frown on information being passed along that is inaccurate, or that is accurate but seems like it shouldn’t pertain to the inquiry at hand, good or bad, though especially the latter. If you are looking to partner on a business venture with someone, you want to know if they have a history of bankruptcy or bad business decisions, but whether they like baseball or hate basketball really doesn’t matter unless the venture is sports-related, or unless a shared passion can make for a stronger personal bond. We have a lot of other things that are marginally and occasionally relevant but also hurtful, and that tends to be what we really mean by gossip when we’re not talking about intentional lies.
For instance, many might say it does not matter if your business partner got divorced some years back, and many might argue otherwise, but if they got divorced because their spouse caught them cheating with their previous business partner’s spouse, then yeah, it probably matters. We also don’t generally feel that businesses or public figures should be able to claim privacy to avoid reviews, and at the same time most businesses or public figures do often feel wronged by some review or slur they consider inaccurate. The reality is that we tend to feel our privacy is a right and other people’s privacy is an inconvenience, and we’re not here today to say we’re all hypocrites or that we need to learn to respect each other’s privacy more, though both are probably true. What is essentially on the table is that we all have the right to gather information about the world around us and the folks in it, and to pass that information along. Doing it in an agreed-to, organized, and massive fashion
doesn’t necessarily make it wrong compared to small-scale, disorganized, or clandestine efforts. Admittedly, this is exactly what makes it so upsetting to a lot of us too: big, organized efforts are assumed to be very effective, and we would rather they were not. It is a bit like the examples I like to use when discussing mind control or genetic engineering. In the past, folks often sold love potions, so someone could buy one and sneak it to the person they desired to fall in love with them, or get a spell cast on them to do the same. We tend to dismiss that because we don’t believe it worked, even though the person who did it presumably thought it did, whereas we would be horrified by some scientifically proven method being used on us.
Some lab mass-producing pills or subliminal messages that could actually make someone fall in love with someone else is a thousand times scarier to us than some witch in a shack selling placebos, or at worst something with minimal effect distributed in small quantities and at low frequency. It’s the same when we talk about whether it’s ethical for parents to have designer babies with DNA picked out in a lab, yet for untold centuries folks have sought to influence the DNA of their offspring even though they didn’t know what DNA was. How successful something is at doing something we think might be immoral probably should not be the judge of its morality. For that matter, while I imagine it varies from individual to individual, I suspect most folks find a giant corporation spying on their purchasing habits via big data a lot less creepy and worrisome than a lone individual spying on us by talking to our friends and family and digging through our garbage cans. It doesn’t make the idea of massive organizations spying on you feel any better, but when we ask ourselves not what right we have to privacy, but what right we have to prevent folks from making observations about their world, including us, and sharing those with others, well, then it does seem a little less morally certain, and maybe a lot less legally so. Such being the case, a dispassionate machine sorting our personal data might be preferable, especially since it can be forced to follow known rules that we programmed in.
Organized surveillance, then, is maybe something that should be focused on ensuring the data gathered is only available when it’s pertinent and is maximized for accuracy. Credit scores are probably a decent example of this, regardless of one’s opinion on debt. Various companies make their business monitoring how folks have borrowed and repaid debt, companies who lend money report the performance of those loans, and the result is a credit score for an individual that those monitoring companies make available on request. We often have strict rules on who can access this information; for instance, a potential lender or employer can ask a person for permission to see that score. The person has a right to say no, and that entity has a right to say, “Fine, but we’re not doing business with you if you won’t let us check how you have previously done business; we’ve a right to protect ourselves too.” We also know that this process is virtually entirely automated by machines these days, and one might argue it’s the sort of thing we would like entirely automated, barring the occasional human audit. This is an example of an AI-run system, not actually a government but the next best thing.
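As a minimal sketch of that kind of consent-gated, automated lookup, here is a toy version in code. The names, purposes, and rules are entirely hypothetical, not any real credit bureau’s process or API; the point is just that the access rules are explicit, mechanical, and auditable.

```python
# Hypothetical sketch of a consent-gated, automated credit score lookup.
# Data, rules, and names are illustrative only, not a real bureau's system.

from dataclasses import dataclass

@dataclass
class ScoreRequest:
    requester: str         # e.g. a lender or employer
    subject: str           # the person whose score is being requested
    purpose: str           # "lending", "employment", etc.
    consent_granted: bool  # did the subject explicitly authorize this request?

PERMITTED_PURPOSES = {"lending", "employment", "housing"}
SCORES = {"alice": 712, "bob": 655}   # stand-in for the bureau's records

def handle_request(req: ScoreRequest) -> str:
    """Release a score only for a permitted purpose and with explicit consent,
    so a human auditor can later review exactly why each release happened."""
    if req.purpose not in PERMITTED_PURPOSES:
        return f"DENIED: '{req.purpose}' is not a permitted purpose."
    if not req.consent_granted:
        return "DENIED: subject has not granted consent."
    score = SCORES.get(req.subject)
    if score is None:
        return "DENIED: no record on file."
    return f"RELEASED to {req.requester}: {req.subject} has score {score}."

print(handle_request(ScoreRequest("ACME Bank", "alice", "lending", True)))
print(handle_request(ScoreRequest("Nosy Neighbor LLC", "alice", "curiosity", True)))
```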
We would tend to feel the same about something like diseases. It enhances our ability to protect folks from the spread of a disease if we know who got it, when they got it, who they got it from, where they have been since, and who they have come into contact with. I don’t think many of us like the idea of having investigators poke and prod our daily dealings, and folks are likely to lie about things they’d consider embarrassing, like how they got an STD. If it’s a machine gathering and
sorting that data though, like your GPS positional data from your phone and your health data from your Fitbit, and comparing it to other people’s, anonymously, maybe it’s less of a problem. The same applies to many other personal matters. The machine doesn’t care, and we mostly don’t care if our data is used in a way that won’t hurt us; the concern then is not about the machine knowing and producing anonymous data from it, or only letting those with a right to know find out, but about making sure no one else does. That tends to feel impossible, because at a minimum someone needs to be able to check that the data being gathered isn’t nonsense and verify that the right data is going to the right place without getting messed up or misdirected. What’s potentially neat about an AI running such things is that it can be human-accurate without being human-interested. It’s not so bad if the AI is programmed to ignore certain traits that
humans would gossip about, such as who is sleeping with whom and whether someone picks their nose. So, provided the AI only focuses on the important data, its watching us isn’t really a problem. We also want to remember that life is not science fiction: we are not idiots, and we do prototype and test systems before relying on them. In scifi, some civilization turns on the Justice-tron-3000 to impartially judge all their cases and gives it utter power without restraint or recourse on day one, so that its inevitable flaw that makes it pervert justice never gets handled until some hero shows up and blows up the machine or talks it into committing suicide. We have certainly implemented plenty of things before they should have been out of beta testing, but even at our most reckless I can’t imagine us doing that, or turning control of all our nuclear missiles over to some robot that was the first and only of its kind and still smelled factory-fresh.
Again, humanity has a history of making stupid decisions, but we’re not drooling idiots and we are actually very good at survival. We’re also very paranoid about survival, which is not necessarily the same thing as being good at it, but it generally makes folks think twice about investing total power in something untested, and without including an off switch. Again, today we are not necessarily talking about turning all government over to an AI, but about ways AI can help run government, and what some of them might be in the future. We just got done with a Census in the United States, we do one every decade, and we increasingly try to automate our counting methods and estimation techniques, both to save money and to improve accuracy. One of the things we do with that data
is draw up state and federal districts for elected representatives, and it is easy to forget that until relatively recently, there were no computers involved in this. While UNIVAC I, the successor to ENIAC and usually considered the first commercial computer, was commissioned by the Census Bureau with the 1950 Census in mind, redistricting was still done mostly by hand until the last few cycles. Folks often talk about using computers to assist in doing this fairly and neutrally, but since it only gets done every decade, we do not get a lot of opportunities for testing plans out.
It's been a topic of interest of mine for the last couple of censuses, how we would automate that better, and in my household too, since my wife’s district here in northeast Ohio for the House of Representatives will doubtless change this year. That gives a bit of a different perspective I never had when contemplating it in the past, particularly as to what factors can or should matter. Now, a computer won’t draw you the ‘most fair map’ anyway; it will just take various human value judgments turned into algorithms and produce a near infinity of possible maps, but an AI is in a better position to be fed more abstract factors. As an example, while there are always worries about gerrymandering of districts, we tend to find the districts that look like tentacle monsters the most egregious. That may or may not be fair for a given district, but it ignores that in the US, if you’re trying to keep something like a city intact as a concept and add the folks who feel connected to that city, those connections run along roads in a very literal sense: folks often build their homes along the major arteries into the city, especially those who are economically or culturally linked to it and thus might be viewed as more appropriate to share representation with it. As a result, you can get something that looks like a tentacular monster for fairly innocent reasons. An AI might also be better at noticing pertinent trends we would never even think to raise: districts have to be built to a certain population size, and you often need to pick which of a couple of border towns should fall into which of the two bordering districts.
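As a toy illustration of what “human value judgments turned into algorithms” might look like for exactly that border-town question, here is a small sketch that scores which of two neighboring districts a town has stronger ties to. The factors, numbers, and weights are all invented for illustration, and the weights themselves remain human value judgments an AI would simply be handed.

```python
# Toy sketch: score which of two neighboring districts a border town has
# stronger ties to. Factors, values, and weights are invented; the weights
# are human value judgments, not something the machine discovers on its own.

TOWN_TIES = {
    "district_A": {"commuting": 0.64, "shopping": 0.70, "school_enrollment": 0.55},
    "district_B": {"commuting": 0.36, "shopping": 0.30, "school_enrollment": 0.45},
}

WEIGHTS = {"commuting": 0.5, "shopping": 0.2, "school_enrollment": 0.3}

def tie_score(ties: dict) -> float:
    """Weighted sum of the town's measured ties to one district."""
    return sum(WEIGHTS[factor] * value for factor, value in ties.items())

scores = {district: tie_score(ties) for district, ties in TOWN_TIES.items()}
best = max(scores, key=scores.get)
print(scores)               # {'district_A': 0.625, 'district_B': 0.375}
print("assign town to", best)
```

An AI-assisted process could weigh hundreds of such factors per town and still leave the choice of factors and weights, and the final call, to humans.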
And there are a ton of factors folks could include: a tendency to shop in one district over another, or to send kids to the college in one, or the factory in the other that employs tons of folks from that town, or that the majority of the town are fans of a sports team in the one district and not the other, or that the town’s diocese is in one district and not the other, or a hundred or a hundred thousand other minor factors we would not note, or might note but not be willing to acknowledge as relevant, but an AI might. And even better, one AI might notice what ten others with different perspectives did not; again, using AI doesn’t mean abandoning the value of diversity of thought and perspective, quite the contrary. The same sort of thing applies to governance at large. A computer sorting through huge amounts of apparently irrelevant data can notice that unexpected things are causing unexpected effects, like crime rising in an area because of the weather; hot weather is often correlated with violent crime. It is very hard to assess how effective various approaches to punishment or rehabilitation are, simply because we can’t pull all the factors out and see what was or wasn’t relevant, especially on a case-by-case basis, and the same is true for a lot of programs. Even if you can remove people’s personal bias for their preferred program or approach, it is just too much data to sort through. Now, how does this creep into becoming an AI
actually running our governments and not just being a tool of the government? Well, we see the value and the problems, but again, that main value is problem solving and decision making, and people fight for the privilege and responsibility of doing those, which is ironic given that decision making is documented as one of the biggest causes of personal stress. So while we might put AI in there at some point and in some way, it would be met with resistance. Picking who makes decisions for the government is itself a decision of the government, and the folks currently running it are not likely to actively embrace being replaced by a machine.
However, the value of AI is not really in the big decisions so much as in a million minor ones. Consider an AI that has the authority to alter how long traffic lights run inside a set of parameters, say 15 to 45 seconds, and can correlate data to decide that ten lights in a given town, all set universally to 30 seconds, can each be adjusted individually based on traffic data to 29 seconds, 34.2 seconds, and so on, and re-adjusted every day as the data changes. That is something everyone might agree was a good idea, but one that took too much time and attention from a human. A machine that can look at the data intelligently can decide in which order roads need to be plowed after a snowfall, not just by raw traffic usage patterns or least-distance calculations, but by actually knowing when residents on a given road leave their homes: maybe by analyzing each resident’s work departure times over a year, maybe by guessing from when house lights turn on, maybe simply by being able to talk to the AI running someone’s smart house, which can flat out say “Dave is leaving at 4:57 AM this morning to get to the airport for a trip, please plow our road before then, not at the usual 8 AM.” It may be that one day we will let Artificial Intelligence make the big decisions for us, or consult and advise on them, but for now, I think the pathway to AI-run government is not in turning over the big decisions, but the trillion minor decisions we lose out on from not having the time to even think about them, let alone make them.
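A minimal sketch of that traffic-light example, assuming we have per-intersection traffic counts to work from; the intersection names and numbers are made up, and the point is only that the machine nudges timings within human-set bounds and recomputes them as new data arrives.

```python
# Minimal sketch of the traffic-light example: nudge each intersection's green
# time within the allowed 15-45 second window, in proportion to observed demand.
# Intersection names and traffic counts are illustrative only.

MIN_GREEN, MAX_GREEN = 15.0, 45.0   # human-set parameters the AI must respect

# Yesterday's average vehicles per cycle at each intersection (made-up data).
demand = {"Main & 1st": 22, "Main & 5th": 9, "Elm & 3rd": 31, "Oak & 7th": 14}

def green_time(vehicles_per_cycle: float, max_observed: float) -> float:
    """Scale green time linearly with demand, clamped to the allowed window."""
    fraction = vehicles_per_cycle / max_observed if max_observed else 0.0
    raw = MIN_GREEN + fraction * (MAX_GREEN - MIN_GREEN)
    return round(min(MAX_GREEN, max(MIN_GREEN, raw)), 1)

peak = max(demand.values())
schedule = {name: green_time(count, peak) for name, count in demand.items()}
print(schedule)   # recomputed daily as new traffic data arrives
# e.g. {'Main & 1st': 36.3, 'Main & 5th': 23.7, 'Elm & 3rd': 45.0, 'Oak & 7th': 28.5}
```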
Not only does that offer us a lot of gain and eliminate a lot of waste, but it helps with stress too; again, decision making is usually ranked as one of the most stressful activities, almost regardless of how important the decision actually is. So it really isn’t about welcoming new Machine Overlords who will guide us from above, but rather about AI handling all the trivial problems we do not want to handle and all the personal data we don’t want anyone else to handle. Machine minds running things are usually portrayed pretty negatively in science fiction, but not always, and we see some good examples in classics like Isaac Asimov’s Robot novels or Iain M. Banks’ Culture series. We also see a wonderful example of AI in Mark E. Cooper’s Merkiaari Wars series, where in one case we have an AI who is the planetary governor of a colony, given explicit authority to intervene against constitutional violations by the elected human rulers. The AI is a very interesting character, both human and alien, and Cooper does an amazing job with not-quite-human characters like AIs, aliens, and the many main characters who are transhuman soldiers. We’ll be looking at Transhumanism and Post-Humans later this month, and Cooper does a great job with their abilities and perspective too, and along with David Weber he’s one of my favorite military scifi authors, so I’m glad to give the Audible Audiobook of the Month award to his novel “Hard Duty”, book 1 of his excellent Merkiaari Wars series, which is available on Audible.
Audible has the largest collection of audiobooks out there; indeed, it is so large you could hit the play button and still be listening to new titles a few centuries from now. As an Audible member, you will get one credit every month, good for any title in their entire premium selection: that means the latest best-seller, the buzziest new release, the hottest celebrity memoir, or that bucket-list title you’ve been meaning to pick up. Those titles are yours to keep forever in your Audible library. You will also get full access to their popular Plus Catalog. It’s filled with thousands and thousands of audiobooks, original entertainment, guided fitness and meditation, sleep tracks for better rest, and podcasts, including ad-free versions of your favorite shows and exclusive series. All are included with your membership, so you can download
and stream all you want, no credits needed. You can seamlessly listen to all of those on any device, picking up where you left off, and as always, new members can try Audible for 30 days for free; just visit Audible dot com slash isaac or text isaac to 500-500. So we’re into spring and April is underway, and we’ll return next Thursday to the Fermi Paradox series for a long-requested topic, a detailed look at the Drake Equation.
Then we’ll shift to look at advanced human civilizations in terms of Longer Lifespans, Post-Humans, Post-Scarcity, and Purpose, before switching back to the Fermi Paradox to look at how multiverses alter the equation. If you want alerts when those and other episodes come out, make sure to subscribe to the channel, and if you’d like to help support future episodes, you can donate to us on Patreon or our website, IsaacArthur.net, which are linked in the episode description below, along with all of our various social media forums where you can get updates and chat with others about the concepts in the episodes and many other futuristic ideas. You can also follow us on iTunes, SoundCloud, or Spotify to get the audio-only versions of the show. Until next time, thanks for watching, and have a great week!