Can Democracies Compete in the AI Race? | #innominds S2EP7 (Part 2)
Yeah. In the film The Social Dilemma we talk about this lab at Stanford called the Stanford Persuasive Technology Lab, where the co-founders of Instagram studied and where I took a class on behavior design. The professor, BJ Fogg, is very famous. Notably, people need to understand that it wasn't just a lab where evil geniuses twirled their mustaches saying, "How do we manipulate and persuade the world?" It was actually about how to use persuasive technology for good. So what do we mean by persuasive technology? Just like a magician knows something about your mind that you don't know about your own mind, and it's that asymmetry of knowledge that makes the magic trick work, persuaders have an asymmetry of information: they know something about how your mind is going to respond to a stimulus. If I put a red notifications badge on that social media product, it's going to make people click on it more than if I put a blue one. If I make it so that you pull to refresh like a slot machine, it'll be more addictive; that's a stronger persuasive technique to keep you coming back for more than if I don't have the pull-to-refresh feature on the product.

Hi, welcome back to Innovative Minds. I'm your host here on TaiwanPlus, and with me are Tristan Harris and Audrey Tang. Building on our discussion of the dangers of persuasive technologies, let's dive further into the topic of AI ethics. Tristan, how can we establish ethical standards to ensure AI is integrated into our lives in a responsible manner?

A favorite quote of mine from a mentor, Daniel Schmachtenberger, is that you can't have the power of gods without the wisdom, love, and prudence of gods. When you match power with wisdom, love, and care, you basically live in a stable society. But if you decouple power from the amount of care that's needed to wield that power, you are going to end up in dangerous scenarios. Think of it like a biosafety level 4 laboratory: you have a lab with this crazy power to develop pathogens that could spread around the whole world and be very dangerous, and you don't want that lab running the safety practices of your dentist's office; you probably want the people in the big balloon suits. So we have to make sure we're matching the safety practices, the level of wisdom, love, prudence, and care, with the level of power we're distributing. The thing that worries me about generative AI is that we're rolling out all of this new power that's going to affect so many different aspects of our society, and we're not coupling or binding that power to the level of care that's required. That's really the principle I think we need to attend to. But I'd love, Audrey, if you disagree with any of that or want to add something.

Yeah, totally. And the point we made earlier about liability clauses, and the incentives we are bridging, is exactly to make the competition work on the level of who is more careful than the other, not who is more powerful than the other. I think that's the narrative frame that was basically missing from the first contact and is very sorely needed in this second contact.

Just to link a couple of concepts here, because this was profound for me when I first realized it: part of being caring is actually caring about the whole, how whole can your perspective be, how all-encompassing. And if you think about what externalities are, externalities are a failure of seeing the whole, because the premise of an externality is that you're not accounting for something; your vision is missing something.
You have the power of gods, of Zeus, and you bump your elbow and accidentally scorch the Amazon, because your level of awareness didn't match the level of your power. So when I think about a competition for care, it's a competition for how much you can be with the whole, because part of the problem of any incentive, maximizing GDP, or even minimizing CO2, or increasing attention or engagement, is that you're competing for a narrow incentive, and any narrowly defined metric is by definition not reflective of the whole thing. So one of the challenges we face is how to reorient society to be competing for care for the whole, because when we have the power to shape the entire biosphere with our energy economy, and the power to shape the entire information sphere of the whole world with our attention economy, digitally mediated by social media, we have to have those systems care for the same level of wholeness that they are impacting. That really is, I think, the central principle.

Yeah, definitely. And just as a caring relationship between parent and child requires that whenever the child cries the parent responds in an immediate fashion, so does this listening skill need to work, so that when the generative AI company detects that there is actually somebody crying, somebody being harmed by the injustice their system is propagating, we need to measure how quickly that concern is heard and how quickly the realignment, that is to say the tuning of the system, is going to happen. Is it within minutes, within hours, within days? If the answer is within years, then we have a real problem.

Right, and that would reflect the fact that there are harms accruing that are not being accounted for, meaning there's a harm showing up in the whole that's not seen by the whole. There's a gap, essentially, in the amount of care that we're able to provide compared to what we need to provide. So I totally love that, and that's why your work is so important, because you're using technology to augment that, so we can better close that complexity gap. One metaphor I sometimes use: I push a button and it impacts ten dimensions of reality, there are ten dimensions of reality that are now different, but my solutioning only happens in three dimensions. So I'm trying to do cleanup in the ocean and I'm trying to clean up the air, but it turns out that when I pushed the button I was impacting seven more dimensions that I couldn't see, and those seven dimensions are going to accrue over time and create more fragility or toxicity or pollution. I feel like what we need to do, what your work is showing, is how to increase the dimensionality of care to match the dimensionality of power, and to do it using technology. So it's not an anti-technology conversation; you're really showing how to do it with technology.

Right, exactly. It's like a freshly frozen ice sheet on a river, and we're crossing it. We want to pay attention, of course, to the brittle parts so that we don't all fall into the frozen river, but we want to do it in a way that's decentralized, meaning each person is at a different spot on the river, and we need some sort of communication so that anyone who discovers a vulnerability in the ground we're treading can notify everybody.
This is like dimensional scouting, so that people can understand where the most existential risks are, and then everybody pays attention and takes care of that. So the ability to sound the alarm, and the ability to pay attention and take care of each other, is paramount in this day and age. And one last thing, which is directing attention to those who can repair it before those who could take advantage of it. I think about bug bounty programs in security systems, where we want to make sure that those who discover a vulnerability in, you know, the thin ice of the iPhone's security system asymmetrically point attention to those who can repair and care for that area rather than those who would exploit it. Extending the analogy to AI systems, we want to make sure there's more power, or at least privileged power, going to repair and care rather than to the people who could exploit those vulnerabilities.

Right, the most careful should win the prize.

Yeah, exactly.

I understand that you propose a decentralized system of vigilance, allowing all parties to report AI vulnerabilities and work together towards finding solutions. You are also in favor of regulation. But let me challenge you a bit. American commentator Noah Smith once wrote that trying to intentionally slow down the progress of industrial technology is a bad idea, because if history has taught us anything, it is that we have no idea which technologies will actually displace human workers, kill jobs, and decrease wages in the future. So my question to you is: could over-regulation be a brake on innovation?

So this is a really good question, and one of the arguments people make against what are called the doomers, or the people who are concerned with safety, is that we always have moral panics about new technologies: the Luddites were worried about the previous Industrial Revolution technologies, like you're saying, and there were even societies, I think in Turkey, that banned the printing press when it first arrived because they were worried about what it would do. So just to make sure we're steel-manning, or at least mentioning, that side of the argument. I will first say that I actually don't have an opinion on whether AI will replace or augment jobs; I'm not an expert in that and I don't claim to have a point of view. What I'm interested in, what I'm concerned about, is that AI creates kinds of power that can be used in very catastrophic ways, or could have catastrophic impacts, and that is what makes it unique. The printing press arguably did lead to a hundred years of war, or could cause a real disruption like that, and AI could do that too, but the printing press didn't automatically have the ability to take a nuclear power plant and break it. Generative AI does have the ability to hack into that kind of critical infrastructure. Generative AI does have the ability to overwhelm our legal system and our court system. Generative AI does have the ability to take open societies that rely on the authenticity and veracity of information and flood them with new kinds of targeted information that could not just influence people's political opinions but could be used in ways I won't discuss on this podcast, in pretty catastrophic ways. So I think what we have to do is make sure we're identifying how big a problem the misuse of that technology could be, and how bad an accident could be.
One thing to think about: imagine the biggest possible accident a single person could cause. How big an accident could a single person cause in the year 1900? Not that big. But in the year 2023, a single person can cause a pretty big accident if they're positioned in exactly the right place, like working in a nuclear silo or at a biosafety level 4 lab. So one of the things we have to think about is that as we're increasing everyone's power, on top of a world that is more reliant on the continuity of existing, very powerful infrastructure like nuclear power plants and so on, AI does create much more risk that we have to account for and think about very, very carefully. That's the main thing I believe about what we need to do to regulate AI.

Yeah, so this is about mitigating the risk; this is not about stopping GPU production. I think both of us actually agree with Noah in the sense that saying "let's not produce GPUs" is not the most effective response at this point. On the other hand, mitigating the risk, exactly as you said: gain-of-function research doesn't take place just everywhere and anywhere; they take care to protect against the biohazards, and even then there were some accidents. And so the statement that I signed, and I think Tristan also signed, says that mitigating the risk of extinction from AI, including abuses and misuses, should be a global priority alongside other societal-scale risks like pandemics. We said "alongside" especially because there are existing ways to design care around, say, preventing pandemics and preventing gain-of-function research from causing global-scale issues.

Yeah, exactly, exactly.

Now I have a follow-up question on your suggestion of mitigating risks rather than inhibiting innovation. From a geopolitical point of view, one way of looking at AI nowadays is that states and companies are engaged in an arms race where the first mover gains an advantage. There is a risk of democratic states falling behind authoritarian ones, which could have severe consequences. How would you address this concern?

Yes. I think what I'm hearing you say is that obviously AI confers power to the states and the companies that adopt it first, and once that power is conferred it starts a race, and if you do not coordinate that race, the race ends in tragedy, if it's dealing with the commons, or with losing control; it's a race-to-the-cliff type of scenario. And right now that's where we're at. We are in a race between for-profit companies that are building artificial general intelligence. I was actually just having dinner last night with people who lead safety work at a couple of the major AGI labs, and they said that if they could, they would actually have the whole world not pursue artificial general intelligence, because they believe it's too dangerous; that's what they would prefer. However, they don't know how to coordinate that outcome, and because they don't know how to coordinate that outcome, and they can't stop China from pursuing it and so on, and they can't stop the other labs from pursuing it, they believe they have to build it and simply do safety right, align it, and get there first. The question is: is there actually a way to do that?
Now, to your point, there's one version of this, which is democratic states losing ground to authoritarian states in using AI to get ahead. There's also a different aspect, which is that open societies are more vulnerable right now to the capabilities that generative AI creates. In a society where there's no surveillance, and I live in the United States, there's no surveillance, at least not visible surveillance of everybody, in the same way that there is in, say, China, that means that if I were on my computer right now and wanted to explore how to synthesize certain pathogens, that's not something the government can easily track while I'm just playing around on my own computer, whereas in China that is probably going to be different; that's going to be more locked down. So there are two issues I heard you mention. One is how Western states use and deploy AI to maintain an advantage over closed societies, because if they don't, they'll just lose the race economically and in terms of technological development. The other side is how open societies maintain resilience against the new capabilities of generative AI. I'd love to hear what Audrey thinks about balancing and managing that, because it's a really big open question.

Yeah. So again, taking the pandemic as an example, Taiwan did two things early in 2020. One is, of course, shutting down international travel, stopping the virus at the border. The second is the daily press conferences at 2 p.m., which didn't just teach epidemiology, which would be very top-down, but rather let any journalist ask the minister anything and everything until they ran out of questions for the day. Through this kind of everyday 2 p.m. conversation, people generally started to understand epidemiology, started to understand why it's important to wear a mask and keep distance, that wearing a mask is not effective unless you also clean your hands, and things like that. So it increased what we call competence, and when people's competence around an emergent threat increases, people become innovative. The civic technologists discovered how to visualize the availability of masks, how to do contact tracing that is privacy-preserving, and many other things. So increasing the capability of an open society's citizens in response to threats to democratic society is the most important thing the open society can do, and I would argue it is our main advantage, in that solutions and innovations can come from everywhere and anywhere within our society. But it does require us to draw a very clear delineation, basically saying: okay, this is the civic participation platform of Taiwan, but to participate you will have to first show that you're Taiwanese, usually by using a local SMS number. We do have citizens who are physically abroad and want to participate too and don't have a local SMS number, so they can use FIDO or the Citizen Digital Certificate, basically an app on their phone, to prove that they're a citizen. But everybody else is not a party to this democratic conversation. And so this is the idea of stopping those five million fake AI bots in their tracks, akin to the border control that we did in early 2020.
There are so many questions. What I hear you saying, Audrey, is basically that you created a mini shared reality by having this daily press conference, for the whole country to be on the same page, where the information deployed in that channel was for the caring benefit of educating and informing everyone to understand epidemiology, creating transparency and trust. And in the cacophony of what would have been the rest of the media environment, you created a little island of coherence and made sure to refresh that island with new information, I guess every day, right?

Yeah, every day at 2 p.m., and anyone can call the 1922 line to add to the agenda for the next day's press conference.

Right. I guess my question, or what I'm interested in, is: imagine that we're going into the 2024 U.S. elections and we have the maximum incentive for Russia and China and other countries to be flooding the zone, not even with disinformation, but with existing truths spun in ways that maximally divide the population, and amplifying them. What's the equivalent of what we should have in the U.S. that you think would be a reasonable strategy? Is it a daily media weather report, here are the memes being deployed by our adversaries, almost like a weather forecast, or something like that?

Yeah, I think a weather forecast is exactly the right thing, because we learn from the weather forecasters every day, like there's extra ultraviolet light today, and things like that. These are actually quite scientific, sometimes quite technical, but if you hear about it every day it becomes less jargon and more daily vocabulary. When you talk about the weather, you don't just talk about the weather but also the science behind the weather, if you hear the weather people talking about it all the time. And it's exactly the same with the science of countering foreign information manipulation and interference, which the EU people call FIMI. The term is useful because FIMI takes all sorts of forms. I used to say it's not necessarily false, it's not necessarily mis- or disinformation; it is sometimes true, and sometimes just memes that amplify polarization. So we call it FIMI, and FIMI is exactly like a virus, except of the mind, not of the body. The idea is not that a FIMI needs to run its course and we develop antibodies by getting everybody infected without a cure; rather, we have a lab in which we very quickly take the traces of that FIMI and wrap them, the way the code for a harmless spike protein is wrapped into an mRNA vaccine; that's how vaccines are made. So there need to be ways to very quickly identify what the trending FIMI is, create comedy or other narratives around it that render it non-toxic, and then spread that. When you have a vaccine that is even more viral than the viral virus, then your population is safe, because people look at those mRNA strands, people laugh at it, and become immune in their minds against that particular FIMI.

When you think about this in the age of generative AI, is there a way to think about, hey, for all these divisive memes, which again are not going to be untrue, they're going to be true things, but they're going to have a toxicity and a harm, could we ask GPT-4, or the next generative AI system, to come up with jokes or memes that inoculate us against that meme as fast as they're coming?

Yes, yes.
And the future is already here in Taiwan, just not yet evenly distributed to the other parts of the world. We talked about Cofacts, collaborative fact-checking, and at that point they were relying on crowdsourcing: people contributing re-contextualizing, clarifying information around the trending FIMI of the day. Cofacts has been employing language models for quite some time now, exactly the way you describe. Previously there was an imbalance: many FIMI operators work on it as a full-time job, they literally work nine to five to spread a FIMI, whereas the fact-checkers from the community are more like amateurs who do it when they have spare time. But now language models can afford to do this full-time, essentially simultaneously with when the FIMI is first spread. So if you go to the Cofacts website, very often the first clarifying, contextualizing comment comes from a language model.

So the other question I've been wanting to ask you is: how do we make sure that the vaccine, well, I don't even want to use the word vaccine because that's pre-polarized in the United States, how do we make sure that the cure is more viral than the virus? You talk about using comedy. You can do that in a bespoke way, but if we really care about doing this at scale, like we're talking about open societies needing to out-compete closed societies in the age of generative AI, then any kind of manipulated divisive memes being manufactured at scale need some kind of counter-response that's more viral than the incoming virus, and we can maybe use generative AI to get there. I'm just curious to hear a little bit more: how would we do that?

Yeah, exactly. The Cofacts project uses language models because language models are literally the only way to match the speed and the variety of the virus, and synthesizing the cure using generative AI itself is not hindering progress. As I mentioned, we've been working with the top AI labs on something called alignment assemblies. Alignment assemblies are basically a way to steer the AI based on a specific community's needs. For example, the Cofacts community can run a Polis conversation to surface what their ideal mentor, the ideal caretaker of the Cofacts conversation, would be; they would have all the hopes-and-fears conversations and so on with the Cofacts community, to basically raise an ideal prodigy of fact-checking and of synthesizing clarifying information. Then, taking this collective will of the community into the large context windows that many language models now have, one can use what the Anthropic people call Constitutional AI to train an adapter on top of a large language model that makes the model behave the way the community wants it to behave. And this aligned, not fully aligned but aligned-to-community, language model can then be used to predictably generate more reliable narratives when it comes to synthesizing the cure.
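To make that pipeline a bit more concrete, here is a minimal sketch in Python, using the Hugging Face transformers and peft libraries, of the general idea: a small LoRA adapter sits on top of a frozen base model, and principles distilled from a community deliberation condition how it drafts a first clarifying reply to a trending rumor. The model name, the example principles, and the prompt format are invented placeholders for illustration; this is not the actual Cofacts or alignment-assemblies implementation.

```python
# Illustrative sketch only: a community-aligned "clarifier" built from a base
# language model plus a LoRA adapter. Model name, principles, and data are
# hypothetical placeholders, not the actual Cofacts / alignment-assemblies code.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

BASE_MODEL = "some-open-base-model"  # placeholder: any causal LM checkpoint

# Example principles distilled from a community deliberation (e.g. a Polis
# conversation); these three are invented for illustration.
COMMUNITY_PRINCIPLES = [
    "Correct the record without mocking the people who shared the rumor.",
    "Cite a checkable source for every factual claim.",
    "Prefer humor that defuses polarization rather than deepens it.",
]

def build_adapter_model():
    """Wrap the base model with a small LoRA adapter; only the adapter weights
    are trained on community-approved clarifications, the base stays frozen."""
    model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
    lora_cfg = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8,
                          lora_alpha=16, lora_dropout=0.05)
    return get_peft_model(model, lora_cfg)

def draft_clarification(model, tokenizer, trending_rumor: str) -> str:
    """Draft a first clarifying reply, conditioned on the community's
    principles (a greatly simplified, constitutional-AI-style prompt)."""
    prompt = (
        "Principles:\n- " + "\n- ".join(COMMUNITY_PRINCIPLES) + "\n\n"
        f"Trending rumor:\n{trending_rumor}\n\n"
        "Clarifying reply:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, max_new_tokens=200)
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

# Usage (illustrative):
# tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
# clarifier = build_adapter_model()
# ... fine-tune the adapter on community-rated clarifications ...
# print(draft_clarification(clarifier, tokenizer, "Rumor text seen spreading today"))
```

In the workflow Audrey describes, the adapter's training signal would come from the community deliberation itself, for example clarifications the community rates highly, and a drafted reply would still go through human review before being posted.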
Hmm. And when you talked about this before, you had comedy writers making those memes funny. Is there a way to do that at scale with language models?

Yes, I think so. The idea from recent research, like Orca and so on, is that you basically need to create a curriculum. Instead of "the more text the better, the more data the better," you need to manually pick, say, a thousand top-notch jokes that make it comedic, because the cure, the facts, the clarifying context, is by nature more viral than falsehood over the long run: falsehood that sounds right may be trendy for a while, but profound truths have a way of lasting. Spiritual traditions are basically the way for profound truths to stay viral across centuries. So what we really need is to take this profound truth, connect it to the conversations of today, and then wrap it in a way that travels. I sometimes look at the "AI notkilleveryoneism" memes on Twitter, and basically that sort of meme is what we're looking for.

Fascinating. I would love to talk to you more about how we can actually apply that going into the U.S. elections. I think we need a whole Manhattan Project just for dealing with that in the U.S., so we'll talk about that offline.
Yeah, Taiwan can help.

It's great to hear that profound truths can withstand the test of time. We just talked a lot about the problems associated with AI and persuasive technologies; now I'd like to talk about solutions. Tristan, could you introduce us to the notion of algorithmic accountability?

Yeah. So algorithmic accountability is actually not an area that I think of as the needed solution for the social media issues, but it's often brought up. Obviously we need transparency into how an algorithm works, and that's necessary, and then if it works in a certain way that we don't like, we need the ability to make it accountable to new goals, and Audrey's work has shown how to get democratic inputs into what those new goals should be. So I'm all for that; I shouldn't say I'm against algorithmic accountability. I think the key thing is just making sure that we don't let a system that is basically a cancer cell simply give us quarterly reports on how fast or slowly it's killing its patient. Because oftentimes when we talk about algorithmic accountability, we talk about making sure that Facebook just gives us research reports on what it's doing, and if Facebook's business model is still directed towards a cancerous incentive, then its transparency and disclosures are, again, only asking a cancer cell to be accountable for its actions, without actually changing the code of the cancer to instead be a healing agent. What we really want is to change the DNA code of the cancer cell and turn it into a healing organism.

Yeah, exactly. And I think "accountability" is sometimes diluted down to just mean answerable, like you can interpellate it: yeah, okay, so what, right? At the very beginning of our conversation we defined it as something like liable, but liability is of course twofold. One is being liable to a fine or something; well, if you're big enough, you just pay the fine. But what Tristan is basically saying is that the liability should also carry within it the duty to change, to basically realign. And when you're realigned toward care, then that kind of accountability is what we're really after.

That's exactly right. There was recently a lawsuit in the United States over PFAS "forever chemicals" that came from, basically, 3M. These are carbon-bonded compounds that literally, as they cycle through our environment, don't break down, and they give people cancer, testicular cancer, stomach cancers, all these horrible things. There was a lawsuit of ten billion dollars against 3M for all the people who've been affected by it, but that doesn't change the fact that their bodies, and their children's bodies, are affected by PFAS, and they're not even phasing them out for another couple of years. So what we really want to make sure is that we don't just have liability in the frame of dollars and costs; as Audrey said, we need to make sure that whenever liabilities are discovered, there's a realignment obligation to eliminate those externalities.

Actually, a question I have for you, Audrey, if you don't mind me taking us a little bit off track, is one of the things I'm struggling with. The externalities our institutions are prepared to deal with are the separable, concrete, attributable, measurable harms; they're not very well equipped to deal with non-attributable, long-term, chronic, and diffuse harms.
I'm thinking of things like air pollution, lead, things we don't discover for a long time: shortening attention spans in the case of social media, long-term polarization, long-term self-image and body-image issues in the case of social media. In general, when we think about E. O. Wilson's line that the fundamental problem of humanity is that we have Paleolithic brains, medieval institutions, and godlike technology, one of the problems of our medieval institutions is that the liability-based framework tends to deal with these concrete, attributable, short-term harms. What we need are new forms of institutions with a fast update rate at finding the more chronic, diffuse, invisible, and non-attributable harms, and then finding ways to realign to eliminate those externalities. I'm just curious to hear your reaction: what is a 21st-century institution that deals with long-term, chronic, non-attributable harms?

Yeah. I think what we are really looking at is new forms of institutions that run on a different interval, a different sense of time, because the institutions you talked about, the quarterly report, the once-every-four-years elections and so on, operate on the time scale of months or quarters or years. And I said earlier in our conversation that if the response to the harms is not counted within hours or days, then we have a real problem. Personally, I think we should settle for nothing slower than a week: if it takes more than a week to go from the surfacing of clear evidence of harm to actionable changes, then we're in big trouble, especially because it is very difficult to reverse the harms. You talk about forever chemicals, the PFAS, and they're here literally almost forever. So there are bound to be people who are closest to the pain, closest to the suffering, closest to the site that first experiences the suffering; they could be researchers in a gain-of-function lab, right? And so we need to work on institutions that let those people sound the alarm bell, and then democratize the solution-making capabilities. For example, around Hugging Face, open-source AI and so on, there are a lot of people who are very interested in working on the sort of LoRAs, the low-rank adapters, that are the alignment filters we just talked about, that align to communities' needs. If we mobilize these people and make sure they don't see themselves as black sheep because they work on uncensored models, but rather have a way to quickly synthesize the cure for any threat and so on, then we have something that taps into the open-source community, but with a conscience, and that's the most important part.

You know, I hadn't connected the dots until just now, but this is one of the reasons that whistleblower protections are so important, because the people who would know that PFAS, for example the forever chemicals at 3M, were dangerous are the people who were closest to working at those companies, who saw the early dumping, because it's going to take ten, fifteen, twenty years down the line for those communities to get cancer. So we have to have a wise version of internal whistleblower protections to, I guess, preempt those long-term, chronic, and diffuse harms. That's one of the ways: institutional reward systems and incentive systems to incentivize, in a decentralized way, everybody who is closest to that harm area and can predict what's going to happen, but to do that in a way that doesn't feel like, what do they call it, being a tattletale, or what's that phrase?
Yeah. Yes. So we talked about bug bounties and so on, which basically make being careful part of the prize: the reward goes to responsible disclosure of the vulnerabilities. We need to take the same idea and apply it to the whistleblowers at the top AI labs, or really to anyone who suffers from the harm or injustice before anyone else, and empower them. Just like anyone can call 1922, the toll-free number, to set the agenda for the counter-epidemic press conference the next day, we need to have something like that as well.

Yeah. Tristan, are there more and more AI researchers reflecting on the consequences of their inventions?

I think, for a beginning, this is a good sign; we know we've done a lot of things right because we keep referring to how we started. One of the, I think, positive effects of The Social Dilemma is that going into this next contact with AI, the world, and people inside of technology companies, are much more conscious of how we can get these risks wrong, how we can get technology wrong. One thing that distinguishes AI from, say, social media is that the people who started the companies literally started them with the notion of how much risk was part of building it, people literally thinking we could break the world, or end the world, if we get this wrong. Imagine if the social media companies, if Mark Zuckerberg and Jack Dorsey when they started Facebook and Twitter, had said, "We need an entire risk team because we know we could wreck democracies and open societies." How differently we would have ended up if we had started with that consciousness. In the case of social media we had to argue, and to this day there are people inside of Facebook or inside of Twitter who don't believe it caused all the polarization; we've had to kind of win that argument over the course of the last decade, and we've been working very hard at that.

So if we're more optimistic, we can celebrate the fact that AI researchers are much more aware of the risks. Is that enough?

No. I think Eliezer Yudkowsky and others have pointed out that there is, I think, a 30-to-1 gap between research and investment into increasing capabilities versus increasing safety. So if you have 30 times as many people making the car go faster as you have investing in the steering and the brake pedals, it's not going to end up very well. Now, if an internal survey of the people who work on safety said that, compared to what they think is enough or adequate, this is plenty, then we'd be fine with the number of people working on it. But right now there's a famous survey in which 50% of AI researchers say there's a 10% or greater chance that humanity basically goes extinct or is severely disempowered by the way AI is currently going. That would be like 50% of the engineers who built a Boeing plane saying there's a 10% or greater chance that the plane goes down the way things are currently going. So, just like the atomic bomb, where they had to calculate the probability that the first bomb would ignite the atmosphere, we should ask labs to make a formal attestation of what they believe the likelihood is that they will disempower or actively hurt humanity, not by intention, but due to the current clock rate of the industry.
And we should ask the question, not "should we slow down AI," but "are we moving at a pace at which we can get this right?" I think that is a unifying question, because everyone can assess it, and I think we're currently moving at a pace at which we're not going to get it right. Again, we have to do this collectively, because China and Russia and, you know, the United Arab Emirates, who built a big open-source language model, are all racing to build it, and we need to be able to move at a pace at which everyone would collectively answer that we're getting it right.

Audrey, in a previous conversation with Professor Yuval Noah Harari, you said that we should favor AI systems that are biased towards harmlessness instead of harm, and towards honesty instead of telling lies. What are the latest developments around this matter in Taiwan?

Yes. As Taiwan's digital minister, of course I have a bias; you may call it my mission, really, or my job description. It's pinned on my Twitter: I prefer an internet of beings rather than just an internet of things; I prefer shared reality over isolating virtual realities; I prefer assistive intelligence that lets us learn collaboratively, rather than machine learning that is authoritarian in nature. So it says so on the tin, right? I admit these are biases, and we're putting all our budget into assistive intelligence, into alignment research and development, testing, and certification, instead of spending really any time on making the large language models even larger, because there are other people doing those things, and I happen to believe that care and alignment are the important things. And partly through recording podcasts such as this one, we want to make sure that people who could go into either line of research, toward power or toward care, consider care not just more noble but more needed, for the continued existence of humanity in general and of civilization in this era in particular. So yeah, I think it's about providing the best research environment, the best social status, privileges, whatever, to balance this current imbalance between power and care. I really see no other way out of the dilemma that we're in.

Totally.

Thank you for your answers. To end this interview, I'd like to ask each of you to give one piece of advice to our audience on how they can make their lives freer in our connected age.

So yeah, one of my suggestions would be to make computers, the internet, and touchscreens (avoid the touchscreen if you can, but otherwise make the touchscreen) a social object: have a conversation across the screen with another human being, as we're having now, or with your friend or child or parents look at the same screen at the same time and have conversations with each other. I think reorienting screens as social objects is going to do wonders for our brain stems, letting us not be addicted to the isolating experience that many people are having at the moment.

Audrey, am I correct that you are a former student of Doug Engelbart?

Yes, but not a personal student; I just read Doug's work, and made a phone call to him, but that was a while ago.

Well, it's very Engelbart, Engelbartian, of you to respond that way. I agree that we need more social experiences with technology, and right now we can notice that the design decisions that inform how all of our touchscreens and home screens work, and Face ID, are all about an individual user touching an individual object, an individual virtual reality.
And I think it's very right to notice that that's not how a lot of objects in our physical reality were designed: toasters and newspapers, puzzles or board games that we do together, those are social objects, and a lot of our rich experiences come from the sociality around technology. It's always easy to tell people, you know, take a break from your phone and disconnect for a while. You didn't ask about some of the things we've done to help tech companies make the world a little bit better, but I'm proud to say that some of my earlier work, on something called Time Well Spent, actually did lead to Apple launching the Screen Time features and the Do Not Disturb features on the phone, helping us live a little bit more of a disconnected life from technology. And if you haven't done it in a while, and I still have to have friends remind me to do this, as simple as it is: just have a friend take away your phone for four hours and let you come back to it in a few hours. Believe me, you will thank them after they take it away, because you don't notice how addicted you are until someone physically takes it away from you. Or try charging your phone in a different room in your house. I did that recently, and for as many years as I've been working on this, simply charging your phone in another room really makes a difference. So yeah, a little advice.

I totally agree. I've been doing that for years now.

That's awesome, you're ahead of me.

It's through small gestures like these that we can free ourselves. I am deeply grateful to both of you for investing your time in this thought-provoking dialogue; together we have evaluated and crafted effective solutions for the future. If you liked today's episode, be sure to subscribe, share, and let us know what you think. See you next time on Innovative Minds.

Hi, I'm Tristan Harris, a co-founder of the Center for Humane Technology, and I'll see you on TaiwanPlus.

Hello, I'm Audrey Tang, Taiwan's digital minister. See you on TaiwanPlus.

Thank you.