Is AI an Existential Threat? | The Agenda


>> Steve: REMEMBER WAY BACK IN LATE 2022, WHEN EVERYONE WAS TRYING OUT THE ARTIFICIAL INTELLIGENCE APPS, HAVING FUN AND BEING WOWED BY WHAT THEY COULD DO? HOW QUICKLY THINGS HAVE CHANGED. JUST MONTHS LATER, MAJOR VOICES IN THE TECH UNIVERSE -- INCLUDING SOME OF THE CORE DEVELOPERS OF THOSE TECHNOLOGIES -- NOW SAY WE NEED TO PUMP THE BRAKES, BECAUSE AI COULD POSE "PROFOUND RISKS TO SOCIETY AND HUMANITY." WITH US NOW FOR MORE, LET'S WELCOME: IN SEATTLE, WASHINGTON: PEDRO DOMINGOS, PROFESSOR EMERITUS OF COMPUTER SCIENCE AND ENGINEERING AT THE UNIVERSITY OF WASHINGTON, AND AUTHOR OF "THE MASTER ALGORITHM"; IN THE DOWNTOWN OF OUR NATION'S CAPITAL: JEREMIE HARRIS, CO-FOUNDER OF GLADSTONE AI AND THE AUTHOR OF "QUANTUM PHYSICS MADE ME DO IT"; AND HERE IN OUR STUDIO, GILLIAN HADFIELD, PROFESSOR OF LAW AND SCHWARTZ REISMAN CHAIR IN TECHNOLOGY AND SOCIETY AT THE UNIVERSITY OF TORONTO AND THE CIFAR AI CHAIR AT THE VECTOR INSTITUTE. GILLIAN, GREAT TO HAVE YOU BACK HERE IN THAT CHAIR AND TO OUR FRIENDS PEDRO AND JEREMIE IN POINTS BEYOND, THANKS FOR JOINING US ON TVO TONIGHT. I JUST WANT TO SET UP OUR CONVERSATION BY READING SOMETHING FROM AN OPEN LETTER SIGNED BY THOUSANDS, THIS WAS BACK IN MARCH -- ELON MUSK SIGNED IT, STEVE WOZNIAK, THE CO-FOUNDER OF APPLE SIGNED IT, MANY OTHERS -- CALLING FOR A SIX-MONTH PAUSE ON AI DEVELOPMENT.

HERE'S AN EXCERPT FROM THAT LETTER. SHELDON, BRING THIS UP, IF YOU WILL, AND I'LL READ ALONG WITH ... >> Steve: LAST WEEK, GEOFFREY HINTON, KNOWN OF COURSE AS THE GODFATHER OF AI, SPENT A LOT OF TIME AT THE UNIVERSITY OF TORONTO. HE LEFT GOOGLE TO WARN OF THE DANGERS OF AI, AND HE SAID: "MAYBE WHAT IS GOING ON IN THESE SYSTEMS IS ACTUALLY A LOT BETTER THAN WHAT IS GOING ON IN THE BRAIN. LOOK AT HOW IT WAS FIVE YEARS AGO AND HOW IT IS NOW. TAKE THE DIFFERENCE.

PROPAGATE IT FORWARDS. THAT'S SCARY." THAT'S GEOFFREY HINTON. OKAY. GILLIAN, HOW WOULD YOU CHARACTERIZE THIS MOMENT IN HISTORY FOR AI? >> Gillian: WELL, I THINK WE ARE AT A REAL INFLECTION POINT IN THINKING ABOUT AI.

AS YOU POINTED OUT, WE'VE SEEN TREMENDOUS ADVANCES. WHAT WE WERE SEEING IN THE FALL WAS EXCITING AND NEW, AND I THINK WHAT WE REALLY ARE SEEING IS, GOSH, MAYBE THERE IS A LOT MORE HAPPENING HERE THAN WE'VE UNDERSTOOD. AND I THINK THAT'S WHAT THE PAUSE LETTER IS ABOUT: HEY, FOLKS, MAYBE WE'RE GOING A LITTLE TOO FAST. >> Steve: PEDRO, WHAT'S YOUR VIEW ON THAT? >> Pedro: I THINK WE'RE NOT GOING TOO FAST AT ALL. IN FACT, I THINK WE'RE NOT GOING FAST ENOUGH. IF AI IS GOING TO DO THINGS LIKE CURE CANCER, DO YOU WANT TO HAVE A CURE FOR CANCER YEARS FROM NOW OR YESTERDAY? WHAT IS THE POINT OF A SIX-MONTH MORATORIUM? I DON'T THINK THERE'S AN OUT-OF-CONTROL RACE. I THINK THAT LETTER IS A PIECE OF HYSTERIA. AND THE BIGGER WORRY FOR ME, AND FOR MOST AI RESEARCHERS, IS NOT THAT AI WILL EXTERMINATE US; IT'S THAT A LOT OF HARM WILL BE DONE BY PUTTING IN RESTRICTIONS, REGULATIONS, MORATORIA AND WHATNOT FOR WHICH THERE'S REALLY NO REASON.

>> Steve: JEREMIE, YOUR TAKE ON THAT QUESTION? >> Jeremie: YEAH, WELL, I THINK IT'S CLEAR THAT WE'VE TAKEN SOME SIGNIFICANT STEPS TOWARDS HUMAN-LEVEL AI, IN THE LAST THREE YEARS IN PARTICULAR. SO MUCH SO THAT, AS WE'VE JUST SEEN, WE HAVE MANY OF THE WORLD'S TOP AI RESEARCHERS, INCLUDING TWO OF THE THREE EARLIEST PIONEERS OF MODERN AI, WONDERING ALOUD ABOUT WHETHER WE MIGHT HAVE FULLY CRACKED THE CODE ON INTELLIGENCE. I MIGHT NOT GO THAT FAR PERSONALLY. BUT THIS IS ACTUALLY BEING TALKED ABOUT THROUGH THAT LENS: WHETHER, JUST BY THROWING MORE DATA AND MORE PROCESSING POWER AT THE TECHNIQUES WE HAVE TODAY, WE MIGHT BE ABLE TO ACHIEVE SOMETHING LIKE HUMAN-LEVEL OR SUPERHUMAN AI. IF THAT IS ANYTHING LIKE THE CASE, IF THAT IS EVEN IN THE BALLPARK OF POSSIBILITY, WE NEED TO CONTEMPLATE SOME FAIRLY RADICAL SHIFTS IN THE RISK LANDSCAPE THAT THIS TECHNOLOGY EXPOSES US TO, AND SOCIETY MORE BROADLY. >> Steve: FAIRLY RADICAL SHIFTS MEANS WHAT? >> Jeremie: WELL, THERE ARE A LOT OF DIFFERENT DIMENSIONS.

ONE KEY ONE IS MALICIOUS USE. AS THE SYSTEMS BECOME MORE POWERFUL, THE DESTRUCTIVE FOOTPRINT OF MALICIOUS ACTORS THAT USE THEM JUST GROWS AND GROWS AND GROWS. WE'VE ALREADY SEEN EXAMPLES -- FOR INSTANCE, CHINA USING POWERFUL AI SYSTEMS TO INTERFERE IN TAIWAN'S ELECTORAL PROCESS.

WE'VE SEEN CYBERSECURITY THREATS -- MALWARE -- GENERATED BY PEOPLE WHO DON'T KNOW HOW TO CODE. IT'S A FUNDAMENTAL SHIFT IN THAT LANDSCAPE. AND THEN OF COURSE THERE'S THE RISK OF CATASTROPHIC AI ACCIDENTS, WHICH FOLKS LIKE GEOFF HINTON ARE FLAGGING.

I THINK THOSE TWO BROAD CATEGORIES ARE REALLY BIG. THERE'S WORKFORCE DISPLACEMENT AND OTHER THINGS TOO. BUT WHAT REALLY GETS ME TO PERK UP IS MALICIOUS USE AND THE ACCIDENT PIECE. >> Steve: GILLIAN, LET ME GET YOU TO CIRCLE BACK TO THE INITIAL COMMENT BY PEDRO, THAT IF YOU'RE LOOKING FOR A CURE FOR CANCER, YOU DON'T GO SLOW, YOU GO FASTER. WHAT DO YOU SAY? >> Gillian: I ACTUALLY AGREE THERE ARE LOTS OF TREMENDOUS BENEFITS, AND I THINK WE ABSOLUTELY WANT THOSE BENEFITS AND WE WANT TO CONTINUE DOING OUR RESEARCH AND BUILDING THESE SYSTEMS. SO I THINK THAT'S A REALLY IMPORTANT POINT.

BUT I THINK ONE OF THE THINGS THAT'S REALLY CRITICAL IS THAT, IF YOU THINK ABOUT OUR MEDICAL RESEARCH, IT TAKES PLACE WITHIN A REGULATED STRUCTURE. WE HAVE WAYS OF TESTING IT. WE HAVE CLINICAL TRIALS. WE HAVE WAYS OF DECIDING WHAT PHARMACEUTICALS AND MEDICAL DEVICES AND TREATMENTS TO PUT OUT THERE.

AND WITH AI, WE SEE SUCH A LEAP IN CAPABILITY THAT A LOT OF THE TECHNIQUES, THE TOOLS, THE WAYS IN WHICH IT TRANSFORMS RESEARCH AND WORK HAVE BASICALLY OUTSTRIPPED OUR EXISTING REGULATORY ENVIRONMENTS, AND WE HAVEN'T BUILT THE ONES THAT WOULD MAKE THIS SAFE ENOUGH -- IN THE EXACT SAME WAY WE REGULATE ALL THE REST OF OUR ECONOMY, TO MAKE SURE IT'S GOOD AND SAFE AND WORKING THE WAY WE WANT IT TO. >> Steve: PEDRO, WHAT ABOUT THE NOTION THAT WHEN YOU'RE SEARCHING FOR THE CURE FOR CANCER, THERE ARE A LOT OF PATIENT PROTECTION REGULATIONS IN THERE THAT ARE NOT THERE WHEN IT COMES TO AI, AND THEREFORE WE OUGHT TO BE CAREFUL. WHAT SAY YOU? >> Pedro: I THINK THIS ANALOGY THAT PEOPLE KEEP MAKING BETWEEN AI AND BIOLOGY AND MEDICINE AND THINGS LIKE DRUG APPROVAL IS MISTAKEN -- AND DRUG APPROVAL ITSELF, BY THE WAY, HAS LOTS OF PROBLEMS. THE DRUG APPROVAL MECHANISMS THAT WE HAVE IN PLACE COST LIVES, AND, YOU KNOW, EVEN THE FDA IN THE UNITED STATES, FOR EXAMPLE, UNDERSTANDS THAT THAT NEEDS TO CHANGE. SO THAT IS HARDLY A GOOD MODEL.

BUT MORE IMPORTANT THAN THAT, REGULATING AI IS NOT LIKE REGULATING DRUGS. IT'S MORE LIKE REGULATING QUANTUM MECHANICS OR MECHANICAL ENGINEERING. YOU CAN REGULATE THE NUCLEAR INDUSTRY, OR CARS, OR PLANES; YOU CAN, SHOULD, AND DO REGULATE SPECIFIC APPLICATIONS OF AI. BUT REGULATING AI QUA AI JUST DOESN'T MAKE SENSE.

THE PROPOSALS THAT I'VE SEEN DON'T MAKE ANY MORE SENSE THAN THAT DOES. SOME PLACES, LIKE EUROPE, HAVE THESE THINGS LIKE THE AI ACT THAT ATTEMPT TO DEFINE AI. THE DEFINITION IS SO BROAD THAT ANYTHING -- OR NOTHING -- COULD BE AI, AND THEY ARE HAVING TO REVISE IT IN LIGHT OF CHATGPT. I THINK WE REALLY NEED TO ASK QUESTIONS BEFORE WE SHOOT: UNDERSTAND THE TECHNOLOGY, FIGURE OUT WHAT NEEDS AND DOESN'T NEED TO BE REGULATED, AND DO WHAT OPENAI HAS IN FACT BEEN DOING, WHICH I THINK IS GREAT, WHICH IS: DON'T RESTRICT IT. THE BEST WAY TO MAKE AI SAFE IS TO PUT IT IN THE HANDS OF EVERYBODY SO EVERYBODY CAN FIND THE BUGS. WE KNOW THIS IN COMPUTER SCIENCE.

THE MORE COMPLEX THE SYSTEM, THE MORE PEOPLE NEED TO LOOK AT IT. WHAT WE DON'T NEED IS, OH, LET'S PUT THIS IN THE HANDS OF A COMMITTEE OF REGULATORS OR EXPERTS AND THEY'RE GOING TO FIGURE OUT WHAT'S WRONG WITH IT. THAT'S EXACTLY THE WRONG APPROACH.

>> Steve: WE'RE GOING TO COME BACK AND DO MORE ON REGULATION LATER IN OUR DISCUSSION. BUT LET'S TRY THIS: JEREMIE, DO YOU THINK AI POSES AN EXISTENTIAL RISK TO HUMANITY? >> Jeremie: I THINK THAT THE ARGUMENT THAT IT DOES IS ACTUALLY BACKED BY A LOT MORE EVIDENCE THAN MOST PEOPLE REALIZE. IT'S NOT A COINCIDENCE THAT GEOFF HINTON -- KNOWN AS THE GODFATHER OF MODERN AI FOR A REASON, NOT FOR HAVING INVENTED DEEP LEARNING BUT FOR HAVING CONTRIBUTED SIGNIFICANTLY TO IT -- IS ON BOARD HERE.

IT'S NOT A COINCIDENCE THAT WHEN YOU TALK TO FOLKS AT THE WORLD-LEADING AI LABS, THE ONES THAT ARE BUILDING THE WORLD'S MOST POWERFUL AI SYSTEMS -- THE ChatGPTs, THE GPT-4s -- THE CLOSER YOU GET IN CONCENTRIC CIRCLES TO THOSE PEOPLE, THE HIGHER YOU HEAR THEIR ESTIMATE FOR THE PROBABILITY THAT THIS WILL ACTUALLY AMOUNT TO AN EXISTENTIAL RISK. THE MAIN VECTOR THAT'S USUALLY PROPOSED, OR ONE OF THEM AT LEAST, IS THIS IDEA OF POWER-SEEKING IN SUFFICIENTLY ADVANCED SYSTEMS. THAT'S THE ONE THAT GEOFF HINTON PUT ON THE TABLE, AND IT'S ONE THAT WE'VE SEEN WRITTEN UP AND STUDIED EMPIRICALLY IN SOME OF THE WORLD'S TOP AI CONFERENCES. IT'S QUITE WELL BACKED NOW BY A VARIETY OF DIFFERENT SOURCES OF EVIDENCE. SO I THINK IT'S SOMETHING WE SHOULD BE TAKING SERIOUSLY. NOTHING IS GUARANTEED. THAT'S PART OF THE UNIQUE CHALLENGE OF THIS MOMENT.

WE'VE NEVER MADE SYSTEMS LIKE THESE. WE'VE NEVER MADE INTELLIGENT SYSTEMS SMARTER THAN US. WE'VE NEVER LIVED IN A WORLD WHERE THOSE SYSTEMS EXIST.

SO WE KIND OF HAVE TO DEAL WITH THAT UNCERTAINTY IN THE BEST WAY WE CAN. PART OF THAT IS BY CONSULTING WITH FOLKS WHO ACTUALLY UNDERSTAND THESE SYSTEMS AND SPECIFICALLY ARE EXPERTS IN TECHNICAL SAFETY. >> Steve: GILLIAN, I'M CURIOUS AS TO YOUR REACTION WHEN YOU HEARD GEOFFREY HINTON MAKE THE COMMENTS THAT HE MADE, BECAUSE CERTAINLY I THINK THE REST OF THE WORLD WAS UTTERLY SHOCKED. WHAT DID YOU THINK? >> Gillian: WELL, I THINK GEOFF HAS BEEN TRULY SURPRISED BY THE ADVANCES THAT WE'VE SEEN IN THE LAST SIX MONTHS.

OBVIOUSLY HE'S BEEN TREMENDOUSLY CLOSE TO THIS. BUT I THINK A NUMBER OF PEOPLE DIDN'T REALLY THINK THAT JUST SCALING UP LANGUAGE MODELS, GENERATIVE MODELS, WAS GOING TO PRODUCE THE KIND OF CAPABILITIES THAT WE'VE SEEN. I THINK WHAT YOU SEE WITH GEOFF -- AND I'VE HAD LOTS OF DISCUSSIONS WITH HIM; HE'S ON MY ADVISORY BOARD AT THE SCHWARTZ REISMAN INSTITUTE -- IS THAT THIS WAS A REAL UPDATE FOR HIM AS TO THE NATURE OF THE RISK. AND IF YOU LISTEN TO WHAT HE HAS TO SAY, IT'S NOT, "I KNOW THAT THERE'S AN EXISTENTIAL RISK." HE IS SAYING, "THERE IS SO MUCH UNCERTAINTY ABOUT THE WAY THESE BEHAVE THAT WE SHOULD BE STUDYING THAT PROBLEM AND NOT GETTING OUT AHEAD OF IT." AND SO I THINK THAT'S AN IMPORTANT UPDATE FOR EVERYONE. IT'S AN IMPORTANT STATEMENT.

>> Steve: PEDRO, WHEN SOMEBODY THE LIKES OF GEOFFREY HINTON RINGS THAT EXISTENTIAL BELL, DOES IT NOT GIVE YOU PAUSE? >> Pedro: WELL, I HAVE KNOWN GEOFF FOR A LONG TIME. HE'S A GREAT RESEARCHER. HE'S ALSO AN OLD ANARCHIST AND A LITTLE OTHER-WORLDLY, AND I THINK WE NEED TO BE CAREFUL ABOUT OVERINTERPRETING WHAT HE'S SAYING.

PEOPLE NEED TO KNOW THAT MOST AI RESEARCHERS DO NOT THINK THAT AI POSES AN EXISTENTIAL THREAT TO HUMANITY. THAT'S NONSENSE. NOW, IT'S INTERESTING TO UNDERSTAND WHY SOME PEOPLE, WHO USED TO BE AT THE FRINGES, BELIEVE THAT. IRONICALLY, IT'S THE PEOPLE WHO ARE MOST OPTIMISTIC ABOUT HOW FAST WE'RE GOING TO GET TO AI WHO ARE THE MOST WORRIED, AND GEOFFREY IS AN ULTRA-OPTIMIST.

SO THAT IS ENTIRELY CONSISTENT -- >> Steve: WELL, HE WAS AN ULTRA-OPTIMIST. I'M NOT SURE HE IS ANYMORE, IS HE? >> Pedro: NO, NO, NO. HE'S AN OPTIMIST ABOUT HOW FAST WE ARE GOING TO GET TO AI, HOW FEASIBLE IT IS, HOW EASY THE PROBLEM IS. THERE ARE MANY RESEARCHERS WHO THINK WE WILL NEVER EVEN GET TO HUMAN-LEVEL AI, WHICH IS QUITE POSSIBLE. I THINK WE WILL GET THERE, BUT I DON'T THINK IT'S GOING TO BE TOMORROW.

I THINK PEOPLE NEED TO UNDERSTAND THAT, NUMBER ONE, WE ARE STILL VERY FAR FROM HUMAN-LEVEL INTELLIGENCE. AI, FROM THE BEGINNING, HAS ALWAYS SEEMED MORE INTELLIGENT THAN IT IS, BECAUSE WE PROJECT OUR HUMAN QUALITIES ONTO IT. WE ALSO VERY OFTEN PROJECT ONTO IT OUR HUMAN WANTS AND EMOTIONS AND DESIRES, OF WHICH IT HAS NONE. GEOFF HAS SAID, WHAT IF AN AI WANTS TO TAKE OVER? AND YANN LeCun SAID, BUT AIs DON'T WANT ANYTHING. EXACTLY RIGHT. IT'S AN ALGORITHM.

THAT'S ONE THING, RIGHT? PEOPLE NEED TO UNDERSTAND THIS: AI IS AN ALGORITHM. IT'S NOT SOMETHING THAT WE DON'T CONTROL. NOW, WHEN YOU DO MACHINE LEARNING, THE AI DOES THINGS YOU CAN'T PREDICT. BUT IT DOES THEM TO OPTIMIZE THE FUNCTIONS THAT WE DETERMINE, AND THAT'S WHERE THE DEBATE NEEDS TO BE: WHAT TO OPTIMIZE. AND THEN THERE'S ANOTHER VERY IMPORTANT THING, WHICH THE PUBLIC DOESN'T KNOW BUT I THINK SHOULD KNOW BY NOW, WHICH IS THAT AI IS FOR SOLVING PROBLEMS THAT ARE INTRACTABLE, MEANING PROBLEMS THAT TAKE EXPONENTIAL TIME TO SOLVE. THAT'S THE TECHNICAL DEFINITION OF AI.

BUT IT'S EASY TO CHECK THE SOLUTION. SO AN AI COULD BE EXPONENTIALLY SMARTER THAN WE ARE, AND WE WOULD STILL BE FINE CONTROLLING IT. >> Steve: LET ME GET TO GILLIAN HERE. SHE HAS GIVEN ME A LOOK THAT SUGGESTS SHE WANTS TO COMMENT ON WHAT YOU JUST HAD TO SAY, SO GO AHEAD, GILLIAN.
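[Pedro is leaning here on a standard asymmetry from complexity theory: for problems in NP, finding a solution can take exponential time, while checking a proposed solution is fast -- which is what would let a weaker party verify the work of a stronger one. A minimal sketch of that asymmetry, using subset-sum as a stand-in; the problem choice and function names are illustrative, not anything named on the panel.]

```python
from itertools import combinations

def solve_subset_sum(nums, target):
    """Brute-force search: tries every subset, so worst-case time is
    exponential in len(nums). This is the 'hard to solve' direction."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

def verify_subset_sum(nums, target, candidate):
    """Checking a proposed answer takes linear time: confirm the candidate
    really is drawn from nums and really sums to the target. This is the
    'easy to check the solution' direction Pedro is pointing at."""
    pool = list(nums)
    for x in candidate:
        if x not in pool:
            return False
        pool.remove(x)
    return sum(candidate) == target

nums = [3, 34, 4, 12, 5, 2]
answer = solve_subset_sum(nums, 9)                 # expensive search
print(answer, verify_subset_sum(nums, 9, answer))  # cheap check -> True
```

[Whether this solve/verify asymmetry actually extends to overseeing a system smarter than us is exactly what the other panelists go on to dispute.]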

>> Gillian: I WANT TO PUT THE EXISTENTIAL THREAT QUESTION IN A BROADER LENS AS WELL. IT'S TRUE THAT THERE'S MALICIOUS USE TO BE CONCERNED ABOUT. BUT I DON'T REALLY THINK THAT THE RISKS ARE AROUND ROGUE AI -- A ROGUE AI THAT DEVELOPS ITS OWN GOALS AND, TERMINATOR-STYLE, WANTS TO KILL US ALL. THAT'S NOT THE THREAT THAT I THINK IS OUT THERE.

THE THING I WORRY ABOUT, AS SOMEBODY WHO THINKS A LOT ABOUT HOW OUR COMPLEX HUMAN SYSTEMS WORK -- OUR FINANCIAL SYSTEMS, OUR ECONOMIC SYSTEMS, OUR SOCIAL SYSTEMS -- IS REALLY NOT HUMAN-LEVEL AI PER SE. IT'S THE CAPACITY OF AUTONOMOUS AGENTS THAT ARE OUT THERE ALREADY, TRADING ON OUR FINANCIAL MARKETS, PARTICIPATING IN OUR LABOUR MARKETS, REVIEWING AND HIRING. AND WE'RE BUILDING SYSTEMS WITH HIGHER AND HIGHER CAPABILITIES. WITH OPENAI, FOR EXAMPLE, THERE ARE PLUG-INS WITH WHICH YOU CAN GET THE MODEL NOT JUST TO DECIDE WHERE YOU SHOULD GO ON YOUR HONEYMOON, BUT TO GO OFF AND MAKE THE RESERVATIONS AND BOOK THE AIRPLANES AND SO ON. IT'S WHEN YOU START INTRODUCING THAT KIND OF AUTONOMOUS AGENT INTO THE WORLD THAT YOU HAVE TO SAY, OKAY, HOW DO WE MAKE SURE THAT AUTONOMOUS AGENT DOESN'T BREAK OUR FINANCIAL SYSTEM, DOESN'T BREAK OUR EMPLOYMENT SYSTEM? >> Steve: I THINK THAT'S GETTING CLOSER TO WHERE JEREMIE GETS CONCERNED THAT WE ARE ENTERING TERMINATOR TERRITORY. AM I READING TOO MUCH INTO YOUR VIEWS HERE, JEREMIE? >> Jeremie: NO, NOT AT ALL, ACTUALLY.

AS WILD AS IT MIGHT SOUND -- THOUGH I GUESS NOW IT'S NICE TO BE IN GOOD COMPANY WITH THE LIKES OF GEOFF HINTON. A COUPLE OF THINGS. FIRST OFF, ON THIS QUESTION OF, THESE ARE JUST ALGORITHMS, WHAT DO THEY REALLY WANT, AND SO ON: THIS IS REALLY WHERE THE ENTIRE DOMAIN OF POWER-SEEKING COMES IN -- I MEAN, THIS IS AN ENTIRE SUBFIELD IN AI SAFETY. IT'S WELL RESEARCHED.

THESE OBJECTIONS ARE VERY ROBUSTLY ADDRESSED, AT LEAST IN MY OPINION. YOU KNOW, MY TEAM HAS ACTUALLY MADE CONTRIBUTIONS TO THIS BODY OF WORK DIRECTLY. THIS IS A REAL THING. AND THE CLOSER YOU GET TO THE CENTRES OF EXPERTISE AT THE WORLD'S TOP AI LABS -- THE GOOGLE DEEPMINDS, THE OPENAIs, AGAIN THE VERY LABS BUILDING ChatGPT AND THE NEXT-GENERATION SYSTEMS -- THE MORE YOU SEE THE EMPHASIS ON THIS RISK CLASS.

AGAIN, I THINK GEOFF HINTON AND YOSHUA BENGIO -- TO MAYBE A LESSER DEGREE, BUT STILL IMPORTANTLY -- ARE BOTH ON THIS TRAIN FOR A REASON. I THINK THEIR VIEW REFLECTS THE VIEW OF A GROWING BODY OF AI RESEARCHERS, AND IT'S NOT EVEN PARTICULARLY CLEAR TO ME THAT THE MAJORITY OF AI RESEARCHERS THINK CATASTROPHIC RISK IS NOT A THING. THERE WAS A POLL DONE A FEW MONTHS AGO THAT FOUND THAT 48% OF GENERAL AI RESEARCHERS -- NOT SPECIALISTS IN SAFETY, WHO WOULD BE THE BEST INFORMED ON THIS, BUT GENERAL AI RESEARCHERS -- ESTIMATE A 10% OR GREATER PROBABILITY OF CATASTROPHIC RISK FROM AI. IMAGINE YOU WERE LOOKING TO GET ON A PLANE AND 50% OF THE ENGINEERS WHO MADE THAT PLANE SAID, "YOU KNOW, I THINK THERE'S A 10% CHANCE THIS THING IS GOING TO CRASH WITH YOU IN IT." THIS IS NOT A GAME OF RUSSIAN ROULETTE THAT MOST PEOPLE WOULD PARTICULARLY WANT TO PLAY, IF THEY WERE TRACKING IT PERHAPS AS SOME FOLKS AT THE CUTTING EDGE ARE. >> Steve: NO, I HEAR YOU ON THAT FOR SURE.

LET ME READ A QUOTE OUT TO EVERYBODY HERE. THIS IS THE HEAD OF AI AT META, I GUESS FACEBOOK, WHAT USED TO BE FACEBOOK. HERE'S WHAT HE RECENTLY TWEETED. HE SAID: >> Steve: OKAY. JEREMIE, PICK UP ON THAT, IF YOU WOULD? WHY DO YOU ASSUME THAT AI, IF IT DOES BECOME MORE INTELLIGENT THAN US, WOULD AUTOMATICALLY WANT TO CONQUER US? >> Jeremie: I THINK THE WORD "ASSUME" THERE IS BEARING AN AWFUL LOT OF THAT LOAD.

THIS IS ACTUALLY NOT AN ASSUMPTION; IT'S AN INFERENCE BASED ON A BODY OF EVIDENCE IN THIS DOMAIN OF POWER-SEEKING. AGAIN, A BUNCH OF WORK THAT'S BEEN DONE AT FRONTIER LABS, WITH CONTRIBUTIONS FROM THE WORLD'S TOP AI RESEARCHERS, SUGGESTS THAT THE DEFAULT PATH FOR THESE SYSTEMS IS TO LOOK FOR STATES -- SITUATIONS TO OCCUPY, TO POSITION THEMSELVES IN -- THAT HAVE HIGH OPTIONALITY. BASICALLY, THAT'S THE TREND WE START TO SEE: THE SYSTEMS SEEK HIGH OPTIONALITY, BECAUSE THAT'S USEFUL FOR WHATEVER OBJECTIVE THEY MIGHT BE TRAINED OR PROGRAMMED TO PURSUE.
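[The "high optionality" idea can be made concrete with a toy example: if an agent scores states by how many future options they keep open, it gravitates to the most connected state no matter what its final task turns out to be. A minimal sketch, with an entirely made-up state graph -- nothing here is drawn from the papers Jeremie alludes to.]

```python
# Toy illustration of optionality-seeking: among candidate states,
# prefer the one from which the most other states remain reachable.
# The state graph below is an arbitrary, made-up example.
REACHABLE = {
    "corridor": {"room_a", "room_b"},
    "room_a": {"corridor"},
    "control_room": {"corridor", "room_a", "room_b", "power", "network"},
}

def optionality(state: str) -> int:
    """Score a state by the number of states reachable from it."""
    return len(REACHABLE.get(state, set()))

best = max(REACHABLE, key=optionality)
print(best)  # -> "control_room": the high-optionality state wins,
             # whatever the downstream task objective happens to be.
```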

THAT'S THE FUNDAMENTAL ROOT OF THIS POWER-SEEKING BEHAVIOUR, AND AS YOUR AI SYSTEMS GET MORE INTELLIGENT, THE CONCERN IS THAT THEY START TO RECOGNIZE THESE INCENTIVES AND ACT MORE AND MORE ON THEM. SO I GUESS I WOULD JUST ZOOM OUT THERE AND SAY THAT THIS FUNDAMENTALLY FAILS TO ENGAGE WITH A PRETTY IMPORTANT PIECE OF THE AI SAFETY LITERATURE THAT IT REALLY SHOULD. >> Steve: WHILE YOU WERE SPEAKING, PEDRO HAD A VERY, VERY BIG SMILE ON HIS FACE, AND I WANT TO KNOW WHAT'S BEHIND THAT SMILE, PEDRO. >> Pedro: SORRY, I JUST HAVE A HARD TIME TAKING ALL OF THIS SERIOUSLY.

THAT'S AS YANN LeCun SAID -- AND HE, BY THE WAY, IS ANOTHER OF THE THREE CO-FOUNDERS OF DEEP LEARNING, SO HE'S AT THE SAME LEVEL AS GEOFF AND YOSHUA. I THINK IT'S INTERESTING TO NOTICE THAT THE PEOPLE WHO TEND TO BE MORE HYSTERICAL ABOUT AI ARE ACTUALLY THE PEOPLE WHO ARE FARTHEST FROM USING IT IN PRACTICE. AI ENGINEERS, PEOPLE AT COMPANIES LIKE META, WHERE YANN IS, THEY KNOW WHAT IT IS LIKE TO WORK WITH AI. THE PEOPLE WHO TEND TO BE PARANOID ARE THE ONES WHO ARE ALWAYS KIND OF FASCINATED WITH THE POTENTIAL, AND, YOU KNOW, THERE'S ROOM FOR A FEW CRAZY PEOPLE IN ACADEMIA.

THE WORLD NEEDS THOSE. BUT ONCE YOU START MAKING IMPORTANT SOCIETAL DECISIONS BASED ON THEM, YOU NEED TO THINK TWICE. I'M WITH GILLIAN HERE: MY BIGGEST PROBLEM WITH A LOT OF THIS IS THAT THESE TERMINATOR CONCERNS ARE TAKING ATTENTION AWAY FROM THE REAL ISSUES, THE REAL RISKS THAT WE NEED TO BE TALKING ABOUT, WHICH ARE VERY MUCH THE RISKS OF MALICIOUS USE. WE NEED SOMETHING LIKE WHAT WILLIAM GIBSON CALLED THE TURING POLICE, RIGHT? WE NEED THE COPS, THE AI COPS, TO DEAL WITH AI CRIMINALS.

WE NEED THAT. WE HAVE THE PROBLEM OF AI IN THE HANDS OF TOTALITARIAN REGIMES. DEMOCRACIES NEED TO START MAKING BETTER USE OF AI. AND THERE'S THE BIGGEST PROBLEM OF ALL, WHICH IS A PROBLEM OF TODAY, NOT OF TOMORROW: INCOMPETENT, STUPID AI MAKING CONSEQUENTIAL DECISIONS THAT HURT PEOPLE. THE MANTRA OF THE PEOPLE TRYING TO CONTROL AI IS THAT WE NEED TO RESTRICT IT AND SLOW IT DOWN. IT'S THE OPPOSITE.

STUPID AI IS UNSAFE AI. THE WAY YOU MAKE AI SAFER IS BY MAKING IT SMARTER, WHICH IS PRECISELY THE OPPOSITE OF WHAT THINGS LIKE THE MORATORIUM LETTER ARE CALLING FOR. >> Steve: GILLIAN, WHERE ARE YOU ON ALL OF THAT? >> Gillian: WELL, I SIGNED THE LETTER, AND I SIGNED IT SO THAT WE WOULD BE HAVING THIS CONVERSATION -- NOT BECAUSE I THINK, YOU KNOW, THAT IT'S GOING TO HAPPEN, OR THAT I THINK IT'S ESSENTIAL THAT WE STOP.

I DON'T THINK WE'RE ON A PRECIPICE. BUT I DO THINK IT'S CRITICAL THAT WE'RE HAVING THE CONVERSATION, BECAUSE CURRENTLY THESE SYSTEMS ARE BEING BUILT ALMOST EXCLUSIVELY INSIDE PRIVATE TECHNOLOGY LABS, BY LOTS OF REALLY MOTIVATED PEOPLE. I WORK WITH FOLKS AT OPENAI; I KNOW THERE'S A LOT OF CONCERN ABOUT SAFETY. BUT THEY'RE MAKING THOSE DECISIONS INTERNALLY: HOW TO TRAIN IT SO IT SPEAKS BETTER TO PEOPLE, RIGHT? THAT WAS PART OF THE ADVANCE THAT WE GOT OUT OF ChatGPT. WHEN SHOULD WE RELEASE IT? AT WHAT RATE SHOULD WE RELEASE IT? TO WHOM SHOULD WE RELEASE IT? WHAT LIMITS AND GUARDRAILS SHOULD WE PUT IN PLACE? MY CONCERN IS THAT THOSE ARE THINGS WE SHOULD BE DECIDING PUBLICLY, DEMOCRATICALLY, WITH EXPERTISE BEYOND THE ENGINEERING EXPERTISE THAT IS DOMINATING HERE. SO MY CONCERN IS ABOUT THE SPEED. I'M A SOCIAL SCIENTIST, AN ECONOMIST, A LEGAL SCHOLAR.

I THINK ABOUT THE LEGAL AND REGULATORY INFRASTRUCTURE THAT WE BUILD, AND I DON'T SEE US PAYING ENOUGH ATTENTION TO THAT SET OF QUESTIONS. THE EU AI ACT AND SO ON -- THAT'S ALREADY ANACHRONISTIC RELATIVE TO THE TYPE OF SYSTEMS WE'RE LOOKING AT TODAY. >> Steve: OUT OF CURIOSITY, HOW DID YOU COME TO SIGN THE LETTER? WHO APPROACHED YOU? WHO PITCHED YOU? >> Gillian: I THINK I JUST SAW IT TWEETED. COLLEAGUES HAVE SIGNED IT.

I WORK A LOT WITH STUART RUSSELL, ALTHOUGH I DON'T THINK HE'S THE ONE WHO ASKED ME TO SIGN IT. I'M PRETTY SURE HE DIDN'T. IT CAME ACROSS MY FEED. AND I REALLY DID SIGN IT FROM THE POINT OF VIEW OF SAYING: AS JEREMIE IS EMPHASIZING, THIS IS ACTUALLY WHAT A LOT OF PEOPLE INSIDE THESE ORGANIZATIONS ARE SEEING.

I THINK WE NEED TO PAY ATTENTION TO IT. SO THE FACT THAT YOU'VE GOT THIS SHOW, STEVE, ADDRESSING THIS QUESTION IS PRECISELY WHY I SIGNED IT. >> Steve: GOTCHA. HERE'S GEOFFREY HINTON AGAIN, FOR WHAT IT'S WORTH. HE SAYS: "IT'S ABSOLUTELY POSSIBLE I'M WRONG. WE'RE IN A PERIOD OF HUGE UNCERTAINTY WHERE WE REALLY DON'T KNOW WHAT'S GOING TO HAPPEN."

OKAY. LET'S LOOK FORWARD BASED ON THAT. WHEN THE PRINTING PRESS WAS INVENTED, WE DIDN'T KNOW WHAT IMPACT IT WOULD HAVE. WE HAD TO WAIT A LITTLE WHILE TO FIND OUT. SAME FOR THE INTERNET. WE DIDN'T KNOW WHAT THE IMMEDIATE IMPACT OF THAT WAS GOING TO BE.

YOU COULD SAY THE SAME FOR THE STEAM ENGINE, THE COTTON GIN, THE CLOCK. THESE WERE ALL AMAZING INVENTIONS. PEOPLE DIDN'T KNOW WHAT TO MAKE OF THEM AT THE MOMENT. JEREMIE, TO YOU FIRST: WHY IS THIS TECHNOLOGICAL MOMENT, IF YOU BELIEVE IT IS, ANY DIFFERENT FROM ANY OF THOSE PREVIOUS DISCOVERIES? >> Jeremie: I THINK IT'S A REALLY GOOD QUESTION AND A REALLY IMPORTANT THING FOR US TO GRAPPLE WITH, TO UNDERSTAND WHAT ANALOGIES WE CAN ACTUALLY DRAW FROM PAST TECHNOLOGIES AND APPLY TO OUR THINKING AROUND POLICY AND STUFF. LOOK, THE REALITY IS WE'RE CURRENTLY ON A TRAJECTORY TO BUILD SOMETHING POTENTIALLY SMARTER THAN OURSELVES. IT MAY OR MAY NOT HAPPEN.

BUT IF IT DOES, WE'RE GOING TO FIND OURSELVES IN A PLACE WHERE WE JUST CAN'T PREDICT ANY IMPORTANT INFORMATION ABOUT HOW THE FUTURE IS GOING TO UNFOLD. POWERFUL AI INTRODUCES UNCERTAINTY ALONG A WHOLE BUNCH OF DIFFERENT DIMENSIONS AND TO A DEGREE WE'VE JUST NEVER EXPERIENCED IN OUR EXISTENCE. THROUGHOUT HUMAN HISTORY, HUMAN INTELLECTUAL CAPACITY HAS BEEN THE ONE CONSTANT, RIGHT? WE'RE ALL BORN WITH BIOLOGICAL THINKING HARDWARE, AND THAT'S REALLY ALL WE'VE HAD TO WORK FROM.

SO WE JUST DON'T HAVE A CIVILIZATIONAL POINT OF REFERENCE HERE, WHICH IS WHY I THINK IT'S IMPORTANT TO TRACK THIS AREA REALLY CLOSELY AND THINK DEEPLY ABOUT WHERE IT MIGHT GO. >> Steve: PEDRO, HOW ABOUT YOU ON THAT SAME QUESTION? >> Pedro: I THINK IT IS TRUE THAT AI INTRODUCES UNCERTAINTY, BECAUSE IT IS A VERY POWERFUL TECHNOLOGY. IT COULD BE USED FOR LOTS OF DIFFERENT THINGS, AND WE CAN'T POSSIBLY PREDICT WHAT ALL OF THEM ARE. AND THAT'S GOOD. THE BEST TECHNOLOGIES ARE LIKE MANY OF THE EXAMPLES YOU GAVE: MOST OF THE BEST APPLICATIONS ARE THINGS THAT NO ONE COULD HAVE ANTICIPATED. WHAT HAPPENS IS, WE ALL WORK ON THE GOOD APPLICATIONS AND ON CONTAINING THE BAD ONES.

THIS NOTION, HOWEVER, THAT IS PREVALENT AMONG A CERTAIN AI FRINGE -- THE TRANSHUMANISTS OR SINGULARITARIANS AND SO ON -- THAT AI IS GOING TO MAKE THE FUTURE COMPLETELY UNPREDICTABLE: THIS IS MISTAKEN. AI IS STILL SUBJECT TO THE LAWS OF PHYSICS AND THE LAWS OF COMPUTATION, AND TO THE SOCIOLOGY OF SOCIOTECHNICAL SYSTEMS. WE'RE ACTUALLY GOING TO HAVE A LOT OF CONTROL OVER AI, EVEN IF THERE ARE ASPECTS OF HOW IT WORKS THAT WE DON'T UNDERSTAND. A GOOD ANALOGY HERE IS A CAR.

YOU DON'T KNOW HOW THE ENGINE OF YOUR CAR WORKS. YOU DON'T FEEL AN URGENT NEED TO UNDERSTAND IT IN CASE IT MIGHT BLOW UP. THAT'S FOR THE MECHANIC. WHAT YOU NEED TO KNOW -- AND WHAT WE AS A SOCIETY NEED TO KNOW IN THE CASE OF AI -- IS WHERE THE STEERING WHEEL AND THE PEDALS ARE, SO YOU CAN DRIVE THE CAR.

AND AI DOES HAVE A STEERING WHEEL. IT'S CALLED THE OBJECTIVE FUNCTION, AND WE NEED TO DRIVE WITH IT. I VERY MUCH AGREE THAT WE NEED MORE THAN JUST THE TECHNOLOGISTS THINKING ABOUT THIS. BUT WHAT WORRIES ME IS THAT PEOPLE ARE SAYING, OH, AI IS JUST GOING TO BLOW EVERYTHING WIDE OPEN. NOT SO, RIGHT? IT'S GOING TO BE A VERY INTERESTING PROCESS. THERE ARE DEFINITELY GOING TO BE RISKS.
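[Pedro's "steering wheel" is the objective function in the literal, mathematical sense: hold the learner fixed, change the objective, and the system lands somewhere different. A minimal sketch, fitting one toy one-parameter model under two different objectives; the data, the grid search and the loss choices are all illustrative assumptions.]

```python
import numpy as np

# The same "learner" (a one-parameter model y = w * x, tuned by grid
# search) steered by two different objective functions.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 3.0, 40.0])   # one wild outlier

def fit(objective):
    """Pick the w that minimizes the given objective. The objective is
    the steering wheel; nothing else about the learner changes."""
    ws = np.linspace(0.0, 12.0, 2401)
    losses = [objective(y, w * x) for w in ws]
    return ws[int(np.argmin(losses))]

squared = lambda t, p: np.mean((t - p) ** 2)    # sensitive to the outlier
absolute = lambda t, p: np.mean(np.abs(t - p))  # robust to the outlier

print(fit(squared))   # ~5.8: hauled far off course by the outlier
print(fit(absolute))  # ~1.0: stays on the main trend
```

[Same machinery, different objective, very different behaviour; the panel's disagreement is over who gets a hand on that wheel and how well it steers at scale.]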

BUT THERE'S THIS FAILURE OF THE IMAGINATION THAT, OH, ONCE THERE'S A BIGGER INTELLIGENCE THAN US ON THE PLANET, YOU CAN'T PREDICT ANYTHING ANYMORE. NO. AS YANN WAS SAYING, WE CAN HAVE VERY BIG INTELLIGENCE AND STILL BE IN CONTROL OF IT. IT WON'T HAPPEN BY DEFAULT.

SO IT'S NOT LIKE THERE'S NOTHING TO WORK ON HERE. BUT WE SHOULDN'T THROW UP OUR HANDS AND SAY, OH, YOU KNOW, IT'S THE END OF CIVILIZATION. AND AIs CAN'T EXIST WITHOUT MASSIVE SUPPORT FROM HUMANS EVERY STEP OF THE WAY, RIGHT? THERE ARE ALL THESE REASONS WHY THIS ISN'T GOING TO HAPPEN. IN FACT, THE LIMITING FACTOR IN THE PROGRESS OF AI IS ACTUALLY NOT THE TECHNICAL SIDE; IT'S THE HUMAN ABILITY TO USE IT, COME TO TERMS WITH IT AND ADAPT TO IT. THAT'S WHAT'S GOING TO SET THE LIMIT, AS IT SHOULD.

SO, YOU KNOW, AI LOWERS THE COST OF INTELLIGENCE, AND SO NOW WE CAN USE MORE OF IT. IN PARTICULAR, THE LAWS OF ECONOMICS STILL APPLY: AS THE COST OF INTELLIGENCE FALLS, THE DEMAND FOR IT GOES UP, THE VALUE OF ITS COMPLEMENTS GOES UP -- ALL THESE THINGS HAPPEN. BY THE WAY, THERE ARE PEOPLE AT THE UNIVERSITY OF TORONTO WHO HAVE STUDIED THIS VERY WELL, BRINGING AN ECONOMIST'S VIEW TO AI.

IT'S NOT LIKE ONCE WE HAVE VERY HIGH INTELLIGENCE IN MACHINES, SUDDENLY NOTHING CAN BE PREDICTED ANYMORE. >> Steve: LET ME GET GILLIAN IN ON THIS, THEN. DO YOU THINK THIS TECHNOLOGICAL MOMENT IS DIFFERENT FROM THOSE OTHER ONES I LISTED? >> Gillian: I DO. WHAT'S CRITICAL HERE IS THE SPEED WITH WHICH IT TRANSFORMS ENTIRE MARKETS. I'VE SEEN, FOR EXAMPLE, IN THE LEGAL DOMAIN, HOW YOU CAN BUILD ON TOP OF A LARGE LANGUAGE MODEL, ON TOP OF GPT-4, A CAPABILITY THAT CAN DO WHAT LAWYERS DO: WORK THAT TAKES THEM A WEEK TO DO, THE MACHINE CAN DO IN A COUPLE OF MINUTES. SO THAT'S GOING TO BE VERY DISRUPTIVE.

SO THE SPEED OF TRANSFORMATION IS VERY HIGH. THE POTENTIAL SCALE, TOO: BECAUSE IT'S A GENERAL-PURPOSE TECHNOLOGY, IT CAN SHOW UP EVERYWHERE, AND THAT MEANS IT CAN BE DEPLOYED EVERYWHERE. SO PEDRO IS EXACTLY RIGHT THAT THE ECONOMICS OF USING AND BUYING LESS EXPENSIVE INTELLIGENCE IS GOING TO FIND ITS WAY INTO JUST ABOUT EVERYTHING THAT WE DO. IT'S GOING TO HAPPEN AT THAT KIND OF SCALE. SO THE COMPARISON TO THE AUTOMOBILE -- >> Steve: YOU'RE CONCERNED IT'S GOING TO BE STEPHEN KING'S CHRISTINE.

IS THAT THE PROBLEM? >> Gillian: I AM NOT CONCERNED ABOUT THAT. I ACTUALLY AM NOT CONCERNED ABOUT THAT. >> Steve: JUST CHECKING.

>> Gillian: WHAT I AM CONCERNED ABOUT IS THAT WE LIVE IN A WORLD WITH LOTS OF REGULATION AROUND HOW WE BUILD AND DRIVE AUTOMOBILES. IT TOOK 50 YEARS TO BUILD ALL THAT REGULATORY STRUCTURE. THERE WAS BASICALLY NONE WHEN CARS STARTED OUT, YOU KNOW, AS A CONSUMER PRODUCT. BUT A CAR CRASH, SADLY, DOES KILL PEOPLE -- A FEW PEOPLE AT A TIME, AND IT'S JUST A CAR, IT'S NOT EVERY PART OF YOUR ECONOMY. SO MY POINT IS THAT AI GOES SO FAST AND IS SO PERVASIVE THAT I DON'T THINK WE CAN APPROACH IT AS WE HAVE OUR PREVIOUS TECHNOLOGIES AND SAY: LET'S PUT IT OUT THERE, LET'S FIND OUT HOW IT WORKS, WE'LL FIGURE OUT HOW TO REGULATE IT. I THINK WE HAVE TO BE A LITTLE BIT MORE PROACTIVE THAN THAT.

>> Steve: IN WHICH CASE, JEREMIE, TELL US WHAT COMES NEXT. IF WE'RE TALKING ABOUT A PAUSE, WHAT DOES THAT ACTUALLY LOOK LIKE IN TERMS OF THE SAFE DEVELOPMENT OF AI? >> Jeremie: YEAH. ACTUALLY, JUST TO QUICKLY PIGGYBACK ON WHAT GILLIAN WAS FLAGGING: I THINK ONE MAJOR ASPECT OF THIS STORY IS ALSO THE CORRELATION OF RISK THAT THIS INDUCES, RIGHT? WHEN YOU HAVE MANY DIFFERENT DOWNSTREAM PRODUCTS ALL RUNNING ON THE SAME FUNDAMENTAL AI SYSTEM, THAT FUNDAMENTALLY SHIFTS THE RISK LANDSCAPE: IF SOMETHING GOES WRONG WITH THAT BASE MODEL, THE RISK GETS INHERITED DOWNSTREAM. THERE ARE A LOT OF NEW DIMENSIONS HERE.

IN TERMS OF THE PAUSE: I'M ACTUALLY NOT A SIGNATORY OF THE LETTER. I DON'T THINK IT'S NECESSARILY A GREAT IDEA, PARTLY BECAUSE IF YOU PAUSE AI DEVELOPMENT IN THE WEST FOR SIX MONTHS, THAT DOES NOTHING FOR CHINA. THAT'S ONE ISSUE. ANOTHER THING: IF YOU ARE CONCERNED ABOUT THE RACE DYNAMICS HERE, YOU MIGHT BE INTERESTED TO NOTE THAT THE COST OF PROCESSING POWER GOES DOWN OVER TIME.

AS WE GET BETTER AT BUILDING MORE POWERFUL PROCESSORS, REACHING THE FRONTIER GETS CHEAPER. YOU START WITH A SMALL NUMBER OF ACTORS, YOU PAUSE FOR SIX MONTHS, AND WHEN YOU HIT GO AGAIN YOU'RE DEALING WITH A WHOLE PLETHORA OF ACTORS AND A MUCH LESS MANAGEABLE SITUATION. THE LETTER WAS GREAT.

I THINK IT'S DONE ITS JOB. BUT NOW THE WORK OF POLICY REALLY BEGINS IN EARNEST. I THINK THERE ARE A COUPLE OF DIFFERENT THINGS WE COULD DO, WHICH I'M SURE WE'LL GET INTO. BUT IT'S A CONVERSATION. >> Steve: I'LL THROW THAT OVER TO PEDRO RIGHT NOW. I APPRECIATE THE FACT THAT YOU'RE NOT A FAN OF THE PAUSE.

BUT IF WE DO WANT TO MAKE SURE THAT AI DEVELOPS IN A WAY THAT IS SAFE AND GIVES A LOT OF PEOPLE A SENSE OF SECURITY ABOUT THE WHOLE THING, WHAT DO YOU SEE AS BEING NECESSARY TO THAT GOAL? >> Pedro: WELL, FIRST OF ALL, HERE'S SOMETHING THAT PEOPLE SHOULD BE AWARE OF. THE PAUSE IS NOT JUST A BAD IDEA, IT'S COMPLETELY UNENFORCEABLE. THERE'S NO WAY TO IMPLEMENT A PAUSE IN AI.

THERE'S THIS NOTION THAT AI IS LOCKED INSIDE THESE HUGE BIG TECH COMPANIES, BUT THAT'S NOT THE REALITY. MOST AI RESEARCH IS IN ACADEMIA. ANY KID WITH A COMPUTER CAN DO AI, AND THAT'S INCREASINGLY THE CASE, WHICH IS ACTUALLY A GREAT THING.

SO SOMEBODY SAYS WE'RE GOING TO PAUSE AI, AND AI JUST DOESN'T PAUSE. YOU CAN'T STOP IT EVEN INTERNALLY WITHIN DEMOCRACIES, OF COURSE, LET ALONE IN COUNTRIES LIKE CHINA AND WHATNOT. THE WHOLE NOTION OF A PAUSE IS JUST -- I HAVEN'T SEEN ANYBODY ACTUALLY MAKE A CONCRETE PROPOSAL AS TO HOW IT WOULD BE IMPLEMENTED -- >> Steve: IF NOT A PAUSE, HOW DO WE GO FORWARD SAFELY? >> Pedro: RIGHT. I THINK HOW WE GO FORWARD SAFELY IS A COUPLE OF THINGS.

ONE OF THEM IS EXACTLY WHAT OPENAI HAS BEEN DOING -- SAM ALTMAN, YOU KNOW, ADVERTISES THIS -- WHICH IS: WE ROLL IT OUT, EVERY STEP OF THE WAY LETTING PEOPLE PLAY WITH IT AND GETTING THEIR FEEDBACK. THIS IS HOW YOU CATCH BUGS AND ERRORS. THE MORE POWERFUL THE TECHNOLOGY, THE MORE PEOPLE NEED TO BE LOOKING AT IT, NOT FEWER.

AND, YOU KNOW, THIS WHOLE NOTION THAT WE NEED TO PROACTIVELY REGULATE AI -- EVERY PIECE OF HISTORY TELLS US THAT THAT DOESN'T WORK, AND IT WILL WORK EVEN LESS WITH AI, BECAUSE AI IS LESS PREDICTABLE. YOU HAVE TO REGULATE AND PUT IN THE GUARDRAILS AND WHATNOT AS YOU SEE WHAT HAPPENS, NOT BEFORE.

BECAUSE BEFORE -- I MEAN, WE DON'T EVEN KNOW WHAT A REAL AI IS GOING TO LOOK LIKE. WE DON'T HAVE ONE YET. SO TRYING TO REGULATE IN ADVANCE IS ALMOST GUARANTEED TO BE A MISTAKE. NOW, WHAT WE DO NEED -- AND THIS I THINK IS VERY IMPORTANT -- IS FOR THE GOVERNMENT, THE REGULATORY ORGANIZATIONS, TO HAVE THEIR OWN AI WHOSE JOB IS TO DEAL WITH THE AIs OF THE GOOGLEs AND THE AMAZONs AND SO ON. THERE'S NO FIXED, OLD-FASHIONED SET OF REGULATIONS, OF THE KIND THAT YOU USE FOR CARS, THAT WILL WORK WITH AI. WE NEED SOMETHING AS ADAPTIVE ON THE GOVERNMENT SIDE AS IT IS ON THE CORPORATION SIDE, RIGHT? AND THEN THESE AIs TALK WITH EACH OTHER.

WHEN I TALK ABOUT THIS TO POLICYMAKERS AND POLITICIANS AND WHATNOT, THEY SIGH, BECAUSE THIS IS CLEARLY NOT ON THEIR RADAR YET. BUT IT'S ALREADY STARTING TO HAPPEN IN, FOR EXAMPLE, THE FINANCIAL MARKETS, BECAUSE THERE'S NO CHOICE, RIGHT? THERE'S ALL THIS BAD ACTIVITY, AND YOU'VE GOT TO HAVE THE AI TO DEAL WITH IT. THE LAWMAKERS NEED TO START THINKING ABOUT WHAT OUR AI IS GOING TO DO TO DEAL CORRECTLY WITH THE AIs OF THE GOOGLEs AND SO ON.

I THINK THAT'S A MUCH BETTER APPROACH. >> Steve: OKAY, GILLIAN, MAYBE I CAN GET YOU ON THIS. THERE APPEARS TO BE NO CONSENSUS ON A PAUSE, BUT THERE CERTAINLY IS CONSENSUS THAT THIS SHOULD BE DEVELOPED IN A RESPONSIBLE, APPROPRIATE WAY. HOW DO WE DO THAT? >> Gillian: WELL, THERE ARE TWO THINGS I WANT TO PICK UP.

JUST ONE THING FIRST: I'LL SEND PEDRO AN ARTICLE -- I HAVE A PAPER ON REGULATORY MARKETS. I AGREE THAT WE'RE GOING TO NEED TO REGULATE IN THAT KIND OF WAY.

I THINK WE NEED TO BUILD THAT AS A SECTOR, ACTUALLY -- A COMPETITIVE SECTOR PROVIDING THAT KIND OF REGULATORY TECHNOLOGY. BUT I DO THINK IT'S REALLY IMPORTANT TO RECOGNIZE, WHEN WE TALK ABOUT REGULATION, THAT THERE ARE SOME BUILDING BLOCKS WE DON'T CURRENTLY HAVE IN PLACE THAT ALLOW US TO REGULATE OTHER PARTS OF THE ECONOMY. SO, FOR EXAMPLE, ONE THING WE CAN DO RIGHT NOW, AND I THINK SHOULD DO RIGHT NOW, IS CREATE REGISTRATION. WE COULD HAVE A NATIONAL REGISTRATION BODY THAT SAYS: IF YOU HAVE A LARGE MODEL, YOU REGISTER IT.

SO THE PAUSE LETTER DOESN'T SAY STOP WORKING ON AI. IT SAYS DON'T TRAIN -- I WOULD HAVE SAID DON'T DEPLOY -- MODELS BIGGER THAN GPT-4, THE BIGGEST ONE THAT HAS SO FAR BEEN MADE PUBLICLY AVAILABLE. SO IF WE THINK THERE ARE SPECIAL CHARACTERISTICS TO VERY LARGE MODELS -- AND I THINK THERE ARE -- THEN WE SHOULD HAVE A REGISTRY SYSTEM, SO THAT WE HAVE EYES, AS A GOVERNMENT, ON WHERE THEY ARE, HOW THEY'VE BEEN TRAINED, WHAT THEY LOOK LIKE. >> Steve: A REGISTRY SYSTEM RUN BY WHOM? >> Gillian: I THINK THAT'S A GOVERNMENT FUNCTION.

SO HERE'S THE ANALOGY. EVERY CORPORATION IN THE COUNTRY IS REGISTERED WITH A GOVERNMENT AGENCY -- THE SECRETARY OF STATE IN THE U.S., OR, YOU KNOW, A CORPORATIONS BOARD.

SO THEY HAVE AN ADDRESS. THEY HAVE SOMEBODY RESPONSIBLE. THERE'S A WAY IN WHICH THE GOVERNMENT CAN SAY, OKAY, WE KNOW YOU'RE THERE. YOU REGISTER YOUR CAR: WE KNOW WHERE THE CARS ARE, WE KNOW WHO'S GOT THE CARS.

RIGHT NOW, WE DON'T HAVE THAT KIND OF VISIBILITY INTO WHAT'S OUT THERE, AND THAT'S YOUR STARTING POINT. SO I WOULD SAY, AS A START, IT'S JUST REGISTRATION: IT WOULD JUST BE DISCLOSING BASIC PIECES OF INFORMATION ABOUT THE MODELS TO GOVERNMENT -- NOT PUBLICLY, NOT ON THE INTERNET. THAT GIVES US VISIBILITY INTO IT AS A COLLECTIVE.

AND THAT REGISTRATION SYSTEM THEN PROVIDES YOU WITH THE TOOLS YOU'LL NEED IF YOU DISCOVER THERE'S A DANGEROUS METHOD OF TRAINING, OR A CAPABILITY THAT EMERGES THAT YOU SHOULDN'T ALLOW, OR A MODEL THAT SHOULD ONLY BE DEPLOYED TO CERTAIN KINDS OF USERS -- THEN YOU HAVE THE INFRASTRUCTURE IN PLACE TO DO THAT. WE DON'T EVEN HAVE THAT BASIC INFRASTRUCTURE IN PLACE IF WE DECIDE WE NEED TO REGULATE. >> Steve: GOTCHA.
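[A minimal sketch of what the "basic pieces of information" in such a registry entry might look like. Every field name here is a guess at the kind of disclosure Gillian describes; none of it comes from an actual statute or proposal.]

```python
from dataclasses import dataclass, field

@dataclass
class ModelRegistration:
    """Hypothetical disclosure record for a large-model registry.
    All fields are illustrative assumptions, not any real scheme."""
    developer: str                  # who is responsible (the "address")
    model_name: str
    parameter_count: int
    training_compute_flops: float   # rough scale of the training run
    training_data_summary: str      # high-level description, not the data
    deployment_status: str          # e.g. "internal", "limited", "public"
    known_capabilities: list[str] = field(default_factory=list)

registry: list[ModelRegistration] = [
    ModelRegistration(
        developer="Example Lab Inc.",
        model_name="example-llm-1",
        parameter_count=175_000_000_000,
        training_compute_flops=3e23,
        training_data_summary="Web text plus licensed corpora",
        deployment_status="limited",
        known_capabilities=["text generation", "code generation"],
    ),
]

# The government-side "eyes on" query: which registered models are public?
public = [m.model_name for m in registry if m.deployment_status == "public"]
print(public)  # -> [] for this toy registry
```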

>> Steve: JEREMIE, I'LL GIVE YOU THE LAST WORD ON THIS. HOW DO WE NEED TO PROCEED? >> Jeremie: YEAH. SO, TO PEDRO'S POINT EARLIER -- THIS IDEA THAT THE STUFF HAS ALREADY PROLIFERATED, THAT IT'S OUT IN THE WILD, AND OPEN SOURCE IS KIND OF UNCRACKABLE. THE GENIE IS OUT OF THE BOTTLE.

I THINK IT'S WORTH NOTING THAT THE MOST ADVANCED AI CAPABILITIES STILL REQUIRE GIANT PROCESSING POWER BUDGETS. WE'RE TALKING ON THE ORDER OF HUNDREDS OF MILLIONS OF DOLLARS, RIGHT? THAT'S TRUE FOR OPENAI'S LATEST MODEL.

THAT ONE WAS IN THE HUNDREDS OF MILLIONS FOR SURE, AND WE'RE SEEING THAT COST RISE AND RISE AND RISE. SO THAT IMMEDIATELY IMPLIES A BUNCH OF COUNTERPROLIFERATION STRATEGIES -- LEVERS THAT YOU CAN START TO THINK ABOUT PULLING -- THINGS LIKE EXPORT CONTROLS ON SEMICONDUCTOR TECH. THAT'S ALREADY BEING DONE. WE'VE SEEN THE DEPARTMENT OF COMMERCE IN THE U.S.

LOOK INTO THAT, AND THE STATE DEPARTMENT AND SO ON, WITH GLOBAL AFFAIRS CANADA DOING SOMETHING MATCHING. SO WE HAVE CERTAIN POLICIES THERE. BUT THEN THERE'S ALSO LOOKING AT EXPORT CONTROLS ON THE SOFTWARE ITSELF. IF I BUILD A POWERFUL AI MODEL -- SOMETHING LIKE GPT-4, SOMETHING THAT COULD RADICALLY INCREASE, AS WE TALKED ABOUT, THE DESTRUCTIVE FOOTPRINT OF MALICIOUS ACTORS -- I PROBABLY SHOULDN'T BE ABLE TO GIVE UNRESTRICTED ACCESS TO THAT SYSTEM AND SELL IT TO, YOU KNOW, CHINA, RUSSIA OR AN ADVERSARY STATE, IF NOTHING ELSE. SO THERE'S A WHOLE BUNCH OF STUFF AROUND THAT -- EVEN INFORMATION SECURITY REQUIREMENTS FOR SOME OF THESE CUTTING-EDGE LABS, TO MAKE SURE THEY'RE KEEPING THEIR AI SYSTEMS UNDER LOCK AND KEY, AS THEY SHOULD, AND AS THEY'VE SAID THEY SHOULD.

AND THE LAST THING I'LL MENTION: OPENAI IS A REALLY GREAT EXAMPLE HERE -- AGAIN, TO PEDRO'S POINT EARLIER. THEY HAVE LED BY EXAMPLE BY INVITING THIRD PARTIES TO AUDIT THEIR AI MODELS FOR BEHAVIOURS LIKE POWER-SEEKING AND MALICIOUS CAPABILITY. AND I THINK THAT'S SOMETHING IT WOULD BE GREAT TO SEE A LOT MORE OF IN THE AI COMMUNITY -- INVITING THIRD-PARTY AUDITS, JUST TO MAKE SURE THERE'S A PAIR OF EYES OVER THE SHOULDER, SO TO SPEAK.

>> Steve: TERRIFIC DISCUSSION, EVERYBODY. THANKS SO MUCH FOR HAVING IT ON TVO TONIGHT. PEDRO DOMINGOS IN SEATTLE, WASHINGTON, HE OF THE UNIVERSITY OF WASHINGTON; JEREMIE HARRIS, CO-FOUNDER OF GLADSTONE AI, FROM THE NATION'S CAPITAL; GILLIAN HADFIELD, PROFESSOR OF LAW, SCHWARTZ REISMAN CHAIR IN TECHNOLOGY AND SOCIETY, U OF T, ALSO THE AI CHAIR AT CIFAR, HERE IN OUR STUDIO IN THE BIG SMOKE.

THANKS, EVERYONE. >> All: THANKS.
