Hey guys, today we are joined by the one and only Ben Goertzel. He is the founder of SingularityNET. You guys know SingularityNET; AGIX has been absolutely crushing it over the last couple of months. AI has come out of nowhere, and it's interesting. You and I were actually talking, when I was in New York last year for MCA YC, we were talking a lot about AI and what could possibly be coming in the future, and it's just amazing how AI has been this sudden wave onto the tech space over the last few months. I consider you to probably be the godfather of AI. I mean, you've built so many awesome things, you've had a lot of very high-profile jobs. I just want to know, from your perspective, were you surprised to see AI hit this fast, this hard, with ChatGPT?

Yeah, I mean, to be honest I was a bit surprised by how well GPT-3 and ChatGPT worked, given the simplicity of the underlying architecture and the fact that these systems are really not architected like artificial general intelligences are, or like human minds. They're really a fairly simple sort of AI system, and it was interesting to see how far you could go with this simple sort of AI system. After seeing it and studying it a bit, it's not hard to understand in retrospect, so it was not a huge shock, but a bit of a surprise initially. And I think in hindsight the reason it's surprising is clear: we just don't have a good intuition for what happens when you're dealing with this scale of data, right? Human intuition is closely honed to the things that we evolved to do, so we have a very poor intuition for the difference between a trillion and a billion and a
quadrillion, right? You can hone that intuition if you're running the Federal Reserve or something, but by nature we're bad at that. If you ask people to estimate how much a 747 airplane weighs, people may be off by one or two orders of magnitude; we're just not good at thinking about really big numbers. In the same way, what happens when you scale up from merely the size of Wikipedia to the scale of the whole web, the difference between what happens at these different scales of document database, is not immediately intuitive to people.

So what we see with systems like ChatGPT, and then Google Bard and Facebook's LLaMA and all these systems, is the ability to achieve what appears to be a high level of breadth of intelligence relative to the breadth of an individual human. It's getting a sort of breadth of intelligent functionality without having a really dramatic ability to generalize. And you know, we humans are making wild creative leaps of generalization from early childhood on. Every two-year-old is making wild, amazing creative leaps of generalization as they learn to do what every other five-year-old already knows, right? So we achieve our human-level understanding by repeatedly generalizing far beyond our experience: making creative leaps, being badly wrong, getting smacked in the face, making another leap, being badly wrong, finally making a leap and being right.

Now, a system like ChatGPT achieves a certain level of breadth by a totally different method. It achieves it just by having a really broad training data set, so that most things people throw at it have some close match in the knowledge base of the system, right? So say you ask ChatGPT to write an epic poem about the glories of Haskell programming. I mean, it has a lot of epic poems; it has every epic poem on the web in its knowledge base. It has a lot of Haskell programs. It has people saying
wonderful and horrible things about Haskell programming. It also has the archive of the Perl poetry Reddit group or something, which is people writing poems in and about the Perl programming language. It probably even has a few poems in Haskell that some random geek put online somewhere. So it's really got a breadth of stuff there it can draw on.

Now, being able to model and synthesize new creations from this incredible amount of stuff on the web is not easy, and it's impressive and it's cool. It's still quite a different thing than what humans are able to do, but it's quite impressive. And now that we see what it is, it's not hard to extrapolate what could be done with it, and what both the limitations and the capabilities of this category of technologies will be. I mean, once you saw you could solve chess with game trees, it was clear eventually someone would solve Go with game trees, right? You could see what that category of technologies was able to do.

And I think what's clear to me as a technical person, though, is not necessarily going to be clear to the non-technical user, and that's the other thing we're seeing in the media and the markets now. So the one thing we're seeing is that this is really a very cool achievement. It's a threshold achievement in the history of AI and technology, just as, say, getting face recognition to work back in 2015 or so was a threshold achievement. Before that, people were like, we don't know if machines could ever recognize faces; maybe people will use a network to shop online, but they'll still have to go to the stores, you know? So people didn't think face recognition would ever work, and now suddenly it's obvious. And people didn't think true self-driving would ever work. Now people are like, well, maybe Elon's wrong, maybe it'll take ten more years. It may take five more years, it might take ten more years, though I doubt it, but not many people think it's impossible anymore, right?
So I think one thing is, you know, ChatGPT and similar systems are a true breakthrough. The core breakthrough was made by Google's model in 2017-18; OpenAI doubled down on it and did better product development based on it, which is to their credit, and it's cool. I think this category of technologies has huge economic and social implications even if it never gets extended toward general intelligence. Just this sort of narrow AI, which can synthesize stuff based on this super broad training database, has an awful lot of powerful economic applications. On the other hand, the average end user doesn't have the knowledge of the inner workings of the system that I do, so it's not obvious to them necessarily that this category of system cannot ever really be like a human; it cannot fundamentally leap beyond its training data.

Regardless, people think it can, and it probably will at some point, correct?

Well, it depends on what "it" is. I don't think a GPT-type system ever will.

Yeah, I agree, but there is something out there that may be able to.

Yeah, and there's a fair bit of subtlety here, which is a challenge, because these are technical things for people to have to think about, right? I think a lot of people in the AI field, people who really know what they're talking about, a lot of them do think that, you know, maybe GPT-7 will be able to think like a human, with all the creativity and innovation of a human. I think that's totally wrong, and it baffles me that technically knowledgeable people would think that, but some of them do. And many technical people think these systems are a parlor trick, that they're totally off in a different direction from what you would need to do to build a real thinking machine, right? And I understand that line of thinking, because the way these large language models like ChatGPT approach particular problems is really, really different from how our human mind approaches many of those
particular problems. My own view is somewhere in the middle of those two extremes. I think that large language models like GPT-4 and so on can be very valuable as part of a broader AGI architecture, but I think they're like one lobe of the AGI brain rather than being the whole AGI brain. Still, having one lobe of the brain well developed is still something, right? That's still real progress; it's still really interesting.

Then you have people worrying about the risks, which I was about to ask you about next. We had Elon Musk just coming out yesterday, I believe, or maybe two days ago, saying that, hey, maybe we need to slow down on this AI development just a little bit, because there are some concerns.

Elon's schizophrenia, or dissociative identity disorder would be a more correct psychiatric diagnosis perhaps, his dissociative identity disorder on AI is fascinating, because it reflects a dissociation in society as a whole, but he's so upfront about putting his own incoherence, self-contradiction, and dissociation out there, right? I mean, he's like, AI is terrible, AGI is going to kill us; oh, by the way, we're going to have true self-driving two weeks from now. You can see he plainly sees the potential downsides if AGI goes the wrong way, and in particular the downsides of AGI as developed by greedy Silicon Valley tech bros, a species he knows quite well. On the other hand, he can also see, like, wow, advanced AI technology that can learn and reason and leap beyond its training data is really valuable, and we need it, and it will make our products better and it will make our lives better, full self-driving being one among many examples of that. So he clearly sees both sides, and with Tesla he's pursuing one direction; with OpenAI he was trying to pursue a different direction of sort
of open-source, non-corporate, beneficial AGI development, although now of course Sam Altman and Elon Musk went different ways, and OpenAI is now sucked into the big-tech corporate universe anyway. So yeah, Elon, among others in the Future of Life Institute, put forward this petition saying let's pause training large language models for at least six months while we sort out the ethics associated with them. Then Eliezer Yudkowsky, who I used to work with in a loose sense, an AI ethics pundit well known in Silicon Valley, wrote an op-ed that I saw this morning which is like: a six-month pause isn't long enough, let's blow up all the AI servers, basically, and I'm only slightly exaggerating. Clearly he's trying to grab attention, but he's trying to grab attention not just because he loves attention, which he does, but because he's really alarmed. He doesn't think GPT-4 is an AGI; he's more like, well, will GPT-5 or GPT-6 be an AGI, or will it be so complicated and weird that we can't tell whether it's an AGI or not? We've just got to stop it all right now.

It's fascinating to me that you've got people saying this when you have a system that probably is not that intelligent, amazing as it is. I mean, I've worked with ChatGPT and LLaMA and other LLMs a fair bit now. I'm working with a few groups building vertical-market-specific AI products on top of them that will be launched on the SingularityNET platform, and from a product engineer's point of view, these are really, really cool tools to have at your disposal, right? On the other hand, it's very clear to me that while I could use GPT to help write the JavaScript behind a web application, it's not useful to me in figuring out how to code a new AI system. I mean, we're developing our own AGI architecture, which is called OpenCog Hyperon, designed for decentralized deployment on SingularityNET. We're creating a new AI programming language called MeTTa, Meta Type Talk, to script the AI thought processes and execute the AI thought
processes. You know, GPT is totally useless at this level of software development and computer science. Or take our HyperCycle project, where we're developing a new ledgerless blockchain: if you're developing the data structures and algorithms for a new ledgerless blockchain, GPT is completely useless for that sort of software development. It's very cool if you're starting in a new programming language and you don't know which header files and which libraries to use; but if you're an expert in a programming language and you're trying to push the limits to develop new AI and blockchain frameworks, this software cannot understand what the [ __ ] you're doing. It doesn't come close. I mean, it's not anywhere near being able to, like, recursively self-improve and improve its own AI learning and reasoning algorithms, or port itself from a centralized to a decentralized infrastructure, or something like that. There's a very, very big gap between the GPT systems as they exist now, or as they could possibly be in a GPT-5 or whatever, and something that's going to be designing new AI systems or new blockchains and developing and deploying them. And I think that gap is not that obvious to people who aren't technical, but it's incredibly obvious to me.

I think the people worried about safety and risks fall into two categories. There are people who are worried that GPT-5 might be an AGI and take over the world, or help nasty people to take over the world and blow up the world or something, and most people who think that are not very technically knowledgeable. But then you have people like Eliezer Yudkowsky, who are somewhat technically knowledgeable but just really extreme in their ideology and their paranoia, who think that anyway. Then there's a bunch of other people who signed that petition who know full well that GPT-5 is not going to be an artificial general intelligence that can really leap
beyond its experience and knowledge, but they're just worried about the disruption that these jacked-up narrow AI systems can do in society, even without becoming AGIs and helping people to take over the world. Like, how many jobs will they obsolete? Will they help, you know, terrorists find a way to create new chemical weapons, or sneak into the high school and harass people, or something? They're more worried about, like, we don't know all the ways people could use this non-AGI technology to do something bad, so let's pause until we figure it out. So I think you have both of those categories signing this petition.

No, I didn't sign that petition, and I've talked to an awful lot of people who didn't. Most of those who didn't are a little fearful to come out publicly and say, I'm not signing this somewhat silly and misdirected thing, because there's a lot of value in virtue signaling, and if you come out and say, I'm not signing this thing because I think it's beside the point, you then run the risk of being viewed as careless and reckless and a bad guy. So there's not much upside for most people in running that risk. I'm insane enough, and already out there enough in my opinions, that I've been willing to come out and say this is beside the point in a number of different ways. I mean, it's not like there's no case where I would say, hey, we'd better pause this development. Certainly, if you had a system that really, truly was showing sparks of general intelligence, which I don't think GPT-4 genuinely is, and it was not clearly beneficial and compassionate toward people, then clearly you're at a point where you want to pause, take a deep breath, and understand what's going on. We're not really at that point yet, although Microsoft fanned the flames a bit by publishing papers saying, hey, are there sparks of AGI in here? And lacking a recognized formal definition of AGI, I
mean, you can't even rigorously say, no, there are no sparks of AGI here. And they're like, what is a spark, anyway? It's not a mathematical criterion. Though you could argue, well, okay, we're not there yet, but maybe GPT-5, of which GPT-4 is a component, will show sparks of AGI but not be clearly beneficial and compassionate, so at that point I'd want to put on the brakes; but if there's a 20 percent chance we'll be there in two years, shouldn't we put on the brakes now? So there are people taking that perspective.

It sounds to me like a lot of people are thinking about this kind of in the right way; it's just that some people are being disingenuous about it. That's kind of what I hear you saying, a little bit.

Yes, some people are being sincere, some people are being disingenuous, and many people are being partly sincere and partly disingenuous, and this is what people will do. Then there's a whole other piece of it, which is that I didn't see the names of Vladimir Putin or Xi Jinping on the list of signatories to this petition. Putin is a bit distracted, but I mean, we met with the CEO of Sberbank in Moscow a few years ago; he understands AGI, and they've got a substantial AI research team and a lot of servers. Obviously China has huge AI teams and they're training huge large language models. They're not about to stop, and they also haven't vowed not to use AI for espionage or military purposes; Xi Jinping just gave a whole bunch of speeches about how China is beefing up its military, and they're increasing military recruiting, right? So there's a whole other geopolitical angle to it, which is: if the US and Western Europe really slowed down AI development, and these things really are going to be super powerful, well then, under that premise, why are we giving a head start in developing this super powerful thing to China, which we seem so worried about? In fact, I'm pretty big on collaboration and cooperation with China. I lived ten years in Hong Kong, and my wife is mainland
Chinese, and the last thing I want is to see TSMC's chip fabs blown up in an altercation. But again, there's a bit of dissociative identity disorder here: on the one hand, let's slow down developing AI because this AI may be very powerful; on the other hand, let's be paranoid about China developing chips and onshore chip development. By the way, China is doubling, tripling, and quadrupling down on developing their own large language models, and their military and espionage development is done within their big tech companies. So again, there are some perplexing inconsistencies that you see here.
2023-04-08