Ben Goertzel: AGI is 5 Years Away!

AGI in five years or something from now. What's cool is how uncontroversial these statements are in so many circles now, right? We will have massively superhuman AGI that will exceed humans in essentially every respect of intelligence. So I mean, I think what happens to human society during that transitional phase is very nasty and difficult. Today we talk about large language models, artificial general intelligence, and merging with machines. Ben Goertzel is a computer scientist, a mathematician, and an entrepreneur. He's the founder and CEO of SingularityNet, and his work focuses on AGI, which aims to create truly intelligent machines that can learn, reason, and think like humans.

This was another talk conducted at the Center for the Future Mind's MindFest conference at the beautiful beachside campus of Florida Atlantic University. Expect to hear more discussions on AI from speakers like Goertzel. Goertzel will also appear with David Chalmers on an upcoming episode; David Chalmers will appear on his own as well, and Wolfram will appear on his own in a part two to the lecture he's already given. Part one for Wolfram is linked here. These will come out over the next few days and weeks on the Theories of Everything channel.

As usual, links to all of these are in the description. I would like to thank Susan Schneider, professor of philosophy, because this conference would not have occurred without her spearheading it, without her foresight, and without her inviting Theories of Everything as the exclusive filmer of the event; we're also going to host a panel later with David Chalmers and Susan Schneider. I also would like to thank Brilliant for helping defray some of the traveling costs. Brilliant is a place where there are bite-sized interactive learning experiences for science, engineering, and mathematics.

Artificial intelligence in its current form uses machine learning, which often uses neural nets, and there are several courses on Brilliant's website teaching you the concepts underlying neural nets and computation in an extremely intuitive, interactive manner, which is unlike almost any of the tutorials out there. They quiz you. I personally took the courses on random variable distributions and on knowledge and uncertainty because I wanted to learn more about entropy, especially as there may be a video coming out on entropy. You can also learn group theory on their website, which underlies physics; SU(3) × SU(2) × U(1) is the standard model gauge group. Visit brilliant.org/TOE

to get 20% off your annual premium subscription. As usual, I recommend you don't stop before four lessons. You have to just get your feet wet. You have to try it out. I think you'll be greatly surprised at the ease with which you can now comprehend subjects you previously had a difficult time grokking.

Thank you again to Ben Goertzel. There are many, many more plans coming up for the Theories of Everything channel. TOE is not just a podcast, it's a project. If you'd like to hear more from this channel, this project, then feel free to subscribe and YouTube will suggest more of it to you. There's a variety of upcoming content on the themes of theoretical physics, consciousness, artificial intelligence, and philosophy.

Enjoy.

Thanks for inviting me.

And it's really a fun time to be at a conference of this nature with all the buzz around AI and intelligence and AGI and so forth. I mean, as Rachel alluded, I've been doing this stuff for a while, like many of the speakers here. I did my PhD in math in the late 80s, but even at that point I was interested in AI, in the mathematics of mind, and in implementing the mathematics of mind in computers.

And of course, most people in this room know (though most people on the planet do not) that the AI field was already quite old by the 1980s, right? I mean, in the Uber ride over here, I told the lady driving the Uber I was going to a conference on, basically, machines that think like people and how to make machines think like people. She obviously had no tech background. I said I've been working on this since the 80s. So first of all, she's like, oh, I had no idea people were working on this that long ago.

Secondly, she's like, but I thought that it had already been done and machines could already think like people, right? So her assumption was that it had already been solved, was running in the background making some billionaires money, and that was just the state of things, right? Which is interesting, because that certainly wasn't the case five or 10 years ago, right? But I think folks in this room are aware that work on these topics has been going on a very long time, and also that many, perhaps almost all, of the core ideas underlying what's happening in AI now are fairly old ideas, which have been improved and tweaked and built on, of course, as better and better computers and more data and better networks and so on have allowed implementation at a larger scale and thus experimentation at a larger scale. So it's been fascinating to hear talks on the fundamental nature of consciousness and consciousness in babies and organoids, and then on the structure and dynamics of the physical universe being addressed using data structures and dynamical equations really more characteristic of AI and computer science than of physics.

So there's clearly a fascinating sort of convergence and cross-pollination going on between biology, psychology, physics, computer science, and math. More and more, everything is coming together. What I want to focus on in my talk today is what I think are viable paths to get from where we are now, where we have machines that can fool the random Uber driver into thinking that they are human-like general intelligences.

How do we get from where we are now to machines that actually are human-level general intelligences? And I believe shortly after that, we will have machines that are vastly greater than human-level general intelligences. I'll say a little bit about that, but I'm gonna focus more on the path from here to human-level AGI. And I'll give a few preliminaries before I get into that. I'll talk a little bit about these GPT-type systems, which are sort of the order of the day, and a little bit about how I connect intelligence with consciousness. I'll try to go through these huge topics fairly rapidly and then move on to approaches to engineering AGI.

So I think regarding ChatGPT and other transformer neural networks, a bunch of correct and interesting things have been said here already. I mean, I was also impressed and surprised by some aspects of the function of these systems. I was also not surprised that they lack certain sorts of fundamental creativity and the ability to do sustained, precise reasoning.

And while I was surprised at just how well ChatGPT and similar systems can sort of bullshit and bloviate and write college admission essays and all that, in a way I'm not surprised that I'm surprised, because I know that my brain doesn't have a good intuition for what you get when you take the entire web, feed it into a database, and put some smart lookup on top. Just like I know my brain is bad at reasoning about the difference between a septillion and a nonillion. We're not really well adapted to think about some of these things that we don't come across in our everyday lives. So even now, if you ask ChatGPT to compose a poem, and it does, I don't have a good intuitive sense for how many poems of roughly similar nature are on the web and were fed into it. You could do that archeology with great time and effort by putting probes into the network while it does certain queries, but that's a lot of work, right? What's intriguing to me as someone who's written and thought a lot about general intelligence is that these systems achieve what appears like a high level of generality relative to the individual human, right? They're very general compared to an individual human's mind, but they achieve that mostly by having a very, very broad and general training database, right? They don't do big leaps beyond their training database, but they can do what appears very general to an individual human without making big leaps beyond their training database, because their training database has the whole fucking web in it, right? And I mean, that's a very interesting thing to do.

It's a cool thing to do. It may be a valuable thing to do, right? It may be that with this sort of AI (not GPT in particular, but large language models, transformer neural nets of this character, done cross-modally and integrated with reinforcement learning), with this type of narrow AI system even falling short of general intelligence in the human sense, I won't be shocked if it ultimately obsoletes like 95% of human jobs, right? There's a lot more work to be done to get there, of course. Many jobs involve physical manipulation, and integrating LLMs with robots to do physical manipulation is hard, as we see from these robots here, like this dog whose head just fell off, and Sophia, who's on a tripod, although she's been on wheels and legs at various times. So there's a lot of engineering work. There's a lot of training and tuning work. But fundamentally, I wouldn't be shocked if a very high percentage of jobs that humans now get paid to do could be obsoleted by this sort of technology.

And I mean, there are gonna be jobs like preschool teacher or hospice care or therapist that you just want to be done by a person because it's about human-to-human connection, just like we see live music even though recorded music may sound better, because we like being in the room with other humans playing music, right? But that's a minority of the jobs people do. And there are also jobs doing things that these sorts of neural nets I think will never be capable of, and I'll come to that in a moment (though I think different sorts of algorithms could do it), but it's not a big percentage of human jobs, right? So one lesson to draw here is that almost everything people get paid to do is just rote and repetitive recycling of stuff that's already been done before, right? So if you feed a giant neural net a lot of examples of everything that's been done before, and it can then pick stuff out of this database and merge it together judiciously, okay, you eliminate most of what people get paid to do. And it takes a little bit of time to roll this out in practice, though not necessarily that long. Like, I know some friends of mine from the AGI research community started a company called Apprente maybe four or five years ago. They started out wanting to build AGI; their VC investors channeled them, as would be the usual case, into doing some particular application instead.

And what they ultimately did was automate the McDonald's drive-thru. They sold the company to McDonald's maybe two and a half years ago, and now their technology is starting to get rolled out in some real McDonald's around the world, right? So you're replacing that guy who sits behind the drive-thru window listening to stuff over that noisy, horrible microphone, like, give me a Big Mac and fries, hold the ketchup, right? They're finally automating away those jobs. One thing that's interesting to me there is that going from the technology being shown to work to it actually being deployed across all the McDonald's is taking at least five years, right? It was obvious to me a long time ago that this could be automated. It was shown two and a half years ago that it could work in some McDonald's, but it's still not rolled out everywhere.

It's rolled out in certain states, right? But then even replacing the guy punching the hamburger into the cash register with a touchscreen where you push the hamburger button yourself, even that's taking a long time to get rolled out. So these practical transitions will take a while. They're really, really interesting, but there are some things I think are held back not by practical issues but by fundamental limitations of this sort of technology. In essence, I think these are anything that intrinsically requires taking a big leap beyond everything you've seen before.

And this sort of gets at the fundamental difference between what I think of as narrow AI and what I think of as AGI. What I think of as AGI, Artificial General Intelligence, which is the term I introduced in 2004 or something in an edited book by that name from Springer, refers to a system that has a robust capability to generalize beyond its programming and training and its experience, and sort of take a leap into the unknown. And every baby does that, every child does it.

I mean, I have a five-year-old and a two-year-old now and three grown kids, and every one of them has made an impressive series of wild leaps into the unknown as they learned to do stuff that we all consider basic. Now, that doesn't mean an AI system couldn't do the same things a two- and five-year-old can do without itself making a leap into the unknown. It could do it by watching what a billion two-year-olds did and interpolating. But kids still make the leap themselves.

And in terms of job functions that adults do, doing impressive science almost always involves making a leap into the unknown. I mean, there's a bunch of garbagey science papers out there, but if you look at the Facebook Galactica system, which was released and then retracted by Facebook (a large language model for generating science papers and such), you can see the gap between what large language models can do now and even pretty bad, mediocre science. What Galactica spat out was pretty much science-looking gibberish.

Like, you ask it, tell me about the Lennon-Ono conjecture, and it will spit out some trivial identity of set theory invented by John Lennon and Yoko Ono, which is amusing. But it's not able to do science at the level of a mediocre master's student, let alone a really strong professional researcher. And part of the core reason there is that doing original science is about taking a step beyond what was there. It's specifically not about just recombining in a facile way what was there before. Whereas writing an undergrad essay for English 101 kind of is about making a facile recombination of what was there before.

So that's already automated away, and we have to find other ways to attempt to assess undergrad students, right? In music, I would say, synthesizing a new 12-bar blues song: there's no released system that can do that now, but I'm sure it's coming in the next few years, and some folks on my team at SingularityNet are working on that too. Google's model MusicLM goes partway there, but it's not released, and it's clear how to do better. On the other hand, think about if you fed a large language model or other comparable neural net all the music composed up to the year 1900, say, just supposing you had it all in the database: is it gonna invent jazz? Is it gonna invent progressive jazz? Is it gonna invent rock? You could ask it, let's put West African drumming together with Western classical music and church hymns. It's gonna give you Mozart and Swing Low, Sweet Chariot with a West African polyrhythmic beat, which may be really cool, but it's not gonna bring you to Charlie Parker and John Coltrane, Jimi Hendrix and Shpongle or whatever else, right? It's just not gonna do it.

So there's a sense in which this sort of creativity is combinatory, right? Jazz is putting together West African rhythms with Western church music and chord progressions; rock is drawing on jazz and simplifying it, and so forth. But the type of combination being done when humans do this sort of fundamental creativity is different from the type of combination a ChatGPT-type system is doing, and that really has to do with how knowledge is represented inside the system. So I think these systems can pass the Turing test. They may not quite yet, but if you're talking about fooling a random human into thinking it's a human, they can probably already do that.

I suspect that without solving the AGI problem, you could create a system that would fool me or anyone into thinking it was a human in a conversation, because so many conversations have been had already, and people aren't necessarily that clever either, right? But I don't think these systems could pass a five-year-long Turing test, because I could take a person of average intelligence and teach them computer programming and a bunch of math, and teach them to build things and so on over a long period of time. And I don't think you could ever teach GPT-4 or ChatGPT in that sort of way. So if you give me five years with a random human, I could turn them into a decent AI programmer and electronics engineer and so forth, and that goes beyond recombination, right? But Alan Turing didn't define the Turing test as five years long, right? He defined it as a brief chat. But of course, he wasn't imagining what would happen if you put all of the web into a lookup table either, right? He was very smart, but that was a lot to see at that point in time.

Another example of something I think this sort of system wouldn't be able to do: let's say business strategy or political policy planning at a high level, because the nature of it is you're dealing with a world that's changing all the time and always throwing something new and weird at you that you didn't expect. And if you're really just recombining previous strategies, it's not a terrible thing to do, but it's not what has built the greatest companies. What's built the greatest companies is pivoting in a weird way and making a leap beyond experience.

So there certainly are things humans do that go beyond this sort of facile, large-scale pattern matching and pattern synthesis, but it's interesting how far you have to reach to find them. On the other hand, it does mean that if you had a whole society of ChatGPTs, it would never progress, right? Some people might like that better, but it would genuinely be stuck, stuck in its shallow derivations of where you can get from now. It's not gonna launch the Singularity. And there are a lot of other smaller things it's not gonna do. So I dwelt on that a bit, partly because it's topical, but partly because I think it frames the discussion of general intelligence reasonably well, in the sense that it highlights quite vividly what isn't a general intelligence, right? Now, what is an intelligence is a bigger and subtler question, obviously. And I'm gonna mostly bypass the problems of consciousness that were discussed here this morning, not because they're not interesting, but because that's a whole subject area in itself.

And I don't have that much time. But I'm fundamentally somewhat panpsychist in orientation. So I tend to think that, you know, this microphone has its own form of consciousness. And I don't care much if you wanna call it consciousness or proto-consciousness or whatever.

But I think that the essence of what it is to experience, to me, is just immanent in everything. And it does manifest itself differently in the human brain than in a microphone. And how similarly it will manifest itself in a human-level AGI versus a biological brain, that's a very interesting question. And it probably depends on many aspects of how similar that AGI is to the human brain.

So, what's the continuity between the structure and dynamics of a cognitive system and the experience felt by that system? Does a small change in structure and dynamics lead to a small change in the felt experience? There are a lot of fascinating, subtle questions there, which I'm gonna punt on for now. We can give another talk about that some other time. But what is intelligence? That's a slightly more interesting and relevant question here, though I also think not such a critical one. Like, fussing about what is life is not much of what biologists do.

And you can make a lot of progress in synthetic biology without fussing a lot about what is life and worrying about whether a virus really is or isn't alive. Like, who really cares? It's a virus, it's doing its thing. It has some properties we like to call lifelike.

It lacks some others. And synthetic biology systems, each of them may have some properties we consider lifelike and lack some others. That's fine.

But there's still something to be gained by thinking a little bit about what is intelligence, what is general intelligence. Marcus Hutter had a book called Universal AI, published in 2005 or so. He proposed a formalization of intelligence which basically is the ability to achieve computable reward functions in computable environments. And you have to weight them, so you're averaging over all reward functions in all environments.

And what he does is weight the simpler ones higher than the more complex ones. This leads to a bunch of fairly vexed questions about how you measure simplicity, and how well things transfer between one measure of which environments and rewards are simpler than another. But one thing that's very clear when thinking about this sort of definition of intelligence is that humans are pretty damn stupid.
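As an aside, the formalization being paraphrased here is roughly the Legg-Hutter universal intelligence measure; a hedged, simplified sketch in LaTeX (notation mine, details compressed):

```latex
% Universal intelligence of an agent \pi: its expected cumulative reward
% V^{\pi}_{\mu} in each computable environment \mu, summed over all such
% environments E and weighted by simplicity. K(\mu) is the Kolmogorov
% complexity of \mu, so simpler environments count exponentially more.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

The "vexed questions" above are mostly about the weighting term: Kolmogorov complexity is uncomputable and depends on the choice of reference machine.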

Like, we're very bad at optimizing arbitrary computable reward functions in arbitrary environments. For example, running a 7 or 8 dimensional maze is very hard for us, right? And that's not even a complex thing to formalize. We learned to run a 2D maze, a 3D maze maybe. Beyond that, most people become very confused. But in the set of all computable environments and reward functions, there may be far more higher-dimensional mazes than two- or three-dimensional mazes, depending on how you're weighting things, right? Let alone fractal-dimensional mazes. So there's a lot of things we're just bad at. We come out very dumb by that criterion, which may be okay.

We don't have to be the smartest systems in the universe. An alternate and more philosophically deep way of thinking about intelligence was given by my friend Weaver, aka David Weinbaum, in his PhD thesis from the Free University of Brussels, which was called open-ended intelligence. And he goes back to continental philosophy, to Deleuze and Guattari and so forth.

He looks at intelligent systems as complex self-organizing systems driven by two dual, complementary and contradictory drives. One is individuation, trying to maintain your boundary as a system, which relates somewhat to autonomy as discussed earlier today, but is I think more clearly defined. The other is self-transcendence, which is basically trying to develop so that the new version of yourself, while connected by some continuous thread with the older version, also has properties and interactions that the old version of yourself couldn't ever understand, right? And of course, all of us have both individuated and self-transcended over the course of our human lives. The human species has also. And this doesn't necessarily contradict Marcus Hutter's way of looking at it. You could say that through the iterated process of individuation and self-transcendence, maybe we've come to be able to optimize even more reward functions in even more environments, right? But all these abstract ways of looking at things don't really give us a way to tell how much smarter a human is than an octopus, or how smart ChatGPT is relative to Sophia, or exactly how far we've progressed toward AGI.

I think all these theoretical considerations have a lot of mathematical moving parts and are quite abstract. In practice, what we see is that most people will give ChatGPT credit for being human-level AGI, even though experts can see it isn't. I had posed a while ago what I called the robot college student test. Suppose you had a robot, say a couple of dot-versions ahead of this one, that can go to, let's say, MIT and do the same exact things as a student: roll around to the classes, sit and listen, do the assignments, take the exams, do the programming homework, including group assignments, and then graduate. I figure then I'm going to accept that thing is, in effect, a human-level general intelligence. And I'm not 100% on that, someone might be able to hack that test, but you can see the university is set up precisely for that purpose, right? It's set up to teach, and a science university especially is set up to teach the ability to do science, which involves leaping beyond what was known before.

And it's set up to try to stop you from cheating too, right? So I'm assuming the robot isn't going to class and cheating by, like, sending 4G messages to some scientists in Azerbaijan or something, but going through it in a genuine way. But again, with that sort of test you could argue about the details; it's measuring human-like general intelligence. And it's very clear you could have a system that's much, much smarter than people in the fundamental sense but misses some things, like social cues, so that it wouldn't do well in group assignments in college or something. And you can see that from the fact that there are autistic geniuses who are human and would miss social cues and do poorly in group assignments, right? And they're still within the scope of human systems. So I'd say fundamentally, articulating what is intelligence is an interesting quest to pursue. I'm not sure we've gotten to a final consensus on what is intelligence that bridges the abstract to the concrete.

I'm not sure that we need to; actually, it's pretty clear we don't need to. We could make a breakthrough to human-level AGI and even superhuman AGI and still not have pinned down what is intelligence, just as I think we could do synthetic biology to make weird new freakish life forms come out of the lab without having a consensus on what, fundamentally, life is. So how do I think we could actually get to human-level general intelligence, if transformer neural net, ChatGPT-type systems are not the golden path? I don't see any reason there's one true golden path.

I think a well-worn but decent example is manned flight. You've got airplanes, you've got helicopters, you've got spacecraft, you've got blimps, you've got pedal-powered flight machines, and probably many ways of flying that we haven't thought of. Once you have the fundamental principles of aerodynamics and fluid mechanics, you can figure out that there are a lot of different ways to fly. And I think there are gonna be a lot of different ways to make human-level general intelligence.

Now, some will be safer than others, just like blimps blew up more than other modes of flying, and some will be easier to evolve further and further. Some ways of flying in Earth's atmosphere are more easily evolved into ways of flying into space, right? Balloons evolve very poorly; a hot air balloon doesn't turn into a spacecraft, whereas you could take an airplane and sort of morph it into a spacecraft. So I think there are gonna be multiple different routes, and I'm gonna briefly mention three routes that I think have actual promise, one of which is what I'm currently mostly working on.

So the first route I think has actual promise is actually trying to simulate the brain. And again, the people in this room are among the small percentage of the human population who realize how badly current computer science neural nets fare if you think of them as brain simulations, right? The formal neuron embodied by some threshold function bears very little resemblance to a biological neuron. And if you want equational models, you have Izhikevich's chaotic neuron model, you have the Hodgkin-Huxley equations: mathematical models of a neuron that also aren't quite right, but at least they try. What's inside current computer science neural nets doesn't even try.
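To make the contrast concrete, here is a minimal sketch of the Izhikevich spiking-neuron model mentioned above, a two-variable system rather than a threshold function. The parameter values are the standard "regular spiking" settings from Izhikevich's published model; the constant input current and step size are illustrative choices, not anything from the talk.

```python
import numpy as np

def simulate_izhikevich(T_ms=1000.0, dt=0.25, I_ext=10.0,
                        a=0.02, b=0.2, c=-65.0, d=8.0):
    """Euler-integrate one Izhikevich neuron driven by a constant current.

    dv/dt = 0.04 v^2 + 5 v + 140 - u + I
    du/dt = a (b v - u)
    reset: if v >= 30 mV then v <- c, u <- u + d
    """
    steps = int(T_ms / dt)
    v, u = c, b * c                 # initial membrane potential and recovery variable
    trace = np.empty(steps)
    spike_times = []
    for i in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_ext)
        u += dt * (a * (b * v - u))
        if v >= 30.0:               # spike: record it, then reset the state
            spike_times.append(i * dt)
            v, u = c, u + d
        trace[i] = v
    return trace, spike_times

if __name__ == "__main__":
    _, spikes = simulate_izhikevich()
    print(f"{len(spikes)} spikes in one second of simulated time")
```

Even this is a drastic simplification of a real neuron, but unlike the weighted-sum-plus-nonlinearity unit in deep learning, it at least exhibits membrane dynamics, spiking, and reset behavior.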

Then you have astrocytes, glia, all these other cells in the brain that are known to be helpful with memory. You have all this chemistry in the brain. You have extracellular charge diffusion through the extracellular matrix, which gives you EEG waves. There's a lot of stuff in the brain we don't understand that well and aren't modeling in any computer science neural net system. You also have a few known cases of wet quantum biology doing stuff in the brain, and how relevant they are to thinking is unknown. But even without going quantum, we don't know enough about the brain to make a real computational brain simulation.

There's no reason we couldn't, though. I had a devious plan for this involving Sophia, which hasn't taken off yet. The plan was to make her a big fashion star, so that having a transparent plate on the back of your head would be viewed as highly fashionable.

Then you get people to remove the back of their skull and replace it with a transparent plate like Sophia has, because it looks really cool, right? But then once people have that transparent plate, then you can put like 10,000 electrodes in there and measure everything that's happening in their brain while they go about their lives in real time. With that sort of data, you might be able to make a crack at really doing a biological simulation of the brain. And hopefully someone invents a less hideous and invasive mode of brain measurement, right? I mean, things like fMRI and PET are incredible physics technologies. I feel like if we got another incredible physics technology to scan the dynamics of the brain with high spatial and temporal precision, we might gather the data we need to make a real brain simulation.

And brain measurement is getting exponentially better and better. It's just that so far the exponent isn't as fast as with computer software and AI, right? But it's coming along. Even without better brain measurement, I think we could be doing a lot better. No one is devoting humongous server farms to huge nonlinear dynamics simulations of all the different parts of the brain using, say, Izhikevich neurons and chaotic neural networks. If the amount of resources one big tech company puts into transformers were put into making large-scale nonlinear dynamics simulations of the brain based on detailed biology knowledge, we would gain a lot.

We would learn a lot beyond where we are now. We still don't have data on astrocytes and glia and a lot of the neurochemistry, right? So we're still missing a lot. It's interesting to think about the strengths and weaknesses of that approach, though. One weakness is we don't have the data. Another weakness would be that once you have it, all you have is another human, in a computer.

And we've already got billions and billions of irritating humans, right? Granted, it's a human where you can probe everything that happens in their digital brain, so you could learn a lot from it. But the human brain is not designed according to modern software engineering principles or hardware engineering principles, for better and worse. Take short-term memory: seven plus or minus two items. What if you wanna jack that up a bit? There's probably not a straightforward way to do that. It's probably wrapped up in weird nonlinear dynamic feedbacks between the hippocampus, cortex, thalamus and so forth.

We're not designed to be modded and upgraded in a flexible way. We do have some interesting adaptive abilities. Like if you graft a weird new sense organ into the brain, the brain will often adapt to being able to sense from it.

Those are weaknesses, and then there are potential ethical weaknesses also. The maxim that absolute power corrupts absolutely is a sort of partial truth formulated by observing humans. It's not necessarily a truth about all possible minds.

But if you're making a human in a computer, and you do find a way to jack up its intelligence, then maybe you're creating a horrible science-fictional anti-hero: a human who lives in a computer, is smarter than everyone else, but will never really have a human body. We can see how that movie ends. But that's, anyway, one possible route.

Another possible route, which is very interesting to me and would be a lot of fun, but is not something I'm putting a lot of time into right now, is a more artificial life type approach. The field of A-Life had a peak in the 90s or so, trying to make these sort of ecosystems of artificial organisms that would then evolve smarter and smarter little creatures. It didn't go as well as people wanted. But of course, when I was teaching neural nets at the University of Western Australia in the 90s, it took three hours to run a 30-neuron network with recurrent backprop, and everyone was bitching that neural nets are bad because they're too slow and will always be too slow, right? So it could be that what happened with neural nets can also happen with A-Life. It could be just scale.

Certainly the ecosystem has a lot of scale, right? And what you find is that when you have more scale, you can screw around with the details more and find out what works. It seems like artificial life never found quite the right artificial chemistry to underlie the artificial biology, and not that many things were tried. A guy named Walter Fontana had a cool system called algorithmic chemistry in the 90s and early aughts, where he took little LISP programs and made a big soup in which LISP codelets would rewrite other LISP codelets, trying to get autocatalytic networks to emerge out of that.
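A purely illustrative toy in that spirit (not Fontana's actual LISP-based AlChemy system): molecules are tiny programs, a "collision" composes two of them, and the product goes back into the soup. Everything here, from the primitive set to the soup size, is an arbitrary assumption chosen just to show the flavor of the idea.

```python
import random
from collections import Counter

# Toy "algorithmic chemistry": molecules are tiny programs over integers,
# a collision composes two molecules, and the product re-enters the soup.

PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "mod7": lambda x: x % 7,
    "dec3": lambda x: x - 3,
}
MAX_LEN = 12  # cap molecule size so compositions don't grow without bound

def evaluate(molecule, x):
    """A molecule is a sequence of primitive names applied left to right."""
    for name in molecule:
        x = PRIMITIVES[name](x)
    return x

def signature(molecule):
    """Crude 'phenotype': the molecule's behavior on a few probe inputs."""
    return tuple(evaluate(molecule, x) % 100 for x in range(5))

def run_soup(n_molecules=200, n_collisions=20_000, seed=0):
    rng = random.Random(seed)
    soup = [(rng.choice(list(PRIMITIVES)),) for _ in range(n_molecules)]
    for _ in range(n_collisions):
        f, g = rng.choice(soup), rng.choice(soup)
        product = (f + g)[:MAX_LEN]                 # "reaction" = composition, truncated
        soup[rng.randrange(n_molecules)] = product  # product displaces a random molecule
    return Counter(signature(m) for m in soup)

if __name__ == "__main__":
    print("most common behaviors in the soup:", run_soup().most_common(3))
```

The interesting question in real algorithmic chemistry is whether self-maintaining, autocatalytic sets of such "reactions" emerge; a toy this small mostly just shows the mechanics.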

Fontana's system didn't go that well, but the amount of computational firepower being leveraged there was very, very small, right? There's an argument against this approach, which is that it took billions of years for life to emerge on Earth, with a very large number of molecules doing random-ish sorts of things. On the other hand, we can take leaps, we can watch experiments, we can fine-tune things more aggressively than the holy creator appears to have done with evolution on Earth, right? So I think, again, this is something that gets very little attention or resources now, but it would be really interesting to see what a Google-scale experiment in artificial life would lead to. There's not an obvious commercial upside to the early stages of that sort of research, as compared to question-answering systems or something. And I have some further ideas on how to accelerate artificial life, but I'll mention those at the end because they involve my third plausible route to creating AGI systems, which is what I'm actually working on now.

And I'll just spend a few minutes on it, since I've given a lot of talks on it before, which you can find online. In terms of name-brand systems, the would-be AGI system I'm working on now is called OpenCog Hyperon, which is a new version of the OpenCog system. We had a system called OpenCog launched in 2008, based on some older code before that. Now we're making a pretty much ground-up rewrite of that called Hyperon, but the ideas underlying it could be leveraged outside that particular name-branded system. One way to look at this is that we're hybridizing neural, symbolic and evolutionary systems. Symbolic meaning logical reasoning, but not necessarily old-fashioned crisp predicate logic.

For those who are into wacky logic systems, it's a probabilistic, fuzzy, intuitionistic, paraconsistent logic. Probabilistic and fuzzy you probably know what they mean. Paraconsistent means it can hold two inconsistent thoughts in its head at one time without going ape shit.

Intuitionistic pretty much means it builds up all its concepts from experience and observation. But it's still a logic theorem prover, right? So we're trying to deal with symbolic stuff by actual logic theorem proving. We're using neural nets for recognizing patterns in large volumes of data and synthesizing patterns from that, which they have obviously shown themselves to be quite good at.

We're using evolutionary systems, genetic programming type systems, for creativity, because I think mutation and crossover are a good paradigm for generating stuff that leverages what was known before but also goes beyond it. But again, it depends on what the level of representation is at which you're doing the mutating and crossing over. So we're integrating neural, symbolic, and evolutionary methods, not by saying, okay, neural's in this box, symbolic is in that box, evolutionary is in that box, and the boxes are communicating across these channels. What we're doing is making one large distributed knowledge metagraph.

A metagraph is like a graph, but you can have links that span more than two nodes (three, four, five, or 100 nodes), and you can have links pointing to whole subgraphs. A hypergraph is a graph which has n-ary as well as binary links; a metagraph goes beyond that, in that you can have links pointing to links, or links pointing to general subgraphs.
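A minimal sketch of what that means as a data structure (this is an illustration only, not the actual OpenCog Hyperon or Atomspace API): links are n-ary, and a link's targets may be nodes or other links, so links-about-links come for free.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Node:
    label: str

@dataclass(frozen=True)
class Link:
    kind: str
    targets: tuple  # any arity, any mix of Node and Link instances

class MetaGraph:
    """A trivial store for metagraph atoms (nodes and links)."""
    def __init__(self):
        self.atoms = set()

    def add(self, atom):
        self.atoms.add(atom)
        return atom

if __name__ == "__main__":
    g = MetaGraph()
    cat, mammal, animal = (g.add(Node(x)) for x in ("cat", "mammal", "animal"))
    triad = g.add(Link("inheritance-chain", (cat, mammal, animal)))  # a 3-ary link
    g.add(Link("asserted-with-confidence-0.9", (triad,)))            # a link about a link
    print(f"{len(g.atoms)} atoms in the metagraph")
```

The point of the extra generality is exactly what the next paragraph describes: the same structure can hold static knowledge, programs, and knowledge about both.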

So we have a distributed knowledge metagraph, and there's an in-RAM version of the knowledge metagraph also. We represent neural nets, logic engines, and evolutionary learning inside the same distributed knowledge metagraph. So in a sense, you just have this big graph; parts of it represent static knowledge, parts represent active programs. The active parts run by transforming the graph, and the graph also represents the intermediate memory of the algorithms.

So you have this big self-modifying, self-rewriting, self-evolving graph, and in the initial state of that graph, some of it represents neural nets, some of it represents symbolic logic algorithms, some of it represents evolutionary programming, and some of it just represents whole bunches of knowledge, which could be fed in from databases, by knowledge extraction from large language models, or from pattern recognition on sense perception, right? To go deeper than this into what we're doing with Hyperon involves more math than I can go into here, especially without slides. But there's a paper I wrote and posted on arXiv a couple of years ago called The General Theory of General Intelligence. What I go into there is how you take neural learning, probabilistic programming, evolutionary learning, and logic theorem proving, and represent them all in a common way using a sort of math called Galois connections. I use Galois connections to boil these AI algorithms all down to fold and unfold operations over metagraphs. That's probably gibberish to anyone without some visibility into the functional programming theory literature.
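For readers outside that literature, here is a heavily simplified illustration of the two shapes being referred to, fold (catamorphism) and unfold (anamorphism), written over a plain tree rather than a metagraph. The paper's Galois-connection treatment is far more general; this only shows what "collapse a structure to a summary" and "grow a structure from a seed" mean.

```python
def unfold(seed, expand):
    """Grow a tree from a seed: expand(seed) -> (value, list of child seeds)."""
    value, child_seeds = expand(seed)
    return (value, [unfold(s, expand) for s in child_seeds])

def fold(tree, combine):
    """Collapse a tree to a summary: combine(value, list of child results)."""
    value, children = tree
    return combine(value, [fold(c, combine) for c in children])

if __name__ == "__main__":
    # unfold: build a small binary tree containing the numbers 1..7
    tree = unfold(1, lambda n: (n, [2 * n, 2 * n + 1] if n < 4 else []))
    # fold: sum every value in that tree
    print(fold(tree, lambda v, kids: v + sum(kids)))  # 28
```

The claim in the paper is that learning and reasoning algorithms of quite different flavors can be cast as pairs of such fold and unfold operations over the knowledge metagraph.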

But I guess the takeaway is that we're trying to use advanced math to represent neural, symbolic and evolutionary learning as separate views into common underlying mathematical structures, so that they're all different aspects of the same meta-algorithm rather than different things living in separate boxes. Now, there's a connection between this and the artificial life approach to AGI, which I would love to pursue at some point. The connection is: if you were brewing a bunch of artificial life populations on many different machines around the world, wouldn't it be interesting to shortcut evolution and train a smart machine learning system to predict which artificial life populations had promise, and kill the ones that didn't early, right? You couldn't do that too aggressively or you're gonna kill the hopeful monsters, right? But you could certainly identify a lot of things that just aren't promising, and identify some things early as really promising and make multiple clones of them, right? So the idea of a narrow AI, and then eventually an AGI, acting as an evolution master to help brew the artificial life soup seems really quite interesting to me, and maybe could shortcut past the however-many-billion-years-life-has-been-evolving-on-Earth problem, right? Of course, there are also ways more and more advanced AI can help with a neuroscience approach to AGI. Machine learning is already all over neuroscience, so there's no doubt that steps toward AGI could help with inferring things about how the brain works from available neuroscience data.

I still think you may fundamentally need more data than we have now. So those three approaches, I think, are all promising and could work. And finally, I wanna briefly note the role of hardware in all this, just for a couple of minutes, because the hardware side of things is actually what ended up bringing me here to Florida right now. So look at what caused neural nets to take off the way that they did. We were all doing neural nets for decades. They were slow, they were conceptually intriguing, but they weren't doing incredibly amazing things.

The reason they took off so much is pretty much porn and video games, right? It's because GPUs became so popular, and GPUs do matrix multiplication really fast, across many processors concurrently, and they plug into regular PCs. But lo and behold, matrix multiplication is also what you need for running many simulations in areas of science.

And it's also what you need for running neural nets quickly, right? So it turned out that these GPU cards, which were created for video games and video rendering, turned out to be the secret sauce for scaling up neural nets so they could run faster. In 1990, when I was a professor at the University of Nevada, Las Vegas, we had a $10 million Cray Y-MP supercomputer. It could do 1000 things at a time, which was a lot back then, for $10 million.

I remember we programmed it in a sort of parallel Fortran. And now, of course, a garden-variety GPU can do more than 1000 things at a time, and each of those things is done much faster than the Cray did them.

We were playing with neural nets on that supercomputer then; we saw what it could do. But now you have multi-GPU servers and racks and racks and racks of them, right? So clearly the hardware innovation didn't exactly let you take the code we were running in the 80s and 90s and make it work better, but it let you experiment with that code, see what didn't work, tweak it and tweak it and tweak it with fast experimentation, and find something conceptually fairly similar that does amazing stuff. So one question is: what hardware would let you pursue these other three approaches to AGI that I outlined way, way better than has been done historically? For brain simulation, I think it's clear what you need are actual neuromorphic chips, right? Most of what are called neuromorphic chips are not so much, but you can take Izhikevich's chaotic neuron and put it on a chip; there are some research papers on that, though it's not being done at scale. And you could take glia and astrocytes and put our knowledge about them on chips too.

I mean, you could try really hard to make an actual neuromorphic chip to drive large scale brain simulation. On the side of hybrid architectures, I'm actually working on a novel AGI board together with Rachel St. Clair, who introduced me up here, who's a postdoc here and who invited me to come speak here.

So Rachel had designed this hypervector chip, which puts in hardware very fast manipulations of very high-dimensional bit vectors, which gives faster ways to implement neural nets, but also faster ways to do various things with logic engines. I had developed a chip that lets you do pattern matching on graphs very fast by putting the graph in hardware. So we figured we could put her hypervector chip, my graph pattern matching chip, a deep learning GPU and a CPU on the same board, and connect them with a very modern, fast processor interconnect. Maybe if you do that, you'll have a board that does for this sort of hybrid neural-symbolic-evolutionary system something similar to what GPUs did for neural nets; at least it's a plausible hypothesis. So we're going through the simulation process and looking for manufacturers and so forth. But again, that's both a real project, which I think is cool, done through Rachel's company Simuli, and a sort of case in point: we should see a flourishing of more diverse sorts of hardware that bake diverse sorts of AI processing into the hardware.
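For a software-level feel of the kind of hypervector operations being referred to, here is a generic, textbook-style hyperdimensional-computing sketch (binding, bundling, and similarity over random 10,000-bit vectors). It is not the Simuli chip's actual design; the dimension, operators, and example are standard VSA conventions chosen for illustration.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(0)

def random_hv():
    """A random dense binary hypervector."""
    return rng.integers(0, 2, D, dtype=np.uint8)

def bind(a, b):
    """Bind two hypervectors (XOR): the result is dissimilar to both inputs."""
    return a ^ b

def bundle(*vs):
    """Bundle by majority vote: the result stays similar to each input."""
    return (np.sum(vs, axis=0) * 2 > len(vs)).astype(np.uint8)

def similarity(a, b):
    """Normalized agreement: 1.0 = identical, around 0.5 = unrelated."""
    return float(np.mean(a == b))

if __name__ == "__main__":
    color, shape, red, square = (random_hv() for _ in range(4))
    record = bundle(bind(color, red), bind(shape, square))  # {color: red, shape: square}
    recovered = bind(record, color)                          # unbind the "color" role
    print(similarity(recovered, red))     # high, roughly 0.75
    print(similarity(recovered, square))  # roughly 0.5, chance level
```

The appeal for hardware is that all three operations are embarrassingly parallel bit-level work over very wide vectors, which is exactly the kind of thing a dedicated chip can do far faster than a CPU.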

And that's as important as experimenting on the software, because we can see historically that's a lot of what led us to where we are with neural nets today. So, to briefly wrap up: it's a super exciting point in the history of AI. We have systems that do more human-like stuff than ever before. I think they're not AGIs, and cognitive science thinking is very useful for understanding the ways in which they're not intelligent like humans are.

On the other hand, I think many of the same underlying technologies are gonna be useful for building actual AGIs. So while I don't think the ChatGPT-type systems are on the direct path, I think they're indirect evidence that we are probably not that far off from AGI. So I agree with Sam Altman: we could be at human-level AGI in five years or something from now. I also won't be shocked if it's 15 years; I'll be shocked if it's 50 years.

And what's cool is how uncontroversial these statements now are in so many circles, right? It's cool and it's scary, but it's certainly an exciting time to be doing this sort of research. So if you wanna find out more about all this, my website, goertzel.org, has links to a lot of things I'm doing. The website of my company, singularitynet.io, has links to a lot of AGI stuff, as well as information about our blockchain-based platform for running AI decentralized across a global network with no central controller, which I think is critical to the ethical rollout of AGI, but which I didn't even have time to get into today.

And now we all have to go to the beach and have a barbecue. So. That was fascinating.

Thank you so much, Ben. All right, so questions. So yes.

Sometimes we put AGI as a high bar of what we're trying to achieve, but it's probably gonna be pretty uneven. So in what ways will it exceed human intelligence? What are the likely scenarios? And will those areas be identifiable by humans? Well, I think that within, let's say, a couple of years (just to throw a concrete number out there) of getting a true human-level AGI, we will have massively superhuman AGI that will exceed humans in essentially every respect of intelligence. So once we have an AGI that can do computer science and math and computer programming, that can do the stuff that people in this room can do, I see no reason it couldn't upgrade its code base and improve the algorithms underlying itself to make itself, say, 1.2 times as smart as it was initially.

And then you lather, rinse, repeat. And this gentleman here wrote a paper on this some years ago. So, in which ways the first AGI will exceed people is not obvious and could depend on what route you take, right? If it came out of an approach with a symbolic logic engine in it, it's gonna be way better at reasoning than people are. If it came out of a brain simulation, then it might not be better at reasoning than people are, but you could still feed more sensors into it than you can feed into a single human brain, so it would get some added understanding that way.

But no matter how you get there, I think there's a recursive improvement loop you'll enter into, particularly when you consider that you can make a large number of copies of this system, right? You have one smart human, and then, within a reasonable amount of cost, you have a hundred, maybe a thousand smart humans, except they can do direct brain-to-brain sharing of knowledge, right? So it's pretty easy to see how you get that recursive self-improvement. You can't rule out there being some limit, but it seems really outlandish that there'd be a fundamental limit at only 1.5 times human intelligence. To me, that's like saying you'll never make something run more than 1.5 times as fast as a cheetah. That doesn't feel right.
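As a back-of-the-envelope illustration of the compounding being described (the 1.2 factor is Goertzel's illustrative figure from above; the round counts are arbitrary):

```latex
% If each self-improvement round multiplies capability by 1.2,
% then after n rounds the overall factor is 1.2^n:
1.2^{10} \approx 6.2, \qquad 1.2^{25} \approx 95, \qquad 1.2^{50} \approx 9100.
```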

I finally get to ask one question. I'm curious, because we think that AIs have this recursive self-improvement capability, but when we're thinking about AI as a distributed environment, with an ecosystem of different AI services and large language models and all kinds of entities controlled by who knows what (certainly not aligned organizations), why think that the future brings this improvement in the intelligence level? Why not think of the future more in terms of what's been happening in bad scenarios, like the amplification of discontent on Facebook, for example? Well, I think the recursive self-improvement, in one sense, happens on a different level than that, right? You can think about a large knowledge metagraph like we have in OpenCog Hyperon, and we have our own programming language, which is called MeTTa, M-E-T-T-A; Meta Type Talk is the acronym. So we have our MeTTa language, which basically interprets directly into nodes and links.

And actually, to model the semantics of that, we use the mathematics of the infinity groupoid from category theory, which is equivalent to Wolfram's Ruliad that he was talking about. So I thought that was interesting. He uses this Ruliad structure built of all these hypergraphs, and the Ruliad is basically the infinity groupoid from category theory, although Ruliad is a whizzier name.

And we use metagraphs, which are like hypergraphs with a few extra features. So actually, the self-rewriting, self-organizing data structure we're using in Hyperon is highly similar to the self-rewriting data structure he's using to model fundamental physics. The statistics of the networks you see when modeling particles and objects are different from the statistics of the networks you see if you're trying to model common sense knowledge, but there's no contradiction.

Those could be structures on different levels of the same network. So, at that level, the ability to self-modify and self-organize would occur within the distributed network mind of a single OpenCog system or something. Now, if you're talking about across the whole planet, then you're basically looking at two different scenarios: before or after the AGI takes over the world, right? Before the AGI takes over the world, you probably have a highly splintered scenario, like right now, where China is building its own networks and the US is building its own networks.

Russia was trying to before they got distracted murdering people, right? On the other hand, what we're trying to do with SingularityNet is make an open and decentralized infrastructure for deploying AI. Think of things like the internet or Linux, which are everywhere with no central controller, right? If the first AGI is rolled out like that, it becomes like BitTorrent or something, without the illegal copyright aspect. It becomes all over the place.

It's running on machines all over the place, with different nodes, and no one country has a monopoly on it. No one can stop it. But again, in the transitional phase, while we make the transition from narrow AI to AGI, and then from the first inklings of AGI to full-on super AGI, what happens to human society during that transitional phase is a very nasty and difficult question, right? Like, what happens when 90% of human jobs are obsoleted, but the super AGI hasn't yet created a molecular nano-assembler to airdrop onto everyone's farm, right? Then the developed countries will give universal basic income to everyone, and Africa will remain subsistence farmers with no work outsourced to them. Then their kids who can computer hack will hack into the power grid in the West and wreak a lot of havoc. So I think there can be quite difficult scenarios in the interim, yet I'm an optimist on the whole, in that I think once you have an AGI that's several times human-level intelligence, it can just cut through all this.

I mean, then it's much smarter than we are, and it can build its own robot factories to build new robot factories to create smarter AGIs. And paperclip factories, maybe, right? Well, humans become like the squirrels in the national park, right? They carry out their own love lives, they hunt, they fight, they build stuff, and the rangers don't try to interfere with their social lives, right? It's going to be fun talking to you more, Ben. I think we're on the same wavelength. Okay, so there were some earlier questions, starting with Garrett, and then Carlos. So actually, I have a question, because you did bring it up a little bit, and I could talk with you at length about this, but I just want to ask about it, because you brought up A-Life, and also, obviously, the significant hardware limitations around the idea of AGI. What would you say in terms of how important the hardware question is for realizing AGI in the near term? So. I don't think we fundamentally need different hardware to get to human-level AGI.

I mean, unless we're all wrong that classical computing is good enough and you really need quantum computing, which I don't see evidence for, but I can't give it a zero probability. By and large, from what we see in the brain and what we see with AI systems out there now, you don't need radically different hardware. But by the same token, you don't need GPUs to do neural nets either, right? You could do it all on CPUs; it just costs more. The thing is, a couple of orders of magnitude of extra cost and extra power consumption is the sort of practical obstacle that can delay something by decades, but in the scope of history, delaying by decades doesn't matter either, right? But I guess that kind of gets to where I wanted my question to go, right? Which is, there's a difference between achieving the goal of AGI with the hardware we have, despite the outrageous energy cost, and doing it with hardware actually suited to realizing these kinds of whatever-order-of-magnitude-greater intelligent systems. I mean, it seems like with the hardware Rachel and I are working on, you could speed up the operations of systems like OpenCog or biologically realistic neural nets by at least a couple of orders of magnitude.

So if you can speed things up by between 100 and 1,000 times, that's very helpful. Think about it: say a GPT-3 scale model cost $5 million or $10 million to train. Well, if you didn't have GPUs, let's say for the sake of argument it took 100 times longer, right? Then instead of $10 million it's a billion dollars, but these companies have a billion dollars, right? And now OpenAI is getting $29 billion, right? But the thing is, no one wanted to give them $29 billion before they spent the $10 million, right? So the higher cost will slow things down.

Making chips that can speed things up by 100 or 500 times will obviously shave time off, but I don't think it's a really fundamental necessity. If quantum computing were needed, that would be more like a fundamental necessity. You could of course simulate the Schrödinger equation on a classical computer, but then you're getting into many, many orders of magnitude of slowdown, which becomes infeasible. Go for it.

Carter. Yeah, thank you. So, on the three alternatives towards AGI: one of them is one in which artificial general intelligence sort of emerges from simulating the brain, in a kind of individualistic, isolated manner, and the other ones are more like how we evolved, and the conditions or the constraints. No, I mean, I think artificial life is how we evolved.

I think an OpenCog system is very much not how we evolved. So, my question is about the role of social intelligence in the development of general intelligence. Yeah, I think that's somewhat independent of the three avenues that I outlined, in that any one of those avenues could be pursued in a way that's heavy on social intelligence, right? Because instead of making a single brain simulation, you could simulate the brains of a tribe and put them in robots, or in virtual characters in a game world, and let them buzz around and do things. And certainly with OpenCog systems, I didn't go into this, but we're looking at exactly that. We're looking at us
