Human-level AI by 2025 | Intelligence Artificielle 54 (ft. Up and Atom)


Lê, I have a question for you. You're an expert in AI, right? When do you think there will be human-level AIs?

Hi Jade! So, usually, videos on my channel are in French. How's your French?

Je ne parle pas français.

Okay. Well, since you're such a great science youtuber, I'll make an exception for you. For today, I'll be speaking in English, even if it means every one of my viewers mocking my French accent.

Thanks, Lê! So, what do you think? Is human-level AI possible? Could an AI ever outperform a human at all tasks, and for lower costs? And if so, when could all of this happen?

These are fascinating questions, and it's extremely tempting to have a strong opinion about such controversial topics. But it's extremely important to first note that predicting the future is a very challenging task, especially given how complex the modern world is and how fast it is changing. In particular, my big advice would be: don't trust your guts, and don't trust my guts either.

Okay, sure, but it's still an important question. I mean, human-level AIs would imply such huge changes to the world. How can we build up a more informed answer?

Well, even if no individual can be said to be reliable, it's often the case that the opinion of a community of experts yields more reliable answers.

Yeah, so I found this article from the MIT Technology Review, and it says that AI experts don't think that human-level AI is a threat to humanity.

Wait, did you see that there was a follow-up to this article? Look at the bottom of the article.

Yeah, that's what's weird. The follow-up article says the exact opposite. So, what's going on here?

I guess that a key takeaway is that there are big disagreements among experts, and that there are even disagreements about what the agreements of the experts are. It's a big mess, and unfortunately, many AI experts are not helpful, and prefer to present their own views on the future of AI, as well as their own views on what the consensus of AI experts is. But there's also a more subtle and more fundamental reason why the two articles disagree. Do you see what I'm hinting at? It's about how to think about risks.

I think I get it. You're saying that in the survey discussed in the first article, AI experts were asked to say when they thought there would be human-level AI, whereas the survey discussed in the second article gave specific dates and asked AI experts for the probabilities that there would be human-level AI by those dates. So it might be the case that nearly all AI experts think that there won't be human-level AI by 2050, but that, at the same time, they think there's around a 10% probability that there will be human-level AI by 2020.

Exactly. And when we discuss risks, it's crucial to think in such probabilistic terms. Indeed, a nuclear plant that is more likely not to explode is not necessarily a reassuring nuclear plant.

Definitely.
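To see how both of those statements can be true at once, here is a minimal Python sketch. The numbers are hypothetical, not taken from any survey: a sufficiently wide distribution of arrival dates can make "no human-level AI by 2050" the more likely outcome while still putting roughly 10% of its mass on very early dates.

```python
# A minimal sketch (hypothetical numbers, not survey data) of how a
# "most likely after 2050" belief can coexist with a ~10% chance of
# human-level AI arriving very soon.
from statistics import NormalDist

# Hypothetical belief about the arrival year of human-level AI:
# median 2060, with a large uncertainty (standard deviation of 30 years).
belief = NormalDist(mu=2060, sigma=30)

print(f"P(arrival by 2050) = {belief.cdf(2050):.2f}")  # below one half: probably not by 2050
print(f"P(arrival by 2020) = {belief.cdf(2020):.2f}")  # and yet around a 10% chance very soon
```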

And even if a nuclear plant has only a 1% probability of exploding, I'd argue that that's still cause for concern, and we should still prepare for the worst.

Yes. And we should not give too much importance to surveys that only ask about the most likely scenarios. It's important to consider surveys that discuss how likely different scenarios are. Fortunately for us, AI experts from the two main 2015 AI conferences were surveyed in that way, and out of 1,634 of them, 352 accepted to answer the questions of the survey. You should check it out, Jade.

Let me see. Wow. This survey asked AI experts to plot on a graph, for any given date, how likely they thought it would be that human-level AIs would have arrived by that date. And here's what the curves of different experts look like. Also, here in red, is an aggregate of the predictions of the experts.

One crucial thing to note when looking at the graphs is how much AI experts disagree. This is important since it means that no expert is representative of all experts. In particular, there is a huge selection bias whenever you give great importance to one single expert.

So, let's look at the aggregate. According to the experts, there's a 50% chance that, by the year 2060, human-level AIs will have arrived. So we probably have a bit of time.

According to experts, human-level AI a few decades away is the most likely scenario. But in terms of risks, we should also care about less likely scenarios. They also say that there's a 10% chance of human-level AIs by 2025.

Lê, that's so soon!

Yes, it is. And 10% is far from negligible. Again, a nuclear plant with a 10% chance of exploding should definitely not be built. Yet human-level AI may be even more explosive than a nuclear plant, as we've discussed at length on my channel, and as Robert Miles does on his. Human-level AI poses an existential risk for mankind. Unless we massively invest in AI safety, human-level AI may well destroy humanity. At least, there is a strong case, made by Nick Bostrom in his book Superintelligence, that we should regard the destruction of mankind as the default scenario if AI becomes superhuman.

Okay, but 10% is the prediction of the experts. How likely is it that they're right about this? I mean, have experts made very erroneous predictions in the past?

They have. They definitely have. For instance, Marvin Minsky famously thought that human-level AI would be there by the 1970s. He had overestimated the speed of progress. But prediction errors are not always due to overestimating future progress. In fact, lately, it seems that the opposite holds more often. For instance, in the 2015 survey, experts predicted that it would take 12 years for AIs to outperform humans at the game of Go. But AlphaGo beat Lee Sedol a few months later. Assuming that they are equally wrong about human-level AI, this suggests that we cannot rule out the possibility that AIs will reach human level within a couple of years.

I see. I guess the unreliability of experts should be regarded as added uncertainty. If the distribution of expert predictions for human-level AI looks like a bell curve, then this added uncertainty should flatten the curve. But look, this means that the probability of extreme events increases.

Yes, indeed. Because experts are unreliable, we should in fact be even more concerned about the possibility of human-level AI in the near future. Instead of a 10% probability of human-level AI by 2025, we might consider a 10% probability of human-level AI by, say, 2022.
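To make these two steps concrete, here is a small Python sketch with hypothetical curves (not the actual 2015 survey responses). It first averages a few experts' probability-by-date curves into an aggregate, then shows that flattening a bell-curve summary of the predictions, to account for expert unreliability, raises the probability assigned to very early arrival.

```python
# A rough numerical sketch (made-up expert curves, not the 2015 survey data).
# Part 1: aggregate several experts' probability-by-date curves.
# Part 2: treat expert unreliability as added uncertainty, i.e. flatten a
# bell-curve summary of the predictions, and compare the probability of the
# extreme "human-level AI very soon" scenario before and after flattening.
from statistics import NormalDist

# Three hypothetical experts; each curve gives P(human-level AI by `year`).
experts = [NormalDist(2045, 15), NormalDist(2070, 30), NormalDist(2120, 50)]

def aggregate(year: int) -> float:
    """Average of the experts' probabilities that human-level AI exists by `year`."""
    return sum(e.cdf(year) for e in experts) / len(experts)

print(f"Aggregate P(by 2060) = {aggregate(2060):.2f}")
print(f"Aggregate P(by 2025) = {aggregate(2025):.2f}")

# Flattening a bell curve of predicted arrival dates (same centre, larger
# spread) increases the probability assigned to very early arrival.
face_value = NormalDist(mu=2065, sigma=30)
flattened = NormalDist(mu=2065, sigma=45)
print(f"P(by 2025), taken at face value    = {face_value.cdf(2025):.3f}")
print(f"P(by 2025), with added uncertainty = {flattened.cdf(2025):.3f}")
```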
Okay, but this is very hard to imagine. To reach human level, wouldn't AIs need some kind of real intelligence, or consciousness? And what about energy consumption? The human brain is very efficient. Can computers really be as efficient as the human brain?

There are definitely huge hurdles on the way to human-level AI, but research is progressing at an impressive pace, both in terms of hardware and in terms of software.

In terms of hardware, for instance, the so-called Koomey's law suggests that the energy consumption of a single computation is dropping exponentially, as it is divided by two every 18 months. And innovations in parallelized, if not decentralized, computing and data storage may allow this trend to carry on.

So then, what about the software side? Don't we need some kind of major breakthrough?

Probably. It's hard to tell. To understand the software challenge, it's useful to get back to Alan Turing's 1950 paper. In this paper, he suggested that a large part of the complexity of the human brain is critical to reach human-level intelligence. Now, the human brain has around a million billion synapses. Even if only one in a thousand of them is essential for human-level intelligence, this suggests that an AI would need to be of size at least one terabyte to reach human level (a million billion is about 10^15; one in a thousand of these, if each synapse takes on the order of a byte to describe, is about 10^12 bytes, that is, about a terabyte). Today, the size of large AIs is usually of the order of gigabytes, and the largest AIs seem to be slowly reaching one terabyte. This suggests that, in terms of complexity, we may soon reach what's necessary for human-level intelligence.

Still, it's hard to imagine that this is sufficient to reach human-level AI. I mean, don't we still need to train those AIs?

Yes, indeed. But much of the necessary data is arguably already out there. To reach human level, a large enough and well-designed AI could read Wikipedia again and again, and watch science YouTube videos.

It's one thing to be exposed to this data, but could an AI actually learn from it? Could it lead to real intelligence?

Again, it's hard to tell. But it's important to note that it is hard to tell. It seems that many people believe that, since AIs are just doing mechanical operations, they cannot compete with humans. But the thing is that long mechanical operations are actually full of surprises. They can lead to results that we humans cannot predict. Not because the operations are complicated; in fact, each operation is extremely simple. However, AIs perform a huge number of such simple operations. This is something that our brains cannot do, and this is why our brains cannot predict the outcome of AIs' computations. Which means that they will lead to results that surprise us. Alan Turing put it brilliantly. He wrote: "The view that machines cannot give rise to surprises is due, I believe, to a fallacy to which philosophers and mathematicians are particularly subject. This is the assumption that as soon as a fact is presented to a mind all consequences of that fact spring into the mind simultaneously with it. It is a very useful assumption under many circumstances, but one too easily forgets that it is false. A natural consequence of doing so is that one then assumes that there is no virtue in the mere working out of consequences from data and general principles."

Exactly. And thus, we should not be overconfident about what we think that AIs can, or cannot, do. Their long computations will likely surprise us.
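Turing's point can be demonstrated with any long chain of trivial operations. As an illustration only (this particular rule is not discussed in the video), here is a tiny Python example where each step is mechanical and simple, yet the overall outcome for nearby inputs is hard to guess without actually running the computation.

```python
# A tiny illustration of Turing's point: each step of this process is a
# trivially simple operation, yet the overall behaviour is hard to predict
# without carrying out the computation. (The Collatz rule is only a stand-in
# example, not something from the video.)
def collatz_steps(n: int) -> int:
    """Number of steps of the rule 'halve if even, else 3n + 1' to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Neighbouring starting points give wildly different results.
for start in range(25, 31):
    print(start, "->", collatz_steps(start))
```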
Well, in fact, if you've been following the recent developments of AI, you should probably have been surprised already, for example by NVIDIA's photorealistic images, YouTube's automated captioning, and Google Duplex's phone calls.

Hi, I'd like to reserve a table for Wednesday, the 7th.

For seven people?

It's for four people.

Four people. When? Today? Tonight?

Wednesday, at 6 p.m.

Oh, actually, we reserve for upwards of five people. For four people, you can come.

How long is the wait usually to be seated?

For when? Tomorrow?

For next Wednesday, the 7th.

Oh no, it's not too busy. You can come, okay?

Oh, I gotcha. Thanks.

Again, that was a real call. There are many of these examples where the call doesn't quite go as expected, but the assistant understands the context and the nuance. It knew to ask about wait times in this case, and handled the interaction gracefully.

Yes. We should be prepared for potentially surprising developments in AI that would give them human-level reasoning abilities. In particular, given all we've discussed, I would claim that there is a strong case for saying that human-level AI by 2025 should be given a probability larger than 1%. And, come to think of it, 1% is huge, especially given how disruptive human-level AI would be. This is why I would argue that we should really take the threat of human-level AI seriously, and invest massively in AI safety.

Hey, I hope you've enjoyed this video. A big thank you to Jade from Up and Atom. I highly recommend her YouTube channel; it's one of the greatest things out there. Of course, her videos about physics, which is her background, are really, really good, but I strongly recommend even more her videos about computer science ideas. In particular, there's one on the singularity, and there are also videos about machine learning, overfitting, and the optimal stopping problem. I highly recommend these videos. Also, we just did a video on Jade's channel. It's about a pragmatic solution for implementing value loading, by simulating a virtual democracy of extrapolated versions of ourselves, for self-driving cars faced with the trolley problem. Before coming back to the comments on the previous video, I want to highlight again the article that I've written about how to handle human-level AI, and in particular how to load values into it, which is a very, very, very difficult problem. I strongly recommend anyone who's even a bit interested in these ideas to go and have a read of this paper, as I think it is the most important thing that I will ever have written in my life.

[The remainder of the outro, replying to comments on the previous video, is in French; the automatic captions for this segment are unintelligible.]

The legendary mathematician John von Neumann was the first to use the term "singularity" in this sense, when he said that "the ever accelerating progress of technology gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." 70% of the AI experts agree that this is at least a moderately important problem. And how much do the AI experts think that society should prioritize AI safety research? Well, 48% of them think we should prioritize it more than we currently do, and only 11% think we should prioritize it less. So there we are: AI experts are very unclear about what the future holds, but they think that catastrophic risks are possible and that this is an important problem. So we need to do more AI safety research.

2018-12-24
