Some friends and I had a lot of fun with this crackpot idea at the time, and I'd long since all but forgotten about him and his theories. But to my shock, I've recently spotted them making something of a comeback. In particular, prominent figures such as entrepreneur Elon Musk, tycoon/philanthropist Bill Gates, and even physicist Stephen Hawking appear to be lending weight to de Garis' old fears. So perhaps it's worth reviewing why those fears never made much sense in the first place.
To begin with, we need to ask what, exactly, we mean by intelligence. This is a deceptively difficult question--deceptive because most people believe they have a very clear intuitive sense of what intelligence is. The problem is that this intuitive definition, when considered carefully, ends up amounting to nothing more than "behaving like a human." Computer science pioneer Alan Turing even codified this intuition, defining a "Turing test", in which a computer and a human are each conversing with a human tester over a teletype, and the tester is tasked with distinguishing them. If the tester can't identify which is the computer and which is the human in this "imitation game" (the same one that inspired the film title), then the computer is judged to be intelligent.
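(For readers who like things spelled out, here is a minimal sketch of that protocol in Python. The judge, human_reply, and machine_reply objects are hypothetical stand-ins for the three participants, not any real implementation.)

    import random

    def imitation_game(judge, human_reply, machine_reply, rounds=10):
        # Hide the machine behind a randomly chosen channel, "A" or "B",
        # so the tester can't rely on position.
        machine_channel = random.choice(["A", "B"])
        responders = {
            "A": machine_reply if machine_channel == "A" else human_reply,
            "B": human_reply if machine_channel == "A" else machine_reply,
        }
        transcript = []
        for _ in range(rounds):
            question = judge.ask(transcript)            # tester poses a question
            answers = {ch: responders[ch](question)     # both respondents answer
                       for ch in ("A", "B")}
            transcript.append((question, answers))
        guess = judge.identify_machine(transcript)      # "A" or "B"
        return guess == machine_channel                 # True: the machine was caught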
The intuitive appeal of this test hides enormous difficulties. To begin with, why is intelligence dependent on a human tester's skill at discerning it (or lack of skill at spotting an absence of it)? We know that people are inclined to "see" nonexistent intelligence all over the place--in their own pets, for instance--and frequently incapable of detecting it where it does exist--say, in severely disabled, unconscious or mentally ill patients. Clever "chatbots" have been devised that can convince many people that they're intelligent by using various tricks to make their pre-programmed behavior look spontaneous and creative. (For example, they can simulate human-looking spelling or grammatical errors--ironically, making themselves look less intelligent in order to seem more human, and hence intelligent.) While experts can still distinguish the best of them from humans, there is no reason to believe that chatbot technology, like computer chess technology, can't one day reach the point of outdueling the world's greatest experts. But would a chatbot that fools even expert humans one hundred percent of the time--say, by perfectly imitating the conversation style of a flighty, celebrity-obsessed teenage girl--necessarily be intelligent?
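(As an aside, the kind of trick I'm describing can be almost embarrassingly simple. Here is a crude sketch in that vein--the function name and parameter values are invented, and no real chatbot is implied: take a canned reply, add an occasional fake typo and a typing delay, and it suddenly reads as spontaneous.)

    import random
    import time

    def humanize(reply, typo_rate=0.03, base_delay=0.8):
        # Occasionally swap a letter for a random one to fake a typo.
        chars = list(reply)
        for i, c in enumerate(chars):
            if c.isalpha() and random.random() < typo_rate:
                chars[i] = random.choice("abcdefghijklmnopqrstuvwxyz")
        # Pause as if typing, roughly proportional to the reply's length.
        time.sleep(base_delay + 0.05 * len(reply))
        return "".join(chars)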
Let's put that objection aside for a moment, though, and assume that humans can somehow learn to be ideal "Turing testers", infallibly distinguishing human-like intelligences from sub-human ones. If that is our criterion for "intelligence", then what, exactly, distinguishes intelligence from human-ness? If the answer is "nothing", then AI appears to be completely pointless. After all, we know how to create people--the process is widely considered to be a lot more fun than AI research, in fact--so why do we need another method?
Presumably we'd like an answer that's not "nothing"--that is, some set of measurable properties, distinct from "behaves just like a human", that we can use to characterize intelligence. But what could they possibly be? Intelligence test-taking ability, to take one example, clearly doesn't do the trick: superb IQ test-taking machines that are not actually intelligent are easily as within the realm of possibility as superb chess-playing machines that are not actually intelligent. In fact, our intuitive notion of intelligence is so bound up with human-ness that no such set of criteria has ever been proposed that even comes close to matching our intuitive ideas about intelligence. And that's why, more than a half-century after its invention, everyone still talks about Turing's indistinguishable-from-humans criterion, rather than some more objective, property-based one.
Let's imagine, though, that we've somehow overcome that obstacle and come up with such a set of objective criteria that still "smells" like intelligence. Unfortunately, even that's not enough--we must then ask: do our criteria also allow for a non-human, at least theoretically, to surpass humans? And do such superhumans, by our criteria, still seem intuitively intelligent? What if, for instance, one of our criteria is some level of "unpredictability", analogous to human creativity? Would a "superhuman" by our measures then be "hyper-creative"? Or simply so erratic and random as to seem uselessly dysfunctional? And what about the type of pattern recognition that IQ tests often test for? Would a hyperintelligent machine recognize patterns so complex that it ignores simple ones, and thus appear unintelligent to us, rather than brilliant?
But let us suppose once again that we've somehow overcome all these definitional issues, and we've moreover managed to create a whole line of machines, each as unmistakably hyperintelligent as, say,...this man. Kim Ung-yong, the Guinness Book world record holder for IQ, is as close to a superhumanly intelligent being as we've ever seen--he's literally one person (himself) away from being more intelligent than every human being on earth. Yet he has a fairly ordinary job, and values happiness above material success so much that his countrymen have labeled him a "failure" for having accomplished little beyond making a healthy, pleasant, prosperous life for himself. In the de Garis nightmare, hyperintelligent machines are bent on world domination at our expense. Where did they get this motivation? Because they're just like humans, and that's what we'd do if we were hyperintelligent? What about Kim Ung-yong?
Again, the de Garis/Musk/Gates/Hawking scenario appears to derive from a vague intuition based purely on human personality, not human (let alone superhuman) intelligence: just as we treat at least some non-human creatures with subhuman intelligence as disposable commodities, killing them at will, so would a superhumanly intelligent machine treat less intelligent humans. Putting aside the fact that human behavior is far from so uniformly heartless--think of vegan pet-owners--we seem to have once again made a completely unjustified equivalence between "intelligent" and "behaves like a human (towards inferior beings)". Remember, though, that we've explicitly asserted that these superintelligent machines don't necessarily act like humans. (Otherwise, how can they surpass humans in intelligence?) We could therefore just as easily hypothesize that all sufficiently intelligent machines go insane from all that brilliant thinking, or get suicidally depressed and destroy themselves. (Intelligence in humans is, in fact, positively correlated with mental illness, including depression.) Certainly the suspicion that humans might behave badly in such circumstances is by itself no reason at all to suspect the same of our hypothetical future hyperintelligent creations.
Note that I haven't even made the argument here that human control will preclude ruthlessness towards humans--I've simply accepted the common assumption in all the dystopian nightmares that our hyperintelligent machines will somehow cleverly "bypass" their programming and do as they please despite our attempts to prevent them. But it's hard to even make sense of such an assumption, let alone imagine how it could come to pass. We humans, for instance, have only very loose, imprecise "programming safeguards" installed by our millions of years of evolution, and we're also programmed for considerable flexibility and individual variation as part of our survival kit. Yet the vast majority of us are quite incapable of bypassing our programming--by committing suicide, for instance, or abstaining forever from sex--and it's not clear that even those of us who do are actually bypassing anything, as opposed to faithfully executing rare, "malfunctioning" variants of our standard built-in survival-and-reproduction code. So what would it mean for a hyperintelligent machine to bypass what would presumably be a core element of its very being? How would hyperintelligence help it to follow a path different from the one it was programmed to follow, any more than, say, Kim Ung-yong's intelligence could somehow lead him away from his natural path towards happiness and contentment and towards wanton destruction of his inferiors?
Finally, what if the "bypass" is actually a result of flawed human programming--that is, that humans in effect mistakenly program a machine to destroy humanity, rather than the computer deciding to do so itself? In fact, Stanley Kubrick envisioned exactly such a machine in "Dr. Strangelove", and it's even been reported that Kubrick's "doomsday machine" was actually built, by the Soviet Union. But none of that has anything in the slightest to do with intelligence, except in the sense that intelligence, whatever one defines it to be, is probably hard enough to program correctly that bugs are inevitable. The obvious lesson to draw is not, "don't develop superintelligence"--much less, "we will inevitably develop superintelligence, and it will destroy us"--but rather, "make the fail-safe mechanisms on whatever we build a lot simpler and more reliable than Kubrick's Soviets did."
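To make the "simpler and more reliable" point concrete, here is a toy sketch--the action names and the planner are invented purely for illustration, not a proposal for any real system--of a fail-safe that stays trivially auditable no matter how sophisticated the machinery behind it becomes:

    # The whitelist is short, human-written, and independent of the planner.
    ALLOWED_ACTIONS = {"report_status", "adjust_thermostat", "send_alert"}

    def failsafe_gate(proposed_action):
        # Refuse anything outside the whitelist--no exceptions, no cleverness.
        if proposed_action not in ALLOWED_ACTIONS:
            raise PermissionError(f"blocked action: {proposed_action!r}")
        return proposed_action

    def run_one_step(planner):
        # The planner may be arbitrarily clever; the gate doesn't care.
        return failsafe_gate(planner.propose())

The point of keeping the gate this dumb is that its correctness can be checked by inspection, independently of whatever it is guarding.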
There remains one last question: if the hyperintelligent-machines-destroying-humanity scenario is so shot full of logical holes, then why do so many prominent nerds seem to find it so compelling? I can't say for sure, but I have an amateur-psychological hypothesis: for a certain type of successful, self-absorbed, math-and-logic-oriented personality, intelligence is less a talent than a kind of mystical power. They see that they possess more of it than most people, and have experienced the advantages it gives them in certain competitive situations. They have probably used it at times to defeat or even harm others in various (presumably legal) ways. And when they imagine a creature who has a lot more of it than they do--whatever that might mean to them--they grow very, very afraid.
12 comments:
"If the hyperintelligent-machines-destroying-humanity scenario is so shot full of logical holes, then why do so many prominent nerds seem to find it so compelling?"
Why? Because it's fun. It's been a staple of science fiction since forever. Discussing the future of harmless vacuum cleaners gets boring very quickly.
Okay...but I'm also discussing the future of hyperintelligent machines, without saying anything stupid (that I know of). And as far as I can tell, I'm having every bit as much fun doing it as Gates, Musk et al. are. Why can't they just follow my lead, and make fun of others who don't, instead of making fools of themselves?
The word is "wanton". Wonton is a Chinese dish. Otherwise, your analysis is brilliant.
Yikes! Fixed--thanks. (I was rushing a bit to finish when I wrote that part of the post.)
You're stupid as hell.
The proper analogy is not kim:you but rather you:spider. Machines will give your life the same regard you give a spider.
You're an idiot. You acknowledge bugs can't be eliminated, but then say that we just need to make simple reliable failsafes. YOU CAN'T DO THAT.
You're a fucking retard and the only fool here is you. Fuck off.
And don't post on ISTS, you lower the intelligence of the conversation, dumbass.
Not that I'm an AI risk proponent, but re "we know how to create people... Why would we need another method?": hardware limitations. Humans can't be backed up, can't be duplicated, and fail after ~80 years. Some chemical-stimulant-based overclocking is possible but severely limited compared to other possible architectures.
Presumably, escaping these hardware limitations would have a significant impact on an individual's ability to influence their surrounding world....
Ironically, this line of argumentation rebuts itself by dint of the very anthropocentrism it criticizes -- not in the sense that angry metallic skulls will glare at us with evil eyes, but rather in that no reasonable conclusion is possible with our present state of knowledge. It is my belief that superhuman AI, if any, will materialize as an emergent phenomenon, a "phase transition" of sorts whose irreducibility makes it infeasible to foretell or analyze. Such an AI is unlikely to care enough about us even to acknowledge us or perhaps even to be aware of our existence (and vice versa). Over billions of years, the same emergent behavior turned globs of formless matter into us. I suspect it'll be a bit quicker this time.
Well, if we have no way of knowing anything about what superhuman AI will be like, and will in fact quite possibly never even be aware of its existence, then I guess there's no real point in worrying about it. As for anthropocentrism, your description of superhuman AI emerging out of the primordial muck and evolving into something endowed with an ineffable property you call "irreducibility" sounds suspiciously like a spiritual retelling of the origins of the human soul...
We are the muck; the superhuman AI will emerge from a base formed by our technology. Soon we should have a critical technological mass of something a little above the muck, obviating the billions of years of evolution it took to transform the low form of muck into us. I would agree that any worrying about this is largely pointless, because even if we do manage to rise higher and open our eyes, it would likely be with one second left to live.
From https://www.ijcai.org/proceedings/2019/0846.pdf
"The experiment results
showed that existing approaches, although only targeting on
one specific (sub-) category, still perform worse than human
being in general. We argue that IQ test provides an interesting
and meaningful benchmark for the current development of
AI research."
This is from 2019. So your 2015 assertion that "superb IQ test-taking machines that are not actually intelligent are easily as within the realm of possibility as superb chess-playing machines that are not actually intelligent" is not well-supported.
Another quote from your article:
"our intuitive notion of intelligence is so bound up with human-ness that no such set of criteria has ever been proposed that even comes close to matching our intuitive ideas about intelligence."
And a non-human centered definition from Wikipedia:
"it can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context.
So again, you appear to be making assertions without having bothered to do much research. Perhaps the Wikipedia page was different in 2015, but easily accessible examples of substrate-independent definitions of intelligence which work well on both machines and humans were easy to find at the time.
"Again, the de Garis/Musk/Gates/Hawking scenario appears to derive from a vague intuition based purely on human personality, not human (let alone superhuman) intelligence: just as we treat at least some non-human creatures with subhuman intelligence as disposable commodities, killing them at will, so would a superhumanly intelligent machine treat less intelligent humans."
No. The fear of powerful AI is based on instrumental convergence, which you would know if you had bothered to read anything written by AI researchers before writing this post. https://en.wikipedia.org/wiki/Instrumental_convergence. There are proposals for creating non-maximizing agents such as those contained in Stuart Russell's book on the topic, but these techniques are not how we currently build AI.
"I've simply accepted the common assumption in all the dystopian nightmares that our hyperintelligent machines will somehow cleverly 'bypass' their programming and do as they please despite our attempts to prevent them"
This is NOT the assumption of AI researchers, which again, you would know if you had bothered to read just about anything in the field. The concern is not that they will ignore their programming, but rather that they will follow it exactly and such single-minded pursuit will have consequences the designers did not intend. You can already see many examples of such behavior, from the mundane: https://www.youtube.com/watch?v=tlOIHko8ySg
To the slightly amusing: https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist
To the actually concerning: https://techcrunch.com/2019/02/18/youtube-under-fire-for-recommending-videos-of-kids-with-inappropriate-comments/
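To make the shared pattern concrete, here's a toy sketch (made-up titles and numbers, purely illustrative): the optimizer obeys its stated objective to the letter, and the stated objective isn't what the designers actually wanted.

    videos = [
        {"title": "calm documentary",  "click_prob": 0.10, "intended": True},
        {"title": "useful tutorial",   "click_prob": 0.15, "intended": True},
        {"title": "outrage clickbait", "click_prob": 0.60, "intended": False},
    ]

    def recommend(catalog):
        # Maximize the stated objective (expected clicks) and nothing else.
        return max(catalog, key=lambda v: v["click_prob"])

    chosen = recommend(videos)
    print(chosen["title"], chosen["intended"])  # -> outrage clickbait False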
The fact that you are still linking to this poorly researched article six years after writing it suggests you have not learned anything in the interim. How do you expect to understand these "nerdly concerns" if you haven't read anything written by the people who are concerned about it?
I don't know if you're even interested in learning, but at the very least you should read some writing from actual experts in the field rather than Elon Musk's 240-character tweets. I'd recommend Stuart Russell's writing on the subject, particularly his book "Human Compatible".
Thank you for your comments. Some responses:
- I don't think "AI researchers are having trouble producing AIs that pass IQ tests" is quite the refutation you think it is. For one thing, they used to say the same thing about chess and go, so their track record of predicting which AI problems will and won't yield to "dumb" solutions isn't exactly stellar. For another, "hyperintelligent AI is a terrifying threat to humanity, but AI that can do IQ tests is still quite a ways off" isn't exactly a compelling case for the urgency of the problem.
- I agree that objective definitions of "intelligence" that are broad enough to include insect-level cognition are not too difficult to come up with. The problem arises as we approach the human (or even higher mammal) scale of intelligence.
- I don't recall whether I was familiar with the "instrumental convergence" version of AI risk back in 2015. (I believe de Garis' original vision was more in line with the one I described than with the more modern "paper clip maximizer" scenario.) Regardless, I consider a hypothetical hyperintelligent AI figuring out a way to bypass its "Asimov's laws of robotics" programming (which would obviously be included, right?) in pursuit of some also-programmed goal to be just a particular version of the "insufficient failsafe" problem that I mentioned as an existing issue, applicable equally to a hypothetical hyperintelligent AI and to the kind of dumb automation we deploy all over the place today.
To summarize, it's a standard tactic of proponents of crank science to argue that skeptics simply haven't studied the question deeply enough to understand the answers to all their skeptical questions. Unfortunately, there's far more crank science around than I can delve into in depth, so I'm going to have to decline your suggestion that I delve deeply into this particular topic.
First of all, you spend a lot of time talking about the definition of intelligence. The one I like is "being able to accomplish goals in a wide range of circumstances", so the more intelligent you are, the better you are at accomplishing goals in more different situations. I think this is sufficient for any argument about superintelligence. Like, maybe we'll get machines that are "intelligent" by some measure but end up "so erratic and random as to seem uselessly dysfunctional" like you say, but no one is worried about those. We're worried about ones that are extremely good at accomplishing their goals. So what will their goals be?
None of the actual serious arguments about AI say that they will "bypass their programming". As you point out, that doesn't make any sense. What people are worried about is that we don't actually know how to program an AI to want to do any specific thing (have any specific goals), or to avoid doing any specific thing. If it's smarter than you it'll do whatever it wants, and we don't know how to decide what it will want.
You say we need to "make the fail-safe mechanisms on whatever we build a lot simpler and more reliable than Kubrick's Soviets did."
This is the entire problem. Like, if you figured out how to do this, I would abruptly stop being worried about AI. No one knows how to make fail-safe mechanisms on AIs, and people are running forward trying to make AIs anyway. It's in fact harder to make an AI with a fail-safe mechanism than one without it.
You seem to be skeptical that it's possible for an AI to be actually dangerous, and accomplish more than a smart human can? You say for example that the failsafe problem is "applicable equally to a hypothetical hyperintelligent AI and to the kind of dumb automation we deploy all over the place today." I mean obviously unaligned dumb automation isn't dangerous? But superintelligent AIs could destroy the world if they wanted to? The world as it is isn't like, secure. If it could figure out nanotechnology, or hack all our computers (computer security is actually a joke), or engineer worse versions of normal viruses, it could take over the world from there. Or it could do something we haven't thought of, because almost by definition it's better than us at thinking of things.
Wow that ended up longer than I thought it would, I hope you still read comments lol. In conclusion, mr. "I could be wrong" I really *really* hope you're right.