Thursday, February 19, 2015

About two decades ago, I happened to attend a conference also attended by one Hugo de Garis, an artificial intelligence researcher with some rather eccentric ideas about the future of his field.  To put it simply, he believed, and apparently still believes, that humankind is doomed to be destroyed--that is, physically exterminated--by its own hyperintelligent creations.  Once artificially intelligent machines have evolved to the point where they are enormously more intelligent than humans and capable of surviving and advancing without us, he reasons, we will become a superfluous irritant to them, and they will easily dispose of us.

Some friends and I had a lot of fun with this crackpot idea at the time, and I'd long since all but forgotten about him and his theories.  But to my shock, I've recently spotted them making something of a comeback.  In particular, prominent figures such as entrepreneur Elon Musk, tycoon/philanthropist Bill Gates, and even physicist Stephen Hawking appear to be lending weight to de Garis' old fears.  So perhaps it's worth reviewing some of the fundamental problems with these predictions.

To begin with, we need to ask what, exactly, we mean by intelligence.  This is a deceptively difficult question--deceptive because most people believe they have a very clear intuitive sense of what intelligence is.  The problem is that this intuitive definition, when considered carefully, ends up amounting to nothing more than "behaving like a human."  Computer science pioneer Alan Turing even codified this intuition, defining a "Turing test", in which a computer and a human are each conversing with a human tester over a teletype, and the tester is tasked with distinguishing them.  If the tester can't identify which is the computer and which is the human in this "imitation game" (the same one that inspired the film title), then the computer is judged to be intelligent.

The intuitive appeal of this test hides enormous difficulties.  To begin with, why is intelligence dependent on a human tester's skill at discerning it (or lack of skill at spotting an absence of it)?  We know that people are inclined to "see" nonexistent intelligence all over the place--in their own pets, for instance--and frequently incapable of detecting it where it does exist--say, in severely disabled, unconscious or mentally ill patients.  Clever "chatbots" have been devised that can convince many people that they're intelligent by using various tricks to make their pre-programmed behavior look spontaneous and creative.  (For example, they can simulate human-looking spelling or grammatical errors--ironically, making themselves look less intelligent in order to seem more human, and hence intelligent.)  While experts can still distinguish the best of them from humans, there is no reason to believe that chatbot technology, like computer chess technology, can't one day reach the point of outdueling the world's greatest experts.  But would a chatbot that fools even expert humans one hundred percent of the time--say, by perfectly imitating the conversation style of a flighty, celebrity-obsessed teenage girl--necessarily be intelligent?

Let's put that objection aside for a moment, though, and assume that humans can somehow learn to be ideal "Turing testers", infallibly distinguishing human-like intelligences from sub-human ones.  If that is our criterion for "intelligence", then what, exactly, distinguishes intelligence from human-ness?  If the answer is "nothing", then AI appears to be completely pointless.  After all, we know how to create people--the process is widely considered to be a lot more fun than AI research, in fact--so why do we need another method? 

Presumably we'd like an answer that's not "nothing"--that is, some set of measurable properties, distinct from "behaves just like a human", that we can use to characterize intelligence.  But what could they possibly be?  Intelligence test-taking ability, to take one example, clearly doesn't do the trick:  superb IQ test-taking machines that are not actually intelligent are as easily within the realm of possibility as superb chess-playing machines that are not actually intelligent.  In fact, our intuitive notion of intelligence is so bound up with human-ness that no such set of criteria has ever been proposed that even comes close to matching our intuitive ideas about intelligence.  And that's why, more than a half-century after its invention, everyone still talks about Turing's indistinguishable-from-humans criterion, rather than some more objective, property-based one.

Let's imagine, though, that we've somehow overcome that obstacle and come up with such a set of objective criteria that still "smells" like intelligence.  Unfortunately, even that's not enough--we must then ask:  do our criteria also allow for a non-human, at least theoretically, to surpass humans?  And do such superhumans, by our criteria, still seem intuitively intelligent?  What if, for instance, one of our criteria is some level of "unpredictability", analogous to human creativity?  Would a "superhuman" by our measures then be "hyper-creative"?  Or simply so erratic and random as to seem uselessly dysfunctional?  And what about the type of pattern recognition that IQ tests often measure?  Would a hyperintelligent machine recognize patterns so complex that it ignores simple ones, and thus appear unintelligent to us, rather than brilliant?

But let us suppose once again that we've somehow overcome all these definitional issues, and we've moreover managed to create a whole line of machines, each as unmistakably hyperintelligent as, say, Kim Ung-yong.  The Guinness Book world record holder for IQ, Kim is as close to a superhumanly intelligent being as we've ever seen--he's literally one person (himself) away from being more intelligent than every human being on earth.  Yet he has a fairly ordinary job, and values happiness above material success so much that his countrymen have labeled him a "failure" for having accomplished little beyond making a healthy, pleasant, prosperous life for himself.  In the de Garis nightmare, hyperintelligent machines are bent on world domination at our expense.  Where did they get this motivation?  Because they're just like humans, and that's what we'd do if we were hyperintelligent?  What about Kim Ung-yong?

Again, the de Garis/Musk/Gates/Hawking scenario appears to derive from a vague intuition based purely on human personality, not human (let alone superhuman) intelligence:  just as we treat at least some non-human creatures with subhuman intelligence as disposable commodities, killing them at will, so would a superhumanly intelligent machine treat less intelligent humans.  Putting aside the fact that human behavior is far from so uniformly heartless--think of vegan pet-owners--we seem to have once again made a completely unjustified equivalence between "intelligent" and "behaves like a human (towards inferior beings)".  Remember, though, that we've explicitly asserted that these superintelligent machines don't necessarily act like humans.  (Otherwise, how can they surpass humans in intelligence?)  We could therefore just as easily hypothesize that all sufficiently intelligent machines go insane from all that brilliant thinking, or get suicidally depressed and destroy themselves.  (Intelligence in humans is, in fact, positively correlated with mental illness, including depression.)  Certainly the suspicion that humans might behave badly in such circumstances is by itself no reason at all to suspect the same of our hypothetical future hyperintelligent creations.

Note that I haven't even made the argument here that human control will preclude ruthlessness towards humans--I've simply accepted the common assumption in all the dystopian nightmares that our hyperintelligent machines will somehow cleverly "bypass" their programming and do as they please despite our attempts to prevent them.  But it's hard to even make sense of such an assumption, let alone imagine how it could come to pass.  We humans, for instance, have only very loose, imprecise "programming safeguards" installed by our millions of years of evolution, and we're also programmed for considerable flexibility and individual variation as part of our survival kit.  Yet the vast majority of us are quite incapable of bypassing our programming--by committing suicide, for instance, or abstaining forever from sex--and it's not clear that even those of us who do are actually bypassing anything, as opposed to faithfully executing rare, "malfunctioning" variants of our standard built-in survival-and-reproduction code.  So what would it mean for a hyperintelligent machine to bypass what would presumably be a core element of its very being?  How would hyperintelligence help it to follow a path different from the one it was programmed to follow, any more than, say, Kim Ung-yong's intelligence could somehow lead him away from his natural path towards happiness and contentment and towards wanton destruction of his inferiors?

Finally, what if the "bypass" is actually a result of flawed human programming--that is, that humans in effect mistakenly program a machine to destroy humanity, rather than the computer deciding to do so itself?  In fact, Stanley Kubrick envisioned exactly such a machine in "Dr. Strangelove", and it's even been reported that Kubrick's "doomsday machine" was actually built, by the Soviet Union.  But none of that has anything in the slightest to do with intelligence, except in the sense that intelligence, whatever one defines it to be, is probably hard enough to program correctly that bugs are inevitable.  The obvious lesson to draw is not, "don't develop superintelligence"--much less, "we will inevitably develop superintelligence, and it will destroy us"--but rather, "make the fail-safe mechanisms on whatever we build a lot simpler and more reliable than Kubrick's Soviets did."

There remains one last question: if the hyperintelligent-machines-destroying-humanity scenario is so shot full of logical holes, then why do so many prominent nerds seem to find it so compelling?  I can't say for sure, but I have an amateur-psychological hypothesis:  for a certain type of successful, self-absorbed, math-and-logic-oriented personality, intelligence is less a talent than a kind of mystical power.  They see that they possess more of it than most people, and have experienced the advantages it gives them in certain competitive situations.  They have probably used it at times to defeat or even harm others in various (presumably legal) ways.  And when they imagine a creature who has a lot more of it than they do--whatever that might mean to them--they grow very, very afraid.

Saturday, February 07, 2015

A shocking development has rocked the world of journalism:  One of the nation's foremost practitioners of the art of reading news off a teleprompter while conveying the misleading impression of being an actual experienced, trustworthy professional journalist has been discovered to have actually misled people about his experience, trustworthiness and journalistic professionalism.  It's as yet unclear whether he'll be able to return to his job of reading the news while conveying his usual misleading impression of experienced, trustworthy journalistic professionalism, or whether his having been discovered to have actually misled people has done career-ending damage to his ability to continue to convey that same misleading impression.

Tuesday, February 03, 2015

The current brouhaha over vaccination presents a fascinating case study illustrating three frequent conflicts in American politics:  between democracy and individual liberty, between science and public policy, and between individual and collective goods.  While it should go without saying that vaccination is a vital and powerfully effective disease-fighting tool, the issue of how a democracy should deal with anti-vaccine fanatics isn't quite so cut-and-dried, and has exposed some of the idiosyncrasies of American political debate:
  •  Libertarianism and distrust of democracy:   Most of the arguments I've seen so far amount to hysterical rants about the evil and stupidity of "anti-vaxxers", coupled with nasty partisan accusations of the "other side's" pronounced anti-vaccine tendencies.  (In fact, there are loud anti-vaccine fringes on both the left and the right, to which mainstream politicians on both sides have occasionally pandered.)  Sprinkled in amongst these diatribes are indignant anti-vaccine pronouncements steeped in libertarian self-righteousness, larded with references to alleged scientific proof of the dire consequences of vaccination,  and finished off with apoplectic accusations of "poisoning children".  This acrimony is typical of debates--such as abortion--where both sides deeply distrust the democratic process to provide the "right" answer, and prefer instead to gin up enormous volumes of rhetorical fury in order to provide preparatory justification for whatever possibly extra-democratic tactics might be necessary to win the day for their own side.
  • Science as authority, citizen pursuit or villain:  Some of the most embarrassing arguments are the ones in which the participants attempt to invoke science on behalf of their claims.  The anti-vaccine efforts are laughable, of course, citing vaguely-referenced studies alleging all manner of terrible harms, while simultaneously denouncing the medical establishment for covering up the awful truth.  But the pro-vaccine ones are rarely better--they either cite a scientific consensus whose strength and validity the speakers are completely unqualified to assess, or else recite potted versions of the actual science that they don't really understand.  The truth is that neither side is really competent to discuss, much less judge, the science behind this issue.  And it's a good bet that on one or more other issues--second-hand smoke, say, or genetically modified organisms, or global climate change, or evolution--any given pair of disputants would find themselves adopting each other's previous approach to the relevant scientific evidence.
  • Allergy to collective burdens:   Vaccination, like virtually all public policy options, imposes risks on some people and confers benefits on others.  It therefore produces "winners" and "losers" either way--the losers, in this case, being those who suffer from either a vaccine reaction or the disease itself.  As it turns out, there's a "free-riding" opportunity here:  if only a very few people refuse vaccinations, then they are effectively protected by "herd immunity", while avoiding the risk of an adverse reaction to the vaccine.  This risk is very low, and is also insured by a strict tort liability system incorporated into the funding of vaccine production.  But Americans are generally very reluctant to enter into this sort of collective assumption of even very small individual risks.  That's why the vehicle for the insurance system is tort law, rather than normal insurance--it gives individuals the sense that they are autonomously receiving compensation for a wrong, rather than passively accepting societal care following a misfortune.  And that's why enough Americans still choose to free-ride that the herd immunity on which the free-riders depend is in some cases quite possibly on the verge of collapsing.
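Incidentally, the margin the free-riders are gambling on can be put in rough numbers.  A standard epidemiological rule of thumb holds that sustained transmission stops once the immune fraction of the population exceeds 1 - 1/R0, where R0 is the disease's basic reproduction number.  Here's a minimal sketch of that arithmetic (the function name is my own, not from any library):

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune to block
    sustained spread, per the classic 1 - 1/R0 rule of thumb."""
    return 1.0 - 1.0 / r0

# Measles is extremely contagious: R0 is commonly estimated at 12-18.
low = herd_immunity_threshold(12)
high = herd_immunity_threshold(18)
print(f"measles threshold: {low:.0%} - {high:.0%}")
# prints: measles threshold: 92% - 94%
```

For measles, then, the threshold works out to roughly 92 to 94 percent of the population--which is why even a small fraction of free-riders can push a community below it.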
In an earlier time, when circumstances were dire (thousands of children dying of infectious diseases), these problems were overcome of necessity.  And one certainly hopes that they will be overcome once again, before any preventable diseases rage completely out of control.  But in the meantime, it sure would be nice if more Americans discussed the issue as if they hoped to persuade their fellow voting citizens, rather than bulldoze them; admitted that they don't really understand science and are simply using their common sense as best they can; and spoke more of their concern for each other's well-being than of their rights and entitlements to do as they please and still have others look out for them.