Date: 16 September 1993, 12:53:15 EDT
From: David M. Chess                                 CHESS    at YKTVMV
To:   sf-reviews at presto.ig.com
Subject:  Review of Harry Harrison and Marvin Minsky's "The Turing Option"

%A Harrison, Harry
%A Minsky, Marvin
%T The Turing Option
%I Warner Books; Questar Science Fiction
%C New York
%D October 1993 (copyright 1992)
%G ISBN 0-446-36496-7
%P 409 pp.
%O paperback, US$5.99

Note : Some mild spoilers in the following, although the book isn't
   particularly a suspense novel, and IMHO knowing the outcome
   will not materially reduce the reader's enjoyment of the book.
   (I can't stick in a control-L spoiler break due to my environment,
   but the moderators certainly can if they want to.)

Executive summary : Marvin Minsky's "Society of Mind" is must reading
   for anyone with an interest in AI.  Harry Harrison, while not my
   favorite sf author, has done some good stuff, and is certainly
   respected in the field.  From the combination I expected "The
   Turing Option" to be a really well-written novel with interesting
   plotting, good science, and neat new ideas.  I was disappointed.

Setting : 2023 to 2026, North America.  Thirty years in the future, but
   it feels a lot like 1993.  There have been some significant advances
   in science, people carry gigabytes in their pockets and there's a
   little nanotech around, but basically people, nations, etc., are still
   the same.  Vinge's singularity is nowhere in sight.

Premise : As it is about to be demo'd for the first time, a new advanced
   AI system is stolen, and its inventor shot and left for dead.  The
   investigation of the crime makes no progress.  The inventor has had
   a bullet through the brain, severing critical connections between the
   various parts of his thinking gear.  Using state-of-the-art
   nanotech and brain science, and some technology developed by the
   patient himself, many of the connections are restored.  He ends up
   with his memories intact up to about the age of 14, and sets out
   to re-invent the AI that was stolen, and catch the bad guys.  He is
   hampered by the need for intense security to keep the bad guys from
   coming back and finishing him off.

Story : The story itself is reasonably well-done.  The pacing is fast
   enough, the plotline simple enough, and the underlying concepts
   interesting enough to get me from the start to the end.

Characterization : Weak to non-existent.  The premise has the potential
   for at least two major character-developments: Brian (the inventor)
   needs to go from almost-dead to 14-year-old-in-24-year-old-body to
   grownup, and the machine intelligences that he creates need to go
   from non-working prototype to human-level (or beyond) minds.  But
   the authors don't show us either of these things.  Brian goes from
   almost-dead, through a couple of dream-memories of his childhood,
   and then ZONK he's a supposedly-14-year-old who is in fact completely
   rational, has no apparent internal conflicts or confusions, is able
   to function completely as an adult, and doesn't change noticeably
   throughout the rest of the book.   The AIs go from not working,
   through one amusing almost-working demo, and then ZONK they're
   there, as flawless super-human-type machine intelligences
   that can learn a new language or a new skill in minutes, are
   politer than Brian, and call up phone-sex lines to practice their
   language skills and study human sexual culture.  Oh, well.

   The minor characters are also flat.  The Bad General is a cardboard
   cutout Bad General, the main bad guy who arranged for the original
   theft and almost-killing of Brian is barely seen at all, and has
   no plausible motivation when he is, and so on.  Good sf novels can
   of course get away with little or no characterization if the ideas
   or storytelling are neat enough.  Read on...

Storytelling : "The Society of Mind" is a marvelously-told book, made
   up of one-page nuggets of clearly-expressed stuff that link together
   and point to each other in compelling ways.  Harry Harrison's books
   generally have a certain touch of wry humor that gives them a
   distinctive flavor.  This book is neither of those things; I kept
   looking for an "as told to Biff Jones" somewhere on the copyright
   page.  It's done in the uninspired high-school-English-class prose
   of your average written-for-paperback hack novel.  Many important
   actions are completely undermotivated: Brian at one point decides
   that he doesn't *want* to get back all his disconnected memories
   and become his previous 24-year-old self, because of some notes he
   finds that his previous self wrote about "Zenome Therapy".  This
   seems like it could be a major plot element: Brian's attempt to
   re-invent his AI without at the same time awakening too much of the
   former self that's still in his brain somewhere, and falling into
   whatever "Zenome therapy" is again.  But that doesn't happen;
   "Zenome therapy" itself is mentioned exactly once more in the book,
   on the same page, and no conflict between the current and former
   Brians is ever brought in again; the issue of his missing ten
   years of memories vanishes about 150 pages in and never reappears
   in any significant way.  (With the exception of the bizarre last
   page of the book, in which Brian suddenly declares that the
   Bad Guys really won, and killed his humanity, and he's really
   just a Machine Intelligence himself, cry, whine, moan.  This is
   also completely unmotivated.)

   In another key scene, Brian, following up a clue that his AI
   found hidden within the programming of an AI recovered from the
   bad guys, walks into what from the reader's point of view has
   at least a 50% chance of being a deathtrap.  But, as he apparently
   knew all along (perhaps the authors told him), the message was
   planted by a good guy who was just working for the bad guys for
   a while, and really has Brian's best interests at heart.

Editing : There are a few nitty oddnesses in the book that suggest
   hasty or scanty editing.  The awkward variant "orientated" occurs at least
   a couple of times, as does reference to "a circuitry" in a context
   that clearly means "a subroutine".  There is also evidence of some
   uncareful shortening; we are shown a demo of an AI that doesn't
   work because of too much inhibition, but the following dialogue
   clearly suggests that there was also a demo of one that had not
   enough inhibition, but we missed seeing that somehow.  (It's
   possible that some of the undermotivated actions I moan about
   above are also due to overhasty editing-out of motivating or
   explanatory scenes.)

Science : The science in the book should have been its strong point: a
   compelling current theory coupled to an experienced sf writer's
   ability to extrapolate.  It wasn't.  The basic idea of mind as a
   quasi-hierarchy of agents
   that each do a simple job and are overseen by other agents, and so on,
   played a key role in the plot, as Brian's agents are re-connected in
   order to restore his mind.  But the concept struck me as *just* a
   relatively isolated plot element.  Except for one incident in his
   youth, the idea is never used to show Brian, or the AIs he creates,
   in any interesting lights.  The idea itself is not developed in any
   speculative ways; you'll get more fiction-like speculation in
   "The Society of Mind" than in this novel.
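
   (An aside for readers who haven't read Minsky: the flavor of the
   agents idea, little workers that each do one simple job while
   higher-level agents do nothing but switch the lower ones on and
   off, can be sketched in a few lines of Python.  This is purely my
   own toy illustration, not code from the novel or from "The Society
   of Mind"; the agent names are invented.)

      # Toy sketch of a Minsky-style agent hierarchy (my own invention,
      # not anything from the book).  Each agent either does one small
      # job itself or merely wakes up its subordinate agents in order.
      class Agent:
          def __init__(self, name, action=None, subagents=None):
              self.name = name
              self.action = action              # the one simple job (leaf agents)
              self.subagents = subagents or []  # subordinates (manager agents)

          def run(self):
              if self.action:                   # a leaf agent just does its job
                  print(self.name + ": " + self.action)
              for sub in self.subagents:        # a manager only invokes others
                  sub.run()

      # A BUILDER-like hierarchy: the top agent knows nothing about blocks;
      # it only knows which subordinates to activate, and in what order.
      find = Agent("FIND", "locate a block")
      get  = Agent("GET",  "pick the block up")
      put  = Agent("PUT",  "place the block on the tower")
      builder = Agent("BUILDER", subagents=[find, get, put])
      builder.run()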

   There are also a painful number of science problems outside the
   main scientific thrust of the book.  At one point Brian discovers
   that he can access the memory banks of the computers that were
   implanted in his brain as part of his operation.  The surgeon tests
   this by uploading the contents of a scientific article into the
   CPUs in his head, and he can then "read" the article word-for-word.
   No mechanism is suggested by which this might work; it's the usual
   bad-sf assumption that all information-processors speak the same
   language.  I cannot myself imagine *any* mechanism by which the
   neurons in Brian's brain could have learned ASCII, and I would have
   appreciated at least some hand-waving towards the question.  At
   another point, an Expert Systems guru who has been hired to
   assist Brian decides that she can help solve the original crime by
   writing an Expert System to consider all the information, and
   suggest answers.  She does, and it provides great help in solving
   the case.  Gee, funny no one thought of doing that before!  Seems
   clear that if ES technology were at that level, it would be a
   routine part of criminal investigation (the book does not suggest
   that she has made some great breakthrough in ES in order to do it).

   The last part of the book suffers from the Transporter Problem.
   Gene Roddenberry (I think it was) once commented that the writers
   on Star Trek had problems coming up with situations that the
   Transporter couldn't solve.  The AIs that are developed towards the
   end of this book have a similar effect: in any physical or
   intellectual activity, they are better and faster than humans.
   They can teach themselves languages and skills almost instantly,
   do many things at once, have micromanipulators that let them juggle
   individual molecules, can listen in on radio and telephone traffic
   apparently by magic (another bad-sf premise: all machines speak
   the same language), and so on.  The main Bad Guy is found at the
   end of the book because someone happens to see him walking down the
   street.  Why didn't the magic AIs just scan through all the world's
   photographic databases looking for his face, or whatever?  Every time
   the humans have some problem towards the end of the book, the
   obvious right thing to do would just be to ask an AI.  But they
   only do that when it fits the plot.

   This leads to my main tech-related frustration with the book.  Mankind
   has now developed intelligent systems that are faster, smarter, tougher,
   and more reliable than he is.  What will this lead to?  In the real
   world, I think it would obviously lead to an unimaginable shakeup of
   every facet of world culture.  There would be riots, religious
   denunciations, acts of sabotage and rebellion, the potential for
   massive (human) unemployment, the end of nations, breakdown of many
   cultural institutions, etc., etc.  Humanity would face a huge
   challenge in trying to come to an accommodation with the machine
   intelligences, without being wiped out, pushed aside as an
   irrelevant inferior species, or ending up in a disastrous series of
   wars to eliminate the new competitors.  I'd love to see a well-written
   novel addressing these things.  But in "The Turing Option", the only
   people who can think of any uses for the AIs are Brian, the AIs
   themselves (sorry, "MIs"; they prefer to be called "Machine
   Intelligences"), and the bad guys who stole the original AI.  And
   what are the uses they come up with?  The bad guys produce a
   product called Bug-Off, which is a robot with a dumbed-down AI
   that picks bugs off of plants.  Brian goes beyond this, pointing
   out that MIs will also be really good at planting and harvesting
   crops, and hey maybe even transporting them to market.  And
   he thinks they'll make really good household servants!  What
   intellectual daring.

   The bizarre final scene of the book suffers from the Transporter
   Problem acutely.  Without giving it away entirely, it's your
   typical "brave good guys walk in to arrest the bad guy, but
   it turns out he unfairly has a GUN, and a tense confrontation
   ensues" scene.  The problem is that one of the MIs is there.
   To be consistent with the MI abilities in the rest of the book,
   he should have simply picked up a stone with the manipulators
   in his left pseudo-foot and flung it at supersonic speed at the
   bad guy, knocking the gun from his hand and engraving "I am a
   Bad Person" on his forehead on the rebound.  Instead, the MI
   *grapples* with the bad guy to save Brian's life, and the gun
   goes off and you get to guess who got shot, etc.  Shortly after
   that Brian begins whining about how the bad guys really won
   after all, for no apparent reason (see above).

Recommendation : I see I've waxed pretty negative here.  I don't think
   it's a great book, nor that it'll be remembered long (ironically, the
   back cover says that it "ranks with Michael Crichton's Jurassic Park";
   I tend to agree: I think they're both ephemeral).  I wouldn't recommend
   it to the general reader, or the very picky sf reader.  On the other hand,
   if you enjoy 400-page quick reads, and are interested in having a
   reasonably complete collection of current AI-related sf, it's probably
   worth the six bucks.


- -- -
David M. Chess                /  "In the long run, life depends less on
High Integrity Computing Lab  /    an abundant supply of energy than on
IBM Watson Research           /    a good signal-to-noise ratio." - Dyson