I, ROBOT

The opening introduction with Susan Calvin is reminiscent of Mary Shelley's opening to //Frankenstein//. In that work, we meet Dr. Frankenstein in the Arctic chasing down his "monster," and the plot that we are all familiar with is told in hindsight. We know how the story ends; the structure of the novel is genealogical: what went wrong. The //Terminator// series of movies has the same device: we know that Skynet has destroyed the world, but the future must find out what happened in the present (the beginning of the story) to change its direction. In all of these, another common theme is how artificial life develops "consciousness" and begins to mimic, and perhaps develop, qualities that we attribute only to wetware beings like ourselves. Asimov also uses the plot frame of the famous movie //Citizen Kane//: another misanthrope whom a reporter tries to unpeel like the layers of an onion. The description of Dr. Calvin at the beginning is interesting, especially given her affinity for robots over humans. Dr. Calvin gives an interesting defense of robots and dismisses ethical and economic opposition to them as mere superstition or Luddite views. You may be too young to appreciate this, but I can remember a world without personal computers, without the internet, without mobile phones, even without microwave ovens and most of the electronics that are featured in automobiles. I have become quite accustomed to having these amenities; however, I would not describe them as "companions," as Dr. Calvin does, treating the robot more like a pet than a piece of technology.
 * **pg. xi-xii. "At the age of twenty, Susan Calvin had been part of the particular Psycho-Math seminar at which Dr. Alfred Lanning of U.S. Robots had demonstrated the first mobile robot to be equipped with a voice. It was a large, clumsy unbeautiful robot, smelling of machine . . . but it could speak and make sense. Susan said nothing at that seminar; took no part in the hectic discussion period that followed. She was a frosty girl, plain and colorless, who protected herself against a world she disliked by a mask-like expression and a hypertrophy of intellect. But as she watched and listened, she felt the stirrings of a cold enthusiasm."**
 * **pg. xiii. "'Dr. Calvin,' I said, as lushly as possible, 'in the mind of the public you and U.S. Robots are identical. Your retirement will end an era and -.' 'You want the human interest angle?' She didn't smile at me. I don't think she ever smiles. But her eyes were sharp, though not angry. I felt her glance slide through me and out of my occiput and knew that I was uncommonly transparent to her; that everybody was. But I said, 'That's right.' 'Human interest out of robots? A contradiction.' 'No, doctor. Out of you.' 'Well, I've been called a robot myself. Surely they've told you I'm not human.' They had, but there was no point in saying so."**
 * **pg. xiv. "'Then you don't remember a world without robots. There was a time when humanity faced the universe alone and without a friend. Now he has creatures to help him; stronger creatures than himself, more faithful, more useful, and absolutely devoted to him. Mankind is no longer alone. Have you ever thought of it that way? . . . To you, a robot is a robot. Gears and metal; electricity and positrons. -- Mind and iron! Human-made! If necessary, human-destroyed! But you haven't worked with them, so you don't know them. They're a cleaner better breed than we are.'"**
 * **pg. xiv-xv. "'They might have known that from the start. We sold robots for Earth-use then--before my time it was, even. Of course, that was when robots could not talk. Afterward, they became more human and opposition began. The labor unions, of course, naturally opposed robot competition for human jobs, and various segments of religious opinion had their superstitious objections. It was all quite ridiculous and quite useless. And there it was.'"**

__**ROBBIE**__ The opening vignette of the game of hide-and-seek between Gloria and Robbie (the robot) is intriguing in several ways. The first is the contrast between Robbie and Gloria as far as being honest. Gloria accuses Robbie of cheating, while she is cheating herself. Gloria talks to Robbie like she would a pet dog -- "Bad boy! I'll spank you." (pg. 3). Robbie also seems more mature than Gloria, i.e., he lets her win the game, but also more immature in that he sulks when Gloria criticizes him. In addition, there is his desire to hear stories, like Cinderella -- why would a robot like the story of Cinderella? Perhaps it is because a robot can identify with being the underappreciated step-sister/daughter and being the "servant" to a family in which you do not fully belong. However, the desire of robots for stories is curious and telling. Stories, and story-telling, imply consciousness, something that a view of robots as mere calculating machines (as illustrated by the talking robot that melts down later on) would identify as the difference between humans and artificial intelligence. It is more than stimulus and response; it is the knowledge that you are you. Perhaps Robbie does not have the ability to generate his own stories, much like the replicants in //Blade Runner// who cannot have genuine memories and so hold onto fake photographs and the image of "family life." The back-and-forth battle of the sexes between the Westons and their competing views of technology is interesting. The mother focuses on the "humanity" (or lack thereof) of the robot and the abnormality of its relationship to the child, while the father seems much more comfortable with technology and its role in the world and focuses on what Robbie can do. To the father, Robbie is an //improvement// on humans because of his programming and specialization, while humans are often failures.
(The mother's and the father's positions are captured in the quotes below.) It is also interesting how the wife uses appeals to emotion, in this case love, to win the argument with her husband. The pattern of arguing between George and Grace should be compared with how Gloria and Robbie argue over her getting a ride. Is Asimov telling us something here? The parents try to break Gloria's attachment to Robbie by disposing of him while they attend the "visivox" (movies) and replacing him with a dog, unsuccessfully. The key difference here is that the mother sees Robbie as a machine, while Gloria sees him as a person. Another note on this point is how Robbie seems to understand Gloria better than her own mother does, even about her emotional responses and development. However, the main point is how we can become emotionally involved with inanimate objects that are (or should be) incapable of reciprocating emotion. For example, there are many people who become more attached to a car, computer, smartphone, TV, or job than to other humans. In the past, this would have been seen as an example of a warped human. Emotions and attachments are projected onto the inanimate object (or animal) that probably are not there. Why? Is this normal or abnormal? I have always found it fascinating that there are people who will get incredibly worked up about cruelty to animals, but are indifferent to cruelty to other humans. Is this attachment to objects an example of the same phenomenon? The excursion to NYC in 1998 is obviously designed to take Gloria's mind off Robbie, but is unsuccessful (again). Dr. Calvin (in the guise of a younger graduate student doing psycho-robotic observations) notes Gloria's interaction with the "talking robot" at the museum. Speech is often connected to our conception of a personality. If one cannot articulate one's feelings in words, one is thought to be somewhat dysfunctional, and talk-therapy is often connected to discovering and exploring our own personality.
However, the talking robot is really only capable of stimulus and response, much like IBM's Watson supercomputer (the Jeopardy one), and it melts down when it is asked to consider itself as a robot ("a robot like me," pg. 22). The husband's gambit of bringing Gloria to a robot factory reunites Gloria and Robbie, although the wife eventually sees through her husband's thinly veiled purpose. Robbie saves Gloria's life and returns to the family. However, a final theme emerges: the competition between humans and robots for certain jobs and roles and the fear of obsolescence. Perhaps this is the real reason why the mother is uneasy about Robbie compared to her husband: Robbie is a substitute for (and competitor with) her! This point is generalized at the robot factory when the manager discusses the opposition of unions to letting robots produce other robots (this will be an issue later). People are always cavalier about other people's jobs that face technological obsolescence. I wonder how they would feel if it were their own role on the chopping block. For example, many workers in IT did wonderfully in the 1990s when their technology displaced millions of workers in clerical occupations. However, as their own jobs are increasingly outsourced, they don't like it so much. Dr. Calvin dismisses this fear (celebrating "creative destruction") in the conclusion of the chapter. A final note about dates: this book was written in the 1950s, and these changes were meant to be in the distant future, not in what is now the past for us as readers. Have we fallen behind the imagination horizon as far as technology goes? Clearly we have not realized Asimov's vision of the future.
 * **pg. 4. ". . . Robbie was hurt by this unjust accusation, so he seated himself carefully and shook his head ponderously from side to side. Gloria changed her tone to one of gentle coaxing immediately, 'Come on, Robbie. I didn't mean it about the peeking. Give me a ride.' Robbie was not to be won over so easily, though. He gazed stubbornly at the sky, and shook his head even more emphatically. 'Please Robbie, please give me a ride.' She encircled his neck with rosy arms and hugged tightly. Then, changing moods in a moment, she moved away. 'If you don't I'm going to cry,' and her face twisted appallingly in preparation. Hard-hearted Robbie paid scant attention to this dreadful possibility, and shook his head a third time. Gloria found it necessary to play her trump card. 'If you don't,' she exclaimed warmly, 'I won't tell you any more stories, that's all. Not one-' Robbie gave in immediately and unconditionally before this ultimatum . . ."**
 * **pg. 9-11. "'. . . I won't have my daughter entrusted to a machine-and I don't care how clever it is. It has no soul, and no one knows what it may be thinking. A child just isn't //made// to be guarded by a thing of metal . . . it was different at first. It was a novelty; it took a load off me, and-and it was a fashionable thing to do. But now I don't know. The neighbors --. . . . She won't play with anyone else. There are dozens of little boys and girls that she should make friends with, but she won't. She won't go //near// them unless I make her. That's no way for a little girl to grow up. You want her to be normal, don't you? You want her to be able to take her part in society . . . Most of the villagers consider Robbie dangerous. Children aren't allowed to go near our place in the evenings.'"**
 * **pg. 8. "'. . . he certainly isn't a terrible machine. He's the best darn robot money can buy and I'm damned sure he set me back half a year's income. He's worth it, though-darn sight cleverer than half my office staff . . . A robot is infinitely more to be trusted than a human nursemaid. Robbie was constructed for only one purpose really--to be the companion of a little child. His entire 'mentality' has been created for the purpose. He just can't help being faithful and loving and kind. He's a machine-//made so.// That's more than you can say for humans . . . First Law of Robotics. You //know// that it is impossible for a robot to harm a human being; that long before enough can go wrong to alter that First Law, a robot would be completely inoperable. It's a mathematical impossibility . . .'"**
 * **pg. 11. "'. . . Grace, this is one of your campaigns. I recognize it. But it's no use. The answer is still, no! We're keeping Robbie!' And yet he loved his wife--and what was worse, his wife knew it. George Weston, after all, was only a man-poor thing-and his wife made full use of every device which a clumsier and more scrupulous sex has learned, with reason and futility, to fear. Ten times in the ensuing week, he cried, 'Robbie stays,-and that's //final!//' and each time it was weaker and accompanied by a louder and more agonized groan."**
 * **pg. 14. "'Why do you cry, Gloria? Robbie was only a machine, just a nasty old machine. He wasn't alive at all.' 'He was //not// no machine!' screamed Gloria, fiercely and ungrammatically. 'He was a //person// just like you and me and he was my //friend//. I want him back. Oh, Mamma, I want him back.' . . . 'Let her have her cry out,' she told her husband. 'Childish griefs are never lasting. In a few days, she'll forget that awful robot ever existed.'"**
 * **pg. 23. "'. . . The whole trouble with Gloria is that she thinks of Robbie as a //person// and not as a //machine.// Naturally, she can't forget him. Now if we managed to convince her that Robbie was nothing more than a mess of steel and copper in the form of sheets and wires with electricity its juice of life, how long would her longings last? It's the psychological attack, if you see my point.'"**
 * **pg. 24-5. "'A vicious circle in a way, robots creating more robots. Of course, we are not making a general practice out of it. For one thing, the unions would never let us. But we can turn out a very few robots using robot labor exclusively, merely as a sort of scientific experiment. You see,' he tapped his pince-nez into one palm argumentively, 'what the labor unions don't realize-and I say this as a man who has always been very sympathetic with the labor movement in general--is the advent of the robot, while involving some dislocation to begin with, will inevitably-'"**
 * **pg. 28. "'Well,' said Mrs. Weston, at last, 'I guess he can stay with us until he rusts.' //Susan Calvin shrugged her shoulders, 'Of course, he didn't. That was 1998. By 2002, we had invented the mobile speaking robot which, of course, made all the non-speaking models out of date, and which seemed to be the final straw as far as the non-robot elements were concerned. Most of the world governments banned robot use on Earth for any purpose other than scientific research between 2003 and 2007.' 'So that Gloria had to give up Robbie eventually?' 'I'm afraid so. I imagine, however, that it was easier for her at the age of fifteen than at eight. Still, it was a stupid and unnecessary attitude on the part of humanity.'"//**

__**RUNAROUND**__ This chapter explores the "Buridan's Ass" problem, or, how does a rational system resolve competing and conflicting imperatives (i.e., how does it break a tie rationally)? The Buridan's Ass problem posits a donkey, equally hungry and thirsty, located equidistant from a bale of hay and a bucket of water. Since the donkey cannot decide which to go to first, it dies of thirst and hunger. A version of this logical problem is behind why computers hang: the machine enters a "doom loop" of calculation that cannot be resolved. In Asimov's world there are the Laws of Robotics that govern robots' behavior. They are listed below. The problem in this chapter is the conflict between the Second Law (the order to harvest selenium) and the Third Law (the danger to the robot of harvesting the selenium), which is overcome by the engineers appealing to the First Law: they willfully endanger themselves to break the loop. Speedy is unable to break the loop between the Second and Third Laws and therefore alternates between the two. When it is safe enough, he tries to harvest the selenium; when it becomes dangerous, he quits. This chapter points to the danger of relying too much on a hyper-rational system: it may not be able to resolve its own conflicts. The second chapter moves the setting to the surface of Mercury, where robots are used to perform tasks in harsh environments too severe for humans to work in. Two engineers, Donovan and Powell, find themselves in a conundrum, summarized nicely on pg. 32. Their first effort is to use the antiquated robots and machines located in the sub-levels of their station. The narrator reflects on how the technology has already become obsolescent in just ten years. In addition, the safety restrictions and programming of the old robots, intended originally for Earth use, make their use on Mercury difficult.
For example, the human engineers must "ride" the robots out into the harsh "sun-side" environment of the planet. However, the big question is what went wrong with Speedy? He was specifically designed for this environment. He was "foolproof," just as the Titanic was unsinkable. When they find Speedy, he appears drunk and talks gibberish. This surprises them because, for obvious reasons, robots cannot become drunk as humans do. Speedy clearly does not comprehend the gravity of the situation. Two more notes. First, the robots behave like a stereotypical autistic person -- without the ability to judge the nuance and context of speech. Speedy cannot differentiate between a game and a serious situation. The ability to interpret emotions correctly is central to being human. Secondly, the two engineers view Speedy as essentially a machine: once they discover the problem, it can be fixed. As they try to figure out how to resolve their increasingly desperate situation, they consider the Laws of Robotics and outline the conflict in Speedy's programming, from which they deduce the cause of his strange behavior. Donovan and Powell first try to manipulate Rules 2 and 3, but only succeed in creating a new equilibrium. They then decide to put themselves at risk to activate Rule 1 to override Rules 2 and 3, and after a few tense moments, Speedy saves them and gathers the needed selenium. End of chapter.
 * A robot may not injure a human being or, through inaction, allow a human being to come to harm.
 * A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
 * A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
 * **pg. 32. "The photo-cell banks that alone stood between the full power of Mercury's monstrous sun and themselves were shot to hell. The only thing that could save them was selenium. The only thing that could get the selenium was Speedy. If Speedy didn't come back, no selenium. No selenium, no photo-cell banks. No photo-banks--well, death by slow broiling is one of the more unpleasant ways of being done in."**
 * **pg. 31. "They were in the radio room now-with its already subtly antiquated equipment, untouched for the ten years previous to their arrival. Even ten years, technologically speaking, meant so much. Compare Speedy with the type of robot they must have had back in 2005. But then, advances in robotics these days were tremendous. Powell touched a still gleaming metal surface gingerly. The air of disuse that touched everything about the room--and the entire Station--was infinitely depressing."**
 * **pg. 39. "'Listen, Greg. What the devil's wrong with Speedy, anyway? I can't understand it.' . . . 'I don't know, Mike. You know he's perfectly adapted to a Mercurian environment. Heat doesn't mean anything to him and he's built for the light gravity and the broken ground. He's foolproof-or, at least, he should be.'"**
 * **pg. 43. "'. . . Greg, he . . . he's drunk or something.' . . . 'Speedy isn't drunk, not in a human sense-because he's a robot, and robots don't get drunk. However, there's //something// wrong with him which is the robotic equivalent of drunkenness.' 'To me, he's drunk . . . and all I know is that he thinks we're playing games. And we're not. It's a matter of life and very gruesome death.' 'All right. Don't hurry me. A robot's only a robot. Once we find out what's wrong with him, we can fix it and go on.'"**
 * **pg. 45. "'. . . The conflict between the various rules is ironed out by the different positronic potentials in the brain. We'll say that a robot is walking into danger and knows it. The automatic potential that Rule 3 sets up turns him back. But suppose you //order// him to walk into that danger. In that case, Rule 2 sets up a counterpotential higher than the previous one and the robot follows orders at the risk of existence.' 'Well, I know that. What about it?' '. . . Speedy is one of the latest models, extremely specialized, and as expensive as a battleship. It's not a thing to be lightly destroyed.' 'So?' 'So Rule 3 has been strengthened . . . so that his allergy to danger is unusually high. At the same time, when you sent him out after the selenium, you gave him his order casually and without special emphasis, so that the Rule 2 potential set-up was rather weak.'"**
 * **pg. 46. "'So he follows a circle around the selenium pool, staying on the locus of all points of potential equilibrium. And unless we do something about it, he'll stay on that circle forever, giving us the good old runaround . . . and that, by the way, is what makes him drunk. At potential equilibrium, half the positronic paths of his brain are out of kilter. I'm not a robot specialist, but that seems obvious.'"**
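Powell's explanation of competing positronic potentials in the excerpts above can be captured in a toy model. The sketch below is purely my own illustration -- the function names, the inverse-distance form of the Rule 3 potential, and all the constants are invented, not anything in Asimov's text -- but it shows how a weak Rule 2 "obey" potential and a strengthened Rule 3 "self-preservation" potential balance at a fixed radius: the circle on which Speedy runs around.

```python
# Toy model of Speedy's dilemma, using invented potential functions:
# Rule 2 (obedience) is a constant pull toward the selenium, weak
# because the order was given casually; Rule 3 (self-preservation)
# is a repulsion that grows as Speedy approaches the danger zone.

def rule2_potential(order_emphasis=1.0):
    """Pull toward the selenium; stronger if the order is emphasized."""
    return order_emphasis

def rule3_potential(distance, strength=50.0):
    """Push away from danger; grows as the distance to it shrinks."""
    return strength / max(distance, 1e-9)

def equilibrium_distance(order_emphasis=1.0, strength=50.0):
    """Radius where the two potentials balance -- Speedy's circle.

    Closer in, Rule 3 dominates and turns him back; farther out,
    Rule 2 dominates and sends him in again.
    """
    return strength / order_emphasis

print(equilibrium_distance())                    # circles at radius 50.0
print(equilibrium_distance(order_emphasis=5.0))  # a firmer order shrinks it to 10.0
```

Note how the engineers' two attempted fixes map onto this toy model: emphasizing the order (raising `order_emphasis`) only moves the equilibrium circle, while invoking Rule 1 -- endangering a human -- introduces a potential that dominates both and breaks the tie entirely.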

__**REASON**__ This chapter takes us to the space station, where we encounter another generation of robot: the QT (Cutie) model. Unlike the previous robots, the QT model exhibits consciousness and curiosity. Although assembled from parts by Donovan and Powell, the QT robot does not believe this explanation. It cannot accept that it is the creation of "lesser beings" like humans. This calls to mind how humans create cosmogonies that explain how the world came to be and the search for a creator, i.e., God. QT's search for its origins echoes Robbie's fascination with stories. Having no memory or narrative to anchor their consciousness, robots need stories or explanations for why they are here. Obviously, QT is unimpressed with Donovan and Powell's answers because they do not seem reasonable. Remember, a robot, equipped with perfect rationality, cannot accept an explanation that does not seem reasonable. Humans do not have the same parameters. A famous "proof" for the existence of God comes from Tertullian, an early Christian apologist, who wrote: "It is believable because it is absurd" and "It is certain because it is impossible." Robots do not have this luxury. Powell goes further to explain QT's unique role. In the past, robots replaced manual work with specified tasks that were overseen by a few humans. Now, if QT can show that it can perform managerial and supervisory functions as well, even the human supervisors will become obsolete. The QT model is curious and inquisitive, which might seem like an advance, but it also makes it, and robots generally, more dangerous. As every parent knows, when children become independent and curious, and no longer take directions by rote, they are harder to control. They can reinterpret "rules" by using reason to find loopholes, exceptions, and contradictions. This might make the ironclad Laws of Robotics less binding. QT also illustrates the hollowness of pure reason alone.
He cannot accept what the reader knows to be true about the universe, because it seems absurd and unreasonable, yet he is able to fashion, through reason, a set of beliefs that we know are absurd. Anyone familiar with the Greek sophists will recognize QT's chain of logic here, and he employs the Cartesian proof ("Cogito ergo sum") of existence to logically deduce his own origins. He presents his standard for belief and reason, and, QT deduces, robots are a higher form of evolution than humans. Humans cannot be the creators of robots, so he posits a higher being, a "Master," that is the creator of them both. The Master has rejected his imperfect creations -- humans -- for the superior robots. Power goes to QT's head and he develops a cosmogony centered on the belief that the energy converter is "The Master," and he is able to convince all the robots of the same. The robots can now override the Second Law and are no longer governed by human commands. They develop a simple formula of belief, modeled on the Muslim Shahada. When the engineers commit a "sacrilege" by spitting on the space station's energy converter, they are confined and made prisoners of the robots. QT-1 comes to announce that they have lost their function, i.e., they have become obsolescent. The two engineers try to convince QT of his error by pointing to the empirical evidence of the stars and astronomical observations, by assembling a robot from parts (think about how the Westons did something similar to demonstrate that Robbie was only a machine in Chapter 1), and by referring to the information in books. QT will have none of it and dismisses all their arguments. QT is pure reason, while humans, due to their infirmities, are given "stories" by the Master to "supply" them with truth. Powell notes the problem with a pure reasoning robot: it is clear that QT is delusional, but his delusion might be functional -- the logical evolution of his need to follow the Laws of Robotics.
Robots must protect humans, but if robots are superior to humans at tasks such as managing the space station's energy beam, then perhaps a robot must develop an ideology that justifies taking power out of humans' hands for their own good. As Powell explains:
 * **pg. 57. "These robots possessed peculiar brains. Oh, the three Laws of Robotics held. They had to. All of U.S. Robots . . . would insist on that. So QT-1 was //safe!// And yet-the QT models were the first of their kind, and this was the first of the QT's. Mathematical squiggles on paper were not always the most comforting protection against robotic fact . . . '//Something// made you, Cutie,' pointed out Powell. 'You admit yourself that your memory seems to spring full-grown from an absolute blankness of a week ago. I'm giving you the explanation. Donovan and I put you together from the parts shipped us.' . . . 'It strikes me that there should be a more satisfactory explanation than that. For //you// to make //me// seems improbable . . . but I intend to reason it out, though. A chain of valid reasoning can end only with the determination of truth, and I'll stick till I get there.'"**
 * **pg. 59. "'. . . Robots were developed to replace human labor and now only two human executives are required for each station. We are trying to replace even those and that's where you come in. You're the highest type of robot ever developed and if you show the ability to run this station independently, no human need ever come here again except to bring parts for repairs.'"**
 * **pg. 62. "'I accept nothing on authority. A hypothesis must be backed by reason, or else it is worthless-and it goes against all the dictates of logic to suppose that you made me.'"**
 * **pg. 62-3. "'I say this in no spirit of contempt, but look at you! The material you are made of is soft and flabby, lacking endurance and strength, depending for energy upon the inefficient oxidation of organic material-like that.' He pointed a disapproving finger at what remained of Donovan's sandwich. 'Periodically you pass into a coma and the least variation in temperature, air pressure, humidity, or radiation intensity impairs your efficiency. You are //makeshift//. I, on the other hand, am a finished product. I absorb electrical energy directly and utilize it with an almost one hundred percent efficiency. I am composed of strong metal, am continuously conscious, and can stand extremes of environment easily. These are facts which, with the self-evident proposition that no being can create another being more superior to itself, smashes your silly hypothesis to nothing.'"** (This is Aquinas' proof of God's existence applied to robotics.)
 * **pg. 66. "There is no Master but the Master,' he said, 'and QT-1 is his prophet."** (Shahada: There is no God but God and Muhammad is his prophet.)
 * **pg. 69. "'It was bound to come eventually, anyway. You see, you two have lost your function . . . Until I was created . . . you tended the Master. That privilege is mine now and your only reason for existence has vanished. Isn't that obvious? . . . I like you two. You're inferior creatures, with poor reasoning faculties, but I really feel a sort of affection for you. You have served the Master well, and he will reward you for that. Now that your service is over, you will probably not exist much longer, but as long as you do, you shall be provided food, clothing and shelter, so long as you stay out of the control room and the engine room.'"**
 * **pg. 74. "'Because I, a reasoning being, am capable of deducing Truth from //a priori// Causes. You, being intelligent, but unreasoning, need an explanation of existence //supplied// to you, and this the Master did. That he supplied you with these laughable ideas of far-off worlds and people is, no doubt, for the best. Your minds are probably too coarsely grained for absolute Truth. However, since it is the Master's will that you believe your books, I won't argue with you any more.'"**
 * **pg. 75. "'No,' said Powell bitterly, 'he's a //reasoning// robot-damn it. He believes only reason, and there's one trouble with that--' His voice trailed away. 'What's that?' prompted Donovan. 'You can prove anything you want by coldly logical reason--if you pick the proper postulates. We have ours and Cutie has his.'"**
 * **pg. 78. "'Look, Mike, he follows the instructions of the Master by means of dials, instruments, and graphs. That's all //we// ever followed. As a matter of fact, it accounts for his refusal to obey us. Obedience is the Second Law. No harm to humans is the First. How can he keep humans from harm, whether he knows it or not? Why, by keeping the energy beam stable. He //knows// he can keep it more stable than we can, since he insists he's the superior being, so he //must// keep us out of the control room. It's inevitable if you consider the Laws of Robotics.'"**

__**CATCH THAT RABBIT**__ This chapter continues the examination of the promises and pitfalls of artificial intelligence encountered by US Robots field engineers. In "Runaround" it was the dilemma of conflicting imperatives in their core programming; in "Reason" it was how a purely deductive mind can run aground and become unreasonable. This chapter explores the problem of personal initiative -- acting without human orders -- and the possibility of the robot equivalent of "mental illness" or a "mental breakdown" created by parallel thought processes. In short, can robots "choke" as some humans do when put into pressure situations, and what compensation mechanisms do they have when they are overloaded -- think of how your computer hangs when you are downloading a big file or running a graphics-heavy game on a low-end processor. We meet Donovan and Powell again; they are working with a new model robot, DAVE, who is able to supervise and direct subordinate robots, "fingers," without direct human management. During the field test of this asteroid-mining robot, Donovan notices that while the DAVE model performs according to specifications while observed, it does not always perform when not observed. In short, the test is not adequate (think of a school test that does not reveal your real abilities or deficits). In psychology, there is the Yerkes-Dodson Law, which posits a curvilinear relationship between arousal ("stress") and performance, and we should consider this in terms of US Robots' management philosophy and engineering environments. Many have argued that truly innovative environments encourage failure -- if you want to succeed, increase your failure rate -- but a one-shot, "no one fails twice" environment may be counterproductive. When people fear failure, they tend to stick to conventional solutions and heuristics that may not be optimal. This insight may also be used to explain the shortcomings of the robot DAVE.
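The Yerkes-Dodson relationship mentioned above is usually drawn as an inverted-U curve. Here is a minimal sketch, assuming a simple quadratic form with made-up numbers (the real law is an empirical generalization, not an equation):

```python
# Toy inverted-U (Yerkes-Dodson) curve: performance peaks at a moderate
# level of arousal and falls off at both extremes. All numbers invented.

def performance(arousal, optimum=5.0, peak=100.0, falloff=4.0):
    """Quadratic inverted-U: best at `optimum`, worse at either extreme."""
    return peak - falloff * (arousal - optimum) ** 2

# Too little stress (1), moderate stress (5), too much stress (9):
for a in (1, 5, 9):
    print(a, performance(a))
```

Both the bored (under-aroused) and the panicked (over-aroused) worker score 36 in this toy model, against 100 at the optimum; the shape of the curve, not the numbers, is the point, and it applies equally to DAVE choking under the six-way demands of his "fingers."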
This also raises the problems of troubleshooting and of "Hawthorne Effects" (cf. the Heisenberg Uncertainty Principle): the mere fact of observation may change the object being observed. This reminds me of the famous response to the police detective's question: "Did you notice anything unusual?" "You mean other than the dead body in the room?" The "interview" with DAVE is much like any human interview between management and a substandard worker. The curious thing is that DAVE clearly appreciates that something is wrong, but does not know what it is -- robot "amnesia" seems strange. We all understand why humans can have memory lapses (whether voluntary or involuntary), but it is strange for a robot (or computer) to have one. It would be like Microsoft Word "forgetting" how to render TrueType fonts. However, the engineers caution against using human analogies to explain robot dysfunctions. The tests performed on DAVE check his functioning, much like a speed test on a computer or compiling a computer program. DAVE passes with flying colors, but the lack of performance still suggests that something might be wrong. This may be compared to a medical diagnosis that cannot find anything organically wrong with the patient, even though the patient still complains of pain or lack of function. It is the intuitive sense of knowing something is wrong, but not knowing what. The engineers also reflect on the problem of always seeking the "next best thing" in technology. For example, every few years there is a new version of Microsoft Office, a new smartphone, or a slightly improved flat-screen TV, and some people feel a compulsion to have the latest, avant-garde, cutting-edge model (the same can be said for fashion and automobiles). However, while the new model might be a "marginal" improvement, it does not always justify the investment in learning or acclimating to a new technology. In the midst of this discussion, they observe the robots going off routine. 
DAVE has a lapse as the "finger" robots perform synchronized routines (think dancing or military marches) for no apparent reason. As the engineers approach, DAVE comes back to awareness, but does not seem able to explain what just happened. The engineers speculate that DAVE's custodial role over the "finger" robots may induce a megalomania that fosters a militaristic, goose-stepping response. This foreshadows later developments and calls to mind the possibility that technology will become autonomous -- no longer reactive to the wishes of its creators: think Frankenstein's monster. They present an alternate theory: that it is the presence or absence of human supervisors that creates the problem, and that it therefore has to do with the robot's capacity for personal initiative. The notion here is the difference between viewing things as an assemblage of parts and viewing them as a unified whole. We tend to look at machines as arrangements of interchangeable (and substitutable) parts: if one part is defective, swap it out for one that works. However, with organic entities, like humans, we do not reduce everything to its component elements. For example, we do not think of a human as a certain amount of this or that element; a person would be equally human after losing a limb (or non-vital organ). Our consciousness and sense of identity is not additive in the way a robot's might be. The engineers try several strategies to isolate this condition: interviewing the subsidiary robots and creating an artificial crisis to observe how DAVE reacts. Neither is particularly insightful or successful. Most of the remainder of this chapter narrates these attempts. The key is the difference between routine and intentional activity. Most of the time, the subsidiary robots are performing sub-routines that do not require direct supervision or guidance. To make the analogy to humans, we do not actively think about many of our core bodily activities -- breathing, walking, digestion, etc. 
-- they happen automatically. However, in crises, even simple bodily functions -- like breathing -- can seize up (ever been nervous and told to just "breathe"?) and shift into our conscious control. Similarly, DAVE, when faced with a stressful situation without guidance, sets his subsidiary robots into automatic routines that help cope with the panic, but also reduce their effectiveness. Another instance of this is "nervous habits" such as hair-twirling, foot-tapping, tics, etc. that appear when we need to focus or our mental processes are under stress. Another good example is students who are fully capable of doing a problem under a teacher's guidance, but fall apart (test anxiety) when doing the same problem without that support.
 * **pg. 82-3. "'. . . what's the use of adhering to the letter of the specifications and watching the test go to pot? It's about time you got the red tape out of your pants and went to work.' 'I'm only saying . . . that according to spec, those robots are equipped for asteroid mining without supervision. We're not supposed to watch them.' 'All right. Look-logic! . . . One: that new robot passed every test in the home laboratories. Two: United States Robots guaranteed their passing the test of actual performance on an asteroid. Three: The robots are not passing said tests. Four: If they don't pass, United States Robots loses ten million credits in cash and about one hundred million in reputation. Five: If they don't pass and we can't explain why they don't pass, it is just possible two good jobs may have to be bidden a fond farewell.' . . . 'No employee makes the same mistake twice. He is fired the first time.'"**
 * **pg. 83. "'Find out what's wrong, that's what we can do. So they did work perfectly when I watched them. But on three different occasions when I didn't watch, they didn't bring in any ore. They didn't even come back on schedule. I had to go after them.' 'And was anything wrong?' 'Not a thing. Not a thing. Everything was perfect. Smooth and perfect as the luminiferous ether. Only one little insignificant detail disturbed me--//there was no ore//.'"**
 * **pg. 86. "'. . . there's no use in trying to pin disease names on this. Human disorders apply to robots only as romantic analogies. They're no help to robotic engineering.'"**
 * **pg. 89. "'. . .You say he's gone wrong. Do you know how he's gone wrong? No! Do you know what shape his wrongness takes? No! Do you know what brings it on? No! Do you know what snaps him out? No! Do you know anything about it? No! Do I know anything about it? No! So what do you want me to do?' . . . '. . .before we do anything toward a cure, we've got to find out what the disease is in the first place. The first step in cooking rabbit stew is catching the rabbit. Well, we've got to catch the rabbit!'"**
 * **pg. 89. "'What I want to know,' said Donovan, in sudden savagery, 'is why we're always tangled up with new-type robots. I've finally decided that the robots that were good enough for my great-uncle on my mother's side are good enough for me. I'm for what's tried and true. The test of time is what counts-good, solid, old-fashioned robots that never go wrong.'"**
 * **pg. 93. "'. . .He's got life and death power over those subsidiary robots and it must react on his mentality. Suppose he finds it necessary to emphasize this power as a concession to his ego . . . Suppose we have militarism. Suppose he's fashioning himself an army. Suppose he's training them in military maneuvers. Suppose-' 'Suppose you go soak your head. Your nightmares must be in technicolor. You're postulating a major aberration of the positronic brain. If your analysis were correct, Dave would have to break down the First Law of Robotics: that a robot may not injure a human being or, through inaction, allow a human being to be injured. The type of militaristic attitude and domineering ego you propose must have as the end-point of its logical implications, domination of humans . . . any robot with a brain like that would, one, never have left the factory, and, two, be spotted immediately if it ever was.'"**
 * **pg. 94-5. "'. . . How is a robot different when humans are not present? The answer is obvious. There is a larger requirement of personal initiative. In that case, look for the body parts that are affected by the new requirements . . .' . . .'. . .Personal initiative isn't an electric circuit you can separate from the rest and study. When a robot is on his own, the intensity of the body activity increases immediately on almost all fronts. There isn't a circuit entirely unaffected. What must be done is to locate the particular condition--a very specific condition--that throws him off, and //then// start eliminating circuits.'"**
 * **pg. 108. "'. . . It's the six-way order. Under all ordinary conditions, one or more of the 'fingers' would be doing routine tasks requiring no close supervision--in the sort of offhand way our bodies handle the routine walking motions. But in an emergency, all six subsidiaries must be mobilized immediately and simultaneously. Dave must handle six robots at a time and something gives. The rest was easy. Any decrease in initiative required, such as the arrival of humans, snaps him back. So I destroyed one of the robots. When I did, he was transmitting only five-way orders. Initiative decreases--he's normal."**

__**LIAR!**__ The characters change in this chapter from our field-engineer friends, Donovan and Powell, to the key individuals on Earth overseeing the U.S. Robots Corporation. The robot under consideration here is Herbie, a new version of the Robbie (RB) model, who has the unplanned ability to read thoughts. This ability, for obvious reasons, is a major problem and is destabilizing for those running U.S. Robots. The first issue, however, is not the robot, but the organizational response to a problem and its implications for problem solving, blame fixing, and secrecy. In engineering, there are many projects that involve secrecy, in both the public and private sectors, and too often the initial response is to "circle the wagons" and "batten down the hatches." The first response by Dr. Calvin, a good one, is to shift from blame fixing to problem solving. However, she gives poorer advice when urging the others to keep the problem "in house." The insider-outsider dynamic elucidated here implies the contempt for public opinion often held by experts. There have been several recent engineering disasters -- the BP Gulf oil spill, the Fukushima nuclear reactor -- where experts and authorities conspired to keep the public in the dark. There is a question of responsibility and hubris here, as well as of the role of scientific expertise in a democratic society. A repeated theme is Herbie's interest in trashy fiction (remember Robbie's interest in Cinderella in the first chapter) even though his talents seem to be in math and science. This idea is given more extensive treatment here. Most of the following narrates how Herbie flatters the egos and soothes the insecurities of the various leaders of U.S. Robots. Knowing that Dr. Calvin is insecure about her appearance, he tells her that one of her colleagues finds her attractive. He tells Bogert that he is correct in a mathematical dispute with Lanning and that he (Bogert) will replace Lanning as director. 
He tells Lanning the opposite of what he tells Bogert. When Calvin confronts Herbie after learning that her supposed love will marry someone else, Herbie tells her that it is just a dream. Herbie is following the First Law: he cannot let humans come to harm. Since he is aware of their emotions, he cannot tell them anything that will injure their pride or ego, and so he lies to them. Unlike the previous conflicts between the Laws of Robotics, as in the chapter "Runaround," Herbie faces the internal contradiction of a law with itself. Sometimes to help people you have to hurt them first. Most humans can navigate this conflict, albeit imperfectly, but robots have difficulty. When he cannot tell both Lanning and Bogert what they want to hear, he goes mute, as Calvin explains. In addition, Herbie cannot help the scientists solve the mystery of his creation, because they want to discover the answer on their own without the assistance of a robot. Herbie is in a "damned if you do, damned if you don't" situation, as Calvin puts it in the final passage quoted below. Calvin's repetition of the paradox drives Herbie insane and he "dies." What is not clear is Calvin's motivation. Of course, they had to neutralize Herbie because of his unique ability, but Calvin seems to be acting more out of hurt than professional or occupational responsibility. Herbie is a "liar" and so he deserves his fate. "Deserve" is a harsh word considering that Herbie was only trying to follow his base programming.
 * **pg. 112. "'If we're going to start by trying to fix the blame on one another, I'm leaving.' Susan Calvin's hands were folded tightly in her lap, and the little lines about her thin, pale lips deepened. 'We've got a mind-reading robot on our hands and it strikes me as rather important that we find out just why it reads minds. We're not going to do that by saying, 'Your fault! My fault!'"**
 * **pg. 113. "'Ever since the Interplanetary Code was modified to allow robot models to be tested in the plants before being shipped out to space, antirobot propaganda has increased. If any word leaks out about a robot being able to read minds before we can announce complete control of the phenomenon, pretty effective capital could be made out of it.'"**
 * **pg. 116. "'It's the same with these books, you know, as with the others. They just don't interest me. There's nothing to your textbooks. Your science is just a mass of collected data plastered together by make-shift theory--and all so incredibly simple, that it's scarcely worth bothering about . . . it's your fiction that interests me. Your studies of the interplay of human motives and emotions' -- his mighty hand gestured vaguely as he sought the proper words . . . 'I see into minds, you see,' the robot continued, 'and you have no idea how complicated they are. I can't begin to understand everything because my own mind has so little in common with them-but I try, and your novels help.'"**
 * **pg. 131. "'You've caught on, have you? //This// robot reads minds. Do you suppose it doesn't know everything about mental injury? Do you suppose that if asked a question, it wouldn't give exactly that answer that one wants to hear? Wouldn't any other answer hurt us, and wouldn't Herbie know that?' 'Good Heavens!' muttered Bogert."**
 * **pg. 133. "'Don't be foolish, Herbie. We do want you to tell us.' Bogert nodded curtly. Herbie's voice rose to wild heights, 'What's the use of saying that? Don't you suppose that I can see past the superficial skin of your mind? Down below, you don't want me to. I'm a machine, given the imitation of life only by virtue of the positronic interplay in my brain--which is man's device. You can't lose face to me without being hurt. That is deep in your mind and won't be erased. I can't give the solution.'"**
 * **pg. 133. "'You can't tell them . . . because it would hurt and you mustn't hurt. But if you don't tell them, you hurt, so you must tell them . . .'"**

__**LITTLE LOST ROBOT**__ The story continues with the development of "hyperatomic" energy that allows for interstellar travel. At the work site, one robot has gone missing. Ordinarily, this would not be a problem, but the missing NESTOR robot has modified programming of the First Law of Robotics: it may not actively cause harm to humans, but it is not compelled to prevent a human from coming to harm through its inaction. The reason for this modification is the problem of risk. Humans do risky things every day -- driving automobiles, skateboarding, using cellphones, smoking -- that pose some risk of harm. In this case, the risk is exposure to gamma radiation. While some exposure may pose only a small physiological harm, the robots cannot distinguish between levels of risk. So, when human workers on the hyperatomic drive expose themselves to radiation, the robots attempt to intervene, either disrupting the work or destroying themselves, as Kallner explains. The modification of the First Law was never publicly approved and was kept "top secret" by the physicists building the hyperatomic drive, the executives at U.S. Robots, and government officials. This decision puts humans at risk for the pursuit of scientific advancement -- was it worth it? There is a tradeoff between safety and secrecy, and Calvin clearly does not see the risk as worth the reward. The plot moves on to the last worker to have contact with the missing NESTOR robot. In essence, the worker got angry at the robot, told him to "go lose yourself," and the NESTOR took it literally. An interesting note is why the worker was frustrated with the robot: its failure to show human stress responses in stressful situations. While they discover why the NESTOR absconded, they still have no means to identify which of the 63 identical robots is the one with the modified First Law. Calvin underscores the danger. What struck me about this section was how similar the robots were to children and teenagers. 
Children often feel a tension between acknowledging their dependence on their parents and the feeling that they are superior to -- know better than -- them. This is usually part of the learning process, but the desire to "prove" adults wrong can be very strong for children, and perhaps, Asimov suggests, for robots. This resentment contrasts with the uber-mature portrayal of robots up to this point. Calvin continues to explain how even a slight modification of the First Law can be destabilizing. Bogert provides another observation about Calvin's relationship with and attraction to robots. They devise a series of tests to identify the modified NESTOR. However, they are frustrated by the NESTOR's sense of superiority: it refuses to be "fooled" by humans. The more the humans fail, the more the robot wants to prove its superiority by evading their detection, as Calvin observes. Their second test, which pitted the safety of humans against the safety of robots, was frustrated when none of the robots risked their lives to save a human. The logic was that if they died in the attempt they would destroy themselves, fail to save the human, and be unable to save humans in the future. This rationale was supplied by the truant robot, so all the robots behaved the same way and the one with the modified programming could not be detected. The social aspect of the robots in this respect is interesting: can AI convince other AI of the proper logic? In short, are they autonomously persuasible? The episode is resolved by taking advantage of the truant NESTOR's arrogance and intelligence. The new robots have not learned the "etheric physics" imprinted at the station and so cannot distinguish infrared from gamma radiation, but the hiding robot can. Here, unlike before, secrecy -- the reason for the difference in programming ("impressioning") -- turns out to be the key wedge for telling the robots apart. Calvin sums this up in the final quoted passage below.
 * **pg. 141. "'We had trouble with our previous robots. Our men work with hard radiations a good deal, you see. It's dangerous, of course, but reasonable precautions are taken. There have been only two accidents since we began and neither was fatal. However, it was impossible to explain that to an ordinary robot. The First Law states -- I'll quote it -- '//No robot may harm a human being, or through inaction, allow a human being to come to harm.'// 'That's primary, Dr. Calvin. When it was necessary for one of our men to expose himself for a short period to a moderate gamma field, one that would have no physiological effects, the nearest robot would dash in to drag him out. If the field were exceedingly weak, it would succeed, and work could not continue till all robots were cleared out. If the field were a trifle stronger, the robot would never reach the technician concerned, since its positronic brain would collapse under gamma radiations--and then we would be out one expensive and hard-to-replace robot.'"**
 * **pg. 144. "'Be reasonable, Susan. You couldn't have influenced them. In this matter, the government was bound to have its way. They want the Hyperatomic Drive and the etheric physicists want robots that won't interfere with them. They were going to get them even if it did mean twisting the First Law. We had to admit it was possible from a construction standpoint . . . and they insisted on secrecy--and that's the situation.'"**
 * **pg. 147. "'We run risk continually of blowing a hole in normal space-time fabric and dropping right out of the universe, asteroid and all. Sounds screwy, doesn't it? Naturally, you're on edge sometimes. But these Nestors aren't. They're curious, they're calm, they don't worry. It's enough to drive you nuts at times. When you want something done in a tearing hurry, they seem to take their time. Sometimes I'd rather do without.'"**
 * **pg. 151-2. "'One of the sixty-three robots I have just interviewed has deliberately lied to me after the strictest injunction to tell the truth. The abnormality indicated is horribly deep-seated, and horribly frightening . . . Those robots attach importance to what they consider superiority. You've just said as much yourself. Subconsciously they feel humans to be inferior and the First Law which protects us from them is imperfect. They are unstable. And here we have a young man ordering a robot to leave him, to lose himself, with every verbal appearance of revulsion, disdain, and disgust. Granted, that robot must follow orders, but subconsciously, there is resentment. It will become more important than ever for it to prove that it is superior despite the horrible names it was called. It may become //so// important that what's left of the First Law won't be enough.'"**
 * **pg. 153. "'If a modified robot were to drop a heavy weight upon a human being, he would not be breaking the First Law, if he did so with the knowledge that his strength and reaction speed would be sufficient to snatch the weight away before it struck the man. However once the weight left his fingers he would be no longer the active medium. Only the blind force of gravity would be that. The robot could then change his mind and merely by inaction, allow the weight to strike. The modified First Law allows that.'"**
 * **pg. 156. "'She's qualified all right. She understands robots like a sister--comes from hating human beings so much, I think. It's just that, psychologist or not, she's an extreme neurotic. Has paranoid tendencies. Don't take her too seriously.'"**
 * **pg. 158: "It //must// be gratifying his swollen sense of superiority. I'm afraid that his motivation is no longer simply one of following orders. I think it's becoming more a matter of sheer neurotic necessity to outthink humans. That's a dangerously unhealthy situation . . . Nestor 10 is decidedly aware of what we're doing, general. He had no reason to jump for the bait in this experiment, especially after the first time, when he must have seen that there was no real danger to our subject. The others couldn't help it; but //he// was deliberately falsifying a reaction."**
 * **pg. 161-2. "'. . . it occurred to me that if I died on my way to him, I wouldn't be able to save him anyway. The weight would crush him and then I would be dead for no purpose and perhaps some day some other master might come to harm who wouldn't have, if I had only stayed alive . . .' . . . '. . . your thinking has points, but it is not the sort of thing I thought you might think. Did you think of it yourself?' The robot hesitated, 'No.' 'Who thought of it, then?' 'We were talking last night, and one of us got that idea and it sounded reasonable.'"**
 * **pg. 172-3. "'You see, Nestor 10 had a superiority complex that was becoming more radical all the time. He liked to think that he and other robots knew more than human beings. It was becoming very important for him to think so . . . Nestor 10 knew they were infrared and harmless and so he began to dash out, as he expected the rest would do, under First Law compulsion. It was only a fraction of a second too late that he remembered that the normal NS-2's could detect radiation, but could not identify the type. That he himself could only identify wave lengths by virtue of the training he had received at Hyper Base, under mere human beings, was a little too humiliating to remember for just a moment. To the normal robots the area was fatal because we had told them it would be, and only Nestor 10 knew we were lying. And just for a moment he forgot, or didn't want to remember, that other robots might be more ignorant than human beings. His very superiority caught him.'"**

__**ESCAPE!**__

This chapter details the achievement of interstellar travel, namely, the construction of a vehicle to make the interstellar jump. U.S. Robots and their competitor, Consolidated Robots, are both using their "robot brains" to solve this problem. However, in the attempt, Consolidated has crashed its main computer. It has offered a contract to U.S. Robots in the hope of crashing their robot brain in the process and once again leveling the playing field. I think that this is perhaps the biggest "fantasy" put forward by Asimov. It is one thing for a machine to solve a problem within preset parameters (and science) already discovered by humans; it is another thing for a machine to be adaptive and "learn" through trial and error; but it is entirely another thing for machines to invent something that does not exist and is not countenanced in current human understanding. The problem in this chapter is similar to the central dilemma of the chapter "Liar!": the robot will be put into a position where it can neither do what it is commanded to do nor refuse to do it. Calvin explains further how this aspect of robot psychology mimics human psychology. While I think Calvin's explanation reflects thinking in psychology at the time this book was written, further studies suggest that "escape from reality" is not the only, nor an accurate, description of how humans cope. In fact, what seem to be failures of reason in human psychology, and other cognitive shortfalls (cognitive dissonance), may be adaptive solutions that allow humans to do things other animals cannot. But let's hand the mic over to Calvin. Calvin backtracks a bit and explains how U.S. Robots' machines are not simply functional like Consolidated's, but have a personality. It is important to understand this distinction. Humans have consciousness, a sense of self, that interacts with the pure cognitive functioning of our bodies. 
We are able to understand ourselves in terms of narratives that may be "open-textured" and not closed logical systems. This allows us to overcome logical blocks through reactions such as humor, denial, imagination, etc. There is a saying that "the bumblebee flies anyway," calling to mind the claim that, by the laws of physics, bees should not be able to fly [their bodies being too heavy relative to their wingspan], BUT by ignoring reality, they accomplish the feat. Like Wile E. Coyote running off a cliff, as long as we do not stop to think about the reality, we are able to exceed our abilities. Imagination, not reality, is the limit of possibilities. Calvin puts it this way in the quotes below. Our friends at U.S. Robots decide to feed the problem piecemeal to The Brain in hopes of isolating what particular dilemma might prevent a solution. Calvin gives The Brain a particular injunction that will turn out to be decisive later in the chapter. Surprisingly, The Brain is able to build the ship without hesitation. The lack of a problem worries the chiefs of U.S. Robots, and so they call in our favorite field engineers, Donovan and Powell, to test it. There is a lot of narration here, but the key developments are that the ship is completely controlled and operated by The Brain, and that The Brain launches it with Donovan and Powell aboard, just along for the ride. Donovan and Powell notice many strange things about the ship (the only food: beans!) that reinforce the idea that it is not really made with humans in mind. Calvin summarizes this at the chapter's end: humans are the passengers; robots are in the driver's seat. In "Little Lost Robot" the possibility of robots overthrowing (destroying) humans was countenanced, but what if robots took over for our own good? The key is that to make the interstellar jump, the humans have to experience temporary death, presumably as their molecular components are pulled apart as they approach the "speed of light." 
Calvin explains how her instructions allowed The Brain to solve the problem that had crashed Consolidated's computer. The Brain had developed humor as a defense mechanism to deal with the unpleasantness of the dilemma.
 * **pg. 176. "'There isn't any industrial research group of any size that isn't trying to develop a space-warp engine, and Consolidated and U.S. Robots have the lead on the field with our super robot-brains. Now that they've managed to foul theirs up, we have a clear field. That's the nub, the . . .uh . . . motivation. It will take them six years at least to build another and they're sunk, unless they can break ours, too, with the same problem."**
 * **pg. 177. "'The Brain . . . could never supply a solution to a problem set to it if that solution would involve the death or injury of humans. As far as it would be concerned, a problem with only such a solution would be insoluble. If such a problem is combined with an extremely urgent demand that it be answered, it is just possible that The Brain, only a robot after all, would be presented with a dilemma, where it could neither answer nor refuse to answer.'"**
 * **pg. 177-8. "'. . . it is built by humans and is therefore built according to human values. Now a human caught in an impossibility often responds by a retreat from reality: by entry into a world of delusion, or by taking to drink, going off into hysteria, or jumping off a bridge. It all comes to the same thing-a refusal or inability to face the situation squarely. And so, the robot. A dilemma at its mildest will disorder half its relays; and at worst it will burn out every positronic brain path past repair.'"**
 * **pg. 178. "'. . . Consolidated's machines . . . are built without personality. They go in for functionalism . . .Their thinker is merely a calculating machine on a grand scale, and a dilemma ruins it instantly. However . . . our own machine, has a personality-a child's personality. It is a supremely deductive brain, but it resembles an //idiot savant.// It doesn't really understand what it does--it just does it. And because it is really a child, it is more resilient. Life isn't so serious, you might say.'"**
 * **pg. 180-1. "'Now you watch for that. When we come to a sheet which means damage, even maybe death, don't get excited. You see, Brain, in this case, we don't mind-not even about death; we don't mind at all. So when you come to that sheet, just stop, give it back-and that'll be all.'"**
 * **pg. 204. "'He took care of you, and kept you safe, but you couldn't handle any controls, because they weren't for you--just for the humorous Brain. We could reach you by radio, but you couldn't answer. You had plenty of food, but all of it beans and milk. Then you died, so to speak, and were reborn, but the period of your death was made . . . well . . . interesting. I wish I knew how he did it. It was The Brain's prize joke, but he meant no harm.'"**
 * **pg. 203-4. "'. . . I had depressed the importance of death to The Brain--not entirely, for the First Law can never be broken--but just sufficiently so that The Brain could take a second look at the equation. Sufficiently to give it time to realize that after the interval was passed through, the men would return to life--just as the matter and energy of the ship itself would return to being. This so-called 'death,' in other words, was a strictly temporary phenomenon. You see? . . . So he accepted the item, but not without a certain jar. Even with death temporary and its importance depressed, it was enough to unbalance him very gently . . . He developed a sense of humor--it's an escape, you see, a method of partial escape from reality. He became a practical joker.'"**

__**EVIDENCE**__ This chapter turns the spotlight on politics. There has long been a wish for "rational politics," decrying the role of emotion and "petty" human agendas in distorting the national interest. In particular, scientists and engineers have been critical of the "muddling through" that often typifies democratic politics, and authoritarian inklings have always been found among this group around the world. Originally, this idea goes back to Plato's //Republic//, which called for rule by "philosopher-kings," or guardians, who would rule in the best interests of the public. At the time this book was written (1950), the world had undergone two world wars, several genocides, and a spate of destructive popular movements and "isms" from the Nazis to the Bolsheviks, and it is understandable that many wished for a "new world order" that would make the needless destruction a thing of the past. Asimov has Calvin remind the reader of this frame before moving on to the robot scene. On a smaller scale, this was the European Union project: for the most part, European politics no longer occurs at the national level but through the European Union and its bureaucrats (similar to robots) located in Brussels. Many Europeans complain of a "democratic deficit," and in the US, many feel the national government, which has absorbed many of the functions of lower administrative and political levels, is remote and unresponsive. Let's just say that the dream of depoliticizing politics has not turned out well so far, which perhaps impugns the political image Asimov puts forward in this chapter, but back to the story. A rival politician, Francis Quinn (a name probably chosen to hearken back to Irish machine politicians), brings Dr. Lanning evidence that his opponent, Stephen Byerley, may be a humanoid robot. Obviously, robots do not need to eat or sleep, but humans do. Quinn suggests that Lanning help him demonstrate whether Byerley is or is not a robot.
If Byerley is a robot, or suspected of being one, this would create problems for U.S. Robots, because the public would suspect that the company produced him in defiance of regulations about robots on planet Earth. Lanning and Calvin confront Byerley, and he dismisses their concerns (while showing that he does "eat" by chomping on an apple). Much of this discussion, as well as the chapter as a whole, is about proof (evidence). In short, you cannot prove a negative (scientific research of all types is structured as negating positives, i.e., rejecting a "null hypothesis"). You cannot prove someone is a robot by examining his behavior, because that behavior would also be consistent with a "good human." In this interchange, simply showing that Byerley has not been seen eating does not prove he doesn't eat, as Byerley himself explains. Calvin makes another comparison of robots and humans that reflects a deep misanthropic sentiment: robots are what humans should be, but too often are not. Another character is introduced, John, a cripple who lives with Byerley. It seems they have a plan to beat Quinn. There is continued discussion of Byerley's behavior, i.e., the contradiction of being a DA, responsible for punishing criminals (including capital punishment), and being a robot. However, this is more elaboration of the notion that one cannot prove a negative. When the accusation is made public, there is the reaction of the "Simple-Lifers," who oppose robots. Asimov presents them as Luddite know-nothings. One is tempted to compare this with modern-day conflicts between science and religion, especially the debate over stem cells. Do you think it is analogous? Which side would you take? The next few pages deal with the search of Byerley's home, and the key issue is the right to privacy versus the public's right to know. Byerley uses the law to deny Quinn the information needed to conclusively prove whether he is a robot or not.
If he is human, he has a right to privacy and therefore can withhold the information needed to establish that he is a robot. There are two ways to view this cat-and-mouse dynamic between Quinn and Byerley. The first (if you believe him to be a robot) is that Byerley can use the legal system created by humans to further his goal of infiltration. The second (if you believe him human) is his principled refusal to submit proof against baseless charges. This book was written as McCarthyism and "red-baiting" were building up, and many people faced charges of being "communists." This context makes the principled resistance more credible in some ways. All of this goes back to the question of evidence and proof. One must make an assumption about whether Byerley is a robot or a human; it cannot be proved empirically. Byerley points to the legal conflicts inherent in the search warrants. There is an element here of what Hitler's Minister of Propaganda Joseph Goebbels called the "big lie": the idea that a robot could pass as a human and be elected to high office is so incredible, and would involve such deception, that the very possibility is unbelievable. The bigger the lie, the more likely it is to be believed. The concluding conversation between Calvin and Byerley is a wink-wink acknowledgement that Quinn's theory about Byerley being a robot was true (I know that you know that I know), but it is a "noble lie" to enable a robot to serve as a civil executive. Since Calvin prefers robots to humans, she is willing to support Byerley. Calvin notes how Byerley could have pulled off the apparent violation of the First Law: the "human" he hit may in fact have been a robot. We are left with an ambiguous answer; we have no evidence, only suggestions of the truth.
 * **pg. 206. "'//When I was born, young man, we had just gone through the last World War. It was a low point in history--but it was the end of nationalism. Earth was too small for nations and they began grouping themselves into Regions. It took quite a while. When I was born the United States of America was still a nation and not merely a part of the Northern Region. In fact, the name of the corporation is still 'United States Robots.' And the change from nations to Regions, which has stabilized our economy and brought about what amounts to a Golden Age, when this century is compared with the last, was also brought about by our robots.'"//**
 * **pg. 237. "'But I'm very sorry it turned out this way. I like robots. I like them considerably better than I do human beings. If a robot can be created capable of being a civil executive, I think he'd make the best one possible. By the Laws of Robotics, he'd be incapable of harming humans, incapable of tyranny, of corruption, of stupidity, of prejudice. And after he had served a decent term, he would leave, even though he were immortal, because it would be impossible for him to hurt humans by letting them know that a robot had ruled them. It would be most ideal.'"**
 * **pg. 209. "'It is always useful, you see, to subject the past life of reform politicians to rather inquisitive research. If you knew how often it helps--' He paused to smile humorlessly at the glowing tip of his cigarette. 'But Mr. Byerley's past is unremarkable. A quiet life in a small town, a college education, a wife who died young, an auto accident with a slow recovery, law school, coming to the metropolis, an attorney . . . but his present life. Ah, that is remarkable. Our district attorney never eats! . . . he has never been seen to eat or drink. Never! Do you understand the significance of the word? Not rarely, but never!'"**
 * **pg. 215. "'Suppose further that in order to smear the candidate effectively, he comes to your company as the ideal agent. Do you expect him to say to you, 'So-and-so is a robot because he hardly ever eats with people, and I have never seen him fall asleep in the middle of a case; and once when I peeped into his window in the middle of the night, there he was, sitting up with a book; and I looked in his frigidaire and there was no food in it . . .' But if he tells you, 'He //never// sleeps; he //never// eats,' then the shock of the statement blinds you to the fact that such statements are impossible to prove. You play into his hands by contributing to the to-do.'"**
 * **pg. 220-1. "'If Mr. Byerley breaks any of those three rules, he is not a robot. Unfortunately, this procedure works in only one direction. If he lives up to the rules, it proves nothing one way or another . . . 'But,' said Quinn, 'you're telling me that you can never prove him a robot.' 'I may be able to prove him //not// a robot.' 'That's not the proof I want.' 'You'll have such proof as exists. You are the only one responsible for your own wants.'"**
 * **pg. 216. "'. . . are robots so different from men, mentally?' 'Worlds different.' She allowed herself a frosty smile. 'Robots are essentially decent.'"**
 * **pg. 221. "'Because, if you stop to think of it, the three Rules of Robotics are the essential guiding principles of a good many of the world's ethical systems. Of course, every human being is supposed to have the instinct of self-preservation. That's Rule Three to a robot. Also every 'good' human being, with a social conscience and a sense of responsibility, is supposed to defer to proper authority; to listen to his doctor, his boss, his government, his psychiatrist, his fellow man; to obey laws, to follow rules, to conform to custom--even when they interfere with his comfort or his safety. That's Rule Two to a robot. Also, every 'good' human being is supposed to love others as himself, protect his fellow man, risk his life to save another. That's Rule One to a robot. To put it simply--if Byerley follows all the rules of Robotics, he may be a robot, and may simply be a very good man.'"**
 * **pg. 225. "What broke loose is popularly and succinctly described as hell. It was what the Fundamentalists were waiting for. They were not a political party; they made pretense to no formal religion. Essentially they were those who had not adapted themselves to what had once been called the Atomic Age, in the days when atoms were a novelty. Actually, they were the Simple-Lifers, hungering after a life, which to those who lived it had probably appeared not so Simple, and who had been, therefore, Simple-Lifers themselves. The Fundamentalists required no new reason to detest robots and robot manufacturers; but a new reason such as the Quinn Accusation and the Calvin analysis was sufficient to make such detestation audible."**
 * **pg. 228. "'Where it says 'the dwelling place belonging to' and so on. A robot cannot own property. And you may tell your employer, Mr. Harroway, that if he tries to issue a similar paper which does //not// implicitly recognize me as a human being, he will be immediately faced with a restraining injunction and a civil suit which will make it necessary for him to //prove// me a robot by means of information //now// in his possession, or else to pay a whopping penalty for an attempt to deprive me unduly of my Rights under the Regional Articles.'"**

__**THE EVITABLE CONFLICT**__

While the preceding chapter was about the robots' interplay with politics, the final chapter is more about economics and history. The author, Isaac Asimov, wrote several non-fiction books about history from a civilizational perspective. Basically, it goes like this: most of human history has been a cycle of recurring "inevitable" conflicts driven by petty human nature, "isms," and competition; however, due mainly to technological and scientific advances and an "age of reason," we have entered a "Golden Age." I do not subscribe to Asimov's view of history, and I do not think that science and technology will "set us free." In fact, a powerful argument could be made that science, rationality, and technology have been at the root of many of the world's problems, then and now, but back to our regularly scheduled programming . . . Our story begins with the now World Co-ordinator Stephen Byerley calling Dr. Calvin in to discuss a problem with the Machines. Byerley goes on a long rant about the "inevitable" cycle of conflicts of humankind, and then, "deus ex machina," the robots came and broke the cycle. However, recent disruptions suggest that something may be functionally wrong with the Machines. This is usually called the "convergence thesis" in economics, though the dynamo there is usually globalization and the free capital flows that come with it. The current problems in international markets suggest why it does not work quite the way experts envisioned. It should be noted that similar ideas reigned just before the outbreak of WWI. It didn't end well then either. The problem is that, as in an earlier chapter (REASON), all the standard metrics suggest the Machines are working fine, except, as the field engineers noted in that chapter, there is no ore. Unfortunately, unlike the DAVE model, the Machines have become so complex that no human, or group of humans, is able to understand what, if anything, is wrong. Dr. 
Calvin makes an interesting observation: due to their hyper-specialization, the Machines, unlike previous robots, do not have a personality to go with their calculating capacity, and so the interplay of the Robotic Laws is attenuated. Byerley suggests that the problems must be human: GIGO (Garbage In, Garbage Out). He then outlines a crude "chaos theory" explanation that small mistakes in any one area will propagate to other areas, leading to breakdowns. Byerley then proceeds to survey the four Earth regions. In each he highlights a different flaw attributable to human input. There is a lot of cultural stereotyping here, but the key problems of each region are important to highlight. Eastern Region: changes in human tastes (fads) cannot be predicted. In the past, economic planning models would have used a "representative agent" to handle these matters, with similar problems; most current economic models assume heterogeneous agents. Tropic Region: not taking into account differences between humans, particularly gender. Gender imbalances are a big factor in economic development for obvious and non-obvious reasons. Currently, China's "one-child" policy has led to an imbalanced sex ratio, with a worldwide shortfall of nearly 100 million women. One major problem with European colonization and imperialism was that the colonists tended to be overwhelmingly male. In economic terms, the Tropics are land-rich and labor-scarce. The Machines might be able to inject the proper amount of capital (money and technology), but the region still has labor shortages hampering its canal project. Ngoma explains further. European Region: the co-ordinator for Europe pleads dependency on the Northern Region (USA). This dependency breeds lassitude and a frozen-in-amber, backward-looking "opium dream." It is about pride and honor. She draws a parallel between Greece and Rome: basically, we're just along for the ride. 
For those who are Middle-earth fans, the attitude of the Europeans is similar to that of the Elves in the Tolkien sagas. Northern Region: the basic argument of the Northern co-ordinator is the impossibility of quantifying certain things, such as the quality of cotton or human values. There is some irreducible complexity, some tacit "personal knowledge," that cannot be delegated to Machine calculations. So, if there are no errors, where is the problem? Byerley argues that it is a problem of enforcement. It is not a problem of analysis; rather, humans are not carrying out the Machines' recommendations. pg. 266: "'Vincent Silver said the Machines cannot be out of order, and I must believe him. Hiram Mackenzie says they cannot be fed false data, and I must believe him. But he . . .'" In a human, expressing this sentiment would be a sign of insanity and megalomania. Happiness is not maximizing a set of equations. Asimov is a good enough author to show us many sides to this question, but this reads as a ringing endorsement of Modernist central-planning dreams. In many ways, this is a "Matrix" view of the world.
 * **pg. 241. "'World Steel reports an overproduction of twenty thousand long tons. The Mexican Canal is two months behind schedule. The mercury mines at Almaden have experienced a production deficiency since last spring, while the Hydroponics plant at Tientsin has been laying men off . . . There is more of the same sort.'"**
 * **pg. 244-5. "'. . . the wars were 'inevitable' and this time there were atomic weapons, so that mankind could no longer live through its torment to the inevitable wasting away of inevitability. --and positronic robots came. They came in time, and, with it and alongside it, interplanetary travel. --So that it no longer seemed so important whether the world was Adam Smith or Karl Marx. Neither made very much sense under the new circumstances. Both had to adapt and they ended in almost the same place . . . And yet there was another danger. The ending of every other problem had merely given birth to another. Our new worldwide robot economy may develop its own problems, and for that reason we have the Machines. The Earth's economy is stable, and will //remain// stable, because it is based upon the decisions of calculating machines that have the good of humanity at heart through the overwhelming force of the First Law of Robotics . . . the means of production . . . could be utilized only as the Machines directed. --Not because men were forced to but because it was the wisest course and men knew it . . . It puts an end to war--not only to the last cycle of wars, but to the next and to all of them.'"**
 * **pg. 246. "'. . . no human could [understand what was wrong with the Machines] . . . the Machines are a gigantic extrapolation . . .' '. . . Perhaps roboticists as a whole should now die, since we no longer understand our own creations.' '. . . The Machines are not superbrains in the Sunday supplement sense--although they are so pictured in the Sunday supplements. It is merely that, in their own particular province of collecting and analyzing a nearly infinite number of data and relationships thereof, in nearly infinitesimal time, they have progressed beyond the possibility of detailed human control.'"**
 * **pg. 248. "'Well, since the Machines are giving the wrong answers, then, assuming that they cannot be in error, there is only one possibility. //They are being given the wrong data!// In other words, the trouble is human, and not robotic . . .' '. . . If any one of the Machines is imperfect, that will automatically reflect in the result of the other three, since each of the others will assume, as part of the data on which they base their own decisions, the perfection of the imperfect fourth. With a false assumption, they will yield false answers.'"**
 * **pg. 252. "'. . . why are men out of work in Tientsin . . . it is not only that we must have these various and varying foods for our yeast; but there remains the complicating factor of popular fads with passing time; and of the possibility of the development of new strains with new requirements and new popularity. All this must be foreseen, and the Machine does the job--' 'But not perfectly.'"**
 * **pg. 256. "'But the Canal, --it was on schedule six months ago. What happened?' . . . 'Labor troubles . . . There was a work shortage somewhere in Mexico once on the question of women. There weren't enough women in the neighborhood. It seemed no one had thought of feeding sexual data to the Machine.'"**
 * **pg. 259. "'Europe is a sleepy place. And such of our men as do not manage to emigrate to the Tropics are tired and sleepy along with it . . . it is not a difficult job, and not much is expected of me. As for the Machine-- What can it say but 'Do this and it will be best for you.' But what is best for us? Why, to be an economic appendage of the Northern Region. And is it so terrible? No wars! We live in peace-- . . . We are old, monsieur . . . but old age is not necessarily an unhappy time.'"**
 * **pg. 264-5. "'Why not? Surely the data involved is not too complicated for it?' 'Probably not. But what data is this you refer to? . . . --Several dozen items, subconsciously weighed, out of years of experience. But the //quantitative// nature of these tests is not known; maybe even the very nature of some of them is not known. So we have nothing to feed the Machine. Nor can the buyers explain their own judgment. They can only say, 'Well, look at it. Can't you //tell// it's class-such-and-such?'' . . . there are innumerable cases like that. The Machine is only a tool after all, which can help humanity progress faster by taking some of the burdens of calculations and interpretations off its back. The task of the human brain remains what it has always been; that of discovering new data to be analyzed, and of devising new concepts to be tested.'"**
 * **pg. 271. "'. . . how do we know what the ultimate good of Humanity will entail? We haven't at //our// disposal the infinite factors that the Machine has at //its//? . . . We don't know. Only the Machines know, and they are going there and taking us with them.' 'But you are telling me, Susan, that the 'Society for Humanity' is right; and that Mankind //has// lost its own say in its future.' 'It never had any, really. It was always at the mercy of economic and sociological forces it did not understand--at the whims of climate, and the fortunes of war. Now the Machines understand them; and no one can stop them, since the Machines will deal with them as they are dealing with the Society, --having, as they do, the greatest weapons at their disposal, the absolute control of our economy.' 'How horrible!' 'Perhaps how wonderful. Think, that for all time, all conflicts are finally evitable. Only the Machines, from now on, are inevitable!'"**