That American Technopoly has now embraced the computer in the same hurried and mindless way it embraced medical technology is undeniable, was perhaps inevitable, and is certainly most unfortunate. This is not to say that the computer is a blight on the symbolic landscape; only that, like medical technology, it has usurped powers and enforced mind-sets that a fully attentive culture might have wished to deny it. Thus, an examination of the ideas embedded in computer technology is worth attempting. Others, of course, have done this, especially Joseph Weizenbaum in his great and indispensable book Computer Power and Human Reason. Weizenbaum, however, ran into some difficulties, as everyone else has, because of the “universality” of computers, meaning (a) that their uses are infinitely various, and (b) that computers are commonly integrated into the structure of other machines. It is, therefore, hard to isolate specific ideas promoted by computer technology. The computer, for example, is quite unlike the stethoscope, which has a limited function in a limited context. Except for safecrackers, who, I am told, use stethoscopes to hear the tumblers of locks click into place, stethoscopes are used only by doctors. But everyone uses or is used by computers, and for purposes that seem to know no boundaries. Putting aside such well-known functions as electronic filing, spreadsheets, and word-processing, one can make a fascinating list of the innovative, even bizarre, uses of computers. I have before me a report from The New York Times that tells us how computers are enabling aquatic designers to create giant water slides that mimic roller coasters and eight-foot-high artificial waves. 1 In my modest collection, I have another article about the uses of personal computers for making presentations at corporate board meetings. 2 Another tells of how computer graphics help jurors to remember testimony better. Gregory Mazares, president of the graphics unit of Litigation Sciences, is quoted as saying, “We’re a switched-on, tuned-in, visually oriented society, and jurors tend to believe what they see. This technology keeps the jury’s attention by simplifying the material and by giving them little bursts of information.” 3 While Mr. Mazares is helping switched-on people to remember things, Morton David, chief executive officer of Franklin Computer, is helping them find any word in the Bible with lightning speed by producing electronic Bibles. (The word “lightning,” by the way, appears forty-two times in the New International version and eight times in the King James version. Were you so inclined, you could discover this for yourself in a matter of seconds.) This fact so dominates Mr. David’s imagination that he is quoted as saying, “Our technology may have made a change as momentous as the Gutenberg invention of movable type.” 4 And then there is an article that reports a computer’s use to make investment decisions, which helps you, among other things, to create “what-if” scenarios, although with how much accuracy we are not told. 5 In Technology Review, we find a description of how computers are used to help the police locate the addresses of callers in distress; a prophecy is made that in time police officers will have so much instantly available information about any caller that they will know how seriously to regard the caller’s appeal for help.
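The kind of lookup Mr. David sells takes only a few lines of code. What follows is a minimal sketch, assuming a plain-text edition saved locally under the hypothetical name bible.txt; it reports whatever counts that particular file happens to contain, not the figures quoted above.

# A minimal word-count sketch, assuming a plain-text edition saved locally
# as "bible.txt" (a hypothetical file, not any particular publisher's product).
import re
from collections import Counter

def word_counts(path):
    """Count occurrences of every word in a plain-text file."""
    with open(path, encoding="utf-8") as f:
        text = f.read().lower()
    # Treat letters and internal apostrophes as word characters.
    return Counter(re.findall(r"[a-z']+", text))

counts = word_counts("bible.txt")
print(counts["lightning"])  # occurrences of "lightning" in whatever edition you supply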
One may well wonder if Charles Babbage had any of this in mind when he announced in 1822 (only six years after the appearance of Laënnec’s stethoscope) that he had invented a machine capable of performing simple arithmetical calculations. Perhaps he did, for he never finished his invention and started work on a more ambitious machine, capable of doing more complex tasks. He abandoned that as well, and in 1833 put aside his calculator project completely in favor of a programmable machine that became the forerunner of the modern computer. His first such machine, which he characteristically never finished, was to be controlled by punch cards adapted from devices French weavers used to control thread sequences in their looms. Babbage kept improving his programmable machine over the next thirty-seven years, each design being more complex than the last. 6 At some point, he realized that the mechanization of numerical operations gave him the means to manipulate non-numerical symbols. It is not farfetched to say that Babbage’s insight was comparable to the discovery by the Greeks in the third century B.C. of the principle of alphabetization—that is, the realization that the symbols of the alphabet could be separated from their phonetic function and used as a system for the classification, storage, and retrieval of information. In any case, armed with his insight, Babbage was able to speculate about the possibility of designing “intelligent” information machinery, though the mechanical technology of his time was inadequate to allow the fulfillment of his ideas. The computer as we know it today had to await a variety of further discoveries and inventions, including the telegraph, the telephone, and the application of Boolean algebra to relay-based circuitry, resulting in Claude Shannon’s creation of digital logic circuitry. Today, when the word “computer” is used without a modifier before it, it normally refers to some version of the machine invented by John von Neumann in the 1940s. Before that, the word “computer” referred to a person (similarly to the early use of the word “typewriter”) who performed some kind of mechanical calculation. As calculation shifted from people to machines, so did the word, especially because of the power of von Neumann’s machine. Certainly, after the invention of the digital computer, it was abundantly clear that the computer was capable of performing functions that could in some sense be called “intelligent.” In 1936, the great English mathematician Alan Turing showed that it was possible to build a machine that would, for many practical purposes, behave like a problem-solving human being. Turing claimed that he would call a machine “intelligent” if, through typed messages, it could exchange thoughts with a human being—that is, hold up its end of a conversation. In the early days of MIT’s Artificial Intelligence Laboratory, Joseph Weizenbaum wrote a program called ELIZA, which showed how easy it was to meet Turing’s test for intelligence. When asked a question with a proper noun in it, ELIZA’s program could respond with “Why are you interested in,” followed by the proper noun and a question mark. That is, it could invert statements and seek more information about one of the nouns in the statement. Thus, ELIZA acted much like a Rogerian psychologist, or at least a friendly and inexpensive therapist. Some people who used ELIZA refused to believe that they were conversing with a mere machine.
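A minimal sketch suggests how little machinery such a conversation requires. This is not Weizenbaum’s program: the patterns, pronoun reflections, and stock reply below are invented for illustration, but they perform the inversion the paragraph describes.

import re

# Pronoun reflection so an echoed phrase reads naturally ("my" -> "your", etc.).
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you", "mine": "yours"}

def reflect(phrase):
    return " ".join(REFLECTIONS.get(word, word) for word in phrase.split())

# (pattern, response template) pairs -- illustrative rules, not Weizenbaum's.
RULES = [
    (r"i am (.+)", "Why are you {0}?"),
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r".*\bmy (\w+)\b.*", "Why are you interested in your {0}?"),
]

def respond(statement):
    """Invert a statement and ask for more information about it."""
    s = statement.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, s)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # stock reply when no rule matches

print(respond("I am worried about my mother."))  # Why are you worried about your mother?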
Having, in effect, created a Turing machine, Weizenbaum eventually pulled the program off the computer network and was stimulated to write Computer Power and Human Reason , in which, among other things, he raised questions about the research programs of those working in artificial intelligence; the assumption that whatever a computer can do, it should do; and the effects of computer technology on the way people construe the world—that is, the ideology of the computer, to which I now turn. The most comprehensive idea conveyed by the computer is suggested by the title of J. David Bolter’s book, Turing’s Man . His title is a metaphor, of course, similar to what would be suggested by saying that from the sixteenth century until recently we were “Gutenberg’s Men.” Although Bolter’s main practical interest in the computer is in its function as a new kind of book, he argues that it is the dominant metaphor of our age; it defines our age by suggesting a new relationship to information, to work, to power, and to nature itself. That relationship can best be described by saying that the computer redefines humans as “information processors” and nature itself as information to be processed. The fundamental metaphorical message of the computer, in short, is that we are machines—thinking machines, to be sure, but machines nonetheless. It is for this reason that the computer is the quintessential, incomparable, near-perfect machine for Technopoly. It subordinates the claims of our nature, our biology, our emotions, our spirituality. The computer claims sovereignty over the whole range of human experience, and supports its claim by showing that it “thinks” better than we can. Indeed, in his almost hysterical enthusiasm for artificial intelligence, Marvin Minsky has been quoted as saying that the thinking power of silicon “brains” will be so formidable that “If we are lucky, they will keep us as pets.” 7 An even giddier remark, although more dangerous, was offered by John McCarthy, the inventor of the term “artificial intelligence.” McCarthy claims that “even machines as simple as thermostats can be said to have beliefs.” To the obvious question, posed by the philosopher John Searle, “What beliefs does your thermostat have?,” McCarthy replied, “My thermostat has three beliefs—it’s too hot in here, it’s too cold in here, and it’s just right in here.” 8 What is significant about this response is that it has redefined the meaning of the word “belief.” The remark rejects the view that humans have internal states of mind that are the foundation of belief and argues instead that “belief” means only what someone or something does. The remark also implies that simulating an idea is synonymous with duplicating the idea. And, most important, the remark rejects the idea that mind is a biological phenomenon. In other words, what we have here is a case of metaphor gone mad. From the proposition that humans are in some respects like machines, we move to the proposition that humans are little else but machines and, finally, that human beings are machines. And then, inevitably, as McCarthy’s remark suggests, to the proposition that machines are human beings. It follows that machines can be made that duplicate human intelligence, and thus research in the field known as artificial intelligence was inevitable. What is most significant about this line of thinking is the dangerous reductionism it represents. Human intelligence, as Weizenbaum has tried energetically to remind everyone, is not transferable. 
The plain fact is that humans have a unique, biologically rooted, intangible mental life which in some limited respects can be simulated by a machine but can never be duplicated. Machines cannot feel and, just as important, cannot understand . ELIZA can ask, “Why are you worried about your mother?,” which might be exactly the question a therapist would ask. But the machine does not know what the question means or even that the question means. (Of course, there may be some therapists who do not know what the question means either, who ask it routinely, ritualistically, inattentively. In that case we may say they are acting like a machine.) It is meaning, not utterance, that makes mind unique. I use “meaning” here to refer to something more than the result of putting together symbols the denotations of which are commonly shared by at least two people. As I understand it, meaning also includes those things we call feelings, experiences, and sensations that do not have to be, and sometimes cannot be, put into symbols. They “mean” nonetheless. Without concrete symbols, a computer is merely a pile of junk. Although the quest for a machine that duplicates mind has ancient roots, and although digital logic circuitry has given that quest a scientific structure, artificial intelligence does not and cannot lead to a meaning-making, understanding, and feeling creature, which is what a human being is. All of this may seem obvious enough, but the metaphor of the machine as human (or the human as machine) is sufficiently powerful to have made serious inroads in everyday language. People now commonly speak of “programming” or “deprogramming” themselves. They speak of their brains as a piece of “hard wiring,” capable of “retrieving data,” and it has become common to think about thinking as a mere matter of processing and decoding. Perhaps the most chilling case of how deeply our language is absorbing the “machine as human” metaphor began on November 4, 1988, when the computers around the ARPANET network became sluggish, filled with extraneous data, and then clogged completely. The problem spread fairly quickly to six thousand computers across the United States and overseas. The early hypothesis was that a software program had attached itself to other programs, a situation which is called (in another human-machine metaphor) a “virus.” As it happened, the intruder was a self-contained program explicitly designed to disable computers, which is called a “worm.” But the technically incorrect term “virus” stuck, no doubt because of its familiarity and its human connections. As Raymond Gozzi, Jr., discovered in his analysis of how the mass media described the event, newspapers noted that the computers were “infected,” that the virus was “virulent” and “contagious,” that attempts were made to “quarantine” the infected computers, that attempts were also being made to “sterilize” the network, and that programmers hoped to develop a “vaccine” so that computers could be “inoculated” against new attacks. 9 This kind of language is not merely picturesque anthropomorphism. It reflects a profound shift in perception about the relationship of computers to humans. If computers can become ill, then they can become healthy. Once healthy, they can think clearly and make decisions. The computer, it is implied, has a will, has intentions, has reasons—which means that humans are relieved of responsibility for the computer’s decisions. 
Through a curious form of grammatical alchemy, the sentence “We use the computer to calculate” comes to mean “The computer calculates.” If a computer calculates, then it may decide to miscalculate or not calculate at all. That is what bank tellers mean when they tell you that they cannot say how much money is in your checking account because “the computers are down.” The implication, of course, is that no person at the bank is responsible. Computers make mistakes or get tired or become ill. Why blame people? We may call this line of thinking an “agentic shift,” a term I borrow from Stanley Milgram to name the process whereby humans transfer responsibility for an outcome from themselves to a more abstract agent. 10 When this happens, we have relinquished control, which in the case of the computer means that we may, without excessive remorse, pursue ill-advised or even inhuman goals because the computer can accomplish them or be imagined to accomplish them. Machines of various kinds will sometimes assume a human or, more likely, a superhuman aspect. Perhaps the most absurd case I know of is in a remark a student of mine once made on a sultry summer day in a room without air conditioning. On being told the thermometer read ninety-eight degrees Fahrenheit, he replied, “No wonder it’s so hot!” Nature was off the hook. If only the thermometers would behave themselves, we could be comfortable. But computers are far more “human” than thermometers or almost any other kind of technology. Unlike most machines, computers do no work; they direct work. They are, as Norbert Wiener said, the technology of “command and control” and have little value without something to control. This is why they are of such importance to bureaucracies. Naturally, bureaucrats can be expected to embrace a technology that helps to create the illusion that decisions are not under their control. Because of its seeming intelligence and impartiality, a computer has an almost magical tendency to direct attention away from the people in charge of bureaucratic functions and toward itself, as if the computer were the true source of authority. A bureaucrat armed with a computer is the unacknowledged legislator of our age, and a terrible burden to bear. We cannot dismiss the possibility that, if Adolf Eichmann had been able to say that it was not he but a battery of computers that directed the Jews to the appropriate crematoria, he might never have been asked to answer for his actions. Although (or perhaps because) I came to “administration” late in my academic career, I am constantly amazed at how obediently people accept explanations that begin with the words “The computer shows …” or “The computer has determined …” It is Technopoly’s equivalent of the sentence “It is God’s will,” and the effect is roughly the same. You will not be surprised to know that I rarely resort to such humbug. But on occasion, when pressed to the wall, I have yielded. No one has as yet replied, “Garbage in, garbage out.” Their defenselessness has something Kafkaesque about it. In The Trial , Josef K. is charged with a crime—of what nature, and by whom the charge is made, he does not know. The computer turns too many of us into Josef Ks. It often functions as a kind of impersonal accuser which does not reveal, and is not required to reveal, the sources of the judgments made against us. It is apparently sufficient that the computer has pronounced. 
Who has put the data in, for what purpose, for whose convenience, based on what assumptions are questions left unasked. This is the case not only in personal matters but in public decisions as well. Large institutions such as the Pentagon, the Internal Revenue Service, and multinational corporations tell us that their decisions are made on the basis of solutions generated by computers, and this is usually good enough to put our minds at ease or, rather, to sleep. In any case, it constrains us from making complaints or accusations. In part for this reason, the computer has strengthened bureaucratic institutions and suppressed the impulse toward significant social change. “The arrival of the Computer Revolution and the founding of the Computer Age have been announced many times,” Weizenbaum has written. “But if the triumph of a revolution is to be measured in terms of the social revision it entrained, then there has been no computer revolution.” 11 In automating the operation of political, social, and commercial enterprises, computers may or may not have made them more efficient but they have certainly diverted attention from the question whether or not such enterprises are necessary or how they might be improved. A university, a political party, a religious denomination, a judicial proceeding, even corporate board meetings are not improved by automating their operations. They are made more imposing, more technical, perhaps more authoritative, but defects in their assumptions, ideas, and theories will remain untouched. Computer technology, in other words, has not yet come close to the printing press in its power to generate radical and substantive social, political, and religious thought. If the press was, as David Riesman called it, “the gunpowder of the mind,” the computer, in its capacity to smooth over unsatisfactory institutions and ideas, is the talcum powder of the mind. I do not wish to go as far as Weizenbaum in saying that computers are merely ingenious devices to fulfill unimportant functions and that the computer revolution is an explosion of nonsense. Perhaps that judgment will be in need of amendment in the future, for the computer is a technology of a thousand uses—the Proteus of machines, to use Seymour Papert’s phrase. One must note, for example, the use of computer-generated images in the phenomenon known as Virtual Reality. Putting on a set of miniature goggle-mounted screens, one may block out the real world and move through a simulated three-dimensional world which changes its components with every movement of one’s head. That Timothy Leary is an enthusiastic proponent of Virtual Reality does not suggest that there is a constructive future for this device. But who knows? Perhaps, for those who can no longer cope with the real world, Virtual Reality will provide better therapy than ELIZA. What is clear is that, to date, computer technology has served to strengthen Technopoly’s hold, to make people believe that technological innovation is synonymous with human progress. And it has done so by advancing several interconnected ideas. It has, as already noted, amplified beyond all reason the metaphor of machines as humans and humans as machines. I do not claim, by the way, that computer technology originated this metaphor. 
One can detect it in medicine, too: doctors and patients have come to believe that, like a machine, a human being is made up of parts which when defective can be replaced by mechanical parts that function as the original did without impairing or even affecting any other part of the machine. Of course, to some degree that assumption works, but since a human being is in fact not a machine but a biological organism all of whose organs are interrelated and profoundly affected by mental states, the human-as-machine metaphor has serious medical limitations and can have devastating effects. Something similar may be said of the mechanistic metaphor when applied to workers. Modern industrial techniques are made possible by the idea that a machine is made up of isolatable and interchangeable parts. But in organizing factories so that workers are also conceived of as isolatable and interchangeable parts, industry has engendered deep alienation and bitterness. This was the point of Charlie Chaplin’s Modern Times , in which he tried to show the psychic damage of the metaphor carried too far. But because the computer “thinks” rather than works, its power to energize mechanistic metaphors is unparalleled and of enormous value to Technopoly, which depends on our believing that we are at our best when acting like machines, and that in significant ways machines may be trusted to act as our surrogates. Among the implications of these beliefs is a loss of confidence in human judgment and subjectivity. We have devalued the singular human capacity to see things whole in all their psychic, emotional and moral dimensions, and we have replaced this with faith in the powers of technical calculation. Because of what computers commonly do, they place an inordinate emphasis on the technical processes of communication and offer very little in the way of substance. With the exception of the electric light, there never has been a technology that better exemplifies Marshall McLuhan’s aphorism “The medium is the message.” The computer is almost all process. There are, for example, no “great computerers,” as there are great writers, painters, or musicians. There are “great programs” and “great programmers,” but their greatness lies in their ingenuity either in simulating a human function or in creating new possibilities of calculation, speed, and volume. 12 Of course, if J. David Bolter is right, it is possible that in the future computers will emerge as a new kind of book, expanding and enriching the tradition of writing technologies. 13 Since printing created new forms of literature when it replaced the handwritten manuscript, it is possible that electronic writing will do the same. But for the moment, computer technology functions more as a new mode of transportation than as a new means of substantive communication. It moves information—lots of it, fast, and mostly in a calculating mode. The computer, in fact, makes possible the fulfillment of Descartes’ dream of the mathematization of the world. Computers make it easy to convert facts into statistics and to translate problems into equations. And whereas this can be useful (as when the process reveals a pattern that would otherwise go unnoticed), it is diversionary and dangerous when applied indiscriminately to human affairs. So is the computer’s emphasis on speed and especially its capacity to generate and store unprecedented quantities of information. In specialized contexts, the value of calculation, speed, and voluminous information may go uncontested. 
But the “message” of computer technology is comprehensive and domineering. The computer argues, to put it baldly, that the most serious problems confronting us at both personal and public levels require technical solutions through fast access to information otherwise unavailable. I would argue that this is, on the face of it, nonsense. Our most serious problems are not technical, nor do they arise from inadequate information. If a nuclear catastrophe occurs, it shall not be because of inadequate information. Where people are dying of starvation, it does not occur because of inadequate information. If families break up, children are mistreated, crime terrorizes a city, education is impotent, it does not happen because of inadequate information. Mathematical equations, instantaneous communication, and vast quantities of information have nothing whatever to do with any of these problems. And the computer is useless in addressing them. And yet, because of its “universality,” the computer compels respect, even devotion, and argues for a comprehensive role in all fields of human activity. Those who insist that it is foolish to deny the computer vast sovereignty are singularly devoid of what Paul Goodman once called “technological modesty”—that is, having a sense of the whole and not claiming or obtruding more than a particular function warrants. Norbert Wiener warned about lack of modesty when he remarked that, if digital computers had been in common use before the atomic bomb was invented, people would have said that the bomb could not have been invented without computers. But it was. And it is important to remind ourselves of how many things are quite possible to do without the use of computers. Seymour Papert, for example, wishes students to be epistemologists, to think critically, and to learn how to create knowledge. In his book Mindstorms, he gives the impression that his computer program known as LOGO now makes this possible. But good teachers have been doing this for centuries without the benefit of LOGO. I do not say that LOGO, when used properly by a skilled teacher, will not help, but I doubt that it can do better than pencil and paper, or speech itself, when used properly by a skilled teacher. When the Dallas Cowboys were consistently winning football championships, their success was attributed to the fact that computers were used to evaluate and select team members. During the past several years, when Dallas has been hard put to win more than a few games, not much has been said about the computers, perhaps because people have realized that computers have nothing to do with winning football games, and never did. One might say the same about writing lucid, economical, stylish prose, which has nothing to do with word-processors. Although my students don’t believe it, it is actually possible to write well without a processor and, I should say, to write poorly with one. Technological immodesty is always an acute danger in Technopoly, which encourages it. Technopoly also encourages insensitivity to what skills may be lost in the acquisition of new ones. It is important to remember what can be done without computers, and it is also important to remind ourselves of what may be lost when we do use them. I have before me an essay by Sir Bernard Lovell, founder of Britain’s Jodrell Bank Observatory, in which he claims that computers have stifled scientific creativity. 14
After writing of his awe at the ease with which computerized operations provide amazing details of distant galaxies, Sir Bernard expresses concern that “literal-minded, narrowly focused computerized research is proving antithetical to the free exercise of that happy faculty known as serendipity—that is, the knack of achieving favorable results more or less by chance.” He proceeds to give several examples of monumental but serendipitous discoveries, contends that there has been a dramatic cessation of such discoveries, and worries that computers are too narrow as filters of information and therefore may be antiserendipitous. He is, of course, not “against” computers, but is merely raising questions about their costs. Dr. Clay Forishee, the chief FAA scientist for human performance issues, did the same when he wondered whether the automated operation of commercial aircraft has not disabled pilots from creatively responding when something goes wrong. Robert Buley, flight-standards manager of Northwest Airlines, goes further. He is quoted as saying, “If we have human operators subordinated to technology then we’re going to lose creativity [in emergencies].” He is not “against” computers. He is worried about what we lose by using them. 15 M. Ethan Katsch, in his book The Electronic Media and the Transformation of Law, worries as well. He writes, “The replacement of print by computerized systems is promoted to the legal profession simply as a means to increase efficiency.” 16 But he goes on to say that, in fact, the almost unlimited capacity of computers to store and retrieve information threatens the authority of precedent, and he adds that the threat is completely unrecognized. As he notes, “a system of precedent is unnecessary when there are very few accessible cases, and unworkable when there are too many.” If this is true, or even partly true, what exactly does it mean? Will lawyers become incapable of choosing relevant precedents? Will judges be in constant confusion from “precedent overload”? We know that doctors who rely entirely on machinery have lost skill in making diagnoses based on observation. We may well wonder what other human skills and traditions are being lost by our immersion in a computer culture. Technopolists do not worry about such things. Those who do are called technological pessimists, Jeremiahs, and worse. I rather think they are imbued with technological modesty, like King Thamus.