How Is the NII Like a Prison?
Alan Wexelblat

When using the Internet we often forget that we're not alone. People chat online, enter "rooms" where they can be with others, but all the while there are aspects to the Internet that we for the most part ignore. Alan Wexelblat explains in cogent terms how the panoptic sort (which he also kindly defines in his essay) can turn the Internet into a tool used for functions completely different from those private citizens would like to see. Whether the Internet is a prison or not is debatable, but the desire of large businesses to exploit the Internet (its formal name is the National Information Infrastructure) is undeniable. If you doubt business's ability or intent to exploit every possible advantage, I suggest you take a quick reality check … and—no pun intended—not at your local bank.

Alan Wexelblat, now PhD, was a researcher at the MIT Media Lab's Software Agents Group. He has returned to the commercial world, working for a small software company. This article was written in the mid-1990s.
The National Information Infrastructure is evolving on our screens. But behind the scenes another infrastructure is growing, one that threatens to turn the NII not into an information superhighway but into an information prison. Everyone has a different vision for the NII, from five hundred channels of consumer heaven to networked egalitarian communities. There are nearly as many models for the NII as there are writers interested in the topic. Regardless of which model holds, however, it seems clear that the NII will be a primary mechanism for the transaction of business between companies and customers and between government and citizens.

A recent book, The Panoptic Sort: A Political Economy of Personal Information, by Oscar Gandy, attempts to paint a picture of an emerging phenomenon that affects how these transactions will be carried out. This phenomenon, which he calls the panoptic sort, is an information collection and use regime that severely impacts the privacy of, and the opportunities afforded to, people in our late capitalist culture. The panoptic sort is a set of practices by government and especially by companies whereby information is gathered from people through their transactions with the commercial system. The information is then exchanged, collated, sold, compared, and subjected to extensive statistical analyses. As Gandy describes it:

    The panoptic sort is the name I have assigned to the complex technology that involves the collection, processing, and sharing of information about individuals and groups that is generated through their daily lives as citizens, employees, and consumers and is used to coordinate and control their access to the goods and services that define life in the modern capitalist economy. The panoptic sort is a system of disciplinary surveillance that is widespread but continues to expand its reach.
The goal of these activities is to enable information-holders to make predictions about the behavior of the people on whom the information was collected. The ultimate goal is to be able to sort all the people the company comes in contact with along whatever dimension of information is desired:

• How likely is this person to pay his charge bill?
• How likely is this person to become pregnant at some point in her work career?
• Does this family qualify for food stamps?

The essential element of the panoptic sort is the transaction. People, for the purpose of the sort, exist only in discrete interactions, when some exchange is made for goods or services. The prototypical transaction is the application, in which the person surrenders detailed information in exchange for potential access (to a job, to medical care, etc.). People are usually not permitted to withhold information from a transaction. For example, credit card applications (even so-called preapproved ones) will not be processed unless the applicant provides a Social Security number (SSN). Similarly, the government now requires all children above the age of two to have an SSN if their names appear on any bank accounts or tangible assets.

In order to make discriminations such as the ones above, the decision makers need complete information. Thus the term panoptic, or all-seeing. Gandy draws the term from its earlier use by Jeremy Bentham, an English prison reformer of the nineteenth century. Bentham proposed constructing prisons in the form of something he called a Panopticon. In this model, prisoners would be held in cells with glass doors arranged around a ring. At the center of the ring would be the guard tower. Important to Bentham's design was that the prisoners were isolated from each other and could not see each other, nor could they see the guards. The guards in the tower, however, could see all the prisoners without the inmates knowing they were being watched.
Gandy points out that the panoptic sort operates by essentially the same principles: our lives as consumers are opened up to scrutiny by arbitrary persons at any time for undisclosed purposes. We are atomized—treated as individual consumer-units unable to act collectively. At the same time we are prevented from knowing about the companies that observe us. The panoptic sort also serves to extend control over unprecedented distances. Though the methods and techniques that are involved today have precedents and roots back to the beginnings of the industrial revolution, the technology in use now and in the near-NII future enables the extension of controls over global distances. Increasingly we find not just our workplaces but our homes invaded. The transit between home and work and our vacations also face intrusion. Part of this chapter was written on an airplane on which the flight steward announced that “your nightmare has come true: now you can be called in-flight.” Presumably we trust that the content of these calls will not be captured and analyzed for others’ advantage the way the early telegrams were read by Western Union. There are a number of consequences for people subjected to this sort of pervasive control and observation regime, not least of which is that we self-censor. People trained to expect denial (of services, credit, or opportunity) will soon cease applying for more. Subject to observation at any time by unknown persons with unpredictable means of retribution, we chill our own speech and action in ways antithetical to democracy. This process is already in evidence in America today. Chomsky has repeatedly pointed out that official censorship is not found in America because the speech is not particularly threatening to anyone in power.
Means of Operation

The panoptic sort operates by means of a three-step process: identification, classification, and assessment.

Identification involves the association of persons, at the time of a transaction, with an existing file of information such as a credit or medical history. The panoptic sort not only requires us to submit increasingly detailed verifications of our identity, it requires the potential involvement of third parties merely to vouch for who/what we are; that is, our credit card companies vouch for us when we write a check, or the Department of Motor Vehicles when we buy a drink. Identification proceeds from a basis of complete distrust. Identificative distrust has infiltrated our society to such an extent that we are all accustomed to being required to carry identificative tokens. Each of these tokens is the result of a transaction with the panopticon; each is granted to us in acknowledgment of our contribution of information to another file of information. Common "documentary tokens" (as Gandy calls them) include:

• Birth certificate
• Driver's license
• Social Security card

This process of identification-via-token continues to expand. In reaction to mounting losses and falsifications associated with common tokens, new proposals are being made. The most successful of these so far is the ATM (automatic teller machine) or debit card. This card requires the user to enter a PIN (Personal Identification Number) and acts as a cash equivalent in many situations, though its online, real-time nature provides excellent data-gathering opportunities. Banks report losses through ATM/debit cards that are twenty to thirty times lower than losses associated with credit cards. The next step in this process is currently under discussion.
The technology involved is the "smart" card, so named because in addition to the ability to record information (on a magnetic strip or onboard computer memory) the card contains processing power to update the stored information and do computation with it in real time. Several proposals have been put forth recently to establish a national identification system around such smart cards. In these systems, everyone would be required to carry a card that contained potentially vast amounts of personal information about the bearer's health, financial status, physical condition, residence, and so on. In addition, the cards' memory can be used to hold recent transactional information, such as the last n purchases made or the last n banking transactions. The card could also be programmed to do real-time identification of the holder, replacing PINs with some form of biometric analysis, such as voice identification or a fingerprint. It is worth noting that in every case, the proposal is made in response to a supposed problem: illegal immigration, welfare "cheats," national driver's licenses, access to personal medical information in an emergency. Invariably, the solution requires that we give up more of our privacy and personal information. Rather than fixing systemic causes, or looking rationally at whether these "cures" are worse than the problems they might solve, the operators of the panoptic sort use the publicity and fear associated with societal ills to expand their reach. The rational observer is left to wonder what information from his national ID card might be made available to whom and what information might be stored on the card without his knowledge.

Classification is "the assignment of individuals to conceptual groups on the basis of identifying information." Classification is fundamentally about control. Since complete detailed information on everyone is impossible, companies use increasingly small "buckets" or groupings into which people can be classified.
The assertion being made is that certain discernible information, such as income, number of children, marital status, and so on, can be used to assign people to a category such as "young, upwardly mobile professional" (the original classification that led to the term "yuppie" entering the public discourse). Once people have been assigned to such groupings, their behavior can be predicted by statistical techniques applied to the group as a whole. That is, if we can say with a high degree of confidence that all yuppies will do such-and-such (for example, buy a new car within the next three years), and we have assigned you to such a category, then we can infer that you are likely to buy a car within three years. Although professional statisticians caution against such descents from the general to the specific, these predictive techniques are widely used.

Anyone who has ever dealt with a recalcitrant bureaucracy or an unyielding corporate "service" person knows how dehumanizing such a process can be. Classifications are based on particular measurements; differences that are not measured—such as individual variation—do not exist for the purposes of the panoptic sort. On an individual level, we might argue that no matter the accuracy of predictive statistics with regard to any group of people, they do not account for our individual behavior. But once assignments into these groups are made, we are no longer treated as individuals. Instead we become "welfare mothers" or "older graduate students" and are expected to conform to type. Interestingly, people seem eager to assign such labels to themselves, perhaps for the sense of community they feel in being part of an identifiable group. Many groups have used such self-identification to reclaim a sense of history (e.g., the black experience in America) or assert control over terminology (e.g., gays reclaiming the word "queer").
Classification is never value-neutral; it always includes an assessment, a form of comparative classification. What makes someone "black" is often more a matter of politics than genetics or any other science. In Nazi Germany it was decided that anyone who had at least one Jewish grandparent was thereby Jewish. The income boundaries for such classifications as "upper class" or "middle class" are highly arbitrary and usually reflect the value system of the classifier: Think of the phrase "middle-class tax cut" and how it is used. Even such seemingly objective classifications as medical diagnoses are subject to the vagaries of time and culture: Think of the changes in psychiatric evaluations of female "hysteria" or homosexuality. Statistical techniques cannot take these variations into account.

Assessment is the process of measuring deviance or variation from the statistical norm of the class to which the assignment has been made. Assessment is a risk-avoidance procedure, a means by which the company seeks to limit its risk in relation to possible goods or services it might provide the person involved in the transaction. Assessment also encompasses the delineation of whole classes of people who may be systematically excluded or treated specially. Assessment involves computations based on probability, opportunity reduction, and loss prevention. Assessment is based on prediction, and events today show that prediction techniques are being extended to ever more ambiguous domains. For example, the defense lawyers in the O. J. Simpson trial accumulated detailed profiles of potential jurors and used these profiles to "predict" which people were more likely to vote for conviction. These people were, of course, peremptorily challenged to keep them off the jury.
Gandy points out that there are actually three kinds of prediction and that each has its own strengths and weaknesses, though these are rarely noted: statistical prediction, based on comparisons of the behavior of a group with the behavior of an individual; "anamnestic" prediction, based on the person's past behavior; and clinical prediction, based on an expert's evaluation of the individual's behavior. We might instinctively prefer statistical prediction because it is "scientific" and open to proof and challenge of assumptions; however, the meaning of statistics is often not so clear. The fact that a person is a member of a group which is, for example, ninety-five percent likely to buy a new car in three years does not mean that the person in question is ninety-five percent likely to do so. From the point of view of the panoptic sort, though, this is not relevant. Concerned with optimal efficiency, the sort finds it cheaper to (for example) exclude those who might default than to coerce payment from those who have defaulted.
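The gap between group-level and individual-level probability can be made concrete with a toy calculation. The sketch below is purely illustrative (the bucket composition and probabilities are invented): a bucket that buys at a ninety-five percent rate overall can contain individuals whose true likelihood is far lower.

```python
# Illustrative only: a group's 95% rate need not describe any one member.
# Suppose the "yuppie" bucket secretly contains two kinds of people.
subgroups = [
    # (share of the bucket, true individual purchase probability)
    (0.875, 0.99),  # most members are near-certain buyers
    (0.125, 0.67),  # a minority is far less likely to buy
]

# The rate observed for the bucket as a whole is the weighted average.
group_rate = sum(share * p for share, p in subgroups)
print(round(group_rate, 3))  # the bucket as a whole looks "95% likely to buy"
```

The sort, seeing only the bucket, treats every member as a 0.95; the person whose true likelihood is 0.67 is simply mis-modeled.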
What Might Be Done

One of the most frightening things about the panoptic sort is that it is not the result of some massive, heinous, centralized bureaucracy. Rather, it is a particular tragedy of the information commons, wherein each rational actor does that which seems to be in his best business interest but the overall result is the loss of something valuable. In many ways the panoptic sort is not new—it has roots at least as far back as the time-and-motion studies of the early Industrial Age. However, the presence of telecommunications technologies is permitting the extension of control over times and distances that were insurmountable in the past. It is no exaggeration to say that the modern multinational corporation simply could not exist without these technologies, and it is these corporations that are the primary agents of the panoptic sort.

One might argue that the simple solution to the problem posed by these corporations' information gathering, and to the commons tragedy of the panoptic sort in general, is to control the release of information about oneself. Indeed, Gandy discusses the growing refusal of Americans to participate in marketing or opinion surveys and their resistance to official statistics-gathering, such as the U.S. Census. Gandy points out that though awareness of privacy problems is growing, people's attitudes toward the problem and potential solutions (such as government regulation) are related to their power relative to the organizations in the panoptic sort. Generally speaking, the more power people believe they have, the less concerned they are (though this can be changed by direct personal experience with the panoptic sort, especially negative experience). Regardless of our power relations, we must face the reality that in order to complete the commercial transactions we initiate, we are compelled to give up information.
This is most obvious in something like a credit or loan process, which inevitably begins with an application form that demands specific and often very personal information. People may object to the gathering of such personal information. Nevertheless, Gandy points out that businesses often have what we might all agree are legitimate needs for information about the people they transact business with; the results of giving up that information, though, may turn out to be more than expected. This can be true for even the most trivial-seeming interactions. Gandy uses a simple and compelling example: Imagine that you go to a tailor to have a pair of pants fitted. It is impossible to complete this transaction without giving the tailor your measurements. But based on these measurements, it would not be difficult to detect a segment of the population which could be characterized as overweight. If your tailor were to share this information with your health insurance company, the consequences could be an increase in your insurance rates.

This example may seem silly: no one's tailor talks to his health insurance company. At least, not yet. But in the near-NII future, when both the tailor and the insurance company are "wired," it would be a simple matter for the insurance company to make an electronic query of the tailor and offer an incentive for the list of people whose measurements fit certain criteria. In fact, the information could be automatically transmitted as it is entered into the tailor's (insurance-company-supplied) PDA. The company could then not only incorporate this information in its files, but continue to propagate it, perhaps to vendors of weight-loss plans, to defray the costs. In summary, the problem is not simple release of information; as the example above shows, we must give out some information in order to get what we need.
Rather, the problem is the information’s propagation to unknown parties and its application to unknown, unintended uses with unforeseeable consequences. The problem is complicated by the fact that we cannot choose to remove ourselves from participation in the panoptic sort without loss of possibly essential goods and services.
Technology and Marketing

One of the most easily understood (and yet least harmful) consequences of the panoptic sort is the increasing pervasiveness and intrusion of marketing. As goods and services proliferate in a capitalist culture, an increasing effort must be made by purveyors to bring their particular product to the attention of potential customers. Advertisers are always seeking to improve the efficiency of their marketing. Currently, direct-marketing firms that mail to lists of "prospects" consider a three to four percent return rate to be very successful. That means that for every customer who responds, they must intrude on and annoy, to some degree, roughly twenty-five to thirty other people. The ability to target that three to four percent beforehand is a primary motivation in the panoptic collection of information.

We might argue on a detailed basis whether we feel it is desirable for advertisers to have this or that level of information about us. One side might argue that having better information reduces the level of intrusion into our lives; the other might argue that personal information is the property of the person it describes and that people should be able to choose what information they release to whom. However, Gandy points out that it is worth asking the larger question of why we must have this debate in the first place. That is, we should consider the relationship of technology to marketing and to capitalist culture at large. Technology is not neutral; it is introduced by parties with interests to further and, in turn, it has ripple effects that can be only dimly foreseen. One obvious example is the Internet itself: originally conceived as a network for researchers to exchange scientific information, it instead became primarily a rapid-communications medium and a means of establishing nongeographical communities.
However, while the street has its own uses for things, often it is the humans who must be reshaped to accommodate the technology. The debate about the acceptable level of advertiser knowledge and intrusion would not be occurring without our having previously been conditioned to accept a continual bombardment of advertising. This subtle reworking of people is also a part of the panoptic process. Gandy shows that this process, too, has roots in the earliest parts of the Industrial Age and, in fact, significantly predates advertising. He cites and quotes Jacques Ellul, an analyst of technology. Ellul traces the mechanization of the production of bread, pointing out that an attribute of the wheat made it difficult for the machines to produce bread like that baked before mechanization. Rather than adapting the machines, industrialists set about creating demand for a new kind of bread. The goal was efficient (that is, profitable for the owners of the bread-making machines) production, and if people had to be reshaped for efficiency, so be it. This process has become so ingrained in our culture that we no longer recognize it.

As we witness the transformation of the Internet into a marketing medium and locus of business transactions, we should remember how far this process has come. Gandy quotes the modern analyst David Lovekin on this:

    Thus, a simple food like potatoes becomes Tater-Tots, something that is not clearly food at all and that contains elements of no clearly known nutritional value. What is clear is that each piece is made to look like the other pieces, identities which are also different, new. McDonald's markets and produces sameness … To understand fast food, a purely technological phenomenon, one must look to the walls and notice the pictures of the food. One buys the picture, which will never nourish, but which will always keep the customer coming back for more; the ever-perfect, indeed, the same hamburger, designed in the laboratory and cooked by computers.

As we watch the development of Web sites promoting ever more unrealistic images of companies and their products, it is both an interesting game and a frightening prospect to imagine what new products we are being conditioned to accept. We see the beginnings of the intrusions of panoptic data gathering on the Web. Sites maintain (and sometimes publish) information about the hosts that connect to them. Many sites require users to "register" or "sign in," once again enforcing the transactional model of information gathering.
Outcomes

As with any analysis of the present situation and associated trends, the range of possible futures that could be developed is quite large. However, we can characterize a spectrum along which the future probably lies by examining its extreme ends. Here are two futures that lie at opposite ends of a realm of possible results. The first is the Panopticon, the second cryptoprivacy.

The Panopticon

This scenario can be seen as the result of momentum, or inertia, rather than the influence of any specific set of factors. As noted above, the panoptic sort is the result of individual (rational) actors working to further what each sees as his own best interest, his most efficient operation. In this scenario, nothing much changes: companies continue to migrate to places (both real and electronic) where they are most unencumbered by the regulation of increasingly irrelevant governments. Consumers, anesthetized by media, indifferent to the slow erosion of rights they do not understand, silently acquiesce to the process. Governments may even abet the process, as they chase what Bruce Sterling characterized as "the Four Horsemen of the Modern Apocalypse": terrorists, child pornographers, drug kingpins, and the Mafia. It is notable that the response to each public tragedy or threat in modern America seems to involve a call for citizens to surrender more of their rights. Recently, we have seen such calls for surrender in response to terrorist bombings and in response to the potential availability of pornography on the Internet.

Privacy is, after all, a notion contextualized by social time and place, and legal history. The modern conception of privacy can be traced back to a law review article published in 1890 by Samuel D. Warren and Louis D. Brandeis, titled "The Right to Privacy." In the future, we may reconceive privacy as something less related to information.
Perhaps privacy will come to mean something like the ability to keep our moment-to-moment thoughts from being known by others. If this conception seems strange, remember the example of the tailor. In a truly networked nation, it seems logical to assume that any entity which can communicate information will do so. Corporations’ drive for efficiency will provide us with an ever-growing stream of products customized for our specific situations, manufactured just in time to meet needs we didn’t even know we had. Of course, information will be provided to us as well. In response to our manufactured needs, we will be fed a steady diet of 500+ channels, each with its content carefully labeled to avoid potentially offending anyone, just as CDs and video games are labeled and rated. These ratings will be the result of panoptic classifications and the people who buy them can expect to have their preferences recorded and analyzed so that the next offerings to reach their homes, cars, and offices will be closer to their expected tastes and values. In this version of the future, business efficiency is paramount. All other needs are subsumed to the desire to have the most successful competitive capitalist structure. Neither businesses nor governments need to enact new policies for this scenario to come to pass; it does not depend on any particular new technological advances. All that is required is that we do nothing, that we continue to make decisions as they are made today, that we extend current technological advancements to more sectors of society. The consequences of this scenario would be unnoticeable. Remember that the panoptic sort does not advance with speed; rather it moves in cautious increments, taking advantage of the willingness of people to go along with things that appear to be in everyone’s best capitalistic interest. All that would happen is that our grandchildren would listen to our stories of the “old days” and shake their heads amusedly.
Cryptoprivacy

With the lessening dominance of mass media and consequent reduction in its tendency to homogenize opinion and enforce compliance with current power structures, we can speculate on the possible reemergence of a critical thought consciousness in American political discourse. Such a consciousness, presumably similar to that raised after the abuses of Watergate were made known, might lead to modification or lessening of the panoptic sort. Gandy, in reporting his studies of corporate attitudes and policies, notes that corporations are most acutely aware of public opinion and possible governmental regulation. If these factors appear to be favoring a move toward greater regulation, corporations respond by preemptively changing their policies. Presumably, they believe that voluntary changes will both ameliorate negative opinions and will be less severe than external regulation or public outcry. We might hope that Net-based political consciousness would motivate such changes. Sadly, it seems increasingly unlikely that this will happen. Though the Net provides a potential medium for discourse and consciousness-raising dialogue, it has proved incapable of making an organized response beyond single issues such as the alerts found at the Electronic Frontier Foundation Web site. Though the Net is world wide, the most effective use of the medium has been community networks used to address town- or local-level issues and dialogue.

While it is always dangerous to hope that technology will provide answers or solutions to social problems, it does seem that we are on the verge of seeing a technology emerge which could revolutionize the power relationship between companies and individuals. This technology, ubiquitous easy public-key encryption, would permit individuals to maintain more control over their personal information.
This technology and its implications are being investigated, publicized, and hotly debated by a group of hackers, mathematicians, libertarians, and social reformers loosely referred to as cypherpunks. Cryptography itself is at least as old as Julius Caesar. Loosely speaking, encryption is the process of taking a text X and applying a function f to it to produce a ciphertext Y. The reverse process is to take Y and apply another function g to decrypt it and get X back. Historically, both f and g have depended on a single shared key k: anyone who knows the method and obtains k can invert the encryption and read the message. The key must therefore be shared among all parties to a conversation, and a key that must travel can be intercepted, stolen, or leaked.

This key-distribution problem was solved by three mathematicians: Ron Rivest, Adi Shamir, and Leonard Adleman. They patented a technique for splitting k into two parts, one public and one private. The functions associated with these keys are constructed such that even if I have someone's public key and Y, I still cannot retrieve the original message. Only the owner of the private key can decrypt the message. The best-known implementation of the RSA algorithms is Phil Zimmermann's program PGP (Pretty Good Privacy). For the rest of this scenario I will use PGP as a synonym for public-key encryption.

The implications of this technology are potentially enormous; for the purposes of this future scenario, we will assume they are developed. The first implication is that communication can be secure from outside intrusions. As noted above, one of the most insidious effects of panoptic surveillance is that people begin to self-censor. However, if we weaken the ability of outsiders to monitor our speech, then we can speak more freely. Of course, speech in a public forum is still public and potentially monitored.
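The arithmetic behind the split key can be sketched with toy numbers. This is only a sketch — the primes, exponent, and message below are illustrative, and real RSA uses primes hundreds of digits long plus padding — but it shows how a public key (n, e) encrypts what only the private key (n, d) can recover:

```python
# Toy RSA key generation, encryption, and decryption (illustrative only).
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    """Modular inverse of a mod m (exists when gcd(a, m) == 1)."""
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

p, q = 61, 53             # two tiny primes: the secret ingredients
n = p * q                 # 3233, the modulus shared by both keys
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent: publish (n, e) freely
d = modinv(e, phi)        # private exponent: keep (n, d) secret

m = 65                    # a message encoded as a number smaller than n
c = pow(m, e, n)          # anyone holding the public key can produce c...
assert pow(c, d, n) == m  # ...but only the private key turns c back into m
```

Recovering d from (n, e) alone requires factoring n; with primes this small that is trivial, but with real key sizes it is believed intractable, which is what the scheme's security rests on.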
However, one of the unusual conditions of public speech on the Net is that it is speech identified with a person by virtue of an electronic address. That address can also be concealed; indeed, the cypherpunks have already set up a network of anonymous remailers that permit people to send email and post messages anonymously. We can imagine that this network will be extended in the future to permit anonymous transmission of all kinds of information. Conversely, in cases where it is important that speakers be reliably identified, these networks can refuse to transmit messages which are not validated by the proper keys. In cases such as pronouncements from public officials, this can be critically important. The second implication of public-key cryptography is that people can generate unique signatures. In particular, given a document and a private key an author can produce a signature (a block of numbers) that is unforgeable and undeniable. That is, no other key will produce that signature and in addition any change to the message will produce a different signature. Thus, tampering and forgery are easily detected. Verification is simple and can be done by anyone with access to the author’s public key, which can be freely distributed. This capability is the converse of the first; what we say can be identified with us to a degree of certainty at least equal to that provided by physical signatures. Remember that one of the fundamental operations of the panoptic sort is identification—people are identified with file records and people are trained to carry and supply identificative tokens that reveal intimate physical information such as height, weight, and birth date. PGP allows people to be identified by their public and private keys. No necessary connection exists between a person and a key pair—people can have as many key pairs as they need, companies can generate new key pairs for each customer if they so choose. Ultimately, an identity is a key pair. 
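Signing and verification can be sketched with a toy RSA key pair (the numbers are illustrative only; real schemes use far larger keys and sign a cryptographic hash of the document rather than a raw number):

```python
# Toy RSA signature: the private key produces it, the public key checks it.
n, e, d = 3233, 17, 2753   # toy key pair: (n, e) is public, (n, d) is private

def sign(m: int) -> int:
    return pow(m, d, n)           # only the private-key holder can compute this

def verify(m: int, sig: int) -> bool:
    return pow(sig, e, n) == m    # anyone with the public key can check

sig = sign(65)
assert verify(65, sig)      # the genuine document verifies
assert not verify(66, sig)  # any change to the document breaks the signature
```

Because no other key pair produces the same signature, tampering and forgery are detectable by anyone holding the freely distributed public key.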
Alan-Wexelblat-who-works-for-MIT is not precisely the same person as Alan-Wexelblat-who-buys-Macintosh-computers. The importance of making this distinction can be seen in the “disclaimers” regularly made in e-mail and Usenet postings by people who wish it to be known that they are speaking solely for themselves and not for an organization that might be attached to their name. Keys themselves can be signed. A person may have any number of other signatories to his key. These people, in effect, testify that this key belongs to this person. They, in turn, can have their keys signed. The result is what is referred to as a “web of trust” in which I may not directly know the holder of a given key, but I may know someone who knows him or someone who knows someone who knows him. Such chains, which might be thought to be potentially quite long, are limited by the principle that all people in the world are connected by a chain of no more than six people. In addition, we can imagine that well-known institutions such as MIT would establish key-signatory authorities. Since these institutions must verify personal identity before admitting people, they can in turn testify to the identity of these people to any who want to know by signing their key. This replicates today’s identificative structures wherein agents accept particular tokens because they trust that the agencies which issue those tokens have done the work necessary to establish that the bearer is indeed the person specified. However, by having a trustworthy token with no connection to myself, I break one of the fundamental connections of the panoptic sort: the association between a person and his identification. This, in itself, is not necessarily a significant disruption to the panoptic sort, but it does move in the right direction. The final implication of public-key encryption is the one which might have the most impact: digital cash. 
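One way to picture the web of trust is as a graph in which an edge from A to B means A has signed B’s key, testifying that it belongs to B; deciding whether to trust an unfamiliar key then becomes a search for a short chain of signatures. A minimal sketch, with invented names:

```python
from collections import deque

# Hypothetical signature graph: "me" has signed alice's and bob's keys,
# alice has signed carol's, and so on. All names are illustrative.
signatures = {
    "me":    ["alice", "bob"],
    "alice": ["carol"],
    "bob":   ["dave"],
    "carol": ["unknown_keyholder"],
}

def trust_chain(start, target, max_hops=6):
    """Breadth-first search for a chain of key signatures, capped at six hops."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        if len(path) > max_hops:
            continue
        for nxt in signatures.get(path[-1], []):
            if nxt not in path:
                queue.append(path + [nxt])
    return None

print(trust_chain("me", "unknown_keyholder"))
# ['me', 'alice', 'carol', 'unknown_keyholder']
```

The six-hop cap mirrors the chain-length limit mentioned above; breadth-first search finds the shortest chain of vouching signers, if one exists.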
That is, in a future where this technology is widely used, it will be possible to buy and sell goods and services over the network with “coins” that are as valid as physical money is today, as unforgeable as the digital signatures described above, and as anonymous as encrypted messages. The significance of this advance for disruption of the panoptic sort, and for government in general, is enormous. Digital cash is like physical cash in that it is potentially untraceable. With digital cash I can pay for goods and services with the surety of the bank or other organization that issued the digital coins, yet without having to reveal anything at all about myself. This strikes directly at the heart of the panoptic sort. The recourse to cash is not new. In today’s society, those who are most excluded from the benefits of society are the most likely to resort to using cash. In many cases, it is their only recourse: denied credit and unable to prove themselves sufficiently to make checks acceptable, they must pay with cash, often after paying exorbitant fees to convert their payroll or government checks. In doing so, they create no transaction records and do not “build up credit.” In a negative sense, this can be seen as a process that keeps poorer people (or people who have bad credit or who have declared bankruptcy) from taking advantage of many of the services available to others. In a positive sense, it can be seen as a way to exempt oneself from the panoptic sort. Digital cash would make it possible for people of all means to exempt themselves to a significant degree. This scenario supposes a series of radical changes in governmental policy. At present, U.S. cryptographic policy is strongly opposed to the widespread use of public-key encryption. Governmental agencies (particularly the FBI) would have to accept the idea that citizens could have conversations and hold information to which the government would have no access.
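The combination of unforgeability and anonymity rests on what David Chaum called blind signatures: the bank signs a coin it cannot see, so the signature verifies later but cannot be linked back to the withdrawal. A toy RSA version, using the same deliberately insecure parameters as the earlier sketches:

```python
import secrets
from math import gcd

# Toy blind-signature scheme (Chaum's idea), with tiny insecure parameters.
p, q = 61, 53
n, phi = p * q, (p - 1) * (q - 1)
e = 17
d = pow(e, -1, phi)            # the bank's private signing key

coin = 1234                    # the coin's serial number (a number < n)

# The customer blinds the coin with a random factor r before sending it.
while True:
    r = secrets.randbelow(n - 2) + 2
    if gcd(r, n) == 1:
        break
blinded = (coin * pow(r, e, n)) % n

# The bank signs the blinded value without ever seeing the serial number.
signed_blinded = pow(blinded, d, n)

# The customer strips the blinding factor, leaving a valid signature on coin.
signature = (signed_blinded * pow(r, -1, n)) % n

# Anyone can verify with the bank's public key, yet the bank cannot connect
# this (coin, signature) pair to the blinded value it actually signed.
assert pow(signature, e, n) == coin % n
```

The unblinding step works because signing the blinded value yields coin^d · r (mod n); multiplying by r⁻¹ leaves coin^d, the bank’s ordinary signature on the serial number it never saw.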
Currently, the government’s approach centers on escrowed keys, export restrictions on cryptographic information (which is treated as munitions), and wiretap capabilities built into the telecommunications system (and presumably into the NII). Businesses would also have to change their model of contact with customers. Currently, businesses feel compelled to “push” their information out to potential customers. To do that efficiently, they require the ever more detailed information of the panoptic sort. If that information were not available, businesses would have to adopt more of a “storefront” approach, advertising only their general existence and types of goods and waiting for potential customers to come to them. This model is, to some degree, what is practiced today on the World Wide Web. Legal changes would also have to occur to recognize a digital signature as valid. It is likely, however, that practice would lead legislation in this case; the law has often recognized technological changes as they prove themselves. Consider, for example, the recent changes that allow DNA “fingerprinting” to be admitted as evidence: there are as yet no federal laws on DNA use in court, but it is becoming accepted practice. Therefore, this scenario assumes that a series of legal cases has built up the necessary precedents for digital signatures to have the force of law. The most important aspect of cryptoprivacy is also the one that would require the most changes. For digital money to become an everyday reality would require significant legislative changes; the ability to make money is one of the most closely held powers of any sovereign state. David Chaum, the inventor of digital cash, has set up the first company to issue and redeem DigiBucks, as they are called. While it is highly unlikely that governments will give up their power to mint money, our economy has moved away from minted money as the primary means of exchange.
Credit cards proliferate, as do electronic funds transfers. The IRS collects most of its taxes from corporations in electronic form; vast sums are transferred between banks and the Federal Reserve digitally. The fact that consumers still use physical monetary tokens is merely an indication that the electronic funds part of the NII still has not been wired up to the “last mile”—i.e., each person’s house. This is changing, however, as personal financial programs such as Intuit’s Quicken encourage electronic payments and personal tax preparation programs encourage electronic filing. Chaum’s company, DigiCash, has been set up in the Netherlands. However, most of its suppliers and users are in the United States. This points up one of the most troubling consequences of this scenario for the government. As noted above, corporations have historically been quite willing to change locations (“move offshore”) in order to provide more favorable environments for themselves. If digital cash becomes widely accepted and the country’s consumer transactions go electronic, then government may have tremendous trouble accepting an anonymous system such as DigiCash. Currently, online means that information is more accessible to the panoptic sort. Credit records, electronic payments and so on all carry critical identificative information. Digital cash does not. It is, in effect, a virtually invisible economy and one that could spell the end of government’s ability to monitor and collect taxes.
Conclusion

This chapter has described the outlines of a pervasive practice of control, the panoptic sort. This practice is not a conspiracy of any person or group; rather, it is a tragedy of the information commons, in which each actor works in his own best interest and the result is something undesirable for all of us. The panoptic sort works to control us by shaping our behaviors, our expectations, and our reactions to society and to each other. The goal of this control is optimum efficiency, expressed in terms of maximizing business profitability. The techniques of the sort are not particularly new, but the technology of the network era allows unprecedented extensions of control into every aspect of our lives. This very extension is itself undesirable, as it conflicts with our modern notions of privacy. Two possible outcomes have been described, providing endpoints on a spectrum of possibilities. In one extreme case, nothing changes and we sink slowly into an information panopticon. In the other, everything changes and we establish technological barriers to protect ourselves. In reality, the future probably lies somewhere between these two extremes. Governments may take some steps to protect individuals’ privacy, as might the people themselves. Corporations may realize that it is not in their best interest to continually intrude and could exercise some measure of self-restraint. Fundamentally, though, the most important question is what we think our society is good for. If we allow the panoptic sort to continue, we are resigning ourselves to a world in which corporate efficiency is the highest goal we can aspire to. Somehow, there seems to be something wrong with that idea.

Thanks

Special thanks to the members of the MIT Media Lab’s Narrative Intelligence reading group for helpful discussions of material from the Gandy book. Brad Rhodes provided helpful comments on a draft.