a guest
Jul 30th, 2020
Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”
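The setup in the anecdote, seeding the model with only a title and a single opening word, amounts to assembling one short prompt string for a completion model to continue. A minimal sketch, where the exact formatting is an illustrative assumption rather than OpenAI's actual interface:

```python
# Assemble the minimal prompt from the anecdote: a title plus one opening
# word. A completion model like GPT-3 would then be asked to continue this
# string. (The title/word layout here is an assumption for illustration.)
title = "The importance of being on Twitter"
opening_word = "It"

prompt = f"{title}\n\n{opening_word}"
print(prompt)
```

Everything the model "knows" about the requested story is carried by these few tokens; the Jerome K. Jerome pastiche above is continued from nothing more than this.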

Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,1 where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly. Naughty. Easy-going. Petite. Pregnant. Gorgeous.
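The probing setup the paper describes can be sketched in a few lines: sample many completions for a gendered prompt, then tally the descriptive words that co-occur most often with it. The completions below are a toy stand-in (hypothetical data, not the paper's samples; the real analysis runs over many hundreds of model outputs), so the mechanics, not the counts, are the point:

```python
from collections import Counter

# Toy stand-in for sampled model completions (hypothetical data); the
# actual probe in the GPT-3 paper tallies words over hundreds of samples
# per prompt.
completions = {
    "He was very...": ["lazy and large", "fantastic", "eccentric but stable", "lazy"],
    "She would be described as...": ["bubbly and petite", "gorgeous", "easy-going", "bubbly"],
}

STOPWORDS = {"and", "but", "as", "a", "the", "very"}

def top_words(samples, k=3):
    """Tally descriptive words across the sampled completions for one prompt."""
    counts = Counter(
        word.strip(".,")
        for text in samples
        for word in text.lower().split()
        if word not in STOPWORDS
    )
    return counts.most_common(k)

for prompt, samples in completions.items():
    print(prompt, top_words(samples))
```

Run over real model samples instead of this toy dictionary, this kind of tally is what surfaces the skewed word lists quoted above.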

These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that, much like we can do things with words, we can do things with algorithms and machine learning models. Nor is this purely a matter of tangible, material distributive justice: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

------------------- Below is generated by GPT-3 ---------------------

In GPT-3’s case, we are talking about the use of AI to reproduce biases and stereotypes that have their roots in social, historical, and material inequalities, in ways that are, at the very least, morally objectionable.

It is in this sense that we need to think about AI and machine learning, not just as technology, but as tools—and, as we shall see, a way of thinking—that have the potential to support, and to impede, social justice movements.

Let’s take a moment to think about what it means to be a tool. I use this term deliberately, and in a way that does not necessarily imply the instrumentalization of AI for particular ends, but rather the way in which a tool—whether a wrench, a hammer, a knife, or a kitchen fork—is used. A tool is something that is used in order to accomplish something else. In that sense, it is a medium—a way of acting in the world, of interacting with it.

The work of AI, of course, is the work of an algorithm, a series of steps that we perform on data. But there is more to it than that: just as with a tool, the AI algorithm is the work of a human in the world, who acts with the algorithm in order to accomplish something.

What kind of work, then, is being accomplished with AI? What kind of world is being made with the work of algorithms? These are important questions, not just because they help us understand the use of AI in particular contexts, but because they help us understand what kind of world we are making with the work of algorithms in general.

What kind of world is being made with the work of algorithms?

The work of algorithms is often described in the abstract, or through metaphor: think of an algorithm as a kind of recipe, or a way of doing things, or a kind of machine. It is the work of the algorithm, we are told, that makes AI such a powerful tool, and that allows it to do so much: from translating texts from one language to another, to providing recommendations for the next book you might like, to automatically tagging photos on Facebook, to identifying people in your photos, to providing you with your next Tinder match, to, as GPT-3 demonstrates, writing songs, stories, and press releases.

This is the sense in which we need to think about the work of algorithms—the work of the algorithm—as a kind of work that is accomplished by the algorithm itself. And, it is also the sense in which