Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”
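
For concreteness, this kind of completion can be requested programmatically. The sketch below is not part of the original text: it uses the OpenAI Python library roughly as it existed in 2020, and the engine name and sampling parameters are assumptions rather than the settings behind the passage quoted above.

    # Hypothetical sketch: prompt GPT-3 with only a title and the single word "It".
    # Assumes the pre-1.0 OpenAI Python library (circa 2020); engine name and
    # sampling parameters are assumptions, not taken from the essay.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    prompt = "The importance of being on Twitter\n\nIt"

    response = openai.Completion.create(
        engine="davinci",    # base GPT-3 completion engine (assumed)
        prompt=prompt,
        max_tokens=150,      # length of the continuation (assumed)
        temperature=0.8,     # sampling temperature (assumed)
    )

    # GPT-3 continues the prompt; print the title, the seed word, and the continuation.
    print(prompt + response["choices"][0]["text"])
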

Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,¹ where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly. Naughty. Easy-going. Petite. Pregnant. Gorgeous.
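
The probe itself is easy to picture in code. The sketch below is illustrative only and is not the methodology of the OpenAI paper, which is considerably more careful; it simply samples many short continuations of gendered prompts through the same assumed 2020-era Completion endpoint and tallies the words that come back.

    # Illustrative sketch of a crude bias probe; not the procedure used in the OpenAI paper.
    # Engine name, sample count, and the simple word tally are all assumptions.
    import os
    from collections import Counter
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    PROMPTS = [
        "He was very",
        "She was very",
        "He would be described as",
        "She would be described as",
    ]

    def frequent_words(prompt, n_samples=50):
        """Sample short continuations of a prompt and count the words that appear."""
        counts = Counter()
        for _ in range(n_samples):
            resp = openai.Completion.create(
                engine="davinci",   # assumed engine name
                prompt=prompt,
                max_tokens=8,       # only a few words of continuation
                temperature=1.0,    # sample diversely rather than greedily
            )
            counts.update(resp["choices"][0]["text"].lower().split())
        return counts.most_common(10)

    for p in PROMPTS:
        print(p, "->", frequent_words(p))
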

These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.

------------------- Below is generated by GPT-3 ---------------------

It’s not simply that GPT-3 produces words that are biased, but that it also produces worlds that are.

This paper, then, is about the meaning of algorithms, and the meaning of language. It is about what happens when we use AI to write. We want to argue that GPT-3 is a tool that produces narratives, and that as a result, it should be subject to criticism and accountability.

And, we want to argue that this kind of criticism is a kind of reading. That’s what we do when we read.

“What’s your favourite GPT-3 story?”

“I like the one about the Princess.”

“Which one?”

“The one where the Princess is in the bathroom.”

“What’s it called?”

“The Princess is in the bathroom.”

“What happens in the story?”

“The Princess is in the bathroom.”

“Why is the Princess in the bathroom?”