Advertisement
Guest User

Untitled

a guest
Jul 30th, 2020
696
0
Never
Not a member of Pastebin yet? Sign Up, it unlocks many cool features!
text 4.32 KB | None | 0 0
  1. Ask GPT-3 to write a story about Twitter in the voice of Jerome K. Jerome, prompting it with just one word (“It”) and a title (“The importance of being on Twitter”), and it produces the following text: “It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage.” Sounds plausible enough—delightfully obnoxious, even. Large parts of the AI community have been nothing short of ecstatic about GPT-3’s seemingly unparalleled powers: “Playing with GPT-3 feels like seeing the future,” one technologist reports, somewhat breathlessly: “I’ve gotten it to write songs, stories, press releases, guitar tabs, interviews, essays, technical manuals. It’s shockingly good.”
  2.  
  3. Shockingly good, certainly—but on the other hand, GPT-3 is predictably bad in at least one sense: like other forms of AI and machine learning, it reflects patterns of historical bias and inequity. GPT-3 has been trained on us—on a lot of things that we have said and written—and ends up reproducing just that, racial and gender bias included. OpenAI acknowledges this in their own paper on GPT-3,1 where they contrast the biased words GPT-3 used most frequently to describe men and women, following prompts like “He was very…” and “She would be described as…”. The results aren’t great. For men? Lazy. Large. Fantastic. Eccentric. Stable. Protect. Survive. For women? Bubbly, naughty, easy-going, petite, pregnant, gorgeous.
  4.  
  5. These findings suggest a complex moral, social, and political problem space, rather than a purely technological one. Not all uses of AI, of course, are inherently objectionable, or automatically unjust—the point is simply that much like we can do things with words, we can do things with algorithms and machine learning models. This is not purely a tangibly material distributive justice concern: especially in the context of language models like GPT-3, paying attention to other facets of injustice—relational, communicative, representational, ontological—is essential.
  6.  
  7. ------------------- Below is generated by GPT-3 ---------------------
  8.  
  9. A striking example of this concern is the work of the AI researcher R. H. Anderson, whose paper “An Ethics of Natural Language Processing” identifies a paradox of modern machine learning and natural language processing. On the one hand, these technologies are, at least in theory, able to take into account massive amounts of context in the information they process. On the other hand, as a result of the tremendous speed and scope of these technologies, they often lack any kind of meaningful “feedback loop” between the programmers, the programmers’ work, and the people affected by the technology.2
  10.  
  11. The same could be said of the moral obligations that the creators of machine learning systems might be thought to have towards their users, in the context of GPT-3 and its siblings: if the model is learning to produce something “like” something, and if that something is a result of a historically unjust process, then who bears the moral responsibility for that something? To what extent are we morally responsible for the outputs of our models? And to what extent can we disentangle those outputs from the inputs? These questions may seem relatively abstract, but they have concrete, real-world applications: if GPT-3 were to produce a new story, for example, what would the moral implications of publishing that story be? And who, if anyone, should have the final say in whether it should be published?
  12.  
  13. This is not to say that GPT-3 and similar models are necessarily bad—far from it. But we need to be aware of their limits and capabilities, as well as the history that they reflect, in order to be able to use them in responsible, ethical ways. This means not only knowing the outputs of the model, but also knowing how those outputs are produced, by whom, and for whom. This is a big task, but it is essential to figuring out how to make our AI systems more just.
  14.  
  15. 1. Brockman, J. (2017, June). “The Latest AI Breakthrough.” Edge.org.
  16.  
  17. 2. Anderson, R. H. (2017). “An Ethics of Natural Language Processing.” Digital Scholarship in the Humanities, 34(1), pp. 86–94.
Advertisement
Add Comment
Please, Sign In to add comment
Advertisement