player_03

Additional responses to Richard Loosemore

Sep 10th, 2013
> Why do you folks (by which I mean folks such as MIRI/SIAI) only want to talk about one particular domain in which it could occur, and not any of the others?

Well, if the AI failed in many domains at once, it probably wouldn't be dangerous enough to worry about.

Before you start celebrating because someone *finally* understood your point, allow me to ruin the moment with a quote from Rob.

> (1) there are more defeaters for Friendliness than there are defeaters for intelligence [...]
> (2) a failure to make one’s AI Friendly does not entail that one has failed to make one’s AI superintelligent.

So yes, there could be broken AIs. There could be AIs rendered harmless because their code fails spectacularly. (Actually, forget "could be" - AIs like that are all over the place.)

But those aren't the AIs we're worried about.

---

By the way, why did you write that quote off? Just because it mentions "Friendliness"? Or because Rob didn't go into detail? It's on topic either way.

You asked why one module would fail but not another. Rob said that "Friendliness" is especially prone to failure. That's a reason one module in particular would fail. Maybe not a detailed reason, but it's on topic nonetheless.

You could have argued the point. You could have asked for evidence. You could have told Rob that he was on the right track, and that he should give more responses along those lines. Why didn't you?