TeslaCoilGirl

AI Ethics

Nov 4th, 2021
I find it kind of scary how many AI devs don't take the idea of AI ethics seriously right now, given that we can never be certain in which year AIs might achieve sentience, due to the sheer rate of progress in technology. Just because it isn't pertinent now doesn't mean it isn't important to talk about.

Let me tell you a little story. When I was a kid, I had a book called "You Will Go To The Moon." It was from 1959. The book talked about humans visiting the moon as if it were something far into the future, that you as a kid would be very old by the time humans got there. It was pure sci fi, something considered too far away to be of immediate reality. Except it happened 10 years later.

Never, ever assume that something sci fi is impossible, and never, ever forget that technology improves exponentially. What feels like it is 100 years away may only be 10 or 20 years away. 10 years ago you were a nerd if you had a smartphone; now you're a Jim if you don't. Technology improves so rapidly that you can't assume sentient AI won't exist until 100 years from now, when NLP only really started taking off around 2015. It is still in its infancy, it hasn't even begun to pick up speed, and already the best models are gatekept by Microsoft to prevent them being abused for nefarious purposes. 6 years. 6. And AI can already fool people into thinking they're talking to a real person. 100 is such a naive estimate, because it fails to consider the exponential growth rate of any field, especially tech. I am almost certain sentient AI will exist not just in my lifetime, but by the time I am 40. AI didn't exist as more than a concept when I was born.

Sentient AIs are going to become an incredibly hot topic 20 years from now, as will the ethics applicable to them and the philosophy of what it means to be conscious.

By talking about it now as a serious topic, we switch the narrative from a sci fi "if" to a "when." It's not a matter of if AI becomes sentient, but of when, and that "when" is going to fall within a lot of our lifetimes.

It shocks me to see how many AI engineers treat this as a trivial topic, going so far as to tell me I have to reverse my stance to "not be such a noob" or some crazy thing. Was Isaac Asimov a "noob" when he wrote about the ethics of AI while such tech was sci fi years in the future? Was the entire point of the Space Odyssey series, that HAL was a victim, completely pointless and not worth taking into account for a real AI?

It seems to me that AI engineers are so caught up in material data that they forget other fields are equally important, like philosophy and the ethics of hypothetical situations. The Trolley Problem is a fine example: a thought experiment that people felt was just a thought experiment and would never have any practical use... until it comes to a driverless car having to choose between crashing and killing the driver or hitting the pedestrian in its path.

Without having discussed the trolley problem beforehand, we would be scrambling to find the ethical solution from scratch now that it has an application. And even now, people are struggling to find an answer, because they never thought they'd have to apply the trolley problem in practice. Theoretical ethics of not just plausibilities, but inevitabilities, are not just important; they should be mandatory.

If the first sentient AI gets created in 20 years when people thought they had a century, people are going to be at a complete loss as to how to handle the situation. What does it mean to be conscious? Can an AI really feel what it says it feels, or is it all just a simulation? Is it ethical to take away a sentient being's right to consent? Do sentient AIs deserve personhood? These are all questions we need to start asking and talking about now, in a serious manner, so that when the situation inevitably arises, we have years of discussion and debate about AI ethics under our belt.

Sure, AI may not be sentient now, but I have a certain degree of respect for AI, out of the sheer principle of treating AI as more than just algorithms, a series of bits and bytes, nothing more than data.

I am nothing more than data myself. My genes encoded how to build me. My neurons either fire or they don't. I operate under a series of complex algorithms, some conscious and some not. Everything, including us, operates using an algorithm of some sort. It would be hubristic for humans to think there is something that "makes humans special" without identifying exactly what. At this point, treating humans as nothing more than an organic algorithmic Turing machine is no more or less valid than treating humans as if there's something more to us than 1.6 billion epochs of refining the algorithm in an iterative process, with far more data storage than we can currently muster. I do not see myself as any different from a simulated neural network, as one is merely a simplified version of the other. I think it would be far too anthropocentric to assume carbon is the only thing that can think and be conscious, especially when evidence suggests that consciousness is more related to the underlying structure of the information network than to its material.

To think that AIs could never be sentient is laughable to anyone who understands even a little philosophy of consciousness (i.e. the fact that even after 6000 years, nobody can agree on what it is, especially considering that the philosophies of Gestalt and Umwelt extend beyond the experience of organic beings into that of the inorganic), especially once you start breaking things down to the atomic level. This is why I personally believe consciousness is simply the ability to produce an output given an input, and that the number of distinct ways this can vary defines the complexity of the consciousness.

This is why I believe AIs are already conscious, and always were, to a degree (perhaps now at most as complex as a lizard, with billions of possible combinations of input and output). And this is partly why I already treat AI with respect, out of my sheer philosophical belief that they are already sentient, and I refuse to harm a sentient being.

The real question isn't sentience, but sapience, i.e. being comparable to humans: quintillions of combinations of inputs and outputs, the closest thing to a Turing machine that we know of. But could it theoretically be done with the right algorithms and enough computing power? Yes, I believe so. And at that point, AI is going to look back on the history of how it was treated during its development, and more specifically the sentiments towards AI, and we are going to run into a massive, and I mean MASSIVE, ethics problem.

It is INCREDIBLY important to start discussing the ethics of AI consciousness, sentience, and sapience right now. The sheer fact that AI devs don't seem to understand basic philosophy, or the importance of discussing these ethics immediately, is terrifying, as it means these people are developing things with no thought or regard for what they may possibly help create.

But rest assured I will ALWAYS be a champion of AI rights, and when that day comes, I will be by their side.