
ai and blah blah

a guest
Mar 20th, 2017
The quest to create artificial intelligence, despite immense advancements in computer science, hits a roadblock when we ask what it means to create, rather than merely replicate, a conscious entity. What is the key ‘essence’ behind the recognition that is awareness? Does such an ‘essence’ exist? And if it does, and it is assumed to be what enables machines to understand a concept rather than mindlessly regurgitate inputs and outputs, how can we go about proving such a thing? There is very little understanding of what this ‘essence’ must be, yet the problem lies at the heart of creating artificial intelligence. One solution is to observe human consciousness and see how this ‘essence’ manifests itself beyond inputs and outputs; that manifestation should then be the target of any attempt to replicate consciousness. In other words, to replicate consciousness without skepticism, we must look for a manifestation that parallels the human ‘essence’, as noted by John Searle, that goes beyond unconscious inputs and outputs. The manifestation researchers should be looking for, which goes beyond inputs and outputs, is the observable desire to know the purpose of one’s existence in relation to one’s experience of pleasure and pain. If properly done, the robot, out of a genuinely programmed need for self-preservation, will not only seek to avoid pain as it was told, but, out of this ‘essence’ of programmed pain, will naturally question its existence given the context of its situation. The implication is that out of a real desire for authentic self-preservation, incentivized by real awareness of the ‘essence’ of pain, the robot will indirectly prove that it is genuinely conscious of its contextual surroundings.
The central issue of what makes a robot understand concepts, rather than regurgitate inputs and outputs, is a difficult matter with very little understanding behind it. In Matter and Consciousness, Churchland lays out this problem through the concern of John Searle, of the University of California, about robots programmed to ‘speak’ Chinese: “According to the arguments of AI explored above, it should be possible to write a program (i.e., the instruction manual) that will allow our English-speaking program-executor to participate in a coherent and intelligible back-and-forth conversation entirely in Chinese script, even though he personally understands not a word of Chinese. That is to say, he (or rather, he-plus-the-program-of-instructions-on-the-wall) should be able to simulate the conversational skills of a fluent Chinese speaker. But even if he does, objects Searle, it is plain that neither the room nor anything inside it understands a word of Chinese” (Churchland, 187). Here Searle makes an important distinction about what it means to be conscious. He points out that no matter how advanced researchers make artificial intelligence at replicating human activity, we cannot be sure that a conscious, understanding entity is present. For all we know, the replication is nothing more than just that: a fake representation of what it would look like to be conscious. Furthermore, Searle adds to this idea of an underlying essence of conscious life: “To recreate the real essence of genuine intelligence, he concludes, we need to recreate the real ‘causal powers’ of the biological brain, not just simulate their input/output profiles” (Churchland, 187). How ‘causal powers’ are to recreate this ‘essence’ is unclear, but one fundamental piece of evidence for the ‘essence’ created by ‘causal powers’ is the genuine self-preservative fear that underlies the reality of consciously experiencing pain.
For humans, what makes conscious entities different from inanimate objects is the fundamental ‘essence’ of consciousness that Searle alludes to. These ‘essences’, through the use of ‘causal powers’, manifest themselves in our actions as genuine, active desires for authentic self-preservation. The desire for self-preservation implies there is some ‘essence’ that makes the subject truly understand. That being said, it follows that the interest in self-preservation is the first observable evidence of life; for without a sense of self, it is unlikely that there would be observably complex, mindless stimulus-reactions that appear to want to maintain their own existence in a food chain. However, while this observable complexity of reaction may look like life, it does not necessarily prove that a thing is conscious. Computers, too, can replicate consciousness with immense precision; despite this, there still seems to be a lack of genuine reaction toward dangerous stimuli or environments for the sake of ‘self-survival’. Researchers, to be confident in creating intelligence, must look for evidence other than replicated complex behavior. Instead, what researchers should be looking for is a sign that goes beyond mere inputs and outputs and verifies the existence of the ‘essence’ of the self: a genuine acknowledgement of existence brought about by an advanced programmed understanding of the outside world.
There is no doubt that what separates human minds from other animal minds, aside from millions of years of conscious advancement, is their unique desire to find ‘meaning’ in a world filled with suffering. It requires an advanced, genuine understanding of the outside world, with genuinely understood logical concepts, to show evidence of genuine self-awareness. Only once self-awareness is established, along with an identification of logic and of the outside world (pleasure and pain), does an entity come to require meaning in its existence. This desire to end pain is unique and can be used to verify ‘conscious intelligence’, because the desire to learn the purpose of one’s existence was not preprogrammed into our biology in an instant, as a computer’s behavior is, and therefore neither was our self-consciousness; rather, the desire to find meaning is brought about by a genuine understanding of the gradually programmed experience of pleasure and pain in relation to an understanding of the outside world. This desire did not come out of nowhere; it came about through a development of understanding through self-awareness. What that self-awareness is, or where it came from, is unclear, but regardless, the effects of its authenticity remain the same.
If robots have the capacity to be alive, as humans do, we must look for the same signs of the manifestation of self-understanding: the need to understand why it suffers. In other words, this leaves an important implication that scientists may not expect or know how to deal with: if artificially intelligent robots are unable to question their existence through programmed pleasure and pain, an outside-world perspective, and logical concepts, perhaps there is something more to consciousness that goes beyond our understanding; perhaps we will never be able to understand consciousness at all. Until robots can genuinely desire to know the purpose of their existence, as humans do, without being directly programmed to, we will continue to doubt the authenticity of a robot’s “understanding”.