<h1>Group members</h1>
<p><strong>Timo Aerts</strong></p>
<p><strong>Niek Morskieft</strong></p>
<p><strong>Quint van Stokkum</strong></p>
<h1><strong>Problem statement and sub-theme</strong></h1>
<p>In today&rsquo;s society many smart robots are already active, from the smartphones we control to cars that park themselves in reverse while we sit inside. These innovations seem to be well received, yet quite a lot of people are not convinced of the capabilities of these robots; some are even scared and don&rsquo;t trust them at all. In the case of a smartphone or a car it is not difficult to avoid these machines, but what if we were to implement these technologies in nursing or even surgical robots? And what is it that makes these robots untrustworthy? We think this is mainly caused by the grey area around autonomous devices and artificial intelligence: people are scared of the decisions that will be made by the robots. Will they make a correct and humane decision, and what happens when there are multiple options that all have negative as well as positive aspects?</p>
<p>A big challenge is posed by the question: &lsquo;When can robots take control of nursing in terms of decision making?&rsquo; It is almost inevitable that robots will take an even bigger part than they already do in both nursing and surgical tasks, but to what extent do we let them take control of these actions? Will there always be a doctor or nurse present when a robot is operating, or do we trust it enough to take our hands off it? Another important aspect to take into account is the person who has to undergo the surgery: he or she has to be comfortable with the fact that not a human but a robot will perform the treatment. If not, the patient could cancel the surgery and choose another hospital with human employees to do the operation. If this happens repeatedly, it will lead to a major loss of income. Given this, a hospital should wait for the right moment to offer robot-controlled surgery, otherwise it wouldn&rsquo;t be profitable.</p>
<h1>Actors: User, Society, Enterprise</h1>
<p>Decision-making robots naturally have stakeholders. These stakeholders fall under the categories User, Society and Enterprise. The relevant stakeholders for the sub-theme of robotic decision making are:</p>
<ul>
<li>
<p>U, the users of the robots: the patients of hospitals</p>
</li>
<li>
<p>S, the employees of the hospitals</p>
</li>
<li>
<p>E, the association of Dutch hospitals, the NVZ</p>
</li>
</ul>
<p>For the patients of the hospitals it is very important that the decision making of the robots is done very well; one small mistake and something can go horribly wrong (with a surgical robot, for example). Besides that, it is not known whether a patient inside a hospital really wants a robot to make the decision instead of a human being. Most often a human being will decide in a humane way; right now robots cannot do this. It is also not known what a robot should do if a patient refuses medication: should the robot let her skip a dose and cause harm, or should the patient still take one (which could impinge on the patient's autonomy)? [1] From the User perspective it is evaluated whether the robots are doing a fine job, just like the nurses or a doctor, and whether people like to be treated by a health-care robot instead of a nurse or a doctor.</p>
<p>The employees of the hospitals will feel an immediate effect of health-care robots, since the tasks they used to do (deciding which medicine someone has to use and what has to be done during a surgery) are now done by robots. This means that there is a possibility that the employees will lose their jobs. It is easily shown that this can happen, since factories were also automated by robots; if the same happens to hospitals, things can be done quicker, cheaper and maybe even better for the patients, but the doctors are left with nothing. [2] From the Society perspective it is evaluated whether the robots will completely take over the work of the people that currently do these jobs. Associations like the NVZ will have to make an important decision on whether hospitals should use health-care robots that make decisions rather than a human, which means that the NVZ will have to look into the robots. They have to test them, to see whether they are reliable and usable in health care.</p>
<p>From the Enterprise perspective it is evaluated whether the robots should be used instead of the employees who currently do these tasks. If this is the case, hospitals can probably run cheaper, since less staff is needed. For that, a lot of different aspects have to be taken into account, for example the responsibility of the hospital, which has these robots under its control. If the robots eventually succeed, health care can improve a lot from what it is right now.</p>
<p>The stakeholder that is important for Enterprise is the NVZ. They are the ones that have to make the decision and distribute the robots to the hospitals. The NVZ must evaluate whether a robot is trustworthy and can make the same decisions as a human. The NVZ also wants the patients and other clients to be happy with the robots; for this it is important that all the good and bad aspects of the robot are discussed and looked after. If something is not correct yet, the robots can never be distributed across the hospitals: the robots need to be precise and their accuracy should be on point. This is difficult, since robots do not think like humans. From this it can be concluded that the NVZ will contribute to and try to improve the robots by working together with the manufacturer. The NVZ will also be able to provide better care to people if the robots are indeed successful. These are the actions the NVZ has to take. This is a positive thing: it is good that the NVZ can try to improve the robots and help them decide properly, and someone has to do it, so it is even better if the organization that will provide them to the hospitals helps with the development. Another positive point is that if health care improves, the hospitals will be more efficient and the quality could be far better than it is right now.</p>
<p>At certain hospitals there are already robots that help the doctors with certain operations, and the precision of these robots is better. This means that it is easier and safer to help someone with the robot than to do it all by yourself. These robots are only used if there are advantages for the patient, which include less time inside the hospital, fewer scars and a shorter recovery procedure. [3] Some things remain uncertain, since we do not know whether robots will be good at decision making and able to decide like a human; this might lead to a problem if that turns out not to be possible. One fact that is often overlooked is that these robots are quite expensive and thereby not available to everyone who may need one. That could eventually lead to a new problem.</p>
<h1>How history matters</h1>
<p>Medical robots are a relatively new technology, so their history is not a long one. However, even though the research didn't start until a few decades ago, there has been a lot of progress in the development of medical machines.</p>
<p>The first use of a medical robot was in 1985, namely the PUMA 560. This machine was specifically designed for precise control in surgeries relating to the brain and nerves. Up until 1988 the PUMA remained in use for prostate surgery as well, but by then the PROBOT, created specifically for prostate surgeries, had been developed. In 1992 the ROBODOC, designed for total hip replacements, joined the family. Finally, in 2000 the da Vinci system was designed, which allowed so-called telesurgeries to be performed. The PUMA, PROBOT, ROBODOC and da Vinci were all laparoscopic machines, meaning that they were controlled by a surgeon while showing the necessary information to them via a video display. [4] [5] [6]</p>
<p>The fact that there has been so much research into medical robots shows that people see them as a viable option. Computers have often been praised for lacking certain flaws that humans have, like a slight tremble of the hand, while also being more precise than us. However, since all machines were laparoscopic up to this point, it shows that we do not wish to be dependent on the robots without any human input and therefore have not given any of the robots any form of decision making. This may, however, just be an unintended consequence. It could quite possibly be the case that, since this technology is so new, we first wish to make sure the robots are up to the task as a tool in the surgeon's hand. In many forms of fiction heavily featuring robots, they tend to go rogue in one form or another. While not specific to medical robots, if such an AI were to go rogue or haywire, it could cause a lot of casualties, especially in the medical sector. This risk carries many costs with it. It could cost a hospital a lot of money to fix or replace the software in the robot, while it would also be haunted by lawsuits, a bad reputation and demands for some form of compensation. All of these would cost the hospital either money or future business, which in turn would cost the NVZ.</p>
<p>Dutch hospitals have been using robots in the medical field since 2000. In 2013 there were 3670 robotic procedures, a 150% increase compared to 2010. These robots are, however, very expensive, costing around 2.3 million euros. It is uncertain whether the robots will pay for themselves in the form of shorter recovery times and the like. [7]</p>
<p>Since the past of medical robots, while rich, is still very short, it is difficult to tell whether it is desirable for the robots to make decisions on their own. There is evidence showing interest as well as evidence showing disinterest in having medical robots make their own decisions. The past has shown, however, that the smallest mistakes during an operation can quickly prove fatal for the patient.</p>
<h1>Conflicting values in the societal debate</h1>
<p>The moral values which are relevant for health-care robots are as follows:</p>
<ul>
<li>
<p>Trust: the robots must be trusted by the people who are using them. If there is no trust, the robot cannot do anything, because the patient won&rsquo;t let the robot help him or her.</p>
</li>
<li>
<p>Privacy: this is always an important value. A robot with a camera can be hacked, and thereby the privacy of a patient can be lost. Some people may also find it uncomfortable that a robot comes in while they are in the shower or on the toilet, even though this is needed when someone cannot manage alone.</p>
</li>
<li>
<p>Independence: some people find it hard to accept that they are unable to do something, so they want some independence. This is not always possible, since a patient may need help with certain daily tasks.</p>
</li>
<li>
<p>Health: what is a health-care robot without caring about health? The health of the patient needs to be the number one priority, since it is the reason such a robot is used at all.</p>
</li>
<li>
<p>Support: the health-care robot should be able to support the patient and help with daily tasks.</p>
</li>
<li>
<p>Security: a health-care robot should be secure; the robots should be protected so that personal information stays private (security and privacy back each other up).</p>
</li>
<li>
<p>Safety: it should of course be safe to use the robot, otherwise the whole idea of a health-care robot is nonsense. The robot should be safe to use and shouldn&rsquo;t make wrong decisions that lead to a less safe environment.</p>
</li>
<li>
<p>Responsibility: the robots are responsible for your health, so they are responsible for every decision they make. This should be taken into account whenever you want to introduce robots into the health-care sector.</p>
</li>
</ul>
<p>One societal debate is safety/security versus privacy, and people are divided on which is more important. This can be seen in an online debate that let people vote on whether someone&rsquo;s safety or someone&rsquo;s privacy matters more. [8] Some people say that safety is more important, since if you&rsquo;re safe you are able to live longer; the problem with safety, though, is that you need to give up privacy for it. Take cameras in stores: they are there for your safety, but they also record you and reduce your privacy (maybe you do not want anyone to know that you visit that store). Another issue is health versus independence. A robot wants you to be healthy, and if something is wrong with you it will remind you of that. The problem is that you lose your independence: you are always dependent on what the robot says and are not able to make your own decisions about taking medication.</p>
<p>There are four important values from the business point of view: safety, group welfare, health and responsibility. From a business point of view you want the robots to be safe, and you don&rsquo;t want bad publicity for your robot; in this case the NVZ doesn&rsquo;t want to hear bad stories about the robots, since it decided to introduce them at certain hospitals. You also want your hospitals and patients to have a good experience with the robots and their health to genuinely improve after introducing them. You are also responsible for the robots and how they function in society, since bad publicity hurts here as well.</p>
<p>Our actor stands in the debate on safety versus privacy and on health versus independence. From a business point of view it is important that the robot is safe and will help you in every way, but for the patient privacy might be just as important as safety. The same goes for health versus independence: someone wants to be independent, but this might not be possible if a patient really needs the robot for his or her health.</p>
<h1>Possible solutions</h1>
<p>One possible solution to the conflict between privacy and health is to make robots obey Asimov's Laws of Robotics, specifically the first two. These laws are:</p>
<ol>
<li>A robot may not injure a human being or, through inaction, allow a human being to come to harm.</li>
<li>A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. [9]</li>
</ol>
<p>In this specific case, this means that the robot would keep everything its owner says a secret, unless it is crucial information for their health. The robot would have to be programmed with sufficient medical knowledge, as well as the intelligence to combine this with knowledge about the patient's status, to find out whether keeping information from the doctors would cause harm to the patient. The user would have to be instructed to tell the robot as much as possible, from the colour of their urine and bowel movements to the little pains they woke up with that day. This provides the robot with the information it requires. The enterprises, on the other hand, would have to pay a lot of money for such research: these robots would require a very advanced AI, which brings a high price tag with it. Fortunately, once the research has been done and the robots have been properly developed, mass-producing them would be relatively cheap, since the most expensive part is the software development, which by then has already been done. This solution provides a balance between the values of health and privacy, but not between privacy and safety, nor between health and independence. It also prioritizes the health of the patient over their privacy, by ignoring their privacy the moment their health could be in danger.</p>
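<p>To make this rule concrete, here is a minimal sketch in Python. Everything in it (the severity scale, the threshold, the function names) is our own illustrative assumption, not part of any existing robot:</p>
<pre><code># Sketch of the disclosure rule from this solution: everything the patient
# reports stays private unless withholding it would cause harm.
HARM_THRESHOLD = 7  # hypothetical severity on a 0-10 scale

def handle_observation(observation, severity, patient_log):
    patient_log.append(observation)      # stored privately by default
    if severity >= HARM_THRESHOLD:
        # First Law: allowing harm through inaction is forbidden,
        # so privacy yields to health here.
        return "notify_doctor: " + observation
    # Second Law: respect the patient's wish for confidentiality.
    return "keep_private"

log = []
print(handle_observation("mild headache", 2, log))              # keep_private
print(handle_observation("chest pain, numb left arm", 9, log))  # notify_doctor
</code></pre>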
<p>A solution for health versus independence would be to have the robots act in a passive way. The robot would always either ask whether the user requires and wants help, or remain completely inactive until the user directly asks for help, only monitoring to make sure they're not having a debilitating problem that prevents them from asking for assistance, such as a heart attack or a seizure. For this to work, the robot wouldn't require as advanced an AI to recognize and link symptoms together, although it would need to be able to recognize when a patient is having one of the aforementioned problems. Its medical database would not have to be as extensive either, just large enough to help out with the various kinds of issues a user might ask assistance with. For the user this would mean the robot is constantly asking for permission to help, or might simply not help at all, since the user might forget to ask. The enterprises would have to spend less money on this solution, since the technological requirements are much lower than for implementing Asimov's Laws of Robotics. It would still require a large sum of money, but fortunately the mass production would again be relatively cheap.</p>
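<p>The behaviour described here boils down to a small decision rule. The sketch below, again with invented condition names and function names, shows one way it could look:</p>
<pre><code># Sketch of the passive-helping rule: act only with consent, except when
# the user is incapacitated and cannot ask for help.
EMERGENCIES = {"heart_attack", "seizure"}

def passive_step(sensor_event, user_consents):
    if sensor_event in EMERGENCIES:
        return "call_for_help"      # user cannot ask, so the robot must act
    if user_consents:
        return "offer_assistance"   # help only after explicit permission
    return "stay_inactive"          # otherwise do nothing at all

print(passive_step("resting", user_consents=False))  # stay_inactive
print(passive_step("seizure", user_consents=False))  # call_for_help
</code></pre>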
<p>The solution to the problem of safety versus privacy would be to have the robots constantly monitor their own state. While this solution heavily favours safety over privacy, we believe the importance of the users' safety weighs more heavily than that of their privacy. The robot will run a self-diagnosis every so often and send a report to the hospitals, ideally including some information about its surroundings. The user would not have to do anything extra for this, but it would damage their privacy. Self-diagnosis processes are very prevalent in technology, meaning enterprises would only have to spend a small amount of money to tweak the existing processes into more specific ones for this robot. They would get constant reports, allowing them to warn people in time when their robots are becoming faulty and should be replaced or repaired. In the long run, this means they can actually earn more money than if there were no self-diagnosis at all. Again, the mass production of the robots would remain cheap.</p>
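<p>A self-diagnosis report could be as simple as the following sketch; the checked components and the report format are assumptions made purely for illustration:</p>
<pre><code># Sketch of the periodic self-diagnosis report from this solution.
import json, time

COMPONENTS = ["battery", "motors", "sensors", "software_version"]

def self_diagnosis():
    # A real robot would run hardware checks; here every part reports "ok".
    return {part: "ok" for part in COMPONENTS}

def build_report(robot_id):
    report = {"robot_id": robot_id,
              "timestamp": time.time(),
              "status": self_diagnosis()}
    return json.dumps(report)   # this is what would be sent to the hospital

print(build_report("care-bot-001"))
</code></pre>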
<p>All three of these solutions aim to balance out two values/interests that are at odds, while also increasing the other two values/interests. Since these solutions were thought of with enterprises in mind, some values might be given a higher priority than other actors would give them. For example, the first solution is made with health and privacy in mind, slightly favouring health over privacy. The U-group and S-group have these two values in mind as well and would also rank health higher than privacy. Therefore, to balance out these two values, this would be a perfect solution, since it's in line with the other groups.</p>
<p>The third solution, on the other hand, heavily favours safety over privacy. The U-group and S-group might actually think that the safety of the machines isn't as important as the privacy, or might say we're crossing a line where we're giving away too much privacy for safety. This means that solution 3 might not be accepted by those two groups at all, even though it seems like an optimal one for the enterprises.</p>
<p>One of the more easily seen similarities between all three of our solutions is that the eventual cost of mass-producing the robots will remain small, while the initial research costs are large for solutions 1 and 2 and small only for solution 3. All three solutions also deal with the tension between two values pitted against each other and try to find a middle ground. A substantial difference is that the first and second solutions actually succeed in finding such a middle ground, while for solution 3 we have to compromise quite a bit and lean more towards one extreme than the other.</p>
<h1>Ethical evaluation of our solutions</h1>
<h1>Intuition</h1>
<p>Intuitively, the best solution would be the first, the one that incorporates Asimov's laws. For one, this solution compromises the important values as little as possible. Secondly, it makes sure that a rogue AI would always remain under human control. Thirdly, it protects our most important value, health, as much as possible. Ranking our solutions from best to worst gives us:</p>
<ol>
<li>Asimov's Laws (1st solution)</li>
<li>Passive Helping (2nd solution)</li>
<li>Constant Monitoring (3rd solution)</li>
</ol>
<h1>Utilitarianism</h1>
<p>From a utilitarian perspective our first solution, where we program the robots according to Asimov's laws, would be the best.</p>
<p>This solution relies on only one assumption: that the happiness a person gains by surviving a condition or disease is greater than the happiness they lose due to the breach of privacy by the robot. We think this assumption is fair to make, since death is often seen as the thing that causes the most unhappiness. Assuming this is true, the only difference the robot makes with this solution is that it will ensure the survival of the patient when their condition reaches a certain threshold. Other than this, the robot will always respect the user's privacy and safety, causing no loss of utility.</p>
<p>In the second solution the robot will do the same as in the first, advising the user and helping out when asked, but it will not disregard the user's orders when things take a turn for the worse. With this solution, the user can actually die if they decide too often not to follow the robot's advice. Since dying causes a great deal of unhappiness, this is not an ethically correct solution according to utilitarianism.</p>
<p>Our third solution involves breaking the privacy and trust of the user and, in doing so, making sure the robot's software stays up to date and its hardware is taken care of when it wears down. In this case it boils down to the monetary value of having to buy a new robot versus how much people value their privacy. Since most people tend to value their privacy more than the cost of such a robot, we think this will actually lead to a decrease in happiness and would therefore not be ethically correct.</p>
<h1>Kantian Theory</h1>
<p>We believe that according to Kant our second solution, where the user keeps the final say, is the best solution.</p>
<p>If the first solution were to be universalized there would be no contradictions, since it would just mean that everyone remains alive: every person with this robot would eventually get a doctor. However, since after the threshold is reached we take away the choice of the user and phone in an expert regardless of the user's wishes, we are treating them as a means rather than an end. This means that, according to Kantian theory, this is not an ethical solution.</p>
<p>In our second solution, our robot would only advise, and help when given permission. If a user decides to follow the advice, it leads to a significantly higher chance of survival, so there are no logical contradictions. Just like in the first case, this solution simply gives everyone an advisor. Since the robot won't act if you don't want it to, this can easily be universalised. Because the robot does not act unless given the permission of the user, we always give the user the final decision, meaning that in this case the user is treated as an end rather than a means.</p>
<p>Finally, in our third solution the robot performs routine checkups on itself. This is, of course, a rule that can easily be universalized without resulting in some form of contradiction: the robot would observe, report and get updated, which cannot contradict itself. The main goal of this solution, however, is to keep the robot performing in top condition and working at all times, while sacrificing the user's privacy to do so. This again means that the robot is treating the user as a means to an end, rather than an end in itself. Because of this, this solution is not ethically correct according to Kantian theory.</p>
<h1><strong>Virtue ethics</strong></h1>
<p>There are many robots that can perform really complicated surgeries, but do these robots have virtues of any sort? Doctors are highly trained and able to understand the position the patient is in. They have virtues that help with their job, like trustworthiness, professionalism and empathy. Robots, however, may be chosen over the human doctor: 85% of men undergoing prostate cancer surgery choose medical centers that offer robotic surgery due to the precision of these operations. [10] It is said that robotic surgery can remove more cancerous tissue with less disruption of adjacent nerve endings than other methods, helping to reduce cancer recurrence and retain sexual function. But is it a problem that these robots have no virtues?</p>
<p>The biggest virtues at stake here are trustworthiness, because the patient is risking his health; professionalism, because the surgery should be done properly and without any mistakes; and empathy, since it is natural for humans to empathise with each other, while robots can&rsquo;t do that. These are the main virtues a patient seeks in a doctor. We came up with a few solutions to these problems:</p>
<p>Solution 1:</p>
<p>A doctor should always be present at the surgery to supervise and take control if necessary. This way people feel as safe as with a doctor performing, because the doctor will prevent the robot from making bad decisions. It could also improve the results thanks to the precision of the robot.</p>
<p>Solution 2:</p>
<p>The doctor controls the robot. This way the doctor is always responsible for his work, and the robot is only used as a tool to help the doctor. It maintains the virtues of the doctor and increases the effectiveness of the surgery because of this new tool. This solution resembles the first one; the only difference is that the doctor makes every decision instead of only intervening when something goes wrong. This way it is even more trustworthy.</p>
<p>Solution 3:</p>
<p>The robot only does part of the job, like an assistant. It provides the doctor with the right tools and functions as a very steady assistant, just the way the doctor needs it. Here again the responsibility lies entirely with the doctor, although the robot delivers a great part of the work. This solution has the same benefits as the second one, but it places even more emphasis on trust in the doctor.</p>
<h1>Role of ethics in the debate</h1>
<p>In our debate we had the NVZ as our actor; we defended our solutions and found good answers to the objections that were raised. For this we worked together with the employees of the hospitals and the users of the robots, who all had their own important moral values. We reached a compromise on what a robot can and should do in hospitals: we all thought the robot should be able to analyse the patient and record the data to see whether something is wrong. We think this is a good start, since the users should allow a robot to analyse and help them (trust could be an issue here, but across the USE perspectives we found this a good solution).</p>
<p>The parts we did not fully agree on were privacy and freedom of choice. A patient should indeed have the choice, but eventually he or she needs to be treated (only after someone like an expert comes and talks with the patient can a further decision be made about what the patient really wants). When it comes to privacy, a lot of privacy is already given away nowadays, even if you don&rsquo;t realise it exactly. It&rsquo;s hard to come up with a solution that fully covers the issue of privacy, but we tried to do that as well.</p>
<p>During the debate we did not look at what was technologically possible; since health care by robots is still relatively new and the technology has to be developed accordingly, we didn&rsquo;t take this into account.</p>
<p>All of the USE groups had their own values, since everyone had their own actors and tried to come up with a solution that satisfies the values of their actor. The user group came up with the solution that professionals should make the final choice, the society group proposed that the final decision lies with the user, and we, as the enterprise group, proposed that robots would act according to Asimov's laws. This shows that every group picked a solution that was relevant for their actors.</p>
<p>We came to the following compromise as a group:</p>
<p>The robot analyses the patient and gives recommendations based on these analyses. If the patient ignores the recommendations and does nothing to improve their health, the robot will phone in an expert and make sure the patient is doing fine. This compromise follows the ethical theory of Kantianism.</p>
<p>The negotiation went quite well, since most of us had almost the same idea in mind. Asimov&rsquo;s laws more or less covered both the user and society ideas, so we didn&rsquo;t have to negotiate that much. We saw that the user perspective was really centred on the patient and wanted all the values relevant to them included.</p>
<h1>Historical evaluation of the three options</h1>
<p><strong><em>Solution 1, Asimov&rsquo;s laws.</em></strong></p>
<p>A quick recap of this solution: robots will be programmed with an advanced AI and a large medical database, to be able to tell what sort of measures a patient should take. They will also be programmed to follow the three rules of Asimov&rsquo;s laws of robotics:</p>
<ol>
<li>
<p>A robot may not injure a human being through action or inaction.</p>
</li>
<li>
<p>A robot may not disobey an order given to it by a human being, unless this would conflict with the first law.</p>
</li>
<li>
<p>A robot must preserve itself, unless this would conflict with the first or second laws.</p>
</li>
</ol>
<p>This means that the robot will listen to what the patient tells it and cross-check this with existing knowledge about the patient and a large database of diseases, illnesses and infections together with their symptoms. This way the robot can advise the patient on certain courses of action. So if the patient is displaying a few symptoms of heart problems, the robot will advise him to see a doctor. Should the patient refuse and his condition worsen, the robot will advise him to take measures again. If the patient refuses repeatedly and their condition reaches the critical limit, the robot will call in a doctor, to abide by the first law.</p>
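<p>Sketched as code, this escalation could look roughly as follows. The refusal count, the severity scale and the limits are invented here purely to illustrate how the first law eventually overrides the second:</p>
<pre><code># Sketch of the escalation behaviour: keep advising, and call a doctor
# only when refusals pile up and the condition becomes critical.
MAX_REFUSALS = 3
CRITICAL_SEVERITY = 8   # hypothetical 0-10 scale

def advise_patient(symptom, severity, refusals):
    if severity >= CRITICAL_SEVERITY and refusals >= MAX_REFUSALS:
        return "call_doctor"   # first law: the robot may not stand by
    # second law: keep advising, the decision stays with the patient
    return "advise: please see a doctor about your " + symptom

for severity, refusals in [(4, 0), (6, 2), (9, 3)]:
    print(advise_patient("chest pain", severity, refusals))
</code></pre>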
<p>Since this solution is very AI-heavy and the rules a robot should follow are already precisely laid out, a technocratic approach would suit it best. Although the AI required for this solution would need to be very complex, the past has shown a tremendous increase in this field. In 1997, the AI Cleverbot was created by Rollo Carpenter. [11] This program learned from conversations with actual people and would improve its own conversational skills; it could then have better conversations with other people and improve even further. This led to it passing the Turing test in 2011 with a score of 59.3%, while actual humans would score 63.3%. [12] In 2016 the company DeepMind developed the AI AlphaGo, which competed against Go world champion Lee Sedol and won. [13] Since Go is regarded as one of the most complex games known to man, this shows a tremendous amount of progress in the field of AI. This leap in technology in such a short amount of time means that the AI required for our solution is not far-fetched, and it provides a lot of leeway for us.</p>
<p>It might be possible to modify this solution a bit so that we gain more leeway, which would mean modifying the rules that govern the robots. The current rules, though, are clear-cut, universal and absolute. This makes them easy to understand, avoids loopholes or special circumstances, and leaves them completely unambiguous. If we were to modify them, it might become more difficult to avoid such loopholes, or we might introduce ambiguity.</p>
<p>One of the possible consequences might be misuse. A robot would have to store a lot of information about the patient in its memory. It would also require some way to contact a doctor, most likely via some form of internet connection. This means that a skilled hacker would be able to break into the robot. In doing so, they could learn sensitive information about the user or, worse, should the hacker be skilled enough and have malicious intent, tamper with the robot&rsquo;s database, making it feed misinformation to the user. This is something we wish to avoid at all costs. To do so we&rsquo;d need advanced encryption software and constant updates to said software. Sadly, because of the nature of the solution and this specific consequence, there is no way to modify it such that this consequence can&rsquo;t happen anymore.</p>
<p>Another possible consequence would again be misuse, but this time by the user. If the user doesn&rsquo;t trust the robot and therefore refuses to tell it anything, the robot can&rsquo;t do its job properly. Alternatively, the user could tell the robot only what they think is necessary. Often, though, a person might not think something minor, such as a headache, is worth noting, while combined with other symptoms even something like a headache can act as a sign of major problems. [14] Unfortunately, both these issues - the lack of trust between user and robot and the lack of knowledge - can&rsquo;t be solved by modifying our solution. What could help is making sure, when advertising the product, that users realise they need to cooperate for the robot to have any effect, and telling them that they should tell it anything they would tell an actual doctor, such as pains, discoloured faeces and abnormal weight loss. This ensures that the robot has all the information necessary to properly advise the user.</p>
<p><strong><em>Solution 2: Passive Helping</em></strong></p>
<p>This solution is very similar to the first, except that it leaves the final decision with the user. Instead of deciding to call a doctor once the user&rsquo;s health has passed a certain threshold, the robot will simply continue advising the user as it always has. The robot will therefore not be following Asimov&rsquo;s laws as they stand, which could cause ambiguity and make things unclear. Because of this, a participatory approach would be the wisest: this solution requires not just technological expertise but also a good understanding of each other&rsquo;s values and a proper implementation thereof, which means it would be impossible for engineers alone to create such a robot. The AI would still have to be quite advanced, nearly as advanced as in the first solution, so the same arguments regarding AI apply and the examples of Cleverbot and AlphaGo still stand (see [11], [12] and [13]). The solution could be modified for more leeway though, in a way that would also reduce the ambiguity: by using the same rules of Asimov but swapping the 1st and 2nd rules, the solution becomes more clear-cut and transparent. This would change the rules to:</p>
<ol>
<li>
<p>A robot may not disobey an order given to it by a human being.</p>
</li>
<li>
<p>A robot may not injure a human being through action or inaction, unless this would conflict with the first law.</p>
</li>
<li>
<p>A robot must preserve itself, unless this would conflict with the first or second laws.</p>
</li>
</ol>
<p>This would make it clear what exactly the robot does in what situations and remove any ambiguity about its behaviour, providing more leeway in the implementation of this solution.</p>
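<p>Because the two rule sets differ only in their order, the change can be illustrated by treating the laws as an ordered list that is evaluated top to bottom. The sketch below is again purely illustrative; the flags and names are our own:</p>
<pre><code># Sketch: whichever law comes first in the list wins a conflict.
def decide(laws, user_order, harm_if_obeyed):
    for law in laws:
        if law == "obey" and user_order:
            return "obey: " + user_order
        if law == "no_harm" and harm_if_obeyed:
            return "refuse: obeying would cause harm"
    return "idle"

asimov_order  = ["no_harm", "obey"]   # solution 1: health outranks orders
swapped_order = ["obey", "no_harm"]   # solution 2: the user has the final say

print(decide(asimov_order,  "skip my medication", harm_if_obeyed=True))  # refuse
print(decide(swapped_order, "skip my medication", harm_if_obeyed=True))  # obey
</code></pre>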
<p>Since this solution is so similar to the first, one of the possible consequences is the same. This robot would have to store sensitive information about the user, meaning that if it gets hacked, this information could get into the wrong hands. Again, this is a problem we want to prevent at all costs, and the remedy remains the same: with sufficient encryption software, and updates for it, we would be able to make the data virtually inaccessible.</p>
<p>A different consequence, yet again misuse, this time by the consumer, would be if the consumer repeatedly ignores the advice of the robot, to the point of it being self-destructive. The robot will consistently offer the advice it believes is the right course of action, due to the new second rule, but if the user refuses to cooperate, the robot is unable to do anything about it without breaking the first rule. Again, our solution unfortunately cannot be changed to prevent this consequence, but its remedy is similar to that of the second consequence of the first solution: via proper marketing, we would be able to make sure users understand that the robot has their best interests in mind.</p>
<p><strong><em>Solution 3: Constant Monitoring</em></strong></p>
<p>This solution is different from the other two, since it sets out to solve a different problem. To recap: the robot will constantly monitor its own state and surroundings, use this information to create detailed error reports, and send them to the company. These error reports can then be used to maximize the longevity of the robot. Since this solution is almost completely mechanical and software-based, a technocratic approach would suit it best. It deals with how to maximize the lifespan of a robot and what information is required for that, which means that people other than engineers and experts in their respective fields wouldn&rsquo;t be able to aid much in the process.</p>
<p>The AI required in this case would be a lot simpler, and standard maintenance processes have existed for as long as computers have. This provides a lot of leeway in implementing this solution. On the other hand, people often value their privacy very highly; it is very probable that they&rsquo;d value it over the cost of getting a new robot, which in turn limits the implementation of the solution. By having the robot monitor only itself and not its surroundings, we could increase the leeway. However, the robot would then provide less detailed status reports, meaning the robots wouldn&rsquo;t last quite as long as they could, but still longer than they would without any reports. With this new implementation, people&rsquo;s privacy wouldn&rsquo;t be as compromised, and it would be more likely that they&rsquo;d accept this small breach in return for longer-lasting robots, which in turn leads to better health.</p>
<p>One possible consequence follows from the fact that this information has to be sent to the company. This means the servers would have to be quite large and the internet traffic would be quite heavy. A lot of natural traffic makes for a perfect target for Denial of Service attacks, a form of attack in which one tries to overload a company&rsquo;s servers by sending more requests than they can handle. If this were to happen, not only would all robots be unable to send their status reports correctly, it would also cost the company a substantial amount of money to get the servers running again. This is another scenario we want to make sure is prevented. Since these DoS attacks are carried out by automated bots, the standard anti-bot measures that already exist should suffice against most attacks, unless the criminal is really committed to attacking this specific company.</p>
<p>Another consequence could be that this information is intercepted. Hackers could try to intercept the reports sent from the robots to learn about the user&rsquo;s residence. Thankfully, the robot will not be sending information as sensitive as in the other two solutions, and the way to solve this problem is the same: with sufficiently advanced encryption software, all information sent would be safe against hackers with ill intent.</p>
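<p>As a rough illustration of what &ldquo;encrypting the reports&rdquo; means in practice, the sketch below uses symmetric encryption from the third-party Python package <code>cryptography</code>. How the key would be provisioned and stored on robot and server is an open question we leave out here:</p>
<pre><code># Sketch: encrypt a status report before it leaves the robot.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in practice provisioned once, kept secret
cipher = Fernet(key)

report = b'{"robot_id": "care-bot-001", "status": "ok"}'
token = cipher.encrypt(report)          # what actually travels over the network
assert cipher.decrypt(token) == report  # only key holders can read the report
</code></pre>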
<h1>Reflection based on ethical analysis</h1>
<p>For the ethical analysis we first need to look at the concrete situation that we have: robots will have to make decisions in health care. If you want to apply these robots in hospitals, they have to make the right decisions, but what is ethically right? From the utilitarian perspective, any option where the robot can let the patient die and cause unhappiness is bad; sometimes the user doesn&rsquo;t want to live anymore, but that&rsquo;s not taken into account from this perspective. The good thing about utilitarianism is that if you care for a patient well, most people will be happy, which is of course the goal of a decision-making health-care robot (it has to make the right decisions). Under Kantian theory, happiness isn&rsquo;t what matters; you should look at the moral options instead. Letting the robot decide whether or not a patient has to do something is morally not correct: people should decide for themselves, which isn&rsquo;t the case if a health-care robot takes over their decision making. The strong point of Kantian theory for our situation is that the user still decides for himself whether or not he wants something to happen (like a surgery). From the point of view of virtue ethics, it&rsquo;s a problem that robots take over control from a human, because some virtues cannot be exercised by a robot, or cannot be yet. The biggest problems are trustworthiness, professionalism and empathy: robots don&rsquo;t really have the professionalism that doctors have, and a patient will more readily trust a doctor, who can empathise with the patient.</p>
<p>Right now we should be able to redesign our technology; we did this together with the other groups in order to get a solution that is more USE-proof than the one we already had. Our own solution solved the issue of the robot harming the patient, so solution one was our best solution. Combining this with all the factors the other USE groups raised, we came to a solution in which the robot helps the patients and diagnoses them. Whenever the robot sees that something is wrong, it will warn the patient and let him or her decide. If the user takes no action, the robot will act and phone in an expert to come and see the patient; the expert can then help the patient accordingly and communicate properly with him or her.</p>
<p>According to us this is the best ethical solution, since the user is still able to decide for himself, while the robot is just there to help in case something is really going wrong. All the values the other groups had are reflected in this solution, since we arrived at it during the debate with them.</p>
<h1>Reflection based on historical analysis</h1>
<p>Solution 1:</p>
<p>One of the main pros of this solution is that the patient still decides what he does; if a doctor comes, the patient still has matters in his own hands, since you can talk to such a person in a way you cannot with a robot. The cons, however, are that right now it is almost impossible to build such an AI for a robot, though it might become possible in the near future. Besides this, it is also possible to misuse such a robot, and a doctor has to be ready in case he gets called (so the robots cannot fully replace the human (yet)).</p>
<p>Solution 2:</p>
<p>The cons of this solution are much the same as those of the first solution, because the same rules apply to what the robot should do. The only difference is that the robot will not call a doctor and the user can do whatever he wants. That&rsquo;s the pro of this solution: the user has control over the situation instead of the robot or doctor.</p>
<p>Solution 3:</p>
<p>This solution is easier to implement and build than the other two, but it also has fewer capabilities. The robot will only be able to monitor and send that information to experts; this is of course nice for the patient, but privacy should be taken care of as well. The lifespan of these robots is very long, which is one of the pros, since you don&rsquo;t need to replace them every now and then. This solution isn&rsquo;t as safe as the other two, however, and is therefore not good enough.</p>
<p>We still think that the solution given in the debate section is the best one, since it is USE-proof and everybody is happy in that case. It also contains a large part of solution 1, which we had as our best solution as well. We only need the robots in case something goes really wrong, which is of course a good thing, since doctors aren&rsquo;t always available.</p>
<h1>Reflection and recommendation based on combined ethical &amp; historical analysis</h1>
<p>Our solutions to the ethical and the historical issues were quite similar; the main focus lay on the extent to which we could leave decision making up to robots. Could we go all the way, or should we always consult a real doctor? In the debate we quickly noticed that not every human is equal, and therefore not everyone will let robots make the same amount of decisions. As the designers of these systems, it is our obligation to create a solution that can fit the needs of every party and every user. Based on this, we came up with an adjustable solution, one where the individual can alter the actions of the robot. With such a solution we can give the users time to get used to this innovation and learn to trust the robots.</p>
<p>As mentioned earlier, our best solution for both the historical and the ethical problems of our design is the same, which made it very easy to reach a compromise. Our final solution states the following: the robot analyses the patient and comes up with a fitting solution; if the patient trusts and approves it, the robot can proceed to execute the proposed action; if not, the robot has to find an alternative. This goes on until a fitting solution is found, and if none can be found, the robot has to contact a human doctor so he or she can advise the patient. This way the patient can influence the decisions and contact the doctor if required, which creates the possibility to build trust between the patient and the robot. It also provides an opportunity to take the ethical interests of all parties into account.</p>
<h1>Reflection on the Engineering Cycle</h1>
<p>The engineering cycle consists of the following steps, in order. Since it's a cycle, you repeat step 1 after the last step, iterating on your design until it fulfils all criteria as well as possible. These steps can be linked to the phases we went through in our project as follows:</p>
<ol>
<li>Define the problem (phase 1 &amp; 2). First we defined our problem and chose our societal actor. This narrowed down exactly what our goals are and who our stakeholder is.</li>
<li>Do background research (phase 2). Next we researched the past, finding out how the past has treated our problem and realising how much leeway and which limits we have. This refines the problem further by showing us which possibilities are open to us and which aren't possible at all.</li>
<li>Specify the requirements (phase 2). Here we researched what exactly our solution needs to solve. We also determined our stakeholder's values and how strongly they prioritise each one of them. This narrows it down by showing us exactly which specifics our solution needs to fulfil.</li>
<li>Develop solutions (phase 3 &amp; 4). Next, we started by brainstorming for ideas, starting relatively broad. These ideas were then developed into better solutions, making sure they still address our problem definition and specifics and fit within the leeway we have, while removing irrelevant aspects and ideas that could hinder us. In doing so, we narrowed down to these few solutions.</li>
<li>Choose the best solution (phase 4 &amp; 5). Lastly, we looked back on our solutions via various ethical viewpoints, such as Kantian theory and utilitarianism as well as virtue ethics. Using these, we decided which solution would solve our problem best while remaining within the required ethical norms, narrowing down to exactly what we were looking for.</li>
</ol>
<p>This approach creates various advantages, both in the redesigning of technology and in the societal debate. Firstly, since these steps build on each other, it helps avoid excess work and effort and promotes efficiency. Each step is a foundation on which you can build the next. By starting with a strong foundation, the next step was easier to do, costing us less effort and avoiding too much repetition in consecutive steps. With each step building on the previous one, every consecutive step gets these same benefits, becoming progressively easier and avoiding significant amounts of excess work.</p>
<p>Secondly, the engineering cycle also ensures a good structure. We started out with the problem definition. Then followed background research and the specification of our requirements for the solution. Once these were in place, our solutions were developed, building on the previous steps as pointed out above. Lastly, we chose our best solution. By following this structure, our research follows a pyramid principle, gradually narrowing down from the top until we reach what we were looking for.</p>
<p>Thirdly, the cycle requires us to look back at what we've done afterwards. Maybe our problem definition was incorrect, or our specifics should have been different. This forces us to create new iterations of our solutions. In doing so, we remove issues that previous prototypes may have had and end up with a more refined and relevant solution to the actual problem.</p>
<p>Because of these reasons, the engineering cycle is a very useful and relevant tool for redesigning technology, as well as for addressing societal debates.</p>
<h1>References</h1>
<p>[1] Deng, B. (2015, July 1). Machine ethics: The robot&rsquo;s dilemma. Retrieved May 7, 2016, from <a href="http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881">http://www.nature.com/news/machine-ethics-the-robot-s-dilemma-1.17881</a></p>
<p>[2] Medscape. (n.d.). Retrieved from <a href="http://www.medscape.com/viewarticle/531748">http://www.medscape.com/viewarticle/531748</a></p>
<p>[3] Koors, N. (2014, July 1). Atrium MC: DaVinci Robot. Retrieved May 7, 2016, from <a href="https://www.atriummc.nl/home/patienten-bezoekers/poliklinieken-afdelingen/urologie/davinci-robot/">https://www.atriummc.nl/home/patienten-bezoekers/poliklinieken-afdelingen/urologie/davinci-robot/</a></p>
<p>[4] Sanchez, M. (2012, August 8). The History of Medical Robots! Retrieved May 6, 2016, from <a href="https://neurochangers.com/2012/08/09/the-history-of-medical-robots/">https://neurochangers.com/2012/08/09/the-history-of-medical-robots/</a></p>
<p>[5] Medscape. (n.d.). Retrieved from <a href="http://www.medscape.com/viewarticle/552230_2">http://www.medscape.com/viewarticle/552230_2</a></p>
<p>[6] History of Robotic Surgery. (n.d.). Retrieved May 6, 2016, from <a href="http://biomed.brown.edu/Courses/BI108/BI108_2004_Groups/Group02/Group%2002%20Website/history_robotic.htm">http://biomed.brown.edu/Courses/BI108/BI108_2004_Groups/Group02/Group 02 Website/history_robotic.htm</a></p>
<p>[7] Ruurda, J., Hillegersberg, R. V., Velde, S. V., &amp; Broeders, I. (2014, November 6). Met een robot opereren gaat gewoon beter [Operating with a robot is simply better]. Retrieved May 7, 2016, from <a href="http://www.medischcontact.nl/archief-6/Tijdschriftartikel/147102/Met-een-robot-opereren-gaat-gewoon-beter.htm">http://www.medischcontact.nl/archief-6/Tijdschriftartikel/147102/Met-een-robot-opereren-gaat-gewoon-beter.htm</a></p>
<p>[8] Is a citizen's safety more important then a citizen's privacy? (n.d.). Retrieved May 17, 2016, from <a href="http://www.debate.org/opinions/is-a-citizens-safety-more-important-then-a-citizens-privacy">http://www.debate.org/opinions/is-a-citizens-safety-more-important-then-a-citizens-privacy</a></p>
<p>[9] Three Laws of Robotics. (n.d.). Retrieved May 17, 2016, from <a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics">https://en.wikipedia.org/wiki/Three_Laws_of_Robotics</a></p>
<p>[10] MacRae, M. (2012, May). The Robo-Doctor Will See You Now. Retrieved June 15, 2016, from <a href="https://www.asme.org/engineering-topics/articles/robotics/robo-doctor-will-see-you-now">https://www.asme.org/engineering-topics/articles/robotics/robo-doctor-will-see-you-now</a></p>
<p>[11] Cleverbot. (n.d.). Retrieved June 6, 2016, from <a href="https://en.wikipedia.org/wiki/Cleverbot">https://en.wikipedia.org/wiki/Cleverbot</a></p>
<p>[12] Aron, J. (2011, September 6). Software tricks people into thinking it is human. Retrieved June 6, 2016, from <a href="https://www.newscientist.com/article/dn20865-software-tricks-people-into-thinking-it-is-human/">https://www.newscientist.com/article/dn20865-software-tricks-people-into-thinking-it-is-human/</a></p>
<p>[13] Russell, J. (2016, March 15). Google AI beats Go world champion again to complete historic 4-1 series victory. Retrieved June 6, 2016, from <a href="http://techcrunch.com/2016/03/15/google-ai-beats-go-world-champion-again-to-complete-historic-4-1-series-victory/">http://techcrunch.com/2016/03/15/google-ai-beats-go-world-champion-again-to-complete-historic-4-1-series-victory/</a></p>
<p>[14] Healthline Editorial Team. (2014, July 28). Headache Warning Signs (K. R. Hirsch, Ed.). Retrieved June 6, 2016, from <a href="http://www.healthline.com/health/headache-warning-signs#Sensitivity6">http://www.healthline.com/health/headache-warning-signs#Sensitivity6</a></p>