
Karl 1

Eddie Karl

Prof. Martin

ENGLI 1102

6 May 2019

Artificial Intelligence in Medicine

New technology is evolving rapidly, and feats once thought impossible or improbable are now becoming mainstream. These once-impossible feats include things like virtual reality,

foldable screens, and even wireless phones capable of fitting in your pocket. One new piece of

groundbreaking technology that has been under development is machine learning, a form of artificial intelligence. Artificial intelligence is an automated computer system that can think and react on its own. In other words, it is similar to a brain, one that may one day surpass our own in cognitive ability. Even though it is still in its infancy, artificial

intelligence has been applied and is currently being used in a number of unique scenarios and

across many lines of work. It is being used in self-driving cars, speech recognition software, and personal home assistants such as the Amazon Echo, and it is even starting to be applied in

the medical field. These may all sound like positive advancements, but important issues need to be addressed, especially concerning the implementation of artificial intelligence in the medical field. These issues include, but are not limited to, data privacy, human interaction, and liability for stolen data. It is not right to

incorporate artificial intelligence computer systems into the medical field due to the problems

and ethical issues associated with it.

First and foremost, what is perhaps the largest ethical issue surrounding the

implementation of artificial intelligence into the medical field revolves around the topic of data

privacy. According to Michael Rigby, a PhD candidate in molecular neuroscience and a fifth-year student in the Medical Scientist Training Program, in his article "Ethical Dimensions of

Using Artificial Intelligence in Health Care”, “Artificial intelligence can be applied to almost any

field in medicine… its potential contributions seem limitless… and with its robust ability to

integrate and learn from large sets of clinical data, AI can serve roles in diagnosis, clinical

decision making, and personalized medicine" (Rigby para. 2). This is important because, while the use of artificial intelligence in the medical field may come with many benefits, those benefits can only make an impact if the AI has data to work from. This means that vast amounts of patients' data from around the world would need to be entered into the system for it to work at its highest potential, and with the storage of such large quantities of patients' medical data come serious data privacy concerns. Effy

Vayena, who earned her PhD in Medical History and Bioethics at the University of Minnesota, agrees in her article "Machine Learning in Medicine: Addressing Ethical Challenges."

She states, “Developers [should] pay close attention to ethical and regulatory restrictions at each

stage of data processing. Data provenance and consent for use and reuse are of particular

importance, especially for [machine learning] that requires considerable amounts and large

varieties of data" (Vayena para. 2). A system storing vast amounts of potentially sensitive medical data is at high risk of being targeted by hackers looking to steal that information. This raises serious concerns, especially for the patients whose data could be stolen. There would need

to be multiple ways of securing medical data so that cases like this do not happen. Without the

proper security, the usage of artificial intelligence in the medical field would be largely

unethical.

Building on the issue of stolen data is another piece of the ethical puzzle: the question of who is responsible, or liable, for that data. As Mike Barlow, an award-winning

author and journalist with expertise in information technology, said in his book titled “AI and

Medicine”, “It’s not clear which parties would be held accountable when something goes wrong

with an [artificial intelligence] or [machine learning] system. Who bears the risk? Who is

responsible and who pays for damages? Can an AI system be sued for malpractice?” (Barlow

para. 37). There is confusion surrounding this problem because, in theory, the artificial intelligence system could belong to many different groups. Would the company that created the system be responsible for any stolen or lost data? Or would the medical facility that is currently employing the system be liable? This confusion causes problems with the

implementation and security of artificial intelligence and is something that absolutely must be

figured out before machine learning systems are implemented into medical care around the

globe.

Another problem with using artificial intelligence to analyze sensitive medical data is the artificial intelligence itself. A question posed by Diana Zandi, who is affiliated with

the World Health Organization, in her article "New Ethical Challenges of Digital Technologies, Machine Learning and Artificial Intelligence in Public Health: A Call for Papers," captures this issue.

She states, “Do we know enough about the algorithms that are being used to generate the

outputs? … What are the limitations, level of inclusiveness and quality of data sets used for self-

learning artificial intelligence algorithms?” (Zandi para. 4). These are incredibly important

questions, especially in the area of advanced technology. Without proper testing of the algorithms being used, we may only discover their full capabilities when it is too late. Algorithms are very powerful pieces of code, especially the ones used in artificial intelligence. This is why it is not yet fully ethical to feed them millions of patients' medical records from around the globe; something could happen that we are not prepared for.

The ethical issues associated with the implementation of artificial intelligence into the

medical field are not limited to privacy and software issues; they also encompass the effects that this will have on real doctor-patient interactions. According to Anita Ho, author

of numerous articles written for the Wiley Online Library, in her article “Deep Ethical Learning:

Taking the Interplay of Human and Artificial Intelligence Seriously”, “an overreliance on virtual

technologies may depersonalize medical interactions and erode therapeutic relationships” (Ho

para. 2). Here she is talking about the relationships formed between the medical professional and the patient. By using artificial intelligence in medicine, this connection could take a significant hit. It is important that people receive human interaction during medical care in order to feel comforted and safe, and interacting with an artificial intelligence system would not provide this to the patient. This is

especially important when it comes to therapeutic relationships, as Ho stated. The whole point of a therapeutic relationship is that one-on-one, person-to-person interaction, which is impossible if the patient is only interacting with an advanced computer system. Another point

that Ho brought up in her article was how “The increasing expectation that patients will be

actively engaged in their own care, regardless of the patients’ desire, technological literacy, and

economic means, may also violate patients’ autonomy and exacerbate access disparity” (Ho para.

2). This shows that not only could patients lose their one-on-one connection with another person during medical visits, but they could also be forced into such a system by changing times. This would be especially unethical for older people, many of whom would be uncomfortable with the prospect of having to interact solely with a machine instead of a doctor to get the care they need.

It is also important to consider the viewpoint of the people whom this artificial intelligence system would be assisting, or possibly even replacing: the medical professionals themselves. The biggest issue that arises with medical

professionals and artificial intelligence revolves around trust. According to Omar Khan, who holds a PhD in Chemical Engineering and currently works at the Massachusetts Institute of Technology,

in his article “Artificial Intelligence in Medicine: What Oncologists Need to Know about Its

Potential — and Its Limitations”, “Another major consideration is the acceptance of AI by

medical professionals and patients. The degree to which people trust automated systems depends

on human factors, machine factors and environmental factors" (Khan para. 24). It is important that a system like this be trusted and accepted by the professionals; otherwise, why would the patients trust it either? Khan goes on to make another point: "In particular, the transparency of

logic behind an automated system’s decision correlates with its perceived trustworthiness. As a

result, the black box effect (due to its lack of transparency) is a major hindrance to widespread

adoption of neural networks" (Khan para. 24). Since medical professionals themselves have a hard time trusting an artificial intelligence system, it would be unethical to force it upon them, and even more unethical to use an untrusted system on patients. Doctors are trusted by the people they care for, and there is a large chance they would lose that vital trust with their patients by using such a system.

Despite the evidence showing how unethical the use of artificial intelligence in the medical field would be, its advocates have offered several points in its favor. According to Cade Metz, a technology correspondent for the New York

Times, in his article “A.I. Shows Promise Assisting Physicians”, “Drawing on the records of

nearly 600,000 Chinese patients… and because its privacy norms put fewer restrictions on the

sharing of digital data — it may be easier for Chinese companies and researchers to build and

train the "deep learning" systems" (Metz para. 6). This may sound like a good thing at first glance, but it also demonstrates a direct loss of privacy for the patients whose medical records are being used, which is incredibly unethical. No matter what a country's data privacy restrictions are, the loss of this privacy is still unethical, especially since the data is being fed into a machine that could be used globally and not just in that one country.

Another argument in favor of artificial intelligence in the medical field encompasses the

relationship between the system and the medical professional. According to Irene Chen, a doctoral student in electrical engineering at the Massachusetts Institute of Technology, in the article "Can AI Help Reduce Disparities in General Medical and Mental Health Care?", "AI and

machine learning may enable faster, more accurate, and more comprehensive health care. We

believe a closely cooperative relationship between clinicians and AI—rather than a competitive

one—is necessary for illuminating areas of disparate health care impact" (Chen para. 24). This also seems like a good thing at first glance, but like the previous argument, it falls short for an ethical reason already discussed. It was established that many medical professionals have a hard time forming a trusting relationship with artificial intelligence, so it would be very difficult to form the closely cooperative relationship that Chen describes.

It is unethical to incorporate artificial intelligence computer systems into the medical field due to a number of serious problems. There is a violation of the privacy of the patients whose

data is entered into these machine learning systems. There is confusion around who is actually

liable and responsible for the data entered into these systems in the event that it is stolen. Because these systems are so advanced, it is unknown exactly what artificial intelligence is capable of or how it could affect us and evolve. There would be a degradation of person-to-person contact between patients and medical professionals, and therapeutic relationships would suffer as a result. Finally, artificial intelligence systems in medicine are not even fully trusted by the people they are meant to assist. The world of technology is still taking its first baby

steps into the capabilities of machine learning. There are a multitude of problems and ethical

issues that need to be addressed and resolved before it will be ready to be applied to medicine.

Once these issues are fixed, however, the benefits provided will be an extraordinary leap for the medical field. Advanced artificial intelligence systems are not yet ready to be incorporated into medicine, and they will not be ready for a long time to come.



Works Cited

Barlow, Mike. AI and Medicine. O'Reilly Media, 2016, O'Reilly,

learning.oreilly.com/library/view/ai-and-medicine/9781492048954/ch01.html.

Chen, Irene Y., et al. “Can AI Help Reduce Disparities in General Medical and Mental Health

Care?” AMA Journal of Ethics, vol. 21, no. 2, Feb. 2019, pp. 167–179. EBSCOhost,

doi:10.1001/amajethics.2019.167.

Ho, Anita. “Deep Ethical Learning: Taking the Interplay of Human and Artificial Intelligence

Seriously.” Hastings Center Report, vol. 49, no. 1, Jan. 2019, pp. 36–39. EBSCOhost,

doi:10.1002/hast.977.

Khan, Omar F., et al. “Artificial Intelligence in Medicine: What Oncologists Need to Know

about Its Potential — and Its Limitations.” Oncology Exchange, vol. 16, no. 4, Nov.

2017, pp. 8–13. EBSCOhost,

cod.idm.oclc.org/login?url=http://search.ebscohost.com/login.aspx?direct=true&db=a9h

&AN=126798907&site=ehost-live&scope=site.

Metz, Cade. "A.I. Shows Promise Assisting Physicians." The New York Times, 11 Feb. 2019, https://www.nytimes.com/2019/02/11/health/artificial-intelligence-medical-diagnosis.html. Accessed 16 Feb. 2019.

Rigby, Michael J. “Ethical Dimensions of Using Artificial Intelligence in Health Care.” AMA

Journal of Ethics, vol. 21, no. 2, Feb. 2019, pp. 121–124. EBSCOhost,

doi:10.1001/amajethics.2019.121.

Vayena, Effy, et al. “Machine Learning in Medicine: Addressing Ethical Challenges.” PLoS

Medicine, vol. 15, no. 11, Nov. 2018, pp. 1–4. EBSCOhost,

doi:10.1371/journal.pmed.1002689.

Zandi, Diana, et al. “New Ethical Challenges of Digital Technologies, Machine Learning and

Artificial Intelligence in Public Health: A Call for Papers.” Bulletin of the World Health

Organization, vol. 97, no. 1, Jan. 2019, p. 2. EBSCOhost, doi:10.2471/BLT.18.227686.