
Research ethics involves the application of fundamental ethical principles to a variety of topics involving research, including scientific research. These include the design and implementation of research involving human experimentation and animal experimentation, various aspects of academic scandal, including scientific misconduct (such as fraud, fabrication of data, and plagiarism), whistleblowing, the regulation of research, and so on. Research ethics is most developed as a concept in medical research. The key agreement here is the 1964 Declaration of Helsinki. The Nuremberg Code is an earlier agreement, but many of its provisions remain important. Research in the social sciences presents a different set of issues from those in medical research.

The academic research enterprise is built on a foundation of trust. Researchers trust that the results reported by others are sound. Society trusts that the results of research reflect an honest attempt by scientists and other researchers to describe the world accurately and without bias. But this trust will endure only if the scientific community devotes itself to exemplifying and transmitting the values associated with ethical research conduct.

There are many ethical issues to be taken into serious consideration in research. Sociologists need to be aware of their responsibility to secure the genuine permission and interests of all those involved in the study. They should not misuse any of the information discovered, and they should maintain a moral responsibility toward the participants. There is a duty to protect the rights of people in the study as well as their privacy and sensitivity. The confidentiality of those involved in the observation must be maintained, keeping their anonymity and privacy secure. As pointed out in the BSA guidelines for sociology, all of these ethical obligations must be honored unless there are overriding reasons not to, for example, the discovery of illegal or terrorist activity.

Research ethics in a medical context is dominated by principlism, an approach that has been criticized as decontextualized. Research ethics also differs across academic communities, each of which has its own set of norms. In anthropology, research ethics were developed to protect those who are being researched and to protect the researcher from topics or events that may be unsafe or may make either party feel uncomfortable; these guidelines are widely observed by anthropologists, especially when doing ethnographic fieldwork.

Research informants participating in individual or group interviews, as well as in ethnographic fieldwork, are often required to sign an informed consent form which outlines the nature of the project. Informants are typically assured anonymity and are referred to using pseudonyms. There is, however, growing recognition that these formal measures are insufficient and do not by themselves make a research project 'ethical'. Research with people should therefore not be based solely on dominant and decontextualized understandings of ethics, but should be negotiated reflexively and through dialogue with participants as a way to bridge global and local understandings of research ethics. Furthermore, it is the researcher's ethical responsibility not to harm the humans or animals they are studying; researchers also have a responsibility to science, to the public, and to future students.

In terms of research publications, key issues include, but are not restricted to, the following:

Honesty. Honesty and integrity are the duty of every author, expert reviewer, and member of a journal's editorial board.
Review process. The peer-review process contributes to quality control and is an essential step in ascertaining the standing and originality of the research.
Ethical standards. Recent journal editorials have described experiences with unscrupulous activities.
Authorship. Who may claim a right to authorship?[5] In which order should the authors be listed?

What is Ethics in Research & Why is it Important?


When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou shalt not kill..."), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life, and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, then why are there so many ethical disputes and issues in our society? One plausible explanation of these disagreements is that all people recognize some common ethical norms but different individuals interpret, apply, and balance these norms in different ways in light of their own values and life experiences.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards, and ethical and legal rules use similar concepts, it is important to remember that ethics and law are not the same. An action may be legal but unethical, or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers urged citizens to disobey laws in order to protest what they regarded as immoral or unjust laws. Peaceful civil disobedience is an ethical way of expressing political viewpoints.

Another way of defining 'ethics' focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem.
While an economist might examine the costs and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.

Many different disciplines, institutions, and professions have norms for behavior that suit their particular aims and goals. These norms also help members of the discipline to coordinate their actions or activities and to establish the public's trust in the discipline. For instance, ethical norms govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and help avoid error. Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely. Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable to the public. Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of the research. Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his or her own health and safety or the health and safety of staff and students.
Codes and Policies for Research Ethics
Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies, such as the National Institutes of Health (NIH), the National Science Foundation (NSF), the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), and the US Department of Agriculture (USDA), have ethics rules for funded researchers. Other influential research ethics policies include the Uniform Requirements for Manuscripts Submitted to Biomedical Journals (International Committee of Medical Journal Editors), the Chemist's Code of Conduct (American Chemical Society), the Code of Ethics (American Society for Clinical Laboratory Science), the Ethical Principles of Psychologists (American Psychological Association), the Statements on Ethics and Professional Responsibility (American Anthropological Association), the Statement on Professional Ethics (American Association of University Professors), the Nuremberg Code, and the Declaration of Helsinki (World Medical Association).

The following is a rough and general summary of some ethical principles that various codes address:
Honesty
Strive for honesty in all scientific communications. Honestly report data, results, methods and
procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive
colleagues, granting agencies, or the public.
Objectivity
Strive to avoid bias in experimental design, data analysis, data interpretation, peer
review, personnel decisions, grant writing, expert testimony, and other aspects of research where
objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal
or financial interests that may affect research.
Integrity
Keep your promises and agreements; act with sincerity; strive for consistency of thought and
action.
Carefulness
Avoid careless errors and negligence; carefully and critically examine your own
work and the work of your peers. Keep good records of research activities, such as data
collection, research design, and correspondence with agencies or journals.
Openness
Share data, results, ideas, tools, and resources. Be open to criticism and new ideas.
Respect for Intellectual Property
Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data,
methods, or results without permission. Give credit where credit is due. Give proper
acknowledgement or credit for all contributions to research. Never plagiarize.
Confidentiality
Protect confidential communications, such as papers or grants submitted for publication,
personnel records, trade or military secrets, and patient records.
Responsible Publication
Publish in order to advance research and scholarship, not just to advance your own career. Avoid
wasteful and duplicative publication.
Responsible Mentoring
Help to educate, mentor, and advise students. Promote their welfare and allow them to make
their own decisions.
Respect for Colleagues
Respect your colleagues and treat them fairly.
Social Responsibility
Strive to promote social good and prevent or mitigate social harms
through research, public education, and advocacy.
Non-Discrimination
Avoid discrimination against colleagues or students on the basis of sex,
race, ethnicity, or other factors that are not related to their scientific competence and integrity.
Competence
Maintain and improve your own professional competence and expertise through lifelong
education and learning; take steps to promote competence in science as a whole.
Legality
Know and obey relevant laws and institutional and governmental policies.
Animal Care
Show proper respect and care for animals when using them in research. Do not conduct
unnecessary or poorly designed animal experiments.
Human Subjects Protection
When conducting research on human subjects, minimize harms and risks and maximize benefits;
respect human dignity, privacy, and autonomy; take special precautions with vulnerable
populations; and strive to distribute the benefits and burdens of research fairly.
Ethical Decision Making in Research
Although codes, policies, and principles are very important and useful, like any set of rules, they
do not cover every situation, they often conflict, and they require considerable interpretation. It is
therefore important for researchers to learn how to interpret, assess, and apply various research
rules and how to make decisions and to act in various situations. The vast majority of decisions
involve the straightforward application of ethical rules. For example, consider the following
case:

Case 1:
The research protocol for a study of a drug on hypertension requires the administration of the
drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine
toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test.
However, he really wants to finish his work in time to go to Florida on spring break with his
friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all
of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5
additional results. Many different research ethics policies would hold that Tom has acted
unethically by fabricating data. If this study were sponsored by a federal agency, such as the
NIH, his actions would constitute a form of research misconduct, which the government defines
as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify
as unethical are viewed as misconduct. It is important to remember, however, that misconduct
occurs only when researchers intend to deceive: honest errors related to sloppiness, poor record
keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct.
Also, reasonable disagreements about research methods, procedures, and interpretations do not
constitute research misconduct. Consider the following case:
Case 2:
Dr. T has just discovered a mathematical error in a paper that has been accepted for publication
in a journal. The error does not affect the overall results of his research, but it is potentially
misleading. The journal has just gone to press, so it is too late to catch the error before it appears
in print. In order to avoid embarrassment, Dr. T decides to ignore the error. Dr. T's error is not misconduct, nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes, including ECU's policies, would say that Dr. T should tell the journal about the error and consider publishing a correction or erratum. Failing to publish a
correction would be unethical because it would violate norms relating to honesty and objectivity
in research. There are many other activities that the government does not define as "misconduct"
but which are still regarded by most researchers as unethical. These are called "other deviations"
from acceptable research practices and include:

Publishing the same paper in two different journals without telling the editors
Submitting the same paper to different journals without telling the editors
Not informing a collaborator of your intent to file a patent in order to make sure that you are
the sole inventor
Including a colleague as an author on a paper in return for a favor even though the colleague
did not make a serious contribution to the paper
Discussing with your colleagues confidential data from a paper that you are reviewing for a
journal
Trimming outliers from a data set without discussing your reasons in the paper

Using an inappropriate statistical technique in order to enhance the significance of your research
Bypassing the peer review process and announcing your results through a press conference
without giving peers adequate information to review your work
Conducting a review of the literature that fails to acknowledge the contributions of other
people in the field or relevant prior work
Stretching the truth on a grant application in order to convince reviewers that your project will
make a significant contribution to the field
Stretching the truth on a job application or curriculum vita
Giving the same research project to two graduate students in order to see who can do it the
fastest
Overworking, neglecting, or exploiting graduate or post-doctoral students
Failing to keep good research records
Failing to maintain research data for a reasonable period of time
Making derogatory comments and personal attacks in your review of an author's submission
Promising a student a better grade for sexual favors
Using a racist epithet in the laboratory
Making significant deviations from the research protocol approved by your institution's
Animal Care and Use Committee or Institutional Review Board for Human Subjects Research
without telling the committee or the board
Not reporting an adverse event in a human research experiment
Wasting animals in research
Exposing students and staff to biological risks in violation of your institution's biosafety rules
Rejecting a manuscript for publication without even reading it
Sabotaging someone's work
Stealing supplies, books, or data
Rigging an experiment so you know how it will turn out
Making unauthorized copies of data, papers, or computer programs
Owning over $10,000 in stock in a company that sponsors your research and not disclosing
this financial interest
Deliberately overestimating the clinical significance of a new drug in order to obtain
economic benefits
These actions would be regarded as unethical by most scientists and some might even be illegal.
Most of these would also violate different professional ethics codes or institutional policies.
However, they do not fall into the narrow category of actions that the government classifies as
research misconduct. Indeed, there has been considerable debate about the definition of "research
misconduct" and many researchers and policy makers are not satisfied with the government's
narrow definition that focuses on FFP. However, given the huge list of potential offenses that
might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to
limit their focus. Finally, situations frequently arise in research in which different people
disagree about the proper course of action and there is no broad consensus about what should be
done. In these situations, there may be good arguments on both sides of the issue and different
ethical principles may conflict. These situations create difficult decisions for researchers, known
as ethical dilemmas. Consider the following case:
Case 3:
Dr. Wexford is the principal investigator of a large, epidemiological study on the health of 5,000
agricultural workers. She has an impressive dataset that includes information on demographics,
environmental exposures, diet, genetics, and various disease outcomes such as cancer,
Parkinson's disease (PD), and ALS. She has just published a paper on the relationship between
pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers
from her dataset. She receives a request from another research team that wants access to her
complete dataset. They are interested in examining the relationship between pesticide exposures
and skin cancer. Dr. Wexford was planning to conduct a study on this topic. Dr. Wexford faces
a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with
the other research team. Her funding agency may also have rules that obligate her to share data.
On the other hand, if she shares data with the other team, they may publish results that she was
planning to publish, thus depriving her (and her team) of recognition and priority. It seems that
there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to
think about what she should do. One possible option is to share data, provided that the
investigators sign a data use agreement. The agreement could define allowable uses of the data,
publication plans, authorship, etc. The following are some steps that researchers, such as Dr.
Wexford, can take to deal with ethical dilemmas in research:
What is the problem or issue?
It is always important to get a clear statement of the problem. In this case, the issue is whether to
share information with the other research team.
What is the relevant information?
Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford
needs to have more information concerning such matters as university or funding agency policies
that may apply to this situation, the team's intellectual property interests, the possibility of
negotiating some kind of agreement with the other team, whether the other team also has some
information it is willing to share, etc. Will the public/science be better served by the additional
research?

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In
this case, there may be another choice besides 'share' or 'don't share,' such as 'negotiate an
agreement.'
How do ethical codes or policies as well as legal rules apply to these different options?
The university or funding agency may have policies on data management that apply to this case.
Broader ethical rules, such as openness and respect for credit and intellectual property, may also
apply to this case. Laws relating to intellectual property may be relevant.
Are there any people who can offer ethical advice?
It may be useful to seek advice from a colleague, a senior researcher, your department chair, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and
research team before making a decision.
After considering these questions, a person facing an ethical dilemma may decide to ask more
questions, gather more information, explore different options, or consider other ethical rules.
However, at some point he or she will have to make a decision and then take action. Ideally, a
person who makes a decision in an ethical dilemma should be able to justify his or her decision
to himself or herself, as well as colleagues, administrators, and other people who might be
affected by the decision. He or she should be able to articulate reasons for his or her conduct and
should consider the following questions in order to explain how he or she arrived at his or her
decision:

Which choice could stand up to further publicity and scrutiny?
Which choice could you not live with?
Think of the wisest person you know. What would he or she do in this situation?
Which choice would be the most just, fair, or responsible?
Which choice will probably have the best overall consequences?
After considering all of these questions, one still might find it difficult to decide what to do. If
this is the case, then it may be appropriate to consider other ways of making the decision, such
as going with one's gut feeling, seeking guidance through prayer or meditation, or even flipping a
coin. Endorsing these methods in this context need not imply that ethical decisions are irrational
or that these other methods should be used only as a last resort. The main point is that human
reasoning plays a pivotal role in ethical decision-making but there are limits to its ability to solve
all ethical dilemmas in a finite amount of time.
Promoting Ethical Conduct in Science
Many of you may be wondering why you are required to have training in research ethics. You
may believe that you are highly ethical and know the difference between right and wrong. You
would never fabricate or falsify data or plagiarize. Indeed, you also may believe that most of

your colleagues are highly ethical and that there is no ethics problem in research. If you feel this
way, relax. No one is accusing you of acting unethically. Indeed, the best evidence we have
shows that misconduct is a very rare occurrence in research, although there is considerable
variation among various estimates. The rate of misconduct has been estimated to be as low as
0.01% of researchers per year (based on confirmed cases of misconduct in federally funded
research) to as high as 1% of researchers per year (based on self-reports of misconduct on
anonymous surveys). Clearly, it would be useful to have more data on this topic, but so far there
is no evidence that science has become ethically corrupt. However, even if misconduct is rare, it
can have a tremendous impact on research. Consider an analogy with crime: it does not take
many murders or rapes in a town to erode the community's sense of trust and increase the
community's fear and paranoia. The same is true with the most serious crimes in science, i.e.
fabrication, falsification, and plagiarism. However, most of the crimes committed in science are probably not tantamount to murder or rape; they are ethically significant misdeeds that are
classified by the government as 'deviations.' Moreover, there are many situations in research that
pose genuine ethical dilemmas. Will training and education in research ethics help reduce the
rate of misconduct in science? It is too early to tell. The answer to this question depends, in part,
on how one understands the causes of misconduct. There are two main theories about why
researchers commit misconduct. According to the "bad apple" theory, most scientists are highly
ethical. Only researchers who are morally corrupt, economically desperate, or psychologically
disturbed commit misconduct. Moreover, only a fool would commit misconduct because
science's peer review system and self-correcting mechanisms will eventually catch those who try
to cheat the system. In any case, a course in research ethics will have little impact on "bad
apples," one might argue. According to the "stressful" or "imperfect" environment theory,
misconduct occurs because various institutional pressures, incentives, and constraints encourage
people to commit misconduct, such as pressures to publish or obtain grants or contracts, career
ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor
oversight of researchers. Moreover, defenders of the stressful environment theory point out that
science's peer review system is far from perfect and that it is relatively easy to cheat the system.
Erroneous or fraudulent research often enters the public record without being detected for years.
To the extent that the research environment is an important factor in misconduct, a course in
research ethics is likely to help people get a better understanding of these stresses, sensitize
people to ethical concerns, and improve ethical judgment and decision making. Misconduct
probably results from environmental and individual causes, i.e. when people who are morally
weak, ignorant, or insensitive are placed in stressful or imperfect environments. In any case, a
course in research ethics is useful in helping to prevent deviations from norms even if it does not
prevent misconduct. Many of the deviations that occur in research may occur because
researchers simply do not know or have never thought seriously about some of the ethical norms
of research. For example, some unethical authorship practices probably reflect years of tradition
in the research community that have not been questioned seriously until recently. If the director

of a lab is named as an author on every paper that comes from his lab, even if he does not make a
significant contribution, what could be wrong with that? That's just the way it's done, one might
argue. If a drug company uses ghostwriters to write papers "authored" by its physician-employees, what's wrong with this practice? Ghostwriters help write all sorts of books these
days, so what's wrong with using ghostwriters in research? Another example where there may be
some ignorance or mistaken traditions is conflicts of interest in research. A researcher may think
that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee
from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a
university administrator sees no ethical problem in taking a large gift with strings attached from
a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a
$300 finder's fee for referring patients into a clinical trial. If "deviations" from ethical conduct
occur in research as a result of ignorance or a failure to reflect critically on problematic
traditions, then a course in research ethics may help reduce the rate of serious deviations by
improving the researcher's understanding of ethics and by sensitizing him or her to the issues.
Finally, training in research ethics should be able to help researchers grapple with ethical
dilemmas by introducing researchers to important concepts, tools, principles, and methods that
can be useful in resolving these dilemmas.
Not that long ago, academicians were often cautious about airing the ethical dilemmas they faced
in their research and academic work, but that environment is changing today. Psychologists in
academe are more likely to seek out the advice of their colleagues on issues ranging
from supervising graduate students to how to handle sensitive research data, says George Mason
University psychologist June Tangney, PhD. "There has been a real change in the last 10 years in
people talking more frequently and more openly about ethical dilemmas of all sorts," she
explains. Indeed, researchers face an array of ethical requirements: They must meet professional,
institutional and federal standards for conducting research with human participants, often
supervise students they also teach, and have to sort out authorship issues, just to name a few. Here
are five recommendations APA's Science Directorate gives to help researchers steer clear of
ethical quandaries:
1. Discuss intellectual property frankly
Academe's competitive "publish-or-perish" mindset can be a recipe for trouble when it comes
to who gets credit for authorship. The best way to avoid disagreements about who should get
credit and in what order is to talk about these issues at the beginning of a working relationship,
even though many people often feel uncomfortable about such topics.
"It's almost like talking about money," explains Tangney. "People don't want to appear to be
greedy or presumptuous."
APA's Ethics Code offers some guidance: It specifies that "faculty advisors discuss publication
credit with students as early as feasible and throughout the research and publication process as
appropriate." When researchers and students put such understandings in writing, they have a
helpful tool to continually discuss and evaluate contributions as the research progresses.

However, even the best plans can result in disputes, which often occur because people look at the
same situation differently. "While authorship should reflect the contribution," says APA Ethics
Office Director Stephen Behnke, JD, PhD, "we know from social science research that people
often overvalue their contributions to a project. We frequently see that in authorship-type
situations. In many instances, both parties genuinely believe they're right." APA's Ethics Code
stipulates that psychologists take credit only for work they have actually performed or to which
they have substantially contributed and that publication credit should accurately reflect the
relative contributions: "Mere possession of an institutional position, such as department chair,
does not justify authorship credit," says the code. "Minor contributions to the research or to the
writing for publications are acknowledged appropriately, such as in footnotes or in an
introductory statement."
The same rules apply to students. If they contribute substantively to the conceptualization,
design, execution, analysis or interpretation of the research reported, they should be listed as
authors. Contributions that are primarily technical don't warrant authorship. In the same vein,
advisers should not expect ex-officio authorship on their students' work.
Matthew McGue, PhD, of the University of Minnesota, says his psychology department has
instituted a procedure to avoid murky authorship issues. "We actually have a formal process here
where students make proposals for anything they do on the project," he explains. The process
allows students and faculty to more easily talk about research responsibility, distribution and
authorship. Psychologists should also be cognizant of situations where they have access to
confidential ideas or research, such as reviewing journal manuscripts or research grants, or
hearing new ideas during a presentation or informal conversation. While it's unlikely reviewers
can purge all of the information in an interesting manuscript from their thinking, it's still
unethical to take those ideas without giving credit to the originator. "If you are a grant reviewer
or a journal manuscript reviewer [who] sees someone's research [that] hasn't been published yet,
you owe that person a duty of confidentiality and anonymity," says Gerald P. Koocher, PhD,
editor of the journal Ethics and Behavior and co-author of "Ethics in Psychology: Professional
Standards and Cases" (Oxford University Press, 1998). Researchers also need to meet their
ethical obligations once their research is published: If authors learn of errors that change the
interpretation of research findings, they are ethically obligated to promptly correct the errors in a
correction, retraction, erratum or by other means. To be able to answer questions about study
authenticity and allow others to reanalyze the results, authors should archive primary data and
accompanying records for at least five years, advises University of Minnesota psychologist and
researcher Matthew McGue, PhD. "Store all your data. Don't destroy it," he says. "Because if
someone charges that you did something wrong, you can go back." "It seems simple, but this can
be a tricky area," says Susan Knapp, APA's deputy publisher. "The APA Publication Manual
Section 8.05 has some general advice on what to retain and suggestions about things to consider
in sharing data." The APA Ethics Code requires psychologists to release their data to others who
want to verify their conclusions, provided that participants' confidentiality can be protected and
as long as legal rights concerning proprietary data don't preclude their release. However, the code
also notes that psychologists who request data in these circumstances can only use the shared
data for reanalysis; for any other use, they must obtain a prior written agreement.
2. Be conscious of multiple roles
APA's Ethics Code says psychologists should avoid relationships that could reasonably impair
their professional performance or could exploit or harm others. But it also notes that many kinds

of multiple relationships aren't unethical--as long as they're not reasonably expected to have
adverse effects. That notwithstanding, psychologists should think carefully before entering into
multiple relationships with any person or group, such as recruiting students or clients as
participants in research studies or investigating the effectiveness of a product of a company
whose stock they own. For example, when recruiting students from your Psychology 101 course
to participate in an experiment, be sure to make clear that participation is voluntary. If
participation is a course requirement, be sure to note that in the class syllabus, and ensure that
participation has educative value by, for instance, providing a thorough debriefing to enhance
students' understanding of the study. The 2002 Ethics Code also mandates in Standard 8.04b that
students be given equitable alternatives to participating in research. Perhaps one of the most
common multiple roles for researchers is being both a mentor and lab supervisor to students they
also teach in class. Psychologists need to be especially cautious that they don't abuse the power
differential between themselves and students, say experts. They shouldn't, for example, use their
clout as professors to coerce students into taking on additional research duties. By outlining the
nature and structure of the supervisory relationship before supervision or mentoring begins, both
parties can avoid misunderstandings, says George Mason University's Tangney. It's helpful to
create a written agreement that includes both parties' responsibilities as well as authorship
considerations, intensity of the supervision and other key aspects of the job. "While that's the
ideal situation, in practice we do a lot less of that than we ought to," she notes. "Part of it is not
having foresight up front of how a project or research study is going to unfold." That's why
experts also recommend that supervisors set up timely and specific methods to give students
feedback and keep a record of the supervision, including meeting times, issues discussed and
duties assigned.
If psychologists do find that they are in potentially harmful multiple relationships, they are
ethically mandated to take steps to resolve them in the best interest of the person or group while
complying with the Ethics Code.

3. Follow informed-consent rules


When done properly, the consent process ensures that individuals are voluntarily participating in
the research with full knowledge of relevant risks and benefits. "The federal standard is that the
person must have all of the information that might reasonably influence their willingness to
participate in a form that they can understand and comprehend," says Koocher, dean of Simmons
College's School for Health Studies. APA's Ethics Code mandates that psychologists who
conduct research should inform participants about:
The purpose of the research, expected duration and procedures.
Participants' rights to decline to participate and to withdraw from the research once it has started,
as well as the anticipated consequences of doing so.
Reasonably foreseeable factors that may influence their willingness to participate, such as
potential risks, discomfort or adverse effects.
Any prospective research benefits.
Limits of confidentiality, such as data coding, disposal, sharing and archiving, and when
confidentiality must be broken.
Incentives for participation.
Who participants can contact with questions.
Experts also suggest covering the likelihood, magnitude and duration of harm or benefit of
participation, emphasizing that their involvement is voluntary and discussing treatment

alternatives, if relevant to the research. Keep in mind that the Ethics Code includes specific
mandates for researchers who conduct experimental treatment research. Specifically, they must
inform individuals about the experimental nature of the treatment, services that will or will not
be available to the control groups, how participants will be assigned to treatments and control
groups, available treatment alternatives and compensation or monetary costs of participation. If
research participants or clients are not competent to evaluate the risks and benefits of
participation themselves--for example, minors or people with cognitive disabilities--then the
person who's giving permission must have access to that same information, says Koocher.
Remember that a signed consent form doesn't mean the informing process can be glossed over,
say ethics experts. In fact, the APA Ethics Code says psychologists can skip informed consent in
two instances only: When permitted by law or federal or institutional regulations, or when the
research would not reasonably be expected to distress or harm participants and involves one of
the following:
The study of normal educational practices, curricula or classroom management methods
conducted in educational settings.
Anonymous questionnaires, naturalistic observations or archival research for which disclosure of
responses would not place participants at risk of criminal or civil liability or damage their
financial standing, employability or reputation, and for which confidentiality is protected.
The study of factors related to job or organization effectiveness conducted in organizational
settings for which there is no risk to participants' employability, and confidentiality is protected.
If psychologists are precluded from obtaining full consent at the beginning--for example, if the
protocol includes deception, recording spontaneous behavior or the use of a confederate--they
should be sure to offer a full debriefing after data collection and provide people with an
opportunity to reiterate their consent, advise experts. The code also says psychologists should
make reasonable efforts to avoid offering "excessive or inappropriate financial or other
inducements for research participation when such inducements are likely to coerce
participation."
4. Respect confidentiality and privacy
Upholding individuals' rights to confidentiality and privacy is a central tenet of every
psychologist's work. However, many privacy issues are idiosyncratic to the research population,
writes Susan Folkman, PhD, in "Ethics in Research with Human Participants" (APA, 2000). For
instance, researchers need to devise ways to ask whether participants are willing to talk about
sensitive topics without putting them in awkward situations, say experts. That could mean they
provide a set of increasingly detailed interview questions so that participants can stop if they feel
uncomfortable. And because research participants have the freedom to choose how much
information about themselves they will reveal and under what circumstances, psychologists
should be careful when recruiting participants for a study, says Sangeeta Panicker, PhD, director
of the APA Science Directorate's Research Ethics Office. For example, it's inappropriate to
obtain contact information of members of a support group to solicit their participation in
research. However, you could give your colleague who facilitates the group a letter to distribute
that explains your research study and provides a way for individuals to contact you, if they're
interested.
Other steps researchers should take include:

Discuss the limits of confidentiality. Give participants information about how their data will be
used, what will be done with case materials, photos and audio and video recordings, and secure
their consent.
Know federal and state law. Know the ins and outs of state and federal law that might apply to
your research. For instance, the Goals 2000: Education Act of 1994 prohibits asking children
about religion, sex or family life without parental permission.
Another example is that, while most states only require licensed psychologists to comply with
mandatory reporting laws, some laws also require researchers to report abuse and neglect. That's
why it's important for researchers to plan for situations in which they may learn of such
reportable offenses. Generally, research psychologists can consult with a clinician or their
institution's legal department to decide the best course of action.
Take practical security measures. Be sure confidential records are stored in a secure area with
limited access, and consider stripping them of identifying information, if feasible. Also, be aware
of situations where confidentiality could inadvertently be breached, such as having confidential
conversations in a room that's not soundproof or putting participants' names on bills paid by
accounting departments.
Think about data sharing before research begins. If researchers plan to share their data with
others, they should note that in the consent process, specifying how the data will be shared and
whether data will be anonymous. For example, researchers could have difficulty sharing
sensitive data they've collected in a study of adults with serious mental illnesses because they
failed to ask participants for permission to share the data. Or developmental data collected on
videotape may be a valuable resource for sharing, but unless a researcher asked permission back
then to share videotapes, it would be unethical to do so. When sharing, psychologists should use
established techniques when possible to protect confidentiality, such as coding data to hide
identities. "But be aware that it may be almost impossible to entirely cloak identity, especially if
your data include video or audio recordings or can be linked to larger databases," says Merry
Bullock, PhD, associate executive director in APA's Science Directorate. (A minimal illustration of this kind of identity coding appears after this list.)
Understand the limits of the Internet. Since Web technology is constantly evolving,
psychologists need to be technologically savvy to conduct research online and cautious when
exchanging confidential information electronically. If you're not an Internet whiz, get the help of
someone who is. Otherwise, it may be possible for others to tap into data that you thought was
properly protected.
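
The data-sharing item above mentions coding data to hide identities. As one hedged illustration, here is a minimal Python sketch, not taken from the APA guidance, of how a researcher might replace direct identifiers with random study codes before sharing a file; the file names and the identifying columns ("name", "email") are hypothetical assumptions for the example.

# Minimal sketch of "coding" a data file before sharing it with another team.
# Direct identifiers are replaced with random study codes; the code-to-identity
# linking key is written to a separate file that stays with the research team
# under restricted access and is never shared.

import csv
import secrets

IDENTIFYING_COLUMNS = ["name", "email"]  # assumed direct identifiers (hypothetical)

def pseudonymize(in_path, out_path, key_path):
    with open(in_path, newline="") as src:
        rows = list(csv.DictReader(src))
    if not rows:
        return

    shared_fields = ["participant_id"] + [c for c in rows[0] if c not in IDENTIFYING_COLUMNS]
    key_rows = []

    with open(out_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=shared_fields)
        writer.writeheader()
        for row in rows:
            code = secrets.token_hex(4)  # random, opaque study code
            key_rows.append({"participant_id": code,
                             **{c: row.get(c, "") for c in IDENTIFYING_COLUMNS}})
            writer.writerow({"participant_id": code,
                             **{c: row[c] for c in row if c not in IDENTIFYING_COLUMNS}})

    with open(key_path, "w", newline="") as kf:  # keep this file restricted
        writer = csv.DictWriter(kf, fieldnames=["participant_id"] + IDENTIFYING_COLUMNS)
        writer.writeheader()
        writer.writerows(key_rows)

if __name__ == "__main__":
    # Hypothetical file names for illustration only.
    pseudonymize("interviews.csv", "interviews_shareable.csv", "linking_key.csv")

Even with coding of this kind, the caution quoted above still applies: remaining variables, or links to video, audio, or larger databases, may allow re-identification, so coded data still require careful handling and, where required, participants' prior consent to sharing.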
5. Tap into ethics resources
One of the best ways researchers can avoid and resolve ethical dilemmas is to know both what
their ethical obligations are and what resources are available to them. "Researchers can help
themselves make ethical issues salient by reminding themselves of the basic underpinnings of
research and professional ethics," says Bullock. Moreover, despite the
sometimes tense relationship researchers can have with their institutional review boards (IRBs),
these groups can often help researchers think about how to address potential dilemmas before
projects begin, says Panicker. But psychologists must first give their IRBs the information they
need to properly understand a research proposal. "Be sure to provide the IRB with detailed and
comprehensive information about the study, such as the consent process, how participants will be
recruited and how confidential information will be protected," says Bullock. "The more
information you give your IRB, the better educated its members will become about behavioral
research, and the easier it will be for them to facilitate your research."

As cliché as it may be, says Panicker, thinking positively about your interactions with an IRB
can help smooth the process for both researchers and the IRBs reviewing their work.
Ethics in research is very important when you are going to conduct an experiment. Ethics should be applied at all stages of research, such as planning, conducting, and evaluating a research project. The first thing to do before designing a study is to consider the potential costs and benefits of the research.
Research - Cost-Benefit Analysis
We evaluate the costs and benefits of most decisions in life, whether we are aware of it or not. Weighing the potential costs and benefits of a study can be quite a dilemma in some experiments. Stem cell research is one example of an area with difficult ethical considerations.
As a result, stem cell research is restricted in many countries, because of the major and
problematic ethical issues.
Ethical Standards - Researchers Should...

avoid any risk of considerably harming people, the environment, or property unnecessarily.
The Tuskegee Syphilis Study is an example of a study which seriously violated these
standards.
not use deception on participants, as was the issue with the ethics of the Stanley Milgram experiment.
obtain informed consent from all involved in the study.
preserve privacy and confidentiality whenever possible.
take special precautions when involving populations or animals that may not be able to understand fully the purpose of the study.
not offer big rewards or enforce binding contracts for the study. This is especially important
when people are somehow reliant on the reward.

not plagiarize the work of others.
not skew their conclusions based on funding.
not commit scientific fraud, falsify research, or otherwise engage in scientific misconduct. One notorious fraudulent study, which devastated the public view of the subject for decades, claimed that subliminal advertisements sold more Coke and popcorn. The researcher said that he had found large effects from subliminal messages when he had, in fact, never conducted the experiment.
not use their position as a peer reviewer to give sham peer reviews to punish or damage fellow
scientists.
Basically, researchers must follow all applicable regulations and also anticipate possible ethical problems in their research. Competition is an important factor in research, and may be both a
good thing and a bad thing.
Whistleblowing is one mechanism to help discover misconduct in research. Research ethics
provides guidelines for the responsible conduct of biomedical research. In addition, research
ethics educates and monitors scientists conducting research to ensure a high ethical standard.
Modern research ethics began with a desire to protect human subjects involved in
research projects. The first attempt to craft regulations began during the Doctors Trial of 1946-1947. The Doctors Trial was a segment of the Nuremberg Trials for Nazi war criminals. In the Doctors Trial, 23 German Nazi physicians were accused of conducting abhorrent
and torturous experiments with concentration camp inmates. The accused physicians tortured,
brutalized, crippled, and murdered thousands of victims in the name of research. Some of their
experiments involved gathering scientific information about the limits of the human body by
exposing victims to extreme temperatures and altitudes. The most gruesome and destructive
experiments tested how quickly a human could be euthanized in order to carry out the Nazi racial purification policies most efficiently. To prosecute the accused Nazi doctors for the atrocities they committed, a list of ethical guidelines for the conduct of research, known as the Nuremberg Code, was developed. The Nuremberg Code consisted of ten basic ethical principles that the accused had violated.1
The 10 guidelines were as follows:
1. Research participants must voluntarily consent to research participation
2. Research aims should contribute to the good of society
3. Research must be based on sound theory and prior animal testing
4. Research must avoid unnecessary physical and mental suffering
5. No research projects can go forward where serious injury and/or death are potential outcomes
6. The degree of risk taken with research participants cannot exceed anticipated benefits of
results
7. Proper environment and protection for participants is necessary
8. Experiments can be conducted only by scientifically qualified persons
9. Human subjects must be allowed to discontinue their participation at any time
10. Scientists must be prepared to terminate the experiment if there is cause to believe that
continuation will be harmful or result in injury or death
The Nuremberg Guidelines paved the way for the next major initiative designed to promote
responsible research with human subjects, the Helsinki Declaration. The Helsinki Declaration
was developed by the World Medical Association and has been revised and updated periodically
since 1964, with the last update occurring in 2000.2 The document lays out basic ethical
principles for conducting biomedical research and specifies guidelines for research conducted
either by a physician, in conjunction with medical care, or within a clinical setting. The
Helsinki Declaration contains all the basic ethical elements specified in the Nuremberg Code but
then advances further guidelines specifically designed to address the unique vulnerabilities of
human subjects solicited to participate in clinical research projects. The unique principles
developed within the Helsinki Declaration include:
The necessity of using an independent investigator to review potential research projects

Employing a medically qualified person to supervise the research and assume responsibility
for the health and welfare of human subjects
The importance of preserving the accuracy of research results
Suggestions on how to obtain informed consent from research participants
Rules concerning research with children and mentally incompetent persons
Evaluating and using experimental treatments on patients
The importance of determining which medical situations and conditions are appropriate and
safe for research
Following the Helsinki Declaration, the next set of research ethics guidelines came out in the
Belmont Report of 1979 from the National Commission for the Protection of Human Subjects of
Biomedical and Behavioral Research. The report outlines:
1. The ethical principles for research with human subjects
2. Boundaries between medical practice and research
3. The concepts of respect for persons, beneficence, and justice
4. Applications of these principles in informed consent (respect for persons), assessing risks and
benefits (beneficence), and subject selection (justice).3
The Nuremberg, Helsinki, and Belmont
guidelines provided the foundation of more ethically uniform research to which stringent rules
and consequences for violation were attached. Governmental laws and regulations concerning
the responsible conduct of research have since been developed for research that involves both
human and animal subjects. The Animal Welfare Act provides guidelines and regulations for research with
animals. It goes into detail about sale, licensure, facilities, transport, and other care instructions.
For research with human subjects, Title 45, Part 46 of the Code of Federal Regulations (45 CFR 46), the Protection of Human Subjects regulations, outlines the purpose and policies of
Institutional Review Board (IRB) oversight and approval, informed consent, and protections and
policies for research with children, pregnant women, fetuses, prisoners, and mentally
incompetent individuals. Currently, the focus of research ethics lies in the education of
researchers regarding the ethical principles behind regulations as well as the oversight and
review of current and potential research projects. The field has expanded from providing
protections for human subjects to including ethical guidelines that encompass all parts of
research from research design to the truthful reporting of results. There are several avenues for
people who wish to seek education on basic ethical principles, and avenues for education on
how to comply with policies at the institutional, state, and national levels. The University of
Minnesota's Center for Bioethics (www.bioethics.umn.edu) and many other universities and

professional associations around the country continually offer education for researchers and
scientists on ethical research issues. Curriculum is available in frequently offered conferences,
classroom settings, and online (www.research.umn.edu/curriculum).
WHY STUDY RESEARCH ETHICS?
Knowing what constitutes ethical research is important for all people who conduct research
projects or use and apply the results from research findings. All researchers should be familiar
with the basic ethical principles and have up-to-date knowledge about policies and procedures
designed to ensure the safety of research subjects and to prevent sloppy or irresponsible
research, because ignorance of policies designed to protect research subjects is not considered a
viable excuse for ethically questionable projects. Therefore, the duty lies with the researcher to
seek out and fully understand the policies and theories designed to guarantee upstanding
research practices. Research is a public trust that must be ethically conducted, trustworthy, and
socially responsible if the results are to be valuable. All parts of a research project from the
project design to submission of the results for peer review have to be upstanding in order to be
considered ethical. When even one part of a research project is questionable or conducted
unethically, the integrity of the entire project is called into question.

Authorship is the process of
deciding whose names belong on a research paper. In many cases, research evolves from
collaboration and assistance between experts and colleagues. Some of this assistance will
require acknowledgement and some will require joint authorship. Responsible authorship
practices are an important part of research. Reporting and analyzing results is the key to applying
research findings to the real world. Despite the challenges, researchers should familiarize
themselves with proper authorship practices in order to protect their work and ideas while also
preventing research fraud. Each person listed as an author on an article should have significantly
contributed to both the research and writing. In addition, all listed authors must be prepared to
accept full responsibility for the content of the research article. The International Committee of
Medical Journal Editors (ICMJE) is the recognized international expert organization when it
comes to guidelines regarding biomedical research authorship. Their website (www.icmje.org)
lists all requirements for authorship, which are quoted as follows: Authorship credit should be
based only on 1) substantial contributions to conception and design, or acquisition of data, or
analysis and interpretation of data; 2) drafting the article or revising it critically for important
intellectual content; and 3) final approval of the version to be published. According to the ICMJE,
colleagues who are part of a research group or team but do not meet the conditions above should
NOT be listed as authors. They should instead receive acknowledgement at the end of the
manuscript, with a brief description of their contribution if appropriate. In order to acknowledge
a contributing colleague, the colleague must consent to the acknowledgement, lest they seem to
be endorsing research or conclusions drawn from research for which they are not responsible.6
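To make the all-three-criteria logic concrete, here is a minimal, hypothetical Python sketch; the
function name and inputs are invented for illustration and are not prescribed by the ICMJE:

def qualifies_for_authorship(substantial_contribution: bool,
                             drafted_or_revised: bool,
                             approved_final_version: bool) -> bool:
    # All three ICMJE-style conditions must hold; failing any one means
    # acknowledgement rather than authorship.
    return substantial_contribution and drafted_or_revised and approved_final_version

# A co-investigator who helped design the study, revised the draft, and approved it qualifies;
# a colleague who only wrote a statistics program (as in the query below) does not.
print(qualifies_for_authorship(True, True, True))    # True  -> list as co-author
print(qualifies_for_authorship(True, False, False))  # False -> acknowledge instead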
All the contributing co-authors of an article must jointly decide the order of the listing of names.
The first person listed should be the person most closely involved with the research.7 The
authors should then decide the order of the remaining authors in accordance with the criteria of

the publishing journal, and be prepared to answer questions about why the order is as it
appears.

Query: Jamal is a graduate student working under the supervision of a professor, Dr.
Kerry. Dr. Kerry is conducting research on tooth decay and has gathered data from hundreds of
dental patients. Jamal uses Dr. Kerry's data to analyze a research question that he came up with
on his own about tooth enamel erosion. His question is his own idea, but is still based on what he
learned about tooth and enamel decay under Dr. Kerry. Jamal's friend, Darcie, helped Jamal
design a statistical computer program for data analysis, but did not contribute in any other way to
the research. When writing up his results, Dr. Kerry helped Jamal write the methods section of
his manuscript and reviewed his final results and conclusions, as well as the final draft of the
entire manuscript. How should authorship be decided in this case?

Answer: Jamal should be listed
first as the primary author because he is most closely involved in the research project. Dr. Kerry
should be listed second as co-author because she meets the ICMJE requirements of authorship.
Darcie does not meet the criteria for authorship, but she should be acknowledged for her
contribution if she so consents.

Plagiarism is the act of passing off somebody else's ideas,
thoughts, pictures, theories, words, or stories as your own. Researchers who plagiarize the work of
others bring into question the integrity, ethics, and trustworthiness of the sum total
of their research.9 In addition, plagiarism is an illegal and punishable act, considered to
be on the same level as stealing from the author that which he or she originally created.
Plagiarism takes many forms. On one end of the spectrum are people who intentionally take a
passage word-for-word, put it in their own work, and do not properly credit the original author.
The other end consists of unintentionally (or simply lazily) paraphrased and fragmented texts the
author has pieced together from several works without properly citing the original sources.10,11
No part of the spectrum of potential plagiaristic acts is tolerated by the scientific community,
and research manuscripts will be rejected by publishers if they contain any form of plagiarism
including unintentional plagiarism. The Indiana University website provides the following
advice to avoid plagiarism. A researcher preparing a written manuscript should cite the original
source if he or she:
 Quotes another person's actual words, either oral or written;
 Paraphrases another person's words, either oral or written;
 Uses another person's idea, opinion, or theory; or
Borrows facts, statistics, or other illustrative material, unless the information is common
knowledge. The rules of plagiarism typically apply to graphics, text, and other visuals from all
traditional forms of publication and include modern forms of publications as well, in particular
the World Wide Web. If a substantial amount of another person's graphics or text will be lifted
from a web page, an author should ask permission to use the material from the original author or
website host.13 Most researchers certainly try not to plagiarize. However, it isn't always easy
because people often consult a variety of sources of information for their research and end up
mixing it in with their own background knowledge.14 To avoid unintentional or accidental

plagiarizing of another person's work, use the following tips from the Northwestern University
website:
 Cite all ideas and information that are not your own and/or are not common knowledge,
 Always use quotation marks if you are using someone else's words,
 At the beginning of a paraphrased section, show that what comes next is someone else's
original idea (example: these bullet points start out by saying the information originated with
Northwestern University),
At the end of a paraphrased section, place the proper citation.15
Redundant publications constitute a special type of plagiarism. The ICMJE defines redundant
publication as follows:
Redundant or duplicate publication is publication of a paper that overlaps substantially with one
already published.16 The ICMJE further points out that resubmitting a manuscript to a journal
when it has already been published elsewhere violates international copyright laws, ethical
conduct, and cost-effective use of resources. Articles that have been published already should
not be resubmitted under another title or resubmitted with only minor changes to the text
unless it is clearly stated that it is a resubmitted article.1

Peer review is the process in which an
author (or authors) submits a written manuscript or article to a journal for publication and the
journal editor distributes the article to experts working in the same, or similar, scientific
discipline. The experts, otherwise called the reviewers, and the editor then enter the peer review
process. The process involves the following:
1. Reviewers and editors read and evaluate the article
2. Reviewers submit their reviews back to the journal editor
3. The journal editor takes all comments, including their own, and communicates this feedback
to the original author (or authors)
The peer review process seldom proceeds in a straight line. The entire process may involve
several rounds of communication between the editor, the reviewers, and the original author (or
authors) before an article is fully ready for publication. According to an article on quality peer
reviews in the Journal of the American Medical Association, a high quality peer review should
evaluate a biomedical article or publication on the following merits:
 Importance: Does the research impact health and health care?
 Usefulness: Does the study provide useful scientific information?
 Relevance: Does the research apply to the journal's readers and content area of interest?
 Sound methods: Was the research conducted with sound scientific methods that allowed the
researchers to answer their research question?
 Sound ethics: Was the study conducted ethically, ensuring proper protection for human
subjects? Were results reported accurately and honestly?
 Completeness: Is all information relevant to the study included in the article?
 Accuracy: Is the written product a true reflection of the conduct and results of the research?
The two most important ethical concepts in the peer review process are confidentiality and
protection of intellectual property. Reviewers should not know the author (or authors) they are
reviewing, and the author (or authors) should not be told the names of the reviewers. Only by
maintaining strict confidentiality guidelines can the peer review process be truly open and
beneficial. Likewise, no person involved in the peer review process, whether the editor, reviewers,
or other journal staff, can publicly disclose the information in the article or use the information
in a submitted article for personal gain. Peer reviewers, in addition to maintaining
confidentiality, can be neither conflicted nor political in their review. Conflicts may take the
form of financial conflicts with the results, conflicts if the research is too similar to their own
research endeavors, and conflicts due to personal relationships with the author (or authors).
Political motivations that might interfere with the peer review process include competition to
publish with other scientists and inaccurate reviews designed to punish a competing colleague
or journal.21 Editors may find it difficult to guarantee a conflict-free peer review process,
because reviewers must be experts with knowledge unique to the field to which the article
pertains. Therefore, many reviewers may find themselves faced with an article concerning
research that is very similar to their own. Peer reviewers should disclose all conflicts of interest
that may unduly influence their review to the journal editor and disqualify themselves when
appropriate. Editors of journals should maintain an open and ethical peer review process, and all
submitting authors and readers should be fully aware of a journal's process of peer review.17
Editors do retain flexibility in assigning the number of peer reviewers and what to do with the
peer review information once completed. One method is for an editor to approach two or three
reviewers and then ask an author (or authors) to change the article to satisfy all the reviews. On
the other hand, an editor may take all the reviews and consolidate the advice to help guide the
author (or authors) when making changes, clarifications, and corrections. Editors must not
relinquish too many of their own responsibilities to peer reviewers. The peer review process
represents one step in the publishing process and editors need to take full responsibility for their
decision to include an article in their journal. This means that editors must review the content
and character of a submitted article, using all the criteria listed for reviewers above, and should
rely on the reviewers primarily to catch errors that lie outside the editor's area of expertise and
technical understanding.22 Finally, editors should have full and complete freedom over the
content of a published journal. They should only include articles that they believe to be honest,
accurate, ethical, and scientifically responsible. According to the International Committee of

Medical Journal Editors, all editors have: An obligation to support the concept of editorial
freedom and to draw major transgressions of such freedom to the attention of the international
medical community.

Conflicts of interest arise when a person's (or an organization's) obligations
to a particular research project conflict with their personal interests or obligations. For example,
a university researcher who owns stock in XYZ Pharmaceuticals is obligated to report truthful
and accurate data, but he might be conflicted if faced with data that would hurt stock prices for
XYZ Pharmaceuticals. Conflicts of interest are particularly important to examine within the
context of biomedical research because research subjects may be particularly vulnerable to
harm.25 A researcher should attempt to identify potential conflicts of interest in order to confront
those issues before they have a chance to do harm or damage. If conflicts of interest do exist,
then the objectivity of the researcher and the integrity of the research results can be questioned
by any person throughout the research review process, from the IRB review through the peer
review phase. It is therefore imperative to address conflicts of interest up front and discuss how
to combat potential lack of objectivity before the research is called into question.26
ETHICAL GUIDELINES
The Objectivity in Research NIH Guide provides guidelines on how investigators receiving
grants from the National Institutes of Health (NIH) should handle conflicts of interest. In
essence, it suggests that investigators should:27
Disclose to their institution any major or significant financial conflicts of interest that might
interfere with their ability to conduct a research project objectively
Disclose any such financial conflicts of interest of their spouses or dependent children
The Title 42 Code of Regulations (42 CFR 50) section on conflicts of interest contains the
Responsibility of Applicants for Promoting Objectivity in Research for which PHS Funding is
Sought guidelines, which consist of the following regulations for organizations receiving NIH
funding:
 The organization must have a written and enforced administrative process to identify and
manage, reduce, or eliminate conflicting financial interests with respect to research projects for
which NIH funding is sought;
Before any NIH funds are spent, the organization must inform the Chief Grants Management
Officer (CGMO) at the appropriate NIH office of any existing conflicts of interest and indicate
that the conflict has been addressed, by indicating whether the conflict has either been managed,
reduced, or eliminated;
The organization has to identify and report any conflicts that arise during the course of NIH
funded research;

The organization has to comply with NIH requests for information on how an identified
conflict of interest has been handled.28 The NIH recommends the following possible actions to
help organizations address conflicts of interest:
Public disclosure of significant financial interests;
Monitoring of research by independent reviewers;
Modification of the research plan;
Disqualification from participation in all or a portion of the research funded by PHS;
Divestiture of significant financial interests; or
Severance of relationships that create actual or potential conflicts.29
Physicians and other health care professionals who conduct research may find themselves facing conflicts of
interest in their duties towards research versus their duties towards the health and welfare of their
patients. Clinical obligations to patients should always be considered above and beyond the
obligations of research.

Data management, with respect to research ethics, refers to three issues:
1) the ethical and truthful collection of reliable data; 2) the ownership and responsibility of
collected data; and, 3) retaining data and sharing access to collected data with colleagues and the
public.32,33 Each issue contributes to the integrity of research and can be easily overlooked by
researchers. Oftentimes, researchers will downplay the importance of data management because
the details can be time-consuming and they assume they can figure it out as they go along. It is
not adequate research practice to assume issues involved in data collection will work themselves
out on their own. Instead, a clear, responsible, ethically sound, and carefully outlined plan for
data management is required at the beginning of research to prevent all manners of conflicts and
inappropriate research methods. Ethical data collection refers to collecting data in a way that
does not harm or injure someone. Harm and injury could range from outright physical injury to
harmful disclosure of unprotected confidential health information. In comparison, truthful data
collection refers to data that, once collected, are not manipulated or altered in any way that might
impact or falsely influence results. Assigning and ensuring responsibility for collecting and
maintaining data is one of the most important ethical considerations when conducting a research
project. Responsibilities include the following important issues:
 Oversight of the design of the method of data collection
 Protecting research subjects from harm
 Securing and storing data safely to preserve the integrity and privacy of data
 Delegating work with data to others and responsibility over the work of others
 Responsible use of data and truthful portrayal of data results

In contrast to the fairly straightforward concepts underlying truthful and ethical data collection
issues, the issue of data sharing is complicated by personal emotions, motives, obligations, and
ownership. Despite its complexities, data sharing is considered to be a hallmark of the scientific
community, particularly in academia. NIH describes the importance of data sharing on its
website: Data sharing achieves many important goals for the scientific community, such as
reinforcing open scientific inquiry, encouraging diversity of analysis and opinion, promoting
new research, testing of new or alternative hypotheses and methods of analysis, supporting
studies on data collection methods and measurement, facilitating teaching of new researchers,
enabling the exploration of topics not envisioned by the initial investigators, and permitting the
creation of new data sets by combining data from multiple sources.34 While part of scientific
research encourages accuracy and verification of data through data sharing, sometimes data are
associated with intellectual property and need to be protected as such. For this reason, deciding whether to
retain or share data can be a fine line to walk for researchers who wish to protect their intellectual
property, but the line must be properly drawn in order to allow the positive aspects of data
sharing to occur while protecting the researchers hard work and ingenuity.
ETHICAL GUIDELINES
The three issues for data management (ethical and truthful data collection, responsibility of
collected data, and data sharing) can be addressed by researchers before and during the
establishment of a new research project. Researchers must accurately identify answers to the
following questions to resolve and address all data management issues in a timely manner (a brief
sketch of such a plan follows the list below):
Who is in charge of the data? (This person is usually the principal investigator of the research
project and is responsible for data collection design and physical data collection.)
How will data be collected? (Will data be collected via phone, mail, personal interview,
existing records, secondary sources, etc.?)
Will there be identifying information within the data? If yes, why? How will this be rectified?
How will data be stored and what privacy and protection issues will result from the method of
storage? (Will it be stored electronically, on paper, as raw tissue samples, etc.?)
Who will ensure that no data were excluded from the final results and ensure accuracy of result
interpretation?
How long after the project is over will data be kept? (This will depend on the source of funding
and organizational policies.)
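As one way to make these answers concrete, the following minimal Python sketch records them in
a simple plan structure; every field name and value here is hypothetical and is not a format
required by any agency, funder, or policy:

# Hypothetical data management plan; every field name and value is illustrative only.
data_management_plan = {
    "responsible_person": "principal investigator",          # who is in charge of the data
    "collection_method": "phone interview",                  # how data will be collected
    "contains_identifiers": True,                             # identifying information present?
    "identifier_handling": "replace names with study codes", # how that will be rectified
    "storage": "encrypted institutional server",              # storage method and privacy protections
    "analysis_oversight": "co-investigator verifies no data are excluded from final results",
    "retention_period_years": 7,                               # depends on funder and institutional policy
}

for question, answer in data_management_plan.items():
    print(f"{question}: {answer}")

Writing the answers down before data collection begins makes it easier to show later that the
plan was followed.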
Protecting intellectual property while at the same time encouraging data sharing is highly
important in order to ensure valid and reliable research. In order to identify what is and is not
protected as intellectual property, the concept must be clearly defined. The University of
Minnesota's Intellectual Property Policy defines intellectual property as: Intellectual Property

means any invention, discovery, improvement, copyrightable work, integrated circuit mask
work, trademark, trade secret, and licensable know-how and related rights. Intellectual property
includes, but is not limited to, individual or multimedia works of art or music, records of
confidential information generated or maintained by the University, data, texts, instructional
materials, tests, bibliographies, research findings, organisms, cells, viruses, DNA sequences,
other biological materials, probes, crystallographic coordinates, plant lines, chemical
compounds, and theses.* Intellectual property may exist in a written or electronic form, may be
raw or derived, and may be in the form of text, multimedia, computer
programs, spreadsheets, formatted fields in records or forms within files, databases, graphics,
digital images, video and audio recordings, live video or audio broadcasts, performances, two or
three-dimensional works of art, musical compositions, executions of processes, film, film strips,
slides, charts, transparencies, other visual/aural aids or CD-ROMs.35 (*Emphasis added.)

In February of 2003, NIH
released guidelines on data sharing. The primary guideline states that all data must be shared and
released in a timely manner. The NIH defines timely manner as no later than acceptance for
publication. In addition, all grant applications to the NIH for grants of at least $500,000 are
required to establish a data sharing plan or give an explanation as to why data will not be shared
in the proposal (i.e., IRB allowance or institutional restrictions).36 The Health Insurance
Portability and Accountability Act (HIPAA) of 1996 provides detailed guidelines about data
sharing and using data containing personal identification information. The HIPAA guidelines
protect personal health information and provide legal requirements for all segments of the health
care system (including biomedical research) concerning what type of information can be shared,
how information should be stored and protected, data coding, and how information is used.
Genetic information is an area of particular concern when considering the issues surrounding
data management. Due to the wealth of information locked inside the human genome and the
potential for using this information to determine a variety of conditions and genetic tendencies,
including the potential to identify a person based on his or her genetic information, particular
interest has been expressed in protecting the information found in DNA. Careful attention should
be paid by researchers when using genetic information due to its sensitive nature.

Research misconduct concerns unethical or unsound research practices and the process by which
they are identified and reported. The United
States Office of Science and Technology Policy (OSTP) released a new definition of research
misconduct that went into effect in December of 2000. OSTP defines misconduct, and its
components, as follows: Research misconduct is defined as fabrication, falsification, or
plagiarism in proposing, performing, or reviewing research, or in reporting research results.
Fabrication is making up data or results and recording or reporting them.
Falsification is manipulating research materials, equipment, or processes, or changing or
omitting data or results such that the research is not accurately represented in the research record.
Plagiarism* is the appropriation of another person's ideas, processes, results, or words without
giving appropriate credit.
Research misconduct does not include honest error or differences of opinion.38

In addition to defining research misconduct, the federal policy released by OSTP includes
guidelines on what must be present in order to find a researcher guilty of committing research
misconduct. A finding of research misconduct requires that:
There be a significant departure from accepted practices of the relevant research community;
and
 The misconduct be committed intentionally, or knowingly, or recklessly; and
 The allegation be proven by a preponderance of evidence.39
* Emphasis (bolded text) added.

Research misconduct can be the
result of criminal behavior. For example, making up research data and other
overt acts of fraud are deliberate and punishable criminal acts. Government regulations and
criminal punishments are necessary to prevent these criminal practices. Research misconduct can
also be the result of mistaken, negligent, unintentional, lazy, or sloppy research practices. These
types of misconduct are usually covered by institutional policies and are punishable at the
institutional level. In these instances of research misconduct, the use of outside research
evaluators (like the IRB) and the process of peer review helps to maintain and safeguard
scientific integrity.40
ETHICAL GUIDELINES
Who is responsible for reviewing instances of research misconduct?
Any person who knows that research is being conducted unethically should raise his or her
concerns to the appropriate authorities, whether that person is involved in the research or not.
The first step in this instance is likely to be a confidential conversation with the person in charge
of research integrity at an institution. Once research misconduct has been identified, all parties
involved in the research must take responsibility to resolve the situation, including: the principal
investigator, co-investigators, the institution hosting the research, the funding agency, and
publishing journal editors, if applicable. While the federal government takes responsibility for
research projects funded with federal money, it assigns the primary responsibilities of identifying
and investigating research misconduct to the agency or institution hosting the research. When
someone is suspected of committing research misconduct, the proper procedure is to first launch
an inquiry. If the inquiry reveals a potential research misconduct situation, the second step is to
then conduct a full-scale investigation. Finally, the institution uses the information collected
during the full-scale investigation to make decisions concerning the presence of misconduct
and its severity, and what appropriate corrective action should be taken, if needed.41 What
should people do if they suspect that research misconduct has been committed? The
Department of Health and Human Services Office of Research Integrity suggests the following
procedural guidelines for reporting and investigating research misconduct. While the procedures
are not mandatory, nearly all research institutions have adopted very similar procedures to the
following:

1. A person suspecting a scientist of research misconduct should report the incident to a research
integrity officer who should immediately look into the allegation to assess if it is both: a)
research misconduct; and b) within the jurisdiction of the research institution.
2. The person who informs the research integrity officer of suspected misconduct (the
whistleblower) should be treated with fairness and respect by the research institution and
efforts should be made to protect their job and reputation as necessary.
3. The person suspected of research misconduct (the respondent) should be protected and treated
with fairness and respect by the research institution.
4. The research integrity officer should strive to maintain the confidentiality of both the
whistleblower and the respondent.
5. If the misconduct issue is a criminal one or exceeds the jurisdiction of the research institution,
the research integrity officer should report the misconduct allegations to the proper authorities or
agencies.
Animals play a significant role in research. They are used in a variety of ways by researchers,
such as for testing new pharmaceuticals, as teaching tools for medical students, and as
experimental subjects for new surgical procedures. Research with animals is necessary and vital
to biomedical research because animal research is frequently a necessary first step towards
research involving new medical treatments and pharmaceuticals intended for human use.44
Many dedicated organizations and individuals are interested in protecting and safeguarding
animal subjects as regards their use in research. Some organizations are interested in eliminating
the use of animals in research. Others consider research with animals a necessary evil to the
advancement of medicine, but still aim to eliminate unnecessary suffering, pain, and poor facility
conditions for animal subjects. To protect animals, research projects that use animals have to be
reviewed. These review processes assess the risks and benefits of using animals in research. This
can prove difficult for project reviewers and often makes for intense debates and arguments
about the appropriate use of animal subjects, particularly because the animal subjects usually
bear all the risks while human beings realize all the benefits. Debates also center on judging how
much pain is too much, whether or not animals experience pain in the same way that humans do,
and whether or not these ideas should even factor into the debate at all. To assure that research
with animals is conducted ethically and responsibly, the federal government has created
regulations involving the use and care of animals involved in teaching, testing, and research.
ETHICAL GUIDELINES
In order to prevent the mistreatment of animals, the United States government first passed the
Animal Welfare Act in 1966 (last revised in 1990). The Animal Welfare Act exists in order: (1)
To insure that animals intended for use in research facilities or for exhibition purposes or for use
as pets are provided humane care and treatment; (2) to assure the humane treatment of animals
during transportation in commerce; and (3) to protect the owners of animals from the theft of
their animals by preventing the sale or use of animals which have been stolen.45 The
responsibility for enforcing the Animal Welfare Act and protecting animals used in testing,
teaching, and research falls on a number of different shoulders. The agencies
responsible for different issues involving the use of animals are:
 The United States Department of Agriculture's (USDA) Animal and Plant Health Inspection
Service (APHIS) provides regulations and enforces the Public Health Service's (PHS) Animal
Welfare Act (AWA). The USDA is also responsible for issues dealing with non-research animals
including: farm animals, companion animals, zoo animals, circus animals, and wildlife.
 The NIH Office of Extramural Research (OER) maintains the Office of Laboratory Animal
Welfare (OLAW) and provides guidelines and regulations for the use of laboratory animals in
research funded by NIH. NIH also has the Intramural Research Office of Animal Care and Use
(OACU), which provides guidelines for research with animals conducted by NIH researchers.
 Institutional Animal Care and Use Committees (IACUC) are similar to Institutional Review
Boards. IACUCs are hosted by institutions in accordance with the Animal Welfare Act to ensure
ethical and humane treatment of animals used in research, testing, and teaching. The USDA
provides guidelines and regulations for the operation of IACUCs.
The agencies above all overlap and interconnect. The
USDA uses the policies dictated by PHS to write the regulations concerning the care and use of
animals in research. The USDA also specifies the details for establishing an IACUC and how the
IACUC should review projects and programs that use and care for animals. Projects and
activities funded by the NIH must submit an assurance to the PHS that they have an IACUC
and maintain ethical and humane treatment of animals involved in federally funded projects and
activities in accordance with the AWA.

The issues concerning research with human subjects
involve topics ranging from voluntary participation in research to fair selection and justice. This
variety makes the topics surrounding research ethics with human subjects a challenging but
important charge.

Respect for Persons: Informed Consent. Informed consent exists to ensure
that all research involving human subjects allows for voluntary participation by subjects who
understand what participation entails. Informed consent means that people approached and asked
to participate in a research study must: a) know what they are getting involved with before they
commit; b) not be coerced or manipulated in any way to participate; and, c) must consent to
participate in the project as a subject. The Belmont Report of 1979 outlines the three
requirements for informed consent. The first requirement is that information disclosed to
research participants must include the research procedure, their purposes, risks and anticipated
benefits, alternative procedures (where therapy is involved), and a statement offering the subject
the opportunity to ask questions and to withdraw at any time from the research.46 The second
requirement for informed consent is comprehension. The concept of comprehension requires
researchers to adapt information to be understandable to every participant. This requires taking
into consideration different abilities, intelligence levels, maturity, and language needs. Finally,

the third requirement for informed consent is voluntariness. Informed consent can be neither
coerced nor improperly pressured from any participant.47

Respect for Persons: Privacy and
Confidentiality. Privacy and confidentiality are very important components of research involving
human subjects. People have a right to protect themselves, and information gathered during
research participation could harm a person by violating their right to keep information about
themselves private. The
information gathered from people in biomedical studies has a unique potential to be particularly
embarrassing, harmful, or damaging. Recently, a number of research projects have focused on
unlocking genetic information. Genetic information may violate a person's right to privacy if not
adequately protected. The very fact that genetic information contains information about identity
provides a unique challenge to researchers. Many genetic experiments may seem harmless, but
during the process of collecting genetic information on, for example, breast cancer, a researcher
will inevitably collect a wealth of other identifiable information that could potentially be linked
to research participants as well. The Health Insurance Portability and Accountability Act
(HIPAA) passed into law in 1996 and went into effect in 2003. There are two main provisions in
HIPAA. The first provision prevents workers and their families from losing health insurance
when changing jobs. The second part of HIPAA is the Administrative Simplification Compliance
Act (ASCA) and this part identifies issues in health information privacy and confidentiality.
ASCA contains strict regulations concerning health information privacy, security (particularly of
electronically stored health data), and personal identifiers attached to data. This is the strictest
step taken thus far by the federal government to protect the vast amount of personal electronic
health information maintained by health insurance companies, hospitals, clinics, researchers, and
the government.

Risk, Benefit, and Beneficence. Beneficence is a principle used frequently in
research ethics. It means "doing good."48 Biomedical research strives to do good by studying
diseases and health data to uncover information that may be used to help others through the
discovery of therapies that improve the lives of people with spinal cord injuries or new ways to
prevent jaundice in infants. The crux of this issue lies in the fact that information that
may one day help people must be gathered from people who are living and suffering today.
While research findings may one day help do good, they may also cause harm to today's
research participants. For example, research participants in an AIDS study could be asked to take
an experimental drug to see if it alleviates their symptoms. The participants with AIDS take on a
risk (ingesting the experimental drug) in order to benefit others (information on how well the
drug works) at some time in the future. Researchers must never subject research participants to
more risk than necessary, be prepared to cease research if it is causing harm, and never put
participants at a level of risk disproportionate to the anticipated benefits.

Justice. Particular
attention has been paid recently to preventing the overburdening of some populations in order to
apply research findings to other groups. Populations under consideration with particular potential
for exploitation may include the following (article titles concerning each population appear

below in italics):
1. MINORITY GROUPS: Gil EF, Bob S. Culturally competent research: an ethical perspective.
Clinical Psychology Review, 1999; 19(1):45-55.
2. WOMEN: Stevens PE, Pletsch PK. Informed consent and the history of inclusion of women in
clinical research. Health Care for Women International, 2002; 23(8):809-819.
3. MENTALLY IMPAIRED INDIVIDUALS: National Bioethics Advisory Commission Report.
Research Involving Persons with Mental Disorders that may Affect Decision-Making Capacity.
December 1998. http://www.georgetown.edu/research/nrcbl/nbac/capacity/TOC.htm
4. CHILDREN: Allmark P. The ethics of research with children. Nurse Researcher, 2002;
10(2):7-19.
5. FINANCIALLY DISADVANTAGED INDIVIDUALS: Phoenix JA. Ethical considerations of
research involving minorities, the poorly educated and/or low-income populations.
Neurotoxicology & Teratology, 2002; 24(4):475-476.
6. DISADVANTAGED PEOPLE LIVING IN THIRD WORLD COUNTRIES: National Bioethics
Advisory Commission Report. Ethical and Policy Issues in International Research: Clinical Trials
in Developing Countries. April 2001. http://www.georgetown.edu/research/nrcbl/nbac/pubs.html
7. PRISONERS: Pasquerella L. Confining choices: should inmates' participation in research be
limited? Theoretical Medicine & Bioethics, 2002; 23(6):519-536.
8. THE DECEASED: The deceased are included here as a population although deceased persons
are not technically human subjects, because human subjects are defined by 45 CFR 46 as living
human beings. Couzin J. Human subjects. Crossing a frontier: research on the dead. Science,
2003; 299(5603):29-30.
9. EMPLOYEES: Rose SL, Pietri CE. Workers as research subjects: a vulnerable population.
Journal of Occupational & Environmental Medicine, 2002; 44(9):801-805.
Another potentially overburdened group consists of participants in research that offers no hope of
therapeutic assistance to them but may yield information for therapies for future generations or
future sufferers; these participants contribute to research and assume its risks while the benefits
are enjoyed by a separate group.
Biomedical research, by definition, does not have therapeutic benefit for participants as its goal.
The goal of research is to advance knowledge and science. Some research does provide the
additional benefit of providing potential therapeutic benefits to participants. This potential
contributes to physicians' willingness to recruit their patients into a research project, and to
patients' willingness to participate. On the other hand, some research offers no potential for
therapeutic benefit. In these cases, participants are being asked to put themselves in harm's way
so that others in the future may benefit.
ETHICAL GUIDELINES
Guidelines for the use of human subjects in research are relatively recent, with the first modern
and formal efforts to protect human subjects coming after World War II. Since that time, each set
of regulations and internationally adopted principles concerning research with human subjects
has considered the following issues to be of paramount concern:
 Human subjects must voluntarily
consent to research and be allowed to discontinue participation at any time.
Research involving human subjects must be valuable to society and provide a reasonably
expected benefit proportionate to the burden requested of the research participant.

Research participants must be protected and safe. No research is more valuable than human
well-being and human life.
Researchers must avoid harm, injury, and death of research subjects and discontinue research
that might cause harm, injury, or death.
Research must be conducted by responsible and qualified researchers.
No population of people can be excluded from research or unfairly burdened unless there is an
overwhelming reason to do so.
The way the federal government assures that research involving human subjects is conducted
ethically is through the use of oversight by Institutional Review Boards (IRBs) housed within
research institutions across the country.49 The IRB review and approval process must be
undertaken for all research projects that use human subjects in: a) institutions receiving federal
funding; b) non-federally funded institutions that voluntarily opt to participate in the IRB review
process; and c) research whose results will be submitted to the FDA for consideration. Without IRB approval
of these projects, research with human subjects at these institutions cannot move forward. The
IRB process was designed to catch potentially harmful projects before they got off the ground. It
was also designed to think globally about ethical issues in research that may not necessarily be
at the forefront of researchers' minds.
The words "moral" and "ethics" (and cognates) are often used interchangeably. However, it is
useful to make the following distinction: Morality is the system through which we determine
right and wrong conduct -- i.e., the guide to good or right conduct. Ethics is the philosophical
study of Morality.
What, then, is a moral theory?
A theory is a structured set of statements used to explain (or predict) a set of facts or concepts. A
moral theory, then, explains why a certain action is wrong -- or why we ought to act in certain
ways. In short, it is a theory of how we determine right and wrong conduct. Also, moral
theories provide the framework within which we think about, discuss in a reasoned way, and so
evaluate specific moral issues. Seen in this light, it becomes clear that we cannot draw a sharp
divide between moral theory and applied ethics (e.g., medical or business ethics). For instance, in
order to critically evaluate the moral issue of affirmative action, we must not attempt to evaluate
what actions or policies are right (or wrong) independently of what we take to determine right and
wrong conduct. You will see, as we proceed, that we do not do ethics without at least some
moral theory. When evaluating the merits of some decision regarding a case, we will always (or
at least ought to always) find ourselves thinking about how right and wrong is determined in
general, and then apply that to the case at hand. Note, though, that sound moral thinking does
not simply involve going one way -- from theory to applied issue. Sometimes a case may

suggest that we need to change or adjust our thinking about what moral theory we think is the
best, or perhaps it might lead us to think that a preferred theory needs modification.
Another important distinction:
Are moral theories descriptive or prescriptive ?
In presenting a moral theory, are we merely describing how people, in their everyday 'doings'
and 'thinkings,' form a judgement about what is right and wrong, or are we prescribing how
people ought to make these judgements? Most take moral theories to be prescriptive. The
descriptive accounts of what people do are left to sociologists and anthropologists. Philosophers,
then, when they study morality, want to know what is the proper way of determining right and
wrong. There have been many different proposals. Here is a brief summary.
Theories of Morality
(1) Moral Subjectivism
Right and wrong is determined by what you -- the subject -- just happen to think (or 'feel') is
right or wrong. In its common form, Moral Subjectivism amounts to the denial of moral
principles of any significant kind, and the possibility of moral criticism and argumentation. In
essence, 'right' and 'wrong' lose their meaning because so long as someone thinks or feels that
some action is 'right', there are no grounds for criticism. If you are a moral subjectivist, you
cannot object to anyone's behaviour (assuming people are in fact acting in accordance with what
they think or feel is right). This shows the key flaw in moral subjectivism -- probably nearly
everyone thinks that it is legitimate to object, on moral grounds, to at least some people's
actions. That is, it is possible to disagree about moral issues.
(2) Cultural Relativism
Right and wrong is determined by the particular set of principles or rules the relevant culture just
happens to hold at the time. Cultural Relativism is closely linked to Moral Subjectivism. It
implies that we cannot criticize the actions of those in cultures other than our own. And again, it
amounts to the denial of universal moral principles. Also, it implies that a culture cannot be
mistaken about what is right and wrong (which seems not to be true), and so it denies the
possibility of moral advancement (which also seems not to be true).
(3) Ethical Egoism
Right and wrong is determined by what is in your self-interest. Or, it is immoral to act contrary
to your self-interest. Ethical Egoism is usually based upon Psychological Egoism -- that we, by
nature, act selfishly. Ethical egoism does not imply hedonism; we may aim for
'higher' goods (e.g., wisdom, political success), but the claim is that we will (ideally) act so as to
maximize our self-interest. This may require that we forgo some immediate pleasures for the
sake of achieving some long term goals. Also, ethical egoism does not exclude helping others.
However, egoists will help others only if this will further their own interests. An ethical egoist
will claim that the altruist helps others only because they want to (perhaps because they derive
pleasure out of helping others) or because they think there will be some personal advantage in
doing so. That is, they deny the possibility of genuine altruism (because they think we are all by
nature selfish). This leads us to the key implausibility of Ethical Egoism -- that the person who
helps others at the expense of their self-interest is actually acting immorally. Many think that the
ethical egoist has misunderstood the concept of morality -- i.e., morality is the system of
practical reasoning through which we are guided to constrain our self-interest, not further it.
Also, that genuine altruism is indeed possible, and relatively commonly exhibited.

(4) Divine Command Theory


Many claim that there is a necessary connection between morality and religion, such that,
without religion (in particular, without God or gods) there is no morality, i.e., no right and wrong
behaviour. Although there are related claims that religion is necessary to motivate and guide
people to behave in morally good way, most take the claim of the necessary connection between
morality and religion to mean that right and wrong come from the commands of God (or the
gods). This view of morality is known as Divine Command Theory. The upshot is that an action
is right -- or obligatory -- if God commands we do it, wrong if God commands we refrain from
doing it, and morally permissible if God does not command that it not be done. Divine Command
Theory is widely held to have several serious flaws. First, it presupposes that God or gods exist.
Second, even if we assume that God does exist, it presupposes that we can know what God
commands. But even if we accept theism, it looks like even theists should reject the theory.
Plato raised the relevant objection 2500 years ago. He asked: Is something right (or wrong)
because the gods command it, or do the gods command it because it is right? If the latter, then
right and wrong are independent of the gods' commands -- Divine Command Theory is false. If
the former, then right and wrong are just a matter of the arbitrary will of the gods (i.e., they
might have willed some other, contradictory commands). Most think that right and wrong are not
arbitrary -- that is, some action is wrong, say, for a reason. Moreover, that if God commands us
not to do an action, He does so because of this reason, not simply because He arbitrarily
commands it. What makes the action wrong, then, is not God's commanding it, but the reason.
Divine Command Theory is false again.
(5) Virtue Ethics

Right and wrong are characterized in terms of acting in accordance with the traditional virtues --
making the good person. The most widely discussed is Aristotle's account. For Aristotle, the
central concern is "Ethica" = things to do with character. Of particular concern are excellences
of character -- i.e., the moral virtues. Aristotle, and most of the ancient Greeks, really had nothing
to say about moral duty, i.e., modern day moral concepts. Rather, they were concerned with
what makes human beings truly 'happy'. True 'happiness' is called Eudaimonia (flourishing /
well-being / fulfilment / self-actualization). Like Plato, Aristotle wants to show that there are
objective reasons for living in accordance with the traditional virtues (wisdom, courage, justice
and temperance). For Aristotle, this comes from a particular account of human nature -- i.e., the
virtuous life is the 'happiest' (most fulfilling) life.
Three steps to the argument:
(1) The ultimate end of human action is happiness.
(2) Happiness consists in acting in accordance with reason.
(3) Acting in accordance with reason is the distinguishing feature of all the
traditional virtues.
Aristotle thought that humans had a specific function. This function is to lead a life of true
flourishing as a human, which required abiding by the dictates of rationality and so acting in
accordance with the traditional virtues.
(6) Feminist Ethics
Right and wrong is to be found in women's responses to the relationship of caring. This view comes out of
the criticism that all other moral theories are 'masculine' -- display a male bias. Specifically,
feminists are critical of the 'individualistic' nature of other moral theories (they take
individualism to be a 'masculine' idea). Rather, feminist ethics suggests that we need to consider
the self as at least partly constructed by social relations. So morality, according to some feminist
moral philosophers, must be grounded in 'moral emotions' like love and sympathy, leading to
relationships of caring. This allows legitimate biases towards those with whom we have close
social relationships.
(7) Utilitarianism
Right and wrong is determined by the overall goodness (utility) of the consequences of action.
Utilitarianism is a Consequentialist moral theory. Basic ideas: All action leads to some end. But
there is a summum bonum -- the highest good/end. This is pleasure or happiness. Also, that there
is a First Principle of Morals -- 'Principle of Utility', alternatively called 'The Greatest Happiness
Principle' (GHP), usually characterized as the ideal of working towards the greatest happiness of

the greatest number. The GHP implies that we ought to act so as to maximize human welfare
(though Bentham thought we should include all sentient animals in his utilitarian calculations).
We do this in a particular instance by choosing the action that maximizes pleasure/happiness and
minimizes suffering (a schematic example appears after this paragraph). Jeremy Bentham -- the first to formulate Utilitarianism -- did not
distinguish between kinds of pleasures. However, Bentham's student, John Stuart Mill, produced
a more sophisticated version of Utilitarianism in which pleasures may be higher or lower. The
higher pleasures (those obtained, e.g., through intellectual pursuits), carried greater weight than
the lower pleasures (those obtained through sensation). The upshot is that in determining what
action to perform, both quality and quantity of pleasure/happiness count. Note: Utilitarians are
not Hedonists. Hedonists are concerned only with their own happiness. Utilitarians are
concerned with everyone's happiness, so the theory is altruistic. In general, morally right actions are
those that produce the best overall consequences / total amount of pleasure or absence of
pain. Modern versions of Utilitarianism have dropped the idea of maximizing pleasure in favour
of maximizing the satisfaction of all relevant peoples' preferences and interests. Also, some
distinguish between Act Utilitarianism and Rule Utilitarianism. Act Utilitarianism is pretty
much as described above, where we make the utilitarian calculation based on the evaluation of
the consequences of a single isolated act. It is thought by some that this leads to a number of
significant problems -- for instance, that one person may be harmed if that leads to the greatest
good for everyone. To overcome these problems, some advocate Rule Utilitarianism -- the view
that we should adopt only those rules (for governing society) that produce the greatest good for
all.
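To make the calculation concrete, here is a deliberately simplified Python sketch of an
act-utilitarian choice between two invented actions; the utility numbers, the people affected, and
the actions themselves are hypothetical illustrations, not drawn from any source text:

# Hypothetical utilities: positive numbers stand for happiness produced,
# negative numbers for suffering caused.
candidate_actions = {
    "keep the promise":  {"you": -1, "friend": 4, "bystander": 0},
    "break the promise": {"you": 2, "friend": -5, "bystander": 0},
}

def total_utility(effects: dict) -> int:
    # No one's welfare counts more than anyone else's: an unweighted sum.
    return sum(effects.values())

# Act-utilitarian choice: pick the action with the greatest total utility.
best_action = max(candidate_actions, key=lambda action: total_utility(candidate_actions[action]))
print(best_action)  # "keep the promise" (net +3 versus net -3)

A rule utilitarian would instead ask which general rule about promise-keeping, if adopted by
society, would produce the greatest total good.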
Other key points:

For Utilitarians, no action is intrinsically right or wrong.

No person's preferences or interests (including your own, your relatives', friends', neighbours',
etc.) carry a greater weight than any other person's.

Usually we cannot make the required utilitarian calculation before acting. So, in most
situations, following 'rules of thumb' will produce the best consequences.

Democratic and economic principles reflect Utilitarianism.

Some things to ask about Utilitarianism:

How can we determine accurately what the consequences of an action will be?

Do people have rights that cannot be overridden by the goal of the best consequences for all?

(8) Kantian Theory


Right and wrong is determined by rationality, giving universal duties. Kantianism is a Non-consequentialist moral theory. Basic ideas: That there is "the supreme principle of morality".
Good and Evil are defined in terms of Law / Duty / Obligation. Rationality and Freedom are
also central. Kant thought that acting morally was quite simple. That is:
- you ought to do your duty (simply because it is your duty).
- Reason guides you to this conclusion.
Good Will (i.e., having the right intentions) is the only thing that is good without qualification.
So, actions are truly moral only if they have the right intention, i.e., based on Good Will.
What establishes Good Will?
- it can only be a law of "universal conformity" -- "I should never act except in such a way that I
can also will that my maxim should become a universal law".
This is called the Categorical Imperative = Principle of Universalizability (something like The
Golden Rule). The basic idea is that we should adopt as action guiding rules (i.e., maxims) only
those that can be universally accepted. Consider someone wondering if they could break a
promise if keeping it became inconvenient. We might formulate the following maxim governing
promises:
I can break promises when keeping them becomes inconvenient.
Can this be universalized? Kant says no because making promises then becomes, in essence,
contradictory. The thinking is that a promise is, by definition, something you keep. The above
maxim would lead to a contradiction of will, i.e., "I'll make a promise (something I keep), but I'll
break it if I choose". The more general way to understand the Principle of Universalizability is
to think that we must always ask the following questions: What if everyone did the action you
are proposing? Or, what if I were in the other person's position? This leads to the basic idea
behind the Golden Rule. Kant had another way of formulating the Categorical Imperative that is
worth noting: never treat anyone merely as a means to an end; rather, treat everyone as an end
in themselves. We can understand this by considering an example: the slave society. What is
wrong with a slave society, on the above principle, is that a slave is treated as a means
to the slave owner's ends, i.e., as an instrument or tool, not as a person. The upshot is that no
person's interests (or rights) can be overridden by another's, or by the majority's. Many think that this
way of formulating the Categorical Imperative shows that Kantianism is clearly anti-Utilitarian.

Some things to ask about Kantianism:

Is it true that having good intentions is the only thing that counts morally?

Must we always ignore good consequences?

Is it always wrong to treat people merely as a means to an end? (Can we do otherwise?)

(9) Rights-based Theories


We are to act in accordance with a set of moral rights, which we possess simply by being human.
Rights-based views are connected to Kantianism and are Non-consequentialist. The basic idea is
that if someone has a right, then others have a corresponding duty to provide what the right
requires. Most distinguish between positive and negative rights. A positive right is one in which
the corresponding duty requires a positive action, e.g., giving a charitable donation in order to
sustain someone's right to life, shelter, education, etc. A negative right is one in which the
corresponding duty merely requires refraining from doing something that will harm someone.
Some -- e.g., Libertarians -- claim that only negative rights count morally. For instance, the right
to life does not require that we give what is needed to sustain life, but merely that we refrain
from any action that would take life. [Note: others argue that there is really no significant
distinction between positive and negative rights, since a positive right can be understood
negatively and vice versa, and that there is no morally significant difference between, for
example, letting someone die and killing them. Obviously, this is a hotly disputed issue.]
Some things to ask about Rights-based theories:

Where do rights come from? From nature (we have them simply by being human)? From
principles of Justice? Or, from Utilitarian procedures?

How do we decide between competing rights?

(10) Contractarianism
The principles of right and wrong (or Justice) are those which everyone in society would agree
upon in forming a social contract. Various forms of Contractarianism have been suggested. In
general, the idea is that the principles or rules that determine right and wrong in society are
determined by a hypothetical contract-forming procedure. Here is John Rawls's
example. Through a thought experiment, Rawls developed a way of getting people to come up
with universal principles of justice. The basic idea -- impartially developing a social contract of
universal principles -- is nothing new, but many find Rawls's novel method very appealing.
The idea is to start by thinking, hypothetically, that we are at the beginning of forming a society
and we want to know on which principles of justice to ground it. However, in this 'original
position' we do this without knowing which position we will occupy in the future society -- we
don't know if we will be rich or poor, male or female, old or young, etc. We then advocate those
principles that will be in our self-interest (though we don't know what 'self' that will be). This
forces us to be impartial and, if we are rational, to propose universal principles. The point of the
thought experiment is not that we actually begin again and construct a society from
scratch. Rather, we can use it as a test of actual principles of justice: if a
principle is one that would not be adopted by people in the original position, behind the 'veil of
ignorance' (about who they will be), then it is unjust and should be rejected. [Rawls claims that
people in this original position will choose conservatively when developing principles governing
the distribution of benefits and burdens. This conservatism, Rawls claims, will lead them to
choose two basic principles: (1) that each member of the society should have as much liberty
as possible without infringing on the liberty of others; and (2) the 'maximin' rule for decisions
about economic justice -- namely, that they will choose those rules that maximize
the minimum they would receive. In other words, make the society one in which the least well off
are in the best possible position. Deviations from equality in the distribution of benefits and burdens
are justified only if they advantage the least well off. Rawls thought that some inequalities would be
adopted because rewarding merit and hard work, for example, would lead to a
society with a greater production of social benefits, so the least well off would be
better off than in a society of pure equality.]
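As an illustration only, here is a small Python sketch of the 'maximin' comparison just described. The candidate distribution schemes and the payoff numbers are invented for the example.

    # Hypothetical payoff profiles for the members of a society under
    # three candidate distribution schemes (made-up numbers).
    schemes = {
        "pure equality":       [5, 5, 5, 5],
        "merit-based rewards": [12, 9, 7, 6],
        "winner takes most":   [30, 4, 3, 2],
    }

    def maximin_choice(schemes):
        """Choose the scheme whose worst-off position is as good as possible."""
        return max(schemes, key=lambda name: min(schemes[name]))

    print(maximin_choice(schemes))  # -> "merit-based rewards" (worst-off gets 6)

On these invented numbers the unequal, merit-based scheme is chosen over pure equality precisely because its least well-off member does better, which is the point made above about justified inequalities.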
Morality and ethics
Ethics (also known as moral philosophy) is the branch of philosophy which addresses questions
of morality. The word 'ethics' is "commonly used interchangeably with 'morality' ... and
sometimes it is used more narrowly to mean the moral principles of a particular tradition, group,
or individual."[6] Likewise, certain types of ethical theories, especially deontological ethics,
sometimes distinguish between 'ethics' and 'morals': "Although the morality of people and their
ethics amounts to the same thing, there is a usage that restricts morality to systems such as that of
Kant, based on notions such as duty, obligation, and principles of conduct, reserving ethics for
the more Aristotelian approach to practical reasoning, based on the notion of a virtue, and
generally avoiding the separation of 'moral' considerations from other practical
considerations."[7]
Descriptive and normative

In its descriptive sense, "morality" refers to personal or cultural values, codes of conduct or
social mores. It does not connote objective claims of right or wrong, but only refers to that
which is considered right or wrong. Descriptive ethics is the branch of philosophy which
studies morality in this sense.
In its normative sense, "morality" refers to whatever (if anything) is actually right or wrong,
which may be independent of the values or mores held by any particular peoples or
cultures. Normative ethics is the branch of philosophy which studies morality in this sense.

Normative theories of ethics or moral theories are meant to help us figure out what actions are
right and wrong. Popular normative theories include utilitarianism, the categorical imperative,
Aristotelian virtue ethics, Stoic virtue ethics, and W. D. Ross's intuitionism. I will discuss each
of these theories and explain how to apply them in various situations.
Utilitarianism
Utilitarianism is a very simple view that matches common sense: right and wrong can be
determined by a cost-benefit analysis. We must consider all the good and bad consequences
when deciding if an action is right. Utilitarians disagree about what counts as good or bad.
Some think that fulfilling desires is good and thwarting desires is bad, classic utilitarians think
that happiness is good and suffering is bad, and pluralists believe that there are multiple
intrinsic goods worth promoting. An action will then be said to be right as long as it
satisfactorily causes good consequences compared to alternative actions, and it will be wrong
if it doesn't. Utilitarianism doesn't discriminate or encourage egoism: it is wrong to harm others
to benefit yourself, because everyone counts. What counts as satisfactory will not be agreed
upon by all philosophers. Originally some philosophers suggested that only the best action we
could possibly perform is right, but this is an extreme, impractical, and oppressive view. Why?
Whenever you are taking a shower or spending time with friends it would probably be better to
be doing something else, such as helping the needy, but it is absurd to say that you are always
doing wrong whenever you shower or spend time with friends. Additionally, it
isn't clear that there is always a best course of action available to us. There might be an
unlimited number of actions we can perform, and at least one of them could be better than what
we choose to do.

It should be pointed out that right actions and right moral decisions are two
different things. An action is right when it produces good results, even if it was made for the
wrong reasons. For example, I could decide not to go to my job one day when going in would,
as it happens, have caused a car crash. There is no way to expect a car crash to occur that day, but my
action would be right insofar as it caused positive results. People might then say, "You got
lucky and ended up doing the right thing." For a utilitarian, to make the right moral decision
means to make the decision that is most likely to actually be right (to lead to good results)
based on the information available to me. Choosing to go to work is usually the right decision to
make despite the fact that there is a negligible chance that I will get into a car wreck; such a
decision can't take far-fetched possibilities into consideration. Utilitarianism is not necessarily
meant to be used as a decision procedure for deciding what to do. If we can clearly know that a
course of action will produce very good results and negligible bad results, then that action is
rational. However, we aren't always good at knowing which actions will produce good results, and
we can often be overconfident in our ability to do so. It is often wrong to choose to do something
we believe will probably have good results if that behavior is risky and has a chance of hurting
people. For example, a jury shouldn't find someone guilty when that person has been proven
innocent, in the hope that it will prevent a riot in the streets, because the jurors can't know for sure
that such a decision will produce the desired results, and they do know that the guilty verdict will
destroy someone's life. To conclude, in order to know if something is morally preferable for a
utilitarian, we must ask, "Will it lead to more benefits and fewer harms than the alternatives?" If
the answer is "Yes," then it is morally preferable.
Applying Utilitarianism
Killing people – Killing people is usually wrong either because people have value (and they
might not exist after dying), because everyone has a desire to stay alive, or because killing
people makes other people unhappy.
Stealing – Stealing is usually wrong because it makes people unhappy to lose their possessions,
because they might need their possessions to accomplish certain important goals, and because the right to
property makes it possible for us to make long-term goals involving our possessions.
Courage – Courage is essential for morality because people must be willing to do what they
believe is right even at a personal cost. Sometimes doing the right thing requires altruism,
such as when a whistle-blower must tell the American public about corruption at the workplace
(despite the fact that they might be killed for doing so).
Education – Education is good because it helps us know how to be a productive member of
society, it helps us know empirical facts that are relevant to knowing which actions are likely to
benefit or cause harm (e.g. better parenting techniques or healthy eating), and it helps us think
rationally and make better decisions.
Promising – It is wrong to break a promise because doing so would make other people upset and
waste their time. People depend on the honesty of others in order to take business risks, plan for
their retirement, and so on.
Polluting – It is wrong to pollute if the pollution will harm others. It is preferable to refuse to
pollute if too many people polluting could also harm others, but we are not necessarily personally
responsible for the harms caused by an entire civilization.
Homosexual behavior – Homosexual behavior does not automatically cause harm, and it is
something many people find pleasurable and part of living a happy life. Therefore, it is not
always wrong. Homosexuality can cause someone harm through discrimination, but to blame
homosexuality for the harms of discrimination is a form of blaming the victim, just like blaming a
woman who gets raped for being too weak.
Atheism – Atheism does not necessarily cause people harm other than through discrimination, but
blaming atheists for discrimination is also a form of blaming the victim. Additionally, atheism is
often a position one believes in because of good arguments, and it is appropriate for people to
have beliefs based on good arguments. Being reasonable is right because it tends to have
good results.
Objections
1. Consequences might not be enough. Utilitarianism requires us to do whatever promotes
the good the most, but that could require us to be disrespectful or even to harm certain
people. For example, if we kill someone to donate their organs and save five lives, then it
seems like our action maximized the good and wasn't wrong. This result is
counterintuitive, and it suggests that utilitarianism is incomplete because we might have
rights that must not be violated, even to maximize the good.
2. Utilitarians aren't sensitive to heroic acts. Utilitarians think we ought to maximize the
good. If this is a duty, then it seems much too demanding. In that case we would probably
be doing something morally wrong almost every second of the day, and we would rightly
be blamed and punished for it. But it doesn't seem wrong for me to do a handstand or
spend time with friends just because I could be doing something better with my time.
Additionally, heroic acts like jumping into a fire to save a child seem like they
are beyond the call of duty rather than obligations. If it's not a duty to maximize the
good, then utilitarians will have to explain when we have duties and when we don't. It's
not obvious that we can draw this line using utilitarianism.
Categorical Imperative
The categorical imperative asks us to act in a way that we can will to be a universal law. In other
words, it asks us to behave in a rational way that would be rational for anyone. If it is right for
me to defend myself when attacked, then it is right for everyone to defend themselves when
attacked. Robert Johnson describes the categorical imperative as a method for finding out whether an action
is permissible using four steps: "First, formulate a maxim that enshrines your reason for acting as
you propose. Second, recast that maxim as a universal law of nature governing all rational
agents, and so as holding that all must, by natural law, act as you yourself propose to act in these
circumstances. Third, consider whether your maxim is even conceivable in a world governed by
this law of nature. If it is, then, fourth, ask yourself whether you would, or could,
rationally will to act on your maxim in such a world. If you could, then your action is morally
permissible."1 I will describe each of these stages in more detail (a short illustrative sketch follows the numbered list):
1. First we formulate the maxim or motivational principle that guides our action. For
example, I might plan on eating food because I'm hungry, or decide to break a promise to
pay a friend back because I would rather keep the money.
2. Second, let's transform the action into a universal law of nature. Everyone must act for
the same reason that I will act on. Everyone will eat food when they're hungry and break
their promises to friends when they would rather keep their money.
3. Third, let's consider whether such a maxim could even be a universal law of nature. Could
everyone eat food when they're hungry? Yes. Could everyone refuse to pay their debts
when they'd rather keep their money? No, because that would undermine the whole point
of having debts to be paid. No one would loan money out in that world. At this point we
can already rule out the maxim of refusing to pay our debts out of convenience, so it's an
irrational and impermissible maxim and we have a duty not to act from that motive.
4. Fourth, if the maxim passes the third step, could we rationally will the maxim to be
followed by everyone in our circumstances? Perhaps I can will that people eat when they
are hungry, but not necessarily in every circumstance, such as when there's limited food
that needs to be shared with others who are also hungry.
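Purely as an illustration of the structure of this four-step test (not as a way to automate it), here is a small Python sketch. The two boolean inputs stand in for the judgments a reasoning agent has to make at steps three and four, and the output uses the perfect/imperfect duty labels explained in the next paragraph; the example maxims are the ones used above.

    # Schematic version of the four-step test described above. The two
    # judgments (conceivability and rational willability of the maxim as a
    # universal law) must be supplied by the agent; they cannot be computed.

    def categorical_imperative_test(maxim, conceivable_as_law, rationally_willable):
        """Classify a maxim given the agent's step-3 and step-4 judgments."""
        if not conceivable_as_law:        # fails step 3
            return "perfect duty to refrain from: " + maxim
        if not rationally_willable:       # passes step 3 but fails step 4
            return "imperfect duty to refrain from: " + maxim
        return "permissible: " + maxim

    print(categorical_imperative_test("break promises when inconvenient", False, False))
    print(categorical_imperative_test("eat when hungry", True, True))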
Johnson adds that we have a perfect duty to refrain from doing anything that violates the
third step, in the sense that there are no exceptions: whenever we are in the relevant situation, we
must refrain from doing the act. Since refusing to pay one's debts when we
prefer to keep our money doesn't pass the third step, we have a perfect duty not to refuse to pay
our debts for that reason. Kant also thinks we have a perfect duty not to commit suicide in order to
avoid suffering. If we have a maxim that doesn't pass the fourth step, then it's an
imperfect duty to refrain from acting on it, which means we must refrain from doing so at least some
of the time. Kant thinks we can't always refrain from helping others, so we have a duty to help
others at least some of the time. I suspect that the categorical imperative is compatible with all
other moral theories. For example, a utilitarian will have to believe that it is only rational to
behave in a way likely to promote positive values, and such moral rationality applies to
everyone. Of course, the categorical imperative doesn't require us to be utilitarians. There might
be some actions that are right for reasons other than the likelihood of producing positive results.

The categorical imperative is often related to hypocrisy, the golden rule, and the question, "What
if everyone did that?" First, our morality must not be hypocritical: what is right for me is right
for everyone. Second, we can demand that someone treat others how she wants to be treated, as
long as she wants to be treated in a way that rationality permits. Third, we can demand that
people not behave in a way that is wrong for others. If everyone defended themselves from
attack, then people would be behaving appropriately. However, if everyone steals to benefit
themselves, then they will be doing something wrong. When we ask, "What if everyone did
that?" we are not asking, "Would there be bad consequences if everyone did X?" The categorical
imperative does not necessarily concern itself with consequences, and it doesn't claim that
something is wrong just because too many people doing it could become destructive. In
order to know if an action is morally acceptable based on the categorical imperative we must ask,
"Is the action rationally appropriate for everyone else in the same situation?" If the answer is
"Yes," then the action is morally acceptable.
Applying the categorical imperative
Killing people – Killing people is wrong whenever it would be inappropriate for someone to kill
us, and we need to consider the motivational reason for killing someone. It would be wrong for
people to kill us out of greed just to take our money, so it is wrong for everyone to kill out of
greed to take other people's money. However, it would be right for someone to kill us if
necessary to defend themselves from attack out of self-respect, so it is right for everyone else as
well.
Stealing – Stealing is wrong whenever it would be inappropriate for someone to steal from us,
such as when they want something without paying for it. However, if stealing is necessary to
survive because no one is willing to share food, then it might be necessary to steal out of self-respect.
Courage – Courage is rationally necessary for us to be willing to do the right thing when the right
thing is done at personal risk to oneself. Emotions must be disregarded if they conflict with the
demands of moral reason.
Education – Education is a rational requirement insofar as ignorance puts others at risk. If we can
rationally demand that others become educated because of the dangers of ignorance, then we are
also rationally required to become educated.
Promising – Keeping a promise is a rational requirement insofar as we can rationally demand
that other people keep their promises (out of respect for our humanity). It might be that breaking
a promise is necessary from time to time (to respect our humanity), but only when it would be
right for anyone in that situation to break the promise. For example, an enraged friend who asks
for the gun you are borrowing from him should be denied the weapon. It is perfectly respectful to deny
someone who is out of their mind a weapon, because they will appreciate it later once they regain their
reason. (Kant actually had something different to say about this issue.)
Polluting – Although everyone polluting by driving cars causes harm, it isn't clear that
polluting is always wrong. Everyone committing their life to medicine would end up causing
harm, but we don't want to say that someone is doing something wrong by committing her life
to medicine. However, it might be wrong to cause pollution whenever we know that it will cause
harm. If we can rationally demand that a business pollute less, then others can make the same
demand on us.
Homosexual behavior – If having sex for pleasure can be rational for heterosexuals, then having
sex for pleasure can be rational for homosexuals. Doing something to attain pleasure is not
irrational as long as there's no overriding reason to find it problematic.
Atheism – Someone can rationally believe in atheism if it is found to be a sufficiently reasonable
belief, just like all other beliefs. If it is rational to believe in theism when it is found to be sufficiently
reasonable, then it can be rational to believe in atheism for the same reason.
Objections
1. The categorical imperative isn't meant to be a complete decision procedure. Kant
discusses the categorical imperative in the context of moral concepts rather than
moral reality. Even if the categorical imperative exists, it's not always clear how to use it
to decide what we ought to do in each unique situation we find ourselves in. Many people
disagree about how the categorical imperative applies in each situation.
2. We don't know that categorical imperatives can help us. Kant's theory requires that
people can be motivated by categorical imperatives, but it's not clear that we can be. The
problem is that we don't know how we are motivated in each situation, and we often
deceive ourselves. If we can't be motivated by categorical imperatives, then we need to
know how practical they are. Will they help us be moral in any important sense?
Aristotelian Virtue Ethics
Aristotelian virtue ethics has two parts. First, Aristotle argues that our personal happiness (or
flourishing) is the ultimate goal that we should promote. Second, he argues that we should
learn to have habits and behave in ways that lead to our personal happiness. (To have the right
habits and feelings is to be virtuous.) We can learn what behaviors cause happiness through our
past behavior, and we can learn to be sensitive to the particularities of each situation. For example,
we know not to attack people in most situations, but it might be necessary to attack people in
self-defense. In order to know if something is morally acceptable for an Aristotelian we must ask, "Is
the action based on a sensitivity to the situation? And does the action lead to personal
happiness?" If the answer to these questions is "Yes," then the action is morally virtuous. Two
clarifications still need to be made. First, Aristotle's idea of happiness is distinct from pleasure
and means something more like "good life" or "flourishing." Additionally, some of our goals
could be morally justified for Aristotle as long as they don't conflict with happiness. Pleasure,
knowledge, and virtue in particular seem like worthwhile goals in general, even if they don't
cause happiness. Second, Aristotle argues that virtue is the greatest form of happiness. Happiness
is the ultimate goal, the "ultimate and most final end," but there can be other worthy goals or
final ends. (Final ends are goals that are worth pursuing and desiring for their own sake.)
Aristotle thought that becoming the best kind of person by developing our uniquely human
capacities was the best way to be happy. In particular, we're rational and political animals, so we
need to develop our ability to be rational and our ability to get along with others. Being a
political animal is manifested in how we care for others in general and desire to help others.
Aristotle, like most virtue ethicists, is skeptical about using rules to make moral decisions. It
seems impractical to use rules and philosophical arguments to make decisions every second of
the day, even if morality is ultimately grounded in rules. Instead of having rules, we need to learn
to have an intuitive understanding of morality and develop virtuous character traits that cause
appropriate behavior without a great deal of thought usually being required. A person who has an
intuitive understanding of morality and has virtuous character traits has practical wisdom (the
ability to achieve worthy goals) but not necessarily theoretical wisdom (the ability to know about
the world through generalization and deduction). Although Aristotle doesn't think ethics is best
understood in terms of rules, he finds that wisdom tends to be based on avoiding extremes and
finding a moderate middle ground: the golden mean. A person with cowardice is afraid, even
when she should not be afraid. A person with foolhardiness isn't afraid, even when she should
be. A virtuous person with courage will only be afraid when it's appropriate to be. Some people
define courage as an ability to act despite fear. Perhaps there are times when we should endanger
ourselves, even when it's appropriate to feel fear. For example, it could be courageous to jump into
a burning building to save a child, even though it might make sense to feel fear insofar as our
own well-being would be threatened. Aristotle argues that even the ultimate self-sacrifice isn't
necessarily incompatible with our personal happiness, but that is a very controversial point.
However, even if it can be appropriate to feel fear and act despite our fear, courage is merely
more complex than Aristotle stated, because the fact that we feel fear doesn't guarantee inaction.
Aristotle's idea of finding the golden mean is a general rule, and we can use it to make many other
general rules. Virtues like courage, moderation, justice, and wisdom could be taken to imply
various general rules of avoiding certain extremes. We shouldn't eat too much food; we should
eat, desire, and enjoy food when it's appropriate, but not when it's inappropriate; and so on.
Applying Aristotle's virtue ethics

Killing people – It might be necessary to kill people in self-defense because living is necessary to
be happy (and we must promote goods that are necessary for our personal happiness), but killing
people makes us unhappy because we are social animals and we care about people. We don't like
horrible things to happen to others.
Stealing – Stealing is necessary if it is necessary for our personal happiness, but stealing makes
us unhappy insofar as we care about people.
Courage – Courage is necessary for us to take the risks needed to live a fully happy life. Courage
is our habit of being afraid when that is necessary for our happiness, and of not being afraid when
that is necessary for our happiness.
Education – Education is necessary for our personal happiness, not only so that we know how best to
be happy, but also because the most intellectual forms of contemplation are the most positive
experiences we can have. A contemplative life is the happiest sort of life we can live.
Promising – Keeping a promise is virtuous as long as we consider the situation at hand and keep
the promise because it is likely to promote our happiness. In other words, keeping the promise
might not be personally beneficial, because we can also keep a promise out of respect (care) for
the other person. We can't be happy while hurting others.
Polluting – Polluting is wrong insofar as it hurts people and we care about people.
Homosexual behavior – Homosexual behavior is wrong when done immoderately (in an overly
dangerous way likely to lead to unhappiness), but it is right when done in a way that leads to
one's personal fulfillment.
Atheism – Atheism is right as long as the belief is not under our control or as long as the belief
does not lead to our unhappiness. Atheists often can't control their atheism, just as they can't
believe in many other things that they find implausible (ghosts, ESP, bigfoot, etc.).
Objections
1. It's not just our personal happiness that matters. First, it's not obvious that happiness is
the ultimate good. Perhaps our existence is more important. Second, it's not obvious that
we should only be concerned with our personal good or happiness. It seems plausible to
think that everyone's happiness should be taken into consideration.
2. Caring for others isn't always good for our happiness. Aristotle thinks we care for
others by our very nature, so we should take other people's good into consideration.
However, we don't always care about strangers, and it's not obvious that we should
nurture our empathy for strangers given Aristotle's assumption that our personal happiness
is the ultimate good. It can be painful to care for others because their suffering can cause
us suffering, and we might have some control over how much we care for others, and
strangers in particular.
Stoic Virtue Ethics
Simply put, Stoic virtue ethics is a theory that true moral beliefs and thoughts tend to lead to
appropriate emotions and actions. However, Stoic virtue ethics traditionally has five parts:
1. It argues that virtue is the ultimate value that overrides all other values.
2. It defines virtue in terms of having true evaluative beliefs, having emotions based on those
evaluative beliefs, and behaving according to those evaluative beliefs. (Evaluative beliefs
are value judgments, such as "pleasure is preferable.")
3. It states that true (or well-reasoned) evaluative beliefs and thoughts tend to give us
appropriate emotions and actions. Positive evaluative beliefs lead to positive emotional
responses and negative evaluative beliefs lead to negative emotional responses.
4. It states that we can know what is preferable from our instincts, which were given to us
by God (Universal Reason). In particular, we have an impulse to care for others both
emotionally and through action, which indicates that caring for others is
preferable.
5. It states that everything that happens is for the best because it was preordained by God
(Universal Reason) and therefore there is no reason for us to have a negative emotional
response.
The first three of these parts sound reasonable, but the last two require us to accept the existence
of the Stoic divinity, which is something contemporary philosophers find to be much too
ambitious. What we need is a way to determine truths about preferences. I have two different
suggestions for finding them without referring to a divinity:
1. We can prefer whatever is necessary to be virtuous. No matter what we value, we can't
promote that value unless we value life, consciousness, and freedom from pain.
2. We can experience some values for ourselves, such as the value of pleasure and the disvalue
of pain.

I discuss these solutions in much more detail in my Master's thesis, Two New Kinds of
Stoicism. My theories are known as Neo-Aristonianism and Common Sense Stoicism. In
order to determine if something is morally acceptable for a Stoic philosopher we need to ask,
"What emotions are being felt and what beliefs are held?" If the emotions are caused by rational
beliefs, then it is morally acceptable.
Applying Stoic virtue ethics
Killing people – It is wrong to kill people insofar as killing is motivated by inappropriate
beliefs and thoughts, such as, "This person committed atrocities and deserves to die." Such a
belief could motivate rage and we could lose rational control of ourselves. Instead, we should
dispassionately consider why killing could be appropriate based on rational preferences. For
example, it might be appropriate to kill in self-defense if necessary for our preference for
survival, despite the fact that we ought to care about all people and prefer that good things
happen to others.
Stealing – It is wrong to steal insofar as stealing is motivated by inappropriate beliefs and thoughts,
such as, "I need to have more money." It might be necessary to steal to act on a sufficiently
important rational preference, such as a preference to survive when stealing is needed to survive;
but pleasure would not be an important enough preference to warrant theft. For
one thing, we care for others and don't like others to suffer theft, and the expectation of pleasure
would not override the importance of helping rather than harming others.
Courage – The ancient Stoics believed that courage was a lack of fear. We can be cautious and
prefer to live well without fearing death or losing our external goods. The Stoics believed that
the fear of death is based on the inappropriate belief that death is an evil (despite the fact that it
is dis-preferable).
Education – First, education can help us attain good reasoning, which helps us form better (well-justified
and accurate) beliefs. Second, well-justified and accurate beliefs help lead to appropriate
emotions and actions.
Promising – Keeping a promise is virtuous as long as we do so based upon justified preferences.
We should not break a promise just because we are compelled to do something more pleasurable,
because that would overemphasize the importance of pleasure and de-emphasize the value of the
person who would be disrespected or harmed.
Polluting – Polluting to the extent of harming others is often based on inappropriate selfishness,
greed, and an inappropriate lack of care for others. The virtuous person will care for others and
won't want to harm them for money. It might be worth driving a car in a society where cars help
us live a better life, despite the fact that the pollution ends up harming some people.
Homosexual behavior – Homosexual behavior, insofar as it is based on a preference for pleasure,
is appropriate as long as it is compatible with our care for others. An inappropriate love of
pleasure could cause inappropriate lust that would cloud our judgment, whether we are talking
about homosexual or heterosexual sex.
Atheism – Atheism is appropriate insofar as the belief is probably true based on the information
available to us. For the Stoic philosopher, true beliefs are of primary importance. We should
hold a belief because it is true, not because it is pleasurable or because of our emotions.
Objections
1. Does Universal Reason exist? The Stoics require us to believe in Universal Reason, but
not everyone believes in Universal Reason, and it's not obvious that Universal Reason
really exists.
2. Stoic virtue ethics can dull our emotions. It's not entirely clear what emotions are
appropriate for the Stoics, but some people think they would dismiss many appropriate
emotions that enrich our lives. Grief, passionate love, and anger were often said by the
Stoics to be inappropriate emotions, but many people aren't convinced that they are
inappropriate after all.
Ross's Intuitionism
W. D. Ross's theoretical understanding of morality, explained in The Right and the Good, was not
meant to be comprehensive and determine right and wrong in every situation; indeed, he doesn't
think that is ever going to be possible. He denies that there is one single overarching moral
principle or rule. Instead, he thinks we can make moral progress one step at a time by learning
more and more about our moral duties, and by doing our best at balancing conflicting obligations and
values. Ross proposes that (a) we have self-evident prima facie moral duties, and (b) some things
have intrinsic value. Prima facie duties: We have various prima facie duties, such as the duty of
non-injury (the duty not to harm people) and the duty of beneficence (the duty to help people). These
duties are prima facie because they can be overridden. Duties can determine what we ought to
do "nothing else considered," but they don't determine what we ought to do "all things considered."
Whatever we ought to do all things considered will override any other conflicting duties. For
example, a promise to kill someone would give us a prima facie duty to fulfill our promise, but
it would be overridden by our duty not to injure others. Ross argues that we have (at the very
least) the following duties:
1. Duty of fidelity – The duty to keep our promises.
2. Duty of reparation – The duty to try to pay for the harm we do to others.
3. Duty of gratitude – The duty to return favors and services given to us by others.
4. Duty of beneficence – The duty to maximize the good (things of intrinsic value).
5. Duty of noninjury – The duty to refuse to harm others.
Is this list complete? That is not obvious. We might have a duty to respect people beyond these
duties, and we might have a duty of justice, equality, and/or fairness: to praise, blame, reward,
punish, and distribute goods according to merit. For example, it's unfair and disrespectful to
blame innocent people because they don't merit blame; they weren't responsible for the
immoral act. Self-evidence and intuition: Ross thinks we can know moral facts through intuition.
What does it mean for these duties to be self-evident? It means that we can contemplate the
duties and know they are true based on that contemplation, but only if we contemplate them in
the right way. Ross compares moral self-evidence to the self-evidence of mathematical axioms.
A mathematical axiom that seems to fit the bill is the law of non-contradiction: we know that
something can't be true and false at the same time. Intuition is the way contemplation can lead to
knowledge of what is self-evident. We often use the word "intuition" to refer to things we consider
common sense or things we know that are difficult to prove using argumentation. Ross thinks
we can know things without arguing for them, and he thinks that anything truly intuitive is
self-evident. Keep in mind that intuition doesn't necessarily let us know that something is self-evident
immediately, nor is intuitive contemplation infallible. Consider that 123+321=444
could be self-evident. We might need to reach a certain maturity to know that this mathematical
statement is true, and recognition of its truth is not necessarily immediate. It requires familiarity
with addition, and some people will need to spend more time contemplating than others.
Intrinsic value: Many utilitarians agree with Ross that pleasure is intrinsically good and pain is
intrinsically bad. Pleasure is good just for existing and is worthy of being a goal. The decision
to eat candy to attain pleasure makes sense if pleasure has intrinsic value, and we all seem to think
that attaining pleasure is at least sometimes a good enough reason to justify such an
act. We have prima facie duties not to harm people, at least to the extent that harming them causes
something intrinsically bad (pain), and to help people, at least to the extent that helping them produces
something intrinsically good, like pleasure. What's intrinsically good? Ross suggests that justice,
knowledge, virtue, and innocent pleasure are all intrinsically good. However, minds, human
life, and certain animal life could also have intrinsic value. How do we use Ross's
intuitionism? First, we need to determine our duties and what has intrinsic value. Second, we
need to determine whether any of these duties or values conflict in our current situation. If so, we need
to find a way to decide which duty is overriding. For example, I can decide to go to the dentist
and get a cavity removed, and this will cause me pain, but it is likely to help me avoid
even more pain in the future. Therefore, it seems clear that I ought to get the cavity removed.
However, if I have two friends who both want to borrow my car at the same time and I won't be
needing it for a while, I might have to choose between them and decide which friend needs the
car the most, or randomly decide between them if that's impossible.
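As a deliberately limited sketch, here is one way the first half of this procedure might be represented in Python: it only enumerates which prima facie duties a described situation engages. The situation features and the duty triggers are invented for illustration, and the all-things-considered judgment is left to the agent, since Ross denies that conflicts can be resolved by calculation.

    # Which prima facie duties does a (toy) situation engage? The trigger
    # conditions are hypothetical stand-ins for the agent's reading of the case.
    PRIMA_FACIE_DUTIES = {
        "fidelity":    lambda s: s.get("promise_made", False),
        "reparation":  lambda s: s.get("harm_caused", False),
        "gratitude":   lambda s: s.get("favor_received", False),
        "beneficence": lambda s: s.get("can_help_someone", False),
        "noninjury":   lambda s: s.get("action_would_harm", False),
    }

    def engaged_duties(situation):
        """Return the prima facie duties that bear on a described situation."""
        return [name for name, applies in PRIMA_FACIE_DUTIES.items() if applies(situation)]

    # E.g. promising to meet a friend for lunch while a stranger lies injured:
    situation = {"promise_made": True, "can_help_someone": True}
    print(engaged_duties(situation))  # -> ['fidelity', 'beneficence']; weighing them is a judgment call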

Applying Ross's Intuitionism


Killing people – It is generally wrong to kill people because it (a) causes people pain, (b)
prevents them from feeling future pleasure, and (c) destroys their knowledge. If and when killing
people isn't wrong, we will need an overriding reason to do it. Perhaps it can be right to kill
someone if it's necessary to save many other lives.
Stealing – It is wrong to steal insofar as it causes people pain, but it might be morally preferable
to steal than to die. Our duties to our children could also justify stealing when it's the only option
to feed them.
Courage – Virtue has intrinsic value, and courage is one specific kind of virtue. Courage is our
ability to be motivated to do whatever it is we ought to do all things considered, even when we
might risk our own well-being in the process.
Education – Knowledge has intrinsic value, so we have a prima facie duty to educate people and
seek education for ourselves.
Promising – Keeping a promise is already a prima facie duty, but it can easily be overridden when
more important duties conflict with it. For example, you could promise to meet a friend for
lunch, but your prima facie duty to help others might override your promise when a stranger is
injured and you can help out.
Polluting – Polluting violates our prima facie duty of noninjury, but polluting might be
necessary for people to attain certain goods they need to live. In that case pollution could be
appropriate.
Homosexual behavior – Homosexual behavior can be justified because it can help people attain
pleasure, but we also have a prima facie duty to try not to endanger our own life or the lives of
others, so it's better to take certain precautions rather than have homosexual sex
indiscriminately. This is no different from the morality of heterosexual sex.
Atheism – Being an atheist doesn't violate any of our prima facie duties, so it's not wrong.
Telling one's parents that one is an atheist could cause momentary pain, but one's prima facie
duty to be open and honest seems to override that concern in most situations. Additionally,
being open and honest in public about one's atheism could risk one's own well-being, but it
could also help create acceptance for atheists in general and help other atheists as a consequence.
Objections

1. It's not clear that intuitions are reliable. I've mentioned before that both intuition and
self-evidence have been questioned by philosophers. Many people have differing intuitions
and argue that different beliefs qualify as being self-evident.
2. It's not clear how we resolve conflicts between duties. Many philosophers don't think we can
have duties that conflict. For example, utilitarians think we should maximize the good,
and no moral consideration that conflicts with that principle will count for anything. If
our duties can conflict, then it's not obvious how we can decide which duty is overridden
by the other.
Conclusion
Philosophers have found ethical theories useful because they help us decide why various actions
are right and wrong. If it is generally wrong to punch someone then it is wrong to kick them for
the same reason. We can then generalize to the claim that it is wrong to harm people, which helps us
understand why punching and kicking both tend to be wrong, and which in turn helps us decide whether
various other actions and institutions are wrong, such as capital punishment, abortion, homosexuality,
atheism, and so forth. All of the ethical theories above have various strengths, and it is possible
that more than one of them is true (or at least accurate). Not all moral theories are necessarily
incompatible. Imagine that utilitarianism, the categorical imperative, and Stoic virtue ethics are
all true. In that case true evaluative beliefs (e.g., "human life is preferable") would tell us which
values to promote (e.g., human life), and we would be more likely to have an emotional response
that would motivate us to actually promote the value. We would feel more satisfied about human
life being promoted (e.g., through a cure for cancer) and dissatisfied about human life being
destroyed (e.g., through war). Finally, what is right for one person would be right for everyone
else in a sufficiently similar situation, because the same reasons will justify the same actions.
Introduction
The RCPSC recommends "knowledge of major ethical theories" as an educational objective for
Canadian physicians.1 Given that this primer is an introduction to the major philosophical moral
theories, it is important to explain why physicians should think about these ideas.
Most physicians deliberate and make effective decisions about hard moral problems without
knowing much or anything about moral theory. However, moral theories can help physicians to
justify and reflect upon the ethical decisions that they make. Often physicians will explain a
decision on the basis of their clinical experience: "I just know that in a case like this you should
do this." However, while clinical experience is often a good guide to the right thing to do, it is
fallible. Un-picking the reasoning that is implicit in good decision-making can help us to
discover why it is that we think something is right, as well as to test whether we have done the
right thing. One of the main reasons for learning something about moral theory is that this
knowledge can shed light on the way that we reason about ethical problems.
It is also important to bear in mind that moral theories are different from many of the other
theories that physicians use. Many theories in medicine are useful because they predict what is
going to happen and can tell us what we should do. For example, a theory about the function of
serotonin re-uptake inhibitors might tell us that particular drugs are likely to increase the amount
of serotonin in a patient's brain. Combined with other clinical judgments about a person
suffering from moderate depression and whether this might be helped by appropriate medication,
this theory can play a role in predicting what we should do. Moral theories are different from
other theories: while they can help us to justify the ethical decisions that we make, they are often
not predictive in the same way. There are a number of reasons for this. Moral theories attempt to
explain what it is that makes some actions right and others wrong. They operate at a more
general level than moral or legal principles and rules. For example, if a physician is faced with a
difficult decision about whether he or she is ethically bound by a patient's refusal of treatment, an
ethical rule about the right of patients to refuse treatment is more relevant and immediate than
more general theoretical considerations about autonomy or the maximization of happiness. Moral
theories play an important role in justifying our moral principles and rules, but sometimes they
are only indirectly relevant to a specific problem. Clinical ethical problems are usually
complicated: often, if a question about clinical ethics is not complicated then there is not a
problem! When there is significant doubt about what would be best for a patient, about how a
patient's or family's wishes should be balanced against physicians' judgments about what is best,
or about other complicated decisions, introducing moral theory might not provide the magic answer.
Moral theories are more complicated than they initially appear and they do not usually produce
the straightforward predictions that many people expect. This is partly because moral theories are
often refined and developed so that they can accommodate counterintuitive implications. (The
next section on utilitarianism will discuss a number of theory refinements like this.) However, it
is also because all moral theories are controversial, while theories in medicine are often not.
Given that physicians face clinical decisions and problems it is reasonable for them to look for
theories that will help. While this is fine for other areas of medicine, moral theories are
controversial and will often imply different things about the same case. The following sections
will consider a number of cases where moral theories and their variants imply that a particular,
sometimes counterintuitive, action should be performed. This is an important point, because it
would be worrying if a physician did something that was morally unwise because it followed
from a particular moral theory. While there are some reasons for being cautious about moral
theories, these theories also hold great potential for enriching critical reflection upon our
decisions. To bring this kind of critical reflection into sharp relief, it is important to introduce
moral theory in a way that conveys the complexity of, and controversy about, the major moral
theories. The following sections explain the three major theories (utilitarianism, Kantian
deontology and virtue theory) along with some of their variants and problems.
theories have something going for them and illuminate at least some of the important features of
morality. All of them also have some serious problems, and we should not treat any of them as
being completely correct. Nonetheless, knowledge of these theories can help us to understand,
reflect upon and improve our moral deliberation. While moral theory might not always tell us the
right answer, it can provide us with powerful critical tools for un-picking our moral decision
making. The final two sections describe influential methods for moral reasoning in medical
ethics. The first describes some of the features, strengths and weaknesses of the four principles
approach to biomedical ethics. The second discusses a number of other important accounts of
medical morality.
Utilitarianism
You are an intensive care physician responsible for admissions to a busy intensive care unit
(ICU). You have just been called to see patient A in the emergency room, who requires an urgent
admission to ICU. All of the beds on your unit are full, and while this patient might survive
transport to the nearest ICU with a spare bed, in your clinical judgment there would be a
significant risk of patient A dying in transit. While all of the patients in your ICU still need to be
there, patient B is making an excellent recovery and will be ready to be moved from the ICU in a
day or so. In your clinical judgment it would be feasible to move patient B to another ward
without a significant risk to his medical welfare. Moving patient B to another hospital would not
be in patient B's best interests, but you wonder whether it might be justified in this case.
What is utilitarianism? The principle of utility
There are, of course, a number of issues that are relevant to making a decision in this case. Some
of them are legal, but there are also important moral questions about your obligations to these
particular patients. One initially attractive option is to argue that the most good can be done by
moving patient B and treating patient A, and that this is ultimately the most important thing.
Appealing to what will produce the most good is the kind of argument that would appeal to a
utilitarian. In general, utilitarians think that the point of morality is to maximize the amount of
happiness that we produce from every action. Utilitarianism is not the only moral theory that
says that we should try to maximize the length and quality of life. All plausible moral theories
should say something about the importance of improving the lives of human beings. The crucial
thing that distinguishes utilitarianism from other moral theories is the claim that maximizing
human welfare is the only thing that determines the rightness of actions.

John Stuart Mill is, perhaps, the most famous utilitarian. He claimed the following: "actions
are right in proportion as they tend to promote happiness, wrong as they tend to produce the
reverse of happiness. By happiness is intended pleasure, and the absence of pain; by
unhappiness, pain and the privation of pleasure."2 Mill insisted that there is a strict relationship
between the rightness of an action and the amount of pleasure it promotes and pain it prevents.
He also said that the only thing that is relevant for determining the morality of an action is
whether it produces the greatest happiness. Other moral considerations, such as keeping
promises, only have moral value insofar as they produce happiness: if keeping a promise means
that happiness is not maximized then, according to utilitarianism, this promise should not be
kept. Intuitively, there is at least something right about the utilitarian viewpoint. Mill was a
radical social reformer who worked to promote the rights of women and slaves and argued for
the importance of free speech.3,4 Focusing upon the importance of maximizing pleasure can
provide an important moral scalpel for criticizing moral rules or conventional practices that harm
people.

At first, utilitarianism looks like an attractive theory for dealing with some moral
problems in medicine. Suppose you are responsible for a public health care budget. Suppose also
that you only have enough money left in your budget to provide either new radon gas
remediation measures or pneumococcal vaccinations for elderly people. Ideally you would like
to provide both of these services, but for the next year you can only afford one of them. A study
in the UK has estimated the radon gas measures to cost 6143–10,323 per quality-adjusted life
year (QALY),5 whereas the pneumococcal vaccination is likely to cost 273 per QALY.6 You
are likely to be able to produce more QALYs, or happiness, if you fund the vaccination program.
While we should be concerned about whether QALYs accurately represent quality of life, and about
the fact that we cannot provide the radon treatment too, it does seem right that if we must choose
then we should choose the vaccination program. The rationale implicit in this decision seems
defensible and is clearly a utilitarian rationale.
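As a toy illustration of the cost-per-QALY comparison just described, here is a short Python sketch. The budget figure is invented; the cost-per-QALY values are the ones cited above (taking the upper end of the quoted range for radon remediation).

    # Hypothetical remaining budget; cost-per-QALY figures as cited in the text.
    budget = 100_000

    cost_per_qaly = {
        "radon remediation": 10_323,          # upper end of the cited range
        "pneumococcal vaccination": 273,
    }

    # QALYs gained if the whole remaining budget went to one programme.
    qalys = {name: budget / cost for name, cost in cost_per_qaly.items()}

    best = max(qalys, key=qalys.get)
    print(best, round(qalys[best]))  # -> pneumococcal vaccination, ~366 QALYs (vs. ~10)
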
Utilitarianism might also appear to have some plausibility in an emergency setting. Suppose that you
are in a triage situation where two people urgently need your care. You would do anything within
your ability to save both of these people, but it is only possible for you to save one of them.
Suppose that one of the people is a girl of 6 years and the other a man of 72 years. You know
nothing about these individuals apart from these facts: what should you do? An option that some
would take is to refuse to assess the worth of other human beings in this way and to save the
nearest person, or to decide whom to treat on the basis of some other random, arbitrary factor.
However, another possibility is to take the decision to treat the 6-year-old child because,
potentially, this child has a greater amount of life ahead of her. This is a decision that many
would rather not take, but it does seem an instance where there is at least something to the
utilitarian view. While utilitarianism might appear to have some appeal, it does have implications
that most would find unpalatable. Before considering these, it is worth thinking more carefully
about Mill's claim that happiness is the most important, and indeed the only important, thing.
Hedonism, preference satisfaction and ideal accounts of human welfare
The core idea common to most versions of utilitarianism is that maximizing human welfare is
what makes actions right. There are a number of philosophical accounts of what human welfare
consists of and these can be plugged into the principle of utility, thereby generating different
versions of utilitarianism.
Hedonism
Mill thought that happiness, understood as the presence of pleasure and the absence of
pain, is what makes peoples lives go well. So, not only did he think that the point of
morality is to maximize human quality of life, he also gave us an account of what
quality of life consists of. Hedonism is the view that the only thing that contributes to a
persons life going well is pleasure, so we can consider Mill to be in favour of what we
might call hedonistic utilitarianism.
Given that the greatest happiness principle requires that we maximize happiness, this
raises the question of whether we can sum pleasure: unless we can know that
something produces more pleasure, how, on this account, can we know that it is right?
Mills predecessor, Jeremy Bentham, claimed that the value of pleasures and pains
should be measured by their intensity and duration.7Suppose that a patient asks you
whether she should opt for treatment A or treatment B, both of which are equally
effective at treating the relevant medical condition. Treatment A is significantly more
unpleasant than treatment B; based on your clinical experience, you think it is about
twice as unpleasant. However, the unpleasantness of treatment A generally only lasts
one third as long as treatment B. If, on this basis, you advised your patient to get it over
with and go for treatment A, this is likely to be because once intensity and duration
have been taken into account, treatment A is the least unpleasant treatment.
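One natural way to make this reasoning explicit, as a sketch only (Bentham does not put it in quite these terms), is to treat total unpleasantness as intensity multiplied by duration:

\[
U_B = i \times t, \qquad U_A = (2i) \times \tfrac{t}{3} = \tfrac{2}{3}\, i\, t < U_B,
\]

where \(i\) and \(t\) stand for the intensity and duration of treatment B's unpleasantness. On this crude measure, treatment A carries only two-thirds of the total unpleasantness of treatment B, which is why advising the patient to get it over with looks reasonable.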
While this might make some sense when trading off discomforts, it becomes a much
more complicated matter when we start thinking about all of the things that we can
count as pleasures. A person who regains their mobility after hip replacement
surgery and is able to take their dog for a decent walk might count this as a pleasure.
They might also count the experience of being able to sit through a performance
of Twelfth Night as a pleasure. While it does seem possible to compare the duration of
the walk with that of Twelfth Night, is it really possible to compare the intensity of the
resulting happiness? This difficulty becomes even more apparent when you consider
the difference between pleasures such as being mildly intoxicated, playing pinball,
reading the Bible, or watching your children play. While these may all be pleasures, is
it really possible to compare their intensity, thereby providing a way for us to say that
one of them contributes more to our happiness? These pleasures do all seem to be
pleasurable, but they are not necessarily the same kind of pleasure. Rather than
maximizing any one of these, most of us would rather have a number of these
experiences. Hedonism might appear to be a shallow and implausible view for another
reason, too. When Sigmund Freud was terminally ill he refused all pain relief except
aspirin because he wanted to be able to think clearly even though it would mean that he
was in extreme pain.8 Freud had this preference even though it was not hedonistically

the best thing for him: unless the pleasure of his work was more intense than his
extreme pain, hedonism appears to imply that Freud was wrong about what was best
for him.
Preference satisfaction: The possibility that people might prefer things that do
not necessarily maximize their happiness is one of the reasons why some people
believe in preference or desire satisfaction accounts of human welfare. The idea is
that because people have preferences for particular things or experiences, they must
think that these things or experiences are of value to them. Freud's preference about how he wanted to spend his final days is surely a better guide to what was in fact better for him. The most attractive feature of a preference satisfaction account is that it
makes what is best for a person depend upon what that person judges to be best for
him/herself. Intuitively, most of us think that, at least to some extent, what is best for
us as individuals depends upon our own judgment; certainly, most of us object to having someone else's view of what is best for us enforced upon us without regard to
our desires or preferences. Perhaps the major strength of preference satisfaction
accounts of welfare is that they capture the subjectivity of human well-being in an
appropriate way. Utilitarians who opt for this account of human welfare are usually
known as preference utilitarians. This means that the right action is the action that
would lead to the satisfaction of the most significant or strongly held preferences.
While this might appear to imply even worse measurement problems than those that
bedevil hedonistic utilitarianism, in fact, this is very close to the position adopted by
many welfare and health economists. I mentioned in the previous section the QALY,
which attempts to sum improvements in the quality of life produced by medical
interventions. QALYs are usually produced by asking people how much more they
would prefer to be in one health state than another. For example, health economists
wanting to find out how many QALYs coronary artery bypass surgery produces might
ask a group of people how much they would prefer to be in good health as opposed to a
particular state of ill health. These preferences are used to produce a quality of life
score for different illness states. The implicit assumption in QALYs and preference
satisfaction theory is that there is a strict relationship between something being
preferred and its being good for or valuable to a person. While the subjectivity of
preference satisfaction is attractive in some ways, it can also result in some
counterintuitive implications. Suppose that you discover that one of your patients, an
18-year-old woman, has been hoarding sleeping pills with the thought that she might
take her own life. When a person attempts suicide, they have a strong desire or
preference for their life to end. It might be argued that there are some cases of rational
suicide (i.e., when a person has sound reasons for wanting to end their life). However,
in this case you are convinced that the 18-year-old is making a mistake and that her life
is of value to her. According to preference satisfaction theory, her suicide is good for
her. The preference satisfaction theorist might respond by saying that only rational,
properly informed preferences are relevant to welfare. This will go some way to
answering this worry, but there may still be some difficult cases. Consider the
following: Andrew has been injured in a car accident and requires surgery to correct a
fracture to his skull. He refuses to consent to surgery because he is concerned that it
might damage his appearance. When you point out to him that there are significant
risks to not having the surgery, including a significant risk of mortality, he says that he
is prepared to run this risk because, to him, his appearance is crucially important. He

would rather risk death than have surgery that alters his appearance. Of course, there is
no question about what should be done in this case: if Andrew continues to refuse the
surgery and is competent there is no question that he has the right to make that
decision. On the other hand, it is possible to doubt that he is making the best decision
for him. Even though he is informed, rational and therefore doing what is best on a
preference satisfaction account, is there something to the thought that this is not really
the best thing for him? It might be that if Andrew does have the surgery and is scarred
then his life will not turn out to be as bad for him as he thinks it will.
Ideal theory: The third main kind of welfare theory that generates another version of
utilitarianism is what is referred to as ideal or objective list theory. The main idea is
that rather than making welfare depend upon the preferences of a person, there is a
range of goods that, when present in a person's life, make that person's life go better.
Of course it is very hard to determine which items should go on such a list, but
plausible candidates might include friendship, virtue, happiness, wisdom and
intelligence. A heroin addict might argue that his drug use gives him a great amount of
pleasure or that his desire for heroin is so strong that its satisfaction contributes greatly
to his welfare. If the addict funds his drug use through petty crime then it seems
plausible to infer that this might be at the expense of things like friendship and virtue.
Even though the addict does not think that the omission of these things from his life
means that his life does not go as well as it could, there is at least some plausibility to
the thought that he is making a mistake. Even if he experiences a great deal of pleasure
or desire satisfaction from his heroin use, things like friendship and virtue play an
important role in making a life go well, even if the person concerned does not realise it.
This version of welfare theory is one that might be thought to have some alarming
implications for medical ethics. Ideal theory opens up the possibility that people can
be wrong about what it is that makes their life go well, and this suggests that it is
possible for other people to know what is best for them. This sounds like an open invitation
for paternalism and is ethically a little alarming. Nonetheless, there are some plausible
responses that the ideal theorist can make. While it might be true that the addict is
making a mistake about what will make his life go best, it does not follow that forcing
him to stop taking heroin will mean that he experiences the goods of friendship and
virtue. It is likely that this degree of compulsion would be counterproductive: we cannot force friendship or virtue upon people; unless they are freely chosen and developed
they are unlikely to be the real thing. The ideal theorist can also appeal to the
importance of autonomy rights: it does not follow that someone who is making a
mistake about what is best for them loses their right to decide what happens to them. In
some respects, this is similar to a medical situation where a physician is convinced that
a treatment will be medically better for a patient: even if that patient disagrees with the
physician and is making a mistake about their medical welfare, they still have the right to
refuse the treatment. Ideal or objective list theories of welfare generate a third
variant of utilitarianism: ideal utilitarianism. This version implies that we should
maximize the total amount of human welfare, understood as being comprised of a
number of possible intrinsic goods in a person's life. This version of utilitarianism is
much more difficult to apply than the hedonist or preference satisfaction variants.
Rather than maximizing one thing, an ideal utilitarian has a number of different kinds
of good. This generates a number of difficult questions: what should be done when a

choice has to be made between maximizing different goods? Should goods of different kinds be weighted differently; for example, is friendship more important than
happiness? In such cases, the ideal utilitarian may not be able to provide any moral
guidance. Preference utilitarianism and hedonist utilitarianism are the two most
common variants but, as I will show in the next section, there are serious objections to
all three versions. It is worth emphasizing that you might think that one of the theories
of welfare is right (e.g., that happiness really is the only thing that makes a person's life
go well) while also disagreeing with utilitarianism.

The impartiality assumption


While there are problems with all of the welfare theories, there are core features of utilitarianism
that are likely to make it a difficult creed for most people. The decision to move patient B in the
ICU case involves a second defining feature of utilitarianism: the impartiality assumption. Most
forms of utilitarianism claim that we should maximize human welfare. The impartiality
assumption is the closely related idea that we should not be concerned about whose welfare it is
that is maximized: maximizing welfare is the only thing that matters. One objection to moving
patient B is that you have already undertaken to treat B and you therefore have a duty of care to
him. While patient A desperately needs your care, they are not your patient yet. This is a far
from uncontroversial response. It might be objected that the mere fact that a patient is in an
ambulance as opposed to being in the ICU is not a relevant moral difference. Nonetheless, this
does appear to be an example of partiality, or moral obligations that are based upon the nature
of a specific relationship. Other partial obligations are much less controversial. For example,
parents are very strongly motivated to do all that they can to care for their children. While they
have a moral concern for all children, they think that they have special obligations to their own
children because they are their own children. This kind of obligation is not, from a utilitarian
point of view, easily justified. The impartiality assumption is at the root of an important problem
for utilitarianism that is known as the integrity objection.
The integrity objection
Bernard Williams argued that a plausible moral theory should not require us to perform actions
that are a poor fit with our psychology.9 By this, he means that if a moral theory obliges us to do
things that are radically at odds with the kind of moral commitments that we ordinarily think we
have then something is wrong with that moral theory. Suppose that you are working in a country
where there is a civil war and you are kidnapped by guerrillas. They say that unless you assist
them in extracting information from a captive, a high-ranking officer in the military, they will
start executing other prisoners. Of course, you cannot be certain that the guerrillas will not

execute the prisoners if you do help to torture the officer. However, from what you have seen of
the guerrillas so far you are convinced that they are serious about the promise to execute the
prisoners and it seems overwhelmingly likely that if you do not help to torture the officer then
many prisoners will die. For all of the versions of utilitarianism that we have considered so far,
assisting in the torture is morally the right thing to do. The torture will inflict appalling suffering
upon the officer, and the knowledge that you have taken part in torture is something that you will
always have to bear. However, if morality requires you to maximize utility, then avoiding the
deaths of many prisoners outweighs the harms caused by the torture. According to Williams, the
point of the integrity objection is not so much that utilitarianism says that torturing the officer is
right, but that the moral choice is so straightforward. When the pain and suffering of one
individual can lead to saving a number of lives, the utilitarian calculation appears very simple. One thing that is missing from this picture is respect for your integrity as a person. Because utilitarianism implies that you will be partly responsible for the deaths of the prisoners if you do not torture the officer, your integrity as a person or moral agent is given no weight. The fact that you might have particularly strong moral objections to physicians ever being involved in torture, or that acting on principle is morally important, counts for nothing in this scenario.
The demandingness objection
Another way of framing these problems about the fit between our psychology and utilitarianism
is by pointing out how demanding being a utilitarian would be. Most of us could do more to aid
those in the developing world. Physicians are in a unique position to improve the quality of other
people's lives: there are few things that have a greater impact upon human happiness (or
whatever account of human welfare you favour) than treating disease. Physicians practicing
medicine in Canada or another country in North America or Europe can improve the quality of
patients' lives while also maintaining a high standard of living. However, the developing world
has a shortage of skilled physicians and it seems likely that most physicians in the Western world
could make a comparatively larger contribution to the lives of their patients if they worked in a
developing country. Many physicians do choose, via organizations such as Médecins Sans Frontières, to work in areas where there is a shortage of physicians. Suppose that you and your
family live in a comfortable Canadian town where you work at the local hospital as a general
surgeon. An organization looking for volunteer physicians to work in a war-torn developing
country contacts you. You know that if you leave your comfortable Canadian town then the
hospital will be able to find a replacement general surgeon, and you also learn that the
developing country is desperately short of physicians. In short, if you resign from your job in
Canada and sacrifice your excellent quality of life for the sake of saving more people in the
developing country then you will maximize the amount of utility you can produce. While doing

this might be a morally excellent thing to do, expecting you to act with this degree of self-sacrifice makes morality very demanding. Most of us will put our interests aside for the sake of
other people; parents do this on a daily basis. However, utilitarianism obliges us to sacrifice our
most important interests for the sake of people we do not know, if this is what will maximize
utility. This is even more demanding than it first appears. According to utilitarianism, if you do
not resign your job and move to the developing country, and you know that a number of people
are likely to die as a result of there being nobody to save them, then you are responsible for their
deaths. The principle of utility says that acts are right or wrong in accordance with their
propensity to produce utility or disutility respectively. If you act in a way that does not produce
the greatest amount of utility possible then you have done something wrong and are morally
responsible for that disutility. If you respond by pointing out that you manage to produce a
significant amount of utility by being a general surgeon in Canada, the utilitarian will reply that
you are morally obliged to produce the most utility that you can and that you are morally
responsible for the disutility that results from not doing so. If we take Bernard Williams' remark
seriously and insist that moral theories must fit our psychology and not require a radical rethink
of what we take ourselves to be, then the demandingness of utilitarianism makes it an
unattractive and perhaps impossible moral ideal. However, there are variants of utilitarianism
that attempt to sidestep the demandingness and integrity objections.
Act versus rule utilitarianism
Thus far, we have considered variants of act utilitarianism. Simply put, act utilitarians, like Mill,
think that the moral rightness of an action depends upon the extent to which it promotes utility.
This means that the rightness of every action depends only upon the utility that results from it. If
this action involves a lie or some other kind of action that we would ordinarily think of as wrong,
this does not make a moral difference to the act utilitarian. If lying or murder would lead to
greater utility then this is not only permitted, it is morally required. The reason why it is so
straightforward for the act utilitarian to say that you must torture the captured officer is because
this is likely to lead to the greatest utilitythe fact that this involves torture counts for nothing.
(There is a subtle point here. Utilitarians care about the consequences of actions, and it might be
that bad consequences follow from the fact that a physician has lied or tortured. Even in this kind
of case, the fact that this is a lie or an act of torture does not matter morally.) This is very
counterintuitive. Most of us think that there are moral rules that prohibit some actions and that
breaking those rules is morally important. Even though torture and murder might lead to
maximizing utility on some occasions, it is a mistake to think that it is right to partake in such
actions. Rule utilitarians take the importance of moral rules seriously and think that actions are
morally right when they conform to a moral rule: "Rule-consequentialism makes the rightness and wrongness of particular acts, not a matter of the consequences of those individual acts, but rather a matter of conformity with that set of fairly general rules whose acceptance by (more or less) everyone would have the best consequences."10 The requirement that physicians keep
sensitive information about their patients secret is a moral rule. Furthermore, the preservation of
patient confidentiality is crucial for maintaining patient trust and ensuring that physicians have
the information that they need for diagnosis and treatment. Without a general rule to keep patient
information confidential, many of the central aims of medicine would become far more difficult.
There is a general prohibition on physician involvement in torture,11 and this, too, makes sense from a rule utilitarian point of view. There is something intuitively wrong about any physician being involved in torture, and if all physicians follow this rule it is likely to lead to the best consequences. Rule utilitarianism appears to make much better sense of some important moral
concerns in medicine than act utilitarianism. However, it is subject to the criticism that it
collapses into a form of rule worship. The central feature of all forms of utilitarianism is that
morality has, at its core, the promotion of utility or human welfare and that the more of this the
better. It is hard to dispute that having moral rules of some sort is important for the promotion of
utility. Medical confidentiality is a good example: if patients do not think that their physician will always keep their information secret, but will instead decide on confidentiality according to whether it maximizes utility on each occasion, patients are unlikely to trust physicians with
sensitive information. The problem of rule worship occurs when the consequences of
following a moral rule on a particular occasion do not promote overall utility. While the
physician who refuses to help torture the captured officer might appeal to the general rule against
physicians participating in torture, it is not obvious why this is a utilitarian justification. Even
though the rule tends to maximize utility in general, on this occasion it does not. Maximizing
utility is at the core of all forms of utilitarianism, so if a rule utilitarian says that on this occasion utility
need not be maximized, they can be accused of rule worship. To escape this conclusion, rule
utilitarians can attempt to modify the rule so that there is an exception in this case; then,
however, rule utilitarianism collapses back into act utilitarianism: whether we follow a rule in a
particular case depends upon whether it maximizes utility in that case. Of course, you might
wonder what is bad about worshipping some moral rules; general rules that always prohibit
torture and murder might be considered attractive moral ideals. Rule worship is only a problem if
you want to use a strictly utilitarian justification for moral rules. As we will see in the next
section on Immanuel Kant and Kantian approaches to ethics, there are other ways that we can
justify general moral rules.
Direct versus indirect utilitarianism
There is another option open to utilitarianism that helps to reconcile the principle of utility with

general moral rules. Imagine how hard it would be to always act so that your actions maximize
utility. Remember that utilitarianism is impartial, so the fact that a person is your patient, spouse
or child makes no difference to your moral obligations to them. However, patients, spouses and
children expect us to prioritize their interests over those of unknown people. Living in a
utilitarian world where we cannot maintain the relationships that are an integral part of life
would be intolerable and perhaps impossible for us. This observation has led many act
utilitarians, including Mill, to argue that we should not directly aim at maximizing utility.
Instead, we should follow the rules and conventions of customary morality. Mill argued that
conventional morality, which includes things like general rules against torture and murder, has
developed so that it guides us to acts that are likely to promote utility. Mill clarified this idea by
drawing an analogy with the tables and guides that sailors use for navigation. "Nobody argues that the art of navigation is not founded on astronomy, because sailors cannot wait to calculate the Nautical Almanack. Being rational creatures, they go to sea with it already calculated; and all rational creatures go out upon the sea of life with their minds made up on the common questions of right and wrong, as well as on many of the far more difficult questions of wise and foolish."2 Mill goes on to argue that conventional morality guides us about right and wrong and
that we should follow this rather than attempting to always apply the principle of utility.
Nonetheless, the reason why some actions are right and others are wrong is explained by the
principle of utility. Consider patient confidentiality again. Patients expect that sensitive
information will be kept confidential and that physicians who are conforming to conventional
medical morality, at least in a Canadian context, will take all reasonable measures to ensure that
this happens. There are good reasons for thinking that keeping patient confidences tends to
maximize utility: it helps to facilitate trust between physicians and their patients, makes it
possible for physicians to find out important prognostic information and has other functions that
are ultimately important for patient welfare. An indirect act utilitarian like Mill would say that
this makes perfect sense. The presumption that patient information is kept confidential has
developed because of its tendency to maximize utility. An indirect utilitarian could also make
sense of the abhorrence that a physician would have at being compelled to take part in torture. In
general, torture has the most appalling effects upon human welfare so it makes perfect sense that
conventional medical morality has a strong prohibition of it. While torturing the officer might in
this case maximize utility and, strictly speaking, be the right thing to do, the physician who refuses because it is contrary to conventional moral thinking is not necessarily blameworthy (even though, strictly speaking, they are doing the wrong thing). Indirect utilitarianism sounds like a much
more plausible view. It does seem right that the reason why we have at least some of the
conventional medical morality that we do is because it aims at maximizing human welfare.
However, there are still some serious problems with this more refined view. One question is,

does creating different levels in moral thinking risk creating a moral dissociation in us? Indirect
utilitarians say that we should follow conventional moral rules when deliberating about moral
choices; but, at the same time, there is another level of moral thinking that does not need to
enter into our deliberations, even though it is really the level at which things are right and wrong.
This creates a schism in our moral thinking. When we believe we are thinking through the
solution to a moral problem, in fact we are only indirectly appealing to what matters morally.
This naturally leads to the thought that there might be a better way to make sense of our moral
reasons that does not involve appealing to considerations that are not part of our moral
deliberation. In other words, perhaps there is structure within our moral thinking that provides
the key to its justification. The idea that moral thinking has its justificatory structure built into it
is at the core of Immanuel Kant's moral theory. This is the topic of the next section.
Learning points
At the core of utilitarianism is the idea that morality derives from one principle: we should always act so as to maximize good consequences.
If the QALY is used to allocate medical resources, this involves a form of utilitarian reasoning.
While utilitarianism aims at maximizing human welfare, there are principally three welfare theories that generate three versions of utilitarianism: hedonistic (or classical) utilitarianism, preference satisfaction utilitarianism and ideal (or objective list) utilitarianism.
Utilitarianism claims that our moral obligations are impartial: we do not have special reasons to prioritize the welfare of any particular person, including ourselves.
An important objection to utilitarianism is that it fails to respect the integrity of human beings and alienates us from our fundamental nature as moral agents.
Utilitarianism can be accused of being too morally demanding, requiring more than we would ordinarily consider to be morally required.
Rule utilitarianism attempts to answer these objections by claiming that actions are right when they conform to moral rules that would maximize utility if everyone followed them.
Rule utilitarianism can be criticized for resulting in rule worship or collapsing into act utilitarianism.
Indirect utilitarianism says that we should follow the guidance that conventional morality gives us and not directly attempt to maximize utility.
Indirect utilitarianism makes our ordinary moral deliberations appear disconnected or dissociated: when we think we are deliberating about morality we are not really deliberating about what matters morally.

Kantian Ethics and Deontology
The Tuskegee syphilis study, which started in 1932, attempted to describe the natural

progression of syphilis in black American males. Subjects were offered the heavy metals therapy
that was thought to be effective at that time. The experiment continued until 1972, well beyond
the 1940s when it became clear that penicillin is an effective treatment for syphilis. Subjects
were recruited with misleading promises of "special free treatment" (actually spinal taps done
without anaesthesia to study the neurological effects of syphilis), and were enrolled without their
informed consent.12
There are a number of reasons why the Tuskegee syphilis study was wrong. The absence of
consent and the failure to provide effective treatment when it became available are obvious
moral failings. Tuskegee has also had a disastrous effect upon the relationship between many
black Americans and researchers. We might also say that Tuskegee was wrong because of the
way that it instrumentalized experimental subjects and used them simply as a way of finding
out about the natural progression of syphilis. The wrongness of instrumentalizing human beings
is one of the important moral requirements that follows from the moral theory of the philosopher
Immanuel Kant (1724–1804). Before reaching Kant's statement of that principle, it is important to consider the first steps in Kant's analysis of morality and its requirements. Utilitarians claim
that the consequences of actions determine their rightness. Indirect and rule utilitarians attempt to
soften some of the implications of this view by emphasizing the importance of acting in ways
that are consistent with moral rules or conventional morality. While this goes some way to
making utilitarianism more palatable, it does raise the question of whether there is another way
of thinking about morality that makes more sense of moral rules. Kant's moral theory is, at least on this criterion, a more plausible moral theory. Instead of stressing the importance of the consequences of actions, Kant says that it is the maxim guiding an action that is important for
determining its rightness. A maxim is a description of the reason why someone is doing
something (i.e., what they are trying to achieve and a description of what they are doing to bring
this about). The most straightforward way to think about this is to think of a maxim as specifying
the means and ends of a particular action. This is an idea that can be best explained with an
example. Suppose that an oncologist talks to one of her patients about the possibility of entering
a clinical trial. It is a randomized, double-blind trial where a new medication that looks very
promising is compared to a standard frontline treatment for that condition. The oncologist cannot
be sure, but she thinks that this new medication is likely to be the best thing for this patient and
the trial is the only chance that this patient has of receiving it. The trial is sponsored by a drug
company and the oncologist will receive a significant payment for every patient she recruits
(although this plays no part in her decision to recommend the trial). In this case, the oncologist's maxim might be "Recommend the clinical trial to this patient because it is likely to be best for him." The end is doing what is best for the patient and the means is recommending the trial.
Suppose that a second oncologist recommends the same clinical trial to a different patient, but

acts on a different maxim. Instead of acting to further the interests of his patient, the second
oncologist thinks only of the money to be made by recruiting patients to this trial. His maxim
might be "Recommend the clinical trial to this patient because it will help maximize my income." Both cases share the same means, but the second oncologist has a different end.
Suppose also that both patients are appropriate candidates for inclusion in the trial and that the
second oncologist's motivation has not clouded his judgment. Intuitively, it seems like the first
oncologist has done a good thing while the second oncologist has done something that is at least
shady, if not straightforwardly wrong. Utilitarians think that consequences are the only relevant
consideration when determining the rightness of an action. Because what has actually been done
and the consequences of these two cases appear identical, a utilitarian will have to tell a
complicated and perhaps implausible story to say why the second oncologist did something less
moral than the first. For Kant, the rightness of an action depends upon its maxim. In this case, the
first oncologist is acting on a morally praiseworthy maxim, while the second oncologist's maxim
is morally dubious. Even though the two actions are likely to have a nearly identical effect, the
different reasons make a significant difference to the morality of the acts. This is a plausible and
intuitive idea: when we find out that someone only appeared to be doing the right thing, but in
fact had ulterior or wicked motives, we reappraise the morality of what they did. However, more
is needed to explain the idea that maxims are the only things that are morally relevant, and how
we know that some maxims are right while others are not.
Good will
Because Kant wants to derive morality from maxims, he needs to show why they are the only
thing that could generate moral requirements. He argues that a good will or a will that intends
to do the right thing is the only thing that is always morally good. It is impossible to think of
anything at all in the world, or indeed even beyond it, that could be considered good without
limitation except a good will Power, riches, honor, even health and that complete well-being
and satisfaction with ones condition called happiness, produce boldness and thereby often
arrogance as well unless a good will is present which corrects the influence of these on the
mind13, p. 7 Kants phrase good without limitation is important because he is not saying that
the good will is the only thing thats goodclearly there are many other things, such as health,
power, happiness and money, that we can think of as valuable for their own sakes. His point is
that for all of these valuable things, there are instances where they are not morally good. So
when he says there is nothing that is good without limitation, he means that there are
conditions under which we do not think these things morally good. It is not hard to think of
examples where power and wealth are not good: this is a familiar idea. A more interesting
example, which is highly relevant to the previous sections, is that Kant also mentions happiness.

Classical utilitarians think that the maximization of happiness is the only thing that matters; yet,
according to Kant, happiness is not always good. Suppose that a senior physician is particularly
happy with his status and position in life, even though he has achieved this by being dishonest
and ruthless in his dealings with his colleagues and juniors. According to utilitarianism his
happiness is morally significant, even though he has achieved it by doing bad things. If his
happiness in life had been tempered by a will that acted on morally correct maxims then his
happiness would (according to Kant) be morally good. Why does Kant think that a good will is
always good? Suppose that a physician provides a patient with a blood transfusion under
emergency conditions and does so with the express purpose of saving that person's life. Suppose also that the patient is a Jehovah's Witness and that because of the emergency there was no time
for the physician to find this out. This patient might feel wronged or harmed by the fact that they
were given blood products, but it still seems that the physician did the right thing. The maxim
that the physician acted on might have been "I will save this patient's life by giving him a transfusion." This is a morally good maxim if the physician does not know that this way of saving a life is not what this patient would want. According to Kant, acting on this maxim is morally good even
though that action may have resulted in a bad consequence for the patient. If good will is the
only thing that is always good and this involves acting on morally correct maxims, how does
Kant identify maxims that demonstrate a good will? The arguments that Kant uses for this part of
his moral theory are quite complicated and it is not possible to explain them fully in a brief way.
Nonetheless, the most important point to grasp is that, for Kant, for something to be a moral
requirement it must be possible for it to apply to all agents who are contemplating the same
action in the same situation. A physician who is trying to decide whether it is right to breach
confidentiality because they are concerned about the fitness of a particular patient to drive will
weigh a number of factors before determining the best course of action. If they decide that, in this
case, it is right to break the implicit promise to keep medical information confidential, Kant
would say that if this decision really is morally right then it should be morally right for all
physicians, when faced with the same situation, to do the same thing. You can think of this by
using an analogy with the law. If a situation is such that a particular course of action is legally
permitted, then in all other situations that are exactly the same, that action should again be
permitted. As we will see, Kant says that moral requirements must take the form of moral laws: if the relevant conditions exist for that moral requirement then that is what we must do. Kant
develops this idea and uses it to argue for a supreme principle of morality: the categorical
imperative.
The categorical imperative
"Act only in accordance with that maxim through which you can at the same time will that it become a universal law."13, p. 31


For Kant, the maxim of an action determines its morality. What now requires explanation is what
it is to will your maxim as a universal law. The central idea is very close to the way in which
we often think through moral problems.
Suppose that a medical researcher desperately needs to replicate a laboratory result, but is simply
unable to make the experiment work. If she contemplates fudging her data because of her need,
she might reason along the following lines: "If I fudge my data it is not likely to be very significant. I know that eventually my experiment will work and this is one isolated fabrication. However, if I think about what the world would be like if every researcher in my predicament fudged their data then it is clear that I would not want the world to be this way. If every researcher did this then progress in my area might grind to a halt." Kant would say that I cannot will
this maxim to be a universal law because fudging my data would not be an effective way to
claim that I had produced a useful result. In other words, if everyone who wanted to claim an
experimental effect when they couldn't produce it falsified their data, falsifying data wouldn't be a good way to claim that I had produced an experimental effect: people wouldn't have any reason to believe that I had produced that effect. This example is similar to one that Kant gives when
he explains the categorical imperative. He imagines a man who decides to borrow money, even
though he does not intend to pay the money back.
"... his maxim of action would go as follows: when I believe myself to be in need of money I shall borrow money and promise to repay it, even though I know that this will never happen ... how would it be if my maxim became a universal law? I then see at once that it could never hold as a universal law of nature and be consistent with itself, but must necessarily contradict itself. For the universality of a law that everyone, when he believes himself to be in need, could promise whatever he pleases with the intention of not keeping it would make the promise and the end one might have in it itself impossible, since no one would believe what was promised him but would laugh at all such expressions as vain pretenses."13, p. 32 In this example, a false promise
cannot be an effective means of attaining the end that this action aims at (money). For Kant, we
cannot will that the world is organized in this way because we cannot conceive of a world that
works in this way. The failure of universalization in this example is a failure in conception.
Kant thinks that there is a second way in which maxims can fail to be universalizable: if they
involve a contradiction in what we would will or really want. Suppose that a physician working
on a fee-for-service basis is approached by someone in need who cannot pay. If the physician
turns the patient away even though it would not have been a significant burden on her to treat
this patient, her maxim might be "Do not treat those in need unless they can pay, so that my wealth is increased." This maxim could be conceived of as a universal law. If all physicians who

wanted to maximize their wealth refused to treat anyone who could not pay, this would be an
effective way for physicians to maximize their wealth. While we can conceive of the world
working in this way, Kant says that we would not will (or want) the world to be organized in this
way. If physicians who wanted to increase their wealth never treated people who could not pay but needed care, then the world would be a far worse place. Helping those in need is part of the
function of medicine, and we should not want the world to be organized in this way.
Kant thought that all of our moral obligations ultimately derive from the categorical imperative.
However, he produced different formulations of the categorical imperative that demonstrate
particular moral obligations. He is one of the great defenders of respect for persons and he
states this in his formula of humanity.
Formula of humanity
After describing the categorical imperative and some of its applications, Kant defines the
formula of humanity:
"So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."13, p. 38 He derives this principle from the
categorical imperative by emphasizing the kind of rationality that the categorical imperative
generates. A good or rational will is one that acts upon maxims that could become universal laws
of action. Humanity or, perhaps more accurately, rational persons, embody this kind of
rationality. Given that good will is the only thing that is good without exception, rational human
beings are likewise of unconditional moral worth. Something that is of unconditional moral
worth should not be treated or used in a way that is inconsistent with this moral status.
We are all familiar with the wrongness of using people as mere means or instruments. One of the
reasons that Tuskegee was so glaringly wrong is the way in which researchers failed to offer or inform research subjects about antibiotics, and used the subjects as mere means to find out about the natural progression of syphilis. Kant's explanation of this wrongness is that the
researchers failed to recognize the status of human beings as creatures capable of rational,
universalizable and moral action. Turning human beings into instruments or means for furthering
knowledge wrongs them in a profound way. While the wrongness of turning other people into
mere means for our own purposes is familiar, the idea that we should always treat people as
ends is not immediately obvious. It is useful to think again about what a maxim is. A maxim
always involves doing something so that something else happens: a maxim always involves
adopting a means to bring about an end that is valuable. In the Tuskegee case, the
researchers thought that knowing about the natural progression of syphilis was a valuable end,
and they used the research participants as means to reach that desirable end. So far so good, but
does Kant's formula of humanity say too much? All medical research uses research participants

for the sake of furthering knowledge: does Kant imply that all medical research is wrong? Note
that the formula of humanity says that we should treat people as ends, never merely as a
means. In daily life, we frequently use other people as a means. The checkout operator totalling
your bill, the taxi driver taking you to the airport and the nurse handing a scalpel to a surgeon are
being treated as a means to an end. These are all morally acceptable ways in which to interact
with other people, as long as they are consistent with them also being treated as ends or rational
agents. What is it for someone to be treated as a mere means? There are many historical
examples in which research subjects have not known that they were being experimented upon and have suffered greatly. In these cases people were treated as mere means: no regard was paid to
their status as rational agents and they were used as mere instruments for medical knowledge. In
a research context, informed consent is the primary way in which we can ensure that people are
not used as mere means, but are used in ways that are consistent with their humanity. Perhaps
it is for this reason that the voluntary consent of the research subject is given such prominence
in the Nuremberg Code. (The Nuremberg Code lists 10 principles of research ethics, but consent
is the first and longest paragraph.14) Sometimes when we think about moral requirements, we
think of them as being requirements only to other people. The formula of humanity says that we
also have a duty to respect our own humanity:
"... someone who has suicide in mind will ask himself whether his action can be consistent with the idea of humanity as an end in itself. If he destroys himself in order to escape from a trying condition, he makes use of a person merely as a means to maintain a tolerable condition up to the end of life. A human being is not a thing and hence not something that can be used merely as a means, but must in all his actions always be regarded as an end in itself."13, p. 38 Some of us are likely to reject Kant's view about the impermissibility of suicide. Many of us would argue that there are instances in which a person's life might become so intolerable that they should not be stopped from ending it if it is clear that this is what they really want. Nonetheless, Kant's
argument is important and worth examining. You might argue with Kant and say that some cases
in which a person contemplates suicide are consistent with respecting humanity: why does
suicide involve treating yourself as a mere means? Pro-euthanasia campaigners often stress the
importance of autonomy and claim that allowing people to make these decisions is respecting
them as autonomous persons, able to make important decisions for themselves. Kant thinks that
persons are ends because of their capacity for rationality. Suicide involves terminating your own
capacity for rationality so that you no longer suffer. Given that the rational (good) will is the only
thing that is unconditionally good, you should not destroy something like this for the sake of
some other end, such as the avoidance of suffering. For Kant, suicide is an intrinsically irrational
act: you cannot use the extinction of your rationality as a way of bringing about a better state

because your rationality is a precondition of any state having value. Kant's moral theory is in
many respects a more attractive option than utilitarianism. Kant does explain why it is that we
think some actions or reasons for acting are immoral, even when they might lead to better
consequences. However, there are consequences of Kant's view that make it less attractive than it might initially appear.
Absolutism
While utilitarianism has the problem that it appears to justify the most appalling actions if they
are likely to lead to good consequences, Kant's moral theory has almost the reverse of this
problem. An absolutist about ethics believes that there are some things that we should never do,
even when the consequences of not doing so are very serious. An important 20th century moral
philosopher and absolutist is Elizabeth Anscombe, who argued that murder can never be morally
justified.15 Anscombe's work continues to be relevant: she wrote about the intentional targeting of civilians during war and, sadly, as has been demonstrated in recent conflicts in the Middle
East, this continues to be a pressing moral problem. In a medical context, many physicians are
convinced that they should never intentionally take life, even when a patient is in intolerable
suffering and this is what he/she wants. Absolutism about the taking of human life has some
strong arguments in its favour and is a moral prohibition that many of us would find plausible.
However, the problem with Kant is that he thinks that we should be absolutists in cases where
many of us are likely to make an exception. Kant thinks that we should never lie, even when we
think that lying is necessary to avoid a serious harm. Suppose that a colleague calls at your door
and begs you to let her into the basement of your house so that she can hide from someone who
is trying to murder her. When the murderer calls at your door and asks whether you have seen
her, according to Kant you are morally obliged to not lie.16 Kant attempts to justify this position
by pointing out that if you lie then something worse may happen, and in that case you will share
some of the blame for what the murderer does next. Most of us will agree that, in general, lying
is likely to be the wrong thing to do; however, many of us would, in this case, consider it morally
permissible to make an exception. After all, the murderer is exploiting your morality so that he
can do something evil; surely in this kind of case a lie is justified.17 Whereas absolutism may be
plausible for some actions, Kant's absolutism about lying means that most of us could not be
consistent, thorough-going Kantians.
The integrity objection (again)
One of the problems facing the utilitarian is the way in which maximizing utility might alienate
us from our normal and natural commitments. Versions of the integrity objection can be directed
at Kant. Suppose that you make a home visit to one of your patients, an elderly woman with
osteoporosis who has broken her leg. When you arrive at her house, it is on fire and you can hear

her cries from her first floor bedroom. You quickly dial the fire brigade and then rush into the
burning house and carry her to safety. Once you are safely outside of the house, she thanks you
profusely for saving her life. You are a good Kantian and explain that you realized that you could
will your maxim as a universal law and that it was therefore consistent with your duty to help your
patients. This explanation of why you saved her seems to involve an artificial kind of reason and
is a bit emotionally cold. It seems more natural to describe this as a heroic action motivated by
a concern for your patient's well-being. Kant's insistence that we must act on maxims that could
be universal laws and out of a sense of duty alienates us from our ordinary moral reactions to
situations.18
Other deontological theories
Kant's moral theory is probably the most famous deontological moral theory. All versions of
deontology embody the idea that actions are morally right when they are consistent with and
motivated by moral duty. Kant's categorical imperative is a way of specifying what our duties
are, but there are other versions of deontology that differ in some important respects. W. D. Ross
argued that we have many moral duties that derive from the importance of doing good for other
people, as well as other duties that derive from more Kantian obligations such as promise-keeping.19 Ross said that we have a number of prima facie moral obligations, each of which can
be relevant to what is right in a particular situation. One reason that his moral theory is relevant
to medical ethics is because this idea that morality involves the weighing up and interpretation of
moral requirements is important for understanding the influential four principles of biomedical
ethics (see later).
Summary and conclusions
Perhaps the most attractive feature of Kant's moral theory is the explanation that it gives for why
the instrumentalization of human beings is so wrong. This in turn provides a strong justification
for informed consent and the other ways in which autonomy should be respected in medical
practice. However, Kant's theory is complex and might not provide us with guidance that is
useful for a range of important questions about medical ethics. It also seems to miss something
important about what really matters when people are morally motivated in the right kind of way.
The next section will consider virtue theory, which attempts to solve these problems.
Learning points
The instrumentalization of human beings is a serious wrong.
For Kant, a maxim is a description of what a person is trying to achieve by a particular action and the action employed to achieve this.
A good will is the only unconditional good.

The categorical imperative states that one should "Act only in accordance with that maxim through which you can at the same time will that it become a universal law."
The formula of humanity states: "So act that you use humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
The formula of humanity does not rule out using people as a means if this use is consistent with
respecting them as an end.
Kant thinks that lying is absolutely prohibited.
Kants moral theory is absolutist and may be too demanding for many.
Kants moral theory seems to miss some of the important aspects of moral motivation.

Virtue Theory
Suppose that avian flu mutates and there is a human pandemic. The general population have been
advised to stay indoors and to avoid all human contact unless it is absolutely necessary. Medical
services are stretched to the limit and you have been working very long hours at significant risk
to yourself. You are worried about the risk, but can see that it is the right thing to do. If you are
asked to explain your motivation for doing this, there are a number of moral reasons that you
might give. As we have seen, utilitarians think that actions are right when they maximize human
welfare, so a utilitarian answer to this question is likely to mention helping those with avian flu
as a good way to maximize welfare, even though it means putting yourself at risk. Kantians think
that actions are right when they are based on maxims that could be willed to be universal laws,
so a Kantian might say that helping those with avian flu is in accordance with a universal maxim
about the importance of helping those in need so, in effect, helping these patients is your duty.
There is a third kind of moral motivation that may have moved you to act: you might say "I put myself at risk to help others because that's what a good physician does in this kind of situation."
Instead of appealing to what maximizes utility or explaining that it is your duty, you are in effect
appealing to what a good person would do in this situation. In other words, a virtue ethic says
that the right thing to do in a given situation is what a good or virtuous person would do.
Intuitively, this is an appealing idea: most, perhaps all, of us want to be good people, so doing
what a good person would do does seem to capture what we aim for when faced with a moral
decision.
Moral motivation
Perhaps the major strength of a virtue theory is that it provides us with an answer to the question
"Why should I be moral?" In Plato's Republic, Glaucon asks what reason we would have to do
the right thing if the external constraints upon our behaviour were removed.20 He imagines what

would happen if a shepherd discovered the Ring of Gyges, a golden ring that can make its wearer
invisible and immune from the usual sanctions that go along with acting badly. Glaucon claims
that if the external punishments that usually accompany wrongdoing are removed, even
apparently good people would end up doing bad things. If the only reason that people do the
right thing is to avoid the adverse consequences of doing otherwise, then morality, in the sense of doing the
right thing for a moral reason as opposed to a purely self-interested reason, is a fiction.
Glaucon's challenge is a severe test for any moral theory, and there seems to be a deep truth
behind the thought that often those who appear to be the most moral are also those who have the
most to lose by being thought immoral. Utilitarianism and Kantianism are very demanding moral
theories. Always acting to maximize utility or out of respect for one's duty can conflict with our
own interests. Perhaps, then, when people act in an apparently utilitarian or Kantian way their
moral motivation is not what it appears and, likewise, given the demand of these theories,
perhaps people will simply not follow their commands. This observation is one of the key
reasons for the re-emergence of virtue theories in the 20th century. Although Anscombe herself
was a Catholic and absolutist, she argued that for people who do not believe in God or a divine
enforcer of morality, moral theories such as utilitarianism and Kantianism will simply not
work.21 Unless there is a good reason for people to follow a moral code then it is unlikely that
they will. Anscombe suggested that we need to rethink morality so that the connection between
being moral and living well is re-established. Likewise, Glaucon's challenge addresses a
philosophical question that was fundamental to the Ancient Greeks: "How should one live?"
Perhaps the most influential answer to that question is given by Aristotle in the Nicomachean
Ethics.22
All of us have an interest in living a good life, and it is plausible that a central aspect of whether
our lives are of value to us is how happy we are. Aristotle argues that there is a special and
complete form of human happiness, eudaimonia, that can only be obtained by living a virtuous
life and flourishing as a human being. It is important to emphasize that eudaimonia is a state of
well-being that involves living a reasoned and reflective life: if someone thinks that they are
happy, but fails to live in accordance with reason and reflection, they cannot be in a state
of eudaimonia. According to Aristotle, all things can be described as having their own distinctive
functions, and performing these functions well makes that thing into an excellent example of its
kind.
For Aristotle: "what is proper to each thing is by nature best and pleasantest for it; for a
human being, therefore, the life in accordance with intellect is best and pleasantest, since this,
more than anything else constitutes humanity. So this life will be the happiest."22, p. 196 For
Aristotle, something is good when it fulfils its function well. For example, we might describe a
scalpel as a good scalpel when it does what a scalpel should do well. Scalpels need to be sharp,
sterile, easy to grip and manipulate and so on. If we came up with a list of all of the features that
a good scalpel has we would, in effect, have a list of virtues for a good scalpel. This is a general
feature of all of the tools and aids that a physician might use: whether they are good instruments
depends upon how well they perform the functions that we want these instruments to perform.
Aristotle proceeds to consider the function, or true and distinctive nature, of a human being, so as
to arrive at an account of what a good human being is. He claims that what is distinctive about
human beings is our ability to reason and to live in accordance with reason. So good human
beings live and act in accordance with reason and can be described as existing in a state
of eudaimonia. Aristotle solves the problem of moral motivation by showing how being virtuous
and living in accordance with reason is essential for our happiness. Does this provide a solution
to Glaucon's challenge? If people are genuinely motivated to do the right thing then they should
be able to resist, at least to some extent, the temptations resulting from the Ring of Gyges. A
virtue ethicist could insist that if a person is not still moved to do the right thing, even when
wearing the ring, then virtue and reason are not what actually motivate their actions and they are
not in fact virtuous. If they fail to do the right thing when the external sanctions for acting badly
are removed then they will know that they are not in fact virtuous and that, at least according to
Aristotle, they cannot be in a state of eudaimonia. The ideal of living in accordance with reason
and virtue intuitively sounds right, but more can be said to make this relevant to medicine.
Aristotle mentions not only that tools have proper functions, but also some professions. He says
"the good, the doing well, of a flute player, a sculptor or any practitioner of a skill, or
generally whatever has some characteristic activity or action, is thought to lie in its characteristic
activity".22, p. 11
It is natural to extend this idea to medicine to see whether the activity of practicing medicine can
generate a list of virtues for the good physician (i.e., the physician who performs these
characteristic activities well). This is the task that Pellegrino and Thomasma undertake in For
The Patient's Good.23 They argue that the core function of medicine is to improve the well-being
of patients. Working toward the patient's good requires a broad range of skills as well as concern
for the autonomy and welfare of the patient. Good physicians are physicians who do this well.
The function of medicine therefore implies a number of virtues and skills that will be mastered
by a good physician. A major advantage of a virtue-based approach to medical ethics is that it
provides a powerful reason for physicians to do the right thing. In general, all professionals or
crafts people want to be good at what they do. While there are some people who do not appear to
take much pride in their work, usually even these people would like to think of themselves as
being capable of doing their job well. Medicine is a demanding profession and physicians are
usually highly motivated and strive to be as good at their job as they can. If doing the right thing
forms part of what it is to be a good physician, then physicians have a powerful, self-interested
reason to do what is right.
How to know what is right?
Virtue theory does seem to have an advantage over other moral theories in that it provides a
plausible answer to the question: why be moral? However, a moral theory not only needs to give
an account of moral motivation, but also needs to say something useful about how we can
determine the right thing to do. Aristotle argued that the right action is the action that would be
performed in that particular situation by a virtuous person. (Aristotle thinks that part of acting
virtuously is having the right kind of emotional reaction, to the right degree, in a particular
kind of situation.) So, because the virtuous physician would continue to work during an
epidemic, this is the right thing for a physician to do. However, an important question remains:
how does a virtuous physician know the right thing to do? The virtue theorist's response will be
that virtuous physicians have, via a process of education and habituation, developed a character
that enables them to judge that this is the kind of situation in which they should help. This might
at first seem like an unusual idea: instead of relying only upon a moral theory or rule to tell us
what is right, the virtuous person knows what is right partly because of how they perceive and
are motivated by a situation. Suppose you are consulting a radiologist who is an expert at
interpreting MRI scans of the knee. This colleague might be able to diagnose the problem with a
knee joint far more quickly than you can. This is not just because she has seen many scans like
this before and knows a great deal about anatomy and possible pathologies, it is also because she
has developed her clinical skills and has sensitized herself to this particular kind of phenomenon.
The virtue theorist is, in effect, making a similar kind of claim about the morality of a situation.
As the result of past experiences, modelling the behaviour of others and reflecting upon how one
should act, it is possible for a physician's character to be such that he/she just knows the right
thing to do given the situation at hand. While this might fit with the way in which we model the
moral behaviour of those that we admire, how can we know that their habituated judgments and
predispositions are right? After all, your radiologist colleague, although experienced, may have
incorporated a small mistake about the physiology of the knee into her diagnosis of knee
problems. There is a further problem in that utilitarianism and Kantianism do seem capable of
saying something about how we should reason about morally difficult cases. While utilitarianism does
lead to some unpalatable conclusions, it does at least give us moral guidance in many cases
where we would otherwise not know what to do. While we might balk at a utilitarian
recommendation to assist in torture, at least that theory does imply a course of action. A virtue
theorist might respond that in this kind of case the virtuous physician could reflect upon the
importance of maximizing overall utility and whether assisting in torture would violate a duty. In
other words, there is no reason why the virtuous person or physician could not consider the
importance of the same factors as the utilitarian or Kantian. However, this will mean that to offer
an account of why a particular action is right, a virtue theorist will end up arguing in the same
way that a Kantian or utilitarian might.
Conclusions
When philosophers defend their moral theory of choice they often try to show how their theory
converges with common-sense moral judgments. There are exceptions to this tendency: some
utilitarians think that when common-sense morality conflicts with maximizing utility then
common-sense morality should be revised. (A clear example is J. Harris' "The survival lottery",
which argues that we should sacrifice some members of society so that their organs can be used
to save the lives of those needing donor organs.24) Nonetheless, there is a significant amount of
convergence in moral theories and it would be a mistake to think that they will always yield
different conclusions about what we should do. Virtue theory does have a significant advantage
over Kantianism and utilitarianism in providing a plausible account of moral motivation. Where
it falls down is that in many cases it does not tell us much about how to act, apart from "Think of
what a virtuous person would do in this situation." Determining what is right in a morally
difficult situation may mean adopting the same arguments that a utilitarian or Kantian would use.
Learning points
Virtue theory emphasizes the importance of a virtuous character for determining the right action.
Virtue theory offers a plausible account of moral motivation.
Virtue theory's weakness is that it does not offer a distinctive method for determining what is
right.

The Four Principles Approach
Arguably, Tom Beauchamp and James Childress' The Principles of Biomedical Ethics25 is the
most influential book on medical ethics ever written. The four principles of justice, autonomy,
beneficence and non-maleficence provide a theoretical framework for thinking through moral
problems in medicine. The principle of justice implies that we ought to aim for fair access to or
the equitable distribution of health care resources. Autonomy is interpreted as self-rule or
self-legislation, and implies that patients should be able to make important decisions for themselves
and to have confidential information protected. Beneficence captures the moral obligation that
health care workers have to benefit their patients. Non-maleficence describes the Hippocratic
injunction to "First of all, do no harm." Given that this primer has just outlined some key
features of the most influential moral theories, this raises a question about how the four
principles fit with moral theory. If the four principles approach provides an adequate method for
thinking through moral problems in medicine, is knowledge of moral theory important?
Justification for each of the principles can be derived from the three major moral theories. In his
essays On Liberty and Utilitarianism,2,4 John Stuart Mill gives utilitarian defences of the
importance of freedom, promoting human welfare and justice, which appear to map onto at least
three of the four principles. Kant's formula of humanity is one of the classic defences of respect
for persons and provides a compelling argument for the principle of autonomy. Kant's moral
theory also implies that we have an imperfect moral obligation to work towards the welfare of
fellow human beings, which is an argument in favour of a principle of beneficence. In
the Metaphysics of Morals, Kant develops his account of our justice-based obligations.26 Matters
are slightly more complicated when considering Aristotle's virtue theory: for him, the just man
and the good man are used interchangeably. Nonetheless, it is not unreasonable to extrapolate
that the virtuous physician is one who acts on considerations of autonomy, beneficence, non-maleficence and justice. One of the primary motivations behind the four principles approach is to
distill the main moral requirements of biomedicine. Beauchamp and Childress describe the four
principles as capturing the essential features of our common morality, and by this they mean
the central moral requirements that all of us would agree are essential for moral medicine.25 One
of the big arguments in favour of principles based on a common or shared morality is that there
will be broad-based agreement about the appropriate moral rules for biomedicine and a common
language for discussing moral problems. If two people have different moral beliefs but can agree
upon essential principles then these principles can form the basis from which moral
disagreements can be discussed and resolved. The four principles approach has been the subject
of a great deal of academic discussion in the last 30 years, and it is not possible to do justice to all
of the objections and rival accounts. Nonetheless, there are some significant objections to
principlism that make it important to have some knowledge of moral theory.
Which principle?
Many moral problems in medicine involve tensions between conflicting moral obligations, such
as what a person wants and what is good for them, or what we could do for another person if we
do not treat a particular patient. These tensions can be described as instances where autonomy,
beneficence and justice appear to imply that contradictory things are right. Beauchamp and
Childress follow W. D. Ross19 and think that the principles of justice, autonomy, beneficence and
non-maleficence are prima facie ("on the face of it") obligations. Suppose that one of your
patients requests a referral to an allergy specialist because he is convinced that his problems in
controlling his weight and general feelings of malaise are the result of food allergies. You have
already tested him for likely allergens and are not convinced that this referral is in your patient's
best interests. This case involves an apparent tension between autonomy and beneficence. What
the patient wants conflicts with what you think beneficence requires in this case. While there is
no doubt about the principles that are relevant here, there is a question regarding how you can do
the right thing: if you write the referral you will not be doing what you think is required by
beneficence, but if you do not do what the patient wants then you will not (in one sense at least)
be respecting his autonomy. This is where the concept of the principles being prima
facie obligations comes in. If you decide to write the referral and reason that while it might not
be what is best for your patient, it is not likely to be harmful and is what they want, then you can
still be doing the right thing even though you have not done what would be implied by
beneficence if it were the only relevant principle. In other words, the obligation to be beneficent
is only an obligation if there is no other stronger conflicting obligation: in this case, respect for
autonomy. In simple cases such as this it will usually be a relatively easy matter to think through
the significance of the conflicting demands of principles, and clinical experience is likely to be
an adequate guide. However, in many cases it will be more difficult to determine how the
principles should be applied. There are a number of ways that we can think through more
complicated problems. We can simply weigh how important the competing moral demands
appear, but we can also think more closely about the theoretical justification for a principle in a
particular case. Suppose that you are providing emergency care to a 16-year-old who has been
injured in a road accident. She has lost a significant amount of blood and in your view needs a
blood transfusion. When you explain this to her she becomes anxious and repeatedly says that
she cannot be given blood products because of her religion. She is in a distressed state, but
nonetheless appears sufficiently rational to be considered competent to refuse treatment. Again,
the principles of beneficence and autonomy appear to be in tension: from your perspective, a
transfusion would be best for her. However, giving a transfusion, perhaps if she loses
consciousness, would appear to violate her autonomy. A full exploration of this kind of case can
involve going further than considering the respective weights of principles: it can be important to
go to the level of moral theory so as to consider different justifications and ways of
understanding beneficence and autonomy. This primer has discussed some of the different ways
that we can understand human welfare: hedonism, preference satisfaction and ideal accounts.
While these theories are themselves in tension, reflecting upon the different theoretical accounts
of what beneficence might entail for this young woman can yield a fuller understanding of what
this principle implies. If we do not move to the level of moral theory then there is a risk that we
will not reflect carefully enough about whether this treatment would be best for this woman. If
the satisfaction of important preferences is an essential component of a persons well-being then
satisfying her preference to live and, if necessary, die in a way that is consistent with the
teachings of her religion may be what is best for her. On this understanding of beneficence, what
is best could end up being the same as that which she has autonomously chosen. While there are
three principal accounts of human welfare, the theoretical basis of autonomy is more complex.
For utilitarians such as Mill, freedom is an important precondition of human beings learning to
lead happy lives (in addition to being able to live the life that makes us happiest). From this
view, there will be occasions when it is right to give people the freedom to make their own
mistakes, even when there are good reasons for supposing that this will not be what is best for
them. In this case it could be argued that, although it is very risky for her not to have a blood
transfusion, our general interest in being able to make our own mistakes is so important that her
autonomy should be respected. For Kant, autonomy is important because of its grounding in
rationality and as a source of value (see the earlier section on good will). Whether this request is
autonomous depends upon the rationality of the will that it embodies. (The pre-eminent Kant
scholar Onora O'Neill has described this idea as a defence of "principled autonomy".27 Mere
preferences or wishes are not necessarily autonomous: autonomy requires a genuine act of self-legislation: an agent must determine rationally the maxim that is to be their will.) Whether
willing to risk death because of a religious belief is the preference of a Kantian rational will is
open to argument. Nonetheless, the important point is that on a Kantian defence of autonomy,
merely expressing a preference is not necessarily an expression of an agent's autonomy. The
principle of autonomy can imply different things for the same case depending on whether a
Kantian or a utilitarian justification of autonomy is emphasized. Given that the four principles
are intended to help physicians resolve moral dilemmas and that the principles attempt to
amalgamate moral ideas that might imply different things about the same case, they are
themselves not sufficient to reason through what should be done in a complicated case. To
provide useful guidance for moral deliberation and difficult cases, the principles need to be
augmented by some knowledge of moral theory. In fact, the situation is more complicated than is
suggested by this case. While Kantian and utilitarian defences of autonomy can have
significantly different implications, there are numerous accounts of what autonomy is and how it
can be understood that could have different implications for a specific case. The same problems
arise for justice, which is a highly contested concept.
Clarity at the risk of superficiality?
The four principles appear to be a great advance in that they group together in a particularly
concise way the main moral considerations of biomedicine. For physicians and medical students
who are thinking about ethics seriously for the first time, this can provide a way to make moral
deliberations more systematic and accessible. While it is clear that the principles can provide a
moral vocabulary, there is a risk that the apparent clarity that they bring to a moral problem can
hide the complexity of many moral problems. When the principles are applied to a case there can
be a temptation to use them to merely identify the relevant features of a clinical scenario and to
then simply make a decision about which of the principles you think should hold sway. Doing
this might neglect the subtleties and different ways of understanding the morality of a situation.
Of course, moving to the level of moral theory makes medical morality even more complicated
in difficult cases. As well as deciding which principle to apply, you must also think about
different ways in which that principle can be understood. Nonetheless, the reason why it is hard
to know what to do in some cases is that it is hard to know what the right thing to do is.
Reasoning in a deeper way may make moral reflection more difficult, but ultimately it should
lead to more reflective and better justified decisions. Beauchamp and Childress did not intend the
principles to be used in a deductive manner as a general theory about genetics or microbiology
might. Instead, they thought that the principles could play a justificatory role when we are
formulating moral rules or making judgments about specific cases: we refer back to the principles when testing our
intuitions or when we require an argument for what we think is right. They would probably agree that in
certain cases it may be necessary to delve more deeply into different theoretical accounts of
justice, autonomy or beneficence. While this is a reasonable claim it is a fairly subtle distinction,
and if the four principles are the only moral concepts used then there is a risk that they will be
used in a superficial way.
A common morality?
When you move to a theoretical level of morality you can see that although the principles appear
to be a set of common moral requirements, there is in fact significant disagreement among
philosophers. This is clearly the case with autonomy and beneficence, but is even more acute for
the principle of justice. Although theories of autonomy emphasize different aspects of freedom,
self-determination and other similar concepts, theories of justice can reach radically different,
often contradictory accounts of what is just. The two most influential theories of justice written
in the 20th century are by Robert Nozick in Anarchy, State and Utopia28 and John Rawls in A
Theory of Justice.29 While neither of these philosophers says exactly what their theories imply
for health care (their theories are general accounts of justice within a political community), it is
clear that they have radically different implications for justice and health care. Briefly, Nozick
states that justice requires us to have only a minimal state, one where taxation is only justified if
it is necessary for self-defence. On this view, health care should not be delivered by a publicly
funded system, but instead should be an individual matter between a citizen or insurance
company and a physician. On the other hand, Rawls states that we need to redistribute resources
within society so that the position of the least well off is maximized. It is not a straightforward
matter to apply this idea to health care, but it does seem to support a system of health care that is
publicly funded and provides universal access to a minimum level of care.30 Although both of
these philosophers use the same term, "justice", the accounts of what justice is are so divergent
that it is hard to see how a principle of justice can articulate a common morality. There are even
more serious problems for the idea of a common morality. According to Beauchamp and
Childress, moral theories are not the only relevant sources of justification.25,31 The four
principles attempt to articulate the moral principles that all of us think should apply to
biomedical ethics. Beauchamp and Childress think that biomedicine needs to move toward an
ethic that pays due respect to autonomy and justice. It is simply not the case that everybody
agrees that these principles are part of medical morality (e.g., Pellegrino and Thomasma think
that biomedical ethics can be built upon beneficence23). In many countries, a paternalist ethic is
expected by patients and physicians alike. Of course, this might not be right, but the justification
for a system of ethics cannot be based upon the things that people really value if these values
differ between individuals and countries.
Conclusions
There is no question that the four principles approach has revolutionized medical ethics and
provided a solid foundation from which to begin moral reasoning. However, it is important not to
take the clarity and simplicity that they seem to imply to show that moral deliberation is a
straightforward matter: the four principles are not magical keys for unlocking the solutions to
moral problems. There is no question that they provide a useful moral common vocabulary, but
moral deliberation about difficult cases requires more than this. In addition to clinical experience
and knowledge about what is likely to happen in a particular clinical context, considering the
justifications and concepts that underpin moral principles can enrich moral deliberation.
What is a moral theory?
A moral theory gives an account of the underlying justification for all our correct
moral judgements.
The role of moral theory:
1. To give us guidance in cases in which we're unsure what to do. We can see what to do by
applying the underlying moral principle.
2. To explain why our correct moral beliefs are true, and to challenge those that are not. Those of
our beliefs that are incompatible with the underlying moral justification are to be revised or
rejected.
The two main Enlightenment moral theories:
1. Utilitarianism
The underlying justification for every legitimate moral judgement is the importance of
promoting well-being or preventing suffering.
Each person's well-being matters equally (regardless of race, gender, national
boundaries, etc.).
The right action is the one that promotes as much well-being as possible, or alleviates as
much suffering as possible (giving equal weight to each person's welfare or suffering).

2. Kantianism
The underlying moral justification for every correct moral judgement is the priceless
dignity of each human being.
Human dignity is grounded in rational autonomy. This is the capacity for being self-governing, which is conferred on us by our rational nature.
The appropriate response to this dignity is to honour and respect it.
These two theories are the most prominent versions of two rival approaches:
consequentialism and deontology. Consequentialism: the right action is the one that has the best
consequences. The goal of morality is to improve the state of the world as much as possible.
Deontology: certain kinds of actions, such as killing an innocent person, are morally
prohibited, even if performing such an action would have the best consequences.
According to Kantian deontology, we ought to act out of reverence/respect for the priceless
worth of human dignity.
This involves viewing each life as of incomparable value (and so rules out
consequentialist calculations about the value of different outcomes):
-it precludes seeing one life as less valuable than another.
-it precludes seeing several lives as worth more than one.
Respect for persons' dignity also prohibits paternalism, and requires acting in a way that
each person could reasonably agree to.
Moral Theories
Through the ages, there have emerged multiple common moral theories and traditions. We will
cover each one briefly below, explaining what it claims and how it differs from other moral theories.
Consequentialism
Consequentialist theories, unlike virtue and deontological theories, hold that only the
consequences, or outcomes, of actions matter morally. According to this view, acts are deemed
to be morally right solely on the basis of their consequences. The most common form of
consequentialism is utilitarianism.
Deontology
Deontological theories (derived from the Greek word for duty, deon) base morality on certain
duties, or obligations, and claim that certain actions are intrinsically right or wrong, that is, right
or wrong in themselves, regardless of the consequences that may follow from those actions.

What makes a choice or an action right is its conformity with a moral norm. Thus, an agent has a
duty to act in accordance with a moral norm, irrespective of the (potentially beneficial) effects of
acting otherwise. We might say that parents, for example, have an obligation to take care of their
children. On a deontological view, parents must fulfill this obligation, even if breaking the
obligation were to result, for the parents, in some great benefit (increased financial savings, for
example). The deontological view holds that some actions cannot be justified by their
consequences. In short, for the deontologist, the ends do not justify the means. Indeed, Immanuel
Kant, whose formulation of deontological ethics is perhaps the most well known, wrote that one
must "act so that you treat humanity, both in your own person and in that of another, always as
an end and never merely as a means." As with other deontologists (Thomas Hobbes and John
Locke, for example), Kant held that the basis of our moral requirements is a standard of
rationality. In the case of Kant, the standard is a categorical imperative. This single principle of
rationality comprehensively includes all of our particular duties. Objections to Kantian
deontology:
(1) Kant's claim is that the moral status of our actions is determined solely on the basis of the
rightness or wrongness of the action itself. This means that it is categorically wrong to, for
example, lie, in any circumstances, regardless of the consequences. It seems implausible,
however, to hold that lying is categorically wrong in all circumstances. Imagine, for example, a
situation in which a serial killer is on the hunt for your daughter. While searching for her, the
killer, whom you know to be the killer, encounters you and asks for information regarding your
daughter's whereabouts. According to Kant's deontological theory, you would be required to
tell the truth. Does this seem reasonable?
Justice as Fairness
Justice as fairness refers to the conception of justice that John Rawls presents in A Theory of
Justice. This conception of justice concerns society's basic structure, that is, society's main
political, constitutional, social, and economic institutions and how they fit together to form a
unified scheme of social cooperation over time.1 Rawls constructs justice as fairness in a rather
narrow framework and explicitly states, "Justice as fairness is not a complete contract
theory."2 Its purpose is to show how we ought to allocate a cooperative surplus of resources to
individuals in society. As a result, justice as fairness relies on two implicit assumptions about the
societies in question: first, social cooperation is possible and can work to everyones mutual
advantage, and second, there exists a moderate surplus of available resources to be distributed.
Justice as fairness cannot be used to determine the just distribution of sacrifices to be made by a
society's members when resources are scarce. More generally, it cannot help us identify just
social policies in societies where background conditions (e.g., scarcity of natural resources,
cultural barriers, war) have eliminated the possibility of mutually advantageous social
cooperation. The process for determining how the basic structure should be arranged is based on
a thought experiment in which rational, mutually disinterested individuals choose principles of
justice from behind a "veil of ignorance", a condition that specifies that they do not know specific
details about themselves (e.g., personal values, race, gender, level of income) or the society in
which they live (e.g., societal stage of development, economic circumstances). However, when
choosing these principles, the parties do possess general social, psychological, and economic
knowledge, and they also know that the circumstances of justice obtain in the society to which
they belong. From this hypothetical initial situation, which Rawls calls the "original position",
these individuals will presumably endorse two principles of justice. The first, known as the equal
liberty principle, is that "each person is to have an equal right to the most extensive scheme of
basic liberties compatible with a similar scheme of liberties for others", and the second is that
"social and economic inequalities are to be arranged so that they are both reasonably expected to
be to everyone's advantage, and attached to offices and positions open to all."3 Rawls' primary
argument for the two principles is that they would be chosen over any variation of utilitarianism,
which he considers the strongest opposition to justice as fairness. Constrained by the veil of
ignorance, the parties in the original position (as mutually disinterested rational agents) try to
agree to the principles which bring about the best state of affairs for whatever citizen they
represent within society. Since the parties are all unaware of precisely what social role they will
occupy, they strive to maximize their individual shares of primary goods. These goods are
defined as things that every rational man is presumed to want regardless of this person's
rational plan of life and include (among other things) rights, liberties, social opportunities, and
income.4 Rawls argues, largely through the appeal to the maximin rule, that the parties in the
original position would favor the equal liberty principle over variations of utilitarianism. He
further argues that the parties would support using the difference principle to regulate the
distribution of wealth and income instead of a principle of average utility
(constrained by a social minimum) because the difference principle provides a stronger basis for
enduring cooperation among citizens. The full application of justice as fairness can be regarded
as a 4-stage sequence. The deliberations concerning the two principles occur at the first stage.
With the two principles established, the parties then progressively thin the veil of ignorance and,
as they acquire more specific knowledge about society at the subsequent stages, determine more
specific principles of justice. At the second stage, the parties learn more about societys political
and economic circumstances and create a constitution that is consistent with the two principles.
At the third stage, the parties agree to laws and policies which realize the two principles within
the context of the agreed-upon constitutional framework. At the fourth stage, the parties possess
all available information about their society and apply the established laws and policies to
particular cases. One of Rawls' major tasks in presenting justice as fairness is to show that the
society it generates can endure indefinitely over time. To achieve this aim, Rawls deploys the
just savings principle, a rule of intergenerational savings designed to assure that future
generations have sufficient capital to maintain just institutions. Additionally, Rawls argues that
the society generated by the two principles is congruent with citizens' good and that citizens can
develop the necessary willingness to abide by these principles. As a result, the society generated
by adherence to justice as fairness is stable and can be expected to endure indefinitely over time.
Notably, however, the arguments for the stability of justice as fairness that Rawls presents in A
Theory of Justice do not prove convincing. Rawls does not account for reasonable pluralism, a
critical aspect of any constitutional democracy with the guaranteed liberties that Rawls specifies.
Thus, Rawls recasts his arguments for the stability of justice as fairness in Political
Liberalism and strives to demonstrate that citizens, despite reasonable disagreement about many
issues, will agree on a limited, political conception of justice through an overlapping consensus
of their individual viewpoints.
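
To make the maximin rule mentioned above concrete, here is a minimal illustrative sketch in Python: among several candidate social arrangements, the rule selects the one whose worst-off position fares best. The arrangement names and payoff numbers are hypothetical, invented purely for illustration rather than taken from Rawls.

# Maximin: choose the arrangement whose minimum (worst-off) payoff is largest.
def maximin_choice(arrangements):
    return max(arrangements, key=lambda name: min(arrangements[name]))

# Hypothetical shares of primary goods across social positions in each society.
societies = {
    "laissez_faire": [100, 40, 5],         # high average, but the worst-off does badly
    "strict_equality": [30, 30, 30],
    "difference_principle": [60, 45, 35],  # inequalities that benefit the worst-off
}

print(maximin_choice(societies))  # -> difference_principle

The minimum payoffs are 5, 30, and 35 respectively, so the maximin rule favours the arrangement regulated by the difference principle rather than the one with the highest average.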
Utilitarianism
Utilitarianism states that actions are morally right if and only if they maximize the good (or,
alternatively, minimize the bad). Classical utilitarians like Jeremy Bentham and John Stuart
Mill (as well as many contemporary utilitarians) take the good to be pleasure or well-being.
Thus, actions are morally right, on this view, if and only if they maximize pleasure or well-being
or minimize suffering. This approach is sometimes called hedonistic utilitarianism. For
hedonistic utilitarians, the rightness of our actions is determined solely on the basis of
the consequences of pleasure or pain. Utilitarian theories may take other goods into
consideration. Preference utilitarianism, for example, takes into account not just pleasures, but
the satisfaction of any preference. Utilitarianism can also be divided along other lines. Act-utilitarianism claims that we must apply a utilitarian calculation to each and every individual
action. By making this calculation, we can thereby determine the moral rightness or wrongness
of each action we plan to take. Rule-utilitarianism eases the burden that act-utilitarianism places
on practical reasoning by establishing moral rules that, when followed, bring about the best
consequences. Rule-utilitarianism can be illustrated by the rule "do not kill". As a general rule,
we would be better off, that is, the best consequences, or state of affairs, would be brought about,
if we all followed the rule "do not kill".
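
As an illustration of the act-utilitarian calculation described above, here is a minimal sketch in Python: each candidate action is scored by summing its estimated effects on every affected person's well-being, with each person weighted equally, and the highest-scoring action is deemed right. The actions and welfare numbers are hypothetical, invented purely for illustration.

# Act-utilitarian choice: pick the action with the greatest total well-being.
def act_utilitarian_choice(actions):
    return max(actions, key=lambda name: sum(actions[name].values()))

# Hypothetical welfare estimates (positive = benefit, negative = harm).
candidate_actions = {
    "treat_patient_now": {"patient": 8, "physician": -1, "next_patient": -2},
    "refer_to_specialist": {"patient": 4, "physician": 0, "next_patient": 0},
    "do_nothing": {"patient": -5, "physician": 1, "next_patient": 0},
}

print(act_utilitarian_choice(candidate_actions))  # -> treat_patient_now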
Objections to Utilitarianism:
There are a number of objections to utilitarian theories, both in their act- formulations and in
their rule- formulations.
(1) Act-utilitarianism, for example, seems to be impractical. To stop and calculate the possible
outcomes of every act we intend to perform, as well as the outcomes of all of the possible
alternatives to that act, is unrealistic. Moreover, it may hinder one's ability to bring about the
best consequences, for example in cases where a quick response is vital (as in responding to a
car wreck).
(2) Others have objected to utilitarianism on the grounds that we cannot always predict the
outcomes of our actions accurately. One course of action may seem like it will lead to the best
outcome, but we may be (and often are) mistaken. The best it seems we can do, then, is to guess
at the short-term consequences of our actions.
(3) Objections to utilitarianism have also been made on the grounds that it is excessively
demanding and places too large a burden on individuals. Since utilitarianism says that acts are
morally right if and only if they maximize pleasure or well-being, it seems that leisure activities,
such as watching television, may be morally wrong because they do not maximize well-being.
Any person watching television could, after all, be doing something else, something
that would maximize utility, like helping others or volunteering.
(4) Finally, utilitarianism receives criticism because seemingly immoral acts and rules can be
justified using utilitarianism (this criticism is applicable both to act- and rule- utilitarianism).
Genocides, torture, and other evils may be justified on the grounds that they, ultimately, lead to
the best outcome. Unjust rules, for example laws that legalize slavery or apartheid, might
also be justified on utilitarian grounds.
Virtue Ethics
Virtue ethics takes its philosophical root in the work of the ancient Greek philosopher Aristotle.
Virtue theories claim that ethics is about agents, not actions or consequences. Living an ethical,
or good life, then, consists in the possession of the right character traits (virtues) and having, as a
result, the appropriate moral character. Unlike deontological accounts, which focus on learning
and, subsequently, living by moral rules, virtue accounts place emphasis on developing good
habits of character. In essence, this means developing virtuous character traits (dispositions to
act in a certain way) and avoiding bad character traits, or vices of character. Character traits
commonly regarded as virtues include courage, temperance, justice, wisdom, generosity, and
good temper (as well as many others). This approach to normative ethics also emphasizes moral
education. Since traits of character are developed in youth, adults are responsible for instilling in
their children the appropriate dispositions.
Objections to virtue ethics:
(1) The first difficulty, which any virtue theorist must surmount, is figuring out which
characteristics count as virtues (and which count as vices). Given that different cultures
sometimes hold different traits of character to be virtuous, it seems that virtue ethical theories are
susceptible to the difficulties involved with cultural relativism.

(2) It also seems that virtuous characteristics can be exhibited even when the actions carried out
are immoral. Courage, for example, is often regarded as a virtue, but can there not be
courageous bank robbers? It certainly seems that a bank robber could exhibit courage while
robbing a bank, yet we generally agree that robbing is morally wrong.
This consequence is problematic because the aim of any normative theory is to arrive at
standards, or norms, of behavior for living a moral life. In the case of the courageous bank
robber, it seems that the bank robber lives according to the standard set by virtue ethics (that is,
he acts courageously) but his behavior is nevertheless immoral. It may be suggested in response
to this objection that the courageous bank robber, though meeting the requirements of the virtue
of courage, fails to live according to the standard set by some other virtue for example,
honesty. This response, however, only serves to highlight another objection to virtue ethics
competing virtues.
Data ownership refers to both the possession of and responsibility for information. Ownership
implies power as well as control. The control of information includes not just the ability to
access, create, modify, package, derive benefit from, sell or remove data, but also the right to
assign these access privileges to others (Loshin, 2002). Implicit in having control over access to
data is the ability to share data with colleagues in ways that promote advancement in a field of
investigation (the notable exception to the unqualified sharing of data would be research
involving human subjects). Scofield (1998) suggests replacing the term "ownership" with
"stewardship", because it implies a broader responsibility where the user must consider the
consequences of making changes to his data. According to Garner (1999), individuals
having intellectual property have rights to control intangible objects that are products of human
intellect. The range of these products encompasses the fields of art, industry, and science.
Research data is recognized as a form of intellectual property and subject to protection by U.S.
law.
Importance of data ownership:
According to Loshin (2002), data has intrinsic value as well as having added value as a
byproduct of information processing; at the core, the degree of ownership (and, by corollary, the
degree of responsibility) is driven by the value that each interested party derives from the use of
that information. The general consensus of science emphasizes the principle of openness (Panel
Sci. Responsib. Conduct Res. 1992). Thus, sharing data has a number of benefits to society in
general and protecting the integrity of scientific data in particular. The Committee on National
Statistics 1985 report on sharing data (Fienberg, Martin, Straf, 1985) noted that sharing data
reinforces open scientific inquiry, encourages a diversity of analyses and conclusions, and
permits:
1. reanalyses to verify or refute reported results
2. alternative analyses to refine results
3. analyses to check if the results are robust to varying assumptions
The costs and benefits of data sharing should be viewed in ethical, institutional, legal, and
professional dimensions. Researchers should clarify at the beginning of a project whether data can or
cannot be shared, under what circumstances, by and with whom, and for what purposes.
Considerations/issues in data ownership
Researchers should have a full understanding of various issues related to data ownership to be
able to make better decisions regarding data ownership. These issues include paradigm of
ownership, data hoarding, data ownership policies, balance of obligations, and technology. Each
of these issues gives rise to a number of considerations that impact decisions concerning data
ownership.
Paradigm of Ownership
Loshin (2002) alludes to the complexity of ownership issues by
identifying the range of possible paradigms used to claim data ownership. These claims are
based on the type and degree of contribution involved in the research endeavor. Loshin (2002)
identifies a list of parties laying a potential claim to data:

Creator - The party that creates or generates data
Consumer - The party that uses the data owns the data
Compiler - This is the entity that selects and compiles information from different
information sources
Enterprise - All data that enters the enterprise or is created within the enterprise is
completely owned by the enterprise
Funder - the user that commissions the data creation claims ownership
Decoder - In environments where information is locked inside particular encoded
formats, the party that can unlock the information becomes an owner of that information
Packager - the party that collects information for a particular use and adds value through
formatting the information for a particular market or set of consumers
Reader as owner - the value of any data that can be read is subsumed by the reader and,
therefore, the reader gains value through adding that information to an information
repository
Subject as owner - the subject of the data claims ownership of that data, mostly in
reaction to another party claiming ownership of the same data
Purchaser/Licenser as Owner - the individual or organization that buys or licenses data
may stake a claim to ownership

Data Hoarding
This practice is considered antithetical to the general norms of science emphasizing the principle
of openness. Factors influencing the decision to withhold access to data could include (Sieber,
1989):

(a) proprietary, economic, or security concerns
(b) documenting data, which can be extremely costly and time consuming
(c) providing all the materials needed to understand or extend the research
(d) technical obstacles to sharing computer-readable data
(e) confidentiality
(f) concerns about the qualifications of data requesters
(g) personal motives to withhold data
(h) costs to the borrowers
(i) costs to funders

Data Ownership Policies


Institutional policies lacking specificity, supervision, and formal documentation can increase the
risk of compromising data integrity. Before research is initiated, it is important to delineate the
rights, obligations, expectations, and roles played by all interested parties. Compromises to data
integrity can occur when investigators are not aware of existing data ownership policies and fail
to clearly describe rights and obligations regarding data ownership. Listed below are some
scenarios between interested parties that warrant the establishment of data ownership policies:

Between academic institution and industry (public/private sector) - This refers to the
sharing of potential benefits resulting from research conducted by academic staff but
funded by corporate sponsors. The failure to clearly delineate data ownership issues early
in public/private relationships has created controversy concerning the rights of academic
institutions and those of industry sponsors (Foote, 2003).
Between academic institution and research staff - According to Steneck (2003), research
funding is awarded to research institutions and not individual investigators. As recipients
of funds, these institutions have responsibilities for overseeing a number of activities
including budgets, regulatory compliance, and the management of data. Steneck (2003)
notes, "To assure that they are able to meet these responsibilities, research institutions
claim ownership rights over data collected with funds given to the institution." This means
that researchers cannot automatically assume that they can take their data with them if
they move to another institution. The research institution that received the funds may
have rights and obligations to retain control over the data. Fishbein (1991)
recommended that institutions clearly state their policies regarding ownership of data,
and present guidelines for such a policy.
Collaboration between research colleagues - This is applicable to collaborative efforts that
occur both within and between institutions. Whether collaborations are between faculty
peers, students, or staff, all parties should have a clear understanding of who will
determine how the data will be distributed and shared (if applicable) even before it is
collected.
Between authors and journals - To reduce the likelihood of copyright infringement, some
publishers require a copyright assignment to the journal at the time of submission of a
manuscript. Authors should be aware of the implications of such copyright assignments
and clarify the policies involved.

Balance of obligations
Investigators must learn to negotiate the delicate balance that exists between an investigator's
willingness to share data in order to facilitate scientific progress, and the obligation to
employer/sponsor, collaborators, and students to preserve and protect data (Last, 2003). Signed
agreements of nondisclosure between investigators and their corporate sponsors can circumvent
efforts to publish data or share it with colleagues. However, in some cases, as with human
participants, data sharing may not be allowed for confidentiality reasons.
Technology
Advances in technology have enabled investigators to explore new avenues of research, enhance
productivity, and use data in ways unimagined before. However, careless application of new
technologies has the potential to create a slew of unanticipated data ownership problems that can
compromise research integrity. The following examples highlight data ownership issues resulting
from the careless application of technology:

Computer The use of computer technology has permitted rapid access to many forms of
computer-generated data (Veronesi, 1999). This is particularly the case in the medical
profession where patient medical record data is becoming increasingly computerized.
While this process facilitates data access to health care professionals for diagnostic and
research purposes, unauthorized interception and disclosure of medical information can
compromise patients' right of privacy. While the primary justification for collecting
medical data is to benefit the patient, Cios and Moore (2002) question whether medical
data has a special status based on their applicability to all people.
Genetics Due to advances in technology, investigators of the Human Genome Project
have opportunities to make significant contributions by addressing previously untreatable
diseases and other human conditions. However, the status of genetic material and genetic
information remains unclear (de Witte, Welie, 1997). Wiesenthal and Wiener (1996)
discuss the conflict between the rights of the individual for privacy, and the need for
societal protection. The critical issues that investigators need to be aware of include the
ownership of genetic data, confidentiality rights to such information, and legislation to
control genetic testing and its applications (Wiesenthal and Wiener, 1996).

The data ownership issues mentioned above serve to highlight potential challenges to preserving data
integrity. While the ideal is to promote scientific openness, there are situations where it may not
be appropriate (especially in the case of human participants) to share data. The key is for
researchers to know the various issues impacting ownership and sharing of their research data and
to make decisions that promote scientific inquiry and protect the interests of the parties
involved.

Data ownership is a relatively new term. I do not think it was much in vogue before
2000, though no doubt it was used somewhere prior to that. My friend Michael Scofield had
written about data ownership in the late 1990s, but may have been ahead of his time.
The Problem of Ownership
Trying to get a definition of data ownership is not easy, and in most cases (but not all) the term is
misleading. This stems from the basic concept of ownership, which means to have legal title and
full property rights to something. If we accept this as the correct definition of ownership, then
data ownership must be to have legal title to one or more specific items of data. However, this
cannot be what is generally meant. If it were, then anyone assigned as a data owner could take
the data they were told they owned and sell it. That would probably result in the firing of the
individual from his or her job, or possibly even civil charges or jail time. It is pretty self-evident,
then, that data ownership does not have a literal meaning, at least not usually. So why
are we using the term? I think the most probable reason is that data ownership is an analogy
rather than a defined term.
Ownership as Analogy
We expect that if someone owns something they will take care of it more so than a renter, for
instance. However, this is not always true. If a person acquires a piece of property at no cost,
say as a gift or by inheritance, they may not always take as much care of it as someone who had
to work hard to afford to buy it. The person who receives free property often does not appreciate
it as much as the person who had to work to get it. Hence, ownership has limits as an analogy
because ownership does not always imply a high degree of responsibility. There is another
unspoken assumption in the analogy: Owners always know how to take care of their property.
Yet this does not correspond to reality. For instance, first-time homeowners are often unaware of
the full scope and detail of property management, even though they are eager to fulfill their
responsibilities. Similarly, nobody would expect a child who inherited a house to be able to care
for it adequately, no matter how much the child wished to do so. Thus, ownership seems to be a
poor analogy if the point in common between holding legal title (true ownership) and data
ownership is an expectation of effective exercise of responsibility for something. Yet, it does
seem to be this aspect of responsibility that is at the core of the analogy of data ownership. A
further problem with the analogy is that it is not uncommon to hear about people "owning"
problems. This seems to imply both responsibility and accountability. In true ownership, it is
difficult to see how an owner can be accountable for not looking after their property properly
(unless it causes harm to someone else or damages their property). Furthermore, it is very
difficult to think of problems as property, because a problem is something we want to get rid of.
So this is yet another poor analogy, but it may affect the way we think about data ownership if
we are focusing only on the problems in data. In other words, data ownership might be conceived
as assigning responsibility and accountability for preventing and fixing problems in a particular
set of data. Such a concept might be too narrow for successful data governance.
The Assignment of Data Ownership
Interestingly, it is not always the owners who are claiming ownership. In my experience, those
who are responsible for data governance discover data owners or assign data ownership. The
so-called owners may never have really thought of themselves as such until they were told they
were. Here we come to a fairly significant problem. Suppose the individuals responsible for data
governance have only a very nebulous idea of what data governance is or cannot conceive of a
concrete set of governance tasks for every distinct class of data and for every mode of usage of
data. Such individuals may be tempted to simply avoid having to identify and define a whole
range of data governance tasks by telling other people they are data owners. This places the
burden of figuring out what data governance is on the unfortunate data owners. It is not as if we
lack the means to specify data governance tasks precisely. RACI (Responsible,
Accountable, Consulted, Informed) matrices are a very good tool to capture the details of who is
involved in what role for each governance task. Furthermore, RACI matrices can easily be
understood by business users. Those responsible for data governance would do better to generate
detailed RACI matrices rather than simply identifying people as data owners.
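For illustration only, here is one minimal way such a matrix might be captured in code; the task
names and role titles are assumptions for the sketch, not a prescribed standard.

# A minimal sketch of a RACI matrix for data governance tasks.
# Task names and roles are illustrative assumptions, not a prescribed standard.
RACI = {
    "define data quality rules": {"R": "data steward", "A": "department head", "C": "DBA", "I": "analysts"},
    "approve access requests":   {"R": "data steward", "A": "data owner", "C": "security", "I": "IT operations"},
    "archive and retention":     {"R": "IT operations", "A": "data owner", "C": "legal", "I": "data steward"},
}

def who_is_accountable(task: str) -> str:
    """Return the single accountable party for a governance task."""
    return RACI[task]["A"]

if __name__ == "__main__":
    for task, roles in RACI.items():
        print(f"{task}: accountable -> {roles['A']}, responsible -> {roles['R']}")

A matrix like this forces the governance team to name concrete tasks and concrete parties, which
is exactly the detail that a bare "data owner" label leaves implicit.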
True Data Ownership
Yet there is one area where data ownership is real and has tremendous implications. This is in the
area of data that is purchased as a service from data vendors. Perhaps this is most commonly
seen in the financial services industry, where data on financial instruments, indexes, benchmarks,
prices, and corporate actions has to be purchased from data vendors. Other classes of data can be
sourced from vendors too, e.g., corporate structures. The licensing of such data has many issues
around intellectual property. Basically, the data vendors regard the data as their property and
limit redistribution and derivation in the licensing agreements. Such restrictions present
enormous data management challenges. It is not easy to prevent redistribution; after all, how do
you know what downstream applications are doing with the data? The same is true of using
vendor-supplied data to derive or compute other data. Yet the vendors do enforce their contracts
and are always on the lookout for violations, some of which can result in hefty settlements. The
problem here is determining to what extent data is the property of a supplier, particularly if the
data can be found in the public realm (which it often can be). It is true that the vendor is
collecting the data, standardizing it, attaching metadata, and so on, but do these actions really
make the data the property of the vendor? At the moment, it would seem that the vendors have
the upper hand, but the costs to their clients are quite considerable and there is pressure being
applied to the vendors. We will have to wait to see how this plays out.
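As a rough illustration of the kind of control this requires (the license names and rules below are
assumptions, not the terms of any actual vendor agreement), downstream tooling can at least tag
vendor-sourced records and refuse exports that the recorded license does not permit.

# Sketch: block exports of vendor-licensed data whose recorded terms forbid redistribution.
# License names and rules are hypothetical, not actual vendor terms.
LICENSE_RULES = {
    "vendor-restricted": {"redistribute": False, "derive": False},
    "vendor-derivable":  {"redistribute": False, "derive": True},
    "public":            {"redistribute": True,  "derive": True},
}

def can_export(record: dict, purpose: str) -> bool:
    """Check the record's recorded license before it leaves the firm."""
    rules = LICENSE_RULES.get(record.get("license", "vendor-restricted"), {})
    return rules.get(purpose, False)

price_record = {"instrument": "XYZ 5Y bond", "price": 101.2, "license": "vendor-restricted"}
assert not can_export(price_record, "redistribute")

Tracking license terms alongside the data does not by itself enforce a contract, but it makes
accidental violations easier to catch in downstream applications.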
Conclusion
This brief overview of data ownership shows that the term is rarely meant literally and is actually
a misleading analogy. It can too easily provide cover for not thinking clearly and in a sufficiently
detailed way about data governance. However, there is an important, though not extensive,
area where true data ownership is a reality that presents a number of important issues that seem
to be difficult to resolve. Hopefully, as data governance matures, the poor analogies of data
ownership will be replaced with more detail and more clarity.
The Importance of Data Classification and Ownership
by Carol Woodbury
Because of laws such as the Health Insurance Portability and Accountability Act (HIPAA), the
requirements of Sarbanes-Oxley (SOX) auditors, and data breaches, organizations are beginning
to realize that they must secure their data - that is, object-level security must be implemented
properly. Thus, organizations are increasingly classifying their data and identifying its
appropriate owners. Proper classification of data is essential to ensuring that data is secured
correctly. This article details the factors you will want to consider as you go through the process
of classifying your data.

What is data classification?


Data classification entails analyzing the data your organization retains, determining its
importance and value, and then assigning it to a category. Data that is considered top secret
(whether contained in a printed report or stored electronically) needs to be classified. Why? So
that it can be handled properly. IT administrators and security administrators can guess how long
data should be retained and how it should be secured, but unless the organization has taken the
time to classify its data, it may not be secured correctly or retained for the required time period.
When classifying data, determine the following aspects of the policy:
Who has access to the data. Define the roles of people who can access the data.
Examples include accounting clerks who are allowed to see all accounts payable and receivable
but cannot add new accounts, and all employees who are allowed to see the names of other
employees (along with managers' names, departments, and the names of vendors and
contractors working for the company). However, only HR employees and managers can see the
related pay grades, home addresses, and phone numbers of the entire staff. And only HR
managers can see and update employee information classified as private, including Social
Security numbers (SSNs) and insurance information.
How the data is secured. Determine whether the data is generally available or, by default, off
limits. In other words, when defining the roles that are allowed to have access, you also need to
define the type of access (view only or update capabilities) along with the general access
policy for the data. Many companies set access controls to deny database access to everyone
except those who are specifically granted permission to view or update the data. Note: Notice I
have not stated the i5/OS security setting for the file; I have defined the access in general terms,
just as I described who should have access in general terms. Determining who has access and
identifying the i5/OS security settings will come when the data custodian (described later in this
article) implements this policy.
How long the data is retained. Many industries require that data
be retained for a certain length of time. For example, the finance industry requires a seven-year
retention period. Data owners need to know the regulatory requirements for their data, and if
requirements do not exist, they should base the retention period on the needs of
the business.
What method should be used to dispose of the data. For some data classifications, the method
of disposal won't matter. But some data is so sensitive that data owners will want to dispose of
printed reports through cross-shredding or another secure disposal method.
Whether the data needs to be encrypted. Data owners will have to decide whether their data
needs to be encrypted. They typically set this requirement when they must comply with a law or
regulation such as the Payment Card Industry (PCI) Data
Security Standard.
What use of the data is appropriate. Before data security became such a hot issue for
organizations, people in many roles within and outside the company used data in all types of
reports. This aspect of the policy defines whether data is for use within the company, is restricted
for use by only selected roles, or can be made public to anyone outside the organization. In
addition, some data has a legal usage definition (for example, California has defined the
appropriate use of a Social Security number). Your organization's policy should spell out any
such restrictions or refer to the legal definitions. Let's face it: security administrators don't have
extra time on their hands. Classifying data is beneficial because it helps security administrators
and internal auditors focus their attention on the data that is most critical to the business, thus
ensuring that it is secured and handled properly. Not that other data is ignored, mind you, but if
administrators can check the access controls on only a limited number of databases or
applications in a given time period, at least it's clear on which ones they should spend the
majority of their time. Proper data classification also helps your organization comply with
pertinent laws and regulations. For example, classifying credit card data as private can help
ensure compliance with the PCI Data Security Standard. One of the requirements of this standard
is to encrypt credit card information. Data owners who correctly defined the encryption aspect of
their organizations data classification policy will require that the data be encrypted according to
the specifications defined in this standard. Classifying data as private can also help your
organization comply with the various data breach notification laws that many states have
enacted. (The State PIRG Consumer Protection Web site,
www.pirg.org/consumer/credit/statelaws.htm#breach, can help you keep track of the states that
have enacted the notification laws.)
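To make the policy aspects described above concrete, here is a minimal sketch of a single
classification policy modeled as a plain record. The field names and example values are
illustrative assumptions, not a schema prescribed by any standard or regulation.

from dataclasses import dataclass

@dataclass
class ClassificationPolicy:
    """One data classification and the handling rules decided by the data owner."""
    name: str                # e.g. "Private"
    read_roles: list[str]    # roles allowed to view the data
    update_roles: list[str]  # roles allowed to change the data
    retention_years: int     # how long the data must be kept
    disposal: str            # approved disposal method
    encrypted: bool          # whether encryption is required
    permitted_use: str       # "internal", "restricted", or "public"

# Hypothetical example: employee SSNs and insurance details.
private_hr_data = ClassificationPolicy(
    name="Private",
    read_roles=["HR manager"],
    update_roles=["HR manager"],
    retention_years=7,
    disposal="cross-shredding / secure wipe",
    encrypted=True,
    permitted_use="restricted",
)

A record like this captures the owner's decisions in one place; the custodian then translates it
into the actual security settings on each system.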
What classifications should be used?
There are no hard and fast rules about the titles and number of classifications. The general
guideline is that the definition of the classification should be clear enough so that it is easy to
determine how to classify the data. In other words, there should be little (if any) overlap in the
classification definitions. Also, it is helpful to use a term for the title of the classification
that indicates the type of data that falls into the particular category. Here are some examples of
categorizing data by title (a brief code sketch follows the list):
Private. Data that is defined as private, such as SSNs, bank accounts, or credit card information.
Company restricted. Data that is restricted to a subset of employees.
Company confidential. Data that can be viewed by all employees but is not for general use.
Public. Data that can be viewed or used by employees or the general public.
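A minimal sketch of how these titles might be represented in code; the names and one-line
descriptions simply restate the examples above and should be adapted to your organization.

from enum import Enum

class DataClass(Enum):
    """Illustrative classification titles; adjust names and wording to your organization."""
    PRIVATE = "SSNs, bank accounts, or credit card information"
    COMPANY_RESTRICTED = "restricted to a subset of employees"
    COMPANY_CONFIDENTIAL = "viewable by all employees but not for general use"
    PUBLIC = "viewable or usable by employees or the general public"

print(DataClass.PRIVATE.value)  # the working definition travels with the label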
Data classifications can also change. For example, IBM will often classify new i5/OS release
information as "IBM Confidential Until Announced." The recipients of this information can
properly protect and use the information before the announcement and can then more freely use
the information after IBM formally announces a new release.
What are the right classifications?
There is no right or wrong classification of data. Remember, data classification is supposed to
ensure that business assets are properly handled. If your organization's management does not
care about its vital business asset (its data), then all of the data can remain unclassified. If the data is
lost or stolen or otherwise inappropriately used, there is no one to blame but the management
personnel who decided not to classify the data. However, I encourage you to at least identify and
classify any private information that your organization retains. Also, classify all the data that is
vital to your business. Data such as a retailer's vendor lists, a transportation company's pricing
information, a medical device company's product specifications, or any information that could
be used by a competitor to harm your business should be classified to ensure that the data
custodian secures it properly.
Who decides the data's classification?
The individual who owns the data should decide the classification under which the data falls. The
committee that wrote the data classification definitions or policies can certainly help or provide
guidance, but the final determination for the classification should be the data owner's
responsibility. The data owner is best qualified to make this decision because he or she has the
most knowledge about the use of the data and its value to the organization. The database
administrator (DBA) can be a good checkpoint to ensure that data is classified and protected
properly. Data owners set the classification, but the classification may be poorly communicated
or forgotten by programmers developing in-house written applications. When new files are
created, the DBA can review the classification to ensure that programmers understand the type of
data with which they're working. When new files are moved from the development environment
to production, DBAs can perform a final check to ensure the default access on the file is being
set appropriately, given the data's classification. Finally, data owners should review their data's
classification at least annually to ensure that the data remains correctly classified. For example, if
data owners had been reviewing data classifications for the past few years, they probably moved
much of their employees' information, especially information such as SSNs, from a
"confidential" classification to a "private" classification. SSNs were never considered private
until they were used for identity theft. Since thieves started to steal databases of SSNs, their
classification has been upgraded to restrict access and more tightly control their use.
Will the real data owner please stand up?
In addition to classifying data, an organization needs to assign an owner. The owner is not the
i5/OS or OS/400 user profile that owns the database object on the system; rather, it is the person
in the organization who owns the data that is stored in the database on the iSeries. The data
owner is typically a director, or at least a department head, who has a vested interest in making
sure the data is accurately and appropriately secured. Take a financial application, for example.
Depending on the size of the organization, the data owner may be the CFO or one of the
directors who reports to the CFO. The person who is appointed needs to understand the
importance and value of the information to the business, the ramifications of inaccurate
storage or inappropriate access, and the laws and regulations that may govern the use and
retention period of the data. What are the responsibilities of the data owner? The data owner is
responsible for setting up a policy to allow specific individuals to see and update the data.
Usually, a person's role determines access. For example, anyone in the accounting department
can view the accounting data, but only lead accounting analysts can add new accounts.
The data owner is also responsible for
determining who has access to the data, how the data should be secured, how long the data
should be retained, what the appropriate disposal methods are, and whether the data should be
encrypted. The data owner may appoint an administrator to do the daily tasks associated with
these responsibilities. For example, the data owner may appoint someone to approve daily
requests to access the data. The appointed person will act under the direct instructions of the data
owner. Unfortunately, IT often ends up being the de facto owner of the data. Although the IT
department can be the custodian of the data, it should not be the owner. Employees in IT
generally do not know how important the data is to the business, how the data is to be used, and
which people (or roles) should access the data. Another reason IT should not be the owner as
well as custodian of the data is separation of duties. If IT decides who has access to the data and
then administers that access, there are no checks and balances to ensure that access control
policies are being followed or that inappropriate access is not being assigned. IT's role is usually
that of data custodian. The custodian is responsible for implementing the policies set by the data
owner. For example, IT is usually responsible for ensuring that the database file's access controls
(such as *PUBLIC authority) are set per the data owner's requirements. IT is also responsible for
backing up the data as well as properly disposing of any electronic copies of the data in the
department's possession.
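As a rough sketch of how a custodian might express the owner's deny-by-default policy in
ordinary application code (the roles, datasets, and grants below are hypothetical, and this is not
an i5/OS mechanism such as *PUBLIC authority):

# Minimal deny-by-default access check implementing a data owner's policy.
# Roles, datasets, and the ALLOWED table are illustrative assumptions.
ALLOWED = {
    ("accounts_payable", "view"): {"accounting clerk", "lead accounting analyst"},
    ("accounts_payable", "add"):  {"lead accounting analyst"},
}

def is_allowed(role: str, dataset: str, action: str) -> bool:
    """Everything not explicitly granted is denied."""
    return role in ALLOWED.get((dataset, action), set())

assert is_allowed("accounting clerk", "accounts_payable", "view")
assert not is_allowed("accounting clerk", "accounts_payable", "add")

Keeping the grant table separate from the code that enforces it also preserves the separation of
duties discussed above: the owner decides the grants, the custodian applies them.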
How do I get the owner to take ownership?
Getting the appropriate person to take ownership of the data can be difficult, if not impossible,
without upper management's involvement in the initiative. When the organization understands
the importance of appointing data owners, it will understand that data is a vital business asset
that must be handled properly. Having IT appoint data owners without upper management buy-in
is rarely successful. So, from an IT perspective, the first order of business is to educate
management on the idea that the organizations data is a vital asset and needs correct
classification as well as an appropriate (non-IT) owner so that it can be handled and protected
properly.

Summary
Data classification and appropriate data ownership are key elements in an organization's security
policy. Without these elements, implementing a security scheme will be difficult, and an
organization is unlikely to meet the internal and regulatory requirements related to access control
for its data.
Background
There are many facets to the responsible conduct of research, and one of the most important is
the integrity of data. While fabrication and falsification obviously affect data integrity, many
other factors have the potential to compromise research data and results. Responsible data
management includes appropriate data collection and storage, issues of access to and sharing of
data, and determination of custody and responsibility for the data record and any associated
sensitive information.
Policies and guidelines regarding data management can vary among
institutions and disciplines. In addition to their responsibility to conduct research ethically,
researchers and scholars must abide by the procedures required by their funding agency, their
institution or the source of the data (e.g. databanks, museum collections, research subjects). With
the increasing use of technology in recording, storing, and sharing data, new norms are being
developed for the ethical management of data.
What is data?
The National Institutes of Health guidelines on sharing research data refer to "final research
data", which is defined as "recorded factual material commonly accepted in the scientific
community as necessary to validate research findings. Final research data do not include
laboratory notebooks, partial datasets, preliminary analyses, drafts of scientific papers, plans for
future research, peer review reports, communications with colleagues, or physical objects, such
as gels or laboratory specimens."1 These latter items are considered "research resources" and
guidelines for sharing these can be found elsewhere.2 Data can vary widely in character,
especially when research and scholarship in fields other than science are considered. Proper
management of data will depend on the nature of the data, and may need to be determined for
each project in a careful and thoughtful manner.
Preserving data integrity and data storage
Data must be archived in a controlled, secure environment in a way that safeguards the primary
data, observations, or recordings. The archive must be accessible by scholars analyzing the data,
and available to collaborators or others who have rights of access. Primary research data should
be stored securely for sufficient time following publication, analysis, or termination of the
project. The number of years that data should be retained varies from field to field and may
depend on the nature of the data and the research.
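One common, low-overhead way to help safeguard archived primary data is to record
cryptographic checksums at deposit time and re-verify them later. The sketch below illustrates
the idea; the directory layout is a placeholder, and a real archive would also need access
controls, backups, and documented custody.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(archive_dir: str) -> dict[str, str]:
    """Map each archived file to its digest so later corruption or tampering is detectable."""
    return {str(p): sha256_of(p) for p in Path(archive_dir).rglob("*") if p.is_file()}

def verify(manifest: dict[str, str]) -> list[str]:
    """Return the files whose current digest no longer matches the recorded one."""
    return [name for name, digest in manifest.items() if sha256_of(Path(name)) != digest]

Storing the manifest alongside (and backed up separately from) the archive gives later users a
simple way to confirm that the primary data they are analyzing is the data that was deposited.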
Sharing

According to the National Institutes of Health1, "Data sharing achieves many important goals for
the scientific community, such as

reinforcing open scientific inquiry

encouraging diversity of analysis and opinion

promoting new research, testing of new or alternative hypotheses and methods of analysis

supporting studies on data collection methods and measurement

facilitating education of new researchers

enabling the exploration of topics not envisioned by the initial investigators

permitting the creation of new datasets by combining data from multiple sources."

These benefits and values extend beyond the scientific community to include most forms of
research and scholarship. Knowledge builds on prior scholarship, and often that involves further
analysis of previously assembled data. An essential principle in most fields of scholarship is that
research resources, including primary data, should be made available to others who wish to
replicate or advance a line of work. This principle has been articulated by both the National
Institutes of Health and the National Science Foundation, which have specific data sharing
policies that apply to their funding recipients.
Many fields of research and scholarship are competitive, however, and often researchers are
reluctant to share primary data before they have completed their analysis. Is it possible to find a
balance that preserves the rights of those collecting the data to analyze it first, and the principle
of shared scholarship? In response to protests from scientists that NIH's data sharing policy was
too generous, the agency revised its definition of "timely release and sharing" to be "no later
than the acceptance for publication of the main findings from the final data set."2 It is understood
that researchers have a right to "first and continuing use" of the data in which they have invested
their time and effort.2
Confidential and sensitive data
Scholars and researchers must be especially scrupulous in ensuring that confidential or sensitive
data is stored and released in a way that does not compromise privacy or create risks for research
participants. The Code of Ethics of the American Anthropological Association states, "In
conducting and publishing their research, or otherwise disseminating their research results,
anthropological researchers must ensure that they do not harm the safety, dignity, or privacy of
the people with whom they work, conduct research, or perform other professional activities, or
who might reasonably be thought to be affected by their research."3


This is not only a moral and professional responsibility, but a legal requirement as well. Federal
regulations, such as the Common Rule and FDA regulations, require attention to the privacy of
research subjects, including the confidentiality of data about them. The Privacy Rule of the
federal Health Insurance Portability and Accountability Act (HIPAA) describes requirements for
most research data derived from health care records.4 Achieving appropriate confidentiality
requires specification of data handling responsibilities and privileges, that is, who can handle
which portion of data, at what point during the project, for what purpose, and so on.
Data that includes confidential or sensitive information can still be shared, however. There are a
number of steps researchers can take to protect subjects' privacy2 (a brief code sketch follows this list):

withholding part of the data

statistically altering the data in ways that will not compromise secondary analyses

requiring researchers who seek data to commit to protect privacy and confidentiality

providing data access in a controlled site, sometimes referred to as a data enclave.
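The first two steps above can be approximated in code. The sketch below drops direct
identifiers and coarsens or lightly perturbs a few quasi-identifiers before sharing; the column
names are hypothetical, and this is not a substitute for a formal disclosure-risk review.

import random

# Assumed column names for illustration; real datasets will differ.
DIRECT_IDENTIFIERS = {"name", "ssn", "street_address"}

def deidentify(rows):
    """Drop direct identifiers, coarsen age into 5-year bands, and lightly perturb income."""
    out = []
    for row in rows:
        r = {k: v for k, v in row.items() if k not in DIRECT_IDENTIFIERS}
        if "age" in r:
            band = (int(r["age"]) // 5) * 5
            r["age"] = f"{band}-{band + 4}"
        if "income" in r:
            r["income"] = round(float(r["income"]) * random.uniform(0.98, 1.02), -2)
        out.append(r)
    return out

sample = [{"name": "A. Participant", "ssn": "000-00-0000", "age": "37", "income": "52000"}]
print(deidentify(sample))  # identifiers removed, age banded, income perturbed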

Ownership and responsibility


Typically, when research is funded by federal or nonprofit granting agencies, the data is owned
by the institution receiving the grant. The primary researcher or scholar receiving the grant has
the responsibility for storage and maintenance of the data, including the protection of
confidential or sensitive information. Data obtained through research supported by private or
corporate funding, however, may have different guidelines for ownership and restrictions on
sharing. This issue is further complicated when organizations such as universities patent data
sets.
It is important for researchers to understand the relevant ownership rules for any data that they
collect or use. From an ethical standpoint, researchers should consider the implications of data
ownership agreements before they are made with other researchers, institutions, or funding
agencies. Will the data collected and analyzed be freely available for future collaborations or
further analysis?
Summary
These points all come together in the National Science Foundation's data-sharing policy: "NSF
expects significant findings from research and activities it supports to be promptly submitted for
publication, with authorship that reflects the contributions of those involved. It expects
investigators to share with others at no more than incremental cost and within a reasonable time,
the data, the samples, physical collections and other supporting materials created or gathered in
the course of the work. It also encourages awardees to share software and inventions to make
them useful and usable. Exceptions may be allowed to safeguard the rights of individuals and
subjects, the validity of results or the integrity of collections."5
