

Journal of Management
Vol. 42, No. 5, July 2016, 1037-1043
DOI: 10.1177/0149206316643931
© The Author(s) 2016
Reprints and permissions:
sagepub.com/journalsPermissions.nav

Editorial

Ensuring Research Integrity: An Editor's Perspective
Patrick M. Wright
Editor

Keywords: ethics; research ethics; publishing ethics; questionable research practices; retractions; data fraud; peer review; tenure; publishing

Ethics and integrity in research have long been central topics of concern within the research community (Martinson, Anderson, & de Vries, 2005) and the public at large. Currently, websites such as Retraction Watch catalog peer-reviewed articles that were later retracted for being erroneous, misleading, and/or fraudulent in some way. Following this, other outlets have emerged that allow individuals to anonymously question, critique, and discuss published research online. Within the scholarly domain, a literature on questionable research practices has emerged that describes common problems pertaining to analysis and reporting, along with the prevalence of these practices (Simmons, Nelson, & Simonsohn, 2011). Given the ubiquity of such discussions within and outside of peer-reviewed journal space, and given Journal of Management's (JOM's) continued (40-year) evolution as a top-tier research outlet, I thought it appropriate to speak about publication integrity in general and within JOM specifically.

The Publishing Pressure Cooker


A variety of developments within academe over the decades have created an emerging threat to research integrity (Mayer & Steneck, 2012); however, two developments in particular are critical to highlight. Specifically, I believe that the increasing pressure to publish may encourage some authors to misreport their data, while rapid developments in statistical techniques have made it (a) more likely that authors may incorrectly apply a particular technique and (b) more difficult for reviewers and editors to identify when either of the above has occurred.

Acknowledgments: The author wishes to thank Deborah Rupp, Fred Oswald, Taco Reus, David Allen, Paul Bliese,
Rob Ployhart, and John Hollenbeck for the valuable input on this manuscript.


Publish or Perish
The "publish or perish" moniker is certainly nothing new, as scholars and philosophers of science have discussed the pros and cons of tenure pressure for as long as the system has existed (De Rond & Miller, 2005). But some characteristics of this environment have become more salient. One involves the counting that has become normative in evaluating scholars' annual output for both promotion and pay raise decisions. Business schools and other academic departments increasingly demand (sometimes implicitly but often explicitly) a certain minimum number of top-tier publications to achieve promotion and tenure (De Rond & Miller, 2005). This pressure extends beyond just publishing to publishing in specifically identified "A" journals. An informal poll of JOM editorial board members revealed a number of schools subscribing to the Financial Times journal list, the University of Texas at Dallas list, or lists that are idiosyncratic to a school and reflect the (often dated!) perceptions of faculty members and deans regarding journal quality and reputation.
In addition, the inclusion of faculty research productivity as one criterion for ranking MBA programs has resulted in increased pressure to publish a larger quantity of papers and to publish papers in specific journals. This emphasis on numbers discourages promotion and tenure committees (and even those writing letters of support) from thoroughly reading and discussing a candidate's body of work to consider the work's larger impact in terms of increasing the broader knowledge base. More and more, academic success in our field boils down to a numbers game in a small set of journals. Thus, those seeking to get ahead in academe may be incented to do whatever it takes to achieve the numbers that determine their promotion, pay, and marketability.

The Rapid Escalation of Statistical Techniques


At the same time, over the last few decades, our field has witnessed a sharp rise in the
number of methodological and statistical approaches available to authors and a considerable
increase in the tools available for carrying out such methods and analyses. Furthermore, many
of the analytical approaches gaining favor within our field are increasingly complex, requiring
specialized expertise that may or may not have been widely available to scholars during their
graduate training. The changes in methodological approaches create new challenges for
authors designing studies and analyzing data. At the same time, the changes put new pressures
on reviewers to develop requisite competencies to critique new and emerging methods.
In addition, these new approaches diminish the transparency that existed in the past. When regression constituted the popular statistical method, one could easily reanalyze the results of a paper from the table of intercorrelations and standard deviations. Thus, reviewers and editors could quality-check the analyses within the context of the review process, and readers could conduct the analyses, or variations on them, as well. However, the traditional reporting of statistical methods involving techniques such as hierarchical linear modeling (HLM), Bayesian analyses, predictive modeling from big data, bootstrapped standard errors, and so on makes such reanalysis virtually impossible. Consequently, in many cases reviewers can rely only on the good faith of the authors as they apply their expertise in attempting a balanced critique of a manuscript.
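The lost transparency described above can be made concrete: for an ordinary regression, the standardized coefficients and model R-squared are fully recoverable from a paper's reported correlation matrix and standard deviations alone. The following is a minimal sketch in Python with NumPy; all numbers are hypothetical and drawn from no published study.

```python
import numpy as np

# Hypothetical descriptives, as they might appear in a paper's table:
# correlation matrix ordered [x1, x2, y] and the variables' standard
# deviations. (Illustrative numbers only.)
R = np.array([
    [1.00, 0.30, 0.50],
    [0.30, 1.00, 0.40],
    [0.50, 0.40, 1.00],
])
sd = np.array([2.0, 1.5, 4.0])  # SDs of x1, x2, y

# Standardized slopes: beta = Rxx^{-1} rxy, using only reported correlations.
Rxx = R[:2, :2]   # predictor intercorrelations
rxy = R[:2, 2]    # predictor-outcome correlations
beta = np.linalg.solve(Rxx, rxy)

# Unstandardized slopes rescale beta by the reported standard deviations.
b = beta * (sd[2] / sd[:2])

# Model R-squared follows from the same reported quantities.
r_squared = float(beta @ rxy)

print(beta.round(3), b.round(3), round(r_squared, 3))
# beta ≈ [0.418, 0.275], R^2 ≈ 0.319
```

Because every quantity above comes from routinely reported descriptives, any reader could rerun such a check; no comparable shortcut exists for HLM, Bayesian, or bootstrapped analyses without access to the raw data.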
These factors have combined to create an environment where both authors and reviewers
are increasingly uncertain about the adequacy of their data handling and analyses.
Compounding this issue is the increased prevalence of public scrutiny of authors and papers, which not only points out potential methodological errors but also occasionally implies an intentional moral infraction by the authors (a topic to which I later return). The organizational behavior research literature informs us that lowered efficacy (Bandura, 2012), threat of public scrutiny/ostracism (Baumeister, DeWall, Ciarocco, & Twenge, 2006), and a moralized public discourse (Kreps & Monin, 2011) can each have detrimental effects on motivation and performance. In essence, if cautionary messages and corrective and developmental interventions are not managed properly, they could stifle the very behaviors they are designed to promote and potentially exacerbate the problems. Within the science of management, we run the risk that researchers and reviewers alike will engage in "play it safe" research conduct that may result in less skill building, less useful feedback and dialogue, and, importantly, less innovation.
Collectively, these pressures may interact with each other and with personality characteristics to induce some authors to engage in unethical research practices and others to knowingly or unknowingly engage in questionable research practices. Just as there is a broad system designed to make the publication process effective, a broad system of accountability needs to be cast in the scientific (and public) discourse on research practices. Furthermore, I suggest that the field (and academe as a collective) needs to work together on interventions focused on changing the environment within graduate training, mid- and late-career professional development, tenure and promotion practices/norms, and review, editorial, and journal operations. In other words, we are all accountable for the current state of affairs in the conduct, reporting, and evaluation of modern statistical analyses, and we must all work together for solutions. In this regard, we need to consider carefully the psychological tone and implications of our day-to-day messages and interventions delivered to the field, whether they are cast broadly to the field or narrowly to an individual or group, with an eye toward all that is known about what makes social change interventions ultimately effective. In this editorial, I describe four ways in which JOM currently seeks to address issues pertaining to research ethics, methodological competence, the quality of the review process, and the larger question of research integrity.

A New Reviewing Policy


Widespread advances in statistical methods have made it nearly impossible for reviewers to have expertise in every analytical technique available to researchers. Furthermore, it is rarely possible for reviewers to know directly the authors' decision points and rationale underlying all analyses. Researchers are applying structural equation modeling, HLM, meta-analysis, latent profile analysis, item response theory, and so forth using a variety of data analytic platforms/languages (e.g., SPSS, Mplus, R, SAS, Stata, MATLAB). This plethora of techniques has flourished with little consensus regarding which one might be best for analyzing a specific data set. Researchers may not completely understand the implementation of modern techniques because of their newness: for example, the default settings of a statistical method as implemented in a software program (and the effects of changing them), the assumptions of the method, and the effects of violating those assumptions. In addition, best practices surrounding a number of these techniques continue to evolve, and sometimes
rapidly evolve, with precedent being set across published papers, making it increasingly important to keep plugged in so as not to miss critical norm shifts. Overall, these changes decrease the odds that every reviewer assigned to a paper will have the necessary depth of statistical knowledge to accurately evaluate a paper's approach, reporting, and interpretation.

Table 1
Reviewer Confidence Items

Not all reviewers have deep expertise in the variety of statistical methods used across studies submitted to JOM. To ensure that all papers have at least one reviewer with deep knowledge of the methods used, given your expertise in the statistical methods used in this paper, please indicate your comfort/confidence in your ability to rigorously evaluate the results reported. (Very uncomfortable / Some discomfort / Comfortable / Confident / Very confident / Not applicable)
I affirm that, to the best of my ability (as noted above), I have carefully critiqued the results reported in this study. (Yes/No)
If you have concerns, please indicate them in the Comments for the Editor section.

Standard practice among journal editors in the past has been to carefully ensure that reviewers collectively provide expert coverage of both the substantive and the methodological aspects of a paper. This judgment is typically made based on the known or assumed expertise of each reviewer but rarely validated within the context of the particular paper at hand. Until recently, an action editor might become aware of reviewers' lack of knowledge or expertise only if reviewers chose to disclose this within their optional comments to the editor. Moreover, journals never formally test reviewers' claimed expertise, and as one editor told me, "One graduate-level class on a technique does not equal expertise."
In response to this issue, JOM has added specific items to be completed by reviewers rating their self-efficacy for evaluating various aspects of a given paper. The specific questions are provided in Table 1.¹ The logic behind these additional questions is threefold. First, we want to send a clear message that in most instances, a thorough review of all aspects of a paper is expected. Second, we realize there are situations where reviewers may review only some aspects of a paper, and thus we want to provide an explicit mechanism allowing action editors to understand which aspects of a paper received thorough review by multiple individuals. Finally, we are sympathetic to instances where reviewers are unfamiliar with a particular method or statistical analysis, and we seek to provide a vehicle for them to communicate this explicitly to the action editor. Our hope is that this new policy will provide more concrete information to action editors, who can then make more informed decisions on whether additional reviewers may be needed in order to critique a paper fairly and completely. We also hope that this will allow us to detect various questionable research practices more effectively and will allow the review process to serve as a positive and functional vehicle for dialogue and development.
In addition, for the remainder of my term, I plan to add a further improvement to the review process. I have sought a small cadre of editorial board members who will act as quality control experts with regard to the analytics presented. When a paper receives a revise-and-resubmit decision, one of these experts will be added as a third reviewer to the revision to focus solely on thoroughly evaluating the adequacy of the statistical analyses. I do not suggest that this will prevent any flawed analyses from ever being published in JOM, but I believe it will decrease the likelihood of that happening.

Investigations
We by no means expect that JOM will eliminate inadvertent publications of questionable results; however, when such events occur (or are alleged to have occurred), JOM subscribes to the Committee on Publication Ethics (COPE) guidelines, which lay out a formal process of investigation to ensure procedural justice for all parties involved.² According to this process, if a reader raises questions about the potential fabrication or misreporting of data, the editor first performs an internal investigation by seeking out a reviewer (or multiple reviewers) to determine whether the accusation has any potential validity. If the reviewer(s) conclude that there is substance to the accusation, the editor contacts the lead author to explain the issue (without making any accusation) and allows him or her to respond. At this stage, the author's response may be satisfactory, requiring no further action, or it may entail making corrections that could appear as a corrigendum in the journal. If the author's response seems unsatisfactory, the journal editor contacts the home institution of the author for a formal investigation conducted by that institution. At the conclusion of the investigation, the accusation may be found to be invalid; if valid, the journal is advised to retract the paper. At the final resolution, the reader who originally raised the issue is informed of the outcome.
Note that this investigative process maintains confidentiality for both the author(s) and the reader. Journal editors have to ensure due process for all parties because even the public acknowledgement of an investigation may lead to rumors and false assumptions about the process that may ultimately defame the author and/or claimant, regardless of the final outcome. The simple accusation of wrongdoing can destroy an author's reputation and potentially damage his or her career; thus, as journal editors, we must take every precaution to protect all parties throughout the process. For instance, if a reader asks an editor whether the journal is investigating a particular paper, simply acknowledging the investigation casts a negative shadow over an author and violates his or her due process rights. For this reason, JOM does not comment on any investigations or even confirm whether an investigation is under way or completed.

Corrigenda and Retractions


If some error is discovered in the published paper as a result of the investigation, authors may be provided the opportunity to issue a corrigendum. If the mistakes are minor and do not materially affect the ultimate conclusions of a study, authors may publicly acknowledge the mistake and provide the correct information. However, if the investigation reveals a material problem (either an unethical practice or an error that substantively changes the conclusions of the paper), the journal reserves the right to retract the paper.
I sometimes hear that editors do not want to retract papers because such actions reflect
poorly on the journal. Nothing could be further from the truth. The reputation of any journal
depends first and foremost on the trustworthiness of the papers it publishes. Our goal at JOM
is to contribute to the knowledge base, and unsound papers set the field back rather than
move it forward. A journal can lose far more credibility by letting flawed papers remain than
by retracting them; thus, there would be no hesitation in retracting a paper for which there
was evidence of misreporting or mishandling of data in a way that materially deceives or
misdirects future research.

At the same time, we are committed to due process when it comes to all investigations surrounding questionable research practices. Authors will be considered innocent until proven otherwise, and JOM will not take action unless there is a preponderance of evidence and rationale supporting such an action. Furthermore, the critical scrutiny given to particular authors and articles may reveal important problems endemic to a much larger number of publications, if not the field itself. As methods and analytic techniques evolve, so too will what are considered best analysis and reporting practices. Thus, we place emphasis on a collective development of skills that will allow all of us to become better consumers of research that may have predated the firm establishment of those improved practices. We recognize that we must all work together to make science better, moving forward.

Research Integrity and Ethics Beyond the Journals


Although the previous discussion focused on the ethical obligations of editors and journals such as JOM to take steps to ensure that the papers we publish are of the highest quality and integrity, I have also discussed a need for the field to consider potentially negative consequences of interventions designed to improve our science. For instance, consider the rise of websites that allow individuals to post critiques of published work anonymously. The postings on these sites range from asking simple questions about the analyses within a particular study up to and including explicit accusations about the morality, ethics, and integrity of both authors and editors.
Unlike editors, who both feel and bear an ethical obligation to a fair and just process for both those making accusations and those accused of wrongdoing, individuals posting on such sites seem to recognize neither. The online process is governed loosely, if at all. Our editorial team has heard from a number of authors whose work has been criticized on such sites, with the common theme that none of them had received previous inquiries through the journal mechanism about their published work before seeing it criticized publicly. This seems to be the impression of other journal editors as well. I believe that such conduct tends to lack due process, tends to confound the author with the state of the science, and tends to be unethical and counterproductive to the field.
I respectfully suggest that when a reader suspects a problem with a published paper, he or she first contact the corresponding authors themselves to ask for clarification. If the authors provide unsatisfactory responses, the next step would be to contact the journal in which the paper was published to allow that journal to conduct a fair, impartial, and confidential investigation. Such a process may take longer than the reader would like, but the delay is necessary to gather the required evidence and maintain fairness. Finally, I believe that once the reader has been informed of the outcome, he or she should consider the specific concern complete. Of course, the complainant should be encouraged to continue the scientific dialogue constructively through the peer-reviewed publication process or online through identified means (e.g., on the latter, many researchers operate highly productive and popular blogs). Professional scientific dialogue should take place between two or more individuals whose identities are known. In my opinion, resorting to anonymous public posts that stray beyond the facts into accusations of incompetence or unethical behavior on the part of the author and/or journal, while easy, is done without the larger perspective of due process and context, and it is therefore both morally and professionally wrong.

Conclusion
Certainly, as discussed at the outset, increasing pressures on researchers to publish in top journals may incent questionable research practices, and the growing number and complexity of data analytic techniques make it more difficult for editors and reviewers to fully vet the analyses. I believe that our field needs to cast a broader net of accountability around our science when facing such issues. Journals must do everything possible to avoid publishing papers that violate the ethical norms for research conduct and to deal severely with published papers that have done so. However, journal editors also bear the ethical responsibility to understand the context of the science at large and, in this context, to deal fairly with authors who have been accused, but not proven, of having engaged in misconduct. A fair process exists for investigating accusations, but that process works through the journals and the authors' home institutions. The editors of JOM, along with editors at a number of other top journals, do all we can to ensure publishing integrity in a way that is respectful to the authors and to our science.

Notes
1. We encourage other editors to use or modify these items as part of their review process.
2. All of COPE's resources can be found at publicationethics.org.

References
Bandura, A. 2012. On the functional properties of perceived self-efficacy revisited. Journal of Management, 38:
9-44.
Baumeister, R. F., DeWall, C. N., Ciarocco, N. L., & Twenge, J. M. 2006. Social exclusion impairs self-regulation.
Journal of Personality and Social Psychology, 88: 589-604.
De Rond, M., & Miller, A. N. 2005. Publish or perish: Bane or boon of academic life? Journal of Management
Inquiry, 14: 321-329.
Kreps, T. A., & Monin, B. 2011. Doing well by doing good? Ambivalent moral framing in organizations. Research
in Organizational Behavior, 31: 99-123.
Martinson, B. C., Anderson, M. S., & de Vries, R. 2005. Scientists behaving badly. Nature, 435: 737-738.
Mayer, T., & Steneck, N. (Eds.). 2012. Promoting research integrity in a global environment. Singapore: World
Scientific.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. 2011. False-positive psychology: Undisclosed flexibility in data
collection and analysis allows presenting anything as significant. Psychological Science, 22: 1359-1366.
