Journal of Management
Vol. 42 No. 5, July 2016, 1037-1043
DOI: 10.1177/0149206316643931
© The Author(s) 2016
Editorial
Keywords: ethics; research ethics; publishing ethics; questionable research practices; retractions; data fraud; peer review; tenure; publishing
Ethics and integrity in research have long been central topics of concern within the
research community (Martinson, Anderson, & de Vries, 2005) and the public at large.
Currently, websites such as Retraction Watch catalog peer-reviewed articles that were later
retracted on the basis of being in error, misleading, and/or fraudulent in some way. Following
this, other outlets have emerged that allow individuals to anonymously question, critique,
and discuss published research online. Within the scholarly domain, a literature on question-
able research practices has emerged that speaks about common problems pertaining to analy-
sis and reporting along with the prevalence of these practices (Simmons, Nelson, &
Simonsohn, 2011). Given the ubiquity of such discussions within and outside of peer-reviewed journal space, and given Journal of Management's (JOM's) continued (40-year) evolution as a top-tier research outlet, I thought it appropriate to speak about publication integrity in general and within JOM specifically.
Acknowledgments: The author wishes to thank Deborah Rupp, Fred Oswald, Taco Reus, David Allen, Paul Bliese,
Rob Ployhart, and John Hollenbeck for the valuable input on this manuscript.
Publish or Perish
The "publish or perish" moniker is certainly nothing new, as scholars and philosophers of science have discussed the pros and cons of tenure pressure for as long as the system has existed (De Rond & Miller, 2005). But some characteristics of this environment have become more salient. One involves the counting that has become normative in evaluating scholars' annual output for both promotion and pay raise decisions. Business schools and other academic departments increasingly demand (sometimes implicitly but often explicitly) a certain minimum number of top-tier publications to achieve promotion and tenure (De Rond & Miller, 2005). This pressure extends beyond just publishing to publishing in specifically identified "A" journals. An informal poll of JOM editorial board members revealed a number of schools subscribing to the Financial Times journal list, the University of Texas-Dallas list, or lists that are idiosyncratic to a school and reflect the (often dated!) perceptions of faculty members and deans regarding journal quality and reputation.
In addition, the inclusion of faculty research productivity as one criterion for ranking
MBA programs has resulted in increased pressure to publish a larger quantity of papers and
to publish papers in specific journals. This emphasis on numbers discourages promotion and tenure committees, and even those writing letters of support, from thoroughly reading and discussing a candidate's body of work to consider the work's larger impact in terms of
increasing the broader knowledge base. More and more, academic success in our field boils
down to a numbers game in a small set of journals. Thus, those seeking to get ahead in
academe may be incented to do whatever it takes to achieve the numbers that determine their
promotion, pay, and marketability.
Compounding this issue is the increased prevalence of public scrutiny of authors and papers,
which not only points out potential methodological errors but also occasionally implies an
intentional moral infraction by the authors (a topic to which I later return). The organizational
behavior research literature informs us that lowered efficacy (Bandura, 2012), threat of pub-
lic scrutiny/ostracism (Baumeister, DeWall, Ciarocco, & Twenge, 2006), and a moralized
public discourse (Kreps & Monin, 2011) can each have detrimental effects on motivation and
performance. In essence, if cautionary messages and corrective and developmental interven-
tions are not managed properly, they could stifle the very behaviors they are designed to
promote and potentially exacerbate the problems. Within the science of management, we run
the risk that researchers and reviewers alike will engage in "play it safe" research conduct
that may result in less skill building, less useful feedback and dialogue, and importantly, less
innovation.
Collectively, these pressures may interact with each other and with personality character-
istics to induce some authors to engage in unethical research practices and others to know-
ingly or unknowingly engage in questionable research practices. Just as there is a broad
system designed to make the publication process effective, a broad system of accountability
needs to be cast in the scientific (and public) discourse on research practices. Furthermore, I
suggest that the field (and academe as a collective) needs to work together on interventions
focused on changing the environment within graduate training, mid- and late-career profes-
sional development, tenure and promotion practices/norms, and review, editorial, and journal
operations. In other words, we are all accountable for the current state of affairs in the con-
duct, reporting, and evaluation of modern statistical analyses, and we must all work together
for solutions. In this regard, we need to consider carefully the psychological tone and impli-
cations of our day-to-day messages and interventions delivered to the field, whether they are
cast broadly to the field or narrowly to an individual or group, with an eye toward all that is
known about what makes social change interventions ultimately effective. In this editorial, I describe four ways in which JOM currently seeks to address issues pertaining to research ethics, methodological competence, the quality of the review process, and the larger question of research integrity.
Table 1
Reviewer Confidence Items

Not all reviewers have deep expertise in the variety of statistical methods used across studies submitted to JOM. To ensure that all papers have at least one reviewer with deep knowledge of the methods used, given your expertise in the statistical methods used in this paper, please indicate your comfort/confidence in your ability to rigorously evaluate the results reported. (Very uncomfortable / Some discomfort / Comfortable / Confident / Very confident / Not applicable)

I affirm that, to the best of my ability (as noted above), I have carefully critiqued the results reported in this study. (Yes / No)

If you have concerns, please indicate them in the "Comments for the Editor" section.
important to keep plugged in so as not to miss critical norm shifts. Overall, these changes decrease the odds that every reviewer assigned to a paper will have the necessary depth of statistical knowledge to accurately evaluate a paper's approach, reporting, and interpretation.
Standard practice among journal editors in the past has been to carefully ensure that
reviewers collectively provide expert coverage of both the substantive and the methodologi-
cal aspects of a paper. This judgment is typically made based on the known or assumed
expertise of each reviewer but rarely validated within the context of the particular paper at
hand. Until recently, an action editor might become aware of reviewers lack of knowledge
or expertise only if reviewers choose to disclose this within their optional comments to the
editor. Conversely, journals never formally test reviewers claimed expertise, and as one edi-
tor told me, One graduate level class on a technique does not equal expertise.
In response to this issue, JOM has added specific items to be completed by reviewers rat-
ing their self-efficacy for evaluating various aspects of a given paper. The specific questions
are provided in Table 1.¹ The logic behind these additional questions is threefold. First, we want to send a clear message that in most instances, a thorough review of all aspects of a paper is expected. Second, we realize there are situations where reviewers may review only some aspects of a paper, and thus we want to provide an explicit mechanism allowing action editors to understand which aspects of a paper received thorough review by multiple individuals. Finally, we are sympathetic to instances where reviewers are unfamiliar
with a particular method or statistical analysis, and we seek to provide a vehicle for them to
communicate this explicitly to the action editor. Our hope is that this new policy will provide
more concrete information to action editors, who can then make more informed decisions on
whether additional reviewers may be needed in order to critique a paper fairly and com-
pletely. We also hope that this will allow us to detect various questionable research practices
more effectively and will allow the review process to serve as a positive and functional
vehicle for dialogue and development.
In addition, for the remainder of my term, I plan to add a new improvement to the review process. I have sought a small cadre of editorial board members who will act as quality control experts with regard to the analytics presented. When a paper receives a revise-and-resubmit decision, one of these experts will be added as a third reviewer to the revision, focusing solely on thoroughly evaluating the adequacy of the statistical analyses. I do not suggest that this will prevent flawed analyses from ever being published in JOM, but I believe it will decrease the likelihood of that happening.
Investigations
We by no means expect that JOM will eliminate inadvertent publications of question-
able results; however, when such events occur (or are alleged to have occurred), JOM
subscribes to the Committee on Publication Ethics (COPE) guidelines, which lay out a formal process of investigation to ensure procedural justice for all parties involved.²
According to this process, if a reader raises questions about the potential fabrication or
misreporting of data, the editor first performs an internal investigation by seeking out a
reviewer (or multiple reviewers) to determine whether the accusation has any potential
validity. If the reviewer(s) conclude that there is substance to the accusation, the editor
contacts the lead author to explain the issue (without making any accusation) and allows
him or her to respond. At this stage, the author's response may be satisfactory, requiring no further action, or it may entail making corrections that could appear as a corrigendum in the journal. If the author's response seems unsatisfactory, the journal editor contacts the
home institution of the author for a formal investigation conducted by that institution. At
the conclusion of the investigation, the accusation may be found to be invalid, or if valid,
the journal is advised to retract the paper. At the final resolution, the reader who originally
raised the issue is informed of the outcome.
Note that this investigative process maintains confidentiality for both the author(s) and the
reader. Journal editors have to ensure due process for all parties because even the public
acknowledgement of an investigation may lead to rumors and false assumptions about the
process that may ultimately defame the author and/or claimant, regardless of the final outcome. The simple accusation of wrongdoing can destroy an author's reputation and potentially damage his or her career; thus, as journal editors, we must take every precaution to
protect all parties throughout the process. For instance, if a reader asks an editor whether the
journal is investigating a particular paper, simply acknowledging the investigation casts a
negative shadow over an author and violates his or her due process rights. For this reason,
JOM does not comment on any investigations or even confirm whether an investigation is
under way or completed.
At the same time, we are committed to due process when it comes to all investigations
surrounding questionable research practices. Authors will be considered innocent until
proven otherwise, and JOM will not take action unless there is a preponderance of evidence
and rationale supporting such an action. Furthermore, the critical scrutiny given to particular
authors and articles may reveal important problems endemic to a much larger number of
publications, if not the field itself. As methods and analytic techniques evolve, so too will
what are considered best analysis and reporting practices. Thus, we place emphasis on a col-
lective development of skills that will allow all of us to become better consumers of research
that may have predated the solid emergence or culture of improved practices. We recognize
that we must all work together to make science better, moving forward.
Conclusion
Certainly, as discussed at the outset, increasing pressures on researchers to publish in top journals may incent questionable research practices, and the growing number and complexity of data analytic techniques make it increasingly difficult for editors and reviewers to fully vet the analyses. I believe that our field needs to cast a broader net of accountability
around our science when facing such issues. Journals must do everything possible to avoid
publishing papers that violate the ethical norms for research conduct and to deal severely
with published papers that have done so. However, journal editors also bear the ethical
responsibility to understand the context of the science at large and in this context, to deal
fairly with authors who have been accused of, but not proven to have engaged in, misconduct. A fair process exists for investigating accusations, but that process works through the journals and the authors' home institutions. The editors of JOM, along with editors at a number
of other top journals, do all we can to ensure publishing integrity in a way that is respectful
to the authors and to our science.
Notes
1. We encourage other editors to use or modify these items as part of their review process.
2. All of COPEs resources can be found at Publicationethics.org.
References
Bandura, A. 2012. On the functional properties of perceived self-efficacy revisited. Journal of Management, 38:
9-44.
Baumeister, R. F., DeWall, C. N., Ciarocco, N. L., & Twenge, J. M. 2006. Social exclusion impairs self-regulation.
Journal of Personality and Social Psychology, 88: 589-604.
De Rond, M., & Miller, A. N. 2005. Publish or perish: Bane or boon of academic life? Journal of Management
Inquiry, 14: 321-329.
Kreps, T. A., & Monin, B. 2011. Doing well by doing good? Ambivalent moral framing in organizations. Research
in Organizational Behavior, 31: 99-123.
Martinson, B. C., Anderson, M. S., & de Vries, R. 2005. Scientists behaving badly. Nature, 435: 737-738.
Mayer, T., & Steneck, N. (Eds.). 2012. Promoting research integrity in a global environment. Singapore: World
Scientific.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. 2011. False-positive psychology: Undisclosed flexibility in data
collection and analysis allows presenting anything as significant. Psychological Science, 22: 1359-1366.