“AN A IS AN A”: THE NEW BOTTOM LINE FOR VALUING ACADEMIC RESEARCH
In sports, the phrase “a win is a win” refers to the bottom line in those competitions: winning a game. How the game was won is not as important as the fact that it was won. In many ways, we have reached a similar point in the management field. The increased pressure to publish in “A” journals means the new bottom line for valuing academic research is “an A is an A.” This publication ethos has gradually become embedded in universities’ growing managerialism and economic rationality, or what some critics have referred to as the “McDonaldization” of academe. University performance management and resource allocation systems, for example, are increasingly driven by a corporate audit culture in which resources and rewards are contingent on quantifiable measures of research value. Faculty recruiting committees and promotion and tenure panels readily discuss how many A’s a candidate has published and how many A’s are needed for a favorable decision, while conversations about the distinctive intellectual value of a publication are often secondary to the journal category in which it appears.
The new bottom line for valuing academic research based on the “an A is an A” dictum has a significant impact, both positive and negative, on researchers, the knowledge they produce, and the business schools that employ them. The ostensible appeal of A-journal counting as a measure of research value lies in its features: it is fast and easy to use and defend; it enables evaluators to readily compare scholars’ research performance against one another and against standard benchmarks; and it provides a straightforward, relatively conflict-free approach for deciding whom to hire, promote, and reward.
One of the most important seemingly positive outcomes of A-journal counting is the development of clear standards for judging the value of research independent of personal opinions. Like the use of other types of rankings, the use of journal ranking lists as the arbiter of research quality enables business schools to avoid having to translate subjective opinions about the quality of research into quantifiable ratings. Adopting this process increases the transparency of schools’ performance management systems as well as the actual and perceived fairness of the procedures used to make decisions about the allocation of rewards, key factors in ensuring perceptions of trust and organizational justice. Clearly delineating the value placed on A-journal publications can also serve as a self-selection mechanism: doctoral students and faculty who do not wish to compete under a performance management system based on a particular journal list can purposefully opt out of applying to or working for a particular business school. Instead, they can pursue opportunities in schools that consider more than the number of A-journal publications when allocating rewards. Finally, careful examination of A journals can provide information and exemplars about the type of theorizing, methodology, and reporting required to publish successfully in them.
Disconcerting, however, are mounting concerns about the unintended negative effects of using A-journal lists to assess research value. Among these deleterious outcomes are questionable research practices; narrowing of research topics, theories, and methods; and lessening of researcher care and intrinsic motivation for doing research, to name but a few. Arguably, one of the most pernicious outcomes of the “an A is an A” phenomenon is the rampant increase in questionable research practices (QRPs) employed with the purpose of presenting biased evidence in favor of an assertion. In addition, making salient rewards such as tenure and promotion contingent almost exclusively on publishing in A journals can incentivize researchers to produce as many A-journal articles as possible, without necessarily considering whether research results are reproducible, advance the broader conversation in the field, or have meaningful practical implications. Over time, the rewards that accrue from A-journal publication reinforce this emphasis on research over practice and contribute to the growing trend in the management field of doing and publishing research primarily for other researchers, not for the broader practice of the management profession. Moreover, the emphasis on A-journal publication can reduce heterogeneity and innovation in management research through the preferred methodological approaches used to publish in these journals. Much of their content is based on research using hypothetico-deductive methods and state-of-the-art analytical techniques aimed at precision, control, and testability of existing theory. These research methods are highly relevant to the exploitation of existing management knowledge: testing, refining, and extending it. They are less suited to the exploration of management knowledge, which seeks to discover novel phenomena and invent new theories. From a researcher’s perspective, A-journal lists can shift attention away from what researchers themselves care about in doing management research. When evaluation focuses exclusively on research output in A journals, the locus of control for management research shifts from the researcher to the external market, turning an intrinsically driven research process into one that is extrinsically motivated and controlled.
However, because the use of A-journal publications as a measure of research quality has certain benefits, we should build on them while seeking ways to ameliorate the negative effects. Journal lists are a reasonable initial tool for defining research performance standards when none or very few exist. As mentioned earlier, they supplement purely subjective opinions of research quality with a clear, verifiable measure. But to maximize their positive impact, journal lists need to be part of a more comprehensive performance management system that identifies, measures, and develops researchers’ performance.
The current method for valuing research in business schools is not sustainable, yet we do not realistically see a radical change occurring in the near future. Rather, we offer recommendations for creating a performance management system that nudges management researchers beyond an obsession with A journals toward producing knowledge that is relevant to a broad set of stakeholders, that openly reports methodological and analytical choices, and that is innovative and heterogeneous. Our recommendations involve concrete proposals, not just pious sentiments that cannot readily be translated into action. Some of them are forward-looking, and their full potential is likely to be realized once advancements in new ways of collecting and analyzing data, such as machine learning, artificial intelligence, and computer-adaptive text analysis, become more common. Nevertheless, our proposals address thorny and critical issues in business schools and the field of management.
We first suggest how to design performance management systems and measure research performance, and then how to build research skills. Ideally, business schools’ performance management systems should derive from strategic choices about how to compete, relate to key stakeholders, and acquire and deploy resources. Explicit and careful attention to management research in making those decisions can clarify the strategic role that research plays in how business schools function and compete. It can identify the value that key stakeholders place on research and determine how those values should be weighted in assessing and rewarding performance. For example, schools that strategically emphasize education and teaching are likely to weigh the pedagogical contributions of their research highly; others that choose to compete as elite research institutions would likely place a high value on the scientific contributions of their research. Measures of research performance can include, in addition to citation analysis, multiple indicators of research’s practical relevance, such as publications in practitioner-oriented and bridging journals, media coverage, number of followers on social media, citations in textbooks and popular business books, and the like. Moreover, assessing the value of management research can be refined by measuring research quality as a continuous rather than a dichotomous “count” versus “does not count” variable, as illustrated in the sketch below. Finally, developing skills for producing high-quality research can include methods and analytical techniques for doing the kinds of exploratory research needed to create innovative and heterogeneous management knowledge.
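To make the continuous, multi-indicator approach concrete, the following minimal Python sketch scores an article on several of the indicators named above. All indicator names, normalization caps, and weights are hypothetical illustrations of how a school might encode its strategic priorities; they are not drawn from any actual evaluation scheme.

```python
from dataclasses import dataclass

@dataclass
class ResearchRecord:
    """Hypothetical per-article indicators a school might track."""
    citations: int             # scholarly impact
    practitioner_outlets: int  # publications in practitioner/bridging outlets
    media_mentions: int        # coverage in general and business media
    textbook_citations: int    # uptake in teaching materials

def research_value(record: ResearchRecord, weights: dict[str, float]) -> float:
    """Continuous research-value score: a weighted sum of capped, normalized
    indicators, replacing the dichotomous "counts / does not count" rule.
    The caps (100 citations, 3 outlets, etc.) are illustrative placeholders."""
    normalized = {
        "citations": min(record.citations / 100, 1.0),
        "practitioner_outlets": min(record.practitioner_outlets / 3, 1.0),
        "media_mentions": min(record.media_mentions / 10, 1.0),
        "textbook_citations": min(record.textbook_citations / 5, 1.0),
    }
    return sum(weights[name] * value for name, value in normalized.items())

# A teaching-oriented school might weight pedagogical uptake heavily,
# whereas an elite research school might weight scholarly citations instead.
teaching_weights = {"citations": 0.2, "practitioner_outlets": 0.2,
                    "media_mentions": 0.1, "textbook_citations": 0.5}
article = ResearchRecord(citations=40, practitioner_outlets=1,
                         media_mentions=2, textbook_citations=4)
print(f"research value: {research_value(article, teaching_weights):.2f}")
```

The point of this design is that strategic emphasis enters only through the weights: a teaching-oriented school and an elite research school can apply the same continuous measure while valuing different indicators.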
In summary, our review and critique of the dominance of this new bottom line for valuing academic research provide a foundation for moving management research beyond A-journal strictures. We hope our analysis and forward-looking recommendations will spur further travel down this path. In particular, our insights can be useful to a variety of stakeholders, including (a) academics in all management and business school domains and from universities worldwide, (b) university administrators and funding agencies interested in evaluating research quality and impact, and (c) individuals dedicated to responsible scholarship and addressing the current credibility crisis in management research.