Joel Bradshaw, Journals Manager
Abstract: While the barriers to international scholarly communication have never been lower, the hurdles facing any new journal seeking to gain a respectable foothold in academic institutions are rising, not falling. Among the factors most responsible for lowering barriers are the rapid spread of digital connectivity and the ease of using electronic media. Among the latest obstacles to be surmounted are sharp cuts in library budgets; the dispersal of journal identities into individual articles; and the rise of bureaucratic journal ranking systems in the humanities that use arbitrary and hidebound criteria to rate scholarly output.
Library Budget Cuts
Library subscriptions are the key to an academic journal’s success, and library budgets are currently under greater strain than ever. The strain comes not just from the current economic downturn, although its effects have been severe. Over the past decade, libraries have been steadily cutting back on journal subscriptions, especially in response to the rapidly rising rates of commercial publishers in Scientific, Technical, and Medical (STM) fields. Even in the humanities and social sciences, however, libraries have been canceling print subscriptions in favor of electronic access. For many years they have also been buying fewer academic monographs in order to keep up with rising journal costs, and many are now eager for publishers to offer monographs in electronic rather than print formats.
In short, now is a terrible time to start a new journal or monograph series unless it is: (1) published electronically, and (2) very cheap—preferably free—for subscribers.
New Journal Trends
Nevertheless, the number of academic journals continues to grow, partly due to the increasing specialization by discipline and region in all fields of study, partly due to the lower barriers to entry in electronic media, and partly due—somewhat ironically—to efforts to create new alternatives to the high-priced journals that dominate STM fields.
Much of the effort to create alternative means of disseminating research has been driven by SPARC, the Scholarly Publishing and Academic Resources Coalition created by the Association of Research Libraries (ARL) in 1998, just as the electronic journal revolution was gaining widespread momentum.
The Andrew W. Mellon Foundation deserves credit for helping to get the ball rolling among nonprofit academic publishers in 1995 by providing seed grants to two scholarly journal initiatives. JSTOR (Journal Storage) began digitizing all the back issues of key research journals in order to preserve scholarship, broaden access, and reduce costs for libraries. JSTOR now contains 998 journals in 18 collections from 592 publishers.
At the same time, Johns Hopkins University Press began offering its journals in the humanities and social sciences in both print and electronic editions, the latter under the name Project MUSE. In 1999, JHUP began to expand Project MUSE rapidly by inviting other university presses with midsize or larger journals programs to join MUSE. University of Hawai‘i Press journals began appearing there in 2000. Project MUSE now contains 414 journals from 102 publishers. Both publishing operations are now not only self-supporting but also expanding their coverage.
The JSTOR and MUSE models of journal publishing depend on income from subscribers, primarily academic libraries. But SPARC has been pushing open-access models, which do not charge readers or their institutions for access. Such models are especially attractive in STM fields, where journal prices have been highest and where much of the research is supported by grant monies. For instance, the journals of one SPARC initiative, the Public Library of Science (PLoS), an electronic collection of medical and biological journals, are funded by the producers of the research they publish, not the consumers of it.
In fact, PLoS takes the concept of open access much further. In their words, “Everything we publish is freely available online for you to read, download, copy, distribute, and use (with attribution) any way you wish.” This is equivalent to a Creative Commons Attribution License, a standard that allows even broader reuse than the Attribution-ShareAlike License recently adopted by Wikimedia, since derivative works need not be released under the same license. Although many open-access journals use Creative Commons licenses to get around onerous copyright restrictions, few go as far as PLoS.
Whether they use less restrictive Creative Commons licenses or more traditional copyright restrictions (often in the name of the author, not the publisher), more and more new journals are making all of their content freely available online. The Directory of Open Access Journals (DOAJ), which got underway in 2002, now lists over 4,000 journals, over 1,500 of which are searchable at the article level—and there are more than 280,000 articles. To be listed in the DOAJ, journals must be peer-reviewed, must report on research to a scholarly community, and must appear at regular intervals, generally more frequently than once a year.
The explosive growth of open-access, online journal publishing represents not just lower startup costs and faster means of disseminating research. It also represents a stark recognition that by far the largest cost of publishing academic research is the labor input by scholars who do the research, editorial acquisition, and peer-review—scholars whose salaries in most cases are paid by academic institutions officially committed to research.
In many ways, the digital era has been very, very good for academic journals. Their authors have acquired many more readers, all across campuses and all around the globe. Their publishers have implemented more efficient means of production and dissemination, and often found very significant new streams of revenue. But there have been some difficult cultural adjustments.
As a metric of readership, subscriber counts are becoming less important than actual counts of online article views. And those online readers are much less likely to be subscribers, much less likely to be able to evaluate the reputation of the journal, and much less interested in anything but the disembodied article they found via an all-purpose search engine. This has led to an identity crisis for many journals. Are they a place for scholarly dialogue, or just a warehouse of articles awaiting consumers who may or may not care about which brand they buy?
Journal articles are becoming harder to distinguish from chapters in multiauthor books, and senior scholars in the humanities tend to contribute less and less to journals because they are overcommitted to editors who have solicited their contributions to more highly valued books. As journals become more and more specialized, those with broader—and often more prestigious—coverage tend to attract less of the cutting-edge research that makes people feel the need to subscribe as individuals.
Finally, in an era of fast food, online communities, and instant feedback, ambitious scholars have less patience with the languid cycles of journal review and publication. (And let us not speak of conference proceedings or Festschriften!)
Journal Quality Assessments
Assessing journal quality is far more important to scholars who publish (in order not to perish) than to those who consume the results of their research—unless those consumers happen to be their academic evaluators. There are two separate trends afoot these days, one seeking better means to evaluate scholarly use of new digital media, the other seeking better means to evaluate the whole panoply of existing journals in which scholars in the humanities publish. The former trend is friendly to the creation of new digital journals, the latter decidedly hostile to new journals of any kind.
Let us first consider efforts to evaluate new media.
Even as the use of electronic media has become common across fields for research and teaching, what is taken for granted among young scholars is still foreign to many of those who sit on tenure and promotion committees. In an effort to confront this problem, the MLA [Modern Language Association] and a consortium called the Humanities, Arts, Science and Technology Advanced Collaboratory [HASTAC] have decided to find new ways to help departments evaluate the kinds of digital scholarship being produced today….
One reason for the new effort is that shifts in publishing may make it impossible for a growing number of academics to submit traditional tenure dossiers. With many university presses in financial trouble and others … turning to electronic publishing for monographs, there will be fewer possibilities for someone to be published in the traditional print form that was once the norm for tenure.
Among the updated guidelines for tenure evaluation outlined by the MLA and HASTAC (pronounced “haystack”) initiatives are:
- Material shouldn’t be judged inferior when it is identical to traditional work in every way except medium.
- New systems are needed to evaluate scholarship that is unique in digital form.
- Peer review matters—and needs to involve people who understand the work.
- Digital work doesn’t fall neatly into teaching vs. research categories.
Leaders of the MLA-HASTAC effort stressed that they were not “anti-book” and indeed those involved have published in both traditional and non-traditional ways. Rather, they said that it was inherently unfair to have younger scholars being evaluated by people who may not understand their work or its media — and it was unrealistic to expect tenure committees to evaluate digital scholarship without some education.
This seems to be a positive trend, seeking to evaluate new tools that technological innovations have provided for scholarly output. But another major initiative that aims to evaluate scholarly output in the humanities looks far less positive.
Academic administrators who supervise scientists (including social scientists) often try to evaluate the productivity of their employees by comparing the “impact factors” of the journals in which they publish. Impact factors within particular fields are tabulated by the Institute for Scientific Information (ISI) according to complex formulas that rate immediacy and durability of scholarly impact, based on citations from other journals in the same fields. Impact factor rankings are published in the ISI’s annual Journal Citation Reports. The use of impact factor to rate individual scholars remains highly controversial, but at least the formulas are understood, data-driven, and objective.
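The core of those formulas—the standard two-year impact factor and the immediacy index—can be sketched in a few lines. This is a simplified illustration only: the actual Journal Citation Reports computation involves further refinements (for instance, which item types count as “citable”), and the journal data below are hypothetical.

```python
def impact_factor(citations_by_pub_year, articles_by_year, year):
    """Citations received in `year` to items published in the two
    preceding years, divided by the number of items published in
    those two years."""
    cites = sum(citations_by_pub_year.get(y, 0) for y in (year - 1, year - 2))
    items = sum(articles_by_year.get(y, 0) for y in (year - 1, year - 2))
    return cites / items if items else 0.0

def immediacy_index(citations_by_pub_year, articles_by_year, year):
    """Citations received in `year` to items published in `year`,
    divided by the number of items published in `year` (a rough
    measure of how quickly a journal's articles are cited)."""
    items = articles_by_year.get(year, 0)
    return citations_by_pub_year.get(year, 0) / items if items else 0.0

# Hypothetical journal: citations received in 2008, keyed by the
# publication year of the cited article, plus article counts per year.
cites_2008 = {2008: 15, 2007: 120, 2006: 90}
articles = {2008: 50, 2007: 60, 2006: 45}

print(impact_factor(cites_2008, articles, 2008))    # (120 + 90) / (60 + 45) = 2.0
print(immediacy_index(cites_2008, articles, 2008))  # 15 / 50 = 0.3
```

Even in this simplified form, the calculation makes plain why such metrics are citation-driven and field-dependent, and why extending them to the humanities, where citation patterns are slower and more diffuse, is contested.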
Now academic administrators are attempting to compile similar rankings for humanities journals, based on much more subjective criteria, and they are raising many hackles in the process, as Jennifer Howard reported last year in The Chronicle of Higher Education.
A large-scale, multinational attempt in Europe to rank humanities journals has set off a revolt. In a protest letter, some journal editors have called it “a dangerous and misguided exercise.” The project has also started a drumbeat of alarm in this country, as U.S.-based scholars begin to grasp the implications for their own work and the journals they edit.
The ranking project, known as the European Reference Index for the Humanities, or ERIH, is the brainchild of the European Science Foundation, which brings together research agencies from many countries. It grew from a desire to showcase high-quality research in Europe. Panels of four to six scholars, appointed by a steering committee, compiled initial lists of journals to be classified in 15 fields. Each journal was assigned to a category — A, B, or C — depending on its reputation and international reach.
Similar efforts are underway in Australia.
Australia has been conducting its own assessment exercise across all disciplines, including the humanities. Called the Excellence in Research for Australia, or ERA, it’s run by the Australian Research Council, an arm of the government devoted to promoting — and financing — research. A wide-ranging attempt to gather “indicators of research quality,” the Australian project also ranks journals. It assigns them to four categories — A*, A, B, and C — based on information that includes data gathered from scholarly associations and how widely cited the journals are.
There is no need to examine the details here, but journal editors who have compared the two lists report many glaring omissions from one or both; frequent disagreements over category rankings when the same journals appear on both; and criteria that are unimaginative, arbitrary, and out of date. Whatever the virtues or faults of these new ranking systems, they would seem to make it much less attractive for scholars to publish in new, unlisted, or unranked journals, no matter whether those journals appear in print or online.
In sum, while the barriers to international scholarly communication have never been lower, the hurdles facing any new journal seeking to gain a respectable foothold in academic institutions are rising, not falling. It may even be advisable to concentrate efforts on better placement of new research in existing journals.
For more, see Robert B. Townsend’s Is There a Future for Humanities Journals? (AHA Today, 1 September 2009), which comments on a report by Mary Waltham commissioned by the National Humanities Alliance on The Future of Scholarly Journals Publishing Among Social Science and Humanities Associations (PDF).
* * * * *
Joel Bradshaw’s experience in academic publishing began thirty years ago while he was completing his Ph.D. in linguistics at the University of Hawai‘i at Manoa. In addition to writing his own articles for publication, he has worked as a proofreader, copyeditor, typesetter, managing editor, book review editor, and peer-reviewer for a wide variety of journal and book projects. He served as a publications specialist for six years at the UH Center for Korean Studies before joining the UH Press in 1998 as journals manager.
Notes

1. Norman Oder, “ARL Budget Roundup: Large Academic Libraries Face Cuts in Collections, Staff, Hours,” Library Journal, 30 April 2009.
2. Scott Jaschik, “Farewell to the Printed Monograph,” Inside Higher Ed, 23 March 2009.
3. SPARC, http://www.arl.org/sparc/, 28 May 2009.
4. JSTOR, http://www.jstor.org/, 26 May 2009.
5. JSTOR by the numbers, http://www.jstor.org/page/info/about/archives/facts.jsp, 30 April 2009.
6. Project MUSE, http://muse.jhu.edu/, 26 May 2009.
7. Directory of Open Access Journals (DOAJ), http://www.doaj.org/, 26 May 2009.
8. Public Library of Science, http://www.plos.org/, 27 May 2009.
9. Creative Commons, http://creativecommons.org/licenses/, 27 May 2009.
10. Wikimedia Meta-Wiki, http://meta.wikimedia.org/wiki/Licensing_update/Result, 27 May 2009.
11. DOAJ, http://www.doaj.org/, 26 May 2009.
12. Jennifer Howard, “Humanities Journals Confront Identity Crisis,” The Chronicle of Higher Education 55, no. 19, 27 March 2009, p. A1.
13. Howard, “Humanities Journals Confront Identity Crisis,” p. A8.
14. Scott Jaschik, “Tenure in a Digital Era,” Inside Higher Ed, 26 May 2009.
15. Jaschik, “Tenure in a Digital Era.”
16. Jaschik, “Tenure in a Digital Era.”
17. Jennifer Howard, “New Ratings of Humanities Journals Do More Than Rank — They Rankle,” The Chronicle of Higher Education 55, no. 7, 10 October 2008.
18. Howard, “New Ratings of Humanities Journals.”