“Big Five”

Issue 67 · July 2024

Join the conversation on Twitter @Briefer_Yet and LinkedIn.

Did a colleague forward The Brief to you? Sign up via the button below to get your very own copy.

C&E’s 2024 Scholarly Journals Market Trends Report – Coming Very Soon

We are getting very close to the release of our 2024 Scholarly Journals Market Trends report – 100+ pages (including plentiful figures and tables) of industry insight for society and publishing leadership, plus a succinct executive summary slide deck. It is the perfect pre-read for your next board meeting, editorial board meeting, or strategic retreat.
 
The report releases mid-August. We are offering a 20% early-bird discount on pre-release purchases.

Physical Sciences, Engineering, and Mathematics Journals! Now Recruiting for C&E’s 2024 Benchmarking Study

Join C&E’s 2024 Journal Metrics Benchmarking study to see how your journals compare to peers on editor compensation and workload, editorial office resourcing, submission trends, transfer success rate, turnaround times, cost/revenue per article, research integrity checks, and more. This study is conducted only every 2–3 years, so this is your last chance to participate before 2026.

ASAE Panel on OA and Journal Strategy

Pam Harley will be part of the “How Open Access Will Affect Your Journal Strategy and Revenue” panel at ASAE (American Society of Association Executives) on Sunday, August 11, in Cleveland. Drop Pam a line if you’d like to meet up at the conference.

Strategies for Audience Engagement Webinar

Colleen Scollans is speaking at “Unlocking the Power of Data: Strategies for Audience Engagement” (sponsored by Atypon) on August 15 – register to learn how to implement an audience engagement plan including how to choose the right technologies for an audience-first orientation.

How Deep Is Your Love (for Authors)

Colleen is busy this month! She’s also a panelist on a September 4 webinar from ChronosHub, “How Deep Is Your Love for Authors,” a reprise of the popular SSP session. Industry experts will lead participants through a self-assessment quiz to reveal just how much you really care about your authors (and share some insights on improving author experience).

Could Peer Review Systems Benefit from Systems Thinking?

In this article, we make the case for applying systems thinking to submission and peer review systems (SPRSs) in journals publishing. In our view, the evolving landscape of publishing necessitates a new way of thinking about these fundamental systems. This piece encourages publishers to reconsider their approach to SPRSs, suggesting that systems thinking could revolutionize this critical part of the publishing workflow.

“Big Five”


As we’ve been putting the finishing touches on C&E’s Scholarly Journals Market Trends Report 2024, we’ve spent an inordinate amount of time heads down in bibliometric databases and other scholarly communications information sources. None of these resources is perfect – each has its strengths and weaknesses. As more research is conducted on the impacts of open access (OA) on the research community, and if the hope is to develop evidence-based policies (rather than the usual blunt instruments that come out of policy development), it’s important to understand just what these data sources can and can’t do. Two recent studies offer insights into what’s happening, but each suffers from either methodological missteps or a lack of the right data needed to answer the questions being asked.
 
One common issue among these types of studies is a stubborn refusal to acknowledge the changing dynamics of the market. Each of the studies discussed below refers to the “big five” commercial publishers: Elsevier, Springer Nature, Wiley, Taylor & Francis, and (for some reason) Sage – a grouping that may have held some relevance a decade or more ago but does not reflect the market as it is now. Perhaps the need to have a big five to demonize is a mindset retained from older characterizations of the trade publishing market (back when there actually was a trade big five). Each set of researchers below seems intent on setting MDPI to the side as something different, despite it being the third largest for-profit commercial publisher in the market (as measured by article output) and one of the five largest since 2019. If there is a “big five” set of publishers, surely MDPI, which published 3.76 times as many articles as Sage in 2023, is one of them.
 
Frontiers, also a for-profit commercial publisher, is the sixth largest in the market, yet it is similarly “othered.” As for Sage, it is currently (based on 2023 data) the ninth largest publisher in the market by publication volume (and the eighth largest commercial publisher, as the non-profit IEEE is bigger); however, it seems to have retained a reputation as a market leader, despite having had a lower publication volume than Wolters Kluwer for the last two decades. (Wolters Kluwer, for some reason, seems to fly under the radar.)
 
Another important issue is understanding the significant differences between Web of Science (WoS), Dimensions, and OpenAlex. Although it is widely recognized that WoS is a smaller, less inclusive database, the advantage it provides is that it captures a snapshot of the market at a particular point in time. Dimensions and OpenAlex, by contrast, identify articles through DOIs (digital object identifiers). When a journal changes publisher or moves from self-publishing to a publishing partnership, the DOIs for back content are redirected to the new publisher’s platform, so the data imply that the new publisher has always published the journal. To see an example of this, run a historical search in Dimensions or OpenAlex that includes a journal with a recent change of publisher. For example, the journal GENETICS moved from being published by the Genetics Society of America to Oxford University Press (OUP) in 2021; since neither database has a concept of a “historical” publisher, both show OUP as the publisher of GENETICS for its entire history – thereby inflating the size of OUP in the past.
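You can see the single-publisher attribution directly in the data. Here is a minimal sketch (Python, using the public OpenAlex REST API; the endpoint, parameters, and field names follow the API as we understand it, and which records come back for this search is illustrative):

```python
# Sketch: ask OpenAlex who publishes journals matching "Genetics".
# A source record carries only a single, current host organization --
# there is no "historical publisher" field -- so every era of GENETICS
# is attributed to its current publisher (OUP since 2021).
import requests

resp = requests.get(
    "https://api.openalex.org/sources",
    params={"search": "Genetics", "per-page": 5},
    timeout=30,
)
resp.raise_for_status()
for source in resp.json()["results"]:
    print(source["display_name"], "->", source.get("host_organization_name"))
```

Because every work links back to its source record, that single attribution propagates across the journal’s whole history, which is why historical market-share figures built on these databases skew toward today’s owners.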
 
With those caveats in mind, there are some helpful takeaways from these recent studies as we consider the way the journals market is evolving. The first, a new preprint from Vincent Larivière and colleagues, offers a contrarian view of consolidation in the journals market. Larivière was part of the oft-cited 2015 study showing an “oligopoly” held in the market by the (you guessed it) five largest commercial publishers. The new preprint extends this analysis by looking at more expansive data sets beyond the original paper’s use of WoS. By looking more globally at the growing long tail of smaller journals, largely outside the geographies with the highest levels of research funding, the study concludes that the domination by the big publishers is not quite what it seems.
 
Setting aside the historical trends (given the questionable DOI-based Dimensions and OpenAlex data) and looking just at the current state of the market, the authors suggest that the explosive growth of journals outside of traditional geographies and publishers means that the major publishers are not so dominant in terms of journal and article volumes. The big commercial publishers may rule the research world of the US, Europe, and China, but the global view is more expansive.
 
An aside – if you’ve ever wanted an example of positive results bias, consider that an article concluding that “the major scientific publishers may have lost their dominance in terms of journal and article volumes” is instead positively framed in its title as “The oligopoly of academic publishers persists in exclusive database,” focusing solely on WoS results while ignoring the larger lesson of the study.
 
Next, a new preprint from Haustein et al. attempts to estimate the global article processing charges (APCs) paid to six publishers between 2019 and 2023. While we appreciate the one-upmanship of looking at six publishers instead of five, the preprint does at one point refer to the “big five,” which again includes Sage. The study, however, looks at the five biggest publishers of OA articles in the time period studied: MDPI, Elsevier, Springer Nature, Frontiers, and Wiley, along with, for unexplained reasons, the 13th biggest OA publisher for the period, PLOS. 
 
The study reminds us of many previous papers examining expenditure on subscription journals. The numbers those papers provide are generally wrong because the authors relied on list prices for the journals during a period when nearly every customer was paying a discounted rate, either directly or through the purchase of larger journal packages. Here the same problem arises. Although a new data set has been made available that shows list APCs for journals over time, there is no way to correlate those numbers with the amounts actually paid. In its 2021 Annual Report, MDPI states that the company waives fees for approximately 30% of its content every year and, beyond that, offers discounts of between 15% and 100% on its journals. And as transformative agreements have become common between universities and the largest publishers (accounting for 8%–9% of total Gold OA articles in 2022, according to ESAC), many authors are paying discounted fees.
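A back-of-envelope sketch shows how quickly list-price totals drift from plausible actual spend. Only the ~30% waiver share below comes from MDPI’s report; the list APC, article count, and average discount are invented for illustration:

```python
# Back-of-envelope: list-price estimate vs. plausible actual APC spend.
list_apc = 2000.0      # hypothetical list APC, USD
articles = 10_000      # hypothetical annual output
waived_share = 0.30    # ~30% of content waived (MDPI 2021 Annual Report)
avg_discount = 0.20    # assumed average discount on non-waived articles

list_total = list_apc * articles
actual = list_apc * articles * (1 - waived_share) * (1 - avg_discount)
print(f"list-price estimate: ${list_total:,.0f}")  # $20,000,000
print(f"plausible actual:    ${actual:,.0f}")      # $11,200,000
```

Under these (made-up) assumptions, the list-price method overstates spend by nearly 80% – which is the core objection to this whole genre of study.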
 
Haustein et al. make the limitations of their study very clear, acknowledging that their numbers likely do not represent real-world spend. Despite this, just as was the case for the subscription studies, we expect to see the figures quoted widely as factual.
 
But what’s really interesting here is not the expenditures quoted but the recognition that previous attempts to quantify the “average” APC are methodologically unsound. Instead of adding up a list of journals’ APCs and taking the average, the researchers looked at where authors actually spent their money. By working at the article rather than the journal level, they conclude that “authors tend to publish more in journals with higher APCs (or publishers charge higher APCs for popular journals).” The median APC paid across these publishers is highest for Frontiers and lowest for PLOS (largely due to the dominance of PLOS ONE); after PLOS, Elsevier is the next lowest. Another interesting conclusion concerns the strategies of hybrid publishers: Elsevier seems to focus almost equally on Gold OA (50.9% of APC revenue) and hybrid OA (49.1%), while Springer Nature favors Gold OA (62.4%) and Wiley favors hybrid OA (59.3%).
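A toy example (hypothetical APCs and volumes, not the study’s data) makes the methodological point concrete – averaging journals’ list prices and averaging over articles produce very different figures when output clusters in higher-priced journals:

```python
# Journal-level mean APC vs. article-weighted median APC.
import statistics

journals = [
    # (list APC in USD, articles published) -- invented values
    (1000, 50),
    (2500, 400),
    (3500, 800),
]

journal_level_mean = statistics.mean(apc for apc, _ in journals)

# Expand to one APC entry per article, then take the median.
per_article = [apc for apc, n in journals for _ in range(n)]
article_level_median = statistics.median(per_article)

print(journal_level_mean)    # ~2333: naive average over journals
print(article_level_median)  # 3500: where authors actually publish
```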
 
So, despite their flaws, each of these studies offers interesting conclusions that bear further analysis – chiefly, that market consolidation looks very different if one looks beyond the titles indexed in WoS, and that real-world APCs (at least the list prices) are higher than previously supposed. We need more data on this rapidly changing market, particularly with the US Office of Science and Technology Policy declining to perform any meaningful analysis of the potential impact of its Nelson Memo policy.

Brandification


“To say that brands don’t matter is to say that information comes to us unmediated. This is naïve.” So wrote Joe Esposito in 2010 in a post titled, “Why Publishers’ Brands Matter.” At the time, arguments were being made that publishers would be disintermediated by the market shifting to an article-level economy. Joe followed up with another piece five years later, predicting that the big brands would dominate the OA landscape by imitating the already-in-place Nature cascade, and in 2016 wrote that brands had survived, often because the disruptors were not only tilting at windmills, but also “tilting at the wrong windmills.”
 
It is with Joe’s framing that we read a recent article on the LSE Impact Blog, “Designer Science,” by the leaders of eLife. In the piece, the authors suggest that a focus on the brand identity of journals is something new, driven by the ongoing transition to OA. The authors cite the beginning of the modern OA movement as 2002, with the Budapest Open Access Initiative, and argue that author-pays OA models are responsible for the expansion of the Nature portfolio. If this is true, the publishers of Nature must have had amazing foresight as they launched Nature Genetics (1992), Nature Structural Biology (1994), Nature Medicine (1995), Nature Biotechnology (1996), Nature Neuroscience (1998), and Nature Cell Biology (1999), not to mention the Nature Reviews journals, including Nature Reviews Genetics (2000), Nature Reviews Molecular Cell Biology (2000), Nature Reviews Neuroscience (2000), Nature Reviews Cancer (2001), and Nature Reviews Immunology (2001) – all before the 2002 Budapest meeting!
 
The authors also write that eLife was founded “with the purpose to disrupt and reform scholarly communication,” which doesn’t quite align with the journal’s explicitly stated purpose upon its founding in 2011, which was to serve the “need for a scientific journal at the very high end that is run by active practicing scientists” and to “provide a major new vehicle for publication of the world’s best research in the life sciences.” While eLife’s goals have changed over the years (it would be a sad state of affairs if organizations did not change their strategy in light of events!), it was nevertheless founded as an elite journal serving the high end of the market, with hopes of building a “designer” brand strong enough to compete with Cell, Nature, and Science.
 
History aside, it’s worth considering some of the arguments made by the authors. Given that the expansion of the Nature portfolio was well underway prior to the advent of OA, it is more likely that the publishers of Nature had hit upon something more fundamental. Offering additional products under an established brand is a tried-and-true business strategy that works across industries and business models. Nike offers more than one shoe. Apple more than one device. Hermès more than one handbag. That is because the cost of introducing a new product and building demand for it is vastly cheaper and more effective if a strong overarching brand is already established. 
 
Perhaps the most salient point to make about why the argument that OA is the driving force behind the Nature portfolio’s expansion misses the mark is that, with the exception of Nature Communications, every single Nature-branded journal follows a hybrid model, selling subscriptions with OA offered as an option (but not a requirement) for authors (and even Nature Communications was initially launched as a hybrid journal). There is simply more money to be made in holding a larger share of the market, particularly the high end of the market.
 
OA has indeed led to market expansion, including some journal expansion in the Nature portfolio but notably outside the Nature brand, including Scientific Reports and the “Communications” journals (Communications Chemistry, Communications Biology). If this is the argument the authors are trying to make, then we agree. But here the number of journals is far less important than how many papers the journals publish. Scientific Reports, while only one journal, published nearly 22,000 papers last year. In an author-pays OA economy, the number of journals with your brand on them only matters insofar as it helps you publish more articles. As Joe accurately predicted in 2016, those who had done the work of building trustworthy and aspirational brands were best positioned to capture the expansion of the market (as measured by articles) that OA supercharged.
 
The “brandification” of research communication is not a new phenomenon, nor a result of the shift to OA. Needing a Cell, Nature, or Science paper to get a good job is something that even those of us who went to graduate school back in the last century understood (and it still seems to be the case, at least if you hope to become an HHMI Investigator). Seeking proxies or indicators of quality is part of human nature, rooted in the pattern-recognition skills that natural selection favored and that enabled survival. Asking a researcher to consider a paper with little-to-no context would seem contrary to the way humans assess and prioritize information. If journal brands no longer existed, researchers would simply resort to other brand signals, such as the university affiliation of the authors or the funder of the research. Further amplifying university and funder brands (which already give privileged authors a leg up) would make publishing less inclusive and reinforce academia’s elite brands (which many observers already liken to luxury brands).
 
eLife’s own model and the suggested remedies are unlikely, in our view, to help with the problems identified by the authors. If, for example, the proliferation of the literature is a problem, then a publish–review–curate model not only further expands the number of papers out there, but also offers the same author-pays incentives to review and curate more articles to bring in more money from more authors. If journal brands are a problem, then what are we to think of the brand of eLife itself? 

Survey Says…


As noted above, policy is a blunt instrument and developing detailed plans for how one will reach a policy’s goals is often the difference between success and failure. In other words, implementation matters. That seems to be the conclusion of a consultation conducted by cOAlition S to better understand how their Towards Responsible Publishing manifesto might “better resonate with stakeholders, identify potential barriers and unintended consequences and determine whether the existing infrastructure can support cOAlition S’ vision of a community-driven publishing ecosystem.” The 11,600 participants in the survey basically just want to know the details, particularly who’s going to do all the work involved and who’s going to pay for it.
 
The report includes some questionable conclusions. Although it purports to show how authors decide where to publish, the survey did not actually ask this question. Rather, it asked, “As an author, how useful do you think the following methods are in helping your research reach its intended audience?” This is a very different question from many previous community studies over the last two decades, which asked more directly things like, “Please consider your most recently published paper: how important were the following factors when choosing where to submit that paper?”
 
The question asked by cOAlition S focuses solely on dissemination of research results and omits all of the other factors that come into play when making publication choices. Although OA has become more common and accepted by the research community, the framing of the question likely played a role in OA ranking differently here than in previous surveys – all the way up to third most important in the cOAlition S study, rather than near last in the older surveys by Nature Publishing Group (linked as an example above). OA is indeed a useful tool for broad dissemination of a paper, but nothing can be concluded from this report about how it ranks among author priorities.
 
Further, the report suggests (page 31) that because OA was seen as a useful tool for reaching an audience, “The benefits of open licensing were broadly accepted and acknowledged by consultation participants.” Again, this conclusion is based on the same question, with an assumption that respondents were specifically considering reuse rights, rather than dissemination, when asked about OA. Given the recent backlash over Taylor & Francis licensing researchers’ works for AI training, it is far from clear that such a consensus regarding unfettered reuse exists in the surveyed community. The report also notes that some 47% of survey participants have posted at least one preprint. According to Dimensions (an interlinked research information system provided by Digital Science, https://www.dimensions.ai), only around 8.4% of total research articles over the last five years were preprinted, suggesting likely selection bias toward authors who preprint in the study’s sample.
 
With those caveats mentioned, respondents largely want more services from publishers – posting (and updating) of preprints, publishing peer reviews (including those for rejected papers), integrity checks, data availability checks, and other ethical considerations. Researchers are dependent on the existing publishing infrastructure and seem to indicate a desire for expanding it, rather than starting over from scratch. This result calls into question cOAlition S’s aspirations to radically restructure scientific publishing. Significant questions around the funding and sustainability of any new system were raised.
 
Respondents noted that under this plan, their workload would expand in terms of both communicating their own research results and in sorting through the fragmented and increasingly complex information landscape that would be created if cOAlition S’s plans came to fruition. The research community, and those outside the community looking to keep up with and understand the latest discoveries, highly value the filtering mechanisms and curation offered by publishers (even while acknowledging they are imperfect). Take away these time-saving services and the work falls on the shoulders of researchers.
 
Effective change in research communication practices depends on a global restructuring of the career advancement and funding mechanisms of academia. The publishing industry is reactive to the needs of researchers and institutions, so rather than treating second-order effects, change can be accomplished by going directly to the source of the problem. As this new world of communication is explicitly intended to be “scholar-led,” one hopes the cOAlition will indeed follow the lead of scholars – who suggest that cOAlition S is trying to make the tail wag the dog.

Briefly Noted


Saddened, disappointed, and taking things very seriously seem to be the standard responses from publishers as Clarivate suppressed some 17 journals from its Impact Factor rankings this year for anomalous citation activity. It is interesting that this year’s list includes two Elsevier journals and journals from Taylor & Francis, Springer Nature, and Emerald, but nothing from MDPI, Frontiers, or Hindawi – the publishers that have been receiving the highest levels of scrutiny and suspicion.
 
An odd article published in Accountability in Research tries to make the case that the “big five” publishers (yes, as above, including Sage but not MDPI) should be seen in the same light as Hindawi and the other born-OA publishers that have massively driven growth via guest-edited special issues (a practice the authors delightfully call “special issue-ization”). Their reasoning is based on a bibliometric look at three unnamed journals owned by one unnamed publisher. It’s not clear why public material is being masked here (it is, um, published) to “protect” the journals and the publisher (it’s pretty clear this is Springer Nature and that one title is the Journal of Ambient Intelligence and Humanized Computing). The lack of transparency makes it difficult to check or reproduce the authors’ work. But the information shown seems to contradict the paper’s conclusion. Essentially, three journals at a commercial publisher had problems with guest editors of special issues. These problems were discovered, the papers retracted, and the journals promptly delisted from Journal Citation Reports. While this shows ongoing struggles (which are not new) at Springer Nature with quality control of special issues, the authors fail to show that the prevalence of “special issue-ization” at the “big five” can be meaningfully compared with that of born-OA publishers (see figure 2 here to get a sense of the disparities). Given the lack of evidence of significant growth of special issues at the “big five,” it’s hard to accept the authors’ conclusion that special issue-ization is a widespread problem.
 
In addition to the AI-training licensing deal with Microsoft that seems to have incensed authors (at least those authors who failed to actually read the contracts they signed), Taylor & Francis has announced a second such deal with an unnamed company, bringing their AI licensing earnings up to $58 million in 2024. We at The Brief are curious to see how such licensing revenue is allocated toward author royalties, given the size of the corpus.
 
Diminishing returns seems to be the theme of the penultimate Transformative Journal analysis from cOAlition S. The program, which was meant to provide support for journals pledged to flip to OA, has continuously shed titles, and in its final year only around 20% of the journals participating in 2021 still qualify for transformative status. The most successful publisher in the program seems to have been Cambridge University Press, which, as discussed elsewhere, is perhaps as much a curse as a blessing. Support for the program concludes at the end of 2024.
 
With the likely increased costs of the Nelson Memo looming, the National Endowment for the Humanities joins other US agencies in seeing proposed funding cuts.
 
The Finnish Publication Forum has downgraded some 60 journals, excluding them from rankings meant to classify quality. While we at The Brief appreciate efforts to bring the research community into discussions of journal brand identity, we’re not sure that the best way to gather objective data is to put out a tweet asking for stories about bad experiences with MDPI, Frontiers, and Hindawi specifically.
 
NISO (National Information Standards Organization) has released a much-needed new set of standards for retraction notices. While NISO recommends (but does not require) including the reason for the retraction in the notice, this runs counter to a growing number of calls for journal editors to get out of the author-investigation business. This month Tim Kersjes added another voice in favor of journals simply retracting articles that they can no longer stand behind, without judging whether the papers are the result of misconduct or honest mistakes.
 
Two new CEOs were named this month, with Matthew Kissner taking the helm at Wiley and Annie Callanan announced as the incoming CEO at EBSCO Information Services.
 
Cambridge University Press fell victim to a “cybersecurity incident” in June, reminding us that our community remains just as vulnerable to the predations of hackers as the rest of the world.
 
Springer Nature’s Annual Progress Report 2023 has been released, showing underlying revenue growth of 5.2% and revenues of €1.85 billion. James Butcher takes a deep dive into what the numbers mean and how they may impact Springer Nature’s perpetually “arriving soon” IPO. In other Springer Nature news, a two-year settlement has been reached in the pay dispute and strike by Nature editors.
 
RELX, the parent company of Elsevier, released its first-half 2024 results. Revenues for its Elsevier unit (“STM”) are up 4% after currency adjustments. 45% of revenue is attributable to “Databases, Tools & Electronic Reference and Corporate Primary Research” and 45% to “Primary Research, Academic & Government.” This latter segment includes the journals (though presumably not all journals-related revenue, some of which sits in databases such as ClinicalKey). According to the earnings call transcript, year-to-date journal submissions are up 20% compared with this time last year and published output is up 15%. This is an exceptionally high article growth rate for the world’s largest publisher and indicates that the company is increasing not just its output but its market share. A lingering question is whether Elsevier can remain as profitable despite the increased output – which is itself a question about the profitability of OA versus subscription articles. The division’s margin is up slightly year-over-year on a currency-adjusted basis, so for now the answer seems to be “yes.”
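The share arithmetic is straightforward: if the largest publisher’s output grows 15% while the overall market grows at a more typical single-digit rate, its share necessarily rises. In the sketch below the 15% comes from the earnings call, while the article counts and the ~5% market growth rate are our assumptions for illustration:

```python
# Illustrative article counts; only the 15% growth figure is reported.
publisher_prev, market_prev = 600_000, 3_000_000
publisher_now = publisher_prev * 1.15   # output up 15%
market_now = market_prev * 1.05         # assumed market growth

print(f"share before: {publisher_prev / market_prev:.1%}")  # 20.0%
print(f"share after:  {publisher_now / market_now:.1%}")    # ~21.9%
```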
 
Add one more to the ongoing list of reasons not to become too dependent on Google Scholar – the latest product to join the Google Graveyard is the company’s link shortener, which will stop working on August 25, breaking all the goo.gl links it has created. This one is particularly galling given how much “link rot” it will create, weighed against the tiny cost of maintaining what is effectively a two-column spreadsheet hosted on a server.
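To make the “two-column spreadsheet” point literal, here is roughly all the logic a shortener needs to keep old links alive (a Python sketch; the short code and URL are made up):

```python
# A link shortener is, at its core, a table mapping short codes to long URLs.
SHORT_TO_LONG = {
    "abc123": "https://example.org/some/very/long/article/url",  # hypothetical
}

def resolve(short_code: str) -> str | None:
    """Return the destination URL for a short code, or None if unknown."""
    return SHORT_TO_LONG.get(short_code)

print(resolve("abc123"))  # redirect target, if the code was ever minted
```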
 
Speaking of Google, we at The Brief offer both our congratulations and condolences to Larry Richardson, who briefly attained the honor of being the world’s most-cited cat. While it’s been clear since 2012 that gaming Google Scholar’s citations is a trivial matter, credit has to go to someone putting such manipulations to work boosting the reputation of our often-unsung feline companions. By creating a batch of nonsense papers written by Larry Richardson (a cat), followed by more nonsense papers citing them, uploading the lot to ResearchGate, and waiting for them to be indexed by Google Scholar, Reese Richardson’s grandmother (a human) was for a time the proud owner of a cat with an h-index of 11. After this achievement caught some online attention, Google appears to have removed Larry’s citations from its index – perhaps an indication that someone at Google Scholar is more of a dog person. The humans who have used this same manipulation to falsely boost their academic rankings, however, do not appear to have been subject to the same correction.
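For reference, the metric Larry gamed is simple to compute: the h-index is the largest h such that h of an author’s papers have at least h citations each. A minimal sketch with invented citation counts:

```python
# h-index: largest h such that h papers each have >= h citations.
def h_index(citations: list[int]) -> int:
    cites = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(cites, start=1) if c >= rank)

# Hypothetical counts: 11 nonsense papers citing one another is enough
# to give each paper >= 11 citations, hence an h-index of 11.
print(h_index([12] * 11 + [0, 0]))  # -> 11
```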

***
The parent publication will spawn families of affiliated publications, most of them working with the Gold OA model. Libraries will continue to purchase large aggregations, though from fewer and fewer publishers; and funding bodies will continue to build the market for mandated OA publication with attendant APCs (simultaneously and causally reducing the amount of money that goes toward research). – Joe Esposito, 2015