2024 Publishing Tech Trends
C&E’s Colleen Scollans recently shared her insights as part of a panel of publishing tech experts interviewed by Silverchair and Hum for their third annual report on publishing technology trends. Find out what industry professionals should keep an eye on in 2024 and beyond – from rapid advancements in AI applications to a sharper business-to-consumer (B2C) focus and greater investment in marketing technology (MarTech) systems and processes.
Peak Special Issue?
1
Ten months have passed since the Wiley earnings call when the extent of Hindawi’s paper mill challenges became widely recognized. In that time, much has changed for Hindawi, which no longer exists as a brand and has now retracted more than 11,000 articles. More importantly, much has changed across the scholarly publishing landscape as a result of the light that Hindawi has shone on special issues. The guest-edited special issue has dominated the open access (OA) landscape since 2016, a period that saw MDPI climb from being the 21st largest publisher by volume in the market to the 3rd largest, with nearly all of that growth attributable to special issues. Frontiers meanwhile grew from 28th to 6th, also propelled by special issues (Source: Dimensions, an inter-linked research information system provided by Digital Science). The implications of this “growth hack” are now becoming evident.
In January 2024, Frontiers announced a major reorganization, which included firing roughly 30% of their global workforce (600 of their “about 2,000 employees”). While the company’s press release suggests that much of this reorganization is due to overhiring to meet the needs of increased submissions during the Covid-19 pandemic (which have now tailed off), it is also clear that a subsequent focus on quality control at the publisher has further reduced Frontiers’ overall publication output. Frontiers’ rapid growth in 2022 resulted in the publisher being poised to overtake Taylor & Francis to become one of the top five journal publishers as measured by article output. But in 2023, Frontiers had a 29% drop-off in publication volume, putting the Swiss publisher closer to the 12th spot than the 5th spot on the list.
MDPI, meanwhile, saw only a 6% decline in total publications from 2022 to 2023 according to Dimensions, but has been the subject of increased scrutiny. Given the connection between paper mills and special issues, and MDPI’s enormous reliance on special issues, many observers are taking a closer look at what the publisher has released over recent years. MDPI’s journal Sustainability passed a reevaluation by Scopus, but an 85-paper, 23-journal “review mill” was recently reported (worth noting that the source of this report, “Predatory Reports,” is not without its critics, as we discuss below).
James Butcher’s Journalology newsletter notes a regular spike of publications from Frontiers each January (while MDPI has more steady month-to-month output levels). Journals looking to boost their Journal Impact Factor (JIF) often front-load their publication schedule – a January article has 36 months to be noticed, read, and cited by others before it stops counting toward the JIF, whereas an article published in December gets only 25 months. This January, at least as of the writing of this newsletter, looks remarkably different for both publishers as compared to previous years. According to its website, MDPI has published only 17,072 articles as of January 29, 2024. By this time in 2023, MDPI had published more than 28,000 articles, suggesting a 39% decline in article output. It is difficult to parse papers by date on the Frontiers website, but Dimensions (which can lag in indexing papers, so these numbers may not be complete) shows 6,628 articles published as of January 20, 2024, compared with 12,387 in the same period in 2023 (a 46% decline). Publishing fewer papers early in the year may affect the JIF (a lagging metric), which in turn may affect future publication output.
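For the arithmetic-minded, here is a minimal sketch (our own illustration in Python; the function names are ours, and the publication counts are simply the figures cited above):

```python
# Sketch of the two calculations above: the JIF citation window and the
# year-over-year declines in early-year output. Illustrative only.

def jif_window_months(pub_month: int) -> int:
    """Months an article published in year Y-2 can accrue citations that
    count toward the two-year JIF reported for year Y (i.e., from
    publication through December of year Y). pub_month: 1=Jan ... 12=Dec."""
    return (12 - pub_month + 1) + 24  # rest of publication year + two full years

assert jif_window_months(1) == 36   # a January article gets 36 months
assert jif_window_months(12) == 25  # a December article gets only 25

def yoy_decline(prior: int, current: int) -> float:
    """Percentage decline from last year's count to this year's."""
    return (prior - current) / prior * 100

print(f"MDPI:      {yoy_decline(28_000, 17_072):.0f}%")  # ~39%
print(f"Frontiers: {yoy_decline(12_387, 6_628):.0f}%")   # ~46%
```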
It is unclear whether these publication declines are just a speed bump or whether they mark a more substantive change in trajectory in the meteoric rise of these two publishers. In any event, the reorganization by Frontiers points out the precariousness of the author-pays OA business model, and how rapidly one’s fortunes can change. Because revenue is tied directly to publication volume, anything that results in fewer papers being published translates to an immediate drop in revenue. Subscription journals that wish to focus on research integrity and other quality control measures (or that see a drop in submissions due to exogenous events, like a post-pandemic lull) can publish fewer papers without impacting revenues. Author-pays OA journals wishing to implement the same quality control measures do so at the expense of their earnings (and workforce).
Frontiers is doing the right thing by ratcheting up their standards and providing additional scrutiny of special issue publications. But as with Hindawi, doing the right thing with a Gold OA model results in immediate financial pain and suffering. Frontiers, at least, is a privately held company and does not face the added challenge of the public markets, which further punished Wiley as it worked to clean up the Hindawi mess. That said, this retrenchment at Frontiers may very well be a reaction to the loss of market cap (valuation) experienced by Wiley. If the owners of Frontiers intend to sell the company in the future, they know that any prospective buyers will be aware of the problems at Hindawi and will be giving special issues a great deal of scrutiny. It is better to button up their special issue program and take the financial hit now so that it does not impact future valuation. Buyers also tend to focus on recent (3–5 years) revenue (and margin) growth when they value a company. By cutting back their revenue numbers now, Frontiers is well-positioned to show an upward revenue trend in the coming years, and, if they keep their labor costs down, they will likely do so with higher margins. We should emphasize that we have no knowledge of Frontiers’ intent. The owners may have no interest in selling the company, but if they were to do so, the moves they are making now would be prudent.
Perhaps the bigger story here is that we now have two prominent Gold OA publishers (Hindawi and Frontiers) that have worked to improve research integrity and overall quality, and have been financially punished for their efforts. The tension between research integrity, which ultimately means publishing less, and output-based OA models, in which revenues are tied to publishing more, will only grow in the wake of OA mandates and the proliferation of transformative agreements.
Non-profit, Non-cheaper
2
Last month’s issue of The Brief included a look at an analysis of the costs involved for Open Research Europe to develop and run its own journal article publishing platform. Our conclusion, based on the high cost per paper (even in the best-case scenarios), was that it would be cheaper and easier to outsource these efforts to third parties. Since releasing the initial analysis, Rob Johnson has taken a deeper dive into his numbers and offers a cogent explanation for why a shift away from commercial publishers to not-for-profits is unlikely to provide any cost savings.
As Johnson notes, the costs of publishing with strong research integrity efforts and high levels of quality control are not negligible. There’s a common misconception about where the real costs lie: “In practice, being an academic publisher today is primarily about maintaining a technology platform, running an organisation and competing for article submissions. Technology, management, and marketing costs are all far greater than is typically assumed.” Publishers who profit with relatively low article processing charges (APCs) do so either by subsidizing those fees with subscription revenues or through the cost reductions their enormous scale allows. As we noted last May, it is highly unlikely that research institutions and funding agencies understand the costs involved in building scale, and even if they did, it’s even more unlikely that they would wish to invest the necessary funds to do so.
Writing in Times Higher Education, Fiona Greig reaches much the same conclusion regarding the long-term viability of institutional repositories: “It is too hard and too expensive to deliver the existing vision and OA mindset.” Using the cyberattack on the British Library as a launching point, Greig states that most repositories are running on tools developed 15–25 years ago, when funding was put into establishing them. Continuous, significant investment is needed to keep things running, or as Kurt Vonnegut famously said, “Another flaw in the human character is that everybody wants to build and nobody wants to do maintenance.” Companies whose survival depends on satisfying their customers are generally more willing to invest in that maintenance than funders and governments, whose mandates and interests change over time as they move on to the next priority.
All of this suggests that Richard Poynder’s recent statement of disappointment that the OA movement failed to solve the affordability problem is perhaps misplaced. The notion that publishing would have been vastly less expensive had non-profit proponents of the OA movement taken ownership and responsibility for it is not supported by the data. It may even be the case, as unwelcome a message as it may be in some quarters, that publishers are good at publishing.
Speeding Science
3
The idea that open access to research papers was essential to the Covid public health response and the rapid development of vaccines has become a talking point among many OA boosters. Frontiers, in their reorganization announcement mentioned above, states that the pandemic has made the need for a transition to OA “urgent,” and argues similarly in an editorial in TIME that the only things holding us back from curing cancer are journal paywalls. Brigham Young University librarian Rick Anderson recently asked the question, “Is there good data/analysis out there demonstrating (as opposed to commentary inferring) that open access to research results sped up the development of COVID vaccines?”
The answers in this admittedly skewed sample were consistently negative, largely noting the absence of actual data to back up the claim and that, where studies were done, what mattered was not open access to research papers but rather access to open data. To some extent, access to preprints was important, although the flood of misinformation included in that corpus reduced its immediate public health value. A report done on behalf of the UK Government Department for Business, Energy, and Industrial Strategy notes that data sharing was key, but the most effective data sharing was described by the report’s author as being done through “a controlled access database (GISAID) which allowed researchers (esp. in LMICs [low- and middle-income countries]) to maintain control & get credit for their data, NOT the fully open databases preferred by funders/OS advocates.”
This is a particularly important point – the data, the actual science, remains more important than the stories written about the science. Although open data requirements are arguably the most important part of the Nelson Memo from the US Office of Science & Technology Policy (OSTP), they are not mentioned in either of the OSTP’s impact analysis documents. As far as we can tell, little effort is going into implementing this part of the policy in an effective and adequately funded manner.
We at The Brief would like to add one additional proven route to driving scientific progress: funding basic science. RNA vaccines for Covid were rapidly developed because of groundbreaking basic research on RNA done in the 1980s and 1990s. Basic science, freed from the need for direct application, provides the building blocks necessary for applied science, as Ruth Lehmann eloquently explains in a recent Nature Cell Biology editorial. Lehmann notes that, in the US, the National Institute of General Medical Sciences (NIGMS) is a key supporter of basic science, yet it receives less than 7% of the overall National Institutes of Health (NIH) budget.
Open access to research papers has obvious value, but if the goal is to speed up research progress, then focusing on improving open data practices and investing in the long term through basic science research are proven, effective strategies that deserve more attention.
Oligopoly?
4
At The Brief, we remain concerned about the ongoing market consolidation in scholarly publishing (recently covered in a Scholarly Kitchen post); however, we were surprised at what the data in a recent Scientometrics article show for the OA portion of the market. Despite its alarmist title, “The Oligopoly of Open Access Publishing” (paywalled, but a preprint is available), by Fei Shu and Vincent Larivière, shows a more diverse OA market than we expected. According to the paper, the top eight publishers of OA articles made up less than 31% of the total market for OA papers (measured by article output) in 2020.
An oligopoly is defined by Cambridge as “a situation in which a small number of organizations or companies has control of an area of business, so that others have no share,” and by Wikipedia as “a market in which control over an industry lies in the hands of a few large sellers who own a dominant share of the market.” “Oligopoly” thus seems not the best term for a market that has an increasing number of participants and that, by the yardstick chosen by the authors, became more competitive over the period studied.
To be fair, a look at 2022 data suggests that the market has become more concentrated – the top eight publishers made up 38% of the total number of OA papers published in 2022, with PLOS (Public Library of Science) being displaced in the eighth spot by Wolters Kluwer. Much of this further concentration is due to significant increases in quantity from MDPI and Frontiers, so it will be worth monitoring how things change if these organizations continue to publish fewer papers (see Item 1 above).
It is also worth noting that the “open access article market” is not a conventional view of the scholarly publishing industry. If, for example, a regulator were assessing the degree of concentration in the market for scholarly papers, it would likely look at all scholarly papers, not just OA papers, and it would likely look at revenue concentration for the whole market. Shu and Larivière do look at revenue concentration for OA papers, but unfortunately their methodology for calculating revenues is at best directional: they have calculated revenues by simply multiplying list-price APCs by published article output. While they acknowledge the existence of waivers and discounts, they do not factor them into their analysis. Waivers and discounts are used more generously at some publishers than others and would likely significantly change the results. Further, the authors do not consider transformative agreements, which account for a substantial and growing portion of publisher OA revenues, and whose per-article revenues do not necessarily align with listed APCs. With those substantive caveats, the paper suggests that the larger publishers are bringing in an outsized proportion of the market’s revenue as compared to their article volume market shares. This result is somewhat biased by the presence of more than 400,000 papers in the year examined that were published in no-fee Diamond OA journals. What does the concentration of revenue in a market mean when more than 20% of that market is specifically designed to be revenue free? Regardless, it should not be surprising to anyone that larger organizations dominate the earnings in a market that explicitly favors scale.
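To see why list-price arithmetic is at best directional, here is a minimal sketch (our own illustration – the prices, volumes, waiver rates, and agreement terms below are entirely hypothetical, not figures from the paper) comparing the naive APC-times-volume estimate with one adjusted for waivers, discounts, and transformative agreement pricing:

```python
# Hypothetical illustration of why list-price APC x article volume
# overstates realized OA revenue. All numbers below are invented.

LIST_APC = 2_500.0   # hypothetical list-price APC (USD)
ARTICLES = 10_000    # hypothetical annual OA article output

naive_revenue = LIST_APC * ARTICLES  # the APC-times-volume approach

# Adjustments the naive approach ignores (all rates hypothetical):
WAIVER_RATE = 0.10    # share of articles with a full APC waiver
DISCOUNT_RATE = 0.15  # share of articles with a partial discount
AVG_DISCOUNT = 0.30   # average size of that partial discount
TA_SHARE = 0.40       # share of articles under transformative agreements
TA_EFFECTIVE_APC = 1_800.0  # effective per-article price under those deals

full_price = ARTICLES * (1 - WAIVER_RATE - DISCOUNT_RATE - TA_SHARE) * LIST_APC
discounted = ARTICLES * DISCOUNT_RATE * LIST_APC * (1 - AVG_DISCOUNT)
ta_revenue = ARTICLES * TA_SHARE * TA_EFFECTIVE_APC

adjusted_revenue = full_price + discounted + ta_revenue

print(f"Naive estimate:    ${naive_revenue:,.0f}")     # $25,000,000
print(f"Adjusted estimate: ${adjusted_revenue:,.0f}")  # $18,575,000, ~26% lower
```

Even these modest (and invented) adjustments move the estimate by roughly a quarter, which is why the paper’s revenue analysis should be read as directional rather than precise.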
It perhaps should not come as a shock that publishers who have the strongest journal brands, are investing significantly in expanding their OA portfolios, are signing the most transformative agreements, and are developing ever more sophisticated author marketing strategies have taken a leadership position in the field. The advantages of scale and the economic incentives to publish more and more papers inherent in author-pays OA models heighten the risk of greater market consolidation. But this paper, despite its conclusions, shows a surprisingly pluralistic market.
Dealmaking
5
As expected, De Gruyter has followed through with its proposed purchase of Brill.
Despite disappointing results from its deal with IEEE, the University of California system continues its efforts toward a multi-payer model (where funded authors cover publication charges via their grants and the university pays for unfunded authors) through a new OA deal with Wiley.
People
6
Janice Audet has been named Editorial Director of the MIT Press.
EBSCO Information Services CEO Tim Collins announced that he will retire on June 30, 2024. EBSCO is currently working to identify his successor.
MDPI has appointed Alistair Freeland as its Chief Operating Officer, a position he previously held from 2013 to 2019.
With CSIRO Publishing Director Andrew Stammer retiring at the end of last year, Arend Küster has been appointed as his successor.
As part of 67 Bricks’ leadership restructuring, Jennifer Schivas has been promoted to CEO and David Leeming to CTO.
Todd Toler has been named VP of Product and Market Strategy at Wiley.
Briefly Noted
7
A recent study suggests that search engine results have gotten worse in recent years due to an ongoing cat-and-mouse battle between search engine companies and SEO-driven spam websites. Experts expect this problem to become much worse with the advent of AI-generated spam content, which “genuinely threatens the future utility of search engines.” What does this mean for relatively non-selective scholarly discovery tools such as Google Scholar compared with curated and restricted tools such as PubMed, Web of Science, Scopus, and Dimensions? As AI enables the limitless creation of fake documents filled with mis- and disinformation that look like scholarly research papers, gatekeeping may become an increasingly necessary activity.
While some level of gatekeeping can be automated (AAAS announced this month that all papers published in their journals will be screened for image fraud using Proofig, and Taylor & Francis is expanding their use of CACTUS’s Paperpal tool), human judgment and oversight remain essential, particularly in light of the latest low-tech exploit from paper mills – bribing journal editors to accept fake papers.
Low-tech exploits aside, publishers will need to continue to invest in technology and staffing to improve their research integrity performance. Ivan Oransky and Barbara Redman write in Science that governments and funders are unwilling to pay for (and perhaps unable to manage) efforts to clean up the literature. Although the cost and effort of such work may seem high, the cost of losing the trust of readers will be even higher.
One relatively simple research integrity intervention, described in the latest issue of Learned Publishing, had editors identify citations to predatory journals in manuscripts and request that authors remove or replace them with more reliable sources.
Freakonomics convened a two-part series of podcasts on academic fraud that is worth listening to. While it covers no new ground, it is good to hear a recap of several of the big research integrity stories of the last year, along with the voices of some of those involved in investigating them. We have one notable quibble, however. In the second part of the series, the host, Stephen Dubner, claims that “global scholarly publishing is a $28 billion market. Everyone complains about the very high price of journal subscriptions – but universities and libraries are essentially forced to pay.” This suggests not only that the journals market is $28 billion but that universities and libraries are “forced” to pay $28 billion to publishers. The total global journals market is around $10 billion, and that includes all revenues for publishers. A significant portion of that $10 billion is derived not from libraries and universities but from industry subscriptions, and publishers also earn revenues from advertising, reprints, licensing, open access charges (paid by funding agencies via research grants), and so on. The portion from libraries and universities is the largest slice, but it is not close to $10 billion, never mind $28 billion. Citing wildly inflated figures like this fans the flames of critics who think academic publishing costs too much (while at the same time demanding that publishers add costly measures to better detect research fraud of the kind discussed in this episode). There is a reasonable debate to be had about the costs of scholarly publishing, but that debate should start with accurate figures.
Scopus AI has officially launched, offering users research summaries, pointers to influential papers, and identification of field experts. It will be interesting to see whether Scopus AI provides enough additional value for customers to pay more to add it to their Scopus licenses, and how well it stacks up against the many AI-driven tools hitting the market in the near term.
Speaking of Scopus, the CNRS has canceled its subscription, and the Sorbonne canceled its access to Web of Science (both seemingly in favor of a move to OpenAlex). It is unclear if this marks the beginning of a movement away from proprietary sources of metrics or just a cost-cutting endeavor by the institutions involved.
In other industry AI news, Silverchair announced the launch of its AI Lab, a vehicle for experimenting with AI. According to the press release, one of the first pilots is AI-generated article summaries. Another in the works is an augmented search tool. The company has also announced the creation of an AI newsletter.
Informa, parent company of Taylor & Francis, reported strong 2023 financial results, with underlying revenue growth of around 30%.
The Directory of Open Access Journals (DOAJ), after reviewing the backlash to its well-intentioned but poorly executed rules about special issues, has revised those rules to no longer exclude journals where special issues make up the publication’s entire content for a year.
Does open peer review, where the published paper reveals the identities of reviewers, result in better citation performance compared with articles for which reviewers are not identified? No, it makes no difference, at least according to a new study in the Journal of Informetrics. The authors did find a correlation between the citation performance of an article and the academic performance of its named reviewers – better-known, better-cited reviewers were associated with better-cited papers. But it is hard to tell from this study whether the authors’ theory – that better-performing academics do a better job of peer review, identifying better papers and adding more value to them – is correct, or whether this is more about how editors select peer reviewers – that is, saving their top reviewers for papers they think are important.
Cabells has publicly accused Predatory Reports, an unrelated organization that uses the same title as one of Cabells’ products, of itself being a predator, asking publishers for payments of $50,000 to keep their names off the website’s lists. As Samuel Moore put it, “It’s hard to tell these days which Predatory Report is the real Predatory Report and which one is the predatory Predatory Report.” (Note that the accusation of a “review mill” at MDPI, mentioned above, comes from the latter source.)
A review of the cOAlition S Journal Comparison Service was released this month, looking at seemingly lagging data from 2022. While the number of publishers participating rose from that seen in 2021, the number of journals included actually dropped, as Wiley, the one major publisher complying with the service, discontinued providing data for many of its titles. Given the current directional change at cOAlition S and the decline in utility of this tool, one wonders if in 2025 we’ll see a report on 2023 participation.
Stephen Heard makes the point that if your authors are using Microsoft Word to write their manuscripts, then they are using AI (given Word’s built-in grammar and autocomplete functions) and can’t honestly sign an attestation that their paper is AI-free. We at The Brief share his philosophy that, in the end, it shouldn’t matter who (or what) wrote the paper; rather, what matters is whether the paper makes interesting claims and is “well supported by data, logic, and literature, and clearly communicated.” But as David Chapman notes, none of this may matter, as text generators such as ChatGPT are unreliable, “difficult to use effectively, […] and extremely expensive to develop and run.” He suggests that they are merely “text genre imitation engines” and the wrong tool for accomplishing what most people hope to get out of them. The next generation of tools that focus on knowledge retrieval, rather than language prediction, are where things become much more useful (and open up a different can of worms). This point was further explored by Dustin Smith of Hum in a recent post in The Scholarly Kitchen on the development of BERT (Bidirectional Encoder Representations from Transformers), a branch of AI separate from generative LLMs that seeks to “understand” rather than merely predict text.
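For readers curious what “retrieval rather than prediction” looks like in practice, here is a minimal sketch using the open-source sentence-transformers library, which wraps BERT-family encoders (the model choice and example passages are our own illustrative assumptions, not anything from Smith’s post):

```python
# Minimal sketch of embedding-based retrieval with a BERT-family encoder.
# Unlike a generative LLM, the model writes no text: it maps each passage
# to a vector, and retrieval is nearest-neighbor search over those vectors.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small BERT-derived encoder

passages = [
    "Open access mandates tie journal revenue to article volume.",
    "Paper mills sell authorship slots on fabricated manuscripts.",
    "Basic research on RNA in the 1990s enabled rapid vaccine development.",
]
query = "How do fraudulent papers enter the scholarly record?"

# Encode the query and passages into the same vector space.
passage_vecs = model.encode(passages, convert_to_tensor=True)
query_vec = model.encode(query, convert_to_tensor=True)

# Rank passages by cosine similarity to the query.
scores = util.cos_sim(query_vec, passage_vecs)[0]
best = scores.argmax().item()
print(passages[best])  # expected: the paper-mill passage
```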
Friend of The Brief Steven Sinofsky provides an insightful analysis of the (in his view) blunt and likely counterproductive instrument that is the European Digital Markets Act (DMA), which is clearly designed with American tech companies in mind. In particular, he notes the likely negative impact on European consumers resulting from compromises that companies (in particular Apple) will need to make to their products. Sinofsky should know: he was responsible for many of Microsoft’s leading products when the company was the focus of the EU’s regulators.
Rarely do medical societies and Real Housewives collide. The American Society of Anesthesiologists (ASA) recently took to Instagram to call out a Real Housewife of Beverly Hills for misrepresenting herself as an anesthesiologist (she is in fact a nurse anesthetist). ASA used the pop culture clash to highlight the education and training required to become an anesthesiologist. As happens with RHOBH, drama ensued.
***
… we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. – Aslett et al. 2024