The Challenge with Transformative Agreements and Society Journals
Michael Clarke wrote a new article on the challenges that Transformative Agreements (TAs) pose for societies that partner with publishers (they also pose challenges for independently published societies, but for different reasons). Because societies cannot opt in to (or out of) specific TAs on a case-by-case basis, it is likely that many are included in TAs that are financially detrimental to them. The article looks critically at TAs from the society perspective and discusses strategies to mitigate their negative aspects.
AI Trends in Marketing Academic Journals
C&E’s Colleen Scollans spoke at the inaugural event of Marketing Maestros, an ALPSP Special Interest Group for senior marketing professionals. She discussed how scholarly publishers are incorporating artificial intelligence (AI) into various aspects of their businesses – and how marketers in particular are using AI to stay ahead in the ever-evolving marketing landscape. Read the key takeaways.
STM Publishing in China
Osmanthus Consulting and C&E have released the first newsletter companion to International STM Publishing in China: State of the Market Report 2023, which includes exclusive content for Organizational License Plus buyers. Updates in the newsletter include:
- China’s ways of determining reasonable (and unreasonable) article processing charges (APCs) and their intersection with journal warning lists
- An interview with Charlesworth Group CEO Michael Evans covering the outlook for transformational agreements (TAs) in China as well as considerations for international publishers proposing TAs to Chinese customers
- The 2023 announcement of “high potential” journals by the China Journal Excellence Action Plan (CJEAP) – a list dominated by journals focused on materials science, technology and engineering, clinical medicine, and food science
Nicko Goncharoff also summarized key takeaways in The Scholarly Kitchen from a symposium (sponsored by the STM Association and the Society of China University Journals) he chaired on China’s perspectives on research integrity and open access.
Dress Rehearsal
OpenAI, Inc., was founded in 2015 by Sam Altman and Elon Musk, not as a typical Silicon Valley start-up but as a 501(c)(3) not-for-profit, the mission of which is “building safe and beneficial artificial general intelligence for the benefit of humanity.” The idea was that they would hire some of the best AI researchers, who would create responsible models that would be made available openly. They soon realized that developing AI technology is extremely expensive. AI researchers are paid a lot. And running AI models requires vast amounts of computing power. To attract and retain talent, and to pay for computing, they needed money – huge sums of money. So, they came up with the notion of a “capped profit” subsidiary called OpenAI Global, LLC. In this subsidiary, returns to investors are capped at 100× their investment, with any profits beyond that cap returned to the 501(c)(3). For example, if a company invested US$10 million, it could receive a theoretical return of up to US$1 billion. However, after that initial US$1 billion was reached, all further profits would go to the 501(c)(3). The advantage of this model is that it allowed OpenAI to sell equity in the LLC to investors and to issue stock grants to employees. (In reality, this description oversimplifies the structure – employees and investors were technically issued shares in a holding company sitting between the 501(c)(3) and the LLC, which is an important detail if you own shares but not really important to the overall governance picture – we recommend Matt Levine’s diagram for readers who wish to learn more about the corporate structure.)
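For readers who like to see the arithmetic behind the cap, here is a minimal sketch of how such a cap-multiple split works (the figures and the split_proceeds helper are invented for illustration and are not OpenAI’s actual terms):

```python
# Hypothetical sketch of a "capped profit" split: investor returns are capped
# at 100x the original investment; anything above the cap flows back to the
# not-for-profit parent. Figures are invented for illustration.

def split_proceeds(investment: float, total_return: float, cap_multiple: float = 100.0):
    """Split proceeds attributable to an investment between the investor
    and the not-for-profit parent under a simple cap-multiple rule."""
    cap = investment * cap_multiple              # most the investor can receive
    to_investor = min(total_return, cap)         # investor keeps up to the cap
    to_nonprofit = max(total_return - cap, 0.0)  # excess goes to the 501(c)(3)
    return to_investor, to_nonprofit

# A US$10 million investment with US$1.5 billion of attributable proceeds:
investor, nonprofit = split_proceeds(10e6, 1.5e9)
print(f"Investor receives: ${investor:,.0f}")    # $1,000,000,000 (the 100x cap)
print(f"Nonprofit receives: ${nonprofit:,.0f}")  # $500,000,000 (the excess)
```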
This set-up seemed to work. OpenAI was able to raise billions of dollars, allowing it to hire and retain talent to develop technologies, including ChatGPT, that have put the organization at the center of AI development. The LLC is among the most valuable start-ups in the world, having recently been valued at an eye-popping US$86 billion. It is governed by the board of directors of OpenAI, Inc., the 501(c)(3). The supposed advantage of this structure is that the board of the 501(c)(3) does not have the same kind of financial interest that the board of a typical venture-funded company would have. Its members are, theoretically, in a position to push the “pause” or even “destruct” button if the LLC develops potentially harmful technology, including (and especially) AGI.
On November 17, the board – which had recently shrunk due to the departures of members who had not yet been replaced – was composed of three members who work (or worked) at the company and three “outside” members. The “inside” members were Sam Altman (CEO), Greg Brockman (President and Chair of the board), and Ilya Sutskever (Chief Scientist). The outside members were Tasha McCauley, an adjunct senior management scientist at RAND; Helen Toner, Director of Strategy and Foundational Research Grants at Georgetown University’s Center for Security and Emerging Technology (CSET); and Adam D’Angelo, a former Facebook executive and founder of Quora. At least two of the outside board members were associated with effective altruism. Altman, by contrast, has been skeptical of the movement, commenting that it has become an “incredibly flawed movement” that showed “very weird emergent behavior.”
But on November 17, the board – without advance warning to anyone, including, incredibly, their investors, primary partner, or Brockman – fired Altman. Their stated rationale for the firing was, cryptically, a lack of “candor” in Altman’s interactions with the board. When pressed by the staff, investors, and both of the new interim CEOs (whom the board appointed over the weekend of November 18–19, bringing the total CEO count to 3 in as many days), the board failed to provide any example of a time when Altman had communicated dishonestly with them. In fact, according to reporting in The Wall Street Journal, the board said that Altman “had been so deft they couldn’t even give a specific example.”
There is some speculation that an unreviewed scholarly paper written by an OpenAI board member and posted to the Georgetown University website precipitated the board’s action. We find this argument less than compelling but include it here because we thought readers of The Brief might find it amusing to imagine an unreviewed scholarly paper nearly destroying a US$86 billion organization. In some ways, we wish it were the case, as then we could write a whole piece about how unreviewed PDFs are more dangerous than large language models, but sadly we find ourselves in a different universe.
It should have been obvious that firing Altman for no clear reason, and without consulting investors, would not go well. When an investor puts appreciable sums into a start-up, they usually gain a board seat to watch over their investment. This did not occur with OpenAI because of the control of the parent 501(c)(3). Due to the success of its technology and the profile of Altman and the team he had built, investors were willing to invest anyway. However, once investors realized that they were dealing with a board that would fire a high-performing CEO for no apparent reason – and not even bother consulting them in advance – future investment became uncertain at best. Listen, for example, to this interview with Satya Nadella, Microsoft’s CEO, who states in no uncertain terms that Microsoft will not invest further until the governance of the organization is changed. With an uncertain funding future, the valuation of the LLC would likely be greatly reduced, perhaps even to zero, and with it the stock grants of staff, triggering an exodus and making it difficult if not impossible to attract new talent. Meanwhile, Altman could raise billions of dollars to start a new AI company at the drop of a hat and take the staff with him. Or the staff could move to any number of other companies – especially Microsoft, a key partner, which reportedly had desks and CPUs readied (non-compete agreements are unenforceable in California).
The path from the firing of Altman to the departure of nearly all the staff and the implosion of the organization was clear to nearly everyone involved – everyone, that is, except the board, which inexplicably thought firing Altman would make the organization stronger. According to a remarkably pointed open letter from the staff, management made clear to the board that its actions would lead immediately to the decampment of the staff to Microsoft and effectively to the end of the company (by, as Ben Thompson has pointed out, selling it to Microsoft for $0). The board reportedly responded (according to the open letter) by saying that allowing the company to be destroyed “would be consistent with the mission.” Destruction of the company might be consistent with the mission to develop safe AI if OpenAI were developing unsafe AI, but the company cannot develop anything if it does not exist. Destroying the company over deft comments seems … not on mission?
Reporting by Reuters on November 22 indicates that, prior to the decision to sack Altman, the board received a letter from a small number of staff warning that the organization was developing new technology, called Q* (pronounced “Q-star”), that they thought was dangerous and a step closer to AGI. We can only speculate that the board did not find this warning credible. If the whole point of the governance structure is to be there to push the pause button in the event the company is close to AGI, and the board receives a letter saying the company is close to AGI, surely they would mention that? Something like, “We needed to pause development and the CEO did not see it that way, so we fired him, which is exactly the reason we have this governance structure.” But they said nothing like this. What they did say was that they fired him due to deft comments. And then hired him back. Also, the person best positioned to evaluate the technology was Sutskever, the Chief Scientist. Indeed, he initially voted with the other board members to fire Altman but then recanted, saying he regretted his vote, and subsequently voted to hire Altman back. Maybe the imminent arrival of AGI was not the issue here?
One of the things we work on at C&E is not-for-profit board governance, and this passage that Joe Esposito wrote in a 2011 post titled “Governance and the Not-for-profit Publisher” seems relevant to bring up now:
A related point to make here is that when an NFP has a substantive business unit (whether that business unit is publishing, databases, or AI technology) from which it derives most of its revenues, the requirements of a board are different. The board has fiduciary responsibility for the organization, including the business unit. In such cases, the board should be composed of at least some people who have expertise in the business they are operating. The board of OpenAI had only one outside board member with business experience and no one with the expertise necessary to understand the technology the organization was creating.
Perhaps the most substantive point to take away from this affair is that OpenAI had a dress rehearsal for the development of AGI or another potentially harmful technology, and it failed. It is clear there is no putting the genie back in the bottle. Assuming OpenAI was, hypothetically, developing inherently dangerous technology, as the earlier letter to the board claims, neither the board nor the CEO can really do much to stop it. AI development is so competitive that the people developing the technology, if ordered to stop, can simply go somewhere else to develop it. It could be one team or, as it turns out, nearly the entire company. This puts the onus on government to set guidelines, something the White House has begun to do with the recent Executive Order on AI. Without digressing into the details of this executive order and its merits, there is a broader consideration: it does not apply to AI development work in other countries. If someone working on AGI at OpenAI has their research paused, they could presumably go to work for a European or Chinese company. A global treaty on AI (or AGI) seems the only way to “decel.” We are not optimistic about such a treaty happening soon and can only hope our forthcoming robot overlords will be more sensible than the humans creating them.
Scholar Led
We turn from responsible AI to responsible journal articles with the release of a new manifesto, “Towards responsible publishing: a proposal from cOAlition S.” With this proposal, the funder coalition is mooting a wholesale, blank slate restructuring of not only publication, but also research assessment and funding. Rather than trying to set rules of the road for the publishing industry, the group of funders is now effectively calling for the dissolution of the largely European-based, US$10 billion+ a year industry that employs tens, if not hundreds, of thousands of people.
According to the proposal, researchers are expected to release their outputs whenever they choose to do so, whether as preprints, datasets, or other, as-yet-unspecified formats. At some point in the process, researchers may then choose to send those outputs to (as yet largely non-existent) peer review services, and then maybe, later, if they feel like it, submit them to curation services that will choose their favorite articles in the form of overlay journals. All of this is to be done, of course, at no cost to the researcher (it is unclear who will pay for these new services).
The plan is both so all-encompassing and at the same time vague that it’s difficult to know where to begin to analyze it. And so, we offer a “lightning round” of initial impressions:
Time. Most researchers think of publication as a necessary distraction that takes time and effort away from doing actual research. The proposal seems to assume that researchers want to spend more time dealing with the publication of their results, rather than less. When researchers are faced with non-research requirements, most take the path of least resistance – the route that will allow them to comply while spending as little time as possible doing anything other than research. cOAlition S presents no evidence that unbundling the various things publishers do will result in a more efficient, higher-quality, or less expensive system. We suspect that the proposed system would evolve into companies re-bundling the necessary services and charging for high-quality and efficient execution, essentially reinventing research publishing.
Costs. We’re also confused by a plan that requires publishers (those that don’t cease to exist were this proposal to be successfully implemented) to become “service vendors” while at the same time insisting that all those services be provided to researchers at no cost. Perhaps universities will pay the bill? Funders? A lot of what’s envisioned resembles the current practice at eLife, but even with eLife’s deep, Wellcome-funded pockets, authors are still charged some US$2,000 per submission that goes out for review. The scale required here also seems somewhat misunderstood. The proposal calls for a plethora of services and platforms overseen by each research community. How many small-scale volunteer efforts will be needed to cover the 5 million+ papers published each year, not to mention all the papers that don’t currently pass muster, and all the new research outputs cOAlition S imagines? Eliminating many of the benefits of scale from infrastructure does not seem an ideal way to cut costs.
Balkanization. There’s a lot in this plan that reminds us of the 2013 RCUK (now UKRI) OA mandate. At the time, the UK set out on a route that was very different from the direction of the rest of the world. As a result, it saw costs increase, as it ended up paying for UK authors to make their papers OA while still having to pay for subscription access to articles from the rest of the world. cOAlition S has failed to rally the rest of the world, particularly the most prolific research nations, to join Plan S. The US Nelson Memo takes a very different approach to public access, and China currently offers no OA policy at all. Will this policy leave cOAlition S–funded researchers out on a limb? Will their universities be on the hook for paying for these new services while still being expected to pay for access to research from the rest of the world? And what of those graduate students and postdocs who might apply for grants or jobs elsewhere? If institutions and other funding agencies still value journals and publication as a measuring stick for achievement, will there be bias against those with unbundled publications on their CVs?
Control. Plan S hasn’t struggled to achieve its goals. It has, however, failed to see them met in the way it now wants them met. And this is always going to be the case with policies that are outcomes-based rather than process-focused. The Nelson Memo, for example, is framed as being “business model agnostic,” and making papers publicly accessible is an acceptable outcome regardless of how that happens. cOAlition S, however, now wants to dictate both the outcome and the process through which that outcome is reached. At the same time, the proposal indicates it wishes to empower researchers. We are unclear on how Plan S empowers researchers by telling them exactly what they must do. As Rick Anderson points out, what is being proposed is funder-led, which is the opposite of scholar-led. The turn from outcome- to process-orientation is perplexing and is likely to ruffle far more feathers among researchers, publishers, and universities (which are likely to be stuck with the bill) than Plan S did. We suspect this won’t be the last time the organization calls for a “do-over.”
Economic Analysis
While it is unlikely that the largely dysfunctional US House of Representatives will be able to pass any sort of federal budget beyond a series of continuing resolutions that prevent a shutdown, it is worth noting that in the program-level spending details released this month, the language prohibiting the use of funds for implementation of the Nelson Memo (Section 552, page 109) remains. Our understanding is that this prohibition is less of an opposition to the public access requirements of the Nelson Memo than it is a rebuke to the White House Office of Science and Technology Policy (OSTP) for overstepping its authority in how the policy was planned and released. The OSTP failed to provide Congress with advance notice of any change to science publication policy as is required. As we noted on release of the Nelson Memo, the OSTP’s accompanying financial analysis was laughable, and here it is called out as not being “serious” (page 82). Perhaps the biggest warning shot to the OSTP is not in the language about the Nelson Memo itself (which does not attempt to halt it altogether), but rather in the massive 30% budget cut proposed for the OSTP itself, notable in that the other agencies in the bill see flat budgets or small increases. Support for public access is broad and bipartisan, and at least for now, the Nelson Memo will likely remain on track, but it is clear that Congress is now paying attention to the initiative.
In recent days, in response to the proposed spending cuts, the OSTP has released a “Report to the US Congress on Financing Mechanisms for Open Access Publishing of Federally Funded Research.” The report is puzzling in that it offers a look backward at what federally funded researchers have already spent on OA but says nothing about the likely future impacts of the OSTP’s new policy. This report reads like the introduction to an economic analysis, and then it just … ends. There’s no look at what federally funded researchers are likely to spend out of their research budgets to comply with the Nelson Memo, nothing about additional funding that could be added to offset those costs, nothing about the likely impact on independent and non-profit publishers, nor anything about the inequities the policy is likely to create for any researcher who is not federally funded.
The one interesting figure included in the report is that federally funded authors who published OA over recent years spent an average of US$3,999.23 per APC, a figure that is 33% higher than the high-end estimate used in the original OSTP economic analysis. With around 255,000 articles listing US federal funding published in 2022 (according to Dimensions, an inter-linked research system provided by Digital Science, https://www.dimensions.ai), that would put the APC costs of the OSTP policy (if all of that output were subject to APCs) at around US$1.02 billion per year (not counting repository costs, support costs, or any costs of enforcement and compliance monitoring). There is also no discussion of the costs associated with the provision of the data components of the OSTP policy.
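The back-of-the-envelope arithmetic is easy to check; here is a minimal sketch using the article count and average APC cited above (the variable names are ours):

```python
# Rough check of the figures cited above: ~255,000 US federally funded articles
# in 2022 (per Dimensions) at an average APC of US$3,999.23 (per the OSTP report).
articles_with_us_federal_funding_2022 = 255_000
average_apc_usd = 3_999.23

estimated_annual_apc_cost = articles_with_us_federal_funding_2022 * average_apc_usd
print(f"Estimated annual APC cost: ${estimated_annual_apc_cost / 1e9:.2f} billion")
# -> roughly $1.02 billion per year, excluding repository, support,
#    and compliance-monitoring costs
```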
This report is a good start, but an actual detailed economic impact report remains essential. We at The Brief hope this is the beginning of a deeper planning process.
DOAJ Cracks Down on Special Issues
The Directory of Open Access Journals (DOAJ) this month has called out the over-solicitation and over-publication of special issues, a practice that some publishers (we are looking at you, MDPI) have taken to extreme (and highly profitable) levels. To try to stem the issues of quality control and outright fraud that have come with publication of large numbers of special issues, the DOAJ has adjusted its criteria for indexing.
We applaud the new criteria and DOAJ’s efforts to bring some objective measures of quality control to its index. That said, the new adjustments may require some tuning, as they are likely to cause problems for many journals that publish occasional, carefully edited special issues. DOAJ doesn’t clearly define what it means by the term “special issue” – it just uses the phrase, “Journals that publish special issues or other content curated by guest editors,” with that “or” adding a lot of uncertainty. Are special issues defined solely by having guest editors? There are respected journals composed entirely of themed special issues that would now appear to be excluded from the DOAJ – “DOAJ will not accept a journal if all content in the last year/volume is published as special issues” – regardless of their quality standards or use of guest editors.
The new rules for special issues seem like they will be challenging to enforce without a great deal of sleuthing – for example, how will DOAJ know whether the Editor-in-Chief oversaw the work of the guest editors? That said, letting the perfect be the enemy of the good serves no one, and creating a rule to be further refined with feedback from publishers is better than no rule at all on an issue that perennially results in mass retractions and bad publicity for the industry.
Dealmaking
In the latest example of the ongoing consolidation of the journals publishing market, Sage has purchased IOS Press, bringing some 100 journals and 70 books per year into its fold. This is a strong move from Sage, whose article output has been steadily declining, at least relative to competing publishers. Publication volume (article output) is a key success metric in an OA market, and the addition of the 6,000 or so articles published in 2022 by IOS Press would move Sage up a spot from the 10th-largest publisher by article volume to the 9th largest.
Digital Science has acquired the AI-based academic language service Writefull.
Research Solutions, creators of Reprints Desk, has announced the acquisition of scite, an “award-winning search and discovery platform that leverages AI to increase the discoverability and evaluation of research.”
People
After a two-year vacancy, the US Senate confirmed Monica Bertagnolli as the new Director of the National Institutes of Health. Bertagnolli was formerly the Director of the National Cancer Institute, and the White House has nominated W. Kimryn Rathmell to take over that position.
The Association of Research Libraries (ARL) has appointed Amy Maden as Senior Director, Finance and Administration.
Heather Kotula has been named President and CEO of Access Innovations, Inc.
Briefly Noted
In perhaps the least surprising AI story this month, Elsevier announced the sale of datasets, including its copyrighted corpus of papers, for text and data mining as well as AI training. Depending on the outcome of the various court cases on whether AI training on copyrighted materials is considered fair use, the sale of highly vetted materials may prove a significant revenue stream for publishers – or at least for those publishers with material not subject to CC BY licensing.
In perhaps the second least surprising AI story this month, a JAMA Ophthalmology paper outlines successful efforts to use the GPT-4 ADA (Advanced Data Analysis) large language model to generate fake datasets supporting unproven scientific conclusions. The dangers to research integrity this poses should be obvious, as open data and thorough peer review will not solve this problem. Soon we will all look back at the community’s quaint concerns over the use of AI to write legitimate papers.
An odd research paper published in Quantitative Science Studies caught our eye this month. Titled “The Oligopoly’s Shift to Open Access: How the Big Five Academic Publishers Profit from Article Processing Charges,” the paper seems aimed at exposing the huge sums that have gone to commercial publishers via OA publishing but in fact accomplishes just the opposite. In 2023, why would a study of OA revenue limit itself to data from 2015–2018? It’s not as if OA growth has slowed. It’s also strange that the authors chose to look only at the five largest publishers as measured by number of journals, rather than the five largest OA publishers over that period. Rather than including Taylor & Francis (8th most OA articles) and Sage (11th most OA articles), a more reasonable analysis would have included the organizations that published higher volumes of OA articles over that period, including MDPI, Frontiers, Hindawi, and Oxford University Press. Even with all the cherry-picking, the data tell a surprisingly mild story. While the topline figure seems daunting (US$1.06 billion in OA revenue going to commercial publishers!), it must be understood in context and spread across the period studied. US$1.06 billion across four years averages out to US$265 million per year, for an annual average of more than 126,000 papers published OA by these five companies. That works out to an APC of around US$2,100, which does not seem particularly outrageous. And if one posits those five publishers as having “oligopoly” control of more than half of a US$10 billion a year industry, then US$265 million per year represents a mere 5% of their revenue, coming from around 9% of the articles they published in that span. If anything, this report shows commercial OA to be a relative bargain.
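For those inclined to check the arithmetic, here is a short sketch reproducing the figures above (treating the “more than half” market share as exactly half for simplicity):

```python
# Reproducing the back-of-the-envelope figures discussed above.
total_oa_revenue_usd = 1.06e9      # OA revenue to the five publishers, 2015-2018
years = 4                          # 2015 through 2018, inclusive
avg_oa_papers_per_year = 126_000   # annual average OA output of the five

revenue_per_year = total_oa_revenue_usd / years             # ~US$265 million
implied_apc = revenue_per_year / avg_oa_papers_per_year     # ~US$2,100

industry_revenue_usd = 10e9        # ~US$10 billion/year industry
their_share = 0.5 * industry_revenue_usd                    # "more than half" taken as half
oa_share_of_their_revenue = revenue_per_year / their_share  # ~5%

print(f"Revenue per year: ${revenue_per_year / 1e6:.0f} million")
print(f"Implied average APC: ${implied_apc:,.0f}")
print(f"OA share of their revenue: {oa_share_of_their_revenue:.1%}")
```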
Digital Science and Springer Nature’s latest annual “State of Open Data” report serves to highlight the costs and complexity of open data, further emphasizing the need for better analysis and planning for US federal funding policies that require it.
Subscribe to Open (S2O) continues to gain ground as an alternative to the APC author-pays model for OA: this month Project MUSE launched an S2O program with 50 journals from more than 20 publishers, and the Biochemical Society announced that five of its journals will move to the S2O model. It should be noted that S2O is not an automatic recipe for success, and it may decrease the ability of participating journals to adapt to a changing market over the long term. S2O is an intriguing model, at least until it reaches such a significant mass that the free-rider problem becomes more common. A good discussion of the issues (particularly the differences in how US and UK university libraries are funded) can be found in the comments of Lisa Hinchliffe’s recent Scholarly Kitchen post.
An unpublished analysis suggests that between 1.5% and 2% of the scientific literature published in 2022 stems from paper mills. The total number listed, 400,000 fake papers over 20 years, doesn’t seem so urgent, until one realizes that 70,000 of those fake papers were published last year alone. We’re not ready to jump to any conclusions based on a study we can’t read, and we will therefore withhold further comment until it is published.
Another “good intentions, bad execution” example can be found in a proposal for “peer replication,” meant largely to replace peer review by having others replicate experiments as they’re published. While the authors note the obvious flaws in the plan – namely, that this would drastically slow the research process, that there’s no mechanism to actually pay for any of the work involved, and that replicating the frequently complex and difficult-to-learn techniques of other laboratories is often impossible – perhaps the biggest question we have about the proposal concerns its confidence that researchers will be motivated to do this additional work for the supposed career rewards of publishing a replication paper. To date, we know of no universities hiring new faculty based on their ability to repeat what others have already discovered.
De Gruyter has joined the growing market segment of publishers offering services for hire through the launch of Paradigm Publishing Services.
The Swedish Academy Dictionary has been completed, 140 years after work on it began. It comes in 39 neat volumes spanning a mere 33,111 pages – perfect for that barren bookshelf in your study.
In last month’s issue, we noted incorrectly that the Royal Society did not become a publisher until 1752, when it took over the Philosophical Transactions. We have been informed that the Royal Society was, in fact, considered the publisher of the Philosophical Transactions from its first issue in 1665, even though it did not then have editorial responsibility for the journal. The society also began publishing monographs in 1665, and its Royal Charter to publish dates to 1662.
***
I’m signing off from reporting on open access. The movement has failed and is being rebranded in order to obscure the failure. Time to move on. —Richard Poynder