@UlrikeHahn Not a small challenge! Still better to acknowledge it.
I am posting this text to summarize the discussion points and to make systematic contributions possible.
The text is hosted on GitLab. If you think there are better alternatives, I can look into them.
Here is the inventorying text: https://gitlab.com/scientificpublicationpipeline/scientific-publication-pipeline
@UlrikeHahn
> without better discernment things will break *even if* the quality of the additional information is high
This is such an important point! See my recent discussion with @Ooze about the potential of adding a pre-peer-review layer to preprint servers.
More reviewers won't improve the review process unless there are mechanisms for filtering out low-effort or out-of-context reviews, and surfacing the most accurate, detailed and relevant ones (we agree there are ways to do that).
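To make that concrete, here is a minimal sketch of one such surfacing mechanism, assuming each reviewer carries a Web-of-Trust-style trust score. All names, weights and thresholds here are hypothetical illustrations, not a worked-out design:

```python
# Hypothetical sketch: rank incoming reviews by a mix of reviewer
# trust (e.g. from a Web of Trust graph) and simple effort signals.
from dataclasses import dataclass

@dataclass
class Review:
    reviewer: str
    text: str
    cites_specific_sections: bool  # did the review engage with the paper?

def score(review: Review, trust: dict[str, float]) -> float:
    """Combine reviewer trust with a crude effort heuristic."""
    effort = min(len(review.text) / 2000, 1.0)   # cap the length signal
    specificity = 1.0 if review.cites_specific_sections else 0.2
    return trust.get(review.reviewer, 0.1) * (effort + specificity)

def surface(reviews: list[Review], trust: dict[str, float], top_n: int = 3):
    """Return the top-N reviews; the rest stay visible but de-emphasised."""
    return sorted(reviews, key=lambda r: score(r, trust), reverse=True)[:top_n]
```

The point is only that ranking needs two inputs: who the reviewer is to the community, and whether the review actually engaged with the paper.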
@UlrikeHahn
> all potential remedies/solutions welcome!
A collective Web of Trust approach seems worth considering as one option: networks of federated preprint servers, each enabling a feedback period for work posted there before sharing it more widely. Roughly like the process I described here:
https://mastodon.nzoss.nz/@strypey/116027385419684558
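As a toy illustration of the gating step in that process (the feedback period and endorsement threshold here are assumptions I'm making for the example, not the policy of any existing server):

```python
# Hypothetical sketch: a preprint is only federated to the wider
# network after its local feedback period has elapsed and enough
# endorsements from the server's trust network have accumulated.
from datetime import datetime, timedelta

FEEDBACK_PERIOD = timedelta(days=14)   # assumed policy, set per server
MIN_ENDORSEMENTS = 2                   # assumed threshold

def ready_to_federate(posted_at: datetime,
                      endorsements: set[str],
                      trusted_peers: set[str]) -> bool:
    """Check the local feedback period and trusted endorsements."""
    period_over = datetime.utcnow() - posted_at >= FEEDBACK_PERIOD
    trusted = endorsements & trusted_peers
    return period_over and len(trusted) >= MIN_ENDORSEMENTS
```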
Have you looked at the Open Science Network project using the @Bonfire software?
https://openscience.network/about/
Or these folks?
https://www.hepi.ac.uk/2026/01/25/calling-for-a-bold-new-vision-for-higher-education/
(1/2)
@UlrikeHahn
> do you envision these servers using machine tools or drawing on researchers?
Great question. As I said in the thread I just posted drawing parallels with independent news publishing, I think human relationships are our superpower here, which is why I referenced Web of Trust.
To the degree that automated tools are useful, I see them serving researchers by helping them break out of cliques and silos, and build out a larger-scale web of trust.
(2/2)
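Purely as an illustration of that "break out of cliques" role (the trust-graph representation is my assumption): a tool could surface trusted-by-trusted researchers who aren't yet in one's direct circle, rather than generating or judging content itself:

```python
# Hypothetical sketch: use the trust graph to suggest reviewers a
# researcher doesn't already work with, i.e. two-hop contacts
# (trusted by people I trust) outside my immediate circle.
def suggest_outside_clique(graph: dict[str, set[str]], me: str) -> set[str]:
    """Return two-hop contacts not already in my direct trust set."""
    direct = graph.get(me, set())
    two_hop = set().union(*(graph.get(p, set()) for p in direct)) if direct else set()
    return two_hop - direct - {me}
```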
But any automated tools have to be built and deployed in ways that make researchers centaurs, not reverse centaurs reduced to serving the tools and their owners. See the links to pieces by @pluralistic that explore the distinction between the two:
https://mastodon.nzoss.nz/@strypey/115210601592114790
For tools to be centauric, they need to be as independently reproducible as the methods in any decent scientific paper, so they can't be monopolised and manipulated.
"With generative AI we have essentially provided a tool for conducting denial of service attacks on the infrastructure of the scientific publishing process (broadly construed). And we have done this at the time when we are seeing well-funded campaigns seeking to undermine free and independent scientific research."
@UlrikeHahn, 2026
https://write.as/ulrikehahn/is-ai-killing-scientific-reform
Wow! I wonder how much effect dynamics like Goodhart's law are having on this, as individuals focus on publishing to increase their credibility. It's unfortunate that AI misinformation in papers has seemed to prove the need for this to some degree.
There's still the clause that an appeal is only available if the paper has been approved for publication elsewhere. That's a final decision that offloads moderation to another team.
I didn't take that to mean that you blamed arXiv for this issue. My comment was more to point out the difficulty of proposing any solution: such an appeal process indicates that arXiv has made its final decision and will only change it as a result of outside feedback.
The gamification of systems makes me wonder whether this would lead people to seek out an undemanding peer-reviewed journal just to force the appeal. Simple thoughts on a complex issue.
@UlrikeHahn Yes, I have been pondering this.
Could a new field akin to library science, communication, ontology and epistemology help to communicate, curate and coordinate?
The current publishing cycle was already broken, and will now finally succumb.
@UlrikeHahn @kula Yes, so the argument goes: could any alternative improve on the impasse?
@UlrikeHahn @kula Once the idea is there, the tools will come.
The question is: is it a reasonable idea? And if so, will scientists accept communication, curation and coordination?
For one, I see the value from an applied perspective. Coordinator: PDEng, we think executing on this is a good idea. PDEng: I think we can do it this way, and I am missing XYZ. Coordinator: PhD, what is your take on XYZ?
@UlrikeHahn @spdrnl @kula Good news, everybody! We already have a great tool for this: humans. If we stop making academics run the treadmill of endless grant applications and needless publishing, it will free up time for such activities.
@spdrnl @kula @UlrikeHahn Have you taken a look at AI not as a generator, not as a summarizer, but as a dialogue partner (after feeding it all the data you can't parse manually)?
@fallbackerik @spdrnl @kula No, I haven't myself, but I have seen reports from multiple people that they find it useful, and I have no reason to doubt that. It fits with my experience that talking things through with other people can be extremely useful even where they don't have the same level of expertise (and where the benefit doesn't come from them giving you answers, but rather from the way it helps you think about your question), so I find the notion credible.
@UlrikeHahn @kula So, could communication, curation and coordination be a separate field?
@UlrikeHahn @kula Yes, as I said, and in reaction to your observation "I don’t even see the attempt to seriously build tools for *evaluation*"