doopledi
@doopledi@sauna.social · 3 days ago

Perhaps a fair reminder, since to my understanding there are not too many archive organizations.

The following paper argues, and claims to provide mathematical proof, that _language models_ as currently conceived cannot avoid "hallucinations".

Title: "LLMs Will Always Hallucinate, and We Need to Live With This"
https://arxiv.org/abs/2409.05746

This would appear sensible, since they essentially produce just a stream of text based on what is in effect lossy compression. "Hallucinations" thus cannot be avoided by "priming".
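
To make the "lossy compression" point a little more concrete, here is a toy sketch of my own (not the paper's construction; the example distribution and numbers are invented): as long as a wrong-but-plausible continuation keeps non-zero probability, sampling will eventually emit it, however the prompt is "primed".

```python
# Toy sketch only: a model samples from a next-token distribution, and any
# wrong-but-plausible continuation with non-zero probability will eventually
# be emitted.
import random

# Made-up distribution over continuations of "The capital of Australia is ..."
next_token_probs = {
    "Canberra": 0.90,    # correct
    "Sydney": 0.09,      # plausible but wrong
    "Melbourne": 0.01,   # plausible but wrong
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample many completions; roughly 10% will be "hallucinated".
samples = random.choices(tokens, weights=weights, k=10_000)
wrong_rate = sum(t != "Canberra" for t in samples) / len(samples)
print(f"hallucinated continuations: {wrong_rate:.1%}")

# "Priming" reshapes the weights but, as the paper argues, cannot drive the
# probability of a wrong continuation to exactly zero.
```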

In other respects: yes, I think there might be use cases for "algorithms". The issue is more about transparency, and about learning to understand a society that has them.

I do not think I will stay around to discuss BERTs, but if it suits your convenience to keep a copy or two of the referred paper somewhere, I would not mind at all. In my best current assessment we should have more places to archive these things -- and better conventions on how to run those archives.

Otherwise I welcome grounded, informed and considered comments if deemed substantial.

Now, my evening hats are waiting...

#slop #artificialintelligence #ai #hallucinations #algorithms

arXiv.org

LLMs Will Always Hallucinate, and We Need to Live With This

As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel's First Incompleteness Theorem, which references the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process -- from training data compilation to fact retrieval, intent classification, and text generation -- will have a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic nature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
