Discussion
Loading...

#Tag

Log in
  • About
  • Code of conduct
  • Privacy
  • About Bonfire
Business Channel boosted
Inautilo
Inautilo
@inautilo@mastodon.social  ·  activity timestamp yesterday

#Business #Approaches
Spot AI hallucinations like a librarian · How source hunts expose AI’s wildest inventions https://ilo.im/169oez

_____
#FactChecking #AI #Hallucinations #Sources #Verification #Citations #Content #Context #Studies

How to Spot AI Hallucinations Like a Reference Librarian

The verification tricks that would make fact-checkers weep with joy.
  • Copy link
  • Flag this post
  • Block
Inautilo
Inautilo
@inautilo@mastodon.social  ·  activity timestamp yesterday

#Business #Approaches
Spot AI hallucinations like a librarian · How source hunts expose AI’s wildest inventions https://ilo.im/169oez

_____
#FactChecking #AI #Hallucinations #Sources #Verification #Citations #Content #Context #Studies

How to Spot AI Hallucinations Like a Reference Librarian

The verification tricks that would make fact-checkers weep with joy.
  • Copy link
  • Flag this post
  • Block
Business Channel boosted
Inautilo
Inautilo
@inautilo@mastodon.social  ·  activity timestamp yesterday

#Development #Analyses
AI will compromise your cybersecurity posture · But maybe not in the way you think https://ilo.im/169oeg

_____
#Business #AI #Hype #Vulnerability #Security #Bugs #Leaks #Hallucinations #PromptInjection #DevOps

Songs on the Security of Networks

AI will compromise your cybersecurity posture

Yes, “AI” will compromise your information security posture. No, not through some mythical self-aware galaxy-brain entity magically cracking your passwords in seconds or “autonomously” exploiting new
⁂
More from
Michał "rysiek" Woźniak · 🇺🇦
  • Copy link
  • Flag this post
  • Block
Inautilo
Inautilo
@inautilo@mastodon.social  ·  activity timestamp yesterday

#Development #Analyses
AI will compromise your cybersecurity posture · But maybe not in the way you think https://ilo.im/169oeg

_____
#Business #AI #Hype #Vulnerability #Security #Bugs #Leaks #Hallucinations #PromptInjection #DevOps

Songs on the Security of Networks

AI will compromise your cybersecurity posture

Yes, “AI” will compromise your information security posture. No, not through some mythical self-aware galaxy-brain entity magically cracking your passwords in seconds or “autonomously” exploiting new
⁂
More from
Michał "rysiek" Woźniak · 🇺🇦
  • Copy link
  • Flag this post
  • Block
Social Media Channel boosted
doopledi
doopledi
@doopledi@sauna.social  ·  activity timestamp 3 days ago

Perhaps a fair reminder. Since there to my understanding there are not too many archive orgs.

The following paper argues and claims to provide mathematical proof that _language models_ as currently conveyed cannot avoid "hallucinations".

Title: "LLMs Will Always Hallucinate, and We Need to Live With This"
https://arxiv.org/abs/2409.05746

>>

This would appear sensible, since they essentially provide just as a stream text of based on what is in effect lossy compression. "Hallucinations" cannot thus be avoided by "priming".

In other respects: yes I think there might be use cases for "algorithms". The issue is more about transparency and learning to understand a society having those.

I will not I think stay around to discuss BERTs, but if it suits your convenience to have a copy or two of that referred paper somewhere I would not mind at all. In my best current assessment we should have more places to archive these things -- and better conventions on how to run those archives.

Otherwise I welcome grounded, informed and considered comments if deemed substantial.

Now, my evening hats are waiting...

#slop #artificialintelligence #ai #hallucinations #algorithms

arXiv.org

LLMs Will Always Hallucinate, and We Need to Live With This

As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Godel's First Incompleteness Theorem, which references the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process-from training data compilation to fact retrieval, intent classification, and text generation-will have a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic nature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
  • Copy link
  • Flag this post
  • Block
doopledi
doopledi
@doopledi@sauna.social  ·  activity timestamp 3 days ago

Perhaps a fair reminder. Since there to my understanding there are not too many archive orgs.

The following paper argues and claims to provide mathematical proof that _language models_ as currently conveyed cannot avoid "hallucinations".

Title: "LLMs Will Always Hallucinate, and We Need to Live With This"
https://arxiv.org/abs/2409.05746

>>

This would appear sensible, since they essentially provide just as a stream text of based on what is in effect lossy compression. "Hallucinations" cannot thus be avoided by "priming".

In other respects: yes I think there might be use cases for "algorithms". The issue is more about transparency and learning to understand a society having those.

I will not I think stay around to discuss BERTs, but if it suits your convenience to have a copy or two of that referred paper somewhere I would not mind at all. In my best current assessment we should have more places to archive these things -- and better conventions on how to run those archives.

Otherwise I welcome grounded, informed and considered comments if deemed substantial.

Now, my evening hats are waiting...

#slop #artificialintelligence #ai #hallucinations #algorithms

arXiv.org

LLMs Will Always Hallucinate, and We Need to Live With This

As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Godel's First Incompleteness Theorem, which references the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process-from training data compilation to fact retrieval, intent classification, and text generation-will have a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic nature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.
  • Copy link
  • Flag this post
  • Block

BT Free Social

BT Free is a non-profit organization founded by @ozoned@btfree.social . It's goal is for digital privacy rights, advocacy and consulting. This goal will be attained by hosting open platforms to allow others to seamlessly join the Fediverse on moderated instances or by helping others join the Fediverse.

BT Free Social: About · Code of conduct · Privacy ·
Bonfire social · 1.0.1-beta.22 no JS en
Automatic federation enabled
Log in
  • Explore
  • About
  • Code of Conduct