Discussion
ƧƿѦςɛ♏ѦਹѤʞ boosted
t04d8b
@t04d8b@social.lol  ·  activity timestamp 12 hours ago

The real beauty of hashtag games on social is that LLMs will train on them and likely respond to chatbot queries with horrid puns! 😄

#AI #LLMs #HashtagGames #Puns

Simon Brooke
@simon_brooke@mastodon.scot  ·  activity timestamp 17 hours ago

General ontologies are a huge piece of work, but it's possible work. We know how to do it. They can be built and will be built, and, because they are explicit, will be relatively easy both to maintain and to extend.

/Continued
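The maintainability the post claims follows from the knowledge being explicit: every relation is a piece of data you can read, test, and add to. A minimal sketch, assuming nothing about any real ontology system (the concept names here are purely illustrative):

```python
# A minimal explicit ontology: is-a edges stored as plain data.
# Concept names are illustrative, not from any real system.

# child -> parent subsumption edges
ONTOLOGY = {
    "Penguin": "Bird",
    "Sparrow": "Bird",
    "Bird": "Animal",
}

def is_a(concept, ancestor):
    """Walk the explicit is-a chain; every step is inspectable."""
    while concept is not None:
        if concept == ancestor:
            return True
        concept = ONTOLOGY.get(concept)
    return False

# Extending the ontology is just adding another explicit edge:
ONTOLOGY["Emperor Penguin"] = "Penguin"

print(is_a("Emperor Penguin", "Animal"))  # True
print(is_a("Bird", "Penguin"))            # False
```

Because the edges are explicit, maintenance is editing data rather than retraining a model, which is the contrast the post is drawing.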

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 17 hours ago

But, the impedance between the #LLMs, with their cryptic, unverifiable knowledge, and explicit ontologies, will be a very large one to bridge. It's my view that it is easier to build language abilities on top of an explicit system (I know how to do this, although it would not be as fluent as an #LLM) than to bridge that gap.

I firmly believe that #AGI (Artificial General Intelligence), when it is achieved, will be ontology plus inference based, not #LLM based.

/Ends

Simon Brooke
@simon_brooke@mastodon.scot  ·  activity timestamp 18 hours ago

However, a system with real intelligence will know where it could be wrong, and what additional data would change its decision.

Again, this is not rocket science. We had such systems -- I personally built such systems -- back in the 1980s. The DHSS large demonstrator Adjudication Officer's system -- which I built, and which is described in this paper -- did this well.

My (unfinished) thesis research was on doing it better.

/Continued

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1987.tb00133.x
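The property described here can be sketched concretely: a rule-based system, because its rules are explicit, can report not just its decision but which currently unknown facts would change it. This is a hypothetical toy, not the DHSS Adjudication Officer's system; the rule names and eligibility conditions are invented for illustration:

```python
# Hypothetical sketch of a decision system that knows what additional
# data would change its decision. Rules and fact names are invented.

RULES = [
    # (conclusion, facts that must all be true)
    ("eligible", {"resident", "low_income"}),
    ("eligible", {"resident", "disabled"}),
]

def adjudicate(known_true, known_false):
    """Return (decision, facts that could still flip the decision)."""
    for conclusion, needed in RULES:
        if needed <= known_true:
            return conclusion, set()
    # Default decision, plus the unknown facts that could reverse it:
    pivotal = set()
    for _, needed in RULES:
        missing = needed - known_true
        if not (missing & known_false):  # rule not yet ruled out
            pivotal |= missing
    return "not eligible", pivotal

decision, could_change = adjudicate({"resident"}, set())
print(decision, sorted(could_change))
```

With only `resident` known, the system answers "not eligible" but also reports that learning `low_income` or `disabled` would change the outcome — exactly the "what additional data would change its decision" behaviour the post describes.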

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 18 hours ago

The impedance between rule-based 'old AI' systems, which had a degree of ontology encoded in their rules, and neural-net-based 'new AI' systems, which include #LLMs, arises because in the new AI systems the knowledge is not explicit, and is not (yet) recoverable in explicit form.

Consequently, we can't run formal inference over it to check whether the outputs from the system are good, and neither can the systems themselves.

That gap could possibly be bridged.

/Continued.
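The kind of checking the post says is impossible over a neural net is straightforward over explicit knowledge: replay the inference and test whether the output is in the derivable closure. A minimal forward chainer, with invented facts and rules for illustration:

```python
# Checking an output against explicit knowledge by forward chaining.
# Facts and rules are illustrative, not from any real system.

FACTS = {"man(socrates)"}
RULES = [
    # (premises, conclusion)
    ({"man(socrates)"}, "mortal(socrates)"),
]

def derivable(goal, facts, rules):
    """Compute the deductive closure and test membership."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return goal in facts

print(derivable("mortal(socrates)", FACTS, RULES))    # True
print(derivable("immortal(socrates)", FACTS, RULES))  # False
```

When the knowledge is weights in a network rather than explicit premises, there is no closure to compute, which is the gap the post is pointing at.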

Simon Brooke
@simon_brooke@mastodon.scot  ·  activity timestamp 18 hours ago

We've had niches where AI had real cost benefit since the 1980s -- I've designed and led teams on some such systems myself -- but they're rare and they're point solutions, not cheaply generalisable.

Today's #StochasticParrots offer no cost benefit, except in domains where accuracy truly does not matter, and those are rare. In every other domain, the cost of checking their output is higher than the cost of doing the work.

/Continued

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 18 hours ago

It's not impossible that that could change. Some hybrid of an #LLM with a robust ontology must be possible. It's not impossible that systems could be built which could construct their own robust ontology.

But my personal opinion is that #LLMs are largely an evolutionary dead end, and that as they increasingly feed on their own slop, the quality of their output will deteriorate, not improve.

/Continued
