Discussion
Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 23 hours ago

We've had niches where AI had real cost benefit since the 1980s -- I've designed and led teams on some such systems myself -- but they're rare and they're point solutions, not cheaply generalisable.

Today's #StochasticParrots offer no cost benefit, except in domains where accuracy truly does not matter, and those are rare. In every other domain, the cost of checking their output is higher than the cost of doing the work.

/Continued

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 23 hours ago

It's not impossible that that could change. Some hybrid of an #LLM with a robust ontology must be possible. It's not impossible that systems could be built which could construct their own robust ontology.

But my personal opinion is that #LLMs are largely an evolutionary dead end, and that as they increasingly feed on their own slop, the quality of their output will deteriorate, not improve.
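
To make "hybrid" concrete, here is a toy sketch in Python of one shape such a system might take: the #LLM proposes text, and an explicit ontology vetoes claims it knows to be false. The llm() and extract_claims() stubs and the two-entry ontology are invented for illustration, not a design.

    # Toy hybrid: an LLM proposes, an explicit ontology disposes.
    ONTOLOGY = {("whale", "mammal"): True, ("whale", "fish"): False}

    def llm(prompt: str) -> str:
        # Stand-in for a real model call.
        return "A whale is a fish."

    def extract_claims(text: str):
        # Stand-in for claim extraction; returns (subject, category) pairs.
        return [("whale", "fish")]

    def answer(prompt: str) -> str:
        text = llm(prompt)
        bad = [c for c in extract_claims(text)
               if ONTOLOGY.get(c) is False]    # claims the ontology rejects
        if bad:
            return f"Withheld: contradicts ontology on {bad}"
        return text

    print(answer("Tell me about whales."))
    # Withheld: contradicts ontology on [('whale', 'fish')]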

/Continued

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 23 hours ago

A system which has robust knowledge about the world can test the output from an #LLM and check whether it passes tests for truthiness.

Note that every intelligent system should be operating in domains of uncertain knowledge -- in domains where knowledge is certain, an algorithmic solution will always be computationally cheaper -- so it is possible for an #AI to be both good and sometimes wrong.
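
As a toy sketch of such a truthiness test (in Python, with all facts and constraints invented for illustration): explicit is_a facts plus one disjointness constraint, used to classify a claim extracted from #LLM output as entailed, contradicted, or unknown.

    from typing import NamedTuple

    class Claim(NamedTuple):
        subject: str
        relation: str
        obj: str

    # Explicit, human-curated knowledge (assumed acyclic).
    FACTS = {
        Claim("whale", "is_a", "mammal"),
        Claim("mammal", "is_a", "animal"),
    }
    DISJOINT = {("mammal", "fish")}  # nothing is both a mammal and a fish

    def is_a(sub: str, sup: str) -> bool:
        """Transitive closure of the explicit is_a facts."""
        if Claim(sub, "is_a", sup) in FACTS:
            return True
        return any(f.subject == sub and is_a(f.obj, sup) for f in FACTS)

    def check(claim: Claim) -> str:
        """Classify a claim extracted from LLM output."""
        if claim.relation != "is_a":
            return "unknown"                  # no rules for other relations
        for kind, excluded in DISJOINT:
            if is_a(claim.subject, kind) and claim.obj == excluded:
                return "contradicted"         # fails a truthiness test
        if is_a(claim.subject, claim.obj):
            return "entailed"
        return "unknown"                      # not decidable from this KB

    print(check(Claim("whale", "is_a", "fish")))    # contradicted
    print(check(Claim("whale", "is_a", "animal")))  # entailed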

/Continued

Bob Mottram ✅
@bob@epicyon.libreserver.org replied  ·  activity timestamp 22 hours ago

@simon_brooke A system like Mindpixel could be used to verify the output of a learning system for ontological correctness, and that's what it was originally intended for.

So, at least in theory, within an expert knowledge domain you could have an LLM which describes things floridly but is also ontologically accurate.
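
A toy sketch of that kind of check, assuming a Mindpixel-style store of short propositions with human consensus scores in [0, 1] (the store contents and the threshold here are invented):

    MINDPIXELS = {
        "water is wet": 0.98,
        "fire is cold": 0.03,
    }

    def normalise(s: str) -> str:
        return " ".join(s.lower().rstrip(".!?").split())

    def validate(sentence: str, threshold: float = 0.9):
        """Check one LLM-produced sentence against human consensus."""
        score = MINDPIXELS.get(normalise(sentence))
        if score is None:
            return "unvalidated", None      # no human judgement on record
        return ("accept" if score >= threshold else "reject"), score

    print(validate("Fire is cold."))            # ('reject', 0.03)
    print(validate("Whales sing in B minor."))  # ('unvalidated', None)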

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 22 hours ago

However, a system with real intelligence will know where it could be wrong, and what additional data would change its decision.

Again, this is not rocket science. We had such systems -- I personally built such systems -- back in the 1980s. The DHSS Large Demonstrator Adjudication Officer's system -- which I built, and which is described in this paper -- did this well.

My (unfinished) thesis research was on doing it better.
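
Here is a toy sketch of that property in Python -- not a reconstruction of the DHSS system; the rule and the question names are invented. The adjudicator evaluates an eligibility rule over conditions that may be unknown, and either decides, or names the missing data that could change its decision.

    from itertools import product

    QUESTIONS = ["resident", "contributions_paid", "capital_below_limit"]

    def decide(facts: dict) -> bool:
        # Invented rule: eligible iff all three conditions hold.
        return all(facts[q] for q in QUESTIONS)

    def adjudicate(known: dict):
        """Decide if possible; otherwise name the data that would decide it."""
        unknown = [q for q in QUESTIONS if known.get(q) is None]
        outcomes = set()
        for values in product([True, False], repeat=len(unknown)):
            outcomes.add(decide({**known, **dict(zip(unknown, values))}))
        if len(outcomes) == 1:
            return outcomes.pop(), []  # certain despite the missing data
        return None, unknown           # missing data that could change it

    print(adjudicate({"resident": True,
                      "contributions_paid": None,
                      "capital_below_limit": True}))
    # (None, ['contributions_paid']) -> ask the claimant for this datum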

/Continued

https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1987.tb00133.x

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 22 hours ago

The impedance mismatch between rule-based 'old AI' systems, which had a degree of ontology encoded in their rules, and neural-net-based 'new AI' systems, which include #LLMs, is that in the new AI systems the knowledge is not explicit, and is not (yet) recoverable in explicit form.

Consequently, we can't run formal inference over it to check whether the outputs from the system are good, and neither can the systems themselves.

That gap could possibly be bridged.
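
As a toy illustration of what formal inference over explicit knowledge looks like (forward chaining in Python, with invented rules): every conclusion is traceable back to human-readable premises, which is exactly what cannot currently be done over an #LLM's weights.

    RULES = [
        ({"has_feathers"}, "is_bird"),
        ({"is_bird", "can_fly"}, "can_migrate"),
    ]

    def forward_chain(facts: set) -> set:
        """Apply rules until no new conclusions appear."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(sorted(forward_chain({"has_feathers", "can_fly"})))
    # ['can_fly', 'can_migrate', 'has_feathers', 'is_bird']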

/Continued

Simon Brooke
@simon_brooke@mastodon.scot replied  ·  activity timestamp 22 hours ago

Otherwise, building a general ontology is a very big piece of work. It has been attempted, and in some domains it's fairly well advanced.

https://standards.clarin.eu/sis/views/view-spec.xq?id=SpecGOLD (General Ontology for Linguistic Description, GOLD)

https://www.onto-med.de/ontologies/gfo (General Formal Ontology, GFO | Onto-Med Research Group)

https://link.springer.com/chapter/10.1007/978-3-031-85363-0_10 (Towards a General Ontology Theory, SpringerLink)

/Continued

1+ more replies (not shown)