General ontologies are a huge piece of work, but it's possible work. We know how to do it. They can be built and will be built, and, because they are explicit, will be relatively easy both to maintain and to extend.
/Continued
We've had niches where AI had real cost benefit since the 1980s -- I've designed and led teams on some such systems myself -- but they're rare and they're point solutions, not cheaply generalisable.
Today's #StochasticParrots offer no cost benefit, except in domains where accuracy truly does not matter, and those are rare. In every other domain, the cost of checking their output is higher than the cost of doing the work.
/Continued
It's not impossible that that could change. Some hybrid of an #LLM with a robust ontology must be possible. It's not impossible that systems could be built which could construct their own robust ontology.
But my personal opinion is that #LLMs are largely an evolutionary dead end, and that as they increasingly feed on their own slop, the quality of their output will deteriorate, not improve.
/Continued
A system which has robust knowledge about the world can test the output from an #LLM and check whether it passes tests for truthiness.
Note that every intelligent system should be operating in domains of uncertain knowledge -- in domains where knowledge is certain, an algorithmic solution will always be computationally cheaper -- so it is possible for an #AI to be both good and sometimes wrong.
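To make that concrete, here's a minimal sketch -- not any real system; the classes, facts and triple format are all invented for illustration -- of checking candidate statements from an #LLM against a small explicit ontology, with 'unknown' as a legitimate third outcome:

```python
# Explicit knowledge: subclass relations plus known facts and known falsehoods,
# each fact written as a (subject, predicate, object) triple.
SUBCLASS = {
    "sparrow": "bird",
    "penguin": "bird",
    "bird": "animal",
}
FACTS = {
    ("bird", "has_part", "wings"),
    ("penguin", "can", "swim"),
}
CONSTRAINTS = {
    ("penguin", "can", "fly"),   # explicitly known to be false
}

def ancestors(cls):
    """Walk the subclass chain upwards (e.g. sparrow -> bird -> animal)."""
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        yield cls

def check(triple):
    """Return 'supported', 'contradicted' or 'unknown' for a candidate triple."""
    subject, predicate, obj = triple
    subjects = [subject, *ancestors(subject)]
    # Contradicted if the claim, about the subject or any superclass, is a known falsehood.
    if any((s, predicate, obj) in CONSTRAINTS for s in subjects):
        return "contradicted"
    # Supported if it is a known fact about the subject or an inherited fact.
    if any((s, predicate, obj) in FACTS for s in subjects):
        return "supported"
    return "unknown"

# Candidate claims, as if extracted from #LLM output.
for claim in [("sparrow", "has_part", "wings"),   # supported
              ("penguin", "can", "fly"),          # contradicted
              ("sparrow", "can", "sing")]:        # unknown -- uncertainty is normal
    print(claim, "->", check(claim))
```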
/Continued
@simon_brooke A system like Mindpixel could be used to verify the output of a learning system for ontological correctness, and that's what it was originally intended for.
So, at least in theory, within an expert knowledge domain you could have an LLM which describes things floridly but is also ontologically accurate.
However, a system with real intelligence will know where it could be wrong, and what additional data would change its decision.
Again, this is not rocket science. We had such systems -- I personally built such systems -- back in the 1980s. The DHSS large demonstrator Adjudication Officer's system -- which I built, and which is described in this paper -- did this well.
My (unfinished) thesis research was on doing it better.
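By way of illustration only -- this is not the Adjudication Officer's system, and the benefit name and rule are invented -- here is a tiny rule-based decider that reports exactly which missing datum would settle its decision:

```python
# Each rule is (conclusion, list of conditions that must all be True).
RULES = [
    ("eligible", ["resident", "low_income", "not_in_work"]),
]

def decide(known):
    """Return (decision, missing) where missing lists facts that could change it."""
    for conclusion, conditions in RULES:
        values = [known.get(c) for c in conditions]
        if all(v is True for v in values):
            return conclusion, []
        if any(v is False for v in values):
            # Already ruled out; no additional data about this rule can rescue it.
            continue
        # Some conditions are unknown: the decision is provisional, and the
        # system can say which additional data would change it.
        missing = [c for c, v in zip(conditions, values) if v is None]
        return "undecided", missing
    return "ineligible", []

decision, missing = decide({"resident": True, "low_income": True})
print(decision)   # undecided
print(missing)    # ['not_in_work'] -- the datum that would settle the case
```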
/Continued
https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1468-0394.1987.tb00133.x
The impedance mismatch between rule-based 'old AI' systems, which had a degree of ontology encoded in their rules, and neural-net-based 'new AI' systems, which include #LLMs, is that in the new AI systems the knowledge is not explicit, and is not (yet) recoverable in explicit form.
Consequently, we can't run formal inference over it to check whether the outputs from the system are good, and neither can the systems themselves.
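For contrast, here is a minimal sketch of the kind of formal inference that explicit knowledge makes possible -- the facts and rules are invented, and no real system is implied. A forward-chainer computes everything the rules entail, and a claimed output can then be checked against that closure:

```python
# Explicit starting facts and rules: (premises, conclusion) pairs, where the
# conclusion follows whenever all premises are already established.
FACTS = {"socrates_is_human"}
RULES = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts, rules):
    """Repeatedly apply rules until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# An output from an opaque model can then be checked against the closure:
closure = forward_chain(FACTS, RULES)
print("socrates_will_die" in closure)      # True: entailed by the explicit rules
print("socrates_is_a_teapot" in closure)   # False: not derivable, so flag it for review
```

You cannot do this over a bag of weights, because there are no symbols to chain over.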
That gap could possibly be bridged.
/Continued
Otherwise, building a general ontology is a very big piece of work. It has been attempted, and in some domains it's fairly well advanced.
https://standards.clarin.eu/sis/views/view-spec.xq?id=SpecGOLD
https://www.onto-med.de/ontologies/gfo
https://link.springer.com/chapter/10.1007/978-3-031-85363-0_10
/Continued