Non-Axiomatic Reasoning

Up: AI and CogSci | Previous: Baby AGI and Developmental Cognitive Science | Next: Goal-Dependent Categorization

The basic idea behind non-axiomatic reasoning is to use experience gained while exploring an environment as evidence for or against hypotheses about different aspects of the environment.

This idea has been developed into a full-fledged Non-Axiomatic Logic (NAL) as well as a Non-Axiomatic Reasoning System (NARS). A truth value with two continuous components is used: one indicates how much the experience supports a given hypothesis, and the other indicates how much evidence there is in the first place. Research on NARS has been ongoing since the 1990s; unfortunately, it has remained a niche. Still, a substantial amount of code has been written exploring this research direction.
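For concreteness, here is a minimal Python sketch of such a two-component truth value, following the standard NAL definitions (frequency f = w+/w, confidence c = w/(w+k), with k the evidential horizon); the helper function and the raven example are my own illustration, not code from any NARS implementation.

```python
# A minimal sketch of an NAL-style truth value (my own illustration, not code
# from any NARS implementation). Given positive evidence w_plus and total
# evidence w = w_plus + w_minus, NAL defines
#   frequency  f = w_plus / w      (how much experience supports the hypothesis)
#   confidence c = w / (w + k)     (how much evidence there is at all)
# where k is the "evidential horizon" constant, commonly k = 1.

K = 1.0  # evidential horizon

def truth_value(w_plus: float, w_minus: float, k: float = K) -> tuple[float, float]:
    """Return (frequency, confidence) from positive/negative evidence counts."""
    w = w_plus + w_minus
    frequency = w_plus / w if w > 0 else 0.5  # no evidence yet: maximally unsure
    confidence = w / (w + k)
    return frequency, confidence

# Example: "ravens are black", after observing 4 black ravens and 1 white one.
f, c = truth_value(w_plus=4, w_minus=1)
print(f, c)  # frequency = 0.8, confidence ≈ 0.83
```

Note that further observations only push the confidence towards 1 without ever reaching it, which is the flavour of open-ended evidence accumulation that the next paragraph contrasts with Bayesian updating.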

In the 2020s, it is more popular to use probability distributions, and particularly Bayes' Law, to quantify evidence for and against different hypotheses. Pei Wang contrasts NAL with the issues of various other logic systems, to put NAL into context, in the first chapter of the NAL Book. In particular, measuring evidence with probability distributions requires that all the possibilities be known a priori: the proposition space must be pre-specified and fixed before any evidence measurement runs. (Also see The limitations of Bayesianism.) Non-Axiomatic Reasoning attempts to overcome this.
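To make the contrast concrete, here is a minimal sketch of a Bayesian update; the coin hypotheses and numbers are made up. The point is the normalization step: it sums over the whole hypothesis space, so that space has to be closed and specified before any evidence arrives.

```python
# A minimal sketch of Bayesian updating over a fixed hypothesis space; the
# coin hypotheses and numbers are made up. The normalization step sums over
# *all* hypotheses, so the space must be enumerated and given priors before
# any evidence arrives.

def bayes_update(priors: dict[str, float],
                 likelihoods: dict[str, float]) -> dict[str, float]:
    """P(H|E) = P(E|H) P(H) / sum over H' of P(E|H') P(H')."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    z = sum(unnormalized.values())  # requires the full, closed hypothesis space
    return {h: p / z for h, p in unnormalized.items()}

priors = {"fair_coin": 0.5, "biased_coin": 0.5}       # fixed up front
likelihoods = {"fair_coin": 0.5, "biased_coin": 0.9}  # P(heads | H)
print(bayes_update(priors, likelihoods))
# {'fair_coin': 0.357..., 'biased_coin': 0.642...}

# A hypothesis discovered only later (say, "two-headed coin") cannot be folded
# in without re-specifying the priors and renormalizing from scratch.
```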

However, it seems that a similar goal may also be achieved using causal models of the environment. One reason given for why humans operate robustly is that they maintain numerous causal models of the environment. In part, this is intuitive; and when intuition fails us, we resort to systematic investigation, a.k.a. science. It turns out that causal models can provide a way to perform robustly despite minimal amounts of data, similar to humans. Judea Pearl has written extensively on causality.
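As a toy illustration (my own, not taken from Pearl's texts), a structural causal model lets an agent answer interventional questions that purely observational statistics would get wrong, without gathering fresh data for each new situation; the "rain causes wet grass" example and its probabilities below are hypothetical.

```python
# A toy structural causal model: rain -> wet_grass. Once the mechanism is
# known, interventional questions ("what if we hose the grass ourselves?")
# can be answered without new data for that situation. All probabilities
# here are hypothetical.

import random

def sample(do_wet_grass=None):
    """Sample the model; do_wet_grass=True/False overrides the mechanism (an intervention)."""
    rain = random.random() < 0.3
    if do_wet_grass is None:
        wet_grass = rain or random.random() < 0.1  # observational mechanism
    else:
        wet_grass = do_wet_grass                   # intervention cuts the rain -> wet_grass link
    return {"rain": rain, "wet_grass": wet_grass}

observed = [sample() for _ in range(10_000)]
intervened = [sample(do_wet_grass=True) for _ in range(10_000)]

# Observationally, wet grass is strong evidence of rain ...
p_rain_given_wet = (sum(s["rain"] for s in observed if s["wet_grass"])
                    / sum(s["wet_grass"] for s in observed))
# ... but under the intervention do(wet_grass = True) it is not.
p_rain_given_do_wet = sum(s["rain"] for s in intervened) / len(intervened)
print(p_rain_given_wet, p_rain_given_do_wet)  # ≈ 0.81 vs ≈ 0.30
```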

Nevertheless, I think Non-Axiomatic Reasoning also provides a natural pathway to safe A(G)I. Meseems that mainstream AI safety research is trying to climb a slippery wall, except in areas that are already well understood by other sciences. Without rooting the AI's knowledge in a two-component truth value, and without giving the AI system itself a measure of the truthiness of its knowledge, AI (safety) research is left with measuring AI results against an external objective. This is like trying to raise a child by dictating to them what is good and what is bad. Unfortunately, parenting is more complex than that. Beyond a few well-defined areas of human knowledge, raising children rather involves giving the children themselves a way to figure out what is good or bad. Even in day-to-day interactions as colleagues, we trust each other not because the other never does anything we cannot comprehend, but because we can rely on each other to consult one another when we are unsure. Without giving AI this ability to measure the truthiness of its beliefs, such trust seems impossible.

However, the current framework of non-axiomatic reasoning linked above assumes a priori availability of symbols from the environment. Often, this is worked around by using modern machine learning systems to extract fixed categorical knowledge from the high-dimensional sense data obtained from the environment. Yet these systems typically assume that a given high-dimensional stimulus maps to only a single high-level category or symbol, which, as we discuss next, is not the case.
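To make that assumption explicit, here is a minimal sketch of a standard classifier head; the label set, weights, and stimulus are hypothetical placeholders, not any particular model.

```python
# A minimal sketch of the assumption in question: a standard classifier head
# collapses a high-dimensional stimulus to exactly one symbol via
# softmax + argmax. The label set, weights, and stimulus are hypothetical.

import numpy as np

LABELS = ["cup", "container", "gift"]  # fixed, pre-specified symbol set

def softmax(logits: np.ndarray) -> np.ndarray:
    exps = np.exp(logits - logits.max())
    return exps / exps.sum()

def categorize(stimulus: np.ndarray, weights: np.ndarray) -> str:
    """Map a high-dimensional stimulus to a single symbol."""
    logits = weights @ stimulus           # stand-in for a trained network
    probs = softmax(logits)
    return LABELS[int(np.argmax(probs))]  # one and only one category survives

rng = np.random.default_rng(0)
stimulus = rng.normal(size=128)               # stand-in for pixels / embeddings
weights = rng.normal(size=(len(LABELS), 128))
print(categorize(stimulus, weights))          # exactly one label, regardless of context
```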

Up: AI and CogSci | Previous: Baby AGI and Developmental Cognitive Science | Next: Goal-Dependent Categorization