sgo.to

NARS Workshop

Inner Speech with NARS

  • by Antonio Chella
  • plays a role in self-regulation and planning
  • focusing attention and self-attention
  • high-level cognition
  • internalizes human explanation
  • procedural knowledge vs declarative knowledge
  • cognitive architecture
  • self-regulation
  • self-directed questions (e.g. "do they know that the knife is dangerous?")
  • inner speech for conflict resolution
  • moral inner speech

AGI as Generalized Relational Operant Behavior

  • by Robert Johansson
  • NARS experiments from the perspective of Behavioral Psychology
  • Sensory channels -> Reasoning node
  • Three examples with increasing difficulty
    • operant conditioning / simple discriminations
    • conditional discriminations
  • Operant Behavior
    • an operant is a relation between organism and environment
    • it's a three-term relation between stimulus - response - consequence
    • the three terms can't be separated
    • example
      • two operations ^clap and ^wave triggered by an arbitrary goal G!
      • there is a light that can be on or off <light -> [on]> or <light -> [off]>
      • training at some point increases confidence of the right operation to carry out
      • G has a function of a reinforcer as it increases the probabilities of response
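The training dynamic in this example can be sketched numerically: under NARS's confidence formula c = w / (w + k), every reinforced trial adds evidence, so the confidence of "the right operation achieves G" rises toward 1. A minimal Python sketch (function names are mine, not OpenNARS code):

```python
# Toy sketch (not OpenNARS code): the confidence that ^clap, executed under
# <light -> [on]>, leads to the goal G grows with accumulated positive
# evidence, using the NARS confidence function c = w / (w + k).

def confidence(w: float, k: float = 1.0) -> float:
    """NARS confidence from total evidence w (evidential horizon k)."""
    return w / (w + k)

# Each reinforced trial adds one unit of positive evidence for the
# hypothesis that ^clap achieves G!.
evidence = 0.0
history = []
for trial in range(5):
    evidence += 1.0  # G! followed ^clap: one more unit of positive evidence
    history.append(confidence(evidence))

print(history)  # rises toward 1: 0.5, 0.666..., 0.75, 0.8, 0.833...
```

This mirrors the note above: G functions as a reinforcer because each occurrence strengthens (raises the confidence of) the response it follows.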
  • Relational Operant Behavior
    • conditional discriminations: e.g. the background color (e.g. blue/green) controls whether clapping or waving leads to G
    • Conditional Discrimination in Octopus
  • Generalized Relational Operant Behavior
    • Generalized Identity Matching, identity matching is a special case of conditional discriminations
    • given related experiences, the subject might then match examples in a totally new context
    • the task requires the subject to learn and apply the concept

The Explanation Hypothesis in Autonomous General Learning

  • by Kristinn R. Thorisson
  • agent, body, controller
  • cybernetics view
  • sensor, actuator, variables, observable variables, manipulatable variables, environment
  • complex task-environment: giant number of variables, relations and transformations
  • complex spatio-temporal patterns
  • novelty is common (in fact, it is the rule): it is never the same thing twice
  • the total number of variables vastly exceeds what a controller can remember or model over its lifetime
  • autonomous general learning
    • learning
      • knowledge acquisition
      • systematic buildup of information structures that allow a controller to:
        • predict,
        • achieve goals,
        • explain and
        • (re) create
        • a target phenomenon.
        • consisting of sets of models that capture:
          • the clustering of percepts
          • relations between these (causal, spatial, mereological, etc)
        • assisted by:
          • attention (resource management) mechanisms
        • to evaluate a model's usefulness:
          • hypotheses must be falsifiable
          • just like hypotheses in an empirical comparative experiment
        • this means their creation must be bounded by practical concerns
          • e.g. by limiting new models primarily to observable patterns and variables
    • autonomous
      • does not get customized or outside help
    • general
      • a wide range of novelty
      • regularly exposed to novelty
      • creates new knowledge through hypothesis making (analogy, random search, etc)
      • has a bootstrap program
    • knowledge acquisition process
    • reasoning
      • a systematic application of logic
        • the learning applies reasoning to generate hypotheses on the basis of
          • similarity of current state to prior ones
          • evidence from experience
          • situational information and
          • its currently active goals
          • consisting of a mixture of all methods (ampliative+deduction), but mainly of:
            • abduction, similarity and analogies
    • explanation
      • in the general case, a good explanation is a compact description that allows effective and efficient:
        • prediction, goal achievement and (re)creation of a particular phenomenon
        • NOTE(goto): maybe has some relationship with the gricean principle of communication?
    • the explanation methods and arguments used are therefore also subject to learning
    • general learning systems must be capable of reflection (self-evaluation, self-programming)
    • Reflection
    • the explanation hypothesis claims that self-explanation is critical to learning
    • if correct, autonomous general learning requires self-reflection
  • symbolic vs sub-symbolic

NARS and the Metamodel AGI

  • by Hugo Latapie
  • Ozkan Kilic, ex grad student of Pei Wang
  • sub-symbolic refers to what comes before knowledge (e.g. sensory data)
  • Metamodel Artificial General Intelligence (MAGI) Overview
  • fast processes from sensory data, reactive
  • thinking fast, thinking slow
  • neurosymbolic approach leveraging NARS for AIKR learning by reasoning
  • NLU
    • NLU cannot be achieved by statistical approaches (ML/DL)
    • Why?
      • ML/DL is about data compression
      • NLU is about decompression
    • Winograd Schema Challenge
      • The trophy doesn't fit into the brown suitcase because it's too ___.
      • "big" and "small" are almost equally possible for statistical models but not for humans.
    • ML/DL is great for NLP.
    • ML/DL, symbolic reasoners (e.g. NARS), ontologies, ConceptNet, WordNet are used as building blocks within MAGI (Metamodel for Artificial General Intelligence)
    • Knowledge is represented in a hierarchical metamodel & enriched by external world knowledge (data decompression)
    • e.g. "how many routers with Nexus OS have more than 5 issues in the network?"
    • e.g. "Hi Jane, there are 7 routers with Nexus OS that have more than 5 issues."

Explainable AI For First Responder Safety

  • Thomas Lu, Edward Chow, NASA Jet Propulsion Lab, Caltech
  • Explainable AI for First Responder Safety
  • Trusted & Explainable
  • TruePAL: DP turns the sub-symbolic signals into symbolic signals

Tutorial

  • NARS assumption: "intelligence is the capability of a system to adapt to its environment and to work with insufficient knowledge and resources"
  • AIKR
  • framework for a reasoning system:
    • a language for representation
    • a semantics of the language
    • set of inference rules
    • a memory structure
    • a control structure
    • advantages
      • domain independence
      • rich expressive power
      • justifiability of the rules
      • flexibility in combining the rules
    • desired features: general, adaptive, flexible, robust, scalable
  • non-axiomatic reasoning system
    • has a logic part and a control part
    • based on AIKR
    • term and statement
      • term: word, as name of a concept
      • statement: subject-copula-predicate
      • S -> P (S is-a P)
      • e.g. water -> liquid
      • a specialization-generalization
      • the inheritance copula is reflexive and transitive
      • reflexive (S -> S) and transitive (S -> T, T -> U => S -> U)
      • binary truth value
        • experience K: a finite set of statements
        • Beliefs K*: the transitive closure of K
        • A statement is true iff
          • either it is in K*
          • or it has the form X -> X
        • otherwise it is false
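This binary (NAL-0) semantics can be sketched in a few lines of Python, assuming experience is a set of inheritance pairs (names are illustrative, not from any NARS implementation):

```python
# Binary truth in NAL-0: a statement S -> P is true iff it is in the
# transitive closure K* of the experience K, or has the form X -> X.

def closure(k: set[tuple[str, str]]) -> set[tuple[str, str]]:
    """Transitive closure of the inheritance statements in K."""
    kstar = set(k)
    changed = True
    while changed:
        changed = False
        for (s, m1) in list(kstar):
            for (m2, p) in list(kstar):
                if m1 == m2 and (s, p) not in kstar:
                    kstar.add((s, p))
                    changed = True
    return kstar

def is_true(s: str, p: str, kstar: set[tuple[str, str]]) -> bool:
    return s == p or (s, p) in kstar  # X -> X is always true

K = {("water", "liquid"), ("liquid", "substance")}
Kstar = closure(K)
print(is_true("water", "substance", Kstar))  # True, by transitivity
print(is_true("substance", "water", Kstar))  # False: inheritance is one-way
```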
      • extension and intension
        • Te = {x | x -> T}
        • Ti = {x | T -> x}
      • Theorem: (S -> P) <=> (Se ⊆ Pe) <=> (Pi ⊆ Si)
      • Evidence
        • positive evidence for S -> P: w+ = |Se ∩ Pe| + |Pi ∩ Si|
        • negative evidence: w- = |Se - Pe| + |Pi - Si|
        • total evidence: w = w+ + w-
      • Truth-value defined
        • S -> P<f, c>
        • frequency: f = w+ / w
        • confidence: c = w / (w + 1)
      • Truth-value produced
        • a stream of statements
      • extend the operators to real-numbers:
        • not(x) = 1 - x
        • and(x, y) = x * y
        • or(x, y) = 1 - (1 - x) * (1 - y)
      • deduction
        • M -> P[f1, c1]
        • S -> M[f2, c2]
        • S -> P[f, c]
        • f = f1 * f2, c = c1 * c2 * f1 * f2
      • example:
        • bird -> animal [1.00, 0.90]
        • robin -> bird [1.00, 0.90]
        • robin -> animal [1.00, 0.81]
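The extended operators and the deduction rule can be checked against the robin example above. A small Python sketch (function names are mine):

```python
# The Boolean operators extended to real numbers in [0, 1], and the NAL
# deduction truth function built from them.

def nal_not(x):
    return 1.0 - x

def nal_and(*xs):
    r = 1.0
    for x in xs:
        r *= x
    return r

def nal_or(*xs):
    r = 1.0
    for x in xs:
        r *= 1.0 - x
    return 1.0 - r

def deduction(f1, c1, f2, c2):
    """<M -> P>[f1, c1], <S -> M>[f2, c2] |- <S -> P>[f, c]."""
    return nal_and(f1, f2), nal_and(f1, f2, c1, c2)

# bird -> animal [1.00, 0.90] and robin -> bird [1.00, 0.90]:
f, c = deduction(1.0, 0.9, 1.0, 0.9)
print(f, round(c, 2))  # 1.0 0.81, matching robin -> animal [1.00, 0.81]
```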
      • induction
        • M -> P[f1, c1]
        • M -> S[f2, c2]
        • S -> P[f, c]
        • f = f1, c = f2 * c1 * c2 / (f2 * c1 * c2 + 1)
      • abduction
        • P -> M[f1, c1]
        • S -> M[f2, c2]
        • S -> P[f, c]
        • f = f2, c = f1 * c1 * c2 / (f1 * c1 * c2 + 1)
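The NAL induction and abduction truth functions (stated here with evidential horizon k = 1, as in c = w / (w + 1)) can be sketched the same way; both produce at most one unit of evidence, so the derived confidence never exceeds 0.5:

```python
# Induction and abduction in NAL (with evidential horizon k = 1): each
# derives a single weak piece of evidence, with confidence c = w / (w + 1).

def induction(f1, c1, f2, c2):
    """<M -> P>[f1, c1], <M -> S>[f2, c2] |- <S -> P>[f, c]."""
    w = f2 * c1 * c2          # total evidence contributed by the premises
    return f1, w / (w + 1.0)

def abduction(f1, c1, f2, c2):
    """<P -> M>[f1, c1], <S -> M>[f2, c2] |- <S -> P>[f, c]."""
    w = f1 * c1 * c2
    return f2, w / (w + 1.0)

f, c = induction(1.0, 0.9, 1.0, 0.9)
print(f, round(c, 3))  # 1.0 0.448  (w = 0.81, c = 0.81 / 1.81)
```

Note the symmetry: abduction is induction with the roles of the two premises swapped, which is why the two functions mirror each other.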
    • revision
      • S -> P[f1, c1]
      • S -> P[f2, c2]
      • S -> P[f, c]
      • with wi = ci / (1 - ci): f = (w1 * f1 + w2 * f2) / (w1 + w2), c = (w1 + w2) / (w1 + w2 + 1)
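The revision rule pools the evidence behind two judgments about the same statement. Converting confidence back to evidence weight (w = c / (1 - c) for k = 1) turns it into a weighted average of frequencies; a Python sketch:

```python
# NAL revision (k = 1): convert each confidence to an evidence weight,
# pool the weights, and average the frequencies by weight.

def revision(f1, c1, f2, c2):
    """<S -> P>[f1, c1] and <S -> P>[f2, c2] |- <S -> P>[f, c]."""
    w1 = c1 / (1.0 - c1)      # evidence behind the first judgment
    w2 = c2 / (1.0 - c2)      # evidence behind the second judgment
    w = w1 + w2               # pooled evidence
    return (w1 * f1 + w2 * f2) / w, w / (w + 1.0)

# Two independent observations of <S -> P>[1.00, 0.90] (w = 9 each):
f, c = revision(1.0, 0.9, 1.0, 0.9)
print(f, round(c, 3))  # 1.0 0.947  (pooled w = 18, c = 18 / 19)
```

Unlike deduction, induction, or abduction, revision raises confidence above that of either premise, because the premises' evidence is accumulated rather than discounted.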
    • types of inference
      • local inference, forward inference, backward inference
    • memory structure
      • a task is a question, a goal or a piece of new knowledge
      • a belief is accepted knowledge
      • the tasks and beliefs are clustered into concepts, each named by a term
    • meaning of concept
      • every concept in NARS is fluid: its meaning is determined neither by reference nor definition
    • attention
      • combinatorial explosion, resource allocation, real time processing, contextual priming
    • the layers of the logic
      • atomic terms, derivative copulas and compound terms, statement and variable as terms, event / goal and operation as terms
    • procedural reasoning
      • events as statements with temporal relations
      • operations
      • goals as events to be realized
    • cardinality? how do we deal with it?
    • OpenNARS v3.1.2 Overview
      • procedural learning is concerned with representing the pre-conditions and post-conditions of an action, where an action is considered an operation
      • operation is an event
      • procedural knowledge is represented as (condition, operation) =/> consequence
      • =/> is the predictive implication copula ("happened before")
      • goal is an event that a system desires to achieve. to achieve a goal means to execute an operation.
      • the operations get executed
      • the operations need to be pre-registered
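A toy illustration of this execution scheme (plain Python, not the OpenNARS API; all names are mine): a goal triggers the pre-registered operation of a procedural rule whose condition currently holds:

```python
# Toy sketch of procedural execution: a goal event is achieved by executing
# the operation of a rule (condition, operation) =/> consequence whose
# consequence matches the goal and whose condition currently holds.

executed = []

def op_clap():                      # a pre-registered operation
    executed.append("^clap")

registry = {"^clap": op_clap}       # operations must be registered in advance

# procedural knowledge: (condition, operation) =/> consequence
knowledge = [(("<light -> [on]>", "^clap"), "G")]

def pursue(goal: str, current_events: set[str]) -> None:
    """Execute an operation whose rule predicts the goal and whose
    condition is among the currently observed events."""
    for (condition, operation), consequence in knowledge:
        if consequence == goal and condition in current_events:
            registry[operation]()

pursue("G", {"<light -> [on]>"})
print(executed)  # ['^clap']
```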

Ben

  • self-transcendence,

NOTE(goto): as a self-replicator, as an individual subject to Darwinism, my body seems to be a highly inefficient way to propagate my biggest contributions.

Carboncopies Foundation