
ARTUR Exmachina


Pocky:
The ARTUR project and discussion Paul is working on make me think of Ex Machina.  Curiosity was a huge element that drove the AI.

Paul:
I'm targeting more of an insect-level intelligence (which I believe is achievable with current technology).  But you're correct that curiosity is an absolutely essential element for any autonomous agent.  The agent must have some core motivation to try something different in a recognized context (or to try anything at all in a novel context).

DeepMind (and similar RL implementations) solves this with a diminishing random chance of non-ideal behavior in any given context.  This definitely works, but it has its own problems.  Training requires huge numbers of simulations in which the agent essentially brute-forces the problem until it reaches a policy that is deemed acceptable.  As we recently saw with the Uber self-driving car accident, this may not be good enough in situations where human safety is a concern.
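That diminishing-random-chance strategy is usually implemented as epsilon-greedy action selection with a decaying epsilon.  A minimal sketch (function names and decay schedule are illustrative, not taken from any particular DeepMind codebase):

```python
import random

def decayed_epsilon(step, start=1.0, floor=0.05, decay=0.995):
    """Anneal the exploration rate toward a small floor over training."""
    return max(floor, start * decay ** step)

def epsilon_greedy_action(q_values, epsilon, rng=random):
    """With probability epsilon take a random action, otherwise the greedy one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

Early in training epsilon is near 1 (almost pure exploration); by late training the agent almost always exploits its learned values, which is exactly the brute-force-then-settle behavior described above.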

My current thinking is that understanding curiosity (which is critical to implementing reinforcement learning effectively) requires first understanding emotions.  I recently read an interesting paper by Friedemann Pulvermüller which goes into the relevant biology in nice detail, with excellent visualizations.  It appears that emotional flavoring of sensory-motor mechanisms is at the core of how the brain establishes semantic grounding (and is thus a critical part of choosing what actions to take in a given context).
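For contrast, the usual engineering stand-in for curiosity in RL (not what Pulvermüller's paper is about, just the common technique) is an intrinsic novelty bonus added to the external reward, e.g. a count-based bonus:

```python
import math
from collections import Counter

visit_counts = Counter()

def intrinsic_reward(state, scale=1.0):
    """Count-based curiosity bonus: less-visited states yield a larger reward."""
    visit_counts[state] += 1
    return scale / math.sqrt(visit_counts[state])
```

The agent gets paid for visiting states it hasn't seen much, so "try something different" falls out of reward maximization itself, with no grounding in anything like emotion.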

Pocky:
On the topic of emotion, which the paper didn't really go into: emotion is an evolutionary trait to punish and reward survival-relevant behavior.  To "program" emotion, I guess you need to program a punishment and reward system, in a sense.
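In the RL framing, "programming" punishment and reward just means a scalar reward signal driving a value update.  A minimal tabular sketch (purely illustrative, not ARTUR code):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Positive reward reinforces the taken action; negative reward punishes it."""
    best_next = max(q[next_state])
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])
```

A positive reward nudges the value of the chosen action up, a negative one nudges it down, so the action becomes more or less likely to be chosen greedily in that state next time.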

Paul:
The significance of that paper here, I think, comes from taking it in the context of the amygdala:

[image: amygdala diagram]

Then relating that to Figure 2 from the paper:

[image: Figure 2 from the paper]

The amygdala outputs to the prefrontal cortex (PF), which is a hub in the distributed circuit for semantics, which in turn are grounded in sensory-motor activation.  This means that sensory input, the semantic representations derived from it, and the generated motor outputs are all flavored by emotional context.  The obvious conclusion is that emotional context is a critical part of the sensory-motor circuit and of action decisions.

I interpret this to mean that in order to understand concepts like "curiosity" that are important for more efficient reinforcement learning strategies, the above circuit (with its emotional input) needs to be understood and made part of the model.
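One way to sketch that "emotional flavoring" computationally (an assumption about how the circuit might be modeled, not the paper's model) is a scalar affect signal that biases every action score derived from the sensory features:

```python
def flavored_action_scores(sensory_features, affect, action_weights):
    """Score actions from sensory features, biased by a scalar affect signal.

    affect < 0 pushes toward avoidance-tagged actions, affect > 0 toward
    approach-tagged ones, so every score is "flavored" by emotional context.
    """
    scores = {}
    for action, (feature_w, affect_w) in action_weights.items():
        base = sum(f * w for f, w in zip(sensory_features, feature_w))
        scores[action] = base + affect * affect_w
    return scores
```

The same sensory input yields different action rankings depending on the affect signal, which is the point: emotion isn't a separate output, it reshapes the whole sensory-motor decision.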

Pocky:
So, you would have to program and categorize stimuli, and the reactions to those stimuli, based on the types of sensory input.  That's how I imagine the approach would work in the AI sense, even at the insect level.  It almost sounds like you would need to separate the AI's amygdala into subparts and have them work together (reference, consult, pass a logic checklist) to determine a reaction/action?
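That "subparts consulting each other" idea could be sketched as independent evaluators each scoring every candidate action, with an arbiter combining the votes (entirely hypothetical, just to make the question concrete):

```python
def arbitrate(candidate_actions, evaluators):
    """Each sub-module scores every candidate; the arbiter sums the votes."""
    totals = {action: 0.0 for action in candidate_actions}
    for evaluate in evaluators:
        for action in candidate_actions:
            totals[action] += evaluate(action)
    return max(totals, key=totals.get)
```

For example, a threat evaluator and a hunger evaluator can disagree about "approach", and the summed vote decides, which is one simple reading of "pass a logic checklist".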
