Author Topic: HTM Reinforcement Learning  (Read 17205 times)

Offline Paul

  • Administrator
  • double
  • *****
  • Posts: 3499
  • Developer
    • View Profile
    • PaulsCode.Com
HTM Reinforcement Learning
« on: October 31, 2016, 04:41:31 PM »
One of the important areas that Numenta's implementation of HTM hasn't covered yet is reinforcement learning (RL).  This is because reinforcement learning requires a system capable of taking actions based on input (i.e. sensory-motor inference), which is exactly the area Numenta is still actively researching.

Numenta's approach to HTM is to refer to neuroscience to understand how the cortex functions, and to keep their implementations true to their biological counterparts.  While I think this is definitely the correct long-term approach, I am not bound by that requirement for my project.  I have worked out my own HTM-based sensory-motor and RL design.  It is most likely quite different from the biological systems, but it should work (based on my current understanding of HTM concepts).

At a high level, the basic idea is to have three high-order sequence memory layers.  The first is a standard temporal memory layer, which learns patterns and context from sensory input; it projects into the second layer.  The second layer receives input from motor commands and projects into the third layer.  The third layer receives reinforcement input (reward/punishment).

[diagram: the three layers, with sensory input feeding layer 1, motor commands feeding layer 2, and reinforcement input feeding layer 3]

Each layer's input passes through the usual spatial pooler to select the active columns (i.e. different columns are active in each layer).  Neurons in the first layer are typical, growing distal dendrites connecting to other neurons in the same layer.  The second and third layers are a bit different.  Neurons in the second layer grow distal dendrites connecting to neurons in the first layer.  Similarly, neurons in the third layer grow distal dendrites connecting to neurons in the second layer.
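
To make that wiring concrete, here is a minimal sketch of the layer configuration in Python.  The Layer class and all of the source names are placeholders of my own, not anything from Numenta's codebase:

    class Layer:
        """One high-order sequence memory layer.

        proximal_source: the input space the spatial pooler draws from
        distal_source:   the cell population distal dendrites grow into
        """
        def __init__(self, name, proximal_source, distal_source):
            self.name = name
            self.proximal_source = proximal_source
            self.distal_source = distal_source

    # Layer 1: standard temporal memory -- distal dendrites stay in-layer.
    layer1 = Layer("sensory", proximal_source="sensory_input", distal_source="sensory")

    # Layer 2: columns represent motor commands; context comes from layer 1.
    layer2 = Layer("motor", proximal_source="motor_commands", distal_source="sensory")

    # Layer 3: columns represent reinforcement; context comes from layer 2.
    layer3 = Layer("reinforcement", proximal_source="reinforcement_input", distal_source="motor")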

[diagram: distal connections between the layers: layer 2 cells to layer 1 cells, layer 3 cells to layer 2 cells]

With this setup, the first layer can make inferences about what sensory information will come next based on the current context.  Columns represent the input, and neurons within the columns represent the context.  This is the typical HTM process as implemented by Numenta.

The second layer makes inferences about what motor commands will come next based on the current sensory context.  Columns in this layer represent the motor commands, and neurons within the columns represent the context.

The third layer makes inferences about rewards or punishments that will come next, based on the current sensory-motor context from the second layer.  Columns represent the reinforcement, and neurons within the columns represent the context.  The value of a particular set of motor commands in a given context is the immediate reward/punishment in that state plus the predicted rewards/punishments of the possible next actions.  The system can then take actions based on what it predicts will happen.
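
As a rough illustration of that scoring, here is a sketch of a recursive lookahead.  Everything in it is hypothetical: the toy helper functions stand in for predictions read out of the reinforcement layer, and the discount factor is my own addition, since I haven't pinned down how future reinforcement should be weighted:

    # Toy stand-ins for predictions read out of the reinforcement layer
    # (purely hypothetical; real values would come from the HTM layers).
    def immediate_reinforcement(state, action):
        return {"touch_stove": -1.0, "eat_snack": 0.5}.get(action, 0.0)

    def predicted_next_actions(state, action):
        return ["eat_snack"] if action != "eat_snack" else []

    def next_state(state, action):
        return state  # toy model: the state does not change

    def action_value(state, action, depth=2, discount=0.9):
        """Immediate reward/punishment in the resulting state, plus the
        discounted predicted rewards/punishments of possible next actions."""
        reward = immediate_reinforcement(state, action)
        if depth == 0:
            return reward
        futures = [action_value(next_state(state, action), a, depth - 1, discount)
                   for a in predicted_next_actions(state, action)]
        return reward + discount * max(futures, default=0.0)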

Besides rewards and punishments, I have also introduced the concept of "novelty".  These columns represent how many unknown outcomes a particular action might lead to (i.e. future actions down a path that the system has not yet tried).  The purpose is to let the system explore actions it hasn't tried yet, rather than always settling on the first positive action it has found in a particular context.

The system will have a curiosity level that grows over time, and is reduced any time it does something novel.  The more novel a path is, the more the system's curiosity is satisfied.  A combination of novelty score and curiosity level can eventually outweigh punishments that the system has encountered in the past, and cause it to try a particular action again in order to explore subsequent actions down that negative path that it hasn't tried yet (and which could lead to rewards).
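
Here is a minimal sketch of how curiosity and novelty might combine, assuming a simple additive trade-off; the growth rate, the scoring formula, and all of the names are placeholder choices of mine:

    class Curiosity:
        def __init__(self, growth_rate=0.01):
            self.level = 0.0
            self.growth_rate = growth_rate

        def tick(self):
            # Curiosity grows over time...
            self.level += self.growth_rate

        def satisfy(self, novelty):
            # ...and is reduced whenever the system does something novel.
            # The more novel the path, the more curiosity is satisfied.
            self.level = max(0.0, self.level - novelty)

    def action_score(reinforcement, novelty, curiosity):
        """Novelty weighted by curiosity can eventually outweigh a past
        punishment (negative reinforcement), triggering re-exploration."""
        return reinforcement + curiosity.level * novelty
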
Device: Samsung Galaxy Nexus i515
CPU: TI OMAP4460, 1.2 GHz (dual core, ARM Cortex-A9)
GPU: PowerVR SGX540, 307 MHz
RAM: 1 GB
Resolution: 720 x 1280
Rom: omni-4.4.4-20141014-toro-FML KitKat 4.4.4, rooted

Device: Eee PC 1015PEM
CPU: Intel Atom N550, 1.5 GHz (dual core, x86)
GPU: Intel GMA 3150, 200 MHz (dual core)
RAM: 2GB
Resolution: 1024 x 600
Rom: android-x86-4.3-20130725 Jelly Bean 4.3, rooted

Offline Paul

  • Administrator
  • double
  • *****
  • Posts: 3499
  • Developer
    • View Profile
    • PaulsCode.Com
Re: HTM Reinforcement Learning
« Reply #1 on: April 04, 2017, 04:18:05 PM »
I have been active on Numenta's forum lately, but keep forgetting to post on my own forum :)  Let me give a quick progress update on how things are going.

The biggest epiphany for me came from realizing that the concepts of "imagination" and "curiosity" (which were the most biologically implausible elements of my original design) can be simulated by existing functions of a spatial pooler.

Spatial poolers currently simulate inhibition by selecting a percentage of columns that best connect to the current input space, and only those columns activate. A slight modification of this function allows it to replace my earlier concept of "imagination" -- selecting a percentage of columns that best connect to the most positive reinforcement input space, and only those activate. The columns in the motor layer map to the motor commands, so the winning columns drive what actions are taken.
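
A sketch of that modification, assuming per-column overlap scores are already computed (numpy-based, with invented names; the k-winner selection itself is standard spatial pooler behavior):

    import numpy as np

    def select_active_columns(input_overlaps, num_active):
        """Standard inhibition: the columns that best connect to the
        current input space win and become active."""
        return np.argsort(input_overlaps)[-num_active:]

    def select_motor_columns(reinforcement_overlaps, num_active):
        """'Imagination' variant: rank motor columns by how strongly they
        connect to the most positive reinforcement input space instead.
        The winning columns determine which motor commands are taken."""
        return np.argsort(reinforcement_overlaps)[-num_active:]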

Spatial poolers also have a function for "boosting", which allows columns that haven't been used in a while to slowly accumulate a higher score, and eventually win out over other columns that have been used more frequently. This can be used to replace my earlier concept of "curiosity". Actions the system hasn't tried in a while, such as new actions or those which previously resulted in a negative reinforcement, will eventually be tried again, allowing the system to explore and re-attempt actions that could lead to new outcomes.
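
Boosting could be folded into that selection along these lines; this is a simplified stand-in for the duty-cycle boosting real spatial poolers use, with arbitrarily chosen parameter values:

    import numpy as np

    def update_boosts(boosts, active_mask, boost_step=0.01):
        """Columns that haven't won in a while slowly accumulate a higher
        boost; columns that just activated reset to neutral (1.0)."""
        boosts[~active_mask] += boost_step
        boosts[active_mask] = 1.0
        return boosts

    def select_with_boosting(reinforcement_overlaps, boosts, num_active):
        # A long-unused action's boost can eventually outweigh a better
        # reinforcement score, so the system tries that action again.
        return np.argsort(reinforcement_overlaps * boosts)[-num_active:]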

I drew up a diagram to help visualize what the current design looks like:

[diagram: sequence and feature/location layers sharing columns over the sensory input, with the motor layer and the reinforcement/pooling layers above]

The sequence and feature/location layers are complementary -- both use the same spatial pooler (the same columns activate in both layers), i.e. both receive proximal input from the sensors.  The sequence layer receives distal input from other cells in its own layer, while the feature/location layer receives distal input from an array of cells representing an allocentric location.

The motor layer receives proximal input from the reinforcement layer, via the modified spatial pooler that chooses the percentage of motor columns with the highest reinforcement scores (with boosting).  This layer receives distal input from active cells in both the sequence layer and the feature/location layer.  Columns represent motor commands, while cells in each column represent the sensory context.

Columns in the reinforcement layer represent how positive or negative a reinforcement is, and cells in the columns represent the sensory-motor context.  In my implementation, columns to the left represent more negative reinforcement, while columns to the right represent more positive reinforcement (with columns near the center being neutral) -- this is just to make it easier to visualize.  Cells in this layer receive distal input from active cells in the motor layer.
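
To illustrate that left-to-right layout, here is a toy encoder mapping a scalar reinforcement value to a column index; the column count and value range are arbitrary choices of mine:

    def reinforcement_column(value, num_columns=101, v_min=-1.0, v_max=1.0):
        """Most negative values land on the leftmost columns, neutral
        values near the center, most positive on the right."""
        value = max(v_min, min(v_max, value))
        return round((value - v_min) / (v_max - v_min) * (num_columns - 1))

    reinforcement_column(-1.0)  # 0   (far left: most negative)
    reinforcement_column(0.0)   # 50  (center: neutral)
    reinforcement_column(1.0)   # 100 (far right: most positive)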

My current design utilizes a two-layer circuit to pool reinforcement input.  This tweak eliminates the need to extend reinforcement predictions backwards through time (that is now handled by a function of the temporal pooler), allowing the implementation to align even more closely with traditional HTM concepts.  Output from the reinforcement pooling layer is passed through the modified spatial pooler, which chooses the percentage of motor columns that best map to the most positive reinforcement, with boosting.
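
A rough sketch of how the pooling layer could carry reinforcement forward without explicit backwards propagation: keep a slowly decaying union of recent reinforcement activity.  The decay scheme here is my own stand-in for whatever temporal pooling function is actually used:

    import numpy as np

    def pool_reinforcement(pooled, active_reinforcement, decay=0.9):
        """Union-style temporal pooling: new reinforcement activity enters
        at full strength while older activity fades gradually, so late
        predictions stay represented across the whole sequence."""
        pooled *= decay
        return np.maximum(pooled, active_reinforcement.astype(float))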

There is still some more tweaking to do, but it is definitely starting to come together.  The most recent changes came from watching the HTM Chat with Jeff.  One is the association of the sequence and feature/location layers.  Location input itself, however, is currently just an array of input cells representing an allocentric location, which the feature/location layer connects to distally.  Egocentric location is still missing, as is tighter feedback between the two regions.  The other idea from Jeff's slides was the two-layer circuit, which gave me the idea of handling reinforcement feedback with a pooling layer.
