Neuroscience based machine intelligence models by Gary Gaulin

Sunday, December 6, 2015

Intelligence Design Lab #5 is now online!

The software package is available from Planet Source Code:
http://www.planetsourcecode.com/vb/scripts/ShowCode.asp?txtCodeId=74628&lngWId=1

Or download with this link, which includes compiled exe for Windows:

THEORY OF OPERATION – HOW IT WORKS
The Intelligence Design Lab-5 is a cognitive model whose behavior is guided by a navigational network that maps out an internal representation of its external environment (an internal world model) using a 2D array, in which signal flow (magnitude and direction) vectors point out the shortest path to where the critter wants to go. This is a vital part of our visual imagination. During human development it is common and expected for this to lead children to stretch out their arms and say "I can fly!" as they run around, visualizing themselves navigating the sky.
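As a rough illustration of that internal world model, here is a minimal sketch, in Python rather than the Lab's Visual Basic, with invented names (Place, grid, GRID_SIZE), of a 2D array in which each place stores a signal magnitude and a flow direction:

```python
from dataclasses import dataclass

# Hypothetical record for one place in the internal map: the strength of the
# navigation signal there and the direction the signal flows (a unit step
# pointing one place along the shortest path toward the goal).
@dataclass
class Place:
    magnitude: float = 0.0        # 0.0 means no signal has reached this place
    direction: tuple = (0, 0)     # (dx, dy) step along the shortest path

# A small square internal world model mirroring the 2D arena.
GRID_SIZE = 16
grid = [[Place() for _ in range(GRID_SIZE)] for _ in range(GRID_SIZE)]
```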
Physical properties at each place in the external environment are mapped into the network according to whether the place is safely navigable, an unnavigable boundary or barrier, or an attracting place (in this case, where the food is).
An attracting location in the network continuously signals (fires action potentials), and that signal propagates outward in all directions and around barrier locations, which do not signal at all (the signal stops there, just as the critter would stop by bashing into a barrier). In math these directional activity patterns are shown using a vector map. The ID Lab provides this in the onscreen Navigation Network form, which can show the signal direction through each place in the network.
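One common way to compute such an outward-spreading, barrier-respecting wave is a breadth-first flood fill from the attracting place. The sketch below is my own simplification of the idea (reusing the hypothetical grid and Place above), assigning each reachable place a direction that points one step along the shortest path back toward the food:

```python
from collections import deque

def propagate(grid, food, barriers):
    """Spread the attractor's signal outward through the place network.

    food     -- (x, y) index of the attracting location
    barriers -- set of (x, y) places that never signal (the wave stops there)
    After the call each reachable place's direction points one step along the
    shortest path toward the food, and its magnitude falls off with distance.
    """
    size = len(grid)
    fx, fy = food
    grid[fx][fy].magnitude = 1.0
    frontier = deque([food])
    reached = {food} | set(barriers)      # barriers block the wavefront

    while frontier:
        x, y = frontier.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nx, ny = x + dx, y + dy
            if 0 <= nx < size and 0 <= ny < size and (nx, ny) not in reached:
                reached.add((nx, ny))
                # Point back toward where the signal arrived from, i.e. one
                # step along the shortest path to the food.
                grid[nx][ny].direction = (-dx, -dy)
                grid[nx][ny].magnitude = grid[x][y].magnitude * 0.95
                frontier.append((nx, ny))
```

Calling this with the food's place after resetting the grid fills every reachable place with an arrow along its shortest path toward the food, which is roughly the kind of picture the onscreen Navigation Network form visualizes.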
Its confidence in motor actions (forward/reverse and left/right) depends on how well the magnitude and direction it is actually traveling match the magnitude and direction of the signal flow at the place it currently occupies. Where there is more than one pathway, the shortest path dominates: its signal is the first to propagate to that point, so it is favored. Where there are two or more paths of equal distance the critter may become indecisive, but it will soon favor one path over the others.
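A minimal sketch of that confidence test, under my own assumptions (not the Lab's actual If-Then code), compares the critter's actual travel step with the signal-flow vector at the place it occupies:

```python
def motor_confidence(actual_dx, actual_dy, place):
    """Return high confidence when the actual travel step lines up with the
    network's flow at this place, low when it opposes it, scaled by the local
    signal magnitude (weak or absent signal -> little confidence)."""
    flow_dx, flow_dy = place.direction
    # Dot product of the travel step and the flow step: for unit steps this
    # is +1 when aligned, 0 when perpendicular, -1 when opposed.
    alignment = actual_dx * flow_dx + actual_dy * flow_dy
    return max(0.0, alignment) * place.magnitude
```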
To test its place avoidance behavior, a hidden shock zone slowly rotates counterclockwise while the critter chases food in a clockwise direction, heading straight toward the hazard. Although the test is demanding, the confidence system of this intelligence strives for perfection, as does a human athlete. The relatively high confidence levels shown in the included line chart indicate that the virtual critter is having fun. In the research paper "Dynamic Grouping of Hippocampal Neural Activity During Cognitive Control of Two Spatial Frames" (see Notes), which the arena and some of the navigational network are based upon, it was found that some live rats preferred to chase after the treats even though they were not hungry enough to need to eat, while others preferred to remain in the shock-free center zone. Even a live animal has to first be willing to accept the challenge. For the virtual critter, several If-Then statements that compare actual travel magnitude and direction to those of the internal representation are enough to make it want nothing else but to chase the food around its arena.
Intentionally getting out of the way of the approaching invisible shock zone requires the ability to predict future environmental events from past experience. This was added by alternating between the current angular time frame (by default the room angle runs from 0 to 15) and the next angular time frame ahead, so the places that will soon become a shock hazard periodically signal as places to avoid. This sequential on-and-off signaling causes a temporal decision to be made over time. The same works for swarming bees: scouts that find a possible new place to build a hive are allowed, one at a time, to dance out the location for other bees to inspect. This way each option is considered before a final decision is made. Otherwise all the bees would either swarm to the first site found or to different ones, instead of staying together.
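A rough sketch of that alternation, with assumed names and the room angle range mentioned above: on even time steps the avoid map for the current shock-zone angle is presented, and on odd steps the map for the next angle ahead, so places that will soon be hazardous periodically signal as places to avoid:

```python
ROOM_ANGLES = 16   # room angle runs 0..15 by default, per the description above

def shock_places_at(angle):
    """Placeholder for the set of (x, y) places covered by the shock zone at a
    given room angle; in the Lab this would come from the rotating zone itself."""
    return set()

def presented_avoid_places(tick, current_angle):
    """Alternate between the current angular time frame and the next one ahead."""
    if tick % 2 == 0:
        return shock_places_at(current_angle)
    return shock_places_at((current_angle + 1) % ROOM_ANGLES)
```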
The virtual critter cannot (like a swarm of bees) divide itself and go separate ways; therefore, appropriate actions are arrived at simply by repeatedly presenting, in any sequence, what must be considered.
Exactly what it will choose to do at any given time is as hard to predict as it is in real animals. The only way to know for sure is to read its mind, which (by adding RAM monitoring code) is possible to do with the ID Lab critter. Even then, it is nothing like the easily predictable, zombie-like "programmed" actions of an algorithm that uses math to steer it in a given direction in response to an approaching hazard; this model instead simply presents the options to consider, then leaves the decision for the critter to figure out on its own.
After avoiding being surrounded by the approaching zone, it must have the common sense to go around behind it and wait for the food to be in the clear, while still knowing where the food is located even when the food is surrounded by places to avoid that can (where the signal timing is far off) block its signal activity. Where the signals from attract and avoid locations combine, the urge to go both toward and away from the food makes it nervously anxious and skittish, just as real animals are when faced with such a dilemma.
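Where an attract wave and an avoid wave overlap, their flows can largely cancel; a small illustration of how that tug-of-war might be expressed (my own sketch, assuming the avoid flow follows the same convention of pointing toward its source):

```python
def net_pull(attract_place, avoid_place):
    """Combine the pull toward the food with the push away from the hazard.

    Both flows are assumed to point toward their source, so the hazard's flow
    is subtracted. When the two nearly cancel, the net vector shrinks toward
    zero -- the indecisive, 'anxious' state described above."""
    ax, ay = attract_place.direction
    hx, hy = avoid_place.direction
    net_x = ax * attract_place.magnitude - hx * avoid_place.magnitude
    net_y = ay * attract_place.magnitude - hy * avoid_place.magnitude
    return net_x, net_y
```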
The signal timing that was found to work best closely follows Hebbian theory: neighboring cells that fire together wire together into a network whose activity patterns recreate the physical properties of what is in the external environment. It can also be conceptualized as a conservation-of-energy strategy in which, at each place in the network, an incoming charge is transferred to the uncharged neighbors on the opposite, outgoing side. The signal energy is moved from place to place, not destroyed and then regenerated all over again.
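That charge-transfer view can be sketched as a simple update rule, under my own assumption that each active place hands its charge to the still-uncharged, non-barrier neighbors on its outgoing side and then goes quiet:

```python
def transfer_step(charge, barriers):
    """One timing step of the conservation-of-energy view.

    charge   -- dict mapping (x, y) -> charge for currently active places
    barriers -- set of (x, y) places that never accept charge
    Each active place splits its charge among its uncharged, non-barrier
    4-neighbors and then goes quiet, so the energy is moved onward rather
    than destroyed and regenerated."""
    new_charge = {}
    for (x, y), c in charge.items():
        receivers = [(x + dx, y + dy)
                     for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                     if (x + dx, y + dy) not in charge
                     and (x + dx, y + dy) not in barriers]
        for n in receivers:
            new_charge[n] = new_charge.get(n, 0.0) + c / len(receivers)
    return new_charge
```

A fuller model would also remember places that have already carried the wave (a refractory period) so the charge keeps moving outward instead of sloshing back, but the sketch shows the basic hand-off.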
To establish a benchmark that assumes error-free signals from the parts of the brain that use dead reckoning to convert what is seen through the eyes into spatial coordinates in the external environment, the program simply uses the already calculated X,Y positions that place things in the virtual environment. In the real world our brain converts visual signals into these spatial X,Y locations, whereas a virtual environment has to start with them. If such a dead-reckoning system were added to this model and worked perfectly, these are exactly the coordinates it would produce. Using the exact coordinates the program already has provides ideal numbers to work from, which in turn gives the critter an excellent sense of where visible things are located around it, even though in this Lab its eyes cannot visually see them.
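In code terms the benchmark is trivial: instead of estimating position from vision, the model reads the exact coordinates the simulator already has (a sketch with assumed names):

```python
CELL_SIZE = 1.0   # assumed arena units per place in the 2D network

def ideal_dead_reckoning(critter):
    """Benchmark 'dead reckoning': return the simulator's own exact X,Y
    position, standing in for error-free output from brain areas that would
    normally derive these coordinates from vision."""
    return critter.x, critter.y

def place_index(x, y):
    """Map an exact arena coordinate onto its place (cell) in the network."""
    return int(x // CELL_SIZE), int(y // CELL_SIZE)
```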
This navigation system demonstrates how simple it is to organize a network that provides navigational intuition like ours. It helps explain why animals (insects are also animals) seem born with a navigational ability that is there from the start. The origin of this behavior in living animals does not have to be a learned instinct that slowly developed over many millions of years by blundering animals passing on slightly less blundering behavioral traits to their offspring. It is possible for these neural navigational networks to have existed when multicellular animals first developed, which set off the Cambrian Explosion. The origin of these inherent navigational behaviors may be best explained by the activity patterns in these relatively simple cellular networks.
The origin of our brain may in part lie in subcellular networks that work much the same way in unicellular protozoans (single-celled animals) such as paramecia, which have eyespots, antennae, and other features once thought to exist only in multicellular animals. Testing such a hypothesis using this computer model requires additional theory, which may have a controversial title, but going further into biology this way meets all of the requirements of the premise of an already proposed theory. In a case like this, regardless of the controversy, science requires developing the already existing theory. Therefore see TheoryOfID.pdf in the Notes folder for a testable operational definition of "intelligent cause" in which each of the three emergent levels can be individually modeled. It is predicted that this will make it possible to demonstrate a never-before-programmed intelligent causation event, which is still a further research goal and a challenge for all to enjoy.
