
Robotics and System 4
by Robert Campbell, 2006


The following demonstrates the value of System 4 as it may be applied to robotics and computer generated Artificial Intelligence (AI).

Introductory Comments

1. It is not necessary to see deeply into the dynamics of each of the nine Terms of System 4 to apply it to Artificial Intelligence. The System can be understood at more superficial levels of abstraction if the meanings of the System Terms are simply accepted as valid. The overall pattern can then more readily be applied to computer programming in directly practical ways, even though the whole System cannot be reduced to an algorithm.

2. For navigating irregular terrain, a hexapod robot has obvious advantages in applications such as a Mars rover. Moreover, there is a mechanical linkage system that can be used for each pair of legs, so that a single motor can activate each pair to walk. The linkage is similar to the six-term graphical pattern of System 4, such that when one side advances up and forward the other side thrusts down and backward in a walking motion.

3. The spine of the hexapod can be articulated to make turns by means of two additional motors operating spinal joints between each pair of legs, thus enabling the robot to avoid obstacles.

4. System 4 allows for simulated strides that alternate with strides that react directly to sensory input. The simulated strides are called Regenerative and involve an anticipated plan over a series of strides. The reactionary strides are called Expressive and respond to immediate sensory input one stride at a time. These Regenerative and Expressive strides must be mutually reconciled in an ongoing fashion. The same principles can be applied to grasping and manipulating objects, such as a baby learning to grasp things.

5. Elements of experience are learned piecemeal and gradually assimilated into more coherent complex actions. Each element of experience can be considered a unit memory. Grasping with the fingers is one of the first things a baby learns. We are born far more helpless than other animals and must learn nearly everything through conscious effort, even before we have language to assist us.

6. Proprioceptive simulation, as in the Regenerative mode, is indispensable to this learning process. The proprioceptive nervous system tells us how the body is positioned and oriented in space, and the proprioceptive neuromuscular spindles, the tiny sensory organs located throughout the muscles of the body, are structured to allow simulation of anticipated actions. Learning is more than just a causal process of successive responses to external stimuli. It also involves anticipation of a future desired result and a process of simulation to achieve it. Language greatly enhances our ability to simulate experience in the abstract and to formulate far-reaching plans that nevertheless require continual adjustment.

7. Practical applications of AI in robots can be one of several avenues through which we may become more conscious of how the cosmic order works. At this time in human history, with so much potential conflict looming ahead, we need to expand our horizons beyond vested interests. We need a more universal context within which to constructively express our many diverse concerns. What is called the involutionary variant of the cosmic order leads inexorably to fragmentation and conflicts of interest, to the ultimate benefit of no one and to the detriment of all. Whatever one's sentiments in this regard, it is not necessary to attach any idealistic override to this offering of ideas freely given.

8. In carefully studying what follows, patient reference to the System 4 material on this website will help in gaining an initial overall grasp of the System and how it works.

System 4 and Robotics

The following simplifies the essence of System 4 as much as possible as it relates to a virtual robot. Keep in mind that language is limited in the degree to which it can describe how the System works, so that meanings must be interpreted contextually. The description that follows relates quite directly to the task of generating AI in a robot.

It helps if we have simple mechanical linkages established for the legs to begin with. We do not have to explore the evolution of legs, as the invertebrates did before evolution settled on a quadruped limb structure for all vertebrates. We can assume a hexapod for walking stability and the simplest linkages to make it easy. Any linkage method may be used, of course, but other methods necessitate proprioceptive organs and make the simple act of walking more complex.

As explained on the website there is a System 4 hierarchy involving 4 active Centers (C) that implicitly give direction to one another as follows:

(C1) Idea → (C2) Knowledge → (C3) Routine → (C4) Form

(C1) Idea can be regarded as the electronic activity in a computing program in a specific instance.
(C2) Knowledge is manifest in the program itself as it relates to the hardware.
(C3) Routine is the specific virtual routines that are being animated.
(C4) Form is how the above Routines determine the orientation of the Form of virtual concepts, such as the change in position of a robot with respect to the environment (whether it is a virtual robotic movement or a virtual perceptual idea derived from the environment).
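
As a rough illustration only, this hierarchy can be carried into a program as a simple ordered mapping. The sketch below is in Python; the names and descriptive strings are assumptions made for the sake of the example, not part of System 4 itself.

from collections import OrderedDict

# The four active Centers in the order in which each gives direction to the next.
CENTERS = OrderedDict([
    ("C1", "Idea: the electronic activity of the computing program in a specific instance"),
    ("C2", "Knowledge: the program itself as it relates to the hardware"),
    ("C3", "Routine: the specific virtual routines being animated"),
    ("C4", "Form: how the Routines orient virtual concepts, e.g. the robot's position in the environment"),
])

print(" -> ".join(CENTERS))   # C1 -> C2 -> C3 -> C4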

The hierarchy above is specified by the Primary Universal Term (Term 9), but for simplicity we can initially set the Universal Terms aside for the purposes here and consider only the six Particular Terms that relate directly to six specific structural elements that occur in every creative activity. These six Terms consist of six of the nine ways that four active Centers can relate to one another with respect to their inside and outside, but we need not go into that here either. Each Term has a meaning implicit within it, and we will take this meaning for granted.

The six Terms transform into one another in a specific repeating sequence that we will also take for granted as follows:

T1 → T4 → T2 → T8 → T5 → T7 → T1 → T4 → etc. (the six-step sequence keeps repeating)

The meaning implicit within each of the Terms is as follows:

T1 - Perception of need in relation to response capacity
T4 - Ordered sensory input alternately from the environment & simulated
T2 - Creation of idea as a potential action response or creative concept
T8 - Balanced response to sensory stimuli as a motor output (e.g.: to muscles or robot motors)
T5 - Action sequence (e.g.: muscular or motor driven) with proprioceptive feedback
T7 - Sequence encoded as a unit memory for recall to T1 and another sequence
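
Taking these meanings and the transformation sequence as given, they can be written down in code for later reference. A minimal Python sketch, assuming nothing beyond the lists above:

from itertools import cycle

TERM_MEANINGS = {
    "T1": "Perception of need in relation to response capacity",
    "T4": "Ordered sensory input, alternately from the environment and simulated",
    "T2": "Creation of idea as a potential action response or creative concept",
    "T8": "Balanced response to sensory stimuli as a motor output",
    "T5": "Action sequence with proprioceptive feedback",
    "T7": "Sequence encoded as a unit memory for recall to T1 and another sequence",
}

# The repeating transformation order: T1 -> T4 -> T2 -> T8 -> T5 -> T7 -> T1 -> ...
TERM_SEQUENCE = ["T1", "T4", "T2", "T8", "T5", "T7"]

stream = cycle(TERM_SEQUENCE)
print([next(stream) for _ in range(8)])   # ['T1', 'T4', 'T2', 'T8', 'T5', 'T7', 'T1', 'T4']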

The above Term transformations alternately go through an Expressive and then a Regenerative sequence, so there are 12 transformations, each called a Step. In the human nervous system each Step coincides precisely with a synapse in the way the nervous system is structured to work. So we have a means to follow initial sensory inputs through the sequence synapse by synapse for any process of integrated sensory perception, conceptual thought, or resultant action. In the case of integrating visual sensory images, Systems higher than System 4 are involved, since virtual images begin with System 5. We will focus here only on System 4. We should also be able to follow the same sequence in constructing a virtual robot.

There are three Particular Sets simultaneously transforming through each pathway through the nervous system, each Set being one Step apart. The Regenerative sequence in each case concerns a proprioceptive simulation of an anticipated future act, whereas the Expressive sequence is a programmed active response driven causally as a reaction to direct sensory input. Since the three Sets are out of step in the sequence, there is always an anticipated future that must be reconciled with a causally driven input from the past. In this way System 4 spans and integrates past and future. The two modes are mutually related and so must be mutually reconciled with one another. This process can integrate history in the broadest sense.

We can list the 12 Steps for each of the three Sets as shown below. This allows us to easily see which Terms in the Expressive and Regenerative Modes interact in each Step. The Regenerative Terms are marked with an R suffix and the Expressive Terms with an E suffix:

Step    Set 1    Set 2    Set 3
  1      T8E      T7R      T4E
  2      T5E      T1R      T2E
  3      T7E      T4R      T8E
  4      T1E      T2R      T5R
  5      T4E      T8E      T7R
  6      T2E      T5E      T1R
  7      T8E      T7E      T4R
  8      T5R      T1E      T2R
  9      T7R      T4E      T8E
 10      T1R      T2E      T5E
 11      T4R      T8E      T7E
 12      T2R      T5R      T1E
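
The table can be carried directly into a program as a lookup structure, which makes it easy to ask which Expressive and Regenerative Terms interact at any given Step. A minimal Python sketch, with the entries copied verbatim from the table above; the helper name is merely illustrative:

# The 12 Steps for the 3 Particular Sets (Set 1, Set 2, Set 3).
# An E suffix marks an Expressive Term, an R suffix a Regenerative Term.
STEP_TABLE = {
    1:  ("T8E", "T7R", "T4E"),
    2:  ("T5E", "T1R", "T2E"),
    3:  ("T7E", "T4R", "T8E"),
    4:  ("T1E", "T2R", "T5R"),
    5:  ("T4E", "T8E", "T7R"),
    6:  ("T2E", "T5E", "T1R"),
    7:  ("T8E", "T7E", "T4R"),
    8:  ("T5R", "T1E", "T2R"),
    9:  ("T7R", "T4E", "T8E"),
    10: ("T1R", "T2E", "T5E"),
    11: ("T4R", "T8E", "T7E"),
    12: ("T2R", "T5R", "T1E"),
}

def terms_at(step):
    # Return the (Set 1, Set 2, Set 3) Terms active at a given Step,
    # wrapping around since the 12-Step cycle keeps repeating.
    return STEP_TABLE[(step - 1) % 12 + 1]

print(terms_at(1))    # ('T8E', 'T7R', 'T4E') - new sensory input arrives via T4E in Set 3
print(terms_at(13))   # same as Step 1, since the cycle repeats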

New sensory input from the environment comes via T4E in Set 3 in Step 1. Sensory input T4E is always tensionally coupled to memory recall T7R to begin a related simulation sequence that will anticipate an appropriate response. Memory recall must always be coupled to sensory input in order for our thoughts, feelings, and actions to be relevant to ongoing circumstantial input. This must also be reconciled with the previous action sequence T8E (simultaneous motor instructions to muscles or motors) in order for there to be a smooth transition from sequence to sequence.

Sequence illustrations in the article Nervous System - Part 1 - Spinal Cord provide more detailed information on this, albeit very condensed. It takes a lot of study to understand this fully as it relates to human experience, but most of this can be set aside for a robot.

So, let us see how this relates to a virtual robot so far. It has three pairs of legs that move in symmetrically mirrored strides. Let us consider the paired movements one Step at a time, according to how the linkages of the legs are designed.

1. Front and rear pairs: As the leg on one side rises to step forward, the leg on the other side pushes down and moves backward to drive the robot forward. Since this can be accomplished by mechanical linkage with a single motor for each pair of legs, we do not have to compute motions for each joint segment in each leg. But we do need to set the distance that each step covers, so that the trailing feet do not collide with the feet ahead.

2. Middle pair: At the same time the middle leg on the opposite side rises to step forward, with the leg on the other side pushing down and back.

3. The next step is the mirror image of the first.

4. So, the front and back motors work in identical patterns and the middle motor works in sync but in a mirrored pattern. This can easily be programmed as a transmitted motor pattern T8E in Step 1 above. It keeps repeating, operating switches to activate the motors that move the limbs accordingly, as in T5E in Step 2. Every other Step has a T8E term, and the alternate Steps have either a T5E or a T5R term. (A sketch of a Step-by-Step control loop along these lines appears after this walkthrough.)

5. Let us assume that the robot has a scanning device to identify obstacles ahead that it must avoid in order to walk to a preprogrammed destination given by certain coordinates. In Step 1, the scanning device provides sensory input T4E for obstacles a number of estimated strides ahead. For example, it may be that the way ahead is clear in Step 1 for seven more strides but probably not for eight more strides. So a memory term T7R is recalled in Step 1 that begins a motor simulation T1R in the CPU in Step 2. Let us say that the scanning device identifies the size of the obstacle to be circumvented, so T7R will have to recall synchronous motor patterns for all of the motors involved, in such a way that they are integrated into a turning maneuver of so many degrees per stride. This turning maneuver is a programmed memory of previous turning maneuvers taken, and it may or may not be adequate to avoid the obstacle within eight strides, or it may be too sharp a turn.

6. Let us assume that the robot has two articulated joints in its spine, one between each pair of legs. A motor regulates the alignment of each spinal joint laterally but not vertically, and keeps the spine longitudinally straight when the robot is walking straight. The robot's feet (and/or joint segments) also have a certain amount of flexibility built into them to allow turns up to a maximum amount per stride. So, the program recalled to enact a simulation will have to take this into account and not exceed the turn per stride that could cause feet to drag or collide, although it may use a gentler turn. We don't want the robot going out of its way unnecessarily.

7. So in Step 2, T1R is doing a motor simulation that will redirect the robot over several strides in the future, while T2E is also generating a turning idea in the CPU from the direct external sensory input provided in T4E. But this latter turning idea is simply a reactionary response to the obstacle ahead, without the benefit of a simulation to see if the turn is sufficient or too much. There may also be a second obstacle further ahead to avoid, so the robot has to pick a course through. The reactionary or expressive idea T2E generated by direct sensory input may indicate a turn that is too fast. It can only try to make the turn in one stride according to the perceived angle it needs to turn, and it cannot simulate the turn stride by stride over a planned future course. It is also limited by the maximum turn that can be taken in one stride. So, in Step 2, the motor simulation T1R may exchange inputs to and from T2E. Both are executed in the CPU. The motor simulation only relates to adjustments to the spinal alignments, with possible necessary adjustments to the length of stride. The T2E term thus relates to a more simplistic motor pattern that will tend to get the turn over with as quickly as possible, but it can be modified by some input from the simulation.

8. The motor simulation T1R is not the actual simulation, however. It only indicates a tentative motor pattern that will hopefully be adequate over several strides. The actual simulation takes place in T4R in Step 3, where the next few stride positions are simulated with simulated sensory feedback as to the projected positions in relation to the obstacle. The perspective of the obstacle changes with the robot's position. A future path is charted that should be adequate but that will require Step by Step adjustments as the path opens around obstacles.

9. This simulated sensory feedback in T4R is tensionally coupled to a new memory term T7E, which incorporates motor patterns, recalled as elements of stride technique, that will be consistent with the simulation. It is a programmed automatic response from the computer memory that will fall within the parameters prescribed by the simulated sensory feedback. At the same time a consistent pattern of motor instructions T8E in Set 3 will be sent to operate switches and regulators for the motors to perform a stride in T5R in Step 4.

10. In Step 4, the motor programs have been selected from previous related experience that also falls within the current simulated parameters, so Knowledge (C2) directs Idea (C1) in a Regenerative T5R term rather than an Expressive T5E term, where C1 and C2 exchange places. So the switch from Expressive to Regenerative modes takes place here. When completed, this action pattern becomes stored as a T7R memory in Step 5. In Step 5 a related action pattern memory will be recalled simultaneously. In other words, a memory is being stored at the same time that a new but related memory is being recalled. The recalled pattern may differ from the pattern being stored in some aspects, since the recalled pattern is coupled to the new sensory input T4E in Set 2 that is synchronous with it in Step 5. Memory recall is always tensionally linked to sensory input.

11. T7E in Step 3 transforms into T1E in Step 4. T1E readies the necessary elements of the robot to receive new input from the environment. The scanning device must be readied, pointed, and focused to take another "snapshot" of obstacles ahead in T4E in Step 5.

12. At the same time T2R in Step 4 is the new simulated idea as a planned sequence of strides consistent with the simulation in T4R in Step 3 that anticipates avoiding the obstacle. This planned sequence of strides translates into a specific motor pattern T8E in Step 5. In this case T8E is the next stride in the planned sequence of strides. Subsequent planned motor pattern turning strides will require revision with respect to both circumventing the obstacle from a new perspective and getting back on course to the intended destination, because T2R terms alternate with T2E terms and the perspective from which sensory input comes keeps changing.
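
To make the walkthrough above more concrete, the Step sequence can be driven as a simple loop that dispatches on whichever Terms are active in each Step. The Python sketch below is only an outline under stated assumptions: it reuses the STEP_TABLE and terms_at lookup sketched earlier, and every device and planning routine named here (scan_ahead, recall_pattern, simulate_strides, reconcile, send_motor_pattern, execute_stride, store_unit_memory) is a hypothetical stub standing in for real scanner, planner, and motor code.

# Outline of a Step-by-Step control loop for the virtual hexapod.
# Assumes STEP_TABLE and terms_at() from the earlier sketch are already defined.
# All routines below are hypothetical stubs, not a real robot API.

def scan_ahead():                  return {"clear_strides": 7}    # T4E: scanner snapshot of obstacles ahead
def recall_pattern(obstacles):     return {"turn_per_stride": 5}  # T7R: recalled turning pattern
def simulate_strides(tentative, obstacles): return tentative      # T4R: project the next few stride positions
def reconcile(plan, obstacles):    return plan if plan is not None else {"turn_per_stride": 0}  # T2R/T2E
def send_motor_pattern(idea):      pass                           # T8E: transmit motor pattern to the leg motors
def execute_stride():              pass                           # T5E/T5R: one mirrored stride via the linkages
def store_unit_memory(plan):       pass                           # T7E: store the completed sequence as a unit memory

def run_steps(n_steps):
    state = {"obstacles": None, "recalled": None, "tentative": None, "plan": None, "idea": None}
    for step in range(1, n_steps + 1):
        active = terms_at(step)                        # e.g. ('T8E', 'T7R', 'T4E') at Step 1
        if "T4E" in active:                            # new sensory input from the environment
            state["obstacles"] = scan_ahead()
        if "T7R" in active:                            # memory recall coupled to that sensory input
            state["recalled"] = recall_pattern(state["obstacles"])
        if "T1R" in active:                            # tentative motor simulation begun in the CPU
            state["tentative"] = state["recalled"]
        if "T4R" in active:                            # actual simulation of the next few strides
            state["plan"] = simulate_strides(state["tentative"], state["obstacles"])
        if "T2E" in active or "T2R" in active:         # reactive or planned turning idea, mutually reconciled
            state["idea"] = reconcile(state["plan"], state["obstacles"])
        if "T8E" in active:                            # motor pattern transmitted to operate the motors
            send_motor_pattern(state["idea"])
        if "T5E" in active or "T5R" in active:         # one stride executed by the linked leg pairs
            execute_stride()
        if "T7E" in active:                            # completed sequence stored as a unit memory
            store_unit_memory(state["plan"])

run_steps(12)   # one full cycle of the 12 Steps

The point of the sketch is only that each Step's work follows from the table, so contingencies can be handled stride by stride rather than preprogrammed from start to finish.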

Five Steps is sufficient to illustrate how System 4 can be used to guide the robot. (In Step 5 new sensory input comes via T4E in Set 1.) This has obvious advantages over methods that attempt to preprogram the robot's path from start to finish. Any number of contingent obstacles that may crop up can be accommodated Step by Step and stride by stride. This process is greatly facilitated by the mechanical linkages of the hexapod that eliminate the need for proprioceptive organs in order to simulate and compute leg joint segment by joint segment movements in the simple process of walking.

When it comes to grasping and carrying things the robot would have to be fitted with arms and hands. Guiding these to specifically grasp identified objects and manipulating or moving them in desired ways could be done in a couple of ways, both of which amount to dependence on proprioceptive feedback. Proprioceptive devices can be fitted to provide sensory feedback to a second scanning device in the “eyes” of the robot, like little transmitters to a scanning receiver. The System 4 Steps would then follow as above for walking, but with more complex simulations and movements involved.

In humans, Expressive and Regenerative modes mutually influence one another and become automated over time (at the spinal level for behavioral patterns), if they are suitable behaviors of practical value. This 12 Step sequence thus forms the basis of the learning cycle spanning past and future. It works synchronously through any number of parallel pathways through the body at once, as in moving both hands together to perform an integrated task. All parallel pathways have the same number of System 4 Steps, and the nervous system has evolved this way synapse by synapse in all vertebrate quadrupeds, with the same number of corresponding synapses in each pathway, from reptiles to humans. All of these parallel pathways must be integrated by the unique Universal Sets associated with each species and each human being.

It can work in a similar way in a robot. Regenerative simulated action patterns reconciled with Expressive action patterns, and vice versa, can be stored as unit memories of action sequences as they happen. Humans learn to do things piecemeal, little by little, putting the pieces together into integrated sequences that span space and time. It can work the same way in a robot within the more limited context of an electronic memory. This represents a basic level of learning for the robot, including some limited degree of creative expression.
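
As one illustration of how such unit memories might be held in an electronic memory, the Python sketch below stores completed action sequences keyed by a coarse description of the sensory context, so that a related sequence can be recalled when similar input recurs. The class name and the matching rule (nearest obstacle distance) are assumptions made for the example, not anything prescribed by System 4.

class UnitMemoryStore:
    # A minimal unit-memory store: completed action sequences are stored as
    # units (T7E) and a related unit is recalled against new sensory input (T7R).

    def __init__(self):
        self.units = []   # list of (context, action_sequence) pairs

    def store(self, context, action_sequence):
        # Store a completed sequence as one unit memory.
        self.units.append((context, action_sequence))

    def recall(self, context):
        # Recall the stored unit whose context is closest to the new input.
        if not self.units:
            return None
        nearest = min(self.units,
                      key=lambda unit: abs(unit[0]["clear_strides"] - context["clear_strides"]))
        return nearest[1]

# Usage: store one turning sequence, then recall it against new sensory input.
memory = UnitMemoryStore()
memory.store({"clear_strides": 7}, ["turn 5 degrees per stride for 3 strides"])
print(memory.recall({"clear_strides": 6}))   # ['turn 5 degrees per stride for 3 strides']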

When it comes to using the hands and fingers for doing tasks, then a second level of simulation along the lines of how the cerebellum and cerebral hemispheres work would be valuable if not indispensable, involving interacting CPUs in a robot. It would still follow along the same lines of System 4 Step by Step.

We do not need to physically act in order to think, of course, so all of the above can relate equally well to generating conceptual Forms rather than behavioral Forms in a human being. At the conscious level, this happens in the cerebral hemispheres with emotional input from the ancient limbic system. Memory recall is in fact most fundamentally dependent upon the reptilian part of the cerebral hemispheres. We remain biologically anchored to our biospheric roots, and we can draw upon ancient emotional patterns of behavior that require appropriate tailoring to suit the needs of social circumstance. We must restrain and modify our most primal appetites in socially acceptable ways.

It should be possible to include and program some such analogous second order CPU operating in a robot, albeit limited in its creative abilities by the limitations implicit in electronic memories.