Perceptual Robots Philosophy

Purpose

All (intentional) behaviour of living systems is purposive; that is, it is goal-oriented. Whether you are lifting a glass of beer to your lips, flapping your wings to soar into the sky or thrusting out your tongue to catch an insect, you are doing so for a reason. And those goals are realised by acting in the world.

Control

When you are driving a car, for example, one of the goals is to keep the car between the white lines on the road. That goal is achieved by turning the steering wheel one way or the other. The wheel is not turned a specific amount or to a particular angle, but it is turned until you perceive that the car is between the white lines.

There are many factors that can affect the position of the car, such as the tyre pressures, the road surface and the wind. As the driver, it is not necessary that you measure these factors to determine their effects, because you continually receive feedback by way of your perception of the car's position. If these factors do affect the position of the car you simply compensate by turning the steering wheel in the opposite direction. You are able to control your perception of the position of the car by varying the rotation of the steering wheel, thus enabling you to protect your perception from any disturbances to it.

You act in the world to control your perceptions of the world; you vary your output to control your input; behaviour is the control of perception (Powers 2005).
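To make the loop concrete, here is a minimal sketch of the steering example as a single negative feedback control unit, written in Python. The gain, time step and disturbance profile are illustrative assumptions, not values from any particular implementation.

```python
# A minimal negative feedback control unit: the output continually opposes
# any error between the reference (goal) and the perception.

def control_step(perception, reference, gain=50.0):
    return gain * (reference - perception)

position = 0.0       # perceived lateral position of the car
reference = 0.0      # goal: centred between the white lines
dt = 0.01            # loop time step (s)

for step in range(500):
    # An unmeasured disturbance, e.g. a crosswind, acts for a while.
    disturbance = 0.5 if 100 <= step < 300 else 0.0
    steering = control_step(position, reference)
    # The environment: position responds to steering plus the disturbance.
    position += (steering + disturbance) * dt

print(f"final position: {position:.4f}")   # held near the reference
```

Note that the controller never measures the disturbance; it only senses the car's position, and its output automatically cancels whatever the disturbance does to that perception.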

Hierarchies

Of course, your perception of the position of the car is not the only goal being controlled while driving. You may also have goals such as controlling a high speed to get away from other cars, travelling to a specific destination where you can hide the car, changing lanes swiftly to evade the police, providing for your family by robbing a bank, maintaining an extravagant lifestyle with minimum effort and controlling your own particular sense of honesty by acting accordingly: many perceptual goals at different levels of complexity, all operating at the same time. The more complex goals are dependent upon the simpler ones, suggesting a hierarchy of goals and control systems. For example, in order to achieve the goal of changing lanes it is necessary to vary the goal of the (lower) system which controls the position of the car.

Through the prism of Perceptual Control Theory (PCT) all behaviour can be seen as the control of perceptual goals. In particular, goals are achieved by way of simple, parsimonious negative feedback control systems in which a specific perceptual variable is continually monitored and any deviation from the desired value is counteracted by varying the output.
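The lane-change example can be sketched as two such units stacked in a hierarchy, where the higher unit's output is not an action but the lower unit's reference. Again, the names, gains and units below are illustrative assumptions.

```python
# Two control units in a hierarchy: the higher one controls the perceived
# lane by varying the *reference* of the lower one, which controls lateral
# position by varying the steering output that acts on the world.

def control(perception, reference, gain):
    return gain * (reference - perception)

lane_width = 3.5       # metres
position = 0.0         # lateral position of the car
position_ref = 0.0     # goal of the lower (position) system
desired_lane = 1       # higher-level goal: change from lane 0 to lane 1
dt = 0.01

for _ in range(5000):
    perceived_lane = position / lane_width
    # Higher level: its output adjusts the lower level's goal.
    position_ref += control(perceived_lane, desired_lane, gain=2.0) * dt
    # Lower level: its output (steering) acts on the environment.
    position += control(position, position_ref, gain=50.0) * dt

print(f"perceived lane: {position / lane_width:.2f}")   # approaches 1.00
```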

Living Systems

This functionality and architecture extends, according to PCT, not only to all levels of human perception and behaviour, but throughout the animal kingdom. Consider, for example, the lowly zooplankton. It is a tiny sea creature with very simple light sensors and limbs for moving around its watery environment (Science News 2008), and it obtains energy by absorbing light. In the changing light conditions of the water it moves around until it is sensing a certain amount of light. In other words, it acts within its environment until it perceives its goal. That this behavioural unit seems to be ubiquitous suggests that it is a principle that has been successful in terms of evolution and is central to our understanding of life.
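A crude sketch of that idea, assuming nothing about real zooplankton physiology: the agent below computes no gradients and builds no map of its world; it simply keeps moving, reorienting at random whenever its sensed light level stops approaching its goal, and stops when the goal is perceived.

```python
import math
import random

def light_at(x, y):
    # Toy light field: brightest at the origin, fading with distance.
    return 1.0 / (1.0 + x * x + y * y)

goal = 0.9                       # desired light level
x, y = 3.0, 3.0                  # start far from the light
heading = random.uniform(0.0, 2.0 * math.pi)
error = abs(goal - light_at(x, y))

for _ in range(50000):
    if error < 0.02:
        break                    # goal perceived: stop acting
    x += 0.02 * math.cos(heading)
    y += 0.02 * math.sin(heading)
    new_error = abs(goal - light_at(x, y))
    if new_error >= error:       # perception not improving: reorient
        heading = random.uniform(0.0, 2.0 * math.pi)
    error = new_error

print(f"light sensed: {light_at(x, y):.2f} (goal {goal})")
```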

Artificial Life

The key, then, to designing and building artificial systems which behave in ways similar to humans and animals is to imbue them with the capabilities to perceive and control their goals. This will require suitable sensors and actuators at the interface with the environment but, more crucially, a simulated 'neural' architecture that enables the system to control not just its raw sensory perceptions, but also complex internal perceptions that express its relationship to the world, which may be in terms of high-level abstract concepts.

Benefits

The benefits of this parsimonious way of implementing robotic systems are that the agent only needs to attend to those elements of the environment which are relevant to attaining its goals; it does not need to perform complex processing or modelling of its environment; it is able to control high-level, complex and abstract perceptions; and it is able to operate in dynamic, chaotic and unpredictable environments.

Such is the approach taken by Perceptual Robots.

Perception-based Robotics

From the early days of the Artificial Intelligence (AI) discipline, in the decades following the middle of the 20th century, research was heavily influenced by the computational approach advocated by Alan Turing ([1] Turing 1950) and by what could be achieved with the main tool available, the computer ([2] Brooks 1991a, [3] Brooks 1991b).

The outcome was that the issues addressed were those associated with what were thought of as intelligent, high-level cognitive abilities, such as problem solving, theorem proving, natural language processing and, in a modern incarnation, argumentation ([4] Bench-Capon and Dunne 2007).

The 'problem' of AI was seen as one of knowledge representation and symbolic reasoning, with the main technique being search through the possible states. If only the states of the world could be well represented by a set of symbols, they could be manipulated by search techniques and the relevant set of knowledge rules to generate a new, desired state, along with the path to that state, whether in a chess-playing program or a robot manipulating a set of blocks ([5] Roberts 1963). However, the realisation grew that this type of top-down approach was mired in well-constrained 'toy' worlds and reflected a paradigm more akin to abstract formal logic than one grounded ([6] Harnad 1990) in reality. Real worlds are messy, chaotic and dynamic. Animals, and people, more often than not do not perform in an optimal and logical manner. The available sensory inputs are myriad in number, and primitive and meaningless in nature.

In the last two decades of the 20th century something of a conceptual and pragmatic revolution took place in AI, exemplified by the work of Professor Rodney Brooks at MIT. He proposed a bottom-up approach of building real-world systems with simple capabilities to interact with the real world ([7] Brooks 1985), incrementally adding layers of more complex abilities to expand the sophistication of the system. This Artificial Life (AL) approach embodied such principles and characteristics as situatedness and embodiment ([8] Anderson 2003), distributed control, hierarchies, parallel processing, autonomy, emergence and the use of the real world as its own model.

PCT encompasses all these principles and a similar philosophy regarding the understanding of living systems (and predates the AL movement by at least a decade). PCT goes further than the behavior-based robotics approach of Brooks and makes two significant claims regarding the relevance of the theory. One, that PCT is a biologically-plausible model of the architecture and function of the nervous system, as well as of social and psychological functioning ([9] Carey et al. 2014). Two, that there is a (simple) process common to all types and levels of behaviour, and cognition, in humans and other animals.

Of particular importance is the fundamental difference in the operating function of behavioural systems that differentiates PCT from both traditional AI and AL, as well as conventional psychological approaches ([10] Marken 2013, [11] Marken and Mansell 2013). That is, the operating function of the internal processes of living systems is not the selection of actions (or behaviours), but the selection of goals (perceptions). In other words, not what to do, but what to perceive. Brooks did recognise that actions feed back to affect perceptions ([3] Brooks 1991b), but, perhaps, not that perceptions themselves are the goal.

The traditional AI approach to the problem of catching a baseball, for example, would involve something like the following serial processing stages: perceiving the baseball, and the surrounding environment, by extracting information from the visual scene to compile a symbolic representation of its contents; building a model of the environment and of the relative positions of the baseball and fielder within it; computing the trajectory of the baseball (which would require either at least two observed positions of the ball, or the position of the hitter and the force with which it was hit, the latter being extremely difficult, if not impossible, to estimate); predicting the future position of the ball and the direction and speed at which the fielder would need to run to be in the same position; and executing the task of moving to the catch position, which would involve computing and generating the precise muscle tensions required to move the body through the environment. Such an approach clearly relies heavily both on complex computation and on internal knowledge of the physics of objects interacting with the world.

The philosophy of the behavior-based approach is to reduce the complexity of the problem by decomposing it into a hierarchy of simpler tasks and reducing the need for symbolic representation and absolute models. However, the layers within the hierarchy are still, to some extent, decomposed in the traditional manner ([7] Brooks 1985), and use a plan-execute model designed to generate behaviours.

PCT shows that there is a much simpler and more parsimonious solution to the baseball-catching problem. The solution is simply to control the perceptual input variables, namely the vertical and horizontal retinal optical velocities of the ball, rather than output variables ([12] Marken 2001, [13] Marken 2005). In basic terms this involves moving the body relative to the ball until it is in a position where the perceived velocities of the baseball on the retina are zero. No modelling, no mapping, no planning, no representation, no physics knowledge, no specific output, no prediction, no computation.
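The strategy can be sketched in one dimension, for a ball hit straight at the fielder. Marken's models control both the vertical and lateral optical velocities; this sketch collapses them into a single optical angle, and all numerical values (speeds, gain, limits) are illustrative assumptions rather than values from the cited papers.

```python
import math

g = 9.81                 # gravity (m/s^2)
dt = 0.01                # time step (s)

bx, by = 0.0, 1.0        # ball position (m)
bvx, bvy = 20.0, 20.0    # ball velocity (m/s)
fx = 60.0                # fielder position (m)
gain = 30.0              # illustrative control gain
max_speed = 12.0         # fielder's top running speed (m/s)

angle_prev = math.atan2(by, fx - bx)

while by > 0.0:
    # The environment: projectile physics, unknown to the fielder.
    bx += bvx * dt
    by += bvy * dt
    bvy -= g * dt

    # The fielder's only input: the optical angle of the ball and its rate.
    angle = math.atan2(by, fx - bx)
    optical_velocity = (angle - angle_prev) / dt
    angle_prev = angle

    # Control: run backwards or forwards so as to drive the perceived
    # optical velocity towards zero; no trajectory is modelled or predicted.
    speed = max(-max_speed, min(max_speed, gain * optical_velocity))
    fx += speed * dt

# With these illustrative values the fielder ends up close to the landing point.
print(f"ball lands at {bx:.1f} m; fielder at {fx:.1f} m")
```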

This unique way of conceptualising living (and artificial) systems as being perception-based rather than behavior-based is crucial to both our understanding of living systems and to our ability to build intelligent machines in a way that is realistic and meaningful.

  • [1] Alan M. Turing. Computing Machinery and Intelligence. Mind, 59(236):433-460, October 1950.
  • [2] Rodney A. Brooks. Intelligence Without Reason. In John Mylopoulos and Ray Reiter, editors, Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-91), pages 569-595, Sydney, Australia, 1991. Morgan Kaufmann, San Mateo, CA, USA.
  • [3] Rodney A. Brooks. Intelligence without representation. Artificial Intelligence, 47:139-159, 1991.
  • [4] T.J.M. Bench-Capon and Paul E. Dunne. Argumentation in artificial intelligence. Artificial Intelligence, 171:619-641, 2007.
  • [5] Larry G. Roberts. Machine perception of three-dimensional solids. Technical report, MIT Lincoln Laboratory, May 1963.
  • [6] Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346, June 1990.
  • [7] Rodney A. Brooks. A Robust Layered Control System For a Mobile Robot. Technical report, Massachusetts Institute of Technology, 1985.
  • [8] Michael L. Anderson. Embodied cognition: A field guide. Artificial Intelligence, 149(1):91-130, 2003.
  • [9] Timothy Andrew Carey, Warren Mansell, and Sara Jane Tai. A biopsychosocial model based on negative feedback and control. Frontiers in Human Neuroscience, 8(94), 2014.
  • [10] Richard S. Marken. Making inferences about intention: Perceptual control theory as a “theory of mind” for psychologists. Psychological Reports, 113(1):257-274, 2013.
  • [11] Richard S. Marken and Warren Mansell. Perceptual control as a unifying concept in psychology. Review of General Psychology, 17(2):190-195, June 2013.
  • [12] Richard S. Marken. Controlled variables: psychology as the center fielder views it. The American Journal of Psychology, 114(2), 2001.
  • [13] Richard S. Marken. Optical trajectories and the informational basis of fly ball catching. Journal of Experimental Psychology: Human Perception and Performance, 31(3), June 2005.