From the early days of the Artificial Intelligence (AI) discipline, in the decades following the middle of the 20th century, research was heavily influenced by the computational approach advocated by Alan Turing ([1] Turing 1950) and by what could be achieved with the main tool available, the computer ([2] Brooks 1991a, [3] Brooks 1991b).
The outcome was that the issues addressed were those associated with what were thought of as intelligent, high-level cognitive abilities, such as problem solving, theorem proving, natural language processing and, in a more modern incarnation, argumentation ([4] Bench-Capon and Dunne 2007).
The ‘problem’ of AI was seen as one of knowledge representation and symbolic reasoning, with the main technique being search through the possible states. If only the states of the world could be well represented by a set of symbols, they could be manipulated by search techniques and the relevant set of knowledge rules to generate a new, desired state, along with the path to that state, whether in a chess-playing program or a robot manipulating a set of blocks ([5] Roberts 1963). However, the realisation grew that this type of top-down approach was mired in well-constrained ‘toy’ worlds and reflected a paradigm more akin to abstract formal logic than one grounded ([6] Harnad 1990) in reality. Real worlds are messy, chaotic and dynamic. Animals, and people, more often than not do not behave in an optimal and logical manner. The sensory inputs available are myriad in number, and primitive and meaningless in themselves.
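To make the classical recipe concrete, here is a minimal sketch (with invented toy states; it corresponds to no particular cited system): the world is a handful of symbolic states, the ‘knowledge rules’ are a successor function, and behaviour is whatever path the search discovers.

```python
from collections import deque

# A toy of the classical states-rules-search paradigm. States are
# blocks-world stacks, e.g. (("A", "B"), ("C",)) means B sits on A,
# and C sits alone. All names here are illustrative assumptions.

def successors(state):
    """Knowledge rule: the top block of any stack may move to any
    other stack."""
    for i, src in enumerate(state):
        if not src:
            continue
        block, rest = src[-1], src[:-1]
        for j in range(len(state)):
            if j != i:
                yield tuple(
                    state[k] + (block,) if k == j
                    else (rest if k == i else state[k])
                    for k in range(len(state))
                )

def search(start, goal):
    """Breadth-first search: returns the sequence of states from
    start to goal."""
    frontier, seen = deque([(start, [start])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

start = (("A", "B", "C"), ())   # C on B on A, one empty spot
goal = ((), ("C", "B", "A"))    # the reversed stack
for step in search(start, goal):
    print(step)
```

Everything here works because the toy world is closed and noiseless; the critique above is precisely that real sensory input offers no such clean symbols to search over.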
In the last two decades of the 20th century something of a conceptual and pragmatic revolution took place in AI, exemplified by the work of Professor Rodney Brooks at MIT. He proposed a bottom-up approach of building real-world systems with simple capabilities to interact with the real world ([7] Brooks 1985), incrementally adding layers of more complex abilities to expand the sophistication of the system. This Artificial Life (AL) approach embodied such principles, and characteristics, as situatedness and embodiment ([8] Anderson 2003), distributed control, hierarchies, parallel processing, autonomy, emergence and the use of the real world as its own model.
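The flavour of that bottom-up decomposition can be sketched in miniature. The following is an illustration of the idea only, not Brooks' actual architecture; the layering and the suppression scheme are deliberately simplified, and the sensor names are assumptions.

```python
# A subsumption-flavoured layered controller in miniature: each layer
# is a trivial sensor-to-action rule, and a higher layer, when it has
# something to say, suppresses the output of the layers below it.

def wander(sensors):
    """Layer 0: the base competence, keep moving."""
    return "forward"

def avoid(sensors):
    """Layer 1: turn away from anything too close."""
    if sensors["range"] < 0.5:
        return "turn-left"
    return None          # nothing to say; defer to the layer below

LAYERS = [wander, avoid]   # later layers take precedence

def act(sensors):
    # No world model and no plan: the current readings alone decide.
    for layer in reversed(LAYERS):
        command = layer(sensors)
        if command is not None:
            return command

print(act({"range": 2.0}))   # -> forward
print(act({"range": 0.3}))   # -> turn-left
```

Each layer is a complete, simple competence on its own; adding a capability means adding a layer rather than re-planning the whole system, which is the incremental philosophy described above.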
PCT encompasses all these principles and a similar philosophy regarding the understanding of living systems (and it predates the AL movement by at least a decade). PCT goes further than the behavior-based robotics approach of Brooks and makes two significant claims regarding the relevance of the theory. One, that PCT is a biologically-plausible model of the architecture and function of the nervous system, as well as of social and psychological functioning ([9] Carey et al. 2014). Two, that there is a (simple) process common to all types and levels of behaviour, and cognition, in humans and other animals.
Of particular importance is the fundamental difference in the operating function of behavioural systems that differentiates PCT from both traditional AI and AL, as well as conventional psychological approaches ([10] Marken 2013, [11] Marken and Mansell 2013). That is, that the operating function of the internal processes of living systems is not the selection of action (or behaviours), but the selection of goals (perceptions). In other words, not what to do, but what to perceive. Brooks did recognise that actions feed back to affect perceptions ([3] Brooks 1991b), but, perhaps, not that perceptions themselves were the goal.
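The contrast can be made concrete with a minimal PCT-style control unit, sketched below (the constants and the sinusoidal disturbance are illustrative assumptions, not taken from the cited papers). Nothing in the loop specifies what to do; the system simply varies its output until the perception matches the reference, whatever the disturbance is doing.

```python
import math

# A toy PCT control unit: the system controls a *perception*, not an
# action. Gain, slowing and the disturbance are illustrative.

reference = 4.0     # r: the perception the system 'wants'
gain = 40.0         # output gain
slowing = 1.0       # leaky-integrator time constant (s)
dt = 0.01

output = 0.0
for step in range(1, 1001):
    t = step * dt
    disturbance = 3.0 * math.sin(0.5 * t)   # unpredicted external push
    perception = output + disturbance       # environmental feedback
    error = reference - perception
    # Output function: act on the error; no plan and no model of the
    # disturbance appears anywhere in the loop.
    output += (gain * error - output) * dt / slowing
    if step % 250 == 0:
        print(f"t={t:4.1f}s  perception={perception:5.2f}  "
              f"disturbance={disturbance:5.2f}")
```

Run for a few simulated seconds, the perception settles near its reference of 4 while the output continuously counteracts the wandering disturbance; that counteraction is the behaviour, but the perception is what is controlled.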
The traditional AI approach to the problem of catching a baseball, for example, would involve something like the following serial processing stages:

- perceiving the baseball, and the surrounding environment, by extracting information from the visual scene to compile a symbolic representation of its contents;
- building a model of the environment and of the relative positions of the baseball, and the fielder, within it;
- computing the trajectory of the baseball, which would require either at least two observed positions of the ball, or the position of the hitter and the force with which the ball was hit (the latter extremely difficult, if not impossible, to estimate);
- predicting the future position of the ball, and the direction and speed at which the fielder would need to run to be in the same position;
- executing the task of moving to the catch position, which would involve computing and generating the precise muscle tensions required to move the body through the environment.

Such an approach clearly relies heavily both on complex computation and on internal knowledge of the physics of objects interacting with the world; the sketch below of just one stage makes that burden concrete.
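Here is the trajectory-computation stage alone, under generously assumed conditions (drag-free flight, perfectly observed positions and a known value of g, none of which a real outfield provides; the numbers are invented):

```python
# One stage of the traditional pipeline: recover the ball's trajectory
# from two observed positions using built-in projectile physics.

G = 9.81  # assumed known gravitational acceleration (m/s^2)

def predict_landing(obs1, obs2):
    """obs = (t, x, z). Fit a drag-free parabola through the two
    observations and return where the ball returns to z = 0."""
    (t1, x1, z1), (t2, x2, z2) = obs1, obs2
    vx = (x2 - x1) / (t2 - t1)
    # z(t) = vz*t - G*t^2/2, so the two samples pin down vz:
    vz = (z2 - z1) / (t2 - t1) + 0.5 * G * (t1 + t2)
    flight = 2 * vz / G            # time until z = 0 again
    return vx * flight

# Two early sightings of the ball (illustrative values):
print(predict_landing((0.2, 3.6, 4.20), (0.4, 7.2, 8.02)))
```

And this is only one stage; its output must still feed the prediction, navigation and motor-control stages downstream, each with its own knowledge demands.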
The philosophy of the behavior-based approach is to reduce the complexity of the problem by decomposing it into a hierarchy of simpler tasks and reducing the need for symbolic representation and absolute models. However, the layers within the hierarchy are still, to some extent, decomposed in the traditional manner ([7] Brooks 1985), and use a plan-execute model designed to generate behaviours.
PCT shows that there is a much simpler and more parsimonious solution to the baseball-catching problem. The solution is simply to control the perceptual input variables, the vertical and horizontal retinal optical velocities of the ball, rather than output variables ([12] Marken 2001, [13] Marken 2005). In basic terms this involves moving the body relative to the ball until it is in a position where the perceived velocities of the baseball on the retina are zero. No modelling, no mapping, no planning, no representation, no physics knowledge, no specific output, no prediction, no computation.
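A minimal simulation conveys the idea. This is a toy sketch in the spirit of [12] and [13], not a reproduction of those models: only the vertical optical channel is simulated, the constants are invented, and the fielder holds the vertical optical velocity at its initially perceived value (a classic optical-acceleration-cancellation variant).

```python
# A toy simulation of catching by controlling optical velocity.
# Assumptions throughout: drag-free ball, a single vertical optical
# channel, and a fielder who runs at whatever speed the loop asks.

DT = 0.01                      # time step (s)
G = 9.81                       # gravity (m/s^2)
BVX, BVZ = 18.0, 22.0          # ball launch velocity (m/s)
FLIGHT = 2 * BVZ / G           # time the ball is in the air (~4.5 s)
LANDING = BVX * FLIGHT         # where it comes down (~80.7 m)
GAIN = 20.0                    # output gain of the control loop

bx, bz, bvz = 0.0, 0.0, BVZ    # ball state
fx, fv = 72.0, 0.0             # fielder starts short of the landing point

prev_tan = 0.0                 # tan(elevation) seen by the fielder at t=0
reference = None               # will hold the initially perceived rate
t = 0.0
while t < FLIGHT and fx - bx > 1.5:
    # Environment: projectile physics the fielder knows nothing about.
    t += DT
    bx += BVX * DT
    bvz -= G * DT
    bz += bvz * DT

    # Perception: vertical optical velocity, the rate of change of
    # tan(elevation angle) of the ball as seen from the fielder.
    cur_tan = bz / (fx - bx)
    perception = (cur_tan - prev_tan) / DT
    prev_tan = cur_tan

    if reference is None:      # adopt the first perceived rate as the goal
        reference = perception
        continue

    # Control: vary running speed to keep the perception at the
    # reference (sign chosen so the feedback is negative: an image
    # drifting faster than the reference sends the fielder outward,
    # which slows the image again).
    fv += GAIN * (perception - reference) * DT
    fx += fv * DT

print(f"ball lands at {LANDING:.1f} m, fielder reaches {fx:.1f} m")
```

With these toy numbers the fielder ends up close to where the ball comes down, having represented nothing about gravity, trajectories or landing points; the entire ‘computation’ inside the loop is a subtraction and a gain.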
This unique way of conceptualising living (and artificial) systems as being perception-based rather than behavior-based is crucial to both our understanding of living systems and to our ability to build intelligent machines in a way that is realistic and meaningful.
- [1] Alan M. Turing. Computing Machinery and Intelligence. Mind, 59(236):433-460, October 1950.
- [2] Rodney A. Brooks. Intelligence Without Reason. In John Mylopoulos and Ray Reiter, editors, Proceedings of the 12th International Joint Conference on Artificial Intelligence (IJCAI-91), pages 569-595, Sydney, Australia, 1991. Morgan Kaufmann Publishers Inc., San Mateo, CA, USA.
- [3] Rodney A. Brooks. Intelligence without representation. Artificial Intelligence, 47:139-159, 1991.
- [4] T.J.M. Bench-Capon and Paul E. Dunne. Argumentation in artificial intelligence. Artificial Intelligence, 171:619-641, 2007.
- [5] Larry G. Roberts. Machine perception of three-dimensional solids. Technical report, MIT Lincoln Laboratory, May 1963.
- [6] Stevan Harnad. The symbol grounding problem. Physica D: Nonlinear Phenomena, 42(1-3):335-346, June 1990.
- [7] Rodney A. Brooks. A Robust Layered Control System For a Mobile Robot. Technical report, Massachusetts Institute of Technology, 1985.
- [8] Michael L. Anderson. Embodied cognition: A field guide. Artificial Intelligence, 149(1):91-130, 2003.
- [9] Timothy Andrew Carey, Warren Mansell, and Sara Jane Tai. A biopsychosocial model based on negative feedback and control. Frontiers in Human Neuroscience, 8(94), 2014.
- [10] Richard S. Marken. Making inferences about intention: Perceptual control theory as a “theory of mind” for psychologists. Psychological Reports, 113(1):257-274, 2013.
- [11] Richard S. Marken and Warren Mansell. Perceptual control as a unifying concept in psychology. Review of General Psychology, 17(2):190-195, June 2013.
- [12] Richard S. Marken. Controlled variables: psychology as the center fielder views it. The American Journal of Psychology, 114(2), 2001.
- [13] Richard S. Marken. Optical trajectories and the informational basis of fly ball catching. Journal of Experimental Psychology: Human Perception and Performance, 31(3), June 2005.