The Evolution of Domain-General Mechanisms in Intelligence and Learning

By Dan Chiappe and Kevin MacDonald

Abstract

For both humans and animals, domain-general mechanisms are fallible but powerful tools for attaining evolutionary goals (e.g., resources) in uncertain, novel environments that were not recurrent features of the environment of evolutionary adaptedness. Domain-general mechanisms interact in complex ways with domain-specific, information encapsulated modules, most importantly by manipulating information obtained from various modules in attempting to solve novel problems. Mechanisms of general intelligence, particularly the executive functions of working memory, underlie analogical reasoning as well as the decontextualization processes that are central to human thought. Although there is a variety of evolved, special purpose learning devices, learning is also characterized by domain-general mechanisms able to achieve evolutionary goals by making novel and serendipitous associations with environmental cues.

Introduction

Evolutionary psychology is radically at odds with the tradition that domain-generality is an important component of human cognition. Evolutionary psychologists propose the human mind consists predominantly of highly specialized mechanisms designed to solve specific problems in the environment of evolutionary adaptedness (EEA; Cosmides & Tooby, 1987; Palmer & Palmer, 2002; Pinker, 1994, 1997; Shettleworth, 2000; Sperber, 1994; Tooby & Cosmides, 1989, 1992). Though they acknowledge the existence of domain-general mechanisms as a possibility, they have not provided analyses of the evolutionary function of these mechanisms or of how they interface with domain-specific ones. Their view is domain-general mechanisms are inherently weak because 'jacks of all trades are masters of none. They achieve generality only at the price of broad ineptitude' (Cosmides & Tooby, 2002, p. 170). On the contrary, we argue mechanisms of general intelligence and domain-general learning are powerful tools designed to solve problems not recurrent in the EEA.

A fundamental premise of evolutionary psychology is that evolutionary adaptations equip animals to meet recurrent challenges of the physical, biological and social environment. When the environment presents longstanding problems and recurrent cues relevant to solving them, the optimal solution is to evolve domain-specific mechanisms, or 'modules,' specialized to handle specific inputs and generate particular solutions. Modules are designed to solve problems in specific domains by mapping characteristic inputs onto characteristic outputs (Fodor, 1983, 2000). Their operation is mandatory, fast and unconscious. They carry out their operations by consulting a proprietary database -- information about the domains to which they apply. Modules are also information encapsulated. Though information relevant to solving a particular problem may be accessible to other parts of the cognitive system, it is not necessarily available to a module (Fodor, 1983).

The modular view is likely a correct account of how the mind responds to recurrent, highly stable patterns of evolutionarily significant information (Geary & Huffman, 2002). It is the optimal way of solving problems with a restricted problem space -- a small range of possible solutions, such as the 3-dimensional structure of the physical world (Gallistel, 1990; Shepard, 1994). The stability of the structure of physical space favors the evolution of highly modular systems sensitive to the associated information (e.g., geometric relations among landmarks; Gallistel, 1990) and highly constrained mechanisms for learning about variations in features within this space.

Nevertheless, we argue domain-specific mechanisms are only part of the story. From the perspective of modularity it is difficult to see how humans could solve novel problems or how they could solve recurrent problems in novel ways -- things people are capable of. The difficulty presented by novel problems is that, by definition, there is no characteristic input-output relation based on past recurrences that can solve the problem. We claim domain-general mechanisms are central to human and animal cognition in that they allow for the solution of non-recurrent problems in attaining evolutionary goals. These are mechanisms captured by the g factor of intelligence tests and some learning mechanisms. In the case of intelligence, they include the executive functions of working memory, which are conscious, controlled, unencapsulated, and domain-general. Other variations in modularity are also important but are not discussed here. For example, Geary and Huffman (2002) describe 'soft modules' able to process variation within constrained limits (e.g., language sounds), as well as mechanisms for demarcating and expanding new categories of information (e.g., different species).

Motivation and the Frame Problem

A major goal of this paper is to delineate a middle ground between the 'blank slate' perspective based only on domain-general mechanisms and the 'massively modular' view proposed by evolutionary psychologists as a necessary correction.

The blank slate perspective (the 'Standard Social Sciences Model') proposes the mind consists solely of a set of domain-general mechanisms. A basic problem is there are no problems that the system was designed to solve. The system has no pre-set goals and no way to determine when goals are achieved, an example of the 'frame problem' discussed by cognitive scientists (e.g., Dennett, 1987; Fodor, 1983; Gelman & Williams, 1998). This is the problem of relevance -- the problem of determining which problems are relevant and what actions are relevant for solving them. An organism that is a blank slate is unable to determine which of the infinite number of problems it must solve to survive and reproduce. It faces a 'combinatorial explosion' of possible behaviors, because at any time it could do any of an infinite number of things (Tooby & Cosmides, 1992). Without framing mechanisms guiding it toward the solution of adaptive problems, a problem solver would 'go on forever making up solutions that have nothing to do with a non-assigned problem' (Gelman & Williams, 1998).

Due to the frame problem, it is difficult to see how domain-general processes could evolve without further constraints. Perceptual inputs are massively ambiguous, and domain-general systems have no problems needing solution and no criteria for when they are solved. Modular systems provide a built-in sense of relevance: We pay more attention to moving objects than stationary objects, and faces more than feet. Men generally seek out young beautiful women rather than old women as objects of sexual desire. We do these things as a result of our evolutionary history. On this view, such adaptations must by definition respond only to recurrent features of the environment -- the Stone Age mind adapted to recurrent features of the Pleistocene.

On the basis of these considerations, we accept the argument that humans could not have evolved as nothing but general-purpose problem solvers. We propose, however, that an important aspect of evolution has been to solve the frame problem in a manner compatible with the evolution of domain-general mechanisms. A key idea is that we have evolved motivational systems. These provide positively or negatively valenced signals to the organism -- signals of adaptive relevance that help to solve the frame problem while allowing for the evolution of domain-general problem solving. Consider the state of hunger. A child confronted with an infinite number of behavioral choices narrows down this infinite array by choosing behaviors likely to satiate it, including ones that worked in the past. The motive of hunger, and the fact that certain behaviors reliably result in satiating it, give structure to the child's behavior and effectively prevent combinatorial explosion. The child's behavior is not random because it is motivated by the desire to assuage the feeling of hunger.

Motivational mechanisms can be thought of as a set of adaptive problems to be solved but whose solution is massively under-specified. Motivational systems like the child's hunger enable the evolution of any cognitive mechanism, no matter how opportunistic, flexible, or domain-general, that is able to solve the problem. The child could solve its hunger problem by successfully getting the attention of the caregiver. It could solve it by stumbling onto a novel contingency, by observing others who have successfully satisfied their hunger, or by developing a sophisticated plan using explicit representations of events and a great deal of working memory -- general intelligence.

Evolved goals help solve the frame problem by channeling the operations of the executive functions along adaptive lines. They ensure that attention is directed to knowledge relevant to the task at hand (e.g., 'how was I successful in obtaining food on previous occasions?'). They also motivate devising an appropriate strategy, including strategies based on past experience as well as new ones designed to overcome new obstacles. Motivational mechanisms also allow performance to be monitored and evaluated, as in the case of the hungry child, where satiation of hunger acts as a cue that the systems have operated successfully.

Motivation is a central component of many psychological adaptations. Whatever cognitive adaptations humans may have, a crucial subset of these adaptations must function as motivators to engage in adaptive behaviors. Imagine an evolved cognitive program that functions to detect cheaters during social exchange. As evolutionary psychologists have pointed out, such a system is essential for the evolution of reciprocal altruism (e.g., Axelrod, 1984; Cosmides & Tooby, 1989). To be effective it would also have to motivate the person to alter the situation. Simply knowing that one is being exploited is not enough. It is for this reason that so much of the psychological research in the areas of altruism and pro-social behavior is concerned with emotions such as guilt, empathy, and sympathy, as well as emotions such as moral outrage resulting from non-reciprocated altruistic behavior, from free-riding in public goods experiments (Fehr & Gächter, 2002), or from exploitative behavior.

The scheme of Emmons (1989), shown in Figure 1, is useful in conceptualizing the relation between evolved motivational systems and domain-general cognitive processes (see also Bowlby's [1969] discussion of plan hierarchies). In this hierarchical model, personal strivings and various lower level actions and goals are in the service of motive dispositions at the highest level. An important subset of these motive dispositions is evolved motive dispositions (EMDs; MacDonald, 1991). EMDs are adaptations that constitute fundamental human biosocial goals. Personality theory provides a basis for supposing there are many EMDs, including ones for seeking out social status, sexual gratification, safety, love, and a sense of accomplishment (MacDonald, 1995, 1998).


Figure 1

Hierarchical model of motivation showing relationships between domain-specific and domain-general mechanisms (after Emmons, 1989).

Level 1: EVOLVED MOTIVE DISPOSITIONS

Level 2: PERSONAL STRIVINGS

Level 3: CONCERNS, PROJECTS, TASKS (utilize domain-general mechanisms)

Level 4: SPECIFIC ACTION UNITS (utilize domain-general mechanisms)

EXAMPLE:

Evolved motive disposition: INTIMACY

Personal striving: INTIMATE RELATIONSHIP WITH A GIVEN PERSON

Concerns, projects, tasks: Arrange meeting; Improve appearance; Get promotion

Action units: Find phone number; Begin dieting; Work weekends


Some of these EMDs have characteristic inputs designed to trigger solutions to specific problems in the organism's EEA. A motivational system such as hunger or lust, for example, has characteristic inputs (e.g., physiological signals of hunger [declining blood sugar], the sight of a nubile woman) motivating the person to seek the rewards of food and sexual gratification, respectively. The outputs of EMDs are typically goals and beliefs rather than specific behaviors. However, the psychological rewards associated with satisfying these goals, such as the pleasure associated with satisfying hunger or engaging in sexual intercourse, are not automatic outputs of the system. Rather they must be sought after, and their achievement is by no means guaranteed.

It is such reward seeking (or punishment avoiding) behavior that allows for flexible strategizing and the evolution of domain-general cognitive mechanisms. People may solve their hunger problem in any number of ways, including learning novel contingencies and using mechanisms linked to general intelligence. There is no requirement that the means of attaining EMDs be an evolutionarily prepared response. A specific set of evolved mechanisms for assuaging hunger or achieving other EMDs is not a necessity. It can easily be seen that an organism able to devise novel and opportunistic solutions to the chronic problem of being hungry would be at an advantage in the game of life and therefore have higher biological fitness, as would an organism able to detect causal relations between food and various contingent events via operant or classical conditioning.

At a fundamental level, we suppose that problem solving is opportunistic -- people satisfy their EMDs and achieve the lower level goals depicted in Figure 1 by utilizing any and all available mechanisms. The only criterion is what is effective in goal attainment. Experimentation with a variety of strategies followed by selection of effective ones is the rule. Children are 'bricoleurs' -- tinkerers who constantly experiment with a wide range of processes to find solutions to problems as they occur. Children 'bring to bear varied processes and strategies, gradually coming through experience to select those that are most effective.... Young bricoleurs ... make do with whatever cognitive tools are at hand' (DeLoache, Miller, & Pierroutsakos, 1998, p. 803).

Evolutionary Psychology and the EEA

The view that human cognitive architecture is dominated by psychological modules stems from a misconstrual of the nature of the evolutionary environment and the kinds of adaptations that emerge. According to evolutionary psychologists, the EEA of any animal consists of a set of statistical regularities -- recurring problems and associated cues that can be used in solving them. Only these regularities can be exploited by natural selection: 'It is only those conditions that recur, statistically accumulating across many generations, that lead to the construction of complex adaptations.... For this reason, a major part of adaptationist analysis involves sifting for these environmental or organismic regularities or invariances' (Tooby & Cosmides, 1992, p. 69). For example, the female waist to hip ratio is correlated with fertility. Cognitive mechanisms can evolve that use this cue in solving the problem of identifying viable mates (Singh, 1993). Natural selection thus results in a set of information processing devices designed to solve recurrent problems by processing recurring cues from the environment.

This view of the EEA, and of the human mind that evolved in response to its challenges and opportunities, is incomplete. Because recurrence is built into the definition of an adaptation, it implies there could be no adaptations designed to deal with novel, non-recurrent problems: 'Long-term, across-generation recurrence of conditions ... is central to the evolution of adaptations' (Tooby & Cosmides, 1992, p. 69). Prima facie, this leaves unexplained how humans are routinely able to solve novel problems, learn novel contingencies, create the extraordinary human culture characteristic of the last 50,000 years of human evolution, and cope with life in a constantly changing world far removed from the Pleistocene. It leaves unexplained the massive body of data, reviewed below, showing human intelligence and learning function to solve novel problems.

Accordingly, it is necessary to develop a concept of adaptation not restricted to mechanisms designed to process statistically recurrent features of the environment. As used here, an adaptation is a system of inherited and reliably developing properties that became incorporated into the standard design of a species because it produced functional outcomes that contributed to propagation with sufficient frequency over evolutionary time. The functional outcomes include the achievement of evolved motive dispositions discussed above. This view is broad enough to include domain-general mechanisms, such as those enabling us to reason by analogy that we will discuss below. Organisms with mechanisms enabling analogical reasoning would be capable of solving a wide range of adaptive problems, even ones that occur in a single generation.

Central to our position is that a critical aspect of the EEA was that humans were forced to adapt to rapidly shifting ecological conditions by developing adaptations geared to novelty and unpredictability. The EEA was not a period of stasis but rather a period of rapid change that witnessed the appearance and disappearance of several different hominid species over a two million year period (Foley, 1996; Irons, 1998). Modern Homo sapiens appeared late in the Pleistocene (100,000 to 200,000 years ago) and exhibited a wide variety of distinct hunting and gathering ways of life. Potts (1998) notes humans were forced to adapt to inconsistent selection pressures due to rapidly changing ecological conditions (see also Richerson & Boyd, 2000). Environmental fluctuations became increasingly extreme from the Miocene to the present. For example, based on European pollen sources, there were repeated alternations between dense, moist forests and cold, dry steppe during the past million years. These shifts were unpredictable and non-repetitive rather than cyclic and included decade-scale fluctuations between glacial and warm conditions and century-long shifts between cold steppe and warm forested conditions interspersed with periods of climatic stability. Rapid local change also resulted from volcanoes, earthquakes, and tectonic activity.

The predominant human response was to evolve adaptive flexibility by developing mechanisms designed to deal with novel and unpredictable settings. These led to a 'decoupling of the organism from any one habitat' (Potts, 1998, p. 90). Thus there was a broad trend during the Pleistocene toward the evolution of mammalian taxa more flexible in eating habits, patterns of social grouping, and group size in relation to resource availability. This corresponded to a period of rapid environmental shifts during the mid-Pleistocene. As a result, 'hominids became less inclined to track particular habitats as change occurred and more capable of adjusting to novel conditions and the increasing range of [climatic] oscillation' (Potts, 1998, p. 93).

A major trend in human evolution has been increased encephalization, the largest increase coinciding with the largest environmental oscillations. Larger brain size is also linked with a wider geographic range, suggesting the larger brain enabled greater adaptation to environmental diversity. Across mammalian species and particularly in the line leading to humans, there are associations among brain size, mental ability, learning ability, flexibility of response, and developmental plasticity. There are also associations among these variables and the elaboration of costly parenting practices, delayed sexual maturation, and a prolonged juvenile period in which social learning is of great importance (Eisenberg, 1981; Jerison, 1973; Johanson & Edey, 1981; Lerner, 1984).

General Intelligence as an Adaptation to Novelty and Unpredictability

While the Pleistocene may have intensified the need to adapt to novelty and unpredictability, and while humans have specialized in the flexible, domain-general mechanisms of learning and general intelligence, environments are never completely stable and predictable for any animal. From our perspective, human general intelligence is an elaboration (perhaps including mechanisms that are unique to humans) of abilities present in many animals. Animals and humans often have to make decisions about how to attain their goals in situations where past learning, whether by specialized or unspecialized simple learning mechanisms, is ineffective in attaining evolved goals.

General intelligence in animals and humans

Common ravens (Corvus corax), for example, can solve problems that were not a recurrent selective force in their evolution. Heinrich (2000) used long pieces of string to hang meat from a perch. For ravens, gaining access to this food is a novel problem. The solution involved repeated pulls on the string with the beak while holding and releasing the string with a foot. Though each step in the solution may be innate (e.g., grabbing objects with their beaks or with their feet), assembling these behaviors into a sequence that solves the problem is novel. Not all birds arrive at this solution, indicating individual differences in performance, just as there are for general intelligence in humans.

The solution is accomplished by 'insight' occurring suddenly and within a short time of being exposed to the problem. It does not emerge through a gradual trial and error learning process. Heinrich (2000) argued that the ravens formulate a goal, build mental scenarios, and evaluate possible sequences of actions without having to endure their consequences. Information from various sources is taken into account in planning the solution. Thus the ravens will not pull up the string if the piece of meat appears to be too large, nor will they pull up the string if it is attached to rocks rather than meat.

Heinrich (2000, p. 289) notes insight is used to solve problems 'whose solution is not wholly preprogrammed' -- problems not recurrent in the EEA and not previously encountered by the individual. Insightful problem solving has been demonstrated in apes (Köhler's [1925] Einsicht problems) and pigeons. Epstein, Kirshnit, Lanza and Rubin (1984) showed that pigeons trained to do two separate tasks (pushing a box, pecking a banana-like object) were able to put them together to solve a problem requiring both abilities. Animals trained in only one of these could not solve the problem.

Anderson (2000) finds evidence for general intelligence in rats by studying problems requiring the ability to 'combine non-contiguously learned behaviors into a solution for a novel problem' (p. 81). Problems include finding a route to a goal box when the previously learned route is blocked and combining knowledge obtained from more than one source to solve a novel problem. While simple learning tasks do not correlate with each other or show stable individual differences, individual differences on these novel-problem tasks are stable and correlate with conceptually similar tests, making up an animal g factor. Anderson extracted a single factor from three such tests and showed that performance on the tests was positively correlated with brain size (also known to correlate with general intelligence in humans; Jensen, 1998).

Using similar tasks, Crinella and Yu (1995) also extracted a g factor in rats that was unrelated to simple learning. Tasks loading on the g factor involved analytical skills, learning and memory, and the ability to form strategies. Their g factor for rats, based on five tests, accounted for 34% of the variance, a finding comparable to studies of g in humans (Jensen, 1998). Solving these problems typically involved combining information from multiple sources, including from modules specialized for processing spatial information. Spatial learning is a modular process in rats (Gallistel, 1990), but the frontal cortex is involved in solving novel problems using spatial information. The frontal cortex has been shown to be essential to combining information from different experiences but is not essential to spatial learning per se (Poucet, 1990). The ability to integrate this information with other experiences (learned associations) is part of a positive manifold linked to success in solving other novel problems and to brain size -- general intelligence.
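To make the kind of analysis behind an animal (or human) g factor concrete, the following minimal sketch extracts a single general factor from a battery of task scores. The data are simulated, and the first principal component is used as a stand-in for the factor-analytic methods employed in the studies cited above; none of the numbers correspond to Anderson's or Crinella and Yu's actual data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row is one animal, each column one novel-problem task.
n_subjects, n_tasks = 100, 5
g = rng.normal(size=(n_subjects, 1))                     # simulated latent general ability
scores = 0.6 * g + 0.8 * rng.normal(size=(n_subjects, n_tasks))

# Correlate the tasks and take the first principal component as a stand-in
# for the g factor (the cited studies use factor-analytic methods).
R = np.corrcoef(scores, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)            # ascending order
g_eigenvalue = eigenvalues[-1]
loadings = eigenvectors[:, -1] * np.sqrt(g_eigenvalue)
loadings *= np.sign(loadings.sum())                      # orient loadings positively

print("proportion of variance accounted for:", round(g_eigenvalue / n_tasks, 2))
print("task loadings on the general factor:", np.round(loadings, 2))
```

A positive manifold shows up here as uniformly positive loadings and a first factor that accounts for a substantial share of the total variance, which is the pattern the rat studies report.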

The animal data fit well with research on humans, which has consistently found that more intelligent people are better at attaining goals in situations of minimal prior knowledge. Particularly important is fluid intelligence, defined as 'reasoning abilities consist[ing] of strategies, heuristics, and automatized systems that must be used in dealing with 'novel' problems, educing relations, and solving inductive, deductive, and conjunctive reasoning tasks' (Horn & Hofer, 1992, p. 88). Tests of fluid intelligence produce the highest correlations with g (Carpenter, Just & Shell, 1990; Carroll, 1993; Duncan, Burgess & Emslie, 1995). Tests such as Raven's Progressive Matrices and Cattell's Culture Fair Test tap the capacity 'to adapt one's thinking to a new cognitive problem' (Carpenter et al., 1990, p. 404). This highlights the idea that intelligence taps conscious problem solving in situations where past recurrences are unhelpful, except perhaps by analogy or induction to the new situation.

Mechanisms underlying general intelligence

Working memory capacity has been implicated as underlying individual differences in fluid intelligence (e.g., Bachelder & Denny, 1977; Engle, Tuholski, Laughlin & Conway, 1999; Kyllonen & Christal, 1990; Larson & Saccuzzo, 1989). For example, Kyllonen and Christal (1990) found correlations from .80 to .90 between a working memory factor (e.g., digit span, mental arithmetic) and a reasoning factor (analogies, verbal reasoning). Engle et al. (1999) showed that the executive functions of working memory (assessed by tasks involving attentional control) predicted g but short-term memory capacity (assessed by tasks such as memory for sets of words) did not. The variance unique to the working memory tasks predicted individual differences in fluid intelligence, but the variance common to the two kinds of tests did not, suggesting that variability in the executive functions (the component that distinguishes working memory tests from tests of short-term memory) underlies differences in fluid intelligence.

One role of the executive functions in solving novel problems is in goal management. This involves constructing, executing and maintaining a mental plan of action during the solution of a novel problem (Carpenter et al., 1990). For example, the Raven's Progressive Matrices fluid intelligence test and the Tower of Hanoi problem (in which subjects must develop a long-term plan with multiple sub-goals) both require activating multiple goals and keeping track of the satisfaction of each of them (Carpenter et al., 1990, p. 413). Performance on these tasks was highly correlated (r = .77), suggesting substantial goal management is necessary in both tasks.
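The sub-goal structure that must be managed in a task like the Tower of Hanoi can be made concrete with a minimal recursive solver. The sketch below is a generic textbook implementation, included only to show how the top-level goal decomposes into nested sub-goals of the kind the executive functions must construct and track.

```python
def hanoi(n, source, target, spare, depth=0):
    """Move n disks from source to target, printing the sub-goal hierarchy."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, depth + 1)   # sub-goal: move the top n-1 disks out of the way
    print("  " * depth + f"move disk {n}: {source} -> {target}")
    hanoi(n - 1, spare, target, source, depth + 1)   # sub-goal: re-stack the n-1 disks on the target

hanoi(3, "A", "C", "B")
```

The indentation in the output traces the hierarchy: every move of a large disk is embedded in sub-goals that must be activated, satisfied, and then released, which is exactly the bookkeeping that taxes working memory.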

Executive functions underlying general intelligence are thus involved when problems call for substantial planning and keeping track of various sub-goals. They are involved in dealing with situations very demanding of attentional resources because multiple constraints have to be taken into account, constraints that may vary substantially with the context. As Marshalek, Lohman, and Snow (1983) point out, 'more complex tasks may require more involvement of executive assembly and controlled processes that structure and analyze the problem, assemble a strategy of attack on it, monitor the performance process, and adapt these strategies as performance proceeds' (p. 124).

Executive functions should play a more important role in earlier stages of skill acquisition; once planning is no longer essential, the problem is no longer novel because its solution has become proceduralized. Thus Ackerman (1988) found differences in measures of g were important during earlier stages in skill acquisition. However, with sufficient practice, the effects of g disappeared, provided the task remained fairly consistent (for instance, the rules didn't suddenly change). With practice, individual differences were accounted for by speed of perceptual processing and motor responding.

Neuropsychological evidence suggests the frontal lobes are the locus of the executive functions (Duncan et al., 1995; Duncan, Emslie, Williams, Johnson & Freer, 1996). Patients with frontal lobe damage have difficulty planning for the future, repeat movements and actions, and score lower on measures of fluid intelligence. Frontal lobe patients matched with normal controls for scores on crystallized intelligence scored 20 to 60 points lower than the controls on the Cattell Culture Fair Test, a measure of fluid intelligence (Duncan et al., 1995).

Further studies by Duncan and his colleagues show that the inability to solve novel problems due to frontal lobe damage arises because of problems in goal management. The solution of novel problems involves a hierarchically-structured process characterized by goals and a set of progressively detailed sub-goals requiring attention to a wide range of information. There is 'successive selection of requirements or constraints at multiple levels of abstraction, using knowledge concerning the implications of one fact to establish target values for others. Candidate goals are suggested both by currently active supergoals and by the state of the environment' (Duncan et al., 1996, p. 263). People with damage to the frontal lobes, particularly the dorsolateral prefrontal cortex, are characterized by goal neglect -- the 'disregard of a task requirement, even though it has been understood' (Duncan et al. 1996, p. 265).

Controlled attention is critical to goal management (Engle et al., 1999; Kane, Bleckley, Conway, & Engle, 2001; Lustig, May & Hasher, 2001). The mechanisms of controlled attention are limited-capacity mechanisms responsible for activating relevant representations and keeping them in an active state while inhibiting irrelevant ones. Activation of representations is important because they guide behavior. Kane et al. (2001) found that keeping task-relevant information in an active state was particularly challenging in conditions where distracting information was present. Distracting information must be suppressed; otherwise the distracters, and not the goal-relevant information, will guide behavior. 'The controlled attention functions of the central executive are necessary for those processes required to maintain the activation of memory units and to focus, divide and switch attention as well as those processes to block inappropriate actions and to dampen activation through inhibition' (Engle et al., 1999, p. 327).

Individual differences in working memory capacity reflect differences in the capacity for controlled attention. Kane et al. (2001) found that subjects with low working memory capacity were less able to inhibit the prepotent response of orienting toward a visual cue in a task that required them to look in the direction opposite the cue. This supports the idea that working memory capacity plays a crucial role in controlling attention in situations where responding is not automatic -- situations requiring active engagement with task goals and the inhibition of prepotent responses.

The frontal lobes play a crucial role in controlling attention and managing potentially interfering information (Goel & Grafman, 1995; Goldberg, 2001). For example, frontal lobe patients have difficulty inhibiting immediate impulses and thus perform poorly on the Stroop test (Goldberg, 2001). Frontal lobe patients also have difficulty inhibiting responses on the Tower of Hanoi puzzle where successful moves require inhibiting long-term goals in favor of short-term goals that seem inconsistent with the long-term goal (Goel & Grafman, 1995). Frontal lobe patients performed more poorly because they were less able to resolve conflicts between end-goals and sub-goals requiring the temporary inhibition of certain responses.

The executive functions of working memory and the mechanisms of activation and inhibition are not modular. By definition, mechanisms for solving novel problems have to be unspecialized in the domains for which they provide solutions. Although they may have access to specialized information obtained from the various modules that provide them with inputs, the problem solving procedures would have to be general enough to allow us to solve novel problems in various domains. We noted above that there is a substantial correlation between performance on the Raven's Progressive Matrices and performance on the Tower of Hanoi puzzle. Both tasks require a substantial amount of goal management, working memory and inhibition of prepotent responses. However, the types of information utilized in solving these problems, the specific goals and sub-goals, and the specific responses requiring suppression are unique to each task.

Furthermore, measures of working memory capacity predict performance across a wide range of tasks. The only common element is that they make high demands on attentional resources. For example, people who did well on a mathematical processing task also tended to do well on a perceptual task requiring inhibition of prepotent responses (Kane et al., 2001). Similarly, Lustig et al. (2001) found that individual differences in the capacity to inhibit no-longer-relevant information (i.e., proactive interference) predicted how well participants remembered components of a story. Turner and Engle (1989) showed that performance on a mathematical processing task and on the reading span task, another measure of working memory capacity, both predicted reading ability. This is what one would expect if working memory 'reflects an abiding, domain-free capability that is independent of any one processing task' (Kane et al. 2001, p. 169).

Nor are these processes information encapsulated. Modules carry out their operations by taking into consideration a very limited database (Fodor, 1983). The executive functions of working memory, however, coordinate information from various sources. Goldberg (2001) likens working memory to an executive who delegates tasks to subordinates, integrates information from other areas, and selects what information to seek. Indeed, the prefrontal cortex, the seat of the executive functions, is connected to every functional area of the brain. As a result, it is well suited for coordinating and integrating the work of all the other brain structures (Goldberg, 2001).

The executive functions are thus able to access goal-relevant information from a wide range of domains when solving a problem. Indeed, it is by being able to access representations from more modular processes that the executive functions are able to extend cognitive competencies in ways unrelated to their evolutionary function (e.g., Mithen, 1996). The data on general intelligence in animals are also consistent with this view. For example, Thompson, Crinella and Yu (1990) found six brain regions were involved in psychometric g for the rat, including a visuo-spatial attentional mechanism, a visual discrimination mechanism, a vestibular-proprioceptive-kinesthetic discrimination mechanism, a place learning mechanism, and a non-specific mechanism. Detterman (2000) notes the consistency of these data with research on human intelligence.

There is much evidence that general intelligence facilitates the integration of information obtained from modules. Geary's (1995) distinction between biologically primary and biologically secondary abilities is useful in this regard. Biologically primary abilities are domain-specific and include abilities like language and simple quantitative abilities, which develop universally and spontaneously. Biologically secondary abilities, such as reading and mathematical ability, utilize these domain-specific modules, but in a novel manner. Rather than appearing spontaneously and effortlessly, biologically secondary abilities typically require practice and tuition, often with coercion, bribery, or exhortation. Learning these biologically secondary abilities involves conscious awareness rather than implicit awareness. Success at these biologically secondary abilities is strongly correlated with general intelligence (Geary, 1995).

As a case in point, human language results from highly dedicated systems that enable children to effortlessly and unconsciously learn extraordinarily complex and productive grammatical rules (Pinker, 1994). However, skill in integrating these language systems, as well as the output of visual processing mechanisms, into an evolutionarily novel ability, reading, is strongly linked to general intelligence. Unlike language learning, reading is typically mastered only with a great deal of conscious effort, and it represents a major hurdle for many schoolchildren. The correlation between IQ and reading skills ranges from about .6 to .7, even longitudinally (e.g., Stevenson et al., 1976). IQ correlates with reading most strongly when decoding ability -- a specialized process -- is controlled (Jensen, 1998). By around the 3rd or 4th grade children are adept at decoding, and individual differences are mainly in comprehension. Reading comprehension is approximately as highly correlated with verbal as with nonverbal IQ. Similarly, there is evidence that children's language learning is limited by limitations in their working memory (Newport, 1991; Elman, 1994). As noted above, working memory is a domain-general ability strongly associated with g.

Functions of General Intelligence: Decontextualization and Analogical Reasoning

Decontextualization

In an admittedly speculative treatment, Cosmides and Tooby (2000, 2002) propose to account for the ability of humans to solve novel problems by the evolution of meta-representational abilities that include a 'scope syntax' that marks some information as only locally true or false. It includes 'a set of procedures, operators, relationships, and data-handling formats that regulate the migration of information among subcomponents of the human cognitive architecture' (2002, p. 183). Particularly important are meta-representations allowing us to decouple representations of locally true information from the rest of our knowledge base (e.g., John believes that X, where X may be true or false). This allows people 'to explore the properties of situations computationally, in order to identify sequences of improvised behaviors that may lead to novel, successful outcomes' (Cosmides & Tooby, 2000, p. 67). This view implies that intelligence involves what one might term 'hyper-contextualization' because it highlights local contingency and an unspecified set of mechanisms that allow for solution of localized problems in ways not coupled to the modular mechanisms designed to solve evolutionarily recurrent problems.

While meta-representational abilities are of undoubted importance in solving novel problems, these abilities are domain-general (Chiappe, 2000). For instance, Sperber (1994) postulates a meta-representation module specialized for thinking explicitly about representations. This mechanism has, inter alia, the 'ability to evaluate the validity of an inference, the evidential value of some information, the relative plausibility of two contradictory beliefs' (Sperber, 1994, p. 61). Furthermore, it is able to carry out these activities across all domains of thought. 'The actual domain of the meta-representational module is the set of all representations of which the organism is capable of inferring or otherwise apprehending the existence and content' (Sperber, 1994, p. 60). However, if this mechanism is able to, say, evaluate the validity of inferences in any domain, as Sperber himself suggests, it seems most reasonable to characterize the mechanism in question as a domain-general reasoning mechanism and not a module.

Moreover, Cosmides and Tooby's emphasis on hyper-contextualization is radically at odds with data showing general intelligence facilitates solving novel problems not by emphasizing local contingency but by decontextualization and abstraction. Decontextualization enables humans to inhibit the operation of highly context-sensitive, implicit and automatic heuristics for making inferences, judgments and decisions. It is an aspect of Piagetian formal operational thought, 'the independence of form from content' (Piaget, 1972). Decontextualization enables dealing with novel and unpredictable environments because a common source of solutions to novel problems involves recognizing similarities between new problems and previously solved problems, as via analogical reasoning.

IQ researchers are well aware of the centrality of decontextualization for thinking about intelligence.

One of the well-known byproducts of schooling is an increased ability to decontextualize problems. In almost every subject, pupils learn to discover the general rule that applies to a highly specific situation and to apply a general rule in a wide variety of different contexts. The use of symbols to stand for things in reading (and musical notation); basic arithmetic operations; consistencies in spelling, grammar, and punctuation; regularities and generalizations in history; categorizing, serializing, enumerating, and inferring in science, and so on. Learning to do these things, which are all part of the school curriculum, instills cognitive habits that can be called decontextualization of cognitive skills. The tasks seen in many nonverbal or culture-reduced tests call for no scholastic knowledge per se, but do call for the ability to decontextualize novel situations by discovering rules or regularities and then using them to solve the problem (Jensen, 1998, p. 325).

Literacy is of critical importance to decontextualization because it results in the rise of logic rather than a reliance on social context, and empirical observation rather than folk theories (Denny, 1991). Non-literate peoples tend to be less able to decontextualize their thinking. As Luria (1976) pointed out, they are more likely to be suspicious about purely theoretical logical operations, and are more likely to deny the validity of drawing conclusions from statements about things for which they have no direct experience.

Investigations of human reasoning show humans often radically contextualize problems. In particular, Stanovich and West (2000) note people have the following tendencies: (a) to adhere to conversational principles even in situations that lack many conversational features; (b) to contextualize a problem with as much prior knowledge as is easily accessible, even when the problem is formal and the only solution is a content-free rule; (c) to see design and pattern in situations that are undesigned, unpatterned, or random; (d) to reason enthymematically -- to make assumptions not stated in a problem and reason from those assumptions; and (e) to favor a narrative mode of thought.

Thinking evolved in a social context and the contextualization process often works quite well (Anderson, 1991; Oaksford & Chater, 1996). However, there are many real-life situations where decontextualization is called for, and decontextualization is linked with g. There is evidence people with higher g are better able to reason logically on a wide variety of tasks, including those where people are prone to the systematic biases resulting from the radical contextualization characteristic of human thinking.

In two studies Stanovich and West found correlations from 0.25 to 0.40 between g and performance on various reasoning problems in a university sample (a restriction of range that attenuates the correlations relative to a general population sample). These included tasks where successful performance requires ignoring the believability of the conclusion: knowing how to falsify an 'if P, then Q' statement in Wason's Selection Task; avoiding influence by vivid but unrepresentative information in favor of valid statistical information; avoiding the bias of allowing prior beliefs to influence evaluations of arguments; evaluating association based on 2 X 2 contingency tables; avoiding the bias of rating positive outcomes as superior to negative ones when confronted with equally compelling evidence for both; being able to test the influence of one variable by holding others constant; and choosing one of two counterfactual suggestions as better when there is objectively no difference. IQ accounted for about 39% of the variance on the seven tests.
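Since both individual correlations and a 'variance accounted for' figure are reported, it may help to recall the general relation between the two (a statistical identity, not a re-analysis of Stanovich and West's data): variance explained is the squared correlation, so a single-test correlation of .40 corresponds to 16% of the variance, while 39% of the variance, taken at face value for the seven tests considered jointly, corresponds to a correlation of roughly .62.

```latex
\[
  \text{variance accounted for} = r^{2}, \qquad r = .40 \;\Rightarrow\; r^{2} = .16,
\]
\[
  r^{2} = .39 \;\Rightarrow\; r = \sqrt{.39} \approx .62 .
\]
```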

Stanovich and West interpret their results as indicating two distinct cognitive systems. System 1 is an interactional, social intelligence. It is composed of mechanisms that support communication in which intention and attribution are critical. This has been termed 'interactional intelligence' by Levinson (1995). Construals triggered by System 1 are highly contextualized, personalized, and socialized. They are driven by considerations of relevance and are aimed at inferring intentionality by the use of conversational implicature, even in situations that are devoid of conversational features. System 1 consists of modules performing specific computations that solve recurrent problems in the human social EEA. They solve these problems quickly and unconsciously. The primacy of these mechanisms leads to what Stanovich terms the 'fundamental computational bias' in human cognition -- the tendency toward automatic contextualization of problems, a tendency that may have yielded solutions to recurrent social problems in our evolutionary past (Levinson, 1995). There appears to be low variability in interactional intelligence and little relation between interactional intelligence and IQ (Jones & Day, 1997; Matthews & Keating, 1995; McGeorge, Crawford, & Kelly, 1997).

System 2 conjoins the various characteristics that have been viewed as typifying controlled processing. It encompasses the mechanisms underlying general intelligence. System 2's more controlled processes serve to decontextualize and depersonalize problems. This system is more adept at representing problems in terms of rules and underlying principles. While it is much slower than System 1 modules, its advantage is its flexibility -- its ability to solve novel problems. It can deal with problems without social content and is not dominated by the goal of attributing intentionality or by the search for conversational relevance.

Analogical Reasoning

Analogical reasoning is a central process by which humans solve novel problems. According to James (1890, p. 530), 'the faculty for perceiving analogies is the best indication of genius.' People who can analogize are 'the wits, the poets, the inventors, the scientific men, the practical geniuses' (p. 530). Correlations between tests of general intelligence and tests of analogical reasoning range from .68 to .84 (Spearman, 1927; Sternberg, 1977). As indicated below, this is because analogical reasoning involves a conscious, controlled comparison process that draws heavily on working memory.

Analogical reasoning involves drawing parallels between novel problems and problems that have been solved in the past. Analogies, such as 'sound is like a water wave,' thus involve transferring information across conceptual domains (Chiappe, 2000; Gentner & Holyoak, 1997; Holyoak & Thagard, 1995). The transfer is based on establishing relevant similarities between a source domain (e.g., water waves) and a target domain (e.g., sound or light). Analogies allow us to use a familiar situation as a model for making inferences about an unfamiliar situation (Gentner & Holyoak, 1997). An analogy between water waves and the propagation of sound may depend on noticing that both spread from a point of origin and it may lead us to infer that sound should bounce back when it strikes a surface (Holyoak & Thagard, 1995).

Analogical reasoning does not fit well into a fully modular view of cognition (Chiappe, 2000; Fodor, 1983; Mithen, 1996). It is thus no surprise that discussion of analogical reasoning is largely absent from foundational articles on evolutionary psychology. The influence of analogy in cognition, however, can be witnessed across all spheres of human life. It is a truly domain-general process. Religions analogize gods to humans. Scientists use analogies in developing theories (Huygens's analogy between sound and light in support of his wave theory of light; Darwin's analogy between artificial selection and natural selection; the mind as a blank slate or computer). Analogies are also common in political rhetoric (the domino theory of communism), precedent-based legal reasoning, and everyday language (e.g., 'We're at a crossroads;' Lakoff & Johnson, 1980). Importantly, showing people an analogous situation from a very different domain facilitates solving novel problems (Gick & Holyoak, 1980).

Implicit in these examples is also the unencapsulated nature of analogical reasoning. The more information a system can take into account, the less encapsulated it is. As Fodor (1983, p. 117) notes, 'By definition, encapsulated systems do not reason analogically.' There seems to be no limit to the domains humans can bring together for comparison -- lawyers and sharks, crime and disease, evolution and lotteries, rage and volcanoes, education and stairways (Chiappe, 2000). Although many of the comparisons we are capable of making are fruitless (computers as windshields), our capacity to make them shows we are capable of bringing just about any two concepts together (Chiappe, 2000; Koestler, 1964).

Analogical reasoning involves explicit manipulation of mental representations. Analogical reasoners consciously reflect on representations, searching among their properties for those pertinent to the analogy. Analogical reasoning also requires comparison processes as described, for example, in Gentner's (1983) 'structure mapping' theory. The comparison process involves establishing a common system of relations between the source domain and the target rather than simply mapping attributes of the objects. For example, an analogy between the solar system and a hydrogen atom exploits the higher order relation in which the sun's attraction of a planet causes the planet to revolve around the sun, rather than the superficial attributes of the sun or planets.

Several studies have shown the overall importance of relational matches to analogical reasoning, especially higher order relational matches that map in a systematic and principled manner onto the target (e.g., Clement & Gentner, 1991; Gentner & Clement, 1988; Markman & Gentner, 1993). In general, people prefer interpretations that involve establishing similarities at abstract levels: 'People prefer to match and carry over systems of predicates governed by higher-order constraining relations' (Gentner & Clement, 1988, p. 313). Analogical reasoning is also a goal-driven process (Dunbar, 1997; Holyoak & Thagard, 1995; Spellman & Holyoak, 1996). Goals play a crucial role in analogical reasoning because they serve to ensure that the process is guided along relevant directions, thereby avoiding the frame problem. As indicated above, goals are critical to the evolution of domain-general mechanisms, and goal management is an important aspect of general intelligence.
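A toy sketch can make the structure-mapping idea more concrete. The code below is not Gentner's algorithm or any published implementation; the predicates and domain descriptions are invented, and the 'mapping' is reduced to finding relational patterns shared by the two domains, with higher-order relations such as CAUSE distinguished from simple attributes.

```python
# Each domain is a set of facts: (predicate, argument, ...).  Nested tuples
# stand for higher-order relations whose arguments are themselves facts.
SOLAR_SYSTEM = {
    ("attracts", "sun", "planet"),
    ("revolves_around", "planet", "sun"),
    ("cause",
     ("attracts", "sun", "planet"),
     ("revolves_around", "planet", "sun")),
    ("hot", "sun"),          # surface attribute, irrelevant to the analogy
    ("massive", "sun"),
}

HYDROGEN_ATOM = {
    ("attracts", "nucleus", "electron"),
    ("revolves_around", "electron", "nucleus"),
    ("cause",
     ("attracts", "nucleus", "electron"),
     ("revolves_around", "electron", "nucleus")),
    ("charged", "nucleus"),  # surface attribute
}

def skeleton(fact):
    """Strip the objects, keeping only the predicate structure."""
    predicate, *args = fact
    return (predicate,
            tuple(skeleton(a) if isinstance(a, tuple) else "_" for a in args))

def shared_structure(source, target):
    """Relational patterns present in both domains."""
    return {skeleton(f) for f in source} & {skeleton(f) for f in target}

for pattern in shared_structure(SOLAR_SYSTEM, HYDROGEN_ATOM):
    kind = "higher-order" if any(arg != "_" for arg in pattern[1]) else "first-order"
    print(kind, pattern)
```

The surface attributes (hot, massive, charged) drop out of the shared structure, while the CAUSE relation linking attraction to revolution survives, which is the systematicity preference the studies above describe.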

Analogical reasoning on the basis of relations is also found among animals. The best documentation is on chimpanzees, which have been found capable of using geometric and functional relationships as the basis of analogies (Gillan, Premack & Woodruff, 1981; Oden, Thompson & Premack, 2001). Such reasoning may not occur in the wild because it has been observed only in animals trained to use a symbol system that explicitly contains concepts for the relations 'same' and 'different' (Oden et al., 2001). 'Prior experience with tokens, analogous to words, that symbolize abstract same/different relations is a powerful facilitator enabling a chimpanzee ... to explicitly express in judgment tasks ... their otherwise implicit perceptual knowledge about relations between relations' (Oden et al., 2001, p. 490). Monkeys given similar training are unable to solve analogical problems (Oden et al., 2001).

Analogical reasoning and working memory. We have noted that working memory is a critical component of general intelligence. Analogical reasoning makes substantial use of the resources of working memory. The comparison process requires both a storage component and an attention-demanding, processing component -- two hallmarks of working memory tasks. Analogies require the activation of important elements and relations of the domains involved while searching for abstract commonalities between the two. The current processing goals motivating the analogy must be kept active. Potentially distracting components of the domains (e.g., superficial features that are irrelevant to the final interpretation of the analogy) must be inhibited.

Supporting the role of working memory in analogical reasoning, Mulholland, Pellegrino and Glaser (1980) found participants made more errors and took longer to respond as the number of elements and transformations required to solve an analogy increased. 'Increases in solution latency and error rates were due to working memory limitations associated with the representation and manipulation of item features at high levels of transformational complexity' (Mulholland et al., 1980, p. 281). Kyllonen and Christal (1990) found positive correlations ranging from .36 to .54 between performance on verbal analogy problems and working memory capacity tests. Waltz, Lau, Grewal and Holyoak (2000) found that increasing working memory load by having subjects generate random numbers while solving analogies resulted in fewer higher-level relational responses and more lower-level attribute responses than were produced by subjects without the secondary task. Similarly, Tohill and Holyoak (2000) found that subjects with state anxiety -- a factor known to depress working memory -- produced fewer relational responses and more attributional responses than the low-anxiety group.

Neuropsychological research also supports the connection between working memory and analogical reasoning. Waltz et al. (1999) found that subjects with damage to the prefrontal cortex -- the locus of the executive functions of working memory -- were impaired in the conditions that required integrating multiple relations, including analogical reasoning. 'Relational reasoning appears critical for all tasks identified with executive processing and fluid intelligence' (Waltz et al., 1999, p. 123). Waltz et al. suggest that deficits in planning and problem solving can be explained on the basis of deficits in relational integration. The construction of a hierarchy of subgoals when solving problems 'is a special case of relational integration' (1999, p. 123).

Neuroimaging studies indicate increasing activation in the dorsolateral prefrontal cortex and in the parietal cortex as relational complexity increases (Holyoak & Hummel, 2001). While the dorsolateral prefrontal cortex is involved in the domain-general task of manipulating relations among display elements, the domain-specific posterior cortex represents the elements of the relations (Holyoak & Hummel, 2001).

Analogical reasoning, decontextualization and the creation of new categories. Like other factors related to g, analogical reasoning involves decontextualization. In terms used by Stanovich and West (2000), analogical reasoning involves System 2, the controlled processing system that decontextualizes problems, rather than System 1 which is automatic, unconscious, and highly contextualized.

Analogies often require abstraction -- a form of decontextualization. Mapping across very different semantic domains requires generating representations that abstract away from specific details of the domains involved to produce a schema that preserves the abstract relations common to the two domains while ignoring the characteristics unique to each (Holyoak, 1984). Karmiloff-Smith (1992) refers to the process of abstraction as 'representational redescription.' Through representational redescription, patterns embedded in a particular domain become represented more explicitly and more abstractly. As a result, representations become more broadly accessible: 'Information already present in the organism's independently functioning, special-purpose representations, is made progressively available ... to other parts of the cognitive system' (Karmiloff-Smith, 1992, pp. 17-18).

Analogical reasoning therefore yields general problem solving schemas -- higher order categories applicable across a wide range of domains of which the specific analogs are instances (Holyoak, 1984). This decontextualization 'deletes differences between the analogs while preserving their commonalities' (Holyoak, 1984, p. 208). Such decontextualization plays a role in the generation of new concepts in science, as when the abstract concept of a wave is used to apply to vastly different domains. 'Once a more abstract concept of a wave was established, it played a role in the further extension [from water waves and sound waves] to light waves' (Holyoak & Thagard, 1995, p. 23).

The process of creating new categories through analogical reasoning is also evident in the metaphorical statements ubiquitous in natural language, statements such as 'crime is a disease,' 'my job is a jail,' and 'rumors are weeds' (Chiappe, 2000; Lakoff & Johnson, 1980). The process of combining concepts in metaphorical statements leads to the creation of categories more abstract than the source and target concepts involved (Glucksberg, 2001). For example, the metaphor 'rumors are weeds' leads to the creation of the category 'undesirable things that spread quickly and uncontrollably.' Once generated, this category can be applied to a wide range of novel situations.

Domain-specificity and domain-generality in learning

To this point we have argued the mechanisms underlying general intelligence evolved to solve novel problems. This does not, however, exhaust the role of domain-generality. As we will see, many of the mechanisms underlying what has traditionally been called 'learning' feature domain-generality, but are unrelated to measures of g. This is because these learning processes do not need the heavy working memory involvement typical of tasks reflecting general intelligence. Nonetheless, their domain-generality enables organisms to satisfy their evolved goals by exploiting novel contingencies.

To begin, motivation represents a major point of contact between evolutionary approaches and approaches based on learning theory. Learning theories generally suppose that some motivational systems are biological in origin, but traditionally they have tended toward 'biological minimalism,' positing only a bare minimum of evolved motivational systems. For example, traditional drive theory proposed that rats and people have drives to consume food, satisfy thirst, have sex, and escape pain. For an evolutionist, this is a good start, but it leaves out a great many other things that organisms desire innately. As noted above, even this short list of evolved motivations, or even a single biologically based motivational system, would be consistent with the evolution of domain-general mechanisms. Indeed, this has been the implicit and at times explicit rationale behind learning theorists' arguments that domain-general learning is adaptive in an evolutionary sense (Skinner, 1981).

In general, learning biases (e.g., biases in favor of learning certain skills) and guided learning (where the goal of learning is achieved via genetic systems) are expected to be weak in highly variable (non-recurrent) environments that are not available to the genes directly and in situations where individual trial and error learning is quite inexpensive (Boyd & Richerson 1985, 1988). Conversely, biased learning is favored where the problems to be solved are recurrent features of the EEA and where there are high costs to individual learning.

Evolutionary analyses of learning emphasize that learning mechanisms imply a great deal of evolved machinery and that they are often biased in ways that make certain types of learning easier than others (Garcia & Koelling, 1966; Öhman & Mineka, 2001; Rescorla, 1988; Rozin & Schull, 1988). A paradigmatic example is taste aversion learning, observable in a wide range of species, including quail, bats, catfish, cows, coyotes, and slugs (Kalat, 1985). If a rat consumes food and later feels nauseous, it associates the illness with the food rather than with other more recent stimuli such as lights and sounds, and it will make this association over much longer periods of delay than is typical for other examples of learning. The association of food with poison is greatly influenced by whether the food is unfamiliar to the animal, an indication that taste aversion learning is an adaptation to non-recurrent and unpredictable features of the environment.

This example shows that some types of novelty are sufficiently recurrent to yield dedicated, domain-specific mechanisms designed to cope with them. Recurrent novelty occurs when organisms have been confronted over an evolutionarily significant period with a need to evaluate a specific kind of novel situation, such as rats evaluating novel foods. Novel food items are a potential resource for the animal and must not be ignored even though they are more likely to be dangerous. Novel food items were a recurrent but unpredictable feature of the rat's EEA, with the result that the animal has evolved adaptations that minimize the cost of sampling this novelty. Rats will also preferentially eat novel food that they have smelled on the breath of another rat (Galef, 1987), thereby minimizing the danger of trial and error learning and demonstrating the utility of specialized social learning mechanisms that evolved to cope with recurrent problems involving specific sources of novelty.

There are many recurrent but contingent aspects of an animal's microenvironment that must be learned. This learning is best performed by specialized learning mechanisms that allow for rapid and efficient learning of specific types of information. A paradigmatic example for humans is language, where the language acquisition device is specialized to learn the language spoken around the child (Pinker, 1994). The language acquisition device makes learning any human language an effortless task whereas the task is impossible for animals not so equipped.

There certainly are mechanisms facilitating the learning of certain types of recurrently important information. However, it does not follow that the language acquisition device or other 'learning instincts' (Tooby & Cosmides, 1992) should be viewed as a general paradigm for all human learning -- that is, that human learning is always the result of domain-specific systems that evolved to preferentially learn certain types of information. Language acquisition is more the exception than the rule in human learning. Unlike social learning and associative learning, language acquisition has a critical period during which it is most efficient (Pinker, 1994; Spelke & Newport, 1998). Moreover, the capacity to acquire language can be selectively impaired. Children with Specific Language Impairment have normal intelligence, but their ability to acquire language is disrupted (Pinker, 1994). However, not all forms of learning can be selectively impaired, suggesting that at least some learning mechanisms apply to a wide range of domains.

Pavlovian Conditioning. Learning novel cause-effect relationships is an important type of learning in the natural world. Pavlovian conditioning allows animals to make opportunistic associations between local, transient events not recurrent in their EEA. In some cases, these associations recur sufficiently often to result in evolved biases, as in taste aversion learning in rats (e.g., Garcia & Koelling, 1966; Rescorla, 1980). However, using a wide range of stimuli, animals are able to opportunistically satisfy evolved goals by making novel associations, as in Pavlov's dogs learning that the sound of a bell would be followed by food -- not a recurrent contingency in the animals' EEA.

It is useful to distinguish between domain-specific and domain-general mechanisms of classical conditioning. Domain-specific mechanisms rely on evolved connections between specific UCSs and specific classes of stimuli, as in taste aversion learning in rats. In contrast, domain-general classical conditioning is designed to detect transient, locally true associations among any detectable stimuli using general (and fallible) 'rules of thumb' that rely on very broad, general features of the environment. The main general predictors are contiguity (including temporal order and temporal contiguity) and contingency (reliable succession). These predictors reflect that causes are reliable predictors of their effects, that causes precede their effects, and that in general causes tend to occur in close temporal proximity to their effects (Revulsky, 1985; Staddon, 1988). Causes that are temporally far removed from their effects are difficult to detect, and the temporal contiguity of cause and effect is a general feature of the world. The fact that there are exceptions, as in taste aversion learning, where non-contiguous causes have a special status because of the evolutionary history of the animal, does not detract from the general importance of temporal contiguity. From the animal's perspective, in the absence of such a prepared association, the best default condition is to suppose that causes precede the UCS and are temporally contiguous. While temporal contiguity is neither a necessary nor a sufficient condition for associating events, in general it is a main source of information on causality (Shanks, 1994).
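The two 'rules of thumb' can be given a concrete, if simplified, form. The sketch below is our illustration only: the delta-P contingency measure, the multiplicative scoring rule, and the decay constant tau are assumptions for exposition, not a model drawn from the conditioning literature cited here. It scores a candidate CS by its contingency with the UCS and discounts that score as the typical CS-UCS delay grows.

# Hypothetical sketch (Python): scoring a candidate cause (CS) of a UCS by the two
# general cues discussed in the text, contingency and temporal contiguity.
import math

def delta_p(ucs_with_cs, cs_trials, ucs_without_cs, no_cs_trials):
    # Contingency: P(UCS | CS) - P(UCS | no CS).
    return ucs_with_cs / cs_trials - ucs_without_cs / no_cs_trials

def contiguity_weight(mean_delay, tau=10.0):
    # Exponential discount of long CS-UCS delays; tau is an arbitrary constant.
    return math.exp(-mean_delay / tau)

def causal_score(ucs_with_cs, cs_trials, ucs_without_cs, no_cs_trials, mean_delay):
    return delta_p(ucs_with_cs, cs_trials, ucs_without_cs, no_cs_trials) * \
           contiguity_weight(mean_delay)

# A tone that reliably precedes food by 2 s scores high; a light that precedes
# food only occasionally, and after a 30 s delay, scores low.
print(round(causal_score(9, 10, 1, 10, mean_delay=2.0), 2))
print(round(causal_score(4, 10, 3, 10, mean_delay=30.0), 2))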

The general learning mechanism is also designed so that several pairings of stimuli for which there are no evolved linkages provide more information than one pairing, that repeated occurrences of UCSs in the absence of a particular stimulus make it unlikely that the UCS will be paired with that stimulus (UCS habituation), that the physical intensity of a stimulus increases its likelihood of being framed as a CS, that repeated non-reinforcement of a previously reinforced association will lead to extinction, and that animals are likelier to forget associations with greater intervals of time, which makes adaptive sense because environments are continuously changing (Revulsky, 1985).
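Several of these design features (incremental strengthening over repeated pairings, faster learning for more salient stimuli, extinction under non-reinforcement, and gradual forgetting) fall out of a simple error-correction rule of the kind long used to describe conditioning. The sketch below is only illustrative; the parameter values and the decay rule for forgetting are arbitrary assumptions rather than claims about any particular model.

# Minimal error-correction sketch (Python). Associative strength V grows over
# repeated CS-UCS pairings, grows faster for more salient (intense) CSs, declines
# under non-reinforcement (extinction), and decays slowly with elapsed time.

def update(v, reinforced, salience=0.3, beta=0.5, lam=1.0):
    # One trial: V moves toward lam when the CS is reinforced, toward 0 when not.
    target = lam if reinforced else 0.0
    return v + salience * beta * (target - v)

def forget(v, days, rate=0.02):
    # Gradual loss of the association over elapsed time between sessions.
    return v * (1 - rate) ** days

v = 0.0
for _ in range(10):                      # acquisition: several pairings beat one
    v = update(v, reinforced=True)
print(round(v, 2))                       # approaches asymptote (about 0.80)

for _ in range(10):                      # extinction: repeated non-reinforcement
    v = update(v, reinforced=False)
print(round(v, 2))                       # association largely lost (about 0.16)

print(round(forget(0.80, days=30), 2))   # forgetting over a month (about 0.44)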

Humans and animals are able to make associations between a wide range of stimuli in probing for causal relations (Dickinson, 1994; Shanks, 1994). Since the world is not an entirely predictable place, there is no reason to suppose organisms would be restricted to mechanisms designed to find causal relationships between specific sources of recurrently connected events. Domain-general mechanisms designed to opportunistically detect associations among any discriminable stimuli would be of obvious advantage, and available evidence indicates that such mechanisms have evolved.

Exactly how humans and animals detect associations remains in dispute, but the mechanisms are certainly not restricted to highly delimited sets of inputs recurrently linked to specific goals (UCSs) in the EEA. Mechanisms of classical conditioning probe the world for novel contingent relationships between experienced stimuli and evolutionarily important UCSs -- relationships that may be transient and only locally true. These mechanisms imply at least some evolved machinery. For example, Gallistel (1990, 1994, 1999) proposes that classical conditioning in animals derives from an evolved foraging mechanism specialized to compute associations between stimulus conditions and rates of reinforcement, and is sensitive to temporal alterations in the contingency between stimulus conditions and reinforcement. The mechanism detects predictive relationships between a wide range of stimuli and a UCS. Thus it solves problems that are multivariate (i.e., many different events can predict the UCS), non-stationary (i.e., the contingencies between CSs and UCSs can change at any time), and arrayed in a time series (i.e., learning the temporal dependence of one event on another). Human associative learning is also multivariate: We are highly sensitive to contingency among a wide range of arbitrarily chosen stimuli (including color patches, dot patterns, schematic faces, slides of skin disorders, pseudo-words, phrases, and artificial grammars), and between actions (e.g., key pressing) and arbitrarily chosen outcomes (e.g., Shanks, 1994).
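The multivariate and non-stationary character of the problem can likewise be illustrated with a minimal sketch (ours, not Gallistel's rate-estimation model): a single error-correction rule applied to several arbitrary cues learns which one currently predicts the UCS and re-learns when the contingency reverses. The cue names, presentation probabilities, and learning rate are assumptions for illustration.

# Hypothetical sketch (Python): one delta rule tracks which of several arbitrary
# cues currently predicts the UCS, and adapts when the contingency changes.
import random
random.seed(1)

cues = ["tone", "light", "odor"]          # arbitrary, discriminable stimuli
w = {c: 0.0 for c in cues}                # learned associative weights
alpha = 0.15

def trial(predictive_cue):
    # A random subset of the cues occurs; the UCS follows only when the currently
    # predictive cue is among them. Weights are updated by a summed-error delta rule.
    present = {c: float(random.random() < 0.5) for c in cues}
    ucs = present[predictive_cue]
    error = ucs - sum(w[c] * present[c] for c in cues)
    for c in cues:
        w[c] += alpha * error * present[c]

for _ in range(300):                      # phase 1: the tone predicts the UCS
    trial("tone")
print({c: round(v, 2) for c, v in w.items()})   # tone near 1, the others near 0

for _ in range(300):                      # the contingency reverses: now the light
    trial("light")
print({c: round(v, 2) for c, v in w.items()})   # light near 1, tone relearned toward 0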

At a functional level, such a learning mechanism is an adaptation as we have defined it: It is a system of inherited and reliably developing properties designed by natural selection to solve an adaptive problem, in this case, the need to adapt to constantly changing, locally true information. It is a domain-general adaptation because it functions to adapt the animal to non-recurrent, constantly changing relationships between all available stimuli that signal the occurrence of other stimuli, including those that serve as cues for satisfying evolved goals (UCSs). The mechanism is domain-general and unencapsulated because the stimuli linked to reinforcement (i.e., the CSs) are any stimulus detectable by the animal; there is no proprietary database except in the vacuous sense that any stimulus available to the animal may be linked with reinforcement.

Learning and Fear. The fear system in humans and animals provides a good illustration of the intertwining of domain-specific with domain-general learning mechanisms. Fears of certain stimuli recurrently associated with danger in the EEA are particularly easy to acquire and difficult to extinguish (e.g., Öhman & Mineka, 2001; Seligman, 1971). The fear system is therefore selective in its inputs because some stimuli induce fears more easily than others: 'Evolutionary contingencies moderate the ease with which particular stimuli may gain control of the module' (Öhman & Mineka, 2001, p. 488).

However, other stimuli can gain control of the fear system. The adaptiveness of domain-general aspects of the fear system can be seen from data showing that when the UCS is highly aversive, or when a CS without any evolutionary significance is known to be very dangerous, the differences between evolutionarily primed and non-primed fears disappear. Thus pointed guns are a very potent stimulus for fear in our culture, saturated as it is with media reports and dramatizations of shootings, with the result that guns activate the fear system in a manner indistinguishable from evolutionarily prepared stimuli like snakes and spiders. Hugdahl and Johnsen (1989) found that a gun pointed at the subject and followed by a loud noise produced superior conditioning compared to slides of snakes. Moreover, when both were followed by a shock, the extinction rate for the gun pointed at the subject was nearly identical to that for a snake directed at the subject. The results indicate that prolonged experience with stimuli such as pointed guns associated with intensely aversive outcomes eventually leads to enhanced connections in the amygdala that 'function like evolutionarily prepared associations' (Öhman & Mineka, 2001, p. 513). Similarly, Sutton and Mineka (cited in Öhman & Mineka, 2001) did not find a covariation bias for images of a knife-wielding male dressed in black under normal, non-traumatic conditions, but did find a covariation bias similar to that for evolutionarily prepared stimuli among students primed by real-life reports of local knife-wielding criminals and a stabbing on campus.3 In this case, fear of a person wielding an item that was not an evolutionary danger produced the sort of bias typically found with evolved fears. Similarly, Lautch (1971) found that intense trauma could result in phobias even toward normally benign objects with no evolutionary prepotency.

Domain-generality is also implied by data indicating there are two different learning systems relevant to fear in animals and humans (LeDoux, 1996; Öhman & Mineka, 2001). The inputs to the amygdala fear system include prepared connections between fear responses and evolutionarily recurrent stimulation, although, as we have seen, non-prepared stimuli also have access to the system. The domain-general associative learning system in the hippocampus is activated in attempts to link any and all available stimuli to aversive UCSs, including a range of contextual stimuli. Öhman and Mineka (2001) suggest this system comes into play in the novel and unnatural situations typical of laboratory studies on animals, where the aversive UCS is very motivating and where picking up any and all available information on cues related to the appearance of the UCS may be vital. With humans, Campbell, Sanderson, and Laverty (1964) found a long-lasting conditioned fear response to an arbitrary tone CS following a single traumatic event involving the suspension of breathing due to a drug injection. There was no extinction even at a follow-up 3 weeks after the experiment. The drug did not cause pain, but the experience was described by subjects as 'extremely harrowing' (p. 632).

In conclusion, the fear system fails to qualify as a module because stimuli with no phylogenetic importance can serve as CSs for activating the system, especially if they are intensely aversive. Domain-general learning mechanisms of classical conditioning function both in mildly aversive situations without the involvement of the amygdala fear system and in intensely aversive situations with involvement of the amygdala fear system where any and all available information on contingency is of critical adaptive importance. Nor is the system encapsulated because domain-general cognitive mechanisms are able to influence the effectiveness of UCSs in producing fear CRs. Experience with dangerous objects also influences expectations that such objects will have aversive effects and results in stronger fear CRs that are more resistant to extinction.

Instrumental Conditioning. Instrumental conditioning allows animals to opportunistically assess the effects of their own behavior. An animal without the ability to learn contingencies between its actions and their consequences would have to rely on evolved connections between specific stimuli and specific behaviors. Such a strategy would suffice in a stable, predictable world but would prevent animals from being able to opportunistically take advantage of novel, serendipitous, and non-recurrent contingencies between their behavior and the satisfaction of evolved goals; such animals would not be able to alter their behavior in situations where the goal-related consequences of behavior vary. For example, in laboratory studies, behavior that satisfies the evolved goal state of hunger is strengthened when rats learn to press levers to obtain food. Levers useful in obtaining food are not a part of the animals' EEA, but rats are designed to be able to take advantage of novel, serendipitous associations between their behavior and the availability of food. Animals are able to perform a wide range of behaviors that are not species-typical foraging behaviors to satisfy their evolved goal of assuaging hunger. Similarly, humans, to the great benefit of TV programmers, can be induced to do virtually anything within the realm of what is physically possible for them if it results in rewards. Organisms unable to take advantage of such novel contingencies -- contingencies not recurrently present in their EEA -- would clearly be at a disadvantage.

Natural selection has sometimes resulted in certain default activities occurring as prepotent responses to situations of reward or danger (Staddon, 1988). For example, raccoons wash any small object that is strongly associated with food, while pigeons tend to peck anything strongly associated with food. Natural selection has also operated to make certain operants easier to learn than others. For example, bees more easily learn to switch their nectar gathering behavior to new flowers (switch learning) and find it difficult to learn to return to the same flower that previously had nectar (stay learning) -- presumably a reflection of evolutionary pressures linking a particular behavior and a particular goal (Cole, Hainsworth, Kamil, Mercier & Wolf, 1982). This situation represents a conflict between evolved action-goal linkages and the general cues of contingency and temporal contiguity for engaging in actions that result in reward. The interesting point is that bees are eventually able to master the stay learning condition; the domain-general mechanism ultimately overrides the domain-specific one. In the absence of evolved biases, the domain-general instrumental conditioning mechanism is able to take advantage of novel, serendipitous associations between the animal's behavior and various rewards and punishments. The best general cues for this are contingency and temporal contiguity (Staddon, 1988).
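The way an evolved response bias can eventually be overridden by domain-general reward learning can be sketched as follows. This is a toy illustration of the general point, not a model of bee learning; the bias size, learning rate, and exploration rate are arbitrary assumptions.

# Hypothetical sketch (Python): 'switch' starts with a built-in head start (the
# evolved default), but only 'stay' is rewarded in this environment, so its
# learned value eventually wins out and comes to govern choice.
import random
random.seed(2)

values = {"switch": 1.0, "stay": 0.0}     # evolved prepotent bias toward switching
alpha, epsilon = 0.1, 0.1                 # learning rate and exploration rate

def reward(action):
    # The novel contingency to be learned: only staying pays off here.
    return 1.0 if action == "stay" else 0.0

late_choices = []
for t in range(500):
    if random.random() < epsilon:                    # occasional exploration
        action = random.choice(list(values))
    else:                                            # otherwise take the higher-valued action
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])
    if t >= 400:
        late_choices.append(action)

print({a: round(v, 2) for a, v in values.items()})     # stay near 1, switch near 0
print(late_choices.count("stay") / len(late_choices))  # mostly 'stay' by the end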

Tooby and Cosmides (1992, p. 95) claim that support for domain-generality in learning relies on data from 'experimenter-invented, laboratory limited, arbitrary tasks.' They criticize traditional learning experiments for not focusing exclusively on ecologically valid, natural tasks -- tasks that deal with problems that were recurrent in the animal's EEA. We agree that investigations of such tasks are likely to reveal specialized learning mechanisms in some cases. However, an equally remarkable aspect of learning is that pigeons can learn to peck keys to satisfy their evolved goals of staving off hunger and eating tasty foods. Although pecking for food is undoubtedly a species-typical behavior, pigeons, like rats learning to press levers, are also able to learn a variety of arbitrary, experimenter-contrived behaviors that are not components of their species-typical foraging repertoire. In other words, they are able to solve a fundamental problem of adaptation (getting food) in a novel and even arbitrary environment that presents few, if any, of the recurrent associations between the animal's behavior and obtaining food experienced in the animal's EEA. Similarly, humans are able to learn lists of nonsense syllables -- another example highlighted by Tooby and Cosmides (1992) -- despite the fact that learning such lists was not a recurrent problem in the EEA. People can learn such lists because their learning mechanisms can be harnessed to new goals, such as getting course credit as a subject in a psychology study.

In general, neither operant nor classical conditioning evolved to link only specific events or behaviors recurrent in the EEA. The mechanisms underlying these abilities imply a great deal of evolved machinery, and there are important cases where evolution has shaped learning in ways that depart from domain-generality. Nevertheless, there is no proprietary database for garden-variety examples of operant and classical conditioning; nor is there evidence that the information available to these mechanisms is typically encapsulated. In general, there is no characteristic input to these systems: the input to the associational mechanisms of rats and humans extends to virtually anything detectable by the sense organs, and operant behaviors span virtually the entire range of physically possible motor behaviors. Because of their domain-generality, these mechanisms allow humans to solve problems with features not recurrent in the EEA.

Social Learning. Social learning is also a domain-general adaptation. It occurs not only among humans but also among many birds and mammals. For example, rats can learn new means of obtaining food rewards by observing conspecifics (Heyes, Dawson & Nokes, 1992). Terkel (1996) showed that social learning of a method for opening pine cones allowed the Black rat (Rattus rattus) to occupy a new ecological niche, an illustration of the utility of learning for adapting to novel opportunities not characteristic of the animal's EEA. Most social learning among animals functions to improve foraging efficiency by allowing animals to take advantage of novel but transient information, rather than to create cultural traditions that persist across generations (Laland, Richerson & Boyd, 1996).

There are a variety of methods for the social transmission of information, ranging from social facilitation (learning facilitated by the presence of a conspecific) to true imitation (one animal copies another's specific behavior, where the behavior is neither reinforced nor part of the natural repertoire of the observer; Zentall, 1996). Moore (1996) shows that parrots are able to socially learn, without reinforcement, a wide range of behaviors that are not part of their species-typical repertoire. Although there are controversies about the extent to which non-human primates are able to exhibit true imitation (see Tomasello, 1996), there is no question they are able to acquire new behaviors from socially transmitted information without reinforcement. Great apes are able to imitate a wide range of behaviors (e.g., using a hammer or a paint brush) modeled by people (Whiten & Ham, 1992). Human infants, at least beyond one year of age, are 'imitative generalists' who 'imitate a wide variety of acts in varied situations. Facial, manual, vocal, and object-related imitation has been documented; familiar and novel acts are imitated; both immediate and deferred imitation occurs; imitation can take place in the original setting or be transferred to novel contexts' (Meltzoff, 1996, p. 361).

Tooby and Cosmides (1992, p. 118) acknowledge the importance of social learning that results in 'a large residual category of representations or regulatory elements that reappear in chains from individual to individual -- 'culture' in the classic sense.' However, social learning tasks 'would be unsolvable if the child did not come equipped with a rich battery of domain-specific inferential mechanisms, a faculty of social cognition, a large set of frames about humans and the world drawn from the common stock of human metaculture, and other specialized psychological adaptations designed to solve the problems involved in this task' (Tooby & Cosmides, 1992, p. 119).

There is no question social learning requires a great deal of evolved machinery, but this is insufficient to establish social learning as domain-specific. To be interesting, the argument must entail that the content of what is learned is evolutionarily circumscribed. We acknowledge the importance of evolved machinery, including evolved frames, in social learning tasks (see below). However, this does not imply the learning system evolved to solve a particular, highly discrete problem recurrent in the EEA. Nor is there evidence for a proprietary database for social learning, or that the information available to, or transmitted by, social learning mechanisms is restricted to a specific set of messages important for adaptation in the EEA. Social learning systems in humans are domain-general in the critical sense that they allow us to benefit from the experience of others, even when the behavior in question was not recurrently adaptive in the EEA but is effective in achieving evolved goals in the current environment. This underscores the importance of social learning in adapting to the contemporary world.

There are important evolved mechanisms guiding human social learning in adaptive ways. Parent-child affection channels children's social learning within the family (MacDonald 1988, 1992, 1997). The human affectional system is designed to cement long-term relationships of intimacy and trust by making them intrinsically rewarding (MacDonald, 1992). A continuing relationship of warmth and affection between parents and children is expected to result in the child's acceptance of adult values, identification with the parent, and a generally higher level of compliance -- 'the time-honored concept of warmth and identification' (Maccoby & Martin, 1983, p. 72). The finding that warmth of the model facilitates imitation and identification has long been noted by social learning theorists (e.g., Bandura, 1969).

Besides the framing effect of warmth, evolution has also shaped children's preferences for other features of models such as dominance, high social status, and similarity (MacDonald, 1988).

For humans, the types of behaviors that can be successfully transmitted by social learning are not limited to discrete sets of behaviors useful for meeting recurrent challenges of the EEA. They are limited only by general cognitive and motor constraints: limits on the informational complexity of modeled behavior, limits on attentional processes and memory, and limits on human motor abilities (Bandura, 1969; Shettleworth, 1994). Even among rats, Kohn and Dennis (1972) found that animals that were able to observe other rats solve a discrimination problem (and thus avoid shock) were quicker to learn this discrimination than rats that were denied the opportunity to observe. The patterns that were discriminated were entirely arbitrary and in no sense elements of the EEA. The response pattern involved motor activity to escape the shock by going through the appropriate door. The mechanism therefore was not domain-specific: it was not triggered by a highly delimited stimulus recurring in the EEA and it did not result in a highly discrete response designed specifically to deal adaptively with this problem.

In short, specialized learning mechanisms are only part of the story of human and animal learning. There are many non-recurrent events that are learnable without specialized mechanisms and being able to learn them is adaptive. Apart from well-known examples where learning is highly biased, learning mechanisms are domain-general.

Conclusion

Evolutionary psychology has been of great value in placing evolutionary thinking at the center of cognitive science. However, by erecting an equally one-sided paradigm in opposition to the 'Standard Social Science Model,' it runs the risk of over-emphasizing modularity and ignoring the vast body of data indicating a prominent role for domain-general mechanisms in human and animal cognition. As described here, domain-general mechanisms are not weak 'jacks of all trades but masters of none.' They are extremely powerful but fallible mechanisms that are the basis for solving a fundamental problem faced by all but the simplest organisms -- the problem of navigating constantly changing environments that present challenges that were not recurrent in the EEA. Most importantly, the domain-general mechanisms at the heart of human cognition are responsible for the decontextualization and abstraction processes critical to the scientific and technological advances that virtually define civilization.

The processes discussed here are not meant to be an exhaustive examination of domain-generality in cognition and learning, but merely illustrative. We expect that a great many other processes will yield to the type of analysis presented here, including forms of reasoning and induction other than analogical reasoning, memory and categorization, developmental plasticity, and large areas of personality psychology where, as in the analysis of the fear system presented above, there is a complex interplay between evolved emotional responses to specific stimuli and the ability to recruit emotional systems to confront entirely novel dangers and opportunities (Chiappe & MacDonald, 2002).

References

Ackerman, P. (1988). Determinants of individual differences during skill acquisition: Cognitive abilities and information processing. Journal of Experimental Psychology: General, 117, 288-318.

Anderson, B. (2000). The g factor in non-human animals. In G. R. Bock, J. A. Goode, & K. Webb (Eds.), The nature of intelligence (pp. 79-95). New York: Wiley.

Anderson, J. (1991). Is human cognition adaptive? Behavioral and Brain Sciences, 14, 471-517.

Axelrod, R. (1984). The evolution of cooperation. New York, NY: Basic Books.

Bandura, A. (1969). Social learning theory of identificatory processes. In D. A. Goslin (Ed.), Handbook of socialization theory and research. New York: Rand-McNally.

Bechelder, B. L. & Denny, M. R. (1977). A theory of intelligence II: The role of span in a variety of intellectual tasks. Intelligence, 1, 237-256.

Bowlby, J. (1969). Attachment and loss, Vol. I: Attachment. New York: Basic Books.

Boyd, R., & Richerson, P. (1985). Culture and the evolutionary process. Chicago: University of Chicago Press.

Boyd, R., & Richerson, P. (1988). An evolutionary model of social learning: The effects of spatial and temporal variation. In T. Zentall & B. Galef (Eds.) Social learning: Psychological and biological perspectives (pp. 29-48). Hillsdale: Erlbaum.

Campbell, D., Sanderson, R., & Laverty, S. G. (1964). Characteristics of a conditioned response in human subjects during extinction trials following a simple traumatic conditioning trial. Journal of Abnormal and Social Psychology, 68, 627-639.

Carpenter, P., Just, M. & Shell, P. (1990). What one intelligence test measures: A theoretical account of the processing in the Raven Progressive Matrices Test. Psychological Review, 97, 404-431.

Carroll, J. B. (1993). Human cognitive abilities. New York: Cambridge University Press.

Chiappe, D. (2000). Metaphor, modularity, and the evolution of conceptual integration. Metaphor and Symbol, 15, 137-158.

Chiappe, D., & MacDonald, K. (2002). Evolution and domain-generality. In preparation.

Clement, C. & Gentner, D. (1991). Systematicity as a selection constraint in analogical mapping. Cognitive Science, 15, 89-132.

Cole, S., Hainsworth, F. R., Kamil, A. C., Mercier, T., & Wolf, L. L. (1982). Spatial learning as an adaptation in hummingbirds. Science, 217, 655-657.

Cosmides, L. & Tooby, J. (1987). From evolution to behavior: Evolutionary psychology as the missing link. In J. Dupre (Ed.), The latest on the best: Essays on evolution and optimality (pp. 277-306). Cambridge, MA: MIT Press.

Cosmides, L. & Tooby, J. (1989). Evolutionary psychology and the generation of culture, part II. Case study: A computational theory of social exchange. Ethology and Sociobiology, 10, 51-97.

Cosmides, L. & Tooby, J. (2000). Consider the source: The evolution of adaptations for decoupling and metarepresentation. In D. Sperber (Ed.), Metarepresentations (pp. 53-115). New York: Oxford University Press.

Cosmides, L., & Tooby, J. (2002). Unraveling the enigma of human intelligence: Evolutionary psychology and the multimodular mind. In R. J. Sternberg & J. C. Kaufman (Eds.), The evolution of intelligence (pp. 145-198). Mahwah: Erlbaum.

Crinella, F. M., & Yu, J. (1995). Brain mechanisms in problem-solving and intelligence: A Replication and Extension. Intelligence 21, 225-246.

DeLoache, J. S., Miller, K. F., & Pierroutsakos, S. L. (1998). Reasoning and problem solving. In W. Damon, D. Kuhn, & R. S. Siegler (Eds.), Handbook of child psychology, Vol. 2: Cognition, perception, and language (pp. 801-850). New York: Wiley.

Dennett, D. (1987). Cognitive wheels: The frame problem of AI. In Z. Pylyshyn (Ed.), The robot's dilemma (pp. 41-64). Norwood, NJ: Ablex.

Denny, J. P. (1991). Rational thought in oral and literate decontextualization. In D. R. Olson & N. Torrance (Eds.), Literacy and orality (pp. 66-89). Cambridge, U.K.: Cambridge University Press.

Detterman, D. (2000). General intelligence and the definition of phenotypes. In G. R. Bock, J. A. Goode, & K. Webb (Eds.), The nature of intelligence (pp. 136-145). New York: Wiley.

Dickinson, A. (1994). Instrumental conditioning. In N. J. Mackintosh (Ed.), Animal learning and cognition (pp. 45-79). San Diego: Academic Press.

Dunbar, K. (1997). How scientists think: Online creativity and conceptual change in science. In T. B. Ward, S. M. Smith, & S. Vaid (Eds.), Conceptual structures and processes: Emergence, discovery, and change (pp. 461-493). Washington, DC: American Psychological Association.

Duncan, J., Burgess, P. & Emslie, H. (1995). Fluid intelligence after frontal lobe lesions. Neuropsychologia, 33, 261-268.

Duncan, J., Emslie, H., Williams, P., Johnson, R. & Freer, C. (1996). Intelligence and the frontal lobe: The organization of goal-directed behavior. Cognitive Psychology, 30, 257-303.

Eisenberg, J. F. (1981). The mammalian radiations. Chicago: University of Chicago Press.

Elman, J. L. (1994). Implicit learning in neutral networks: The importance of starting small. Attention and Performance, 15, 861-888.

Emmons, R. A. (1989). The personal striving approach to personality. In L. A. Pervin (Ed.) Goal concepts in personality and social psychology (pp. 87-126). Hillsdale, NJ: Erlbaum.

Engle, R., Tuholski, S. W., Laughlin, J. E., & Conway, A. R. (1999). Working memory, short-term memory, and general fluid intelligence: A latent-variable approach. Journal of Experimental Psychology: General, 128, 309-331.

Epstein, R., Kirshnit, C., Lanza, R., & Rubin, L. (1984). 'Insight' in the pigeon: Antecedents and determinants of an intelligent performance. Nature, 308, 61-62.

Fehr, E., & Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137-140.

Fodor, J. A. (1983). The modularity of mind. Cambridge, MA: MIT Press.

Fodor, J. A. (2000). The mind doesn't work that way. Cambridge, MA: MIT Press.

Foley, R. (1996). The adaptive legacy of human evolution: A search for the Environment of Evolutionary Adaptedness. Evolutionary Anthropology, 4, 194-203.

Galef, B. G., Jr. (1987). Social influences on the identification of toxic foods by Norway rats. Animal Learning and Behavior, 15, 327-332.

Gallistel, C. R. (1990). The organization of learning. Cambridge, MA: MIT Press.

Gallistel, C. R. (1994). Space and time. In N. J. Mackintosh (Ed.), Animal Learning and Cognition (pp. 221-253). San Diego: Academic Press.

Gallistel, C. R. (1999). The replacement of general purpose learning models with adaptively specialized learning modules. In M. S. Gazzaniga (Ed.), The new cognitive neurosciences (2nd ed.). Cambridge, MA: MIT Press.

Garcia, J. & Koelling, R. (1966). Relation of cue to consequence in avoidance learning. Psychonomic Science, 4, 123-124.

Geary, D. C. (1995). Reflections of evolution and culture in children's cognition: Implications for mathematical development and instruction. American Psychologist, 50, 24-37.

Geary, D. C., & Huffman, K. J. (2002). Brain and cognitive evolution: Forms of modularity and functions of mind. Psychological Bulletin, 128, 667-698.

Gelman, R., & Williams, E. M. (1998). Enabling constraints for cognitive development and learning: Domain-specificity and epigenesis. In D. Kuhn & R. S. Siegler (Vol. Eds.), Cognition, perception, and language, Vol. 2 (pp. 575-630). W. Damon (Gen. Ed.), Handbook of child psychology (5th Ed.). New York: Wiley.

Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155-170.

Gentner, D. & Clement, C. (1988). Evidence for relational selectivity in the interpretation of analogy and metaphor. In G. Bower (Ed.), The psychology of learning and motivation, Vol. 22 (pp. 307-358). New York, NY: Academic Press.

Gentner, D. & Holyoak, K. (1997). Reasoning and learning by analogy. American Psychologist, 52, 32-34.

Gick, M. L. & Holyoak, K. (1980). Analogical problem solving. Cognitive Psychology, 12, 306-355.

Gillan, D., Premack, D., & Woodruff, G. (1981). Reasoning in the chimpanzee: I. Analogical reasoning. Journal of Experimental Psychology: Animal Behavior Processes, 7, 1-17.

Glucksberg, S. (2001). Understanding figurative language: From metaphors to idioms. Oxford: Oxford University Press.

Goel, V. & Grafman, J. (1995). Are the frontal lobes implicated in 'planning' functions? Interpreting data from the tower of Hanoi. Neuropsychologia, 33, 623-642.

Goldberg, E. (2001). The Executive Brain: The Frontal Lobes and the Civilized Mind. Oxford, UK: Oxford University Press.

Heinrich, B. (2000). Testing insight in ravens. In C. Heyes and L. Huber (Eds.), The evolution of cognition (pp. 289-305). Cambridge, MA: MIT Press.

Heyes, C., Dawson, G., & Nokes, T. (1992). Imitation in rats: Initial responding and transfer evidence. Quarterly Journal of Experimental Psychology, 45, 229-240.

Holyoak, K. (1984). Analogical thinking and human intelligence. In R. Sternberg (Ed.) Advances in the Psychology of Human Intelligence, Vol. 2 (pp. 199-230), Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Holyoak, K. & Hummel, J. (2001). Toward an understanding of analogy within a biological symbol system. In D. Gentner, K. Holyoak, & B. Kokinov (Eds.), The analogical mind: Perspectives from cognitive science (pp. 161-195). Cambridge, MA: MIT Press.

Holyoak, K. & Thagard, P. (1995). Mental leaps: Analogy in creative thought. Cambridge, MA: MIT Press.

Horn, J. L., & Hofer, S. M. (1992). Major abilities and development in the adult period. In R. J. Sternberg & C. A. Berg (Eds.), Intellectual Development. New York: Cambridge University Press.

Hugdahl, K., & Johnsen, B. H. (1989). Preparedness and electrodermal fear-conditioning: Ontogenetic vs. phylogenetic explanations. Behavioral Research and Therapy, 27, 269-278.

Irons, W. (1998). Adaptively relevant environments versus the environment of evolutionary adaptedness. Evolutionary Anthropology, 6, 194-204.

James, W. (1890). The principles of psychology, Vol. 1. New York, NY: Dover Publications Inc.

Jensen, A. (1998). The g factor: The science of mental ability. Westport, CT: Praeger Publishing.

Jerison, H. J. (1973). The evolution of the brain and intelligence. New York: Academic Press.

Johanson, D. C., & Edey, M. A. (1981). Lucy: The beginnings of humankind. New York: Simon & Schuster.

Jones, K., & Day, J. D. (1997). Discrimination of two aspects of cognitive-social intelligence from academic intelligence. Journal of Educational Psychology, 89, 486-497.

Kalat, J. W. (1985). Taste-aversion learning in ecological perspective. In T. Johnston & A. Pietrewicz (Eds.), Issues in the Ecological Study of Learning (pp. 119-141). Hillsdale, NJ: Erlbaum.

Kane, M. J., Bleckley, M. K., Conway, A. R., & Engle, R. (2001). A controlled-attention view of working memory capacity. Journal of Experimental Psychology: General, 130, 169-183.

Karmiloff-Smith, A. (1992). Beyond modularity: A developmental perspective on cognitive science. Cambridge, MA: MIT Press.

Koestler, A. (1964). The act of creation. London: Hutchinson.

Köhler, W. (1925). The mentality of apes. New York, NY: Harcourt, Brace.

Kohn, B., & Dennis, M. (1972). Observation and discrimination learning in the rat: Specific and non-specific effects. Journal of Comparative and Physiological Psychology, 78, 292-296.

Kyllonen, P. C. & Christal, R. E. (1990). Reasoning ability is (little more than) working-memory capacity?! Intelligence, 14, 389-433.

Lakoff, G. & Johnson, M. (1980). Metaphors we live by. Chicago, Il: University of Chicago Press.

Laland, K. N., Richerson, P. J., & Boyd, R. (1996). Developing a theory of animal social learning. In C. M. Heyes & B. G. Galef (Eds.), Social learning in animals: The roots of culture (pp. 129-154). San Diego: Academic Press.

Larson, G., & Saccuzzo, D. (1989). Cognitive correlates of general intelligence: Toward a process theory of g. Intelligence, 13, 5-31.

Lautch, H. (1971). Dental phobia. British Journal of Psychiatry, 119, 151-158.

LeDoux, J. (1996). The emotional brain: The mysterious underpinnings of emotional life. New York: Simon & Schuster.

Lerner, R. (1984). On human plasticity. New York: Cambridge University Press.

Levinson, S. C. (1995). Interactional biases in human thinking. In E. Goody (Ed.), Social intelligence and interaction (pp. 221-260). Cambridge: Cambridge University Press.

Luria, A. R. (1976). Cognitive development: Its cultural and social foundations. Cambridge, MA: Harvard University Press.

Lustig, C., May, C. & Hasher, L. (2001). Working memory span and the role of proactive interference. Journal of Experimental Psychology: General, 130, 199-207.

Maccoby, E., & Martin, J. (1983). Socialization in the context of the family. In E. M. Hetherington (Ed.), Handbook of child psychology, Vol. 4: Socialization, personality, and social development (pp. 1-102). New York: Wiley.

MacDonald, K. B. (1988). The interfaces between developmental psychology and evolutionary biology. In K. B. MacDonald (Ed.), Sociobiological Perspectives on Human Development. New York: Springer-Verlag.

MacDonald, K. B. (1991). A perspective on Darwinian psychology: The importance of domain-general mechanisms, plasticity, and individual differences. Ethology and Sociobiology, 12, 449-480.

MacDonald, K. B. (1992). Warmth as a developmental construct: An evolutionary analysis. Child Development, 63, 753-773.

MacDonald, K. B. (1995). Evolution, the Five Factor Model, and Levels of Personality. Journal of Personality 63, 525-567.

MacDonald, K. B. (1997). The Coherence of Individual Development: An Evolutionary Perspective on Children's Internalization of Parental Values. In J. Grusec & L. Kuczynski (Eds.), Parenting and Children's Internalization of Values: A Handbook of Contemporary Theory (pp. 362-397). New York: Wiley.

MacDonald, K. B. (1998). Evolution, Culture, and the Five-Factor Model. Journal of Cross-Cultural Psychology, 29, 119-149.

Markman, A. & Gentner, D. (1993). Structural alignment during similarity comparisons. Cognitive Psychology, 25, 431-467.

Marshalek, B., Lohman, D. F., & Snow, R. E. (1983). The complexity continuum in the radex and hierarchical models of intelligence. Intelligence, 7, 107-127.

Matthews, D. J., & Keating, D. P. (1995). Domain specificity and habits of mind. Journal of Early Adolescence, 15, 319-343.

McGeorge, P., Crawford, J., & Kelly, S. (1997). The relationships between psychometric intelligence and learning in an explicit and an implicit task. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 239-245.

Meltzoff, A. N. (1996). The human infant as imitative generalist: A 20-year progress report on infant imitation with implications for comparative psychology. In C. M. Heyes & B. G. Galef (Eds.), Social learning in animals: The roots of culture (pp. 347-370). San Diego: Academic Press.

Mithen, S. (1996). The prehistory of the mind. London: Thames and Hudson.

Moore, B. R. (1996). Evolution of imitative learning. In C. M. Heyes & B. G. Galef (Eds.), Social learning in animals: The roots of culture (pp. 245-265). San Diego: Academic Press.

Mulholland, T., Pellegrino, J. & Glaser, R. (1980). Components of geometric analogy solution. Cognitive Psychology, 12, 252-284.

Newport, E. L. (1991). Constraining concepts of the critical period for language. In S. Carey & R. Gelman (Eds.), The epigenesis of mind: Essays on biology and cognition (pp. 111-130). Hillsdale, NJ: Erlbaum.

Oaksford, M., & Chater, N. (1996). Rational explanation of the selection task. Psychological Review, 103, 381-391.

Oden, D., Thompson, R. & Premack, D. (2001). Can an ape reason analogically? Comprehension and production of analogical problems by Sarah, a Chimpanzee (Pan troglodytes). In D. Gentner, K. Holyoak, & B. Kokinov (Eds.), The analogical mind: Perspectives from cognitive science (pp. 471-497). Cambridge, MA: MIT Press.

Öhman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108, 483-522.

Palmer, J. & Palmer, L. (2002). Evolutionary psychology. Boston, MA: Allyn and Bacon.

Piaget, J. (1972). Intellectual evolution from adolescence to adulthood. Human Development, 15, 1-12.

Pinker, S. (1994). The language instinct. New York, NY: William Morrow.

Pinker, S. (1997). How the mind works. New York, NY: W. W. Norton.

Potts, R. (1998). Variability selection in Hominid evolution. Evolutionary Anthropology, 7, 81-96.

Poucet, B. (1990). A further characterization of the special problem-solving deficit induced by lesions of the medial frontal cortex in the rat. Behavior and Brain Research, 41, 229-237.

Rescorla, R. A. (1980). Pavlovian second-order conditioning: Studies in associative learning. Hillsdale, NJ: Erlbaum.

Rescorla, R. A. (1988). Pavlovian conditioning: It's not what you think it is. American Psychologist, 43, 151-160.

Revulsky, S. (1985). The general process approach to animal learning. In T. D. Johnston & A. T. Pietrewicz (Eds.), Issues in the ecological study of learning (pp. 401-432). Hillsdale, NJ: Erlbaum.

Richerson, P. & Boyd, R. (2000). Climate, culture, and the evolution of cognition. In C. Heyes and L. Huber (Eds.), The evolution of cognition (pp. 329-346). Cambridge, MA: MIT Press.

Rozin, P. & Schull, J. (1988). The adaptive-evolutionary point of view in experimental psychology. In R. C. Atkinson & A. N. Epstein (Eds.), Progress in psychobiology and physiological psychology (pp. 245-277). New York, NY: Academic Press.

Seligman, M. E. P. (1971). Phobias and preparedness. Behavior Therapy, 2, 307-320.

Shanks, D. R. (1994). Human associative learning. In N. J. Mackintosh (Ed.), Animal Learning and Cognition (pp. 335-374). San Diego: Academic Press.

Shepard, R. N. (1994). Perceptual-cognitive universals as reflections of the world. Psychonomic Bulletin & Review, 1, 2-28.

Shettleworth, S. (1994). Biological approaches to the study of learning. In N. J. Mackintosh (Ed.), Animal Learning and Cognition (pp. 185-219). San Diego: Academic Press.

Shettleworth, S. (2000). Modularity and the evolution of cognition. In C. Heyes and L. Huber (Eds.), The evolution of cognition (pp. 43-60). Cambridge: MIT Press.

Singh, D. (1993). Adaptive significance of female attractiveness: Role of waist-to-hip ratio. Journal of Personality and Social Psychology, 65, 293-307.

Skinner, B. F. (1981). Selection by consequences. Science, 213, 501-504.

Spearman, C. (1927). The abilities of man. New York, NY: Macmillan.

Spelke, E., & Newport, E. (1998). Nativism, Empiricism and the Development of Knowledge. In R. M. Lerner (Vol. Ed.), Theoretical models of human development, Vol 1 (pp. 275-340). W. Damon (Gen. Ed.), Handbook of child psychology (5th Ed.). New York: Wiley.

Spellman, B. & Holyoak, K. (1996). Pragmatics in analogical mapping. Cognitive Psychology, 31, 307-346.

Sperber, D. (1994). The modularity of thought and the epidemiology of representations. In L. A. Hirschfeld & S. Gelman (Eds.), Mapping the mind: Domain-specificity in cognition and culture (pp. 39-67). Cambridge, England: Cambridge University Press.

Staddon, J. E. R. (1988). Learning as inference. In R. C. Bolles & M. D. Beecher (Eds.), Learning and evolution, (pp. 59-78). Hillsdale, NJ: Erlbaum.

Stanovich, K. E., & West, R. F. (1998). Individual differences in rational thought. Journal of Experimental Psychology: General, 127, 161-188.

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate. Behavioral and Brain Sciences, 23, 645-665.

Sternberg, R. (1977). Intelligence, information processing, and analogical reasoning: A componential analysis of human abilities. Hillsdale: Erlbaum.

Stevenson, H., Parker, T., Wilkinson, A., Hegion, A. & Fish, E. (1976). Journal of Educational Psychology, 68, 377-400.

Terkel, J. (1996). Cultural transmission of feeding behavior in the Black Rat (Rattus rattus). In C. M. Heyes & B. G. Galef (Eds.), Social learning in animals: The roots of culture (pp. 17-47). San Diego: Academic Press.

Thompson, R., Crinella, F., & Yu, J. (1990). Brain mechanisms in problem solving and intelligence: A lesion survey of the rat brain. New York: Plenum Press.

Tohill, J. & Holyoak, K. (2000). The impact of anxiety on analogical reasoning. Thinking and Reasoning, 6, 27-40.

Tomarken, A. J., Mineka, S. & Cook, M. (1989). Fear-relevant selective associations and covariation bias. Journal of Abnormal Psychology, 98, 381-394.

Tomasello, M. (1996). Do apes ape? In C. Heyes & B. Galef (Eds.), Social learning in animals: The roots of culture (pp. 319-346). San Diego: Academic Press.

Tooby, J. & Cosmides, L. (1989). Evolutionary psychology and the generation of culture, Part I. Theoretical considerations. Ethology and Sociobiology, 10, 29-49.

Tooby, J. & Cosmides, L. (1992). The psychological foundations of culture. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture (pp. 19-136). New York: Cambridge University Press.

Turner, M. L. & Engle, R. W. (1989). Is working memory capacity task dependent? Journal of Memory and Language, 28, 127-154.

Waltz, J., Knowlton, B., Holyoak, K., Boone, K., Mishkin, F., de Menezes Santos, M., Thomas, C., & Miller, B. (1999). A system for relational reasoning in human prefrontal cortex. Psychological Science, 10, 119-125.

Waltz, J., Lau, A., Grewal, S. & Holyoak, K. (2000). The role of working memory in analogical mapping. Memory & Cognition, 28, 1205-1212.

Whiten, A., & Ham, R. (1992). On the nature and evolution of imitation in the animal kingdom: Reappraisal of a century of research. Advances in the Study of Behavior, 21, 239-283.

Zentall, T. R. (1996). An analysis of imitative learning in animals. In C. M. Heyes & B. G. Galef (Eds.), Social learning in animals: The roots of culture (pp. 221-243). San Diego: Academic Press.

Footnotes

1The authors are listed in alphabetical order.

2An example of if-only thinking: Who is more foolish, the man who sat tight while the stock he had wanted to invest in went up, or the man who sold another stock in order to buy the losing one? (See Stanovich & West, 1998.)

3In covariation bias studies, subjects judge the extent to which fear-relevant stimuli covary with aversive outcomes. The typical finding is that people overestimate the extent to which evolutionarily significant stimuli are associated with aversive outcomes (e.g., Tomarken, Mineka, & Cook, 1989).