
Cognitive Science and Behavior


Perception

From a variant of Old French perçoivre, from Latin percipere ‘seize, understand,’ from per- ‘entirely’ + capere ‘take.’

For cognitive science the direction of action is from the organism to the environment. The perceiver acts upon the world, perceives it in the sense of taking it in. This was an important point for the Greek philosophers: to know a thing meant to be in contact with it. The world had to be taken in and possessed in order to be known.

Processing information must have a product, and for cognitive science that product is, as it was for the Greek philosophers, a representation. We do not see the world; we see “copies” of it. For a behavioral account, what matters is not what an organism sees but how stimuli alter the probability of its behavior. Stimuli acquire the power to do so from the part they play in contingencies of phylogenic and ontogenic selection. For the cognitive account, on the other hand, the perceiver must retrieve a stored history and in some way fuse it with a current representation.

As Sutherland (1978) has put it, 'everyday perception involves assessing and bringing to bear vast stores of knowledge'. In order to 'make sense of the fragmentary pattern of light reaching the eye' one must consider many possibilities, make inferences, and formulate and test elaborate hypotheses. A behavioral translation appeals only to the history of reinforcement responsible for the current effect of a presentation: everyday perception is the product of a vast number of experiences in which fragmentary patterns of light, resembling those of the moment in many different ways, have been present when behavior has had reinforcing consequences.

Cognitive psychologists contend that the retrieval of stored knowledge affects not only what is seen but how readily it is seen. Thus, familiar words are seen more readily than rare ones, 'expected' words more readily than unexpected ones, and decorous words more readily than obscene ones. A behavioral account explains the same facts as the effects of past consequences, positive or negative. Broadbent says, ‘nobody disputes the fact that the perception of a word would depend very much upon its probability of occurrence’ (1965). Nevertheless, there is still a problem for the cognitive psychologist. Broadbent asks, ‘Why is it easier to perceive a word which is probable than a word which is improbable?’

The inner man

Once a representation has been constructed, what is done with it?

A molecular biologist, Stent (1975), traces the processing of visual data, beginning with the roughly 100 million receptor cells in the retina and the roughly one million ganglion cells which, according to Stent, process the information coming from them by signaling light-dark contrasts and edge effects. The fibers of the ganglion cells connect the eye with the brain, where the ‘signals converge on a set of cortical nerve cells’ (1975) which, among other things, process information from fields too big for the ganglion cells. As Stent notes, ‘[The] visual system of the frog abstracts its input data in such a way as to produce only two meaningful structures, “my prey” and “my predator”, which in turn evoke either of two alternative motor outputs, attack or flight’. Why are ‘meaningful structures’ needed? Why not simply ‘two alternative motor outputs’? Stent concedes: ‘It is not clear at present how far this process of cerebral abstraction by convergence of visual channels can be imagined to go. Should one suppose that the cellular abstraction process goes so far that there exists for every meaningful structure of whose specific recognition a person is capable (for example, “my grandmother”) at least one particular nerve cell in the brain that responds if and only if the light and dark pattern from which that structure is abstracted appears in its visual space?’ (1975).

The primrose path of visual perception leads the cognitive scientist to a representation and the neurologist to a grandmother cell. But if what happens when one sees one’s grandmother is the result of what has happened when one has seen her and persons like her many times in the past, changes must have taken place in most of the brain. A single point in the brain is needed only to represent a putative ‘meaningful structure’.

Stent concludes in a traditional way: ‘No matter how deeply we probe into the visual pathway, in the end we need to posit an “inner man” who transforms the visual image into a percept’ (1975). ‘The end-product of the analysis of a visual scene is subjectively familiar. We recognize the objects present in the environment and their spatial relations to each other and are able to compute distances in a way that permits us to navigate’ (Estes et al., 1983). Some cognitive touches survive: apparently organisms cannot move about in the world without first ‘computing distances’, and things they have seen before are only subjectively familiar, but ‘we’ presumably means people, not little people in the brain, and ‘objects present in the environment’ are presumably presentations, not representations. That is good behaviorism.

It is obviously not the whole organism that is said to view a representation of reality; it is some lesser part inside. Cognitive science is forced into saying so when it takes the processing of information as a model. In a behavioral account the whole organism responds, and it responds to the world around it―for reasons which neurology, not cognitive science, will eventually discover.

Rules

Constructing representations of sensory data is only one kind of information processing. Another kind plays a far more important role in cognitive science. Consider a familiar experiment. In an operant chamber a hungry rat occasionally presses a lever extending from one wall. When, for the first time, a bit of food is delivered after a single press, the rat eats and then presses the lever more rapidly. That is a simple fact, but cognitive psychologists want to say more about it. They want to say that ‘the rat has learned, and knows, that pressing the lever brings food’. ‘Pressing the lever brings food’ is a description of the contingencies of reinforcement. In some form or other, that description is said to pass into the head of the rat as knowledge, a synonym of cognition. Processing has led not to a representation but to a rule.
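The behavioral reading of that simple fact can be made concrete with a toy simulation (a minimal sketch; the parameter values and the linear update rule are illustrative assumptions, not a model taken from the experimental literature): reinforcement merely raises the probability of the response.

```python
import random

rng = random.Random(1)
p_press = 0.05   # assumed initial probability of a press in a short interval
step = 0.2       # assumed increment produced by one reinforced press

for interval in range(50):
    pressed = rng.random() < p_press   # the rat happens to press the lever
    if pressed:                        # the contingency: every press produces food
        # reinforcement raises the probability of further pressing; no sentence like
        # "pressing the lever brings food" is stored anywhere in this account
        p_press += step * (1.0 - p_press)

print("probability of a press after 50 intervals:", round(p_press, 2))
```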

There is no evidence that anything of the sort happens in a rat, but what about a person? Given the necessary exposure to a verbal community, a person could certainly say, ‘When I pressed the lever, food appeared’ or, more generally, ‘When the lever is pressed, food appears’. People come to ‘know’ the contingencies in the world around them in that sense. The behavior is verbal and must be analyzed as such.

Verbal behavior evidently came into existence when, through a critical step in the evolution of the human species, the vocal musculature became susceptible to operant conditioning. Processes shared with other species were already well established, and they can be used to explain the other properties of verbal behavior (Skinner, 1957). One result is particularly important here. A very large part of the social environment we call a culture consists of descriptions of contingencies of reinforcement in the form of advice, maxims, instructions, rules of conduct, the laws of governments and religions, and the laws of science. With their help members of a group transmit what they have learned to new members, who then behave for either of two reasons: their behavior is either directly shaped and maintained by contingencies of reinforcement or controlled by descriptions of such contingencies. When we learn to drive a car, for example, we begin with responses to verbal stimuli. Our behavior is rule-governed. We flip switches, push pedals, and turn the wheel as we are told to do. But consequences follow, and they begin to shape and maintain our behavior. When we have become skillful drivers, the rules no longer function.

When contingencies are not adequate, however, we return to the rules. Most of the time we speak grammatically, for example, because of the prevailing contingencies in our verbal community, but when the contingencies are not sufficient, we turn to the rules of grammar. (An unnecessary return to rules may be troublesome. Mrs E. Craster (d. 1874) suggested that when the toad asked the centipede, ‘Pray, which leg goes after which?’ the centipede ‘worked her mind to such a pitch/She lay distracted in the ditch/Considering how to run’).

Cognitive scientists argue that the rules are in the contingencies, and hence that a person can learn them either way: from direct exposure to the contingencies or from descriptions of them. They have therefore felt free to take the easier path and study rule-governed behavior. Settings are often merely described rather than presented; subjects are asked to imagine themselves in given situations and to say what they would do rather than do it. Contingent consequences are suggested rather than imposed: ‘Assume that if you do A, B will follow’. Yet descriptions of settings are never quite accurate or complete, what people say they will do is not always what they actually do, and a description of contingencies (for example, a statement of the odds in a wager) seldom has the same effect as exposure to the contingencies (in repeated betting with those odds).

In an experiment on reaction time, for example, the subjects were said to have ‘understood that their task was to depress the response button as quickly as possible following the onset of the signal lights’ (Estes, 1972). How accurately does ‘understanding’ describe the effect of a history of reinforcement, and how well are contingencies put into effect by saying ‘respond as quickly as possible’? Many years ago Skinner arranged contingencies under which a pigeon pecked a key ‘as quickly as possible’ and found times in the human range (Skinner, 1950). Skinner does not believe one can learn how to do so from the cognitive literature. The response whose reaction time is measured is quite different from pulling the hand away from a painful stimulus, for example, and to understand the difference (and give neurologists their proper assignment) the contingencies must be specified. Paying subjects according to their reaction times is a move in the right direction, but only a short one. Perhaps cognitive psychologists do not care why their subjects respond as quickly as possible; a measure of the speed of a cognitive process may be enough. But in the world at large people behave as quickly as possible for reasons which, for many purposes, need to be understood.

Storage

How organisms are changed by contingencies of reinforcement is the field of a behavioral analysis. What is happening inside is a question to be answered by neurology, with its appropriate instruments and methods. Cognitive scientists cannot leave it to neurology, because processing information is part of the story they want to tell. As the Panel Report puts it, ‘[a] prominent component of artificial intelligence systems is the memory organizations that hold the knowledge and permit finding the right bit of knowledge at the right time’ (Estes et al., 1983).

Surrogates

Cognitive scientists presumably appeal to the storage and retrieval of representations and rules because they can then explain behavior by pointing to conditions present at the time the behavior occurs. The rat was changed when pressing the lever was reinforced, but it presses now ‘because it knows that pressing brings food’. The knowledge is a current surrogate of the history of reinforcement. A classic example of a current surrogate is purpose. We say that we have eyes in order to see, but biologists have long since learned to say only that eyes have evolved because variations which enabled organisms to see better were selected. A similar mistake is made when cognitive psychologists call operant behavior purposive or goal-directed. Features suggesting direction toward a goal are the products of consequences experienced in the past.

Cognitive psychologists need current surrogates because they pay little attention to selection as a causal principle. A new discipline, cognitive ethology, makes that mistake. Animals do many complex things: they use tools, lay traps, and entice their victims. How can such behavior be explained? Ethologists answer by pointing to contingencies of survival in natural selection. Cognitive ethologists must do so, too, to explain features like gross anatomy (for which mental processes could scarcely be responsible) but they contend that not all complex behavior can have evolved. Instead, what evolved are said to be mechanisms which enabled animals to have expectations and solve their problems by processing information. Behavior of comparable complexity can, however, be shaped by arranging sequential contingencies of reinforcement, and it is then easier to understand that contingencies of natural selection could have sufficed in phylogenic instances.

Rationality

Expectation as a current surrogate of a history of reinforcement figures in recent studies of rationality. Simon calls the elucidation of the concept of rationality ‘[one] of the crowning achievements of the social sciences in the past two or three generations’ (1980, p.75). ‘The core of the formal treatment of rationality’, he writes, ‘is the so-called subjective expected utility (SEU) theory’. According to Simon, ‘human behavior is manifestly directed towards goals and the satisfaction of wants and needs’. (In behavioral terms: people behave in ways which have had reinforcing consequences, and the susceptibilities which make those consequences reinforcing arose through natural selection or operant conditioning. ‘Wants’ and ‘needs’ are current surrogates of a history of deprivation). Simon continues: ‘the rational actor is assumed to have a consistent preference ordering of all possible states of the world’ (1980). (In behavioral terms: reinforcers can be arranged according to their power to reinforce). The axioms of SEU theory imply that ‘the actor maximizes his or her expected utility in the light of subjective estimates of the probabilities of events’ (Simon, 1980). (In behavioral terms: people act according to rules which describe or imply contingencies of reinforcement). Missing from the cognitive account is any reference to the effect of the contingencies themselves, quite apart from the rules derived from them.
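In symbols, SEU theory can be written as follows (a standard textbook statement, not a formula quoted from Simon):

```latex
\mathrm{SEU}(a) \;=\; \sum_{i} p(s_i)\, u\bigl(x(a, s_i)\bigr)
```

where p(s_i) is the actor's subjective probability of the state of the world s_i, x(a, s_i) is the outcome of action a in that state, and u is its utility; the ‘rational actor’ is the one who chooses the action with the largest sum.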

In a fairly typical experiment, two actions having probabilistic outcomes (for example, purchasing two kinds of lottery tickets) are described, and the subject is asked to choose between them. (As in much cognitive research, choice is used because a more direct measure of probability of action is lacking). The choice is then compared with what would be called rational in the light of the actual contingencies. But what ticket would subjects actually buy—either after the probabilities were first described to them (as rule-governed action) or after they had purchased many tickets and consequences had followed? To the extent that economic theory is concerned with what people say they will do, SEU theory may be adequate, but a behavioral scientist (and, one would suppose, an entrepreneur) must be concerned with what they actually do. People most often do those things which have been most abundantly reinforced, without making subjective estimates of the probabilities of reinforcement; and the reinforcers are real, not expected.
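The contrast between the two routes to a choice can be sketched in a few lines (the ticket payoffs, probabilities, and the strength-adjusting update below are illustrative assumptions, not data or a model from the experiments under discussion): one function computes a choice from the description alone, the other lets repeated consequences shape it.

```python
import random

# Two described lotteries (illustrative values, not taken from any cited study):
# ticket A pays 10 with probability 0.5; ticket B pays 4 with probability 0.9.
lotteries = {"A": (0.5, 10.0), "B": (0.9, 4.0)}

def seu_choice(lotteries):
    """Rule-governed choice: compute expected utility directly from the description."""
    return max(lotteries, key=lambda k: lotteries[k][0] * lotteries[k][1])

def shaped_choice(lotteries, trials=2000, step=0.05, seed=0):
    """Contingency-shaped choice: response strengths are adjusted only by experienced
    wins and losses (a simple linear-operator update); the probabilities themselves
    are never estimated or stored."""
    rng = random.Random(seed)
    strength = {k: 1.0 for k in lotteries}  # initial response strengths
    for _ in range(trials):
        # respond in proportion to current strengths
        total = sum(strength.values())
        r, cum, choice = rng.random() * total, 0.0, None
        for k, s in strength.items():
            cum += s
            if r <= cum:
                choice = k
                break
        p_win, payoff = lotteries[choice]
        outcome = payoff if rng.random() < p_win else 0.0
        # the consequence strengthens or weakens the response that produced it
        strength[choice] += step * (outcome - strength[choice])
    return max(strength, key=strength.get)

print("choice computed from the description:", seu_choice(lotteries))
print("choice after repeated exposure:", shaped_choice(lotteries))
```

With values as transparent as these the two routes happen to agree; the next paragraph takes up what is reported when they do not.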

Since contingencies are usually more effective than rules derived from them, we should not be surprised that, as Simon reports, ‘A fair summary of the findings of these experiments is that actual human choices depart radically from those implied by the axioms except in the simplest and most transparent of situations’ (1980, p.75). This does not mean that people are ‘irrational’ in the sense that contingencies of reinforcement are not effective. It means that describing contingencies and acting upon the descriptions can obscure the effect of the contingencies themselves.

Feelings

According to cognitive psychologists, the rat not only learns and hence knows that pressing the lever brings food; it not only expects food to appear when it presses the lever; it feels like pressing the lever. A human subject would say as much. In a behavioral account what one feels is various states of one’s body, and what one feels when one is behaving or likely to behave is therefore a collateral product of the causes of the behavior. It should not be mistaken for a cause.

That position is sometimes challenged by asserting that one feeling or state of mind can cause another feeling or state of mind. Fodor (1981) contends that ‘mental causes typically give rise to behavioral effects by virtue of their interaction with other mental causes. For example having a headache causes a disposition to take aspirin only if one also has the desire to get rid of the headache, the belief that aspirin exists, the belief that taking aspirin reduces headaches and so on. Since mental states interact in generating behavior, it will be necessary to find a construal of psychological explanations that posits mental processes: causal sequences of mental events. It is this construal that logical behaviorism fails to provide’ (Fodor, 1981).

But what is felt in each case can be construed as either a state of the behaving body or an external controlling variable, and it is they that interact. Consider a simpler example. When one pulls one’s hand away from a hot object, must one have a disposition to do so, a desire not to be burned, a belief that the hot object exists, and a belief that pulling the hand away will terminate the stimulus? Contingencies of survival in natural selection offer a much simpler explanation of that ‘flexion reflex’. Contingencies of reinforcement offer a similar explanation of the fact that one turns the steering wheel of a car to avoid a head-on collision. One need not have a belief in the existence of the oncoming car, a desire to avoid collisions, and a belief that turning the wheel will avoid one. Turning the wheel occurs because it has had fairly similar consequences in the past.

Neglected contingencies of reinforcement can be subtle. Kahneman & Tversky (1984) have reported that people say they would be less likely to buy a second ticket to the theatre if a first had been lost than to buy a ticket after losing the money they had set aside for that purpose. The difference is said to be due to a difference in categorization. A difference in the relevant contingencies of reinforcement should not be overlooked. A boy who usually washes his hands before sitting down to dinner quite justly protests when told to wash them if he has already done so: ‘I did wash them!’ We pay a bus fare and protest at a subsequent request to do so again. Buying a ticket to the theatre is in that ‘category’ of contingencies. Contingencies involving loss are different. We spoil one sheet of paper and take another. And, quite as readily, we use other money to buy a ticket to the theatre.
