# Classical and Operant Conditioning – General Psychology (2023)

### Learning outcomes

By the end of this section, you will be able to:

• Explain how classical conditioning occurs
• Summarize the processes of acquisition, extinction, spontaneous recovery, generalization, and discrimination
• Define operant conditioning
• Explain the difference between reinforcement and punishment
• Distinguish between reinforcement schedules

## Classical Conditioning

Does the name Ivan Pavlov ring a bell? Even if you are new to the study of psychology, chances are that you have heard of Pavlov and his famous dogs.

Pavlov (1849–1936), a Russian scientist, performed extensive research on dogs and is best known for his experiments in classical conditioning (Figure). As we discussed briefly in the previous section, classical conditioning is a process by which we learn to associate stimuli and, consequently, to anticipate events.

Pavlov came to his conclusions about how learning occurs completely by accident. Pavlov was a physiologist, not a psychologist. Physiologists study the life processes of organisms, from the molecular level to the level of cells, organ systems, and entire organisms. Pavlov’s area of interest was the digestive system (Hunt, 2007). In his studies with dogs, Pavlov surgically implanted tubes inside dogs’ cheeks to collect saliva. He then measured the amount of saliva produced in response to various foods. Over time, Pavlov (1927) observed that the dogs began to salivate not only at the taste of food, but also at the sight of food, at the sight of an empty food bowl, and even at the sound of the laboratory assistants’ footsteps. Salivating to food in the mouth is reflexive, so no learning is involved. However, dogs don’t naturally salivate at the sight of an empty bowl or the sound of footsteps.

These unusual responses intrigued Pavlov, and he wondered what accounted for what he called the dogs’ “psychic secretions” (Pavlov, 1927). To explore this phenomenon in an objective manner, Pavlov designed a series of carefully controlled experiments to see which stimuli would cause the dogs to salivate. He was able to train the dogs to salivate in response to stimuli that clearly had nothing to do with food, such as the sound of a bell, a light, and a touch on the leg. Through his experiments, Pavlov realized that an organism has two types of responses to its environment: (1) unconditioned (unlearned) responses, or reflexes, and (2) conditioned (learned) responses.

In Pavlov’s experiments, the dogs salivated each time meat powder was presented to them. The meat powder in this situation was an unconditioned stimulus (UCS): a stimulus that elicits a reflexive response in an organism. The dogs’ salivation was an unconditioned response (UCR): a natural (unlearned) reaction to a given stimulus. Before conditioning, think of the dogs’ stimulus and response like this:

Meat powder (UCS) → Salivation (UCR)

In classical conditioning, a neutral stimulus is presented immediately before an unconditioned stimulus. Pavlov would sound a tone (like ringing a bell) and then give the dogs the meat powder (Figure). The tone was the neutral stimulus (NS), which is a stimulus that does not naturally elicit a response. Prior to conditioning, the dogs did not salivate when they just heard the tone because the tone had no association for the dogs. Quite simply, this pairing means:

Tone (NS) + Meat powder (UCS) → Salivation (UCR)

When Pavlov paired the tone with the meat powder over and over again, the previously neutral stimulus (the tone) also began to elicit salivation from the dogs. Thus, the neutral stimulus became the conditioned stimulus (CS), which is a stimulus that elicits a response after repeatedly being paired with an unconditioned stimulus. Eventually, the dogs began to salivate to the tone alone, just as they previously had salivated at the sound of the assistants’ footsteps. The behavior caused by the conditioned stimulus is called the conditioned response (CR). In the case of Pavlov’s dogs, they had learned to associate the tone (CS) with being fed, and they began to salivate (CR) in anticipation of food.

Tone (CS) → Salivation (CR)

Now that you have learned about the process of classical conditioning, do you think you can condition Pavlov’s dog? Visit this website to play the game.

### REAL WORLD APPLICATION OF CLASSICAL CONDITIONING

How does classical conditioning work in the real world? Let’s say you have a cat named Tiger, who is quite spoiled. You keep her food in a separate cabinet, and you also have a special electric can opener that you use only to open cans of cat food. For every meal, Tiger hears the distinctive sound of the electric can opener (“zzhzhz”) and then gets her food. Tiger quickly learns that when she hears “zzhzhz” she is about to get fed. What do you think Tiger does when she hears the electric can opener? She will likely get excited and run to where you are preparing her food. This is an example of classical conditioning. In this case, what are the UCS, CS, UCR, and CR?

What if the cabinet holding Tiger’s food becomes squeaky? In that case, Tiger hears “squeak” (the cabinet), “zzhzhz” (the electric can opener), and then she gets her food. Tiger will learn to get excited when she hears the “squeak” of the cabinet. Pairing a new neutral stimulus (“squeak”) with the conditioned stimulus (“zzhzhz”) is called higher-order conditioning, or second-order conditioning. This means you are using the conditioned stimulus of the can opener to condition another stimulus: the squeaky cabinet (Figure). It is hard to achieve anything above second-order conditioning. For example, if you ring a bell, open the cabinet (“squeak”), use the can opener (“zzhzhz”), and then feed Tiger, Tiger will likely never get excited when hearing the bell alone.

### CLASSICAL CONDITIONING AT STINGRAY CITY

Kate and her husband Scott recently vacationed in the Cayman Islands, and booked a boat tour to Stingray City, where they could feed and swim with the southern stingrays. The boat captain explained how the normally solitary stingrays have become accustomed to interacting with humans. About 40 years ago, fishermen began to clean fish and conch (unconditioned stimulus) at a particular sandbar near a barrier reef, and large numbers of stingrays would swim in to eat (unconditioned response) what the fishermen threw into the water; this continued for years. By the late 1980s, word of the large group of stingrays spread among scuba divers, who then started feeding them by hand. Over time, the southern stingrays in the area were classically conditioned much like Pavlov’s dogs. When they hear the sound of a boat engine (neutral stimulus that becomes a conditioned stimulus), they know that they will get to eat (conditioned response).

As soon as Kate and Scott reached Stingray City, over two dozen stingrays surrounded their tour boat. The couple slipped into the water with bags of squid, the stingrays’ favorite treat. The swarm of stingrays bumped and rubbed up against their legs like hungry cats (Figure). Kate and Scott were able to feed, pet, and even kiss (for luck) these amazing creatures. Then all the squid was gone, and so were the stingrays.

Classical conditioning also applies to humans, even babies. For example, Sara buys formula in blue canisters for her six-month-old daughter, Angelina. Whenever Sara takes out a formula container, Angelina gets excited, tries to reach toward the food, and most likely salivates. Why does Angelina get excited when she sees the formula canister? What are the UCS, CS, UCR, and CR here?

So far, all of the examples have involved food, but classical conditioning extends beyond the basic need to be fed. Consider our earlier example of a dog whose owners install an invisible electric dog fence. A small electrical shock (unconditioned stimulus) elicits discomfort (unconditioned response). When the unconditioned stimulus (shock) is paired with a neutral stimulus (the edge of a yard), the dog associates the discomfort (unconditioned response) with the edge of the yard (conditioned stimulus) and stays within the set boundaries.

For a humorous look at conditioning, watch this video clip from the television show The Office, where Jim conditions Dwight to expect a breath mint every time Jim’s computer makes a specific sound.

### GENERAL PROCESSES IN CLASSICAL CONDITIONING

Now that you know how classical conditioning works and have seen several examples, let’s take a look at some of the general processes involved. In classical conditioning, the initial period of learning is known as acquisition, when an organism learns to connect a neutral stimulus and an unconditioned stimulus. During acquisition, the neutral stimulus begins to elicit the conditioned response, and eventually the neutral stimulus becomes a conditioned stimulus capable of eliciting the conditioned response by itself. Timing is important for conditioning to occur. Typically, there should only be a brief interval between presentation of the conditioned stimulus and the unconditioned stimulus. Depending on what is being conditioned, sometimes this interval is as little as five seconds (Chance, 2009). However, with other types of conditioning, the interval can be up to several hours.

Taste aversion is a type of conditioning in which an interval of several hours may pass between the conditioned stimulus (something ingested) and the unconditioned stimulus (nausea or illness). Here’s how it works. Between classes, you and a friend grab a quick lunch from a food cart on campus. You share a dish of chicken curry and head off to your next class. A few hours later, you feel nauseous and become ill. Although your friend is fine and you determine that you have intestinal flu (the food is not the culprit), you’ve developed a taste aversion; the next time you are at a restaurant and someone orders curry, you immediately feel ill. While the chicken dish is not what made you sick, you are experiencing taste aversion: you’ve been conditioned to be averse to a food after a single, negative experience.

How does this occur—conditioning based on a single instance and involving an extended time lapse between the event and the negative stimulus? Research into taste aversion suggests that this response may be an evolutionary adaptation designed to help organisms quickly learn to avoid harmful foods (Garcia & Rusiniak, 1980; Garcia & Koelling, 1966). Not only may this contribute to species survival via natural selection, but it may also help us develop strategies for challenges such as helping cancer patients through the nausea induced by certain treatments (Holmes, 1993; Jacobsen et al., 1993; Hutton, Baracos, & Wismer, 2007; Skolin et al., 2006).

Once we have established the connection between the unconditioned stimulus and the conditioned stimulus, how do we break that connection and get the dog, cat, or child to stop responding? In Tiger’s case, imagine what would happen if you stopped using the electric can opener for her food and began to use it only for human food. Now, Tiger would hear the can opener, but she would not get food. In classical conditioning terms, you would be giving the conditioned stimulus, but not the unconditioned stimulus. Pavlov explored this scenario in his experiments with dogs: sounding the tone without giving the dogs the meat powder. Soon the dogs stopped responding to the tone. Extinction is the decrease in the conditioned response when the unconditioned stimulus is no longer presented with the conditioned stimulus. When presented with the conditioned stimulus alone, the dog, cat, or other organism would show a weaker and weaker response, and finally no response. In classical conditioning terms, there is a gradual weakening and disappearance of the conditioned response.


What happens when learning is not used for a while—when what was learned lies dormant? As we just discussed, Pavlov found that when he repeatedly presented the bell (conditioned stimulus) without the meat powder (unconditioned stimulus), extinction occurred; the dogs stopped salivating to the bell. However, after a couple of hours of resting from this extinction training, the dogs again began to salivate when Pavlov rang the bell. What do you think would happen with Tiger’s behavior if your electric can opener broke, and you did not use it for several months? When you finally got it fixed and started using it to open Tiger’s food again, Tiger would remember the association between the can opener and her food—she would get excited and run to the kitchen when she heard the sound. The behavior of Pavlov’s dogs and Tiger illustrates a concept Pavlov called spontaneous recovery: the return of a previously extinguished conditioned response following a rest period (Figure).

Of course, these processes also apply in humans. For example, let’s say that every day when you walk to campus, an ice cream truck passes your route. Day after day, you hear the truck’s music (neutral stimulus), so you finally stop and purchase a chocolate ice cream bar. You take a bite (unconditioned stimulus) and then your mouth waters (unconditioned response). This initial period of learning is known as acquisition, when you begin to connect the neutral stimulus (the sound of the truck) and the unconditioned stimulus (the taste of the chocolate ice cream in your mouth). During acquisition, the conditioned response gets stronger and stronger through repeated pairings of the conditioned stimulus and unconditioned stimulus. Several days (and ice cream bars) later, you notice that your mouth begins to water (conditioned response) as soon as you hear the truck’s musical jingle—even before you bite into the ice cream bar. Then one day you head down the street. You hear the truck’s music (conditioned stimulus), and your mouth waters (conditioned response). However, when you get to the truck, you discover that they are all out of ice cream. You leave disappointed. The next few days you pass by the truck and hear the music, but don’t stop to get an ice cream bar because you’re running late for class. You begin to salivate less and less when you hear the music, until by the end of the week, your mouth no longer waters when you hear the tune. This illustrates extinction. The conditioned response weakens when only the conditioned stimulus (the sound of the truck) is presented, without being followed by the unconditioned stimulus (chocolate ice cream in the mouth). Then the weekend comes. You don’t have to go to class, so you don’t pass the truck. Monday morning arrives and you take your usual route to campus. You round the corner and hear the truck again. What do you think happens? Your mouth begins to water again. Why? 
After a break from conditioning, the conditioned response reappears, which indicates spontaneous recovery.
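The acquisition and extinction curves described above can be sketched numerically. The following toy simulation is a hypothetical illustration, not part of the text: it borrows a simple Rescorla–Wagner-style update rule, in which associative strength moves toward an asymptote on trials where the CS is paired with the UCS, and back toward zero on CS-alone (extinction) trials. The function name and parameter values are invented for the sketch.

```python
# Toy model of acquisition and extinction of a conditioned response.
# Assumption (not from the text): a Rescorla-Wagner-style update,
#   V <- V + alpha * (lambda - V),
# where lambda = 1.0 on paired (CS + UCS) trials and 0.0 on CS-alone trials.

def run_trials(v, n_trials, lam, alpha=0.3):
    """Update associative strength v over n_trials toward asymptote lam."""
    history = []
    for _ in range(n_trials):
        v = v + alpha * (lam - v)  # learning step on each trial
        history.append(v)
    return v, history

v = 0.0                                  # no association before conditioning
v, acq = run_trials(v, 10, lam=1.0)      # acquisition: tone paired with food
v, ext = run_trials(v, 10, lam=0.0)      # extinction: tone presented alone

print(f"after acquisition: {acq[-1]:.2f}")  # strong conditioned response
print(f"after extinction:  {ext[-1]:.2f}")  # response has nearly vanished
```

The curves this produces match the chapter's description: responding strengthens rapidly during acquisition and gradually fades when the tone is no longer followed by food. (Spontaneous recovery would require an additional mechanism beyond this minimal sketch.)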

Acquisition and extinction involve the strengthening and weakening, respectively, of a learned association. Two other learning processes—stimulus discrimination and stimulus generalization—are involved in distinguishing which stimuli will trigger the learned association. Animals (including humans) need to distinguish between stimuli—for example, between sounds that predict a threatening event and sounds that do not—so that they can respond appropriately (such as running away if the sound is threatening). When an organism learns to respond differently to various stimuli that are similar, it is called stimulus discrimination. In classical conditioning terms, the organism demonstrates the conditioned response only to the conditioned stimulus. Pavlov’s dogs discriminated between the basic tone that sounded before they were fed and other tones (e.g., the doorbell), because the other sounds did not predict the arrival of food. Similarly, Tiger, the cat, discriminated between the sound of the can opener and the sound of the electric mixer. When the electric mixer is going, Tiger is not about to be fed, so she does not come running to the kitchen looking for food.

On the other hand, when an organism demonstrates the conditioned response to stimuli that are similar to the conditioned stimulus, it is called stimulus generalization, the opposite of stimulus discrimination. The more similar a stimulus is to the conditioned stimulus, the more likely the organism is to give the conditioned response. For instance, if the electric mixer sounds very similar to the electric can opener, Tiger may come running after hearing its sound. But if you do not feed her following the electric mixer sound, and you continue to feed her consistently after the electric can opener sound, she will quickly learn to discriminate between the two sounds (provided they are sufficiently dissimilar that she can tell them apart).

Sometimes, classical conditioning can lead to habituation. Habituation occurs when we learn not to respond to a stimulus that is presented repeatedly without change. As the stimulus occurs over and over, we learn not to focus our attention on it. For example, imagine that your neighbor or roommate constantly has the television blaring. This background noise is distracting and makes it difficult for you to focus when you’re studying. However, over time, you become accustomed to the stimulus of the television noise, and eventually you hardly notice it any longer.

### BEHAVIORISM

John B. Watson, shown in Figure, is considered the founder of behaviorism. Behaviorism is a school of thought that arose during the first part of the 20th century and incorporates elements of Pavlov’s classical conditioning (Hunt, 2007). In stark contrast with Freud, who considered the reasons for behavior to be hidden in the unconscious, Watson championed the idea that all behavior can be studied as a simple stimulus-response reaction, without regard for internal processes. Watson argued that in order for psychology to become a legitimate science, it must shift its concern away from internal mental processes because mental processes cannot be seen or measured. Instead, he asserted that psychology must focus on outward observable behavior that can be measured.

Watson’s ideas were influenced by Pavlov’s work. According to Watson, human behavior, just like animal behavior, is primarily the result of conditioned responses. Whereas Pavlov’s work with dogs involved the conditioning of reflexes, Watson believed the same principles could be extended to the conditioning of human emotions (Watson, 1919). Thus began Watson’s work with his graduate student Rosalie Rayner and a baby called Little Albert. Through their experiments with Little Albert, Watson and Rayner (1920) demonstrated how fears can be conditioned.

In 1920, Watson was the chair of the psychology department at Johns Hopkins University. Through his position at the university he came to meet Little Albert’s mother, Arvilla Merritte, who worked at a campus hospital (DeAngelis, 2010). Watson offered her a dollar to allow her son to be the subject of his experiments in classical conditioning. Through these experiments, Little Albert was exposed to and conditioned to fear certain things. Initially he was presented with various neutral stimuli, including a rabbit, a dog, a monkey, masks, cotton wool, and a white rat. He was not afraid of any of these things. Then Watson, with the help of Rayner, conditioned Little Albert to associate these stimuli with an emotion—fear. For example, Watson handed Little Albert the white rat, and Little Albert enjoyed playing with it. Then Watson made a loud sound, by striking a hammer against a metal bar hanging behind Little Albert’s head, each time Little Albert touched the rat. Little Albert was frightened by the sound—demonstrating a reflexive fear of sudden loud noises—and began to cry. Watson repeatedly paired the loud sound with the white rat. Soon Little Albert became frightened by the white rat alone. In this case, what are the UCS, CS, UCR, and CR? Days later, Little Albert demonstrated stimulus generalization—he became afraid of other furry things: a rabbit, a furry coat, and even a Santa Claus mask (Figure). Watson had succeeded in conditioning a fear response in Little Albert, thus demonstrating that emotions could become conditioned responses. It had been Watson’s intention to produce a phobia—a persistent, excessive fear of a specific object or situation—through conditioning alone, thus countering Freud’s view that phobias are caused by deep, hidden conflicts in the mind. However, there is no evidence that Little Albert experienced phobias in later years.
Little Albert’s mother moved away, ending the experiment, and Little Albert himself died a few years later of unrelated causes. While Watson’s research provided new insight into conditioning, it would be considered unethical by today’s standards.

View scenes from John Watson’s experiment in which Little Albert was conditioned to respond in fear to furry objects.

As you watch the video, look closely at Little Albert’s reactions and the manner in which Watson and Rayner present the stimuli before and after conditioning. Based on what you see, would you come to the same conclusions as the researchers?

Advertising executives are pros at applying the principles of associative learning. Think about the car commercials you have seen on television. Many of them feature an attractive model. By associating the model with the car being advertised, you come to see the car as being desirable (Cialdini, 2008). You may be asking yourself, does this advertising technique actually work? According to Cialdini (2008), men who viewed a car commercial that included an attractive model later rated the car as being faster, more appealing, and better designed than did men who viewed an advertisement for the same car minus the model.

Have you ever noticed how quickly advertisers cancel contracts with a famous athlete following a scandal? As far as the advertiser is concerned, that athlete is no longer associated with positive feelings; therefore, the athlete cannot be used as an unconditioned stimulus to condition the public to associate positive feelings (the unconditioned response) with their product (the conditioned stimulus).

Now that you are aware of how associative learning works, see if you can find examples of these types of advertisements on television, in magazines, or on the Internet.

### Summary

Pavlov’s pioneering work with dogs contributed greatly to what we know about learning. His experiments explored the type of associative learning we now call classical conditioning. In classical conditioning, organisms learn to associate events that repeatedly happen together, and researchers study how a reflexive response to a stimulus can be mapped to a different stimulus—by training an association between the two stimuli. Pavlov’s experiments show how stimulus-response bonds are formed. Watson, the founder of behaviorism, was greatly influenced by Pavlov’s work. He tested humans by conditioning fear in an infant known as Little Albert. His findings suggest that classical conditioning can explain how some fears develop.

### Review Questions

A stimulus that does not initially elicit a response in an organism is a(n) ________.

1. unconditioned stimulus
2. neutral stimulus
3. conditioned stimulus
4. unconditioned response

In Watson and Rayner’s experiments, Little Albert was conditioned to fear a white rat, and then he began to be afraid of other furry white objects. This demonstrates ________.

1. higher-order conditioning
2. acquisition
3. stimulus discrimination
4. stimulus generalization

Extinction occurs when ________.

1. the conditioned stimulus is presented repeatedly without being paired with an unconditioned stimulus
2. the unconditioned stimulus is presented repeatedly without being paired with a conditioned stimulus
3. the neutral stimulus is presented repeatedly without being paired with an unconditioned stimulus
4. the neutral stimulus is presented repeatedly without being paired with a conditioned stimulus

In Pavlov’s work with dogs, the psychic secretions were ________.

1. unconditioned responses
2. conditioned responses
3. unconditioned stimuli
4. conditioned stimuli

### Critical Thinking Questions

If the sound of your toaster popping up toast causes your mouth to water, what are the UCS, CS, and CR?

Explain how the processes of stimulus generalization and stimulus discrimination are considered opposites.


How does a neutral stimulus become a conditioned stimulus?

### Personal Application Question

Can you think of an example in your life of how classical conditioning has produced a positive emotional response, such as happiness or excitement? How about a negative emotional response, such as fear, anxiety, or anger?

## Operant Conditioning

The previous section of this chapter focused on the type of associative learning known as classical conditioning. Remember that in classical conditioning, something in the environment triggers a reflex automatically, and researchers train the organism to react to a different stimulus. Now we turn to the second type of associative learning, operant conditioning. In operant conditioning, organisms learn to associate a behavior and its consequence (Table). A pleasant consequence makes that behavior more likely to be repeated in the future. For example, Spirit, a dolphin at the National Aquarium in Baltimore, does a flip in the air when her trainer blows a whistle. The consequence is that she gets a fish.

Classical and Operant Conditioning Compared

| | Classical Conditioning | Operant Conditioning |
| --- | --- | --- |
| Conditioning approach | An unconditioned stimulus (such as food) is paired with a neutral stimulus (such as a bell). The neutral stimulus eventually becomes the conditioned stimulus, which brings about the conditioned response (salivation). | The target behavior is followed by reinforcement or punishment to either strengthen or weaken it, so that the learner is more likely to exhibit the desired behavior in the future. |
| Stimulus timing | The stimulus occurs immediately before the response. | The stimulus (either reinforcement or punishment) occurs soon after the response. |

Psychologist B. F. Skinner saw that classical conditioning is limited to existing behaviors that are reflexively elicited, and it doesn’t account for new behaviors such as riding a bike. He proposed a theory about how such behaviors come about. Skinner believed that behavior is motivated by the consequences we receive for the behavior: the reinforcements and punishments. His idea that learning is the result of consequences is based on the law of effect, which was first proposed by psychologist Edward Thorndike. According to the law of effect, behaviors that are followed by consequences that are satisfying to the organism are more likely to be repeated, and behaviors that are followed by unpleasant consequences are less likely to be repeated (Thorndike, 1911). Essentially, if an organism does something that brings about a desired result, the organism is more likely to do it again. If an organism does something that does not bring about a desired result, the organism is less likely to do it again. An example of the law of effect is in employment. One of the reasons (and often the main reason) we show up for work is because we get paid to do so. If we stop getting paid, we will likely stop showing up—even if we love our job.

Working with Thorndike’s law of effect as his foundation, Skinner began conducting scientific experiments on animals (mainly rats and pigeons) to determine how organisms learn through operant conditioning (Skinner, 1938). He placed these animals inside an operant conditioning chamber, which has come to be known as a “Skinner box” (Figure). A Skinner box contains a lever (for rats) or disk (for pigeons) that the animal can press or peck for a food reward via the dispenser. Speakers and lights can be associated with certain behaviors. A recorder counts the number of responses made by the animal.

Watch this brief video clip to learn more about operant conditioning: Skinner is interviewed, and operant conditioning of pigeons is demonstrated.

In discussing operant conditioning, we use several everyday words—positive, negative, reinforcement, and punishment—in a specialized manner. In operant conditioning, positive and negative do not mean good and bad. Instead, positive means you are adding something, and negative means you are taking something away. Reinforcement means you are increasing a behavior, and punishment means you are decreasing a behavior. Reinforcement can be positive or negative, and punishment can also be positive or negative. All reinforcers (positive or negative) increase the likelihood of a behavioral response. All punishers (positive or negative) decrease the likelihood of a behavioral response. Now let’s combine these four terms: positive reinforcement, negative reinforcement, positive punishment, and negative punishment (Table).
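The four-term combination just described is, in effect, a small lookup: whether a stimulus is added or removed, crossed with whether the behavior increases or decreases. This sketch (the dictionary and function names are hypothetical, introduced only for illustration) makes that mapping explicit:

```python
# Map (stimulus action, effect on behavior) -> operant conditioning term.
# "positive"/"negative" describe adding vs. removing a stimulus;
# "reinforcement"/"punishment" describe increasing vs. decreasing behavior.

OPERANT_TERMS = {
    ("add",    "increase"): "positive reinforcement",
    ("remove", "increase"): "negative reinforcement",
    ("add",    "decrease"): "positive punishment",
    ("remove", "decrease"): "negative punishment",
}

def classify(stimulus_action, behavior_change):
    """Return the operant term for a given stimulus action and behavior change."""
    return OPERANT_TERMS[(stimulus_action, behavior_change)]

# The seatbelt alarm: an annoying sound is removed, and buckling up increases.
print(classify("remove", "increase"))  # negative reinforcement
```

Reading examples through this lookup is a quick check on the common confusion the chapter warns about: "negative" never means unpleasant here, only that something is taken away.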

Positive and Negative Reinforcement and Punishment

| | Reinforcement | Punishment |
| --- | --- | --- |
| Positive | Something is *added* to *increase* the likelihood of a behavior. | Something is *added* to *decrease* the likelihood of a behavior. |
| Negative | Something is *removed* to *increase* the likelihood of a behavior. | Something is *removed* to *decrease* the likelihood of a behavior. |

### REINFORCEMENT

The most effective way to teach a person or animal a new behavior is with positive reinforcement. In positive reinforcement, a desirable stimulus is added to increase a behavior. For example, you might tell a child that if she finishes her homework, she can play a game afterward; adding the desirable activity makes finishing homework more likely in the future.

In negative reinforcement, an undesirable stimulus is removed to increase a behavior. For example, car manufacturers use the principles of negative reinforcement in their seatbelt systems, which go “beep, beep, beep” until you fasten your seatbelt. The annoying sound stops when you exhibit the desired behavior, increasing the likelihood that you will buckle up in the future. Negative reinforcement is also used frequently in horse training. Riders apply pressure—by pulling the reins or squeezing their legs—and then remove the pressure when the horse performs the desired behavior, such as turning or speeding up. The pressure is the negative stimulus that the horse wants to remove.

### PUNISHMENT

Many people confuse negative reinforcement with punishment in operant conditioning, but they are two very different mechanisms. Remember that reinforcement, even when it is negative, always increases a behavior. In contrast, punishment always decreases a behavior. In positive punishment, you add an undesirable stimulus to decrease a behavior. An example of positive punishment is scolding a student to get the student to stop texting in class. In this case, a stimulus (the reprimand) is added in order to decrease the behavior (texting in class). In negative punishment, you remove a pleasant stimulus to decrease a behavior. For example, when a child misbehaves, a parent might take away a favorite toy. In this case, a stimulus (the toy) is removed in order to decrease the behavior (misbehaving).

Punishment, especially when it is immediate, is one way to decrease undesirable behavior. For example, imagine your four-year-old son, Brandon, runs into the busy street to get his ball. You give him a time-out (negative punishment, since being removed from play takes away a pleasant activity) and tell him never to go into the street again. Chances are he won’t repeat this behavior. While strategies like time-outs are common today, in the past children were often subject to physical punishment, such as spanking. It’s important to be aware of some of the drawbacks of using physical punishment on children. First, punishment may teach fear. Brandon may become fearful of the street, but he also may become fearful of the person who delivered the punishment—you, his parent. Similarly, children who are punished by teachers may come to fear the teacher and try to avoid school (Gershoff et al., 2010). Consequently, most schools in the United States have banned corporal punishment. Second, punishment may cause children to become more aggressive and prone to antisocial behavior and delinquency (Gershoff, 2002). Children see their parents resort to spanking when they become angry and frustrated, so, in turn, they may act out this same behavior when they become angry and frustrated. For example, because you spank Brandon when you are angry with him for his misbehavior, he might start hitting his friends when they won’t share their toys.

While positive punishment can be effective in some cases, Skinner suggested that the use of punishment should be weighed against the possible negative effects. Today’s psychologists and parenting experts favor reinforcement over punishment—they recommend that you catch your child doing something good and reward her for it.

#### Shaping

In his operant conditioning experiments, Skinner often used an approach called shaping. Instead of rewarding only the target behavior, in shaping, we reward successive approximations of a target behavior. Why is shaping needed? Remember that in order for reinforcement to work, the organism must first display the behavior. Shaping is needed because it is extremely unlikely that an organism will display anything but the simplest of behaviors spontaneously. In shaping, behaviors are broken down into many small, achievable steps. The specific steps used in the process are the following:

1. Reinforce any response that resembles the desired behavior.
2. Then reinforce the response that more closely resembles the desired behavior. You will no longer reinforce the previously reinforced response.
3. Next, begin to reinforce the response that even more closely resembles the desired behavior.
4. Continue to reinforce closer and closer approximations of the desired behavior.
5. Finally, only reinforce the desired behavior.
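The steps above can be sketched as a short simulation. This is an illustrative model, not anything Skinner actually implemented: response quality is collapsed to a single number from 0 to 100, the criterion for reinforcement tightens step by step, and each reinforced response nudges the organism's typical behavior toward the target.

```python
import random

def shape(criteria, attempts_per_step=50, seed=1):
    """Illustrative shaping loop: reinforce responses that meet the current
    criterion, then tighten the criterion. Quality runs from 0 to 100."""
    rng = random.Random(seed)
    skill = 0                                         # typical response quality so far
    history = []
    for criterion in criteria:                        # successive approximations
        for _ in range(attempts_per_step):
            response = skill + rng.randint(-10, 10)   # responses vary around skill
            if response >= criterion:                 # close enough to be reinforced
                skill = min(100, skill + 2)           # reinforcement strengthens behavior
        history.append((criterion, skill))
    return history

# Tighten the criterion in five steps, ending with only the target behavior.
progress = shape([5, 30, 60, 90, 100])
```

Each entry of `progress` pairs the criterion used at that step with the skill level reached, so you can watch the behavior climb toward the target; reinforcing only the final criterion from the start would almost never pay off, which is the point of shaping.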

Shaping is often used in teaching a complex behavior or chain of behaviors. Skinner used shaping to teach pigeons not only such relatively simple behaviors as pecking a disk in a Skinner box, but also many unusual and entertaining behaviors, such as turning in circles, walking in figure eights, and even playing ping pong; the technique is commonly used by animal trainers today. An important part of shaping is stimulus discrimination. Recall Pavlov’s dogs—he trained them to respond to the tone of a bell, and not to similar tones or sounds. This discrimination is also important in operant conditioning and in shaping behavior.

Here is a brief video of Skinner’s pigeons playing ping pong.

It’s easy to see how shaping is effective in teaching behaviors to animals, but how does shaping work with humans? Let’s consider parents whose goal is to have their child learn to clean his room. They use shaping to help him master steps toward the goal. Instead of performing the entire task, they set up these steps and reinforce each step. First, he cleans up one toy. Second, he cleans up five toys. Third, he chooses whether to pick up ten toys or put his books and clothes away. Fourth, he cleans up everything except two toys. Finally, he cleans his entire room.

### PRIMARY AND SECONDARY REINFORCERS

Rewards such as stickers, praise, money, toys, and more can be used to reinforce learning. Let’s go back to Skinner’s rats again. How did the rats learn to press the lever in the Skinner box? They were rewarded with food each time they pressed the lever. For animals, food would be an obvious reinforcer.

What would be a good reinforcer for humans? For your daughter Sydney, it was the promise of a toy if she cleaned her room. How about Joaquin, the soccer player? If you gave Joaquin a piece of candy every time he made a goal, you would be using a primary reinforcer. Primary reinforcers are reinforcers that have innate reinforcing qualities. These kinds of reinforcers are not learned. Water, food, sleep, shelter, sex, and touch, among others, are primary reinforcers. Pleasure is also a primary reinforcer. Organisms do not lose their drive for these things. For most people, jumping in a cool lake on a very hot day would be reinforcing and the cool lake would be innately reinforcing—the water would cool the person off (a physical need), as well as provide pleasure.

A secondary reinforcer has no inherent value and only has reinforcing qualities when linked with a primary reinforcer. Praise, linked to affection, is one example of a secondary reinforcer, as when you called out “Great shot!” every time Joaquin made a goal. Another example, money, is only worth something when you can use it to buy other things—either things that satisfy basic needs (food, water, shelter—all primary reinforcers) or other secondary reinforcers. If you were on a remote island in the middle of the Pacific Ocean and you had stacks of money, the money would not be useful if you could not spend it. What about the stickers on the behavior chart? They also are secondary reinforcers.

Sometimes, instead of stickers on a sticker chart, a token is used. Tokens, which are also secondary reinforcers, can then be traded in for rewards and prizes. Entire behavior management systems, known as token economies, are built around the use of these kinds of token reinforcers. Token economies have been found to be very effective at modifying behavior in a variety of settings such as schools, prisons, and mental hospitals. For example, a study by Cangi and Daly (2013) found that use of a token economy increased appropriate social behaviors and reduced inappropriate behaviors in a group of autistic school children. Autistic children tend to exhibit disruptive behaviors such as pinching and hitting. When the children in the study exhibited appropriate behavior (not hitting or pinching), they received a “quiet hands” token. When they hit or pinched, they lost a token. The children could then exchange specified amounts of tokens for minutes of playtime.
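The “quiet hands” token economy just described can be expressed as a small bookkeeping class. This is a hypothetical sketch of the mechanics (the class name and exchange rate are invented, not taken from Cangi and Daly's study): appropriate behavior earns a token, hitting or pinching costs one, and accumulated tokens are traded for minutes of playtime.

```python
class TokenEconomy:
    """Illustrative token economy: tokens are secondary reinforcers that
    can be exchanged for a backup reinforcer (here, minutes of playtime)."""

    def __init__(self, minutes_per_token=2):   # hypothetical exchange rate
        self.tokens = 0
        self.minutes_per_token = minutes_per_token

    def appropriate_behavior(self):
        self.tokens += 1                       # token delivered (reinforcement)

    def inappropriate_behavior(self):
        self.tokens = max(0, self.tokens - 1)  # token removed (negative punishment)

    def exchange(self):
        minutes = self.tokens * self.minutes_per_token
        self.tokens = 0                        # tokens spent on playtime
        return minutes

chart = TokenEconomy()
for _ in range(5):
    chart.appropriate_behavior()               # five "quiet hands" tokens earned
chart.inappropriate_behavior()                 # one token lost for hitting
playtime = chart.exchange()                    # 4 tokens at 2 minutes each
```

The tokens have no value of their own; they reinforce behavior only because they are reliably exchangeable for something the child actually wants, which is exactly what makes them secondary reinforcers.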

#### BEHAVIOR MODIFICATION IN CHILDREN

Parents and teachers often use behavior modification to change a child’s behavior. Behavior modification uses the principles of operant conditioning to accomplish behavior change so that undesirable behaviors are switched for more socially acceptable ones. Some teachers and parents create a sticker chart, in which several behaviors are listed (Figure). Sticker charts are a form of token economies, as described in the text. Each time children perform the behavior, they get a sticker, and after a certain number of stickers, they get a prize, or reinforcer. The goal is to increase acceptable behaviors and decrease misbehavior. Remember, it is best to reinforce desired behaviors, rather than to use punishment. In the classroom, the teacher can reinforce a wide range of behaviors, from students raising their hands, to walking quietly in the hall, to turning in their homework. At home, parents might create a behavior chart that rewards children for things such as putting away toys, brushing their teeth, and helping with dinner. In order for behavior modification to be effective, the reinforcement needs to be connected with the behavior; the reinforcement must matter to the child and be done consistently.

Time-out is another popular technique used in behavior modification with children. It operates on the principle of negative punishment. When a child demonstrates an undesirable behavior, she is removed from the desirable activity at hand (Figure). For example, say that Sophia and her brother Mario are playing with building blocks. Sophia throws some blocks at her brother, so you give her a warning that she will go to time-out if she does it again. A few minutes later, she throws more blocks at Mario. You remove Sophia from the room for a few minutes. When she comes back, she doesn’t throw blocks.

There are several important points that you should know if you plan to implement time-out as a behavior modification technique. First, make sure the child is being removed from a desirable activity and placed in a less desirable location. If the activity is something undesirable for the child, this technique will backfire because it is more enjoyable for the child to be removed from the activity. Second, the length of the time-out is important. The general rule of thumb is one minute for each year of the child’s age. Sophia is five; therefore, she sits in a time-out for five minutes. Setting a timer helps children know how long they have to sit in time-out. Finally, as a caregiver, keep several guidelines in mind over the course of a time-out: remain calm when directing your child to time-out; ignore your child during time-out (because caregiver attention may reinforce misbehavior); and give the child a hug or a kind word when time-out is over.

### REINFORCEMENT SCHEDULES

Remember, the best way to teach a person or animal a behavior is to use positive reinforcement. For example, Skinner used positive reinforcement to teach rats to press a lever in a Skinner box. At first, the rat might randomly hit the lever while exploring the box, and out would come a pellet of food. After eating the pellet, what do you think the hungry rat did next? It hit the lever again, and received another pellet of food. Each time the rat hit the lever, a pellet of food came out. When an organism receives a reinforcer each time it displays a behavior, it is called continuous reinforcement. This reinforcement schedule is the quickest way to teach someone a behavior, and it is especially effective in training a new behavior. Let’s look back at the dog that was learning to sit earlier in the chapter. Now, each time he sits, you give him a treat. Timing is important here: you will be most successful if you present the reinforcer immediately after he sits, so that he can make an association between the target behavior (sitting) and the consequence (getting a treat).

Watch this video clip where veterinarian Dr. Sophia Yin shapes a dog’s behavior using the steps outlined above.

Once a behavior is trained, researchers and trainers often turn to another type of reinforcement schedule—partial reinforcement. In partial reinforcement, also referred to as intermittent reinforcement, the person or animal does not get reinforced every time they perform the desired behavior. There are several different types of partial reinforcement schedules (Table). These schedules are described as either fixed or variable, and as either interval or ratio. Fixed refers to the number of responses between reinforcements, or the amount of time between reinforcements, which is set and unchanging. Variable refers to the number of responses or amount of time between reinforcements, which varies or changes. Interval means the schedule is based on the time between reinforcements, and ratio means the schedule is based on the number of responses between reinforcements.

Reinforcement Schedules

| Reinforcement Schedule | Description | Result | Example |
| --- | --- | --- | --- |
| Fixed interval | Reinforcement is delivered at predictable time intervals (e.g., after 5, 10, 15, and 20 minutes). | Moderate response rate with significant pauses after reinforcement | Hospital patient uses patient-controlled, doctor-timed pain relief |
| Variable interval | Reinforcement is delivered at unpredictable time intervals (e.g., after 5, 7, 10, and 20 minutes). | Moderate yet steady response rate | Checking Facebook |
| Fixed ratio | Reinforcement is delivered after a predictable number of responses (e.g., after 2, 4, 6, and 8 responses). | High response rate with pauses after reinforcement | Piecework—factory worker getting paid for every x number of items manufactured |
| Variable ratio | Reinforcement is delivered after an unpredictable number of responses (e.g., after 1, 4, 5, and 9 responses). | High and steady response rate | Gambling |
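The four schedules in the table can be modeled as simple rules that decide, response by response, whether a reinforcer is delivered. The sketch below is illustrative (the class names and parameters are invented): the ratio schedules count responses, the interval schedules watch the clock, and the variable versions randomize the requirement around a mean.

```python
import random

class FixedRatio:
    """Reinforce every n-th response (e.g., piecework pay)."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True                    # reinforcer delivered
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (e.g., a slot
    machine); the requirement averages out to mean_n."""
    def __init__(self, mean_n, seed=0):
        self.rng, self.mean_n, self.count = random.Random(seed), mean_n, 0
        self.needed = self.rng.randint(1, 2 * mean_n - 1)
    def respond(self):
        self.count += 1
        if self.count >= self.needed:
            self.count = 0
            self.needed = self.rng.randint(1, 2 * self.mean_n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after a fixed time has elapsed
    (e.g., an hourly dose limit on pain medication)."""
    def __init__(self, interval):
        self.interval, self.last = interval, 0
    def respond(self, now):
        if now - self.last >= self.interval:
            self.last = now
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable time has elapsed
    (e.g., a surprise quality-control visit)."""
    def __init__(self, mean_interval, seed=0):
        self.rng, self.mean = random.Random(seed), mean_interval
        self.last = 0
        self.wait = self.rng.uniform(0, 2 * mean_interval)
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last = now
            self.wait = self.rng.uniform(0, 2 * self.mean)
            return True
        return False

# A fixed ratio of 4 pays off on the 4th and 8th response.
fr = FixedRatio(4)
payoffs = sum(fr.respond() for _ in range(8))
```

Running many simulated responses through each class reproduces the pattern in the table: ratio schedules reward output directly, so they drive high response rates, while the variable versions make the next reinforcer unpredictable, which keeps responding steady.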

Now let’s combine these four terms. A fixed interval reinforcement schedule is when behavior is rewarded after a set amount of time. For example, June undergoes major surgery in a hospital. During recovery, she is expected to experience pain and will require prescription medications for pain relief. June is given an IV drip with a patient-controlled painkiller. Her doctor sets a limit: one dose per hour. June pushes a button when pain becomes difficult, and she receives a dose of medication. Since the reward (pain relief) only occurs on a fixed interval, there is no point in exhibiting the behavior when it will not be rewarded.


With a variable interval reinforcement schedule, the person or animal gets the reinforcement based on varying amounts of time, which are unpredictable. Say that Manuel is the manager at a fast-food restaurant. Every once in a while someone from the quality control division comes to Manuel’s restaurant. If the restaurant is clean and the service is fast, everyone on that shift earns a \$20 bonus. Manuel never knows when the quality control person will show up, so he always tries to keep the restaurant clean and ensures that his employees provide prompt and courteous service. His effort to provide prompt service and keep the restaurant clean is steady because he wants his crew to earn the bonus.

With a fixed ratio reinforcement schedule, there are a set number of responses that must occur before the behavior is rewarded. Carla sells glasses at an eyeglass store, and she earns a commission every time she sells a pair of glasses. She always tries to sell people more pairs of glasses, including prescription sunglasses or a backup pair, so she can increase her commission. She does not care whether the person really needs the prescription sunglasses; Carla just wants her bonus. The quality of what Carla sells does not matter because her commission is not based on quality; it’s only based on the number of pairs sold. This distinction in the quality of performance can help determine which reinforcement method is most appropriate for a particular situation. Fixed ratios are better suited to optimize the quantity of output, whereas a fixed interval, in which the reward is not quantity based, can lead to a higher quality of output.

In a variable ratio reinforcement schedule, the number of responses needed for a reward varies. This is the most powerful partial reinforcement schedule. An example of the variable ratio reinforcement schedule is gambling. Imagine that Sarah—generally a smart, thrifty woman—visits Las Vegas for the first time. She is not a gambler, but out of curiosity she puts a quarter into the slot machine, and then another, and another. Nothing happens. Two dollars in quarters later, her curiosity is fading, and she is just about to quit. But then, the machine lights up, bells go off, and Sarah gets 50 quarters back. That’s more like it! Sarah gets back to inserting quarters with renewed interest, and a few minutes later she has used up all her gains and is \$10 in the hole. Now might be a sensible time to quit. And yet, she keeps putting money into the slot machine because she never knows when the next reinforcement is coming. She keeps thinking that with the next quarter she could win \$50, or \$100, or even more. Because the reinforcement schedule in most types of gambling has a variable ratio schedule, people keep trying and hoping that the next time they will win big. This is one of the reasons that gambling is so addictive—and so resistant to extinction.

In operant conditioning, extinction of a reinforced behavior occurs at some point after reinforcement stops, and the speed at which this happens depends on the reinforcement schedule. In a variable ratio schedule, the point of extinction comes very slowly, as described above. But in the other reinforcement schedules, extinction may come quickly. For example, if June presses the button for the pain relief medication before the allotted time her doctor has approved, no medication is administered. She is on a fixed interval reinforcement schedule (dosed hourly), so extinction occurs quickly when reinforcement doesn’t come at the expected time. Among the reinforcement schedules, variable ratio is the most productive and the most resistant to extinction. Fixed interval is the least productive and the easiest to extinguish (Figure).

#### GAMBLING AND THE BRAIN

Skinner (1953) stated, “If the gambling establishment cannot persuade a patron to turn over money with no return, it may achieve the same effect by returning part of the patron’s money on a variable-ratio schedule” (p. 397).

Skinner uses gambling as an example of the power and effectiveness of conditioning behavior based on a variable ratio reinforcement schedule. In fact, Skinner was so confident in his knowledge of gambling addiction that he even claimed he could turn a pigeon into a pathological gambler (“Skinner’s Utopia,” 1971). Beyond the power of variable ratio reinforcement, gambling seems to work on the brain in the same way as some addictive drugs. The Illinois Institute for Addiction Recovery (n.d.) reports evidence suggesting that pathological gambling is an addiction similar to a chemical addiction (Figure). Specifically, gambling may activate the reward centers of the brain, much like cocaine does. Research has shown that some pathological gamblers have lower levels of the neurotransmitter (brain chemical) known as norepinephrine than do normal gamblers (Roy et al., 1988). According to a study conducted by Alec Roy and colleagues, norepinephrine is secreted when a person feels stress, arousal, or thrill; pathological gamblers use gambling to increase their levels of this neurotransmitter. Another researcher, neuroscientist Hans Breiter, has done extensive research on gambling and its effects on the brain. Breiter (as cited in Franzen, 2001) reports that “Monetary reward in a gambling-like experiment produces brain activation very similar to that observed in a cocaine addict receiving an infusion of cocaine” (para. 1). Deficiencies in serotonin (another neurotransmitter) might also contribute to compulsive behavior, including a gambling addiction.

It may be that pathological gamblers’ brains are different than those of other people, and perhaps this difference may somehow have led to their gambling addiction, as these studies seem to suggest. However, it is very difficult to ascertain the cause because it is impossible to conduct a true experiment (it would be unethical to try to turn randomly assigned participants into problem gamblers). Therefore, it may be that causation actually moves in the opposite direction—perhaps the act of gambling somehow changes neurotransmitter levels in some gamblers’ brains. It also is possible that some overlooked factor, or confounding variable, played a role in both the gambling addiction and the differences in brain chemistry.

### COGNITION AND LATENT LEARNING

Although strict behaviorists such as Skinner and Watson refused to believe that cognition (such as thoughts and expectations) plays a role in learning, another behaviorist, Edward C. Tolman, had a different opinion. Tolman’s experiments with rats demonstrated that organisms can learn even if they do not receive immediate reinforcement (Tolman & Honzik, 1930; Tolman, Ritchie, & Kalish, 1946). This finding was in conflict with the prevailing idea at the time that reinforcement must be immediate in order for learning to occur, thus suggesting a cognitive aspect to learning.

In the experiments, Tolman placed hungry rats in a maze with no reward for finding their way through it. He also studied a comparison group that was rewarded with food at the end of the maze. As the unreinforced rats explored the maze, they developed a cognitive map: a mental picture of the layout of the maze (Figure). After 10 sessions in the maze without reinforcement, food was placed in a goal box at the end of the maze. As soon as the rats became aware of the food, they were able to find their way through the maze quickly, just as quickly as the comparison group, which had been rewarded with food all along. This is known as latent learning: learning that occurs but is not observable in behavior until there is a reason to demonstrate it.

Latent learning also occurs in humans. Children may learn by watching the actions of their parents but only demonstrate it at a later date, when the learned material is needed. For example, suppose that Ravi’s dad drives him to school every day. In this way, Ravi learns the route from his house to his school, but he’s never driven there himself, so he has not had a chance to demonstrate that he’s learned the way. One morning Ravi’s dad has to leave early for a meeting, so he can’t drive Ravi to school. Instead, Ravi follows the same route on his bike that his dad would have taken in the car. This demonstrates latent learning. Ravi had learned the route to school, but had no need to demonstrate this knowledge earlier.

#### THIS PLACE IS LIKE A MAZE

Have you ever gotten lost in a building and couldn’t find your way back out? While that can be frustrating, you’re not alone. At one time or another we’ve all gotten lost in places like a museum, hospital, or university library. Whenever we go someplace new, we build a mental representation—or cognitive map—of the location, as Tolman’s rats built a cognitive map of their maze. However, some buildings are confusing because they include many areas that look alike or have short lines of sight. Because of this, it’s often difficult to predict what’s around a corner or decide whether to turn left or right to get out of a building. Psychologist Laura Carlson (2010) suggests that what we place in our cognitive map can impact our success in navigating through the environment. She suggests that paying attention to specific features upon entering a building, such as a picture on the wall, a fountain, a statue, or an escalator, adds information to our cognitive map that can be used later to help find our way out of the building.

### Summary

Operant conditioning is based on the work of B. F. Skinner. Operant conditioning is a form of learning in which the motivation for a behavior happens *after* the behavior is demonstrated. An animal or a human receives a consequence after performing a specific behavior. The consequence is either a reinforcer or a punisher. All reinforcement (positive or negative) *increases* the likelihood of a behavioral response. All punishment (positive or negative) *decreases* the likelihood of a behavioral response. Several types of reinforcement schedules are used to reward behavior, based on either a set or variable number of responses or period of time.

### Review Questions

________ is when you take away a pleasant stimulus to stop a behavior.

1. positive reinforcement
2. negative reinforcement
3. positive punishment
4. negative punishment

Which of the following is *not* an example of a primary reinforcer?

1. food
2. money
3. water
4. sex

Rewarding successive approximations toward a target behavior is ________.

1. shaping
2. extinction
3. positive reinforcement
4. negative reinforcement

Slot machines reward gamblers with money according to which reinforcement schedule?

1. fixed ratio
2. variable ratio
3. fixed interval
4. variable interval

### Critical Thinking Questions

What is a Skinner box and what is its purpose?

What is the difference between negative reinforcement and punishment?


What is shaping and how would you use shaping to teach a dog to roll over?

### Personal Application Questions

Explain the difference between negative reinforcement and punishment, and provide several examples of each based on your own experiences.

Think of a behavior that you have that you would like to change. How could you use behavior modification, specifically positive reinforcement, to change your behavior? What is your positive reinforcer?

### Glossary

[glossary-page]
[glossary-term]acquisition:[/glossary-term]
[glossary-definition]period of initial learning in classical conditioning in which a human or an animal begins to connect a neutral stimulus and an unconditioned stimulus so that the neutral stimulus will begin to elicit the conditioned response[/glossary-definition]

[glossary-term]classical conditioning:[/glossary-term]
[glossary-definition]learning in which the stimulus or experience occurs before the behavior and then gets paired or associated with the behavior[/glossary-definition]

[glossary-term]cognitive map:[/glossary-term]
[glossary-definition]mental picture of the layout of the environment[/glossary-definition]

[glossary-term]conditioned response (CR):[/glossary-term]
[glossary-definition]response caused by the conditioned stimulus[/glossary-definition]

[glossary-term]conditioned stimulus (CS):[/glossary-term]
[glossary-definition]stimulus that elicits a response due to its being paired with an unconditioned stimulus[/glossary-definition]

[glossary-term]continuous reinforcement:[/glossary-term]
[glossary-definition]rewarding a behavior every time it occurs[/glossary-definition]

[glossary-term]extinction:[/glossary-term]
[glossary-definition]decrease in the conditioned response when the unconditioned stimulus is no longer paired with the conditioned stimulus[/glossary-definition]

[glossary-term]fixed interval reinforcement schedule:[/glossary-term]
[glossary-definition]behavior is rewarded after a set amount of time[/glossary-definition]

[glossary-term]fixed ratio reinforcement schedule:[/glossary-term]
[glossary-definition]set number of responses must occur before a behavior is rewarded[/glossary-definition]

[glossary-term]habituation:[/glossary-term]
[glossary-definition]when we learn not to respond to a stimulus that is presented repeatedly without change[/glossary-definition]

[glossary-term]higher-order conditioning:[/glossary-term]
[glossary-definition](also, second-order conditioning) using a conditioned stimulus to condition a neutral stimulus[/glossary-definition]

[glossary-term]latent learning:[/glossary-term]
[glossary-definition]learning that occurs, but it may not be evident until there is a reason to demonstrate it[/glossary-definition]

[glossary-term]law of effect:[/glossary-term]
[glossary-definition]behavior that is followed by consequences satisfying to the organism will be repeated and behaviors that are followed by unpleasant consequences will be discouraged[/glossary-definition]

[glossary-term]negative punishment:[/glossary-term]
[glossary-definition]taking away a pleasant stimulus to decrease or stop a behavior[/glossary-definition]

[glossary-term]negative reinforcement:[/glossary-term]
[glossary-definition]taking away an undesirable stimulus to increase a behavior[/glossary-definition]

[glossary-term]neutral stimulus (NS):[/glossary-term]
[glossary-definition]stimulus that does not initially elicit a response[/glossary-definition]

[glossary-term]operant conditioning:[/glossary-term]
[glossary-definition]form of learning in which the stimulus/experience happens after the behavior is demonstrated[/glossary-definition]

[glossary-term]partial reinforcement:[/glossary-term]
[glossary-definition]rewarding behavior only some of the time[/glossary-definition]

[glossary-term]positive punishment:[/glossary-term]
[glossary-definition]adding an undesirable stimulus to stop or decrease a behavior[/glossary-definition]

[glossary-term]positive reinforcement:[/glossary-term]
[glossary-definition]adding a desirable stimulus to increase a behavior[/glossary-definition]

[glossary-term]primary reinforcer:[/glossary-term]
[glossary-definition]has innate reinforcing qualities (e.g., food, water, shelter, sex)[/glossary-definition]

[glossary-term]punishment:[/glossary-term]
[glossary-definition]implementation of a consequence in order to decrease a behavior[/glossary-definition]

[glossary-term]reinforcement:[/glossary-term]
[glossary-definition]implementation of a consequence in order to increase a behavior[/glossary-definition]

[glossary-term]secondary reinforcer:[/glossary-term]
[glossary-definition]has no inherent value unto itself and only has reinforcing qualities when linked with something else (e.g., money, gold stars, poker chips)[/glossary-definition]

[glossary-term]shaping:[/glossary-term]
[glossary-definition]rewarding successive approximations toward a target behavior[/glossary-definition]

[glossary-term]spontaneous recovery:[/glossary-term]
[glossary-definition]return of a previously extinguished conditioned response[/glossary-definition]

[glossary-term]stimulus discrimination:[/glossary-term]
[glossary-definition]ability to respond differently to similar stimuli[/glossary-definition]

[glossary-term]stimulus generalization:[/glossary-term]
[glossary-definition]demonstrating the conditioned response to stimuli that are similar to the conditioned stimulus[/glossary-definition]

[glossary-term]unconditioned response (UCR):[/glossary-term]
[glossary-definition]natural (unlearned) behavior to a given stimulus[/glossary-definition]


[glossary-term]unconditioned stimulus (UCS):[/glossary-term]
[glossary-definition]stimulus that elicits a reflexive response[/glossary-definition]

[glossary-term]variable interval reinforcement schedule:[/glossary-term]
[glossary-definition]behavior is rewarded after unpredictable amounts of time have passed[/glossary-definition]

[glossary-term]variable ratio reinforcement schedule:[/glossary-term]
[glossary-definition]number of responses needed before a behavior is rewarded varies[/glossary-definition]
[/glossary-page]
