BEHAVIORISM: AN INFLUENTIAL APPROACH IN PSYCHOLOGY

 

Sakshi Kanojia, Rudrani Mishra

Abstract

The present article discusses behaviorism as a school of psychology: how it evolved over time and how conditioning shapes our responses and the consequences of our actions. The aim is to help readers understand these concepts through a clearer lens and relate them better to everyday life.

Contents 

  1. History
  2. Behaviorism: A School of Psychology
  3. Key Concepts of Behaviorism

            3.1. Classical Conditioning

            3.2. Operant Conditioning

                    3.2.1. Reinforcement

  • Positive reinforcement
  • Negative reinforcement

                    3.2.2. Schedules of Reinforcement

  • Fixed-ratio
  • Variable-ratio
  • Fixed-interval
  • Variable-interval

            3.3. Observational Learning

  4. Case Study
  5. References
 

History of Behaviorism

Behaviorism, or behavioral psychology, is a school of thought that began to emerge in 1913.

John B. Watson (1878-1958), also known as the father of behaviorism, led this new movement. He opposed the "mentalism" of the structuralists and argued that the subject matter of psychology was observable behavior, not the contents of inner consciousness, which can neither be seen nor experimentally verified. According to him, humans are products of their learning experiences.

 

IVAN PAVLOV: Early work in the field of behavior was conducted by the Russian physiologist Ivan Pavlov (1849-1936). Pavlov studied a form of learning called the conditioned reflex, in which an animal or human produces a reflex response to a stimulus and, over time, is conditioned to produce that response to a different stimulus that the experimenter has associated with the original one. Pavlov published the results of this work in 1897.

THORNDIKE: Thorndike proposed the "law of effect," which states that any behavior followed by pleasant consequences is likely to be repeated, while any behavior followed by unpleasant consequences is less likely to occur again. Thorndike formalized the law of effect in 1905.

WATSON AND RAYNER: In 1920, Watson and Rayner conducted the Little Albert experiment, in which they conditioned an 11-month-old child, "Albert B," to fear a white rat. First, they exposed the child to various stimuli such as a white rat, a rabbit, a monkey, and masks. Initially, Albert was not scared of any of these stimuli, but when the appearance of the white rat was repeatedly paired with a loud sound, the child began to cry. He later generalized this fear to other furry objects. The experiment became a landmark in the history of conditioning.

B. F. SKINNER: B. F. Skinner (1904-1990) was a leading 20th-century behaviorist who made notable contributions to psychology. In his 1938 book "The Behavior of Organisms," he introduced and explained the concepts of operant conditioning and shaping. He defined operant conditioning as "a learning type in which behavior is influenced by the outcome that follows it."

 

Behaviorism: A School of Thought

Behaviorism dominated psychological theory between the two world wars. It proposes that all behavior is learned from the environment, and it is concerned primarily with observable data rather than with the unobservable workings of inner consciousness.



Types of Behaviorism

Methodological Behaviorism

John B. Watson was a pioneer of methodological behaviorism, a normative view of how psychology should conduct its empirical work. It asserts that psychology should be concerned only with the observable behavior of organisms (human and nonhuman animals).

Radical Behaviorism

Radical behaviorism was propounded by Skinner. According to him, society could harness the power of the environment through "social engineering." Radical behaviorism inherits from earlier behaviorism the assumption that the science of behavior is a natural science, the belief that animal behavior can profitably be studied and compared with human behavior, and a strong emphasis on the environment as the source of behavior.

Cognitive Behaviorism

In the 1960s and 1970s, several psychologists showed that cognitive processes such as attention and memory could be studied experimentally. Cognitive behaviorism proposes that learning experiences and the environment influence our expectations and other thoughts, and that those thoughts in turn influence how we behave. Cognitive behaviorism remains an influential viewpoint to this day.


Key Concepts of Behaviorism: Conditioning

According to behavioral psychology, there are two types of conditioning, namely classical and operant. 

CLASSICAL CONDITIONING 

In his classical conditioning experiments, Pavlov worked with dogs. He began with the observation that a dog does not need to learn to salivate (unconditioned response) when food (unconditioned stimulus) is presented. He then rang a bell just before presenting the food.

At first, the dogs did not salivate until the food was presented. After a while, however, they began to salivate (conditioned response) at the sound of the bell alone (conditioned stimulus): they had learned to associate the sound of the bell with the presentation of food. Pavlov called this "classical conditioning."

Thus, classical conditioning is a type of learning in which an organism learns to associate a stimulus with a response. After repeated pairings of the conditioned stimulus (bell) with the unconditioned stimulus (food), the CS (bell) begins eliciting the CR (salivation). The dog has thus learned to respond to a previously neutral stimulus (the bell).

 

 

Principles of Classical Conditioning

a) Acquisition: 

Acquisition refers to the period during which a response is being learned. It occurs when the CS (bell), after repeated pairings with the US (food), comes to elicit the conditioned response (salivation).

b) Extinction:

Extinction occurs when, after conditioning, the CS is repeatedly presented without the US and gradually ceases to produce the CR.

c) Spontaneous Recovery: 

It refers to the reappearance of the CR after a rest period following extinction.
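Acquisition and extinction can be illustrated with a toy associative-strength model, in the spirit of a much-simplified Rescorla–Wagner update. The learning rate and trial counts below are illustrative assumptions, not values from Pavlov's experiments:

```python
def update(v, us_present, alpha=0.3):
    """Nudge associative strength v toward 1 when the US follows the CS
    (acquisition) and toward 0 when the CS appears alone (extinction)."""
    target = 1.0 if us_present else 0.0
    return v + alpha * (target - v)

v = 0.0
for _ in range(10):                # acquisition: bell paired with food
    v = update(v, us_present=True)
strength_after_acquisition = v     # approaches 1: the CS now elicits the CR

for _ in range(10):                # extinction: bell alone, no food
    v = update(v, us_present=False)
strength_after_extinction = v      # decays back toward 0: the CR fades
```

Spontaneous recovery is not captured by this simple update; modeling it would require assuming that extinction suppresses, rather than erases, the learned association.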

 

OPERANT CONDITIONING

Skinner coined the term operant behavior. Operant conditioning is a type of learning in which behavior is influenced by the consequences that follow it: if the consequences are positive, the behavior is likely to occur again; if they are negative, it is less likely to recur.

 

 

Reinforcement

Reinforcers are responses from the environment that increase the likelihood of a behavior being repeated. Reinforcement can be either positive or negative.

Positive Reinforcement

Positive reinforcement is a concept defined by B. F. Skinner in his theory of operant conditioning. In positive reinforcement, a response or action is strengthened by a reward, leading to the continuation of the desired behavior: a rewarding stimulus is added after the behavior occurs.

Positive reinforcement increases a behavior by presenting a person with a rewarding outcome. For example, if your teacher gives you Rs. 5 each time you complete your homework (a reward), you are more likely to repeat the behavior in the future, thereby reinforcing the habit of completing your homework.

Negative Reinforcement

Negative reinforcement is the removal of an unpleasant stimulus following a response. It is called negative reinforcement because the elimination of an unfavorable stimulus is itself "rewarding" to the animal or human. Negative reinforcement strengthens behavior by stopping or removing an unpleasant experience.

There are two learned responses to negative reinforcement:

1. Escape Learning:

Escape conditioning is an aversive type of conditioning. The term "aversive" applies to stimuli an organism will work to get away from; generally, these are uncomfortable or painful. When an aversive stimulus is presented, the animal responds by fleeing it.

For example, if a monkey discovers that pulling a string stops a loud noise, escape conditioning has occurred.

2. Avoidance Learning:

Avoidance learning is a process in which a person learns an action or reaction that lets them avoid a stressful or unpleasant situation: the behavior serves to prevent the aversive situation from occurring at all.

For example, an individual who has an allergic reaction after eating a certain food a couple of times eventually learns to avoid that food and stops eating it altogether.

 

Schedules of Reinforcement

 

1. Continuous Scheduling:

Every time the desired behavior occurs, it is reinforced: each defined response is followed by a defined outcome or consequence. This schedule works best in the early stages of learning, when a clear connection between behavior and consequence is being established, and it is the most successful way to teach a new behavior.

2. Partial Scheduling:

A continuous reinforcement schedule is typically shifted to a partial reinforcement schedule once the response is firmly established. In partial (or intermittent) reinforcement, the response is reinforced only part of the time. With partial reinforcement, learned patterns develop more slowly, but the response is more resistant to extinction.

Partial scheduling is of four types:

 
  1. Fixed-ratio schedule

A fixed-ratio schedule is one in which a response is reinforced only after a certain number of responses have been made. This schedule produces a high, consistent rate of responding, with only a brief pause after each reinforcer is delivered.

Delivering a food pellet to a rat after it presses a lever five times is an example of a fixed-ratio schedule.

  2. Variable-ratio schedule

When a response is reinforced after an unpredictable number of responses, the schedule is called variable-ratio. This schedule produces a high, consistent rate of responding. Payouts in gambling and lottery games follow a variable-ratio schedule.

In a lab environment, this would include feeding food pellets to a rat after one bar press, then four bar presses, and then two bar presses.

  3. Fixed-interval schedule

Fixed-interval schedules reinforce the first response made after a predetermined period has passed. This schedule produces a burst of responding toward the end of each interval, with slower responding immediately after the reinforcer is delivered.

In a lab setting, reinforcing a rat with a food pellet for the first bar press after a 30-second interval will be an example of this.

  4. Variable-interval schedule

If a response is reinforced after an unpredictable period has elapsed, the schedule is called variable-interval. This schedule produces a slow, steady rate of responding.

Delivering a food pellet to a rat after the first bar press after a one-minute interval, a second pellet for the first response after a five-minute pause, and a third pellet for the first response after a three-minute pause are all examples of this.
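The four partial schedules can be sketched as tiny decision rules: each takes a response (with its time stamp, in seconds) and decides whether to deliver a reinforcer. This is an illustrative sketch, not an implementation from any particular study; the class names and parameters are our own.

```python
import random

class FixedRatio:
    """Reinforce every n-th response, regardless of timing."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self, t):
        self.count += 1
        if self.count == self.n:
            self.count = 0
            return True
        return False

class VariableRatio:
    """Reinforce after an unpredictable number of responses (mean ~n)."""
    def __init__(self, n, seed=0):
        self.rng = random.Random(seed)
        self.n, self.count = n, 0
        self.target = self.rng.randint(1, 2 * n - 1)
    def respond(self, t):
        self.count += 1
        if self.count >= self.target:
            self.count = 0
            self.target = self.rng.randint(1, 2 * self.n - 1)
            return True
        return False

class FixedInterval:
    """Reinforce the first response after `interval` seconds have passed."""
    def __init__(self, interval):
        self.interval, self.last = interval, 0.0
    def respond(self, t):
        if t - self.last >= self.interval:
            self.last = t
            return True
        return False

class VariableInterval:
    """Reinforce the first response after an unpredictable delay (mean ~interval)."""
    def __init__(self, interval, seed=0):
        self.rng = random.Random(seed)
        self.interval, self.last = interval, 0.0
        self.wait = self.rng.uniform(0, 2 * interval)
    def respond(self, t):
        if t - self.last >= self.wait:
            self.last = t
            self.wait = self.rng.uniform(0, 2 * self.interval)
            return True
        return False

# A rat pressing a lever once per second on a fixed-ratio-5 schedule
# earns a pellet on presses 5, 10, 15, ...
fr = FixedRatio(5)
reinforced_presses = [i for i in range(1, 16) if fr.respond(i)]
```

Running the same stream of responses through the interval schedules shows the contrast: fixed-ratio reinforcement depends only on counting responses, while fixed-interval reinforcement depends only on elapsed time.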


OBSERVATIONAL LEARNING

Observational learning refers to learning in which a behavior is acquired by watching others and later re-enacting it. For example, a child learns to do chores by watching their parents do them, or sometimes picks up bad words by listening to their parents.

The human capacity to learn behavior by observation is called modeling. It allows behavior to be learned without trial and error; you wouldn't want doctors, for example, to learn through trial and error.


According to Bandura, modeling occurs through four basic processes namely:

1) Attention: The subject must pay attention to the behavior of the model.

2) Retention: The behavior must be retained in memory so that it can be recalled when required.

3) Reproduction: The subject must be able to physically reproduce the behavior for it to be learned.

4) Motivation: There must be some motivation to display the behavior.

 

 

Bandura’s Classic BOBO DOLL Experiment

In this experiment, children watched a film in which a model was shown hitting a "Bobo doll." One group saw the model punished for the behavior, another saw the model praised, and the last group saw no consequences at all. The group that saw the model being punished was the least likely to behave aggressively towards the doll.


A Case Study in Behavioral Psychology

Little Albert


The participant in the experiment was a boy whom Watson and Rayner named "Albert B," now popularly known as Little Albert. When Little Albert was 9 months old, Watson and Rayner exposed him to several stimuli, including a white rat, a cat, a monkey, masks, and a burning newspaper, and examined the boy's reactions.

The boy initially displayed no fear of any of the items he had seen.

The next time Albert was introduced to the rat, Watson made a loud noise by striking a metal pipe with a hammer. Naturally, the infant started to cry after hearing the loud noise. After the white rat was repeatedly paired with the loud noise, Albert came to expect the terrifying sound every time he saw the white rat, and would begin to weep as soon as he saw it.
 

  



References:

  • Morgan, C. T., King, R. A., Weisz, J. R., & Schopler, J. (1986). Introduction to psychology. New York: McGraw-Hill.

 
