In operant conditioning, a fixed-interval schedule is a schedule of reinforcement in which the first response is rewarded only after a specified amount of time has elapsed. This schedule produces a high rate of responding near the end of the interval, followed by a marked slowdown immediately after the reinforcer is delivered.
As you may remember, operant conditioning relies on either reinforcement or punishment to strengthen or weaken a response.
This process of learning involves forming an association between a behavior and the consequences of that behavior. Behaviors that are followed by desirable outcomes become stronger and are therefore more likely to occur again in the future. Actions that are followed by unfavorable outcomes become less likely to occur again in the future.
In order to better understand how a fixed-interval schedule works, let’s begin by taking a closer look at the term itself.
A schedule refers to the rate at which the reinforcement is delivered or how frequently a response is reinforced. An interval refers to a period of time, which suggests that the rate of delivery is dependent upon how much time has elapsed. Finally, fixed suggests that the timing of delivery is set at a predictable and unchanging schedule.
For example, imagine that you are training a pigeon to peck at a key. You put the animal on a fixed-interval 30 schedule (FI-30), which means that reinforcement becomes available every 30 seconds. The pigeon can continue to peck the key during that interval but will only receive a food pellet for the first peck of the key after that fixed 30-second interval has elapsed.
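The reinforcement rule in this FI-30 example can be sketched as a short simulation. This is an illustrative model, not from the source: the function name, parameters, and the assumption that the pigeon pecks randomly each second are all hypothetical.

```python
import random

def simulate_fixed_interval(total_time=120, interval=30, peck_prob=0.5, seed=0):
    """Simulate a fixed-interval (FI) schedule.

    Only the FIRST response after `interval` seconds have elapsed
    since the last reinforcer is rewarded; pecks made before the
    interval elapses earn nothing.
    """
    rng = random.Random(seed)
    last_reinforced = 0     # time the last food pellet was delivered
    reinforcer_times = []
    for t in range(1, total_time + 1):
        pecked = rng.random() < peck_prob   # hypothetical random pecking
        if pecked and t - last_reinforced >= interval:
            reinforcer_times.append(t)      # first peck past the interval
            last_reinforced = t             # timer restarts after delivery
    return reinforcer_times
```

Note that with a pigeon pecking every second (`peck_prob=1.0`), pellets arrive exactly 30 seconds apart; with sparser pecking, the gaps between pellets are always at least 30 seconds but can be longer, since the reward waits for the next peck after the interval elapses.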