Sub:- Computer Organisation & Architecture
Pipelines and Hazards
22/IT/171 Snigdhendu Chattopadhyay
22/IT/172 Soham Mukherjee
22/IT/173 Somdutta Mukherjee
22/IT/174 Soudarjya Guha
Introduction to Pipelines
What is a Pipeline?
Pipelining is a technique for breaking a sequential process down into sub-operations and executing each sub-operation in its own dedicated segment, with all segments operating concurrently. The most significant feature of pipelining is that it allows several computations to be in progress in different segments at the same time.
By associating a register with every segment of the pipeline, computation can be overlapped: the registers isolate the segments from one another, so each segment can work on different data at the same time. A pipeline organisation can therefore be pictured as an input register for each segment, followed by the combinational circuit that performs that segment's sub-operation.
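As a rough illustration of this register-per-segment organisation (added here, not part of the original write-up), the following C sketch simulates a three-segment pipeline that computes (A[i] * B[i]) + C[i]. The operand arrays and the register names R1 to R5 are illustrative assumptions, and the assignments are written last-segment-first so each loop iteration behaves like a simultaneous register update at a clock edge.

```c
/* A rough sketch (not from the original write-up): a 3-segment pipeline
 * computing (A[i] * B[i]) + C[i].  Each segment owns input registers
 * (R1..R5) that are reloaded on every clock tick, so three different
 * items can be in flight at once.  Assignments run last-segment-first
 * to mimic a simultaneous register update at the clock edge. */
#include <stdio.h>

#define N 6

int main(void) {
    int A[N] = {1, 2, 3, 4, 5, 6};
    int B[N] = {2, 2, 2, 2, 2, 2};
    int C[N] = {1, 1, 1, 1, 1, 1};

    int R1 = 0, R2 = 0;   /* segment 1 registers: latch A[i] and B[i]      */
    int R3 = 0, R4 = 0;   /* segment 2 registers: latch the product and C  */
    int R5 = 0;           /* segment 3 register: latch the final sum       */

    for (int clk = 0; clk < N + 2; clk++) {       /* N items + 2 cycles to drain */
        R5 = R3 + R4;                             /* segment 3: add              */
        R4 = (clk >= 1 && clk <= N) ? C[clk - 1] : 0;
        R3 = R1 * R2;                             /* segment 2: multiply         */
        R1 = (clk < N) ? A[clk] : 0;              /* segment 1: load operands    */
        R2 = (clk < N) ? B[clk] : 0;

        if (clk >= 2)                             /* first result after 2 cycles */
            printf("result %d = %d\n", clk - 2, R5);
    }
    return 0;
}
```

After an initial latency of two clock cycles to fill the pipeline, one result emerges per cycle, which is exactly the overlap the segment registers are meant to provide.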
Types of Pipelining
1 Arithmetic Pipeline
2 Instruction Pipeline
These pipelines can be designed as either static or dynamic, each with unique advantages and
potential hazards.
Instruction Pipelines
Instruction pipelines break the execution of a single instruction into multiple steps, allowing several instructions to be processed concurrently. This parallelism improves processor throughput, but it can also introduce pipeline hazards that disrupt the flow of instructions. A typical instruction passes through the following stages (a small simulation is sketched after this list):
• Fetch instruction from memory
• Decode the instruction and prepare operands
• Execute the instruction
• Write back results to registers
• Commit the instruction
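As a minimal sketch of how these stages overlap (an illustration added here, using the five stage names from the list above and made-up instruction labels I1 to I4), the following C program advances each instruction one stage per clock and prints which instruction occupies each stage.

```c
/* A rough sketch (illustrative, not a real ISA): instructions move one
 * stage forward per clock cycle, so up to five instructions overlap. */
#include <stdio.h>

#define STAGES  5
#define N_INSTR 4

int main(void) {
    const char *stage_name[STAGES] = {"Fetch", "Decode", "Execute", "Write-back", "Commit"};
    const char *instr[N_INSTR]     = {"I1", "I2", "I3", "I4"};
    int in_stage[STAGES];                 /* instruction index held by each stage, -1 = empty */

    for (int s = 0; s < STAGES; s++) in_stage[s] = -1;

    for (int clk = 0; clk < N_INSTR + STAGES - 1; clk++) {
        /* Clock edge: every instruction advances to the next stage. */
        for (int s = STAGES - 1; s > 0; s--) in_stage[s] = in_stage[s - 1];
        in_stage[0] = (clk < N_INSTR) ? clk : -1;   /* fetch a new instruction if any remain */

        printf("cycle %d:", clk + 1);
        for (int s = 0; s < STAGES; s++)
            printf("  %s=%s", stage_name[s], in_stage[s] >= 0 ? instr[in_stage[s]] : "--");
        printf("\n");
    }
    return 0;
}
```

With four instructions and five stages, the whole sequence finishes in eight cycles rather than the twenty a purely sequential execution would need, which is where the throughput gain of instruction pipelining comes from.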
Arithmetic Pipelines
Arithmetic pipelines are specialized circuits that handle mathematical operations such as addition, subtraction, multiplication, and division. These pipelines break complex operations into a sequence of simpler steps, allowing multiple operations to be executed concurrently for improved throughput. A typical operation passes through the following steps (a floating-point example is sketched after this list):
• Fetch Operands: Retrieve the input values from registers or memory
• Execute Operation: Perform the required arithmetic computation in a series of stages
• Write Result: Store the final output in a register or memory location
• Handle Exceptions: Detect and manage issues like overflow, underflow, or divide-by-zero errors
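A classic concrete case of an arithmetic pipeline is floating-point addition split into four stages: compare exponents, align mantissas, add mantissas, and normalize. The C sketch below (added for illustration) walks one pair of operands through those steps using a toy decimal mantissa-and-exponent format, which is an assumption chosen for readability rather than IEEE 754; exception handling is reduced to normalizing mantissa overflow and underflow.

```c
/* A rough sketch using a toy decimal format, not IEEE 754: the classic
 * four-stage floating-point adder often used to illustrate an arithmetic
 * pipeline.  Each stage boundary is marked as a separate step. */
#include <stdio.h>

typedef struct { double mantissa; int exponent; } Fp;   /* value = mantissa * 10^exponent */

static Fp fp_add(Fp x, Fp y) {
    /* Stage 1: compare exponents and keep the larger one. */
    int diff = x.exponent - y.exponent;
    int exp  = (diff >= 0) ? x.exponent : y.exponent;

    /* Stage 2: align the mantissa of the smaller operand. */
    double mx = x.mantissa, my = y.mantissa;
    for (int i = 0; i < (diff >= 0 ? diff : -diff); i++) {
        if (diff >= 0) my /= 10.0; else mx /= 10.0;
    }

    /* Stage 3: add the aligned mantissas. */
    double m = mx + my;

    /* Stage 4: normalize, handling mantissa overflow/underflow. */
    while (m >= 1.0)            { m /= 10.0; exp++; }
    while (m != 0.0 && m < 0.1) { m *= 10.0; exp--; }

    Fp r = { m, exp };
    return r;
}

int main(void) {
    Fp a = { 0.9504, 3 };   /* 0.9504 x 10^3 */
    Fp b = { 0.8200, 2 };   /* 0.8200 x 10^2 */
    Fp s = fp_add(a, b);
    printf("sum = %.5f x 10^%d\n", s.mantissa, s.exponent);   /* 0.10324 x 10^4 */
    return 0;
}
```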
Static vs. Dynamic Pipelines
1.Static Pipelines
Static pipelines are designed to perform a single, fixed operation, such as multiplication or addition. They have a fixed sequence of stages that cannot be changed at runtime.
2.Dynamic Pipelines
Dynamic pipelines can adapt their sequence of stages to the type of instruction being executed. This allows them to handle a wider range of operations but introduces more complexity.
3.Advantages
Static pipelines are simpler to design and have lower hardware cost. Dynamic pipelines offer greater flexibility and can potentially achieve higher throughput.
4.Trade-offs
The trade-off is between efficiency and versatility. Static pipelines are more efficient for a limited set of operations, while dynamic pipelines sacrifice some efficiency for broader capabilities.
What are Instruction Pipeline Hazards?
In CPU microarchitecture, hazards are problems with the instruction pipeline that arise when the next instruction cannot execute in the following clock cycle; if left unhandled, they can lead to incorrect computation results.
Beyond the familiar limit that memory speed places on the CPU, a pipelined design raises a further issue: several instructions are at different stages of execution at the same time, and these instructions may become dependent on one another, reducing the pipeline's pace. Dependencies arise for a variety of reasons, which we examine shortly. Dependencies in the pipeline are referred to as hazards because they put execution at risk, and the two terms are used interchangeably in computer architecture. In essence, a hazard prevents an instruction in the pipe from being performed during its designated clock cycle. We use the term clock cycle because each instruction may be in a different machine cycle.
Data Hazards in CPU Pipelines
Read-After-Write (RAW)
Occurs when an instruction needs to read a value that has not yet been written by a previous
instruction in the pipeline.
A Read-After-Write (RAW) hazard, also known as a true dependency, occurs in pipelined computer architectures when an instruction depends on the result of a previous instruction that has not yet completed its execution. This type of hazard can cause incorrect execution if not properly managed, as the subsequent instruction may read stale or incomplete data.
Example
Consider a simple example with two instructions:
Instruction 1: ADD R1, R2, R3 (Adds the contents of registers R2 and R3 and stores the
result in R1)
Instruction 2: SUB R4, R1, R5 (Subtracts the contents of R5 from R1 and stores the result in
R4)
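Here Instruction 2 needs the value of R1 before Instruction 1 has written it back, which is exactly the RAW condition. A minimal C sketch of the detection check, assuming a simple three-register instruction format (an illustrative assumption, not a real ISA), could look like this:

```c
/* A rough sketch assuming a simple three-register format (rd <- rs1 op rs2),
 * not a real ISA: detecting the RAW dependency in the ADD/SUB pair above. */
#include <stdbool.h>
#include <stdio.h>

typedef struct { const char *op; int rd, rs1, rs2; } Instr;

/* True if the younger instruction reads a register that the older one
 * has not yet written back. */
static bool raw_hazard(Instr older, Instr younger) {
    return younger.rs1 == older.rd || younger.rs2 == older.rd;
}

int main(void) {
    Instr i1 = { "ADD", 1, 2, 3 };   /* ADD R1, R2, R3                                  */
    Instr i2 = { "SUB", 4, 1, 5 };   /* SUB R4, R1, R5 -- reads R1 before it is written */

    if (raw_hazard(i1, i2))
        printf("RAW hazard: %s needs R%d, which is still being produced by %s\n",
               i2.op, i1.rd, i1.op);
    return 0;
}
```

A real pipeline would react to this check by stalling the younger instruction or forwarding the result, as discussed in the mitigation section later in this report.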
Write-After-Read (WAR)
Occurs when an instruction writes a value that is needed by a previous instruction still in the
pipeline.
A Write-After-Read (WAR) hazard, also known as an anti-dependency or name dependency,
occurs in pipelined computer architectures when an instruction writes to a register or memory
location that a previous instruction reads from. This type of hazard can cause incorrect
execution if the write operation overwrites the data before the read operation completes.
Example
Consider a simple example with two instructions:
Instruction 1: SUB R1, R4, R5 (Subtracts the contents of R5 from R4 and stores the result in
R1)
Instruction 2: ADD R4, R2, R3 (Adds the contents of R2 and R3 and stores the result in R4)
Write-After-Write (WAW)
Occurs when an instruction writes to a location that a previous instruction, still in the pipeline, is also due to write. A Write-After-Write (WAW) hazard, also known as an output dependency, occurs in pipelined computer architectures when two instructions write to the same register or memory location. The hazard arises when the order of the write operations is not preserved, potentially leaving incorrect data stored.
Example
Consider a simple example with two instructions:
Instruction 1: ADD R1, R2, R3 (Adds the contents of registers R2 and R3 and stores the
result in R1)
Instruction 2: SUB R1, R4, R5 (Subtracts the contents of R5 from R4 and stores the result in
R1)
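Since RAW, WAR, and WAW differ only in which register fields collide, the three checks can be written side by side. The sketch below (same hypothetical three-register format as before) classifies the dependency between an older and a younger instruction and reproduces the WAR and WAW examples above.

```c
/* A rough sketch, same hypothetical three-register format as before:
 * classifying the dependency between an older and a younger instruction. */
#include <stdio.h>

typedef struct { const char *op; int rd, rs1, rs2; } Instr;   /* rd <- rs1 op rs2 */

static void classify(Instr older, Instr younger) {
    if (younger.rs1 == older.rd || younger.rs2 == older.rd)
        printf("RAW: %s reads R%d written by %s\n", younger.op, older.rd, older.op);
    if (younger.rd == older.rs1 || younger.rd == older.rs2)
        printf("WAR: %s writes R%d still read by %s\n", younger.op, younger.rd, older.op);
    if (younger.rd == older.rd)
        printf("WAW: %s and %s both write R%d\n", older.op, younger.op, older.rd);
}

int main(void) {
    Instr war1 = { "SUB", 1, 4, 5 }, war2 = { "ADD", 4, 2, 3 };   /* WAR example above */
    Instr waw1 = { "ADD", 1, 2, 3 }, waw2 = { "SUB", 1, 4, 5 };   /* WAW example above */
    classify(war1, war2);
    classify(waw1, waw2);
    return 0;
}
```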
Control Hazards in CPU Pipelines
1.Branch Prediction
Control hazards occur when the processor encounters a conditional branch instruction, as it's
uncertain which path the program will take until the branch condition is resolved. In a
pipelined processor, branches can cause significant delays. When the CPU encounters a
branch instruction, it may not know the next instruction to fetch until the branch is resolved.
Without branch prediction, the CPU would have to wait until the branch decision is made,
leading to pipeline stalls and reduced performance.
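One widely used prediction scheme is a table of 2-bit saturating counters, where a counter must be wrong twice in a row before its prediction flips. The C sketch below is a simplified model of that idea; the table size, branch address, and outcome pattern are made-up values for illustration.

```c
/* A rough sketch of one common scheme (not tied to any specific CPU): a table
 * of 2-bit saturating counters.  A counter must be wrong twice in a row
 * before its prediction flips from taken to not-taken or back. */
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 256

static unsigned char counters[TABLE_SIZE];     /* 0..3, all start at 0 (predict not-taken) */

static bool predict(unsigned pc) {
    return counters[pc % TABLE_SIZE] >= 2;     /* 2 or 3 means "predict taken" */
}

static void update(unsigned pc, bool taken) {
    unsigned char *c = &counters[pc % TABLE_SIZE];
    if (taken  && *c < 3) (*c)++;              /* move toward strongly taken     */
    if (!taken && *c > 0) (*c)--;              /* move toward strongly not-taken */
}

int main(void) {
    /* A loop branch at a made-up address: taken seven times, then falls through. */
    unsigned pc = 0x40;
    bool outcomes[8] = { true, true, true, true, true, true, true, false };
    int mispredictions = 0;

    for (int i = 0; i < 8; i++) {
        if (predict(pc) != outcomes[i]) mispredictions++;
        update(pc, outcomes[i]);
    }
    printf("mispredictions: %d of 8\n", mispredictions);   /* 3: two warming up, one at exit */
    return 0;
}
```

For the loop branch in this example, the predictor mispredicts twice while warming up and once at loop exit, then gets every other iteration right.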
2.Branch Misprediction
If the branch predictor guesses incorrectly, the processor must discard all instructions in the
pipeline and restart execution from the correct path, causing a significant performance
penalty. Branch misprediction occurs when the branch prediction mechanism in a CPU
incorrectly predicts the outcome of a branch instruction. When the CPU fetches instructions
based on this incorrect prediction, it leads to fetching and executing the wrong set of
instructions, which must then be discarded once the correct branch path is determined.
3.Speculative Execution
To mitigate control hazards, modern CPUs employ speculative execution, where they execute
instructions from the predicted branch path while waiting for the branch condition to be
resolved. Speculative execution is a technique used in computer architecture to improve the
performance of pipelined processors by executing instructions ahead of time, based on
predicted paths of branch instructions. If the prediction is correct, the results of the
speculatively executed instructions are used, allowing the processor to maintain a high level
of instruction throughput. If the prediction is incorrect, the speculative results are discarded,
and the correct path is executed.
Structural Hazards in CPU Pipelines
Definition of Structural Hazards
Structural hazards occur in pipelined CPU architectures when two or more instructions
require the same hardware resources simultaneously, leading to conflicts and potential stalls
in the pipeline. These conflicts can cause delays or stalls in the pipeline, reducing the
efficiency and throughput of the processor.
Types of Structural Hazards
1. Resource Conflicts
Resource conflicts are the most common type of structural hazard. They occur when different
stages of the pipeline require the same hardware resources, such as execution units, memory,
or buses. Because these resources cannot be simultaneously shared, the pipeline must
serialize access to them, resulting in potential delays.
2. Shared Execution Units
One typical example of a structural hazard involves shared execution units, such as the
Arithmetic Logic Unit (ALU). If multiple instructions need to use the ALU at the same time,
the pipeline must serialize their access to avoid collisions. This can create a bottleneck, as
instructions have to wait for the ALU to become available.
3. Memory Access Contention
Structural hazards can also arise from memory access contention. If multiple instructions need the memory unit at the same time, for example an instruction fetch and a data access competing for a single memory port, the processor must prioritize and queue their requests, causing potential delays in the pipeline.
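A minimal sketch of this kind of conflict, assuming a single-ported memory shared by instruction fetch and the data-access stage (an assumption chosen purely for illustration), is shown below: whenever a load or store occupies the memory port, the fetch stage must wait a cycle.

```c
/* A rough sketch assuming a single-ported memory shared by instruction fetch
 * and the data-access stage: whenever the instruction currently accessing
 * data needs the memory port, the fetch stage must stall for that cycle. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { OP_ALU, OP_LOAD, OP_STORE, OP_NONE } OpClass;

static bool needs_memory_port(OpClass op) {
    return op == OP_LOAD || op == OP_STORE;
}

int main(void) {
    /* Class of the instruction in the data-access stage, cycle by cycle. */
    OpClass mem_stage[6] = { OP_ALU, OP_LOAD, OP_ALU, OP_STORE, OP_NONE, OP_ALU };

    for (int clk = 0; clk < 6; clk++) {
        bool stall_fetch = needs_memory_port(mem_stage[clk]);   /* port conflict? */
        printf("cycle %d: fetch %s\n", clk,
               stall_fetch ? "stalls (memory port busy)" : "proceeds");
    }
    return 0;
}
```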
Factors Affecting Pipeline Hazards
1.Instruction Complexity
The more complex the instructions, the higher the chance of pipeline hazards due to data
dependencies and resource conflicts.
2.Processor Architecture
The number and design of pipeline stages, functional units, and memory hierarchy can
influence the likelihood and severity of hazards.
3.Branch Behaviour
The frequency and predictability of branch instructions impact control hazards and the
effectiveness of branch prediction mechanisms.
4.Memory Access Patterns
Irregular memory access patterns can lead to structural hazards due to conflicts in the
memory subsystem.
Mitigating Pipeline Hazards
Stall and Flush:
When a hazard is detected, the processor can stall the pipeline, halting the progression of
instructions until the hazard is resolved. Additionally, instructions that have already entered
the pipeline and are affected by the hazard may need to be flushed out of the pipeline to
prevent incorrect execution. Flushing involves removing instructions from the pipeline that
have not yet completed execution or whose results cannot be guaranteed due to the hazard.
This ensures that the pipeline starts afresh with correct instructions once the hazard is
resolved, maintaining program correctness.
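The sketch below illustrates stalling and flushing on a generic five-stage IF/ID/EX/MEM/WB pipeline; the stage names and instruction labels are assumptions made for this illustration, not taken from the report. A stall holds the younger instructions and injects a bubble, while a flush turns wrong-path instructions into NOPs so they never change architectural state.

```c
/* A rough sketch of pipeline state only (classic IF/ID/EX/MEM/WB stage names
 * and instruction labels are assumed for illustration).  A stall holds the
 * younger instructions and injects a bubble; a flush squashes wrong-path
 * instructions into NOPs so they never change architectural state. */
#include <stdio.h>

#define STAGES 5

static const char *stage[STAGES] = { "IF", "ID", "EX", "MEM", "WB" };
static const char *slot[STAGES]  = { "I5", "I4", "I3", "I2", "I1" };   /* youngest first */

static void show(const char *label) {
    printf("%-22s", label);
    for (int s = 0; s < STAGES; s++) printf(" %s=%s", stage[s], slot[s]);
    printf("\n");
}

int main(void) {
    show("cycle n:");

    /* Stall for one cycle: I4 waits in ID, so IF and ID hold their
     * instructions, the older ones move on, and a bubble enters EX. */
    slot[4] = slot[3];    /* WB  <- old MEM (I2) */
    slot[3] = slot[2];    /* MEM <- old EX  (I3) */
    slot[2] = "NOP";      /* EX  <- bubble       */
    show("cycle n+1 (stalled):");

    /* Flush: suppose I3, now in MEM, is a mispredicted branch.  Everything
     * younger than it (I4, I5 and the bubble) is squashed to NOPs. */
    slot[0] = slot[1] = slot[2] = "NOP";
    show("cycle n+1 (flushed):");
    return 0;
}
```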
Forwarding Data:
Data hazards occur when an instruction depends on the result of a previous instruction that
has not yet completed its execution. Forwarding, also known as bypassing, addresses data
hazards by directly forwarding data between pipeline stages without the need to write the data
to and read it from the register file. This allows the dependent instruction to access the
required data without waiting for it to be written back to the register file, reducing or
eliminating stalls in the pipeline and improving overall performance.
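A minimal sketch of the forwarding idea, assuming a generic five-stage pipeline with EX/MEM and MEM/WB latches (the latch names, register numbers, and values are illustrative), shows the dependent SUB from the earlier RAW example taking its operand directly from a pipeline latch instead of the stale register file.

```c
/* A rough sketch assuming a generic five-stage pipeline with EX/MEM and
 * MEM/WB latches (register numbers and values are made up): the dependent
 * instruction takes its operand straight from a pipeline latch instead of
 * the not-yet-updated register file. */
#include <stdio.h>

typedef struct { int dest_reg; int value; int writes_reg; } PipelineLatch;

/* Forwarding unit: prefer the youngest in-flight producer of src_reg. */
static int read_operand(int src_reg, const int regfile[],
                        PipelineLatch ex_mem, PipelineLatch mem_wb) {
    if (ex_mem.writes_reg && ex_mem.dest_reg == src_reg)
        return ex_mem.value;       /* forward from the EX/MEM latch     */
    if (mem_wb.writes_reg && mem_wb.dest_reg == src_reg)
        return mem_wb.value;       /* forward from the MEM/WB latch     */
    return regfile[src_reg];       /* no hazard: read the register file */
}

int main(void) {
    int regfile[8] = {0};                  /* R1 is stale: not yet written back */
    PipelineLatch ex_mem = { 1, 42, 1 };   /* ADD R1, R2, R3 just computed 42   */
    PipelineLatch mem_wb = { 0, 0, 0 };

    /* SUB R4, R1, R5 is in EX and needs R1 right now. */
    int operand = read_operand(1, regfile, ex_mem, mem_wb);
    printf("R1 seen by SUB: %d (forwarded, not %d from the register file)\n",
           operand, regfile[1]);
    return 0;
}
```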
Branch Prediction:
Branch instructions can introduce control flow changes in program execution, potentially
causing pipeline stalls if not predicted accurately. Branch prediction algorithms anticipate the
outcome of branch instructions based on historical patterns or heuristics. Accurate branch
prediction can help the processor avoid costly pipeline flushes by speculatively executing
instructions along the predicted branch path. If the prediction turns out to be correct, the pipeline continues smoothly without interruption. If it is incorrect, the speculatively executed instructions are discarded and the pipeline is redirected along the correct branch path, incurring a misprediction penalty while the wrong-path work is flushed. Advanced branch prediction techniques, such as two-level predictors or neural predictors, improve prediction accuracy and reduce the likelihood of pipeline stalls.
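As a rough sketch of the two-level idea mentioned above, the program below models a gshare-style predictor: a register of recent global branch outcomes is XORed with the branch address to index a table of 2-bit counters, so a branch whose behaviour depends on recent history can still be predicted well. The table size, branch address, and the alternating outcome pattern are illustrative assumptions.

```c
/* A rough sketch of one common two-level scheme, often called gshare: the
 * global history of recent branch outcomes is XORed with the branch address
 * to index a table of 2-bit counters, so a branch whose behaviour depends on
 * recent history can still be predicted accurately. */
#include <stdbool.h>
#include <stdio.h>

#define TABLE_SIZE 1024
#define INDEX_MASK (TABLE_SIZE - 1)

static unsigned char counters[TABLE_SIZE];   /* 2-bit saturating counters */
static unsigned global_history;              /* last 10 branch outcomes   */

static unsigned index_for(unsigned pc) {
    return (pc ^ global_history) & INDEX_MASK;
}

static bool predict(unsigned pc) {
    return counters[index_for(pc)] >= 2;
}

static void update(unsigned pc, bool taken) {
    unsigned char *c = &counters[index_for(pc)];
    if (taken  && *c < 3) (*c)++;
    if (!taken && *c > 0) (*c)--;
    global_history = ((global_history << 1) | (taken ? 1u : 0u)) & INDEX_MASK;
}

int main(void) {
    /* A branch that strictly alternates taken / not-taken: the history lets
     * two different counters learn the two cases. */
    unsigned pc = 0x80;
    int mispredictions = 0;

    for (int i = 0; i < 40; i++) {
        bool outcome = (i % 2 == 0);
        if (predict(pc) != outcome) mispredictions++;
        update(pc, outcome);
    }
    printf("mispredictions: %d of 40\n", mispredictions);
    return 0;
}
```

After a short warm-up, the two history-selected counters settle into opposite states and the alternating branch is predicted correctly, something a single per-branch counter cannot manage.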
Conclusion
In conclusion, CPU pipelines represent a cornerstone of modern processor design, facilitating
parallel instruction execution to enhance overall throughput. Nevertheless, the pursuit of
parallelism comes with its own set of challenges, chiefly the emergence of hazards that
threaten the correctness and efficiency of program execution.
Structural hazards, such as resource conflicts and memory access contention, arise when
pipeline stages vie for the same hardware resources simultaneously. These conflicts can lead
to potential stalls, impeding the smooth flow of instructions through the pipeline.
Data hazards, on the other hand, manifest as dependencies between instructions, necessitating careful management to ensure that instructions access the correct data at the right time, while control hazards stem from branch instructions whose outcomes are not yet known when the following instructions must be fetched. Techniques such as stalling and flushing, data forwarding, and branch prediction serve as effective strategies to mitigate these hazards and maintain pipeline efficiency.
In essence, while CPU pipelines offer tremendous performance benefits, their effective
utilization demands a delicate balance between maximizing parallelism and mitigating
hazards. By employing appropriate hazard-handling techniques, modern processors can
achieve optimal performance while ensuring the accuracy and reliability of program
execution.
Thank You