INTERNSHIP REPORT FOR INDUSTRIAL
TRAINING & SEMINAR (ECD 1601)
A Report Submitted for
Industrial Training & Seminar (ECD 1601), Electronics and Communication
Engineering Department
Siddharth Jha
BT21EC026
ELECTRONICS & COMMUNICATION DEPARTMENT
NATIONAL INSTITUTE OF TECHNOLOGY MIZORAM (April, 2024)
INDEX
Summary
Week 1 Progress Report
Week 2 Progress Report
Week 3 Progress Report
Week 4 Progress Report
Week 5 Progress Report
Final Project
Certificate
Acknowledgements
SUMMARY OF INTERNSHIP EXPERIENCE AT ENTUPLE:
During my time at Abhiyantha Centre for Advanced Learning in Engineering and
Technology at Entuple Technologies, I delved into various aspects of VLSI, ASIC,
and FPGA flow, gaining a comprehensive understanding of these fundamental
concepts. This included learning about digital design stages, Verilog program
structure, and essential constructs, enabling me to effectively design both
combinational and sequential circuits.
Moreover, I acquired proficiency in designing state machines and delved into the
intricacies of verification, utilizing Verilog for effective verification processes.
Abhiyantha's training, under the umbrella of Entuple Technologies Pvt. Ltd.,
provided me with invaluable exposure to real-world industry practices, equipping
me with the necessary tools and knowledge to excel in the dynamic field of
engineering and technology. The emphasis on staying updated with current market
trends and technologies ensured that I remained relevant and adaptable in an ever-
evolving industry landscape.
Overall, my experience at Entuple Technologies has been instrumental in not only
enhancing my technical expertise but also in fostering my professional growth and
readiness for industry challenges.
WEEK 1: Introduction to VLSI, ASIC and FPGA Flow
● VLSI (Very Large-Scale Integration)
1. Evolution and Scaling
The journey of VLSI started with the creation of the first integrated
circuit (IC) by Jack Kilby and Robert Noyce in the late 1950s.
Since then, the semiconductor industry has witnessed rapid
advancements, leading to the scaling down of transistor sizes and
the integration of more transistors on a single chip. This
continuous scaling has enabled the development of smaller, faster,
and more power-efficient electronic devices.
2. Design Complexity and Challenges
As the complexity of VLSI designs has increased, so have the
challenges associated with them. These challenges include:
i. Power Consumption: With millions of transistors
packed into a small area, managing power
consumption has become a critical concern.
ii. Signal Integrity: Ensuring that signals propagate
correctly and reliably across the chip is another
challenge, especially with high-speed designs.
iii. Design Verification: With the increasing complexity
of designs, verifying the correctness of the design has
become more challenging and time-consuming.
● ASIC (Application-Specific Integrated Circuit)
1. Design Methodologies
There are several design methodologies used in ASIC design,
including:
i. Top-down Design: This approach starts with a high-level
description of the design and iteratively refines it at lower
levels of abstraction.
ii. Bottom-up Design: In contrast to top-down design, bottom-
up design starts with the smallest building blocks and
gradually builds up to the complete design.
2. Tools and Technologies:
Various Electronic Design Automation (EDA) tools are used
throughout the ASIC design flow, such as:
i. HDL Simulators: Simulators such as ModelSim/QuestaSim,
Cadence Xcelium, or the simulators built into Xilinx ISE
and Vivado are used for RTL simulation.
ii. Synthesis Tools: Synthesis tools like Synopsys Design
Compiler or Cadence Genus convert RTL code into
gate-level netlists.
iii. Physical Design Tools: Tools like Cadence Innovus or
Synopsys IC Compiler are used for floorplanning,
placement, and routing.
● FPGA (Field-Programmable Gate Array)
1. Architecture and Resources
FPGAs consist of an array of programmable logic blocks
interconnected by a programmable routing fabric. These logic
blocks can be configured to implement various digital circuits,
such as arithmetic units, memory blocks, and I/O interfaces.
Additionally, modern FPGAs often include specialized resources
like DSP slices, high-speed transceivers, and embedded processors.
2. Advantages and Applications
The flexibility and reconfigurability of FPGAs offer several
advantages:
i. Prototyping and Validation: FPGAs are often used
for rapid prototyping and validation of ASIC
designs before committing to fabrication.
ii. Adaptive Computing: FPGAs are well-suited for
applications that require real-time processing or
adaptability, such as signal processing, machine
learning, and cryptography.
iii. Hardware Acceleration: FPGAs can be used to
accelerate specific tasks or algorithms, offloading
computation from general-purpose processors and
improving performance.
3. Design Considerations
Designing for FPGAs requires considering:
i. Resource Utilization: Efficiently utilizing FPGA
resources while meeting performance and area
constraints.
ii. Timing Closure: Ensuring that the design meets
timing requirements during place-and-route.
iii. Power Management: Implementing power-saving
techniques to reduce energy consumption, especially
for battery-powered applications.
Conclusion:
VLSI, ASIC, and FPGA represent the pillars of modern digital design and
semiconductor technology. While VLSI focuses on integrating millions of
transistors on a chip, ASICs offer tailored solutions for specific applications, and
FPGAs provide flexibility and reconfigurability.
WEEK 2: Verilog Program Structure, Verilog Constructs
● Verilog Overview
Verilog is a hardware description language (HDL) used for modeling
and simulating digital circuits. It is one of the most widely used
languages in the design and verification of digital systems, including
ASICs, FPGAs, and VLSI circuits.
● History
Verilog was developed by Gateway Design Automation in the mid-
1980s. It was later standardized as IEEE Standard 1364 in 1995. Over
the years, Verilog has evolved and is now maintained as part of IEEE
1800 (SystemVerilog), into which the Verilog standard was merged.
● Verilog Constructs
i. Verilog provides various constructs to describe hardware at different
levels of abstraction:
- Module: A module is the basic building block in Verilog,
representing a hardware component like an adder,
multiplexer, or flip-flop.
- Ports: Ports define the interface of a module, specifying
inputs, outputs, and bidirectional signals.
- Data Types: Verilog supports different data types such as
wire, reg, integer, and real for representing signals and
variables.
- Operators: Verilog provides a rich set of operators for
arithmetic, bitwise, and logical operations.
- Control Structures: Verilog supports if-else, case, and for
loops for conditional and iterative logic.
ii. Verilog supports three main modeling styles:
- Behavioral Modeling: Describes the functionality of a circuit
using procedural blocks like always and initial blocks.
Example:
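A minimal behavioral sketch, here a 2-to-1 multiplexer described inside an always block (module and signal names are illustrative):

```verilog
// Behavioral model of a 2-to-1 multiplexer.
// The always block describes WHAT the circuit does,
// not how it is built from gates.
module mux2to1_beh (
    input  wire a, b, sel,
    output reg  y
);
    always @(*) begin
        if (sel)
            y = b;
        else
            y = a;
    end
endmodule
```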
- Dataflow Modeling: Describes the circuit in terms of data flow,
specifying how data flows through the circuit using continuous
assignments.
Example:
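A dataflow sketch of a 2-to-1 multiplexer using a continuous assignment (names are illustrative):

```verilog
// Dataflow model: a continuous assignment describes
// how data flows from the inputs to the output.
module mux2to1_df (
    input  wire a, b, sel,
    output wire y
);
    assign y = sel ? b : a;
endmodule
```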
- Structural Modeling: Describes the circuit in terms of
interconnected modules and gates, representing the hardware
structure.
Example:
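A structural sketch of a 2-to-1 multiplexer built from explicitly instantiated gate primitives (names are illustrative):

```verilog
// Structural model: the multiplexer is assembled from
// gate primitives wired together by internal nets.
module mux2to1_str (
    input  wire a, b, sel,
    output wire y
);
    wire sel_n, w1, w2;

    not g1 (sel_n, sel);
    and g2 (w1, a, sel_n);
    and g3 (w2, b, sel);
    or  g4 (y, w1, w2);
endmodule
```

All three styles describe the same hardware; which one to use depends on the level of abstraction the designer wants to work at.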
iii. Verilog is used for both simulation and synthesis:
- Simulation: Verilog models can be simulated using simulators
such as ModelSim/QuestaSim or the simulators built into Xilinx
ISE and Vivado to verify the functionality and behavior of the
design.
- Synthesis: Verilog designs can be synthesized into gate-level
netlists using synthesis tools like Synopsys Design Compiler or
Cadence Genus for ASIC or FPGA implementation.
iv. Why Verilog over C and other Programming Languages:
- Concurrency: hardware is inherently parallel, and Verilog's
always blocks and continuous assignments model concurrent
activity directly, whereas C executes sequentially.
- Timing: Verilog can express delays, clock edges, and
edge-triggered behavior, which C has no native notion of.
- Signal semantics: Verilog supports four-state values
(0, 1, x, z) and bit-level wiring that map directly onto
real hardware.
- Synthesis: Verilog descriptions can be synthesized into
gate-level hardware, while C programs target a processor.
CONCLUSION:
Verilog is a powerful and versatile hardware description language widely
used in the field of digital design. Its modularity, simplicity, and
flexibility make it suitable for designing and simulating complex digital
systems.
WEEK 3: Design of Combinational & Sequential Circuits Using Verilog
Digital circuits can be broadly classified into two categories: combinational and
sequential circuits. Combinational circuits produce outputs solely based on the
current input, whereas sequential circuits have memory elements and produce
outputs based on both current and past inputs. Verilog, a hardware description
language (HDL), is an effective tool for designing both types of circuits.
● Combinational Circuits
Combinational circuits are a fundamental building block in digital design,
performing Boolean logic operations on input signals to produce output
signals without any memory elements.
i. Basic Gates
Verilog provides built-in primitives for basic gates such as
AND, OR, NOT, and XOR.
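For instance, the gate primitives can be instantiated as follows (module and port names are illustrative); note that for each primitive the output is listed first, followed by the inputs:

```verilog
// Gate-level primitives exercised in one module.
module basic_gates (
    input  wire a, b,
    output wire y_and, y_or, y_not, y_xor
);
    and g1 (y_and, a, b);
    or  g2 (y_or,  a, b);
    not g3 (y_not, a);
    xor g4 (y_xor, a, b);
endmodule
```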
ii. Multiplexers and Decoders
Multiplexers (MUX) and decoders are commonly used
combinational circuits.
Example:
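A minimal sketch of a 2-to-4 decoder with an enable input (names are illustrative):

```verilog
// 2-to-4 decoder: when enabled, exactly one output
// line is asserted for each input combination.
module decoder2to4 (
    input  wire [1:0] in,
    input  wire       en,
    output reg  [3:0] out
);
    always @(*) begin
        out = 4'b0000;       // default: all lines low
        if (en)
            out[in] = 1'b1;  // select one line
    end
endmodule
```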
● Sequential Circuits
Sequential circuits incorporate memory elements such as flip-flops or
latches, allowing them to store information and produce outputs based on
both current and past inputs.
i. Flip-Flops
Verilog's built-in primitives cover only basic gates; flip-flops
such as D, T, JK, and SR are instead described behaviorally
using edge-triggered always blocks.
Example:
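A minimal sketch of a D flip-flop with an asynchronous active-low reset (names are illustrative):

```verilog
// Positive-edge-triggered D flip-flop with
// asynchronous active-low reset.
module dff (
    input  wire clk, rst_n, d,
    output reg  q
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            q <= 1'b0;  // reset state
        else
            q <= d;     // capture input on clock edge
    end
endmodule
```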
ii. Registers and Counters
Registers are sequential circuits that store data, while counters are
sequential circuits that count clock cycles or events.
Example:
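A minimal sketch of an 8-bit register with a synchronous load enable (width and names are illustrative):

```verilog
// 8-bit register: stores a new value only when
// the load enable is asserted.
module reg8 (
    input  wire       clk, rst_n, load,
    input  wire [7:0] d,
    output reg  [7:0] q
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            q <= 8'h00;
        else if (load)
            q <= d;
    end
endmodule
```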
Example:
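A minimal sketch of a 4-bit up-counter (width and names are illustrative):

```verilog
// 4-bit counter: increments once per clock cycle
// and wraps around naturally at 15.
module counter4 (
    input  wire       clk, rst_n,
    output reg  [3:0] count
);
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
            count <= 4'b0000;
        else
            count <= count + 1'b1;
    end
endmodule
```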
When designing combinational and sequential circuits using Verilog, it is
essential to follow a structured design methodology:
1. Specification: Clearly define the functionality and requirements of the
circuit.
2. Design: Implement the circuit using Verilog modules and components.
3. Simulation: Create testbenches to simulate the circuit and verify its
functionality.
4. Synthesis: Synthesize the Verilog code to generate a netlist for
implementation.
5. Implementation: Implement the circuit on an FPGA or ASIC.
CONCLUSION:
Designing combinational and sequential circuits using Verilog involves
understanding the fundamental principles of digital logic, mastering
Verilog syntax and constructs, and following a structured design
methodology.
WEEK 4: Design of State Machines
State machines are fundamental components in digital design, used to control and
coordinate the behaviour of digital systems by transitioning between states based
on inputs and internal conditions. Verilog, a hardware description language (HDL),
provides a powerful and flexible framework for designing state machines.
● State machines can be classified into two main types:
- Moore Machine: Outputs depend only on the current state.
- Mealy Machine: Outputs depend on both the current state and inputs.
● States and Transitions
A state machine consists of a set of states and transitions between these
states. Each state represents a specific condition or behaviour of the
system, and transitions define how the system moves from one state to
another based on inputs and conditions.
● Verilog Design Methodology
Designing state machines using Verilog involves the following steps:
1. State Definition: Define the states of the state machine.
2. Input and Output Signals: Define the input and output signals.
3. State Transition Logic: Implement the state transition logic.
4. Output Logic: Implement the output logic.
5. Testbench and Simulation: Create a testbench to simulate and
verify the state machine.
● Mealy Machine Example: Sequence Detector
Design:
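The specific sequence detected in the project is not stated in this copy, so as a hedged sketch assume the pattern "1011" with overlapping detection; the two-process structure (state register plus combinational next-state/output logic) is typical for Mealy machines:

```verilog
// Mealy sequence detector for the (assumed) pattern "1011",
// with overlap. The output asserts in the same cycle the
// final bit arrives, which is characteristic of a Mealy FSM.
module seq_detector_1011 (
    input  wire clk, rst_n, din,
    output reg  detected
);
    // State encoding
    localparam S0 = 2'd0,  // nothing matched yet
               S1 = 2'd1,  // seen "1"
               S2 = 2'd2,  // seen "10"
               S3 = 2'd3;  // seen "101"

    reg [1:0] state, next_state;

    // State register
    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) state <= S0;
        else        state <= next_state;
    end

    // Next-state and output logic (combinational)
    always @(*) begin
        next_state = state;
        detected   = 1'b0;
        case (state)
            S0: next_state = din ? S1 : S0;
            S1: next_state = din ? S1 : S2;
            S2: next_state = din ? S3 : S0;
            S3: begin
                if (din) begin
                    detected   = 1'b1; // "1011" completed
                    next_state = S1;   // overlap: last "1" starts a new match
                end else begin
                    next_state = S2;   // history "1010": suffix "10" matched
                end
            end
        endcase
    end
endmodule
```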
Test-Bench:
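A testbench sketch for a hypothetical detector module seq_detector_1011 with ports (clk, rst_n, din, detected); it drives a serial bit stream and prints the output bit by bit:

```verilog
// Drives the stream 1 0 1 1 0 1 1 into the detector.
// With overlapping detection of "1011", hits are expected
// on the 4th and 7th bits.
module tb_seq_detector;
    reg  clk = 0, rst_n = 0, din = 0;
    wire detected;

    seq_detector_1011 dut (
        .clk(clk), .rst_n(rst_n),
        .din(din), .detected(detected)
    );

    always #5 clk = ~clk;  // 10-time-unit clock period

    reg [6:0] stream = 7'b1011011;
    integer i;

    initial begin
        @(negedge clk) rst_n = 1;         // release reset
        for (i = 6; i >= 0; i = i - 1) begin
            din = stream[i];              // apply next bit
            #1 $display("din=%b detected=%b", din, detected);
            @(negedge clk);               // let the state advance
        end
        $finish;
    end
endmodule
```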
Simulation Result:
WEEK 5: Introduction to Verification, Verification Using Verilog
Verification is a crucial step in the design process of digital circuits, ensuring that
the design behaves as expected under all conditions. Verilog, with its powerful
simulation and modeling capabilities, is widely used for verification of digital
designs.
● Importance of Verification
1. Ensure Correctness: Validate that the design meets its
functional and performance requirements.
2. Reduce Time-to-Market: Identify and fix bugs early in the
design cycle, reducing the time and cost of product
development.
3. Improve Quality: Enhance the reliability and robustness of the
design, minimizing the risk of field failures.
● Simulation-Based Verification
Simulation is the most common method of verification, where the
Verilog design is simulated using a simulator to verify its functionality
and performance.
● Testbenches
A testbench is a Verilog module or program written to stimulate the
design under test (DUT) with input stimuli and verify its outputs against
expected results.
● Directed Testbenches
In directed testbenches, input stimuli are manually created to test specific
functionalities or corner cases of the DUT.
Example:
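A directed-testbench sketch for a simple 2-to-1 multiplexer DUT (the DUT and all names are illustrative); each stimulus is hand-picked to target one behavior, and the testbench checks the result itself:

```verilog
// DUT: a 2-to-1 multiplexer (illustrative)
module mux2to1 (input wire a, b, sel, output wire y);
    assign y = sel ? b : a;
endmodule

// Directed testbench: hand-written stimuli with
// self-checking comparisons against expected values.
module tb_mux_directed;
    reg  a, b, sel;
    wire y;
    integer errors = 0;

    mux2to1 dut (.a(a), .b(b), .sel(sel), .y(y));

    task check(input exp);
        begin
            if (y !== exp) begin
                errors = errors + 1;
                $display("FAIL: a=%b b=%b sel=%b y=%b (expected %b)",
                         a, b, sel, y, exp);
            end
        end
    endtask

    initial begin
        // Each directed case targets a specific behavior.
        a = 0; b = 1; sel = 0; #10 check(0); // sel=0 passes a
        a = 0; b = 1; sel = 1; #10 check(1); // sel=1 passes b
        a = 1; b = 0; sel = 0; #10 check(1);
        a = 1; b = 0; sel = 1; #10 check(0);
        if (errors == 0) $display("All directed tests passed");
        $finish;
    end
endmodule
```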
● Random Testbenches
In random testbenches, input stimuli are generated randomly or semi-
randomly to explore different scenarios and increase verification
coverage.
Example:
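A random-testbench sketch for the same kind of 2-to-1 multiplexer DUT (all names are illustrative); $random generates the stimuli and a behavioral reference model in the testbench checks every result:

```verilog
// DUT: a 2-to-1 multiplexer (illustrative)
module mux2to1 (input wire a, b, sel, output wire y);
    assign y = sel ? b : a;
endmodule

// Random testbench: many randomized stimuli checked
// against a reference model for broader coverage.
module tb_mux_random;
    reg  a, b, sel;
    wire y;
    integer i;
    integer errors = 0;

    mux2to1 dut (.a(a), .b(b), .sel(sel), .y(y));

    initial begin
        for (i = 0; i < 100; i = i + 1) begin
            {a, b, sel} = $random;  // three random bits per iteration
            #10;
            // Reference model: expected output recomputed here
            if (y !== (sel ? b : a)) begin
                errors = errors + 1;
                $display("FAIL: a=%b b=%b sel=%b y=%b", a, b, sel, y);
            end
        end
        $display("%0d random tests run, %0d errors", i, errors);
        $finish;
    end
endmodule
```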
● Coverage Metrics
Coverage metrics measure the completeness of the verification process
by tracking which parts of the design have been exercised during
simulation.
i. Functional Coverage: Ensures that all functions and
features of the design are tested.
ii. Code Coverage: Measures which lines or branches of
the code have been executed.
iii. Assertion Coverage: Tracks the effectiveness of
assertions in catching bugs.
● Assertion-Based Verification
Assertions are statements or properties that capture design requirements
and constraints. They automatically check the correctness of the design
during simulation.
● Immediate Assertions
Immediate assertions are checked immediately when they are
encountered in the code.
Example:
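A minimal immediate-assertion sketch in SystemVerilog syntax (the req/grant handshake and all names are assumptions for illustration):

```verilog
// Immediate assertion: evaluated at the instant the
// statement executes inside the procedural block.
module grant_checker (input logic clk, req, grant);
    always @(posedge clk) begin
        // A grant must never appear without a request.
        assert (!(grant && !req))
            else $error("grant without req at time %0t", $time);
    end
endmodule
```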
● Concurrent Assertions
Concurrent assertions are evaluated concurrently with the simulation and
can be written outside the design module.
Example:
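A minimal concurrent-assertion sketch in SystemVerilog syntax (the req/ack protocol and the 1-to-3-cycle window are assumptions for illustration):

```verilog
// Concurrent assertion: sampled on every clock edge and
// written outside any procedural block.
module req_ack_checker (input logic clk, req, ack);
    // Every request must be acknowledged within 1 to 3 cycles.
    property p_req_ack;
        @(posedge clk) req |-> ##[1:3] ack;
    endproperty

    assert property (p_req_ack)
        else $error("ack did not follow req within 3 cycles");
endmodule
```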
WEEK 6: Final Project
RISC Processor: A Comprehensive Overview
Reduced Instruction Set Computing (RISC) is a CPU design philosophy
that emphasizes simplicity and efficiency by using a smaller set of
instructions with uniform execution time. Because each instruction is
simple, RISC processors can sustain a higher instruction throughput,
measured as instructions per cycle (IPC), than Complex Instruction Set
Computing (CISC) processors. This section provides a comprehensive
overview of RISC processors, covering their architecture, advantages,
challenges, and examples.
RISC Processor Architecture
Core Components
A typical RISC processor consists of the following core components:
1. Arithmetic Logic Unit (ALU): Performs arithmetic and logical
operations.
2. Registers: Small storage locations for holding data and
instructions.
3. Control Unit: Manages instruction fetching, decoding, and
execution.
4. Memory Unit: Accesses and stores data and instructions from and
to memory.
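As an illustrative sketch of the first of these components, a small ALU in Verilog (the opcode encoding, width, and names are assumptions, not the project's actual design):

```verilog
// Parameterized ALU: selects one arithmetic or logical
// operation per opcode and reports a zero flag that a
// control unit could use for branch decisions.
module alu #(parameter W = 8) (
    input  wire [W-1:0] a, b,
    input  wire [2:0]   op,
    output reg  [W-1:0] result,
    output wire         zero
);
    always @(*) begin
        case (op)
            3'b000: result = a + b;   // ADD
            3'b001: result = a - b;   // SUB
            3'b010: result = a & b;   // AND
            3'b011: result = a | b;   // OR
            3'b100: result = a ^ b;   // XOR
            3'b101: result = ~a;      // NOT
            default: result = {W{1'b0}};
        endcase
    end

    assign zero = (result == {W{1'b0}});
endmodule
```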
Instruction Set
RISC processors have a reduced and simplified instruction set compared
to CISC processors. Common RISC instructions include load, store,
arithmetic, logical, and branch instructions. Each instruction performs a
specific and simple operation, allowing for faster execution and simpler
hardware design.
Advantages of RISC Processors
1. Simplicity: RISC architectures are simpler and easier to
understand, design, and implement compared to CISC
architectures.
2. Efficiency: RISC processors execute instructions at a faster rate
due to simpler instruction decoding and execution mechanisms.
3. Scalability: RISC architectures are scalable and can be easily
extended with additional functional units or enhancements.
4. Power Efficiency: RISC processors consume less power compared
to CISC processors due to simplified instruction set and reduced
complexity.
5. Compiler Optimization: RISC architectures are compiler-friendly,
allowing compilers to optimize code more effectively for
performance.
Challenges and Limitations
1. Code Size: RISC instructions are simpler but may require more
instructions to perform complex tasks, leading to larger code size.
2. Memory Bandwidth: Increased number of instructions and data
accesses may lead to higher memory bandwidth requirements.
3. Compiler Dependency: Compiler optimizations are crucial for
RISC performance, making the choice and quality of compiler
important.
4. Interoperability: Compatibility with existing CISC architectures
and legacy software can be a challenge.
RISC Processor Examples
ARM Architecture
ARM (Advanced RISC Machine) is one of the most popular and widely
used RISC architectures. ARM processors are known for their power
efficiency, scalability, and versatility, making them suitable for a wide
range of applications from smartphones to servers.
MIPS Architecture
MIPS (Microprocessor without Interlocked Pipeline Stages) is another
prominent RISC architecture known for its simplicity, efficiency, and
academic significance. MIPS processors have been widely used in
embedded systems, gaming consoles, and networking devices.
Power Architecture
Power Architecture, formerly known as PowerPC, is a RISC architecture
developed by IBM. Power processors are known for their performance,
scalability, and reliability, making them suitable for high-performance
computing, servers, and embedded systems.
Future Trends and Innovations
1. Multi-Core and Parallel Processing: RISC architectures are
increasingly adopting multi-core and parallel processing
techniques to improve performance and efficiency.
2. Hardware Acceleration: Integration of specialized hardware units
or accelerators for specific tasks like AI, graphics, and
cryptography.
3. Energy-Efficient Designs: Continued focus on reducing power
consumption and improving energy efficiency through advanced
fabrication processes and architectural optimizations.
4. Security Enhancements: Integration of hardware-level security
features and enhancements to protect against vulnerabilities and
attacks.
Conclusion
RISC processors have revolutionized the CPU design landscape with
their focus on simplicity, efficiency, and performance. Despite facing
challenges like code size, memory bandwidth, and compatibility, RISC
architectures have continued to evolve and adapt to meet the demands of
modern computing. With ongoing innovations in multi-core processing,
hardware acceleration, energy-efficient designs, and security
enhancements, RISC processors are well-positioned to drive the next
wave of advancements in computing and technology. Whether it's the
ubiquitous ARM processors in mobile devices, the academic significance
of MIPS architectures, or the performance of Power processors in high-
performance computing, RISC architectures continue to play a crucial
role in shaping the future of computing and driving innovation across
various industries and applications.