SoCD2 - Unit 4

The document provides an overview of System Verilog Assertions, detailing their purpose in validating system behavior and improving debugging through Assertion Based Verification (ABV). It explains the types of assertions, their advantages, and the process of creating assertions, as well as the importance of randomization and coverage in verification. Additionally, it introduces UVM (Universal Verification Methodology) as a standardized approach for developing verification environments, highlighting its components and advantages.


System Verilog Assertions

➢ The behaviour of a system can be written as an assertion that should be true at all times.
➢ Hence assertions are used to validate the behaviour of a system defined as properties,
and can also be used in functional coverage.
➢ Assertions are used to check design rules or specifications and generate warnings or
errors in case of assertion failures.
➢ An assertion can also provide functional coverage, making sure that a certain design
specification is covered in the verification.
➢ The methodology that uses assertions is commonly known as “Assertion Based
Verification” (ABV).
➢ Assertions can be written in the design as well as the verification environment.

What are the properties of a Design?


➢ If a property of the design that is being checked for by an assertion does not behave in
the expected way, the assertion fails.
➢ For example, assume the design asserts a request and expects to receive an ack within
the next four cycles. If the design gets the ack only on the fifth cycle, the property that an
ack should be returned within 4 clocks is violated and the assertion fails.
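A minimal sketch of this requirement written as a concurrent assertion (the signal names req, ack, and clk are placeholders for illustration, not part of any specific design):

    // Hypothetical signals req, ack, clk used only for illustration.
    property p_ack_within_4;
      @(posedge clk) req |-> ##[1:4] ack;   // ack must arrive 1 to 4 cycles after req
    endproperty

    // Fails if ack arrives on the fifth cycle or later, or never arrives.
    a_ack_within_4: assert property (p_ack_within_4)
      else $error("ack not received within 4 clocks of req");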

Why do we need Assertions?


➢ An assertion is nothing but a more concise representation of a functional checker.
➢ The functionality represented by an assertion could also be written as a System Verilog
task or checker, but that would involve many more lines of code.
Types of Assertions
1. Immediate assertions
2. Concurrent assertions

Immediate Assertions
➢ An assertion that checks a condition at the current simulation time is called immediate
assertions.
➢ They are executed like procedural statements like if-else statements.
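A minimal sketch of an immediate assertion inside a procedural block (the clk and grant signals are placeholders used only for illustration):

    // Immediate assertion: evaluated like a procedural statement at the current simulation time.
    always @(posedge clk) begin
      // Hypothetical rule: at most one bit of grant may be set at a time.
      assert ($onehot0(grant))
        else $error("more than one grant bit set: %0b", grant);
    end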

Concurrent Assertion
➢ An assertion that checks the sequence of events spread over multiple clock cycles is
called a concurrent assertion.
➢ They execute in parallel with procedural blocks such as always blocks; hence the name
concurrent assertion.

Types of Assertion Statements


An assertion statement can be of the following types:

Type        Description
assert      To specify that the given property of the design is true in simulation
assume      To specify that the given property is an assumption, used by formal tools to generate input stimulus
cover       To evaluate the property for functional coverage
restrict    To specify the property as a constraint on formal verification computations; it is ignored by simulators

Advantages of using Assertions


1. Checks design specifications and reports errors or warnings in case of failure.
2. Reduces debugging time. For example, a bug due to an illegal state transition can
propagate all the way to the output; an assertion at the point of the violation flags it
immediately, making the failure much easier to localize.
3. Can be used in formal verification.
4. Can be re-used across verification testbench or design.
5. Can be parameterized
6. Can be turned on/off based on the requirement.

Assertion severity levels


1. $info: indicates that the assertion failure carries no specific severity.
2. $warning: run-time warning, which can be suppressed in a tool-specific manner.
3. $fatal: a run-time fatal error that terminates simulation.
4. $error: a run-time error; simulation continues.
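A sketch of how these severity tasks typically appear in the fail branch of an assertion; the FIFO-related names are placeholders for illustration:

    always @(posedge clk) begin
      // Run-time error: simulation continues after reporting.
      assert (fifo_count <= FIFO_DEPTH)
        else $error("FIFO overflow: count=%0d", fifo_count);

      // Run-time fatal: terminates simulation (first argument is the finish code).
      assert (!(rd_en && fifo_empty))
        else $fatal(1, "read attempted from an empty FIFO");
    end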

Property in assertion
The property keyword is used to capture design specifications that span over time; a property
typically contains a sequence of events. The use of a property evaluated over clock cycles is
what distinguishes a concurrent assertion from an immediate assertion.

Property declaration
1. A property can be declared in a module, clocking block, package or interface, etc.
2. A property can have formal arguments.

Property usage
1. The property captures design specifications and checks for design behavior.
2. It can be used as an assumption in the verification environment.
3. It is also used for coverage to measure that property is covered.
If an assertion has a simple property, the property is usually written inline as part of the assert statement.

Steps to create assertions


Following are the steps to create assertions:
• Step 1: Create Boolean expressions
• Step 2: Create sequence expressions
• Step 3: Create property
• Step 4: Assert property
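A sketch that walks through the four steps for the earlier request/acknowledge example; the signal and label names are illustrative:

    // Step 1: Boolean expressions -> req, ack
    // Step 2: Sequence expression -> ack arriving within 1 to 4 cycles
    sequence s_ack;
      ##[1:4] ack;
    endsequence

    // Step 3: Property combining the clock, the trigger, and the sequence
    property p_req_ack;
      @(posedge clk) req |-> s_ack;
    endproperty

    // Step 4: Assert the property
    a_req_ack: assert property (p_req_ack)
      else $error("ack not seen within 4 cycles of req");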
System Verilog Randomization

➢ Randomization is a process of producing random values of the mentioned data type.


➢ As System Verilog also deals with class objects, the $random system function from Verilog
is not sufficient for randomizing an object; class objects are instead randomized with the
built-in randomize() method.

Need for Randomization


➢ As design complexity increases, there is a high chance of more bugs being present when
the design is written for the first time.
➢ To verify the DUT thoroughly, a verification engineer needs to provide a large number of stimuli.
➢ There can be many cross combinations of variables in a real system.
➢ So, it is practically not possible to write directed test cases for every possible combination.
➢ Hence, randomization is very much required in the verification testbench.

Why we cannot have any random value?


➢ Simply running randomized tests does not make much sense because many of the
generated cases will be invalid.
➢ Randomized tests with valid configurations are created by the use of constraints.
➢ Such a verification style is commonly called Constrained Random Verification (CRV).

rand and randc keywords


➢ To randomize a class object, the following keywords are used while declaring class
variables.
1. rand
2. randc

rand Keyword
On randomizing an object, the rand keyword provides uniformly distributed random values.
rand bit [4:0] value;
On randomizing, values will be generated with equal probability.

randc Keyword
➢ On randomizing an object, the randc keyword provides random values without repeating
the same value until the complete range is covered.
➢ Once all values are covered, values may start repeating.
➢ This ensures that every possible value is exercised without repetition within one pass
through the range.
randc bit [1:0] value; // Possible values = 0, 1, 2, 3
Possible random value generated: 2, 3, 1, 0, 3, 2, 0, 1.
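A minimal class sketch combining both keywords and calling the built-in randomize() method; the class and field names are illustrative:

    class packet;
      rand  bit [4:0] value;   // uniformly distributed on every randomize() call
      randc bit [1:0] kind;    // cycles through 0..3 without repetition until all are used
    endclass

    module tb;
      initial begin
        packet p = new();
        repeat (8) begin
          if (!p.randomize())
            $error("randomization failed");
          $display("value=%0d kind=%0d", p.value, p.kind);
        end
      end
    endmodule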

Inside keyword in constraints


➢ The inside keyword is helpful when randomized values have to be within a provided range.
➢ The values provided inside the braces can be constants, parameters, `define macros, or variables.
Syntax:
constraint <constraint_name> {<variable> inside {. . . .};}

To specify range of values


constraint <constraint_name> {<variable> inside {[10:20]};}

To specify set of values


constraint <constraint_name> {<variable> inside {40, 70, 80};}

Combination of set of values and range


constraint <constraint_name> {<variable> inside {4, 7, 8, [10:20], 25, 30, [40:70]};}

Define based range in constraint


constraint <constraint_name> {<variable> inside {[`START_RANGE:`END_RANGE]};}

Variable based range in constraint


constraint <constraint_name> {<variable> inside {[<var1>:<var2>]};}
Parameter based range in constraint
constraint <constraint_name> {<variable> inside {[<param1>:<param2>]};}

Inverted inside constraint


constraint <constraint_name> {! (<variable> inside {[10:20]});}
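A sketch that puts a few of these forms into a class; the class name, fields, and ranges are illustrative:

    class txn;
      rand bit [7:0] addr;
      rand bit [7:0] data;

      // Combination of discrete values and ranges
      constraint c_addr { addr inside {4, 7, 8, [10:20], 25, 30, [40:70]}; }

      // Inverted inside: data must lie outside the range 10..20
      constraint c_data { !(data inside {[10:20]}); }
    endclass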

Advantages of Randomization
1. It has the capability of finding hidden bugs with some random combination.
2. Constraint-based randomization provides possible random values instead of a
complete random range.
3. It provides flexibility to have random values based on user-defined probability.
4. System Verilog randomization provides flexibility to disable randomization for a
particular variable in a class as well as disable particular constraints based on the
requirement.
5. It saves time and effort in verification compared with writing a directed test for every
possible scenario.
System Verilog Coverage

➢ Before we start with functional coverage, it is required to understand the term coverage.
➢ The coverage provides a set of metrics that are used to measure verification progress.
➢ Functional coverage is a measure of what functionalities/features of the design have
been exercised by the tests.
➢ This can be useful in constrained random verification (CRV) to know what features have
been covered by a set of tests in a regression.

Need for Coverage


1. Verification is a long-lasting process, and we never know in advance what type of input
stimulus will capture a bug. Hence, it is essential to have a set of metrics that decides the
endpoint of verification for the design once all metrics are satisfied.
2. It is also essential to see whether we verified all kinds of features supported by the
design and cover every line from the design code.

Types of Coverage
There are two types of coverage supported
1. Code coverage
2. Functional coverage

Code Coverage
Code coverage deals with covering design code metrics. It tells how much of the design code has
been exercised in terms of blocks, expressions, FSM states and transitions, and signal toggling.
The code coverage is further divided as
1. Block coverage – To check how many lines of code have been covered.
2. Expression coverage – To check whether all combinations of inputs have been driven to
cover expression completely.
3. FSM coverage – To check whether all state transitions are covered.
4. Toggle coverage – To check whether all bits in variables have changed their states.
Note:
1. Code coverage does not indicate whether the code behavior is correct; it is simply used
to identify uncovered lines, expressions, state transitions, dead code, etc. in the design.
Hence, it does not by itself indicate design quality.
2. Verification engineers aim to achieve 100% code coverage.
3. There are industry tools available that show covered and missing code in code
coverage.

Functional Coverage
Functional coverage deals with covering design functionality or feature metrics. It is a user-
defined metric that talks about how much design specification or functionality has been
exercised. The functional coverage can be classified into two types
1. Data intended coverage – To check the occurrence of data value combinations.
Example: Writing different data patterns in a register.
2. Control intended coverage – To check the occurrence of sequences in the intended
fashion.
Example: Reading a register to retrieve reset values after releasing a system reset.

Note:
Since functional coverage is a user-defined metric, it is up to the verification engineer to
account for every feature: if some features are left out of the functional coverage model while
the remaining features are covered, functional coverage will still report 100% even though
cover points are missing.

Define a coverage model: covergroup


➢ The covergroup is a user-defined construct that encapsulates coverage model
specification.
➢ The covergroup construct can be instantiated multiple times in various contexts using
the new () operator.
➢ A covergroup can be defined in a program, class, module, or interface.
The covergroup includes
1. A set of coverage points
2. Cross coverage between coverage points
3. A clocking event that synchronizes coverage points sampling
4. Coverage options
5. Optional formal arguments

Basic covergroup syntax


covergroup <coverage model name>;
...
...
endgroup

<coverage model name> <covergroup inst> = new();

covergroup syntax with clocking event


covergroup <coverage model name> @(<clocking event>)
...
...
endgroup

<coverage model name> <covergroup inst> = new();

List of arguments in a covergroup


A covergroup can have an optional list of formal arguments; the actual arguments then have to
be passed to the new() operator when the covergroup is instantiated.
covergroup cg (<list of arguments>);
...
endgroup

cg <covergroup inst> = new(<list of arguments>);
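A small covergroup sketch with coverage points, a cross, a clocking event, and an instantiation via new(); the module and signal names are illustrative:

    module cov_example (input logic clk, input logic [1:0] kind, input logic [7:0] addr);

      covergroup cg_bus @(posedge clk);          // sampled on every positive clock edge
        cp_kind : coverpoint kind;               // coverage point on the command kind
        cp_addr : coverpoint addr {
          bins low  = {[0:127]};
          bins high = {[128:255]};
        }
        kind_x_addr : cross cp_kind, cp_addr;    // cross coverage between coverage points
      endgroup

      cg_bus cg_inst = new();                    // instantiate the coverage model
    endmodule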
UVM (Universal Verification Methodology)

➢ The Universal Verification Methodology (UVM) is a standard verification methodology
that includes a set of class libraries for the development of a verification environment.
➢ UVM is based on the Open Verification Methodology (OVM) and the Verification
Methodology Manual (VMM).
➢ The UVM API (Application Programming Interface) provides standardization for the
creation and integration of verification components.
➢ The API also scales from block-level to system-level verification environments.

What was used before UVM?


➢ OVM (Open Verification Methodology) was introduced in 2008 as an open-source
verification methodology for digital designs and systems-on-chip (SoCs) and was based
on System Verilog.
➢ UVM was introduced in 2011 as a successor to OVM, and it built upon the concepts
and principles of OVM.
➢ UVM was designed to be a more standardized and flexible methodology that could be
easily adapted to different verification environments and use cases.

What does UVM contain?


➢ It contains a set of pre-defined classes and methods that enable users to create modular,
reusable testbench components for verifying digital designs and systems-on-chip
(SoCs). Some of the key components of UVM include:

• Testbench Components:
UVM provides a set of base classes that can be extended to create testbench components,
such as drivers, monitors, scoreboards, and agents.
A typical verification environment is built by extending readily available UVM classes,
which are denoted by the uvm_* prefix (see the sketch after this list).
These components already contain the code needed to connect to one another, handle data
packets, and work synchronously with other components.
• Transactions:
Transactions are used to model the communication between the design-under-test
(DUT) and the testbench.
UVM provides a transaction class that can be extended to create transaction objects that
carry information between the DUT and the testbench.
• Phases:
UVM defines a set of simulation phases that enable users to control the order in which
testbench components are created, initialized, and executed.
• Messaging and Reporting:
UVM provides a messaging and reporting infrastructure that enables users to output
information about the simulation, such as warnings, errors, and debug information.
• Configuration:
UVM provides a configuration database that allows users to store and retrieve
configuration information for testbench components.
• Functional Coverage:
UVM provides a mechanism for tracking functional coverage, which is used to ensure
that the design has been thoroughly tested.
• Register Abstraction Layer:
UVM provides a register abstraction layer (RAL) that simplifies the process of creating
and accessing register maps.
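A minimal sketch of how user components are derived from the uvm_* base classes; the class names (my_txn, my_driver) and the transaction field are illustrative, not part of the UVM library itself:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_txn extends uvm_sequence_item;
      `uvm_object_utils(my_txn)
      rand bit [7:0] data;
      function new(string name = "my_txn"); super.new(name); endfunction
    endclass

    class my_driver extends uvm_driver #(my_txn);
      `uvm_component_utils(my_driver)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      task run_phase(uvm_phase phase);
        forever begin
          seq_item_port.get_next_item(req);   // built-in driver-sequencer handshake
          // ... drive req onto the DUT interface here ...
          seq_item_port.item_done();
        end
      endtask
    endclass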

Advantages of UVM based testbench


1. UVM methodology provides scalable, reusable, and interoperable testbench
development.
2. To have uniformity in the testbench structure across the verification team, UVM
provides guidelines for testbench development.
3. UVM provides base class libraries so that users can inherit them to use inbuilt
functionality.
4. The driver-sequencer communication mechanism is built into UVM, which reduces the
verification effort needed to make this connection.
5. UVM also provides verbosity to control message displays.
➢ UVM Agent
An agent is a container that holds and connects the driver, monitor, and sequencer instances.
The agent develops a structured hierarchy based on the protocol or interface requirement.
uvm_agent class declaration:
virtual class uvm_agent extends uvm_component

User-defined class declaration:


The user-defined agent has to be extended from the uvm_agent component class.
class <agent_name> extends uvm_agent;

How to create a UVM agent?


1. Create a user-defined agent class extended from uvm_agent and register it in the factory.
2. In the build_phase, instantiate driver, monitor, and sequencer if it is an active agent.
Instantiate monitor alone if it is a passive agent.
3. In the connect_phase, connect driver and sequencer components.
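A sketch of these steps in code; my_driver, my_monitor, and my_txn are illustrative user classes assumed to exist elsewhere in the testbench:

    class my_agent extends uvm_agent;
      `uvm_component_utils(my_agent)                       // register in the factory

      my_driver               drv;
      my_monitor              mon;
      uvm_sequencer #(my_txn) seqr;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        mon = my_monitor::type_id::create("mon", this);    // monitor in both modes
        if (get_is_active() == UVM_ACTIVE) begin           // driver/sequencer only when active
          drv  = my_driver::type_id::create("drv", this);
          seqr = uvm_sequencer#(my_txn)::type_id::create("seqr", this);
        end
      endfunction

      function void connect_phase(uvm_phase phase);
        if (get_is_active() == UVM_ACTIVE)
          drv.seq_item_port.connect(seqr.seq_item_export); // driver-sequencer connection
      endfunction
    endclass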

Types of Agents
There are two types of agents
1. Active agent
2. Passive agent
Active Agent
An Active agent drives stimulus to the DUT. It instantiates all three components: driver,
monitor, and sequencer.
Passive Agent
A passive agent does not drive stimulus to the DUT. It instantiates only a monitor component.
It is used as a sample interface for coverage and checker purposes.

How to configure the agent as an active or passive agent?


- An agent is usually instantiated in a UVM environment class. So, it can be configured in
the environment, or in any other component class where the agent is instantiated, using
the integer configuration parameter is_active as shown below.
set_config_int("<path_to_agent>", "is_active", UVM_ACTIVE);
set_config_int("<path_to_agent>", "is_active", UVM_PASSIVE);
➢ UVM Sequence

- UVM sequence is a container that holds data items (uvm_sequence_items) which are
sent to the driver via the sequencer.

How to write a sequence?


The intention is to create a seq_item, randomize it, and then send it to the driver. Any one of
the following approaches is used in the sequence to perform this operation.
1. Using macros like `uvm_do, `uvm_create, `uvm_send, etc.
2. Using existing methods from the base class:
a. Using wait_for_grant(), send_request(), wait_for_item_done(), etc.
b. Using the start_item()/finish_item() methods (sketched below).
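A sketch of a sequence body using the start_item()/finish_item() approach; my_txn and my_seq are illustrative names:

    class my_seq extends uvm_sequence #(my_txn);
      `uvm_object_utils(my_seq)

      function new(string name = "my_seq");
        super.new(name);
      endfunction

      virtual task body();
        repeat (5) begin
          req = my_txn::type_id::create("req");
          start_item(req);                      // wait for the sequencer to grant access
          if (!req.randomize())
            `uvm_error("SEQ", "randomization failed")
          finish_item(req);                     // hand the item to the driver, wait for item_done
        end
      endtask
    endclass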

How to start a sequence?


- A sequence is started by calling the start method that accepts a pointer to the sequencer
through which sequence_items are sent to the driver.
- A pointer to the sequencer is also commonly known as m_sequencer.
- The start method assigns the sequencer pointer to m_sequencer and then calls the
body() task.
- The start() method returns once the body task, including its interaction with the driver,
has completed. Because it has to wait for this interaction, start() is a blocking method.
Argument          Description

sequencer         The sequencer on which the sequence has to be started; this argument must be specified, whereas the other arguments are optional.

parent_sequence   The sequence that calls the current sequence. If parent_sequence is null, then this sequence is a root parent; otherwise, it is a child of parent_sequence. The parent_sequence's pre_do, mid_do, and post_do methods will be called during the execution of the current sequence.

this_priority     The priority of the sequence (by default it takes the priority of the parent sequence). A higher value indicates a higher priority.

call_pre_post     By default, call_pre_post = 1 is set, which means the pre_body and post_body methods will be called. To disable calling these methods, call_pre_post can be set to 0.
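A brief usage sketch of start() with these arguments; the sequence class and the env.agt.seqr hierarchy are placeholders:

    my_seq seq;
    seq = my_seq::type_id::create("seq");
    seq.start(env.agt.seqr);   // only the sequencer argument is mandatory

    // Fully specified form with the optional arguments at their defaults:
    // seq.start(env.agt.seqr, .parent_sequence(null), .this_priority(-1), .call_pre_post(1));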
➢ UVM Scoreboard

- The UVM scoreboard is a component that checks the functionality of the DUT.
- It receives transactions from the monitor using the analysis export for checking
purposes.

uvm_scoreboard class declaration:


virtual class uvm_scoreboard extends uvm_component

User-defined scoreboard class declaration:


The user-defined scoreboard is extended from uvm_scoreboard which is derived from
uvm_component.
class <scoreboard_name> extends uvm_scoreboard;

Scoreboard Usage
1. Receive transactions from monitor using analysis export for checking purposes.
2. The scoreboard has a reference model to compare with design behavior.
3. The reference model is also known as a predictor that implements design behavior so
that the scoreboard can compare DUT outcome with reference model outcome for the
same driven stimulus.

How to write scoreboard code in UVM?


1. Create a user-defined scoreboard class extended from uvm_scoreboard and register it in
the factory.
2. Declare an analysis export to receive the sequence items or transactions from the
monitor.
3. Write the standard new() function. Since the scoreboard is a uvm_component, the new()
function has two arguments: a string name and a uvm_component parent.
4. Implement build_phase and create a TLM analysis export instance.
5. Implement a write method to receive the transactions from the monitor.
6. Implement run_phase to check DUT functionality throughout simulation time.
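A scoreboard sketch following these steps (run_phase omitted for brevity); my_txn and the messages are illustrative:

    class my_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(my_scoreboard)                   // step 1: factory registration

      uvm_analysis_imp #(my_txn, my_scoreboard) ap_imp;     // step 2: analysis export

      function new(string name, uvm_component parent);      // step 3: standard constructor
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);           // step 4: create the export
        super.build_phase(phase);
        ap_imp = new("ap_imp", this);
      endfunction

      function void write(my_txn t);                        // step 5: receive transactions
        // Compare t against the reference-model prediction here.
        `uvm_info("SCB", $sformatf("received: %s", t.sprint()), UVM_MEDIUM)
      endfunction
    endclass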
UVM Scoreboard types
Depending on the design functionality, scoreboards can be implemented in two ways.
1. In-order scoreboard
2. Out-of-order scoreboard

In-order scoreboard
- The in-order scoreboard is useful for the design whose output order is the same as driven
stimuli.
- The comparator compares the expected and actual output streams in the same order, even
though the two streams arrive independently.
- Hence, the evaluation must block until both the expected and the actual transactions are present.

Out-of-order scoreboard
- The out-of-order scoreboard is useful for the design whose output order is different from
driven input stimuli.
- Based on the input stimuli, the reference model generates the expected outcome of the DUT,
while the actual output may come in any order.
- So, it is required to store such unmatched transactions generated from the input stimulus
until the corresponding output has been received from the DUT and can be compared.
- To store such transactions, an associative array is widely used.
- Based on index value, transactions are stored in the expected and actual associative
arrays.
- The entries from associative arrays are deleted when comparison happens for the
matched array index.
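A sketch of the associative-array bookkeeping described above; the arrays, the id index, and the comparison are illustrative and would live inside the scoreboard class:

    my_txn exp_q [int];   // expected transactions from the reference model, keyed by id
    my_txn act_q [int];   // actual transactions observed at the DUT output, keyed by id

    function void try_compare(int id);
      if (exp_q.exists(id) && act_q.exists(id)) begin
        if (!exp_q[id].compare(act_q[id]))
          `uvm_error("SCB", $sformatf("mismatch for id %0d", id))
        exp_q.delete(id);     // delete matched entries from both arrays
        act_q.delete(id);
      end
    endfunction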

Code Coverage

- Code coverage is a crucial component of verification, and it is used to ensure that the
design-under-test (DUT) is properly tested.
- Code coverage helps to identify untested or under-tested parts of the design, which may
contain bugs or errors that could impact the functionality of the design.
- Code coverage is also helpful in ensuring that all test cases have been executed and that
the verification environment has adequately exercised the DUT.
- It can identify situations where a test case has failed to cover a particular branch,
condition, or statement in the code.

Path Coverage

Path Coverage is a debugging tool that collects information about the execution of program
paths and analyses whether all possible sequences of program execution were verified by a
testbench (currently available for VHDL only).