Advanced Verification Methodology Cookbook
The material in this book is licensed for use under the Apache‐2.0 open source
license.
Copyright © 2007‐2008 Mentor Graphics Corporation
To Janice,
who keeps me sane
- Mark
To Jeanne,
who provides a beautiful voice for my thoughts
- Harry
Contents
Chapter 2: Introduction to the AVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.1 Reuse . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.2 Components and Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.3 Layered Organization of Testbenches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
2.4 Two Domains . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
2.5 Tour of the SystemVerilog AVM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Chapter 3: Fundamentals of Object‐Oriented Programming . . . . . . . . . . . . . 33
3.1 Procedural vs. OOP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.2 Classes and Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.3 Object Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
3.4 Virtual Functions and Polymorphism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5 Generic Programming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
3.6 Objects as Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
3.7 OOP and Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Chapter 4: Introduction to Transaction‐Level Modeling . . . . . . . . . . . . . . . . . 55
4.1 Abstraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.2 Definition of a Transaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.3 Communicating Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.4 Isolating Components with Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.5 Forming a Transaction‐Level Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
4.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
Chapter 5: AVM Mechanics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.1 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
5.2 Connecting Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.3 Building an Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.4 Connecting Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
5.5 Reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.3 Monitors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .112
6.4 Three State Machines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .113
6.5 Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .115
6.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .119
Chapter 7: Complete Testbenches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .121
7.1 Analysis Ports and Analysis Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .122
7.2 Scoreboards . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .123
7.3 Coverage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .130
7.4 Error Injection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .133
7.5 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .137
Chapter 8: Stepwise Refinement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .139
8.1 Transaction‐Level Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .140
8.2 RTL Substitution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .144
8.3 TLM as Golden Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .148
8.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .151
Chapter 9: Modules in Testbenches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .153
9.1 Nonpipelined Bus Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .154
9.2 Module‐Based Assertion Monitor . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .160
9.3 Bus Functional Model (BFM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .169
9.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .172
Chapter 10: Randomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
10.1 Overview of CRV Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .173
10.2 Adding Randomization to Classes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
10.3 Layering Constraints Using Inheritance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .180
10.4 Dynamically Modifying Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .181
10.5 Over Constraining . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .182
10.6 Set Membership . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .185
10.7 Dynamically Sized Arrays . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .186
10.8 Per Design/Per Test Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .187
10.9 Design Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
10.10 Class Factories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .188
10.11 Example of State‐Dependent Constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .190
10.12 The AVM Random Stimulus Generator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .191
Chapter 11: AVM in SystemC and SystemVerilog . . . . . . . . . . . . . . . . . . . . .195
11.1 Object Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .196
11.2 Object Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .197
11.3 Encapsulating Behaviors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .197
11.4 Randomization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
11.5 Instantiation and Elaboration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .198
11.6 Transaction‐Level Connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .201
11.7 Execution Phases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .201
11.8 Building Complete Testbench Structures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .202
11.9 SystemVerilog or SystemC? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .203
Appendix A: Graphic Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .205
A.1 Components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
A.2 Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .206
A.3 Interconnect . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .207
A.4 Channels . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
A.5 Analysis Ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .209
A.6 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Appendix B: Naming Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Appendix C: AVM Encyclopedia . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
Appendix D: Apache License . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 301
Bibliography . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 307
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Preface
When I need to learn a new piece of software I invent a problem for myself
that is within the domain of the application and then set out to solve it using
the new tool. When the software package is a word processor, I’ll use it to
write a paper or article I’m working on; when the software is a drawing tool,
I’ll use it to draw some block diagrams of my latest creation. In the course of
solving the problem, I learn how to use the tool and gain a practical
perspective on which features of the tool are useful and which are not.
My first C program was the classic one:

#include <stdio.h>
main()
{
    printf("hello, world\n");
}
I typed the program into my text editor, ran cc and ld, and a few seconds
later saw
hello, world
flicker on my green CRT screen. Getting that simple program working gave
me confidence that C was something I could conquer. I haven’t counted how
much C/C++ code I’ve written since, but it’s probably many hundreds of
thousands of lines. I’ve written all manner of software, from mundane
database programs to exotic multi‐threaded programs. It all started with
Hello World.
The premise of this book is that most engineers, like me, want to jump right
into a new technology. They want to put their hands on it, try it out and see
how it feels, learn the boundaries of what kinds of problems it addresses, and
develop some practical experience. This is why quickstart guides and online
help systems are popular. Generally, we do not want to read a lengthy manual
and study the theory of operation first. We would rather plunge in, and later,
refer to the manual only when and if we get stuck. In the meantime, as we
experiment, we develop a general understanding of what the technology is
and how to perform basic operations. Later, when we do crack open the
manual, the details become much more meaningful.
This book takes a practical approach to learning about testbench construction.
We provide a series of examples, each of which solves a particular verification
problem. The examples are thoroughly documented and complete and
delivered with build and run scripts that allow you to execute them in a
simulator and observe their behavior. The examples are small and focused so
you don’t have to wade through a lot of ancillary material to get to the heart
of an example.
We call this book the Verification Cookbook because we have modeled its
organization after a cookbook. Each example is a recipe, containing working
code that applies the AVM to a specific problem. The recipes are grouped in
sections, beginning with simple sauces and moving on to complete meals.
We present the examples in a linear progression—from the most basic
testbench, with just a pin‐level stimulus generator, monitor, and DUT, to
fairly sophisticated usages that involve stacked protocols, coverage, and
automated testbench control. Each example in the progression introduces
new concepts and shows you how to implement those concepts in a
straightforward manner. We recommend you start by examining the first
example. When you feel comfortable with it, move on to the second one.
Continue in this manner, mastering each example and moving to the next.
The examples in the cookbook are there for you to explore. After you run an
example, study the code to really understand its construction. The code
documentation provided with each example serves as a guidepost to point
you to the salient features. Use this as a starting point to study the coding
organization, style, and other implementation details not explicitly discussed.
Play with the examples, too. Change the total time of simulation to see more
results, modify the stimulus, add or remove components, insert print
statements, and so on. Each new thing you try will help you more fully
understand the examples and how they operate.
Feel free to use any of the example code as templates for your own work.¹ Cut and
paste pieces that you find useful into your code, or use them as a way to get
started developing your own verification infrastructures. Mainly, enjoy.
Mark Glasser, April 2007
Organization of This Book
Chapter 1. We describe some general principles of verification and establish a
framework for designing testbenches based on two questions—“Does it
work?” and “Are we done?”
Chapter 2. In this chapter, we describe the essential principles of the AVM.
1. The Verification Cookbook is delivered under an open source license. See
LICENSE.txt in the cookbook kit or refer to http://opensource.org/licenses/
apache2.0.php for full text of the Apache‐2.0 license.
Chapter 5. This chapter describes how to connect components
with transaction‐level interfaces. It also explains the essentials of using the
AVM reporting facility.
Chapter 7. This chapter picks up where the previous chapter left off. Using a
parallel‐to‐serial device, we examine the organization and construction of
scoreboards and coverage collectors. We also show how to build an error‐
injecting driver by deriving it from a “good” (non‐error‐injecting) driver.
Chapter 9. Up to this point, we have discussed how to build testbenches
using class‐based components exclusively. Sometimes it is necessary to use
modules, whether to include assertions or to adapt legacy BFMs (bus
functional models). This chapter explains how to incorporate module‐based
components into a testbench.
Chapter 11. Even though the principles of the AVM are language‐neutral, as
you implement an AVM‐style testbench in SystemC or SystemVerilog, you
should be aware of certain language‐specific issues. This chapter discusses
some of the differences between SystemC and SystemVerilog and how they
affect testbench implementation.
Example Code
Most of the code used as illustrations in this text is derived directly from the
code in the examples and the AVM library, which are available in the AVM
cookbook kit (see below for how to obtain the AVM cookbook kit). In places
where that’s the case, you’ll see line numbers associated with the code, and in
many cases a file name either at the top or bottom of the code inset. The file
name identifies the specific file in the example tree that contains the code
shown in the text. The line numbers are relative to the file. Here’s a sample
code inset:
The file name at the bottom tells you where to find the file that this code came
from in the example tree, in this case topics/04_tlm/01_put_sc/put.cc.
The line numbers are relative to that file.
Using the AVM Libraries
The SystemVerilog libraries are encapsulated in a package called avm_pkg. To
use the package, you must import it into any file that uses any of the AVM
facilities.
The SystemC libraries are in a header (.h) file. To make the libraries available
to your program, you must #include avm.h in each of the program files that
use any of the AVM facilities.
When compiling SystemVerilog testbenches that use the AVM, supply the include
directory and the package source on the compiler command line:

+incdir+<location-of-AVM-libraries>/libraries/systemverilog/avm
<location-of-AVM-libraries>/libraries/systemverilog/avm/avm_pkg.sv
When compiling SystemC testbenches that use AVM, you must provide the
path to the directory where avm.h resides on the sccom command line:
-I<location-of-AVM-libraries>/libraries/systemc/avm
Building and Running the Examples
Installing the cookbook kit is a matter of unpacking the kit in a convenient
location. No additional installation scripts or processes are required.
Each example directory contains an “all” script and a “compile” script. The
“all” script runs the example in its entirety and is named all.do. The compile
script is a file that is supplied as an argument to the -f option on the compiler
command line. Each example is supplied with a vsim.do file that contains the
simulator commands needed to run each example.
The simplest way to run an example is to execute its “all” script:
% ./all.do
This compiles, links, and runs the example. You can also run the steps
manually:
% vlib work
% vlog -f compile_sv.f
% vsim -c top -do vsim.do
SystemC examples require an additional link step. The individual steps to run
a SystemC example are:
% vlib work
% sccom -f compile_sc.f
% sccom -g -scv -link
% vsim -c top -do vsim.do
You must have the proper simulator license available to run the examples.
When running examples that include both SystemVerilog and VHDL, such as
those in Chapter 8, you’ll need a mixed language license. For specifics on
licensing, contact your Mentor technical support representative.
Obtaining the Cookbook Kit
You can get the AVM Verification Cookbook in its entirety on the Mentor
Graphics web site at http://www.mentor.com/go/cookbook. Please check there for
updates.
Questions and Comments
We’d very much like to hear from you. Please tell us how you are using the
cookbook, and feel free to offer ideas and suggestions for future editions. We
will answer your questions about the material. You can E‐mail questions or
comments to:
cookbook_questions@mentor.com
Who Should Read This Book?
This book is intended for electronic design engineers and verification
engineers who are looking for ways to improve their efficiency and
productivity in building testbenches and completing the verification portion
of their projects. We assume a familiarity with hardware description
languages (HDL) in general, and specifically SystemVerilog or SystemC. We
assume that you know how to write programs in one or both of these
languages, but it’s not necessary to be an expert. Familiarity with object‐
oriented programming or OO terminology is helpful to fully understand the
AVM. If you are not yet familiar with OO terminology, not to worry, the book
introduces you to the fundamental concepts and terms.
Acknowledgements
The authors wish to acknowledge the people who contributed their time,
expertise, and wisdom to this project. This book would never have come to
completion without their dedication to this project:
Todd Burkholder, Rich Edelman, Jeanne Foster, Jan Johnson, Susan Ross.
Introduction
Software construction is not usually a topic that immediately comes to mind
when hardware designers and verification engineers think about their work.
Designers and verification engineers, particularly those schooled in electronic
engineering, naturally think of design and verification work as a “hardware
problem,” meaning that principles of hardware design are required to build
and verify systems. Of course, they are largely, but not entirely, correct.
Electronic design requires an in‐depth knowledge of hardware: everything
from basic DC and AC circuit analysis and transistor operation, to
communication protocols and computer architecture. A smattering of physics
is useful, too, for designs implemented in silicon (which is the intent for
most). However, building a testbench to verify a hardware design is a
different kind of problem.
Today, with the availability of reliable synthesizers and the application of
synchronous design techniques, the lowest level of detail that designers must
consider is Register Transfer Level (RTL). As the name suggests, the primary
elements of a design represented at the RTL are registers, interconnections
between registers, and the computation necessary to modify their values.
Since each register receives new values only when the clock pulses, all of the
combinational logic needed to compute the register value can be abstracted to
a set of Boolean and algebraic expressions.
RTL straddles the hardware and software worlds. The components of an RTL
design are readily identifiable as hardware: registers, wires, and
clocks. Yet, the combinational expressions and control logic look suspiciously
like those in typical procedural programming languages, such as C. The
process of building an RTL design is much like programming. For example,
you use compilers, linkers, and debuggers, just as you would if you were
programming in C. There are differences, of course. You do not need to
consider issues surrounding timing, concurrency, and synchronization when
programming in C (unless you are writing embedded software, which further
blurs the line between hardware and software).
Testbenches live squarely in the software world. The elements of a testbench
are exactly the same as those found in any software system—data structures
and algorithms. Testbenches are hardware aware since their job is to control,
respond to, and analyze hardware. Still, the bulk of their construction and
operation falls under the software umbrella.
Most of what a testbench “does” does not involve hardware. Testbenches
operate at levels of abstraction higher than RTL and thus do not require
registers, wires, and other hardware elements. We can categorize the
testbench results we collect and analyze as data processing, which does not
involve hardware. Testbench programs do not need to be implemented in
silicon, which completely frees them from the limitations of synthesizable
constructs. The only place that a testbench is involved with hardware is at its
interfaces. Testbenches must stimulate and respond to hardware. Testbenches
must know about hardware, but they do not need to be hardware.
Because testbenches are software, it is appropriate to apply software
construction techniques to building them. Software construction is at the
center of modern verification technology and the Advanced Verification
Methodology. Software construction is itself a very large topic on which many
volumes have been written. It’s not possible for us to go into great depth on
topics such as object‐oriented programming, library organization, code
refactoring, testing strategies, and so on. However, we will touch on these
topics in a practical way, showing how to apply software techniques to
building testbenches. We rely heavily on examples to illustrate the principles
we discuss.
1 Verification Principles
1.1 Verification Basics
Functionally verifying a design means comparing the designer’s intent with
observed behavior to determine their equivalence. We consider a design
verified when, to everyone’s satisfaction, it performs according to the
designer’s intent. This basic principle often gets lost in the discussion of
testbenches, assertions, debuggers, simulators, and all the other
paraphernalia used in modern verification flows. Keep this in mind as you
read the rest of this text. Whenever we present a concept or illustrate a
testbench construction technique, we will clearly identify the reference design
and the design under test (DUT).
When we say “design”, we mean the design being verified, i.e. the DUT. To be
verified, the DUT is typically in some form suitable for production—a
representation that can be transformed into silicon by a combination of
automated and manual means. We distinguish a DUT from a sketch on the
back of a napkin or a final packaged die, neither of which are in a form that
can be verified. A reference design captures the designer’s intent, that is, what
the designer means the design will do. The reference can take many forms,
such as a document describing the operation of the DUT, a golden model that
contains a unique algorithm, or assertions that represent a protocol.
Figure 1‐1 Comparing Design and Intent: the observed behavior and the
designer’s intent are checked for equality.
To automate the comparison of the behavior with the intent, both must be in a
form that we can execute on a computer using some program that does the
comparison. Exactly how to do this is the focus of the rest of this book. The
problem of building a design is well understood by design engineers and is a
topic beyond the scope of this text. Here, we confine our discussion to the
problem of capturing design intent and comparing it with the design to show
their equivalence.
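Stated as code, the comparison at the heart of verification is a simple loop. The sketch below is plain Python, not AVM or SystemVerilog code, and both models are invented stand-ins; the point is only that when intent and design are both executable, a program can compare their observed behaviors.

```python
def reference(stimulus):
    # Designer's intent captured in executable form.
    # (Stand-in behavior: the output is twice the input.)
    return stimulus * 2

def dut(stimulus):
    # The design under test: a different implementation that is
    # supposed to realize the same intent.
    return stimulus + stimulus

def verify(stimuli):
    # Compare observed behavior with intent for every stimulus.
    return all(dut(s) == reference(s) for s in stimuli)

print(verify([0, 1, 7, 42]))  # True: behavior matches intent
```

Everything that follows elaborates this loop: generating good stimuli, capturing intent executably, and automating the comparison.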
1.1.1 Two Questions
Verifying a design involves answering two questions: “Does it work?” and
“Are we done?” These are basic and obvious questions, yet they motivate all
the mechanics of every verification flow. The first question, “Does it work?”,
comes from the essential idea of verification we discussed in the previous
section. It asks, “Does the design match the intent?” The second question,
“Are we done?” asks if we have satisfactorily compared the design and intent
to conclude whether the design does indeed match the intent, or if not, why
not. We use these valuable questions to create a framework for developing
effective testbenches.
1.1.2 Does It Work?
“Does it work?” is not a single question but a category of questions that
represent the nature of the DUT. Each design will have its own set of does‐it‐
work questions. The first part of developing a test plan is to identify the set of
questions necessary to determine whether the design works.
Functional correctness questions ask whether the device functions correctly in
specific situations. We derive these questions directly from the design intent,
and we use them to express design intent in a testbench.
Consider a simple packet router as an example. This device routes packets
from an input to one of four output ports. Packets contain the address of the
destination port and have varying length payloads. Each packet has a header,
trailer, and two bytes for the cyclic redundancy check (CRC). The does‐it‐
work questions might include these:
Does a packet entering the input port addressed to output port 3
arrive properly at port 3?
Does a packet of length 16 arrive intact?
Are the CRC bytes correct when the payload is [0f 73 a8 c2 3e
57 11 0a 88 ff 00 00 33 2b 4c 89]?
Obviously, this is just a sample of a complete set of questions. Even for a
device as simple as this hypothetical packet router, the set of does‐it‐work
questions can be long. To build a verification plan and a testbench that
supports the plan, you must first enumerate all the questions or show how to
generate all of them, and then select the ones that are interesting.
Continuing with our packet router example, to enumerate all the does‐it‐
work questions, you might create a chart like this:
1. For all four output ports, does a packet arriving at the input port
addressed to an output port arrive at the proper output port?
2. Do packets of varying payload sizes, from eight bytes to 256 bytes,
arrive intact?
3. Is the CRC computation correct for every packet?
4. Is a packet with an incorrect header flagged as an error?
The table above contains two kinds of questions: those we can answer directly
and those we can break down into more detailed questions. Question 1 is a
series of questions we can explicitly enumerate:
1a. Does a packet arriving at the input port addressed to output port 0
arrive at port 0?
1b. Does a packet arriving at the input port addressed to output port 1
arrive at port 1?
1c. Does a packet arriving at the input port addressed to output port 2
arrive at port 2?
1d. Does a packet arriving at the input port addressed to output port 3
arrive at port 3?
Notice that we formulate all of the questions so that they can be answered yes
or no. At the end of the day, a design either works or it doesn’t: it is either
ready for synthesis, place, and route, or it is not. If you can answer all the
questions affirmatively, then you know the design is ready for the next
production step.
When designing the set of does‐it‐work questions, word them so that they can
be answered yes or no, and so that a yes answer is positive; that is,
answering yes means the device operates correctly. This is easier than trying
to keep track of which questions should be answered yes and which should be
answered no. A question such as, “Did the router pass any
bad packets?” requires a no answer to be considered successful. A better
wording of the question is, “Did the router reject bad packets?” Make the
questions as specific as you can. An even better wording of the previous
question is, “When a bad packet entered the input port did the router raise
the error signal and drop the packet?” More specific questions tell you more
about the machinery your testbench needs to determine the yes or no answer.
A properly worded yes or no question contains its own success criteria. It
says what will achieve a yes response. A question such as, “Is the average
latency less than 27 clock cycles?” contains the metric, 27 clock cycles, and the
form of comparison, less‐than. Had we (improperly) worded the question as,
“What is the average latency of packets through the router?” we wouldn’t
know what was considered acceptable. To answer either question, you first
must be able to determine the average latency. Only in the first wording of the
question do we know how to make a comparison to determine whether the
result is correct. The metric by itself does not tell us whether the design is
functioning as intended. Only when we compare the measured value against
the specification, 27 clock cycles in this example, can we determine whether
the design works.
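A properly worded question therefore maps directly onto executable machinery: measure the metric, then apply the stated comparison. Here is a sketch of the latency question above (in Python rather than SystemVerilog, and with hypothetical latency measurements):

```python
def average_latency(latencies):
    # The metric: mean packet latency in clock cycles.
    return sum(latencies) / len(latencies)

def latency_ok(latencies, limit=27):
    # The full question: "Is the average latency less than 27 clock
    # cycles?"  It carries its own success criteria: the metric
    # (average latency) and the comparison (less-than the limit).
    return average_latency(latencies) < limit

samples = [21, 25, 30, 24, 26]    # hypothetical measurements
print(average_latency(samples))   # 25.2 -- the metric alone gives no verdict
print(latency_ok(samples))        # True -- the comparison gives the answer
```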
As is often the case, it is not practical to enumerate every single does‐it‐work
question. To verify that every word in a 1 Mb memory can be written to and
read from, it is neither practical nor necessary to write one million questions.
Instead, a generator question, a question that generates many others, takes the
place of one million individual questions. “Can each of the one million words
in the memory be successfully written to and read from?” is a generator
question.
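A generator question maps naturally onto a loop in the testbench. The following is a hedged sketch of how the memory generator question above might be expanded; the behavioral memory array, the data pattern, and the address width are illustrative assumptions, not taken from the text.

```systemverilog
// Sketch: one generator question expanded into roughly one million
// individual write/read checks. The memory here is a behavioral
// stand-in for the DUT; names and widths are assumptions.
module mem_walk_test;
  reg [31:0] mem [0:(1 << 20) - 1];  // one million 32-bit words
  reg [31:0] expected, actual;
  integer i;

  initial begin
    for (i = 0; i < (1 << 20); i = i + 1) begin
      expected = i ^ 32'hA5A5_A5A5;  // arbitrary data pattern
      mem[i] = expected;             // "write" to word i
      actual = mem[i];               // "read" it back
      if (actual !== expected)
        $display("word %0d failed: wrote %h, read %h",
                 i, expected, actual);
    end
  end
endmodule
```

Each iteration of the loop answers one of the million individual does‐it‐work questions; the loop as a whole answers the generator question.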
1.1.3 Are We Done?
To determine, “Are we done?” we need to know if we have answered enough
of the does‐it‐work questions to consider that we’ve sufficiently verified the
design. We begin this task by prioritizing all the does‐it‐work questions
along two axes: how easy each question is to answer, and how much
verification value its answer returns.
The art of building a testbench requires that, in the initial stage, we
enumerate the set of questions and sort them to find the ones that return the
highest value in terms of verifying the design. The next step is to build the
machinery that will answer the questions and determine which ones have
been answered (and which have not).
“Are we done?” questions are also called functional coverage questions,
questions that relate to whether the design is sufficiently covered by the test
suite in terms of design function. As we do with does‐it‐work questions, we
can also decompose functional coverage questions into more detailed
questions. And just like functional correctness questions, functional coverage
questions must also be answerable in terms of yes or no. Here are some
examples of functional coverage questions:
Has every processor instruction been executed at least once?
Has at least one packet traversed from every input port to every
output port?
Has the memory been successfully accessed with a set of
addresses in which each address bit has been one in at least one
address and zero in at least one address, not including
0xffffffff and 0x00000000?
Another way to think of these questions is that they ask, “Have the necessary
does‐it‐work questions been answered affirmatively?” When we think of
functional coverage in this light, the term refers to covering the set of does‐it‐
work questions. Furthermore, coverage questions identify a metric and a
threshold to compare against. Coverage is reached (that is, the coverage
question can be answered yes) when the metric reaches the threshold.
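In SystemVerilog, such a coverage question can be codified directly as a covergroup. The sketch below answers “Has every processor instruction been executed at least once?”; the 8-bit opcode signal, the module wrapper, and the 100 percent threshold are illustrative assumptions, not part of the text.

```systemverilog
// Hedged sketch: a functional coverage question as a covergroup.
module instr_coverage(input logic clk, input logic [7:0] opcode);
  covergroup instr_cg @(posedge clk);
    coverpoint opcode {
      bins each_op[] = {[0:255]};  // one bin per instruction
    }
  endgroup

  instr_cg cg = new();

  // The metric is the percentage of bins hit; the threshold is 100.
  // The coverage question is answered "yes" when the metric
  // reaches the threshold.
  function bit are_we_done();
    return (cg.get_inst_coverage() >= 100.0);
  endfunction
endmodule
```

Note how the covergroup embodies both the metric (bins hit) and, through the comparison in are_we_done(), the threshold against which it is judged.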
In summary, the art of building a testbench begins with a test plan. The test
plan begins with a carefully thought out set of does‐it‐work and are‐we‐done
questions.
1.1.4 Two‐Loop Flow
The process for answering the does‐it‐work and are‐we‐done questions can
be described in a simple flow diagram as shown in Figure 1‐2. Everything is
driven by the functional specification for the design. From the functional
specification, you can derive the design itself and the verification plan. The
verification plan drives the testbench construction.
The flow contains two loops, the does‐it‐work loop and the are‐we‐done loop.
Both loops start with a simulation operation. The simulation exercises the
design with the testbench and generates information we use to answer the
questions. First we ask “Does it work?” If any answer is no, then we must
debug the design. This could result in changes to the design implementation.
Once the design works to the extent it has been tested, then it is time to
answer the question, “Are we done?” We answer this question by collecting
coverage information and comparing it against thresholds specified in the test
plan. If we do not reach those thresholds, then the answer is no, and we must
modify the testbench to increase the coverage. Then we simulate again.
Changing the testbench or the stimulus could cause other latent design bugs
to surface. A subsequent iteration around the loop may cause us to go back to
the does‐it‐work loop again to fix any new bugs that appear. As you can see, a
complete verification process flip‐flops back and forth between does‐it‐work
and are‐we‐done until we can answer yes for all the questions in both
categories.
Figure 1-2 Two-Loop Flow: the design specification drives both the design
implementation and the verification plan, and the verification plan drives
the testbench. Simulation feeds two decision points. A no answer to “Does it
work?” loops back through debugging the design; a no answer to “Are we
done?” loops back through modifying the stimulus. When both questions
answer yes, verification is done.
In an ideal world, a design has no bugs and the coverage is always sufficient
so you would only have to go around each loop once to get “yes” answers to
both questions. In the real world, it can take many iterations to achieve two
“yes” answers. One objective of a good verification flow is to minimize the
number of iterations to complete the verification project in the shortest
amount of time using the smallest number of resources.
1.2 First Testbench
Let’s jump right in by illustrating how to verify one of the most fundamental
devices in a digital electronic design: an AND gate. An AND gate computes
the logical AND of its inputs. The function of this device is trivial and, in
practice, hardly worth its own testbench. Because it is trivial, we can use it
to illustrate some basic principles of verification without having to delve into
the details of a more complex design.
Figure 1‐3 shows the schematic symbol for a two‐input AND gate. The gate
has two inputs, A and B, and a single output Y. The device computes the
logical AND of A and B and puts the result on Y.
Figure 1-3 A Two-Input AND Gate: inputs A and B, output Y.
The following truth table describes the function of the device.
A B Y
0 0 0
0 1 0
1 0 0
1 1 1
The truth table is exhaustive: it contains all possible inputs for A and B and
thus all possible correct values for output Y.
Our mission is to prove that our design, the AND gate, works correctly. To
verify that it does indeed perform the AND function correctly, we first need to
list the questions. The truth table helps us create the set of questions we need
to verify the design. Each row of the table contains input values for A and B
and the expected output for Y. Since the table is exhaustive, our generator
question is: “For each row in the truth table, when we apply the values of A
and B identified in that row, does the device produce the expected output for
Y?” To answer the are‐we‐done question, we determine whether we have
exercised each row in the truth table and received a yes answer to the does‐it‐
work question for that row. Our are‐we‐done question is: “Do all rows
work?”
To automate answering both the does‐it‐work and are‐we‐done questions, we
need some paraphernalia, including:
A model that represents the DUT (in this case, the AND gate)
The design intent in a form we can codify as a reference model
Some stimuli to exercise the design
A way to compare the result to the design intent
Figure 1-4 First Testbench: a stimulus generator drives the DUT’s A and B
inputs, and a scoreboard observes A, B, and Y.
While our little testbench is simple, it contains key elements found in most
testbenches at any level of complexity. The key elements are the:
DUT
Stimulus generator—generates a sequence of stimuli for the DUT
Scoreboard—embodies the reference model
The scoreboard observes the inputs and outputs of the DUT, performs the
same function as the DUT except at a higher level of abstraction, and
determines whether the DUT and reference match.
1.2.1 DUT
The DUT is our two‐input AND gate. We implemented the AND gate as a
module with two inputs, A and B, and one output Y.
module and2(Y, A, B);  // header reconstructed from the text above
  output reg Y;
  input A, B;
  always @(A or B)
    Y = #1 A & B;
endmodule
The stimulus generator in this example generates directed stimulus. Each new
value it emits is computed in a specific order. Later we
will look at random stimulus generators which, as their name suggests, generate
random values.
The purpose of the stimulus generator is to produce values as inputs to the
DUT. stimulus, a two‐bit quantity, contains the value to be assigned to A and
B. After it is incremented in each successive iteration, the low order bit is
assigned to A, and the high order bit is assigned to B.
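The stimulus generator itself might look like the following sketch. The full listing is not reproduced in this extract, so the port declarations and the iteration timing are reconstructed from the description above; the #10 delay is an assumption.

```systemverilog
// Hedged reconstruction of the stimulus generator described above.
module stimulus(A, B);
  output A, B;
  reg [1:0] stimulus;  // two-bit quantity holding the next A/B values

  assign A = stimulus[0];  // low-order bit drives A
  assign B = stimulus[1];  // high-order bit drives B

  initial stimulus = 0;
  always #10 stimulus = stimulus + 1;  // increment each iteration
endmodule
```

Because the two-bit counter wraps, the generator cycles through all four rows of the truth table in order.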
The scoreboard is responsible for answering the question “Does it work?” It
watches the activity on the DUT and reports back whether it operated
correctly.1 One important thing to notice is that the structure of the
scoreboard is strikingly similar to the structure of the DUT. This makes sense
when you consider that the purpose of the scoreboard is to track the activity
of the DUT and determine whether the DUT is working as expected.
1. For anything more sophisticated than an AND gate, the monitor and response
checker would be separate components in the testbench. For the trivial AND gate
testbench, this would be more trouble than it is worth and would cloud the basic
principles we are illustrating.
      $time, A, B, Y_sb, Y,
      ((Y_sb == Y) ? "Match" : "Mis-match"));
  end
endmodule
The scoreboard pins are all inputs. The scoreboard does not cause activity on
the design. It passively watches the inputs and outputs of the DUT.
The top‐level module, shown below, is completely structural; it contains
instantiations of the DUT, the scoreboard, and the stimulus generator, along
with the code necessary to connect them.
module top;
  stimulus s(A, B);
  and2 a(Y, A, B);
  scoreboard sb(Y, A, B);

  initial
    #100 $finish(2);
endmodule
When we run the simulation for a few iterations, here is what we get:
Each message has two parts. The first part shows the stimulus being applied.
The second part shows the result of the scoreboard check that compares the
DUT’s response to the predicted response. We have used a colon to separate
the two parts.
This simple testbench illustrates the use of a stimulus generator and a
scoreboard that serves as a reference. Although the DUT is a simple AND
gate, all the elements of a complete testbench are present.
1.3 Second Testbench
The previous example illustrated elementary verification concepts using a
combinational design, an AND gate. Combinational designs, by their very
nature, do not maintain any state. In our second example, we look at a
slightly more complex design that maintains state data and uses a clock to
cause transitions between states.
The verification problem associated with synchronous (sequential) designs is
a little different than for combinational designs. Everything you need to know
about a combinational design is available at its pins. A reference model for a
combinational device simply needs to compute f(x) where x represents the
inputs to the device and f is the function it implements. The outputs of a
sequential device are a function of its inputs and its internal state. Further
computation may change the internal state. The scoreboard must track the
internal state of the DUT and compare the output pins.
A combinational device can be exhaustively verified by exercising all possible
inputs. For a device with n input pins, we must apply a total of 2^n input
vectors. The number 2^n can be large, but computing that many inputs is easy.
We just need an n-bit counter and apply each value of the counter to
the inputs of the device.
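As a sketch, the counter-based exhaustive stimulus looks like the following; the width N, the settling delay, and the DUT hookup are arbitrary choices for illustration.

```systemverilog
// Hedged sketch: sweeping all 2**N input vectors of a combinational
// device with an N-bit counter.
module exhaustive_stim #(parameter N = 4) (output reg [N-1:0] vec);
  integer i;
  initial begin
    for (i = 0; i < (1 << N); i = i + 1) begin
      vec = i[N-1:0];
      #10;  // let outputs settle so a scoreboard can check them
    end
    $finish;
  end
endmodule
```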
For a sequential device the notion of “done” must extend to covering not only
the total number of possible inputs, but also the number of possible internal
states. For a device with n inputs and m bits of internal state, you must cover
(2^n inputs) * (2^m states), which is 2^(n+m) combinations of internal states and
inputs. For a device with 64 input pins and a single 32-bit internal register, the
number of state/input combinations is 2^96 — a very large number indeed!
Even for very large numbers of combinations, the verification problem would
not be too difficult if it were possible to simply increment a counter to reach
all combinations, as we do with combinational devices. Unfortunately it is not
possible. The internal state is not directly accessible from outside the device.
It can only be modified by manipulating the inputs. The problem now
becomes how to reach all the states in the device through only manipulating
the inputs. This is a difficult problem and requires a deep understanding of
the device to generate sequences of inputs to reach all the states.
Since it is difficult to reach all the states, the obvious question becomes, “Can
we prune the problem by reducing the number of states that we need to reach
to show that the device works correctly?” The answer is yes. Now the
question becomes, “How do we decide which states do not need to be
covered?”
This topic is complex, and a full treatment of it is beyond the scope of this
text. However, we can give an intuitive answer to the question. States that can
be shown to be unreachable, through formal verification or other techniques,
do not need to be covered. The designer should consider simplifying the
design to remove unreachable states since they provide no value. States that
have a low probability of being reached may also be eliminated from the
verification plan. Determining the probability threshold and assigning
probabilities to states is as much an art as a science. It involves understanding
how the design is used and which inputs are expected (as compared to which
are possible).
It is also possible to eliminate coverage of states that are functionally
equivalent. Consider a packet communications device. In theory, every
possible packet payload value represents a distinct state (or set of states) as it
passes through the design, and it should be covered during verification.
However, it is probably not a stretch to consider that arbitrary non‐zero
values are, for all intents and purposes, equivalent. Of course, there might be
some interesting corner cases that must be checked, such as all zeros, all ones,
particular values that might challenge the error correction algorithms, and so
forth.
For complex sequential designs, determining which states to cover (and
which do not need to be covered) and how to reach those states with minimal
effort is a problem that keeps verification engineers employed. In this section
we will consider a small sequential device whose internal states can easily be
covered.
1.3.1 Three‐Bit Counter
The design, shown in Figure 1‐5, is a three‐bit counter with an asynchronous
reset. Each time the clock pulses high, the count increments. The design is
composed of three toggle flip‐flops, each of which maintains a single bit of the
counter. The flip‐flops are connected with some combinational logic to form a
counter. Each toggle flip‐flop toggles when the T input is high. When T is low,
the flip‐flop maintains its current state. When the active low reset is set to 0,
the flip‐flop moves to a 0 state.
Figure 1-5 Three-Bit Counter: three toggle flip-flops, producing outputs Q0,
Q1, and Q2, share common clk and reset signals.
The code for the counter is contained in two modules: One is a simple toggle
flip‐flop, and the other connects the flip‐flops with the necessary glue logic to
form a counter. First, the toggle flip‐flop:
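The original listing is not reproduced in this extract; the following is a plausible reconstruction based on the behavior described above (toggle when T is high, hold when T is low, asynchronous active-low reset to 0). Port names and ordering are assumptions.

```systemverilog
// Hedged reconstruction of the toggle flip-flop described in the text.
module toggle(q, t, clk, reset);
  output reg q;
  input t, clk, reset;

  always @(posedge clk or negedge reset)
    if (!reset)
      q <= 0;      // active-low reset forces the 0 state
    else if (t)
      q <= ~q;     // toggle when T is high; otherwise hold
endmodule
```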
The counter comprises three toggle flip‐flops and an AND gate.
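A hedged sketch of that counter module, with instance and port names assumed, connects the flip-flops as in Figure 1-5: bit 0 toggles every cycle, bit 1 toggles when Q0 is 1, and bit 2 toggles when Q0 and Q1 are both 1.

```systemverilog
// Hedged sketch of the counter: three toggle flip-flops plus an
// AND gate forming the enable for the third stage.
module counter(q, clk, reset);
  output [2:0] q;
  input clk, reset;
  wire t2;

  and2 a(t2, q[0], q[1]);             // Q0 & Q1 enables bit 2
  toggle ff0(q[0], 1'b1, clk, reset); // toggles every clock
  toggle ff1(q[1], q[0], clk, reset); // toggles when Q0 == 1
  toggle ff2(q[2], t2,   clk, reset); // toggles when Q0 & Q1
endmodule
```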
The design is straightforward but has characteristics that are common in real
designs, and that require some attention to properly verify it. The key
characteristics are that the design is driven by a clock and that it maintains
internal state. The AND gate from the previous example does not maintain
any state. All of the information about what the design is doing can be
gleaned from its pins. In a design with internal data, that is not the case. This
difference is reflected in the design of our scoreboard. Figure 1‐6 shows the
organization for the testbench for the three‐bit counter.
Figure 1-6 Testbench Organization for Three-Bit Counter: a clock generator
and a controller drive the counter DUT, and the counter outputs feed the
scoreboard.
In many respects, the testbench for the three‐bit counter is much like the one
for the AND gate. Both have a scoreboard whose role is to watch what the
design is doing and determine whether it is working correctly. Both have a
device for driving the DUT. However, we manage operation differently for
these designs. We use a stimulus generator for the AND gate, but we use a
controller for the three‐bit counter. The three‐bit counter is a free‐running
device. As long as it is connected to a running clock, it will continue to count.
So we do not need a stimulus generator as we did with the AND gate.
Instead, the controller manages the operation of the DUT and testbench. The
controller provides an initial reset so that the count starts from a known
value. It also stops the simulation at the appropriate time.
The scoreboard must track the internal state of the DUT. It does this using the
variable count. Like the DUT, when reset is activated count is set to 0. Each
clock cycle count increments, and the new value is compared with the count
from the DUT.
      count = 0;
    #0;
    if (count == q)
      $display("time=%t q=%b count=%d match!",
               $time, q, count);
    else
      $display("time=%t q=%b count=%d <-- no match",
               $time, q, count);
  end
endmodule
The scoreboard has a high‐level model of the counter. It uses an integer
variable and the plus (+) operator to form a counter instead of flip‐flops and
AND gates. Each time the clock pulses, it increments its count, just like the
RTL counter. It also compares to see if its internal count matches the output of
the counter DUT.
For completeness, we show the clock generator and top‐level module. The
clock generator simply initializes the clock to zero, and then it toggles it every
5ns.
module clkgen(clk);
  output clk;
  bit clk;

  initial
    begin
      clk = 0;
    end

  always
    begin
      #5;
      clk = !clk;
    end
endmodule
The top‐level module is typical of most testbenches. It connects the DUT and
the testbench components.
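The full top-level listing is not reproduced in this extract; a plausible sketch, following the organization of Figure 1-6, looks like this. The control module and its ports are assumptions.

```systemverilog
// Hedged sketch of the top-level module for the counter testbench.
module top;
  wire clk, reset;
  wire [2:0] q;

  clkgen     ck(clk);
  control    c(reset, clk);     // hypothetical controller: reset, shutdown
  counter    dut(q, clk, reset);
  scoreboard sb(q, clk, reset);
endmodule
```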
We have illustrated a simple testbench that contains the elements used in
much more sophisticated testbenches. Sequential designs that maintain
internal state require a scoreboard that operates in parallel with the DUT. It
performs the same computations as the DUT but at a higher level of
abstraction. The scoreboard also compares its own computation with the
outputs received from the DUT.
1.4 Summary
In this chapter we looked at how to structure an overall verification process.
The process is based on two fundamental questions: “Does it work?” and
“Are we done?” Through some simple examples, we illustrated how to build
testbench machinery to answer these questions with devices such as stimulus
generators and scoreboards. The rest of this book will show how to apply
transaction‐level modeling techniques to build practical, scalable, reusable
testbench components that answer the questions and how to connect them to
form testbenches.
2
Introduction to the AVM
In this chapter we’ll give an overview of the methodology and provide a
framework for testbench architectures. We’ll also give you a tour of the
library and show you the essential components we use throughout the rest of
the book.
2.1 Reuse
Building verification components and connecting them in a testbench is a
complex and sometimes difficult problem. Subsequent generations of a
design or a family of similar designs will have similar verification problems,
and thus the verification team can find itself having to solve similar problems
repeatedly. We can improve productivity by reusing as much as possible from
previous testbenches rather than re‐inventing everything from scratch each
time. Reusability is a key consideration when constructing verification
components.
1. “methodology.” Webster’s Third New International Dictionary, Unabridged.
Merriam-Webster, 2002. http://unabridged.merriam-webster.com (7 May 2006).
Reuse is the entire raison d’être of the AVM—reuse of verification solutions
and reuse of verification components. By reusing something that is already
done you:
Don’t have to reinvent new solutions
Don’t have to spend time writing new code
Have the increased confidence of using components that have
been used before and therefore are more stable and reliable than
brand new code
There are various kinds of reuse, each with its own set of trade‐offs between
difficulty in building a reusable component and general applicability. The
difference is in how much the component is parameterized:
No parameters
Simple scalar parameters
Type parameters
2.2 Components and Interfaces
In Verilog, the basic unit of functionality is the module. Modules contain both
structure and behavior and have interfaces to connect to other modules.
SystemC has a similar construct, sc_module, which provides a place to hold
ports to connect to other sc_modules, structural hierarchy, and behavior in
the form of threads and methods. An essential feature of both of these
language constructs is that they form a boundary around some structure and
behavior, and they communicate to the outside world through a well‐defined
interface.
A large focus of the AVM is how to build reusable components. To build a
reusable component, you must clearly define the interface(s) through which it
communicates, the semantics of the communication, and the boundaries of
structure and behavior. Components are black boxes, connecting and
communicating only through interfaces. You cannot “reach inside” a
component to access internal objects.
2.3 Layered Organization of Testbenches
Just as a design is a network of design components, a testbench is a network
of verification components. The AVM defines verification components, their
structure and interfaces. This section describes the AVM components.
AVM testbenches are organized in layers. The bottommost layer is the DUT,
an RTL device with pin‐level interfaces. Above that is a layer of transactors,
devices that convert between the transaction‐level and pin‐level worlds. The
components in the layers above the transactor layer are all transaction‐level
components. The diagram below illustrates the layered testbench
organization. The box on the left identifies the name of the layer. The box on
the right lists the type of components in that layer. The vertical arrows show
which layers communicate directly. For example, the control layer
communicates with the analysis, operational, and transactor layers but not
directly with the DUT.
Figure 2-1 AVM Testbench Architecture Layers (summarized):
Control layer (untimed, transaction-level): test controller
Analysis layer (untimed, transaction-level): coverage collectors, performance
analyzers, scoreboards, golden models (design-specific)
Operational layer (untimed or partially timed, transaction-level): stimulus
generators, masters, slaves, constraints (design-specific)
Transactor layer (protocol-specific, converting transactions to pins): drivers,
monitors, responders
You can also view a testbench as a concentric organization of components.
The innermost ring maps to the bottom layer and the outermost ring maps to
the top layer. Some find it easier to understand the relationships between the
layers using a netlist style diagram.
Figure 2-2 Concentric Testbench Organization: the stimulus generator feeds
the driver, which drives the DUT; a responder connects the DUT to the
master/slave; monitors on both sides of the DUT feed the scoreboard, the
coverage collector, and the controller.
2.3.1 Transactors
The role of a transactor in a testbench is to convert a stream of transactions to
pin‐level activity or vice versa. Transactors are characterized by having at
least one pin‐level interface and at least one transaction‐level interface.
Transactors come in a wide variety of shapes, colors, and styles. We’ll focus
on monitors, drivers, and responders.
Monitor. A monitor, as the name implies, monitors a bus. It watches the pins
and converts their wiggles to a stream of transactions. Monitors are passive,
meaning they do not affect the operation of the DUT in any way.
Driver. A driver converts a stream of transactions into pin‐level activity.
Responder. A responder is much like a driver, but it responds to activity on
pins rather than initiating activity.
2.3.2 Operational Components
The operational components are the set of components that provide all the
things the DUT needs to operate. The operational components are responsible
for generating traffic for the DUT. They are all transaction‐level components
and have only transaction‐level interfaces. The ways to generate stimulus are
as varied as the kinds of devices there are to verify. We’ll look at three general
kinds of operational components: stimulus generators, masters, and slaves.
A scenario generator is a form of stimulus generator. Instead of simply
generating a stream of randomized requests, a scenario generator produces
directed or directed random sequences that are intended to perform a specific
function on the DUT, or exercise a particular scenario.
Figure 2-3 A Master and a Slave: the master issues requests to the slave, and
the slave returns responses.
2.3.3 Analysis Components
Analysis components receive information about what’s going on in the
testbench and use that information to make some determination about the
correctness or completeness of the test. Two common kinds of analysis
components are scoreboards and coverage collectors.
Coverage Collector. Coverage collectors count things. They tap into streams of
transactions and count the transactions or various aspects of the transactions.
The purpose is to determine verification completeness. The particular things
that a coverage collector counts depend on the design and the specifics
of the test. Common things that coverage collectors count include: raw
transactions, transactions that occur in a particular segment of address space,
and protocol errors. The list is limitless.
Coverage collectors can also perform computations as part of a completeness
check. For example, a coverage collector might keep a running mean and
standard deviation of data being tracked. Or it might keep a ratio of errors to
good transactions.
2.3.4 Controller
Controllers form the main thread of a test and orchestrate the activity.
Typically, controllers receive information from scoreboards and coverage
collectors and send information to environment components.
For example, a controller might start a stimulus generator running and then
wait for a signal from a coverage collector to notify it when the test is
complete. The controller, in turn, stops the stimulus generator. More elaborate
variations on this theme are possible. In one configuration, a controller
supplies a stimulus generator with an initial set of constraints and
starts the stimulus generator running. When a particular ratio of packet types
is achieved, the coverage collector signals the controller. Rather than stopping
the stimulus generator, the controller may send it a new set of constraints.
2.4 Two Domains
We can view the set of components in a testbench as belonging to two
separate domains. The operational domain is the set of components, including
the DUT, that operate the DUT. These are the stimulus generators, BFMs, and
similar components that generate stimulus and provide responses that drive
the simulation. The DUT, responder, and driver transactors—along with
environment components that directly feed or respond to drivers and
responders—comprise the operational domain. The rest of the testbench
components—monitor transactors, scoreboards, coverage collectors, and
controller—comprise the analysis domain. These are the components that
collect information from the operational domain.
Data must be moved from the operational domain to the analysis domain in a
way that does not interfere with the operation of the DUT and that preserves
event timing. This is accomplished with a special communication interface
called an analysis port. Analysis ports are a special kind of transaction port in
which a publisher broadcasts data to one or more subscribers. The publisher
signals all the subscribers when it has new data ready.
One of the key features of analysis ports is that they have a single interface
function, write(). Analysis FIFOs, the channels used to connect analysis
ports to analysis components, are unbounded. This guarantees that the
publisher doesn’t block and that it deposits its data into the analysis FIFO in
precisely the same delta cycle in which it was generated. Chapter 7 discusses
analysis ports and analysis FIFOs in more detail.
Figure 2-4 Connection between Operational and Analysis Domains: analysis
ports carry data up from the operational domain to the analysis domain, and
control and configuration interfaces carry decisions back down to the
operational domain.
Generally, the operational and analysis domains are connected by analysis
ports and control and configuration interfaces. Analysis ports tap off data
concerning the operation of the DUT. These data might include bus
transactions, communication packets, and status information (success or
failure of specific operations). The components in the analysis domain
analyze the data and make decisions. The results of those decisions can be
communicated to the operational domain via the control and configuration
interfaces. Control and configuration interfaces can be used to start and stop
stimulus generators, change constraints, modify error rates, or manipulate
other parameters affecting how the testbench operates.
2.5 Tour of the SystemVerilog AVM
The SystemVerilog AVM is a set of base classes and utilities that facilitate
building AVM‐style testbenches. Inasmuch as testbenches are networks of
verification components, the first facility that the library provides is a way to
build verification components using SystemVerilog classes and connect them
with ports and channels. The AVM library takes advantage of the object‐
oriented class facilities in SystemVerilog. Components, channels, and
transaction‐level ports are created using classes.
2.5.1 Named Components
To construct testbenches as networks of components, we first need to be able
to build components. The base class avm_named_component serves as the
shell for all AVM class‐based components, providing each component with a
name and enabling a hierarchical organization of multiple
avm_named_components.
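For example, a minimal component class might look like the following sketch. Only the constructor pattern comes from the text; the rest of the class body is an assumption.

```systemverilog
// Hedged sketch: a minimal component derived from the AVM base class.
class my_component extends avm_named_component;
  // Same constructor arguments as the base class; super.new()
  // registers the name and connects into the hierarchy.
  function new(string name, avm_named_component parent = null);
    super.new(name, parent);
  endfunction
endclass
```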
The constructor of my_component takes the same arguments as the base class.
In turn, it must call the constructor of the base class to register the name and
connect into the hierarchy correctly. Let’s look at how to construct a simple
hierarchy with named components.
Figure 2-5 A Simple Named Component Hierarchy: a component named top
with two subordinates, u1 and u2.
In Figure 2‐5, the topmost component in the hierarchy is called “top.” It has
two subordinate components, u1 and u2, which are instances of
my_component:
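The listing itself is not reproduced in this extract; a plausible reconstruction, with the details inferred from the surrounding description, looks like this:

```systemverilog
// Hedged reconstruction: "top" builds u1 and u2 in its constructor.
class top extends avm_named_component;
  my_component u1, u2;

  function new(string name, avm_named_component parent = null);
    super.new(name, parent);
    // Each call to new() passes the component's name and its parent.
    u1 = new("u1", this);
    u2 = new("u2", this);
  endfunction
endclass
```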
In the sample code above, the class top contains two instances of
my_component, u1 and u2. Those instances come to life in the constructor.
Each call to new() passes two arguments, the name of the component and its
parent. In this case, as in most cases, the parent of a subordinate component is
this, a reference to the current component.
2.5.2 Threaded Components
Threaded components are named components that have their own thread of
execution. The base class avm_threaded_component provides a place to
create a thread and control its execution. Threaded components are named
components; their constructor takes name and parent arguments. Here’s a
simple threaded component that generates sequences of randomized
transactions.
      t = new();
      t.randomize();
      put_port.put(t);
    end

  endtask

endclass
The environment, which we will introduce shortly, causes all of the run()
tasks in all of the threaded components in the testbench to be forked as
independent processes.
2.5.3 Ports and Exports
You will use ports and exports to connect two devices at the transaction level.
For a full discussion of how to connect components, see “AVM Mechanics” on
page 79.
2.5.4 The Environment
The environment is the top‐level container for testbenches. It holds the top‐
level components in the hierarchies of avm_named_components, and it
oversees testbench execution. The base class for environments is avm_env.
You will build your own environments by deriving (extending) this class.
Here’s what our little three‐component hierarchy looks like in the context of
an environment:
class env extends avm_env;  // header reconstructed; original listing truncated
  function new(string name);  // no parent argument for the top-level env
    super.new(name);
  endfunction

  task run();
    // orchestrate the execution of the testbench
  endtask
endclass
avm_env is itself derived from avm_named_component, so its constructor looks
much like that of avm_named_component. The arguments don’t include a
parent, since avm_env is the top of a component hierarchy and therefore has
no parent.
2.5.5 Messaging
The AVM provides some facilities for managing messages. In addition to
generating messages of various kinds, you can also filter them at run time to
change how verbose your testbench program is. You can also route messages
to one or more files.
Basic messaging is performed through four functions:
avm_report_message()
avm_report_warning()
avm_report_error()
avm_report_fatal()
These functions each take the same arguments: id, message, verbosity,
filename, and line number. The first two arguments are required; the rest are
optional. Id is an arbitrary string used to categorize messages. This argument
is useful for filtering messages. Message is the text of the message. If it contains
formatted numbers or text, then you need to use $sformat or some other
means of generating the message string before passing it to avm_report_*().
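The filtering idea can be sketched in ordinary C++. This is only an illustration of the concept, not the AVM API; the reporter class, its method name, and its threshold scheme are all invented for the example.

```cpp
#include <iostream>
#include <string>

// Illustrative sketch only -- not the AVM API. Each message carries an
// id (for categorization) and a verbosity level; anything noisier than
// the run-time threshold is filtered out.
class reporter {
public:
    explicit reporter(int threshold) : verbosity_threshold(threshold) {}

    // Returns true if the message was emitted, false if it was filtered.
    bool report_message(const std::string& id, const std::string& msg,
                        int verbosity = 100) {
        if (verbosity > verbosity_threshold)
            return false;               // filtered out at run time
        std::cout << "[" << id << "] " << msg << "\n";
        return true;
    }

private:
    int verbosity_threshold;  // raise to see more messages, lower to see fewer
};
```

With a threshold of 100, such a reporter would emit a message posted at verbosity 50 and silently drop one posted at verbosity 500, which is the same knob the text describes for controlling how verbose a testbench is.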
2.6 Summary
We’ve shown how AVM testbenches are organized as concentric layers of
components. The inner‐most layer contains fully timed, pin‐level devices and
the outer‐most layer contains untimed, transaction‐level devices. Layers in
between provide various protocol conversion, control, and analysis services.
The AVM library is a collection of base classes that facilitate building layered
testbenches. The base classes include classes for building named components,
threaded components, and ports and exports for connecting the components
with transaction‐level interfaces. The AVM also provides an advanced
messaging facility that provides fine‐grained control over the generation and
presentation of messages. Throughout the rest of this book we will show you
how to use these classes to build testbenches.
3
Fundamentals of Object-
Oriented Programming
In this chapter, we introduce the basic concepts of OOP. This includes the
notions of encapsulation and interface. We’ll conclude the chapter with a
discussion of why OOP is important for building testbenches.
3.1 Procedural vs. OOP
To understand OOP and the role it plays in verification, it is beneficial to first
understand traditional procedural programming and its limitations. This sets
the foundation for understanding how OOP can overcome those limitations.
In the early days of assembly language programming, programmers and
computer architects quickly discovered that programs often contained
sequences of instructions that were repeated throughout a program.
Repeating lots of code (particularly with a card punch) is tedious and error
prone. Making a change to the sequence involved locating each place the
sequence appeared in the program and repeating the change in each location.
To avoid the tedium and the errors caused by repeated sequences, the
subroutine was invented.
A subroutine is a unit of reusable code. Instead of coding the same sequence
of instructions inline you call a subroutine. Parameters passed to subroutines
allow you to dynamically modify the code. That is, each call to a subroutine
with different values for the parameters causes the subroutine to behave
differently based on the specific parameter values.
Every programming language of any significance has constructs for creating
subroutines, procedures, or functions, along with syntax for passing
parameters into it and returning values back. There are some operations that
every program must perform, such as I/O, data conversions, numerical
methods, and so forth. As a result of the common need for these operations,
programmers found it valuable to create libraries of commonly used
functions. Most programming languages include such a library as part of the
compiler package. One of the most well‐known examples is the C library that
comes with every C compiler. It contains useful functions such as printf(),
cos(), atof(), and qsort(). These are functions that virtually every
programmer will use.
Imagine having to write your own I/O routines or your own computation for
converting numbers to strings and strings to numbers. There was a time when
programmers did just that. Libraries of reusable functions changed all that
and increased overall programming productivity.
As software practice and technology advanced, programmers began thinking
more about abstraction than instructions and subroutines. Instead of writing
individual instructions, programmers now code in languages that provide
highly abstracted models of the computer, and compilers or interpreters
translate these models into specific instructions. A library, such as the C
library, is a form of abstraction. It presents a set of functions that
programmers can use to construct ever more complex programs or
abstractions.
In his seminal book Algorithms + Data Structures = Programs, Niklaus Wirth
explains that to solve any programming problem you must devise an
abstraction of reality that has the characteristics and properties of the problem
at hand and ignores the rest of the details. He argues that the collection of
data you need to solve a problem forms the abstraction. So before you can
solve a problem, you first need to determine what data you need to have to
create the solution.
To continue building reusable abstractions, we need to create libraries of data
objects that can be reused to solve specific kinds of problems. The search for
ways to do this led to object‐oriented technology. Object‐oriented analysis
and design is centered on data objects, the functionality
associated with each object, and the relationships between objects.
The goal of OOP is to facilitate separation of concerns, a phrase coined by
Edsger Dijkstra in his 1974 essay titled “On the Role of Scientific Thought.”1
In this essay he quotes himself:
It is what I sometimes have called “the separation of concerns”,
which, even if not perfectly possible, is yet the only available tech‐
nique for effective ordering of one's thoughts, that I know of. This is
what I mean by “focussing one's attention upon some aspect”: it does
not mean ignoring the other aspects, it is just doing justice to the fact
that from this aspect's point of view, the other is irrelevant. It is being
one‐ and multiple‐track minded simultaneously....
3.2 Classes and Objects
The primary unit of programming in object‐oriented languages, such as
SystemVerilog or SystemC (C++), is the class. A class contains data elements,
called members, and functions, called methods. To execute an object‐oriented
program, you must instantiate one or more classes in a main routine and then
call methods on the various objects. Although the terms class and object are
sometimes used interchangeably, typically the term class refers to a class
declaration or an uninstantiated object, and the term object refers to an
instantiated class.
1. The complete text of Dijkstra’s essay is at http://www.cs.utexas.edu/users/EWD/
ewd04xx/EWD447.PDF
To illustrate these concepts, below is an example of a simple class called
register that is rendered in both SystemVerilog and SystemC.
SystemVerilog:

  class register;
    bit[31:0] contents;

    function void write(bit[31:0] d);
      contents = d;
    endfunction

    function bit[31:0] read();
      return contents;
    endfunction
  endclass

SystemC:

  class register    // note: "register" is a reserved word in C++;
  {                 // a real implementation would need another name
    sc_bv<32> contents;

    void write(sc_bv<32> d)
    {
      contents = d;
    }

    sc_bv<32> read()
    {
      return contents;
    }
  };
This very simple class has one member, contents, and two methods, read()
and write(). To use this class you create objects by instantiating the class and
then call the object’s methods, as shown below.

SystemVerilog:

  module top;
    register r;
    bit[31:0] d;

    initial begin
      r = new();
      r.write(32'h00ff72a8);
      d = r.read();
    end
  endmodule

SystemC:

  int sc_main(int, char*[])
  {
    register r;
    sc_bv<32> d;

    r.write(0x00ff72a8);
    d = r.read();
    return 0;
  }
You can use classes to create new data types, such as register. Using classes
to create new data types is an important part of OOP. You can also use them
to encapsulate mathematical computations or to create dynamic data
structures, such as stacks, lists, queues, and so forth. Encapsulating the
organization of a data structure or the particulars of a computation in a class
makes the data structure or computation highly reusable.
A useful data type is a pushdown stack. A stack is a LIFO (last in first out)
structure. Items are put into the stack with push(), and items are retrieved
from the stack with pop(). pop() returns the last item pushed and removes it
from the data structure. The internal member stkptr keeps track of the top of
the stack. The item it points to is the top, and everything below it (that is, with
a smaller index) is lower in the stack. Below is a basic implementation of a
stack in SystemVerilog.
1 class stack;
2 typedef bit[31:0] DATA;
3 local DATA stk[20];
4 local int stkptr;
5
6 function new();
7 clear();
8 endfunction
9
10 function bit pop(output DATA data);
11 if(is_empty())
12 return 0;
13 data = stk[stkptr];
14 stkptr = stkptr - 1;
15 return 1;
16 endfunction
17
18 function bit push(DATA data);
19 if(is_full())
20 return 0;
21 stkptr = stkptr + 1;
22 stk[stkptr] = data;
23 return 1;
24 endfunction
25
26 function bit is_full();
27 return stkptr >= 19;
28 endfunction
29
30 function bit is_empty();
31 return stkptr < 0;
32 endfunction
33
34 function void clear();
35 stkptr = -1;
36 endfunction
37
38   function void dump();
39
40     $write("stack:");
41     if(is_empty()) begin
42       $display("<empty>");
43       return;
44     end
45
46     for(int i = 0; i <= stkptr; i = i + 1) begin
47       $write(" %0d", stk[i]);
48     end
49     $display("");
50   endfunction
51
52 endclass
The class stack encapsulates everything there is to know about the stack data
structure. It contains an interface and the implementation of the interface. The
interface is the set of methods that you use to interact with the class. The
implementation is the behind‐the‐scenes code that makes the class operate.
The interface to our stack contains the following methods:
function new();
function bit pop(output DATA data);
function bit push(DATA data);
function bit is_full();
function bit is_empty();
function void clear();
function void dump();
There is no other way to interact with stack than through these methods.
There are also two data members of the class, stk and stkptr, that represent
the actual stack structure. However, these two members are local, which
means that the compiler will disallow any attempts to access them from
outside the class. By preventing access to the internals of the data structure
from outside, we can make some guarantees about the state of the data. For
example push() and pop() can rely on the fact that stkptr is correct and
points to the top of the stack. If it were possible to change the value of stkptr
by means other than using the interface functions, then push() and pop()
would have to resort to additional time‐consuming and possibly difficult
checks to determine the validity of stkptr.
The implementation of the interface occurs inline. The class declaration
contains not only the interface definition, but also the implementation of each
of the interface functions. Both C++ and SystemVerilog allow the
implementation to be separate from the interface. Separating the interface and
the implementation is an important concept. Programmers writing in C++ can
use header files to capture the interface and .cc (or .cpp or whatever
extension your compiler uses) files to hold the implementation.
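As a small C++ sketch of that separation (the counter class and its members are invented for illustration), the declarations that would live in a header are kept apart from the definitions that would live in a .cc file; here both halves are shown in one place so the example is self-contained:

```cpp
// --- What would go in counter.h: the interface users see ---
class counter {
public:
    void increment();
    int value() const;
private:
    int count = 0;   // hidden implementation detail
};

// --- What would go in counter.cc: the implementation, kept separate ---
void counter::increment() { count = count + 1; }
int counter::value() const { return count; }
```

A user of counter needs only the header: the interface fully describes what the class will do, while the definitions can change without touching any calling code.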
There are some important by‐products of enforcing access through class
interfaces. One is reusability. We can more easily reuse classes whose
interfaces are well defined and well explained than those whose interfaces are
fuzzy. Another important by‐product of enforcing access through class
interfaces is reliability. The authors of the class can guarantee certain
invariants (for example, stkptr is less than the size of the available stk array)
when they know that users will not modify the data other than by the means
provided. In addition, users can expect the state of the object to be predictable
when they adhere to the interface. Clarity is another by‐product. An interface
can describe the entire semantics of the class. The object will do nothing other
than execute the operations available through the interface. This makes it
easier for those who use the class to understand exactly what it will do.
3.3 Object Relationships
The true power of OOP becomes apparent when objects are connected in
various relationships. There are many kinds of relationships that are possible.
We will consider two of the most fundamental relationships HAS‐A and IS‐A.
3.3.1 HAS‐A
HAS‐A refers to the concept of one object contained or owned by another. The
HAS‐A relationship is represented by members. For example, in our stack
class, the stack HAS‐A stack pointer (stkptr) and stack array. Those are
primitive data types, not classes, but the same concept of HAS‐A applies. In
SystemVerilog and SystemC, you can create HAS‐A relationships between
classes with references or pointers. The diagram below illustrates a HAS‐A
relationship.
The figure below illustrates the underlying memory model for a HAS‐A
relationship. Object A contains a reference or a pointer to object B.
Figure 3‐1 HAS‐A Relationship
Figure 3‐2 UML for a HAS‐A Relationship
Object A owns an instance of object B. Coding a HAS‐A relationship in
SystemVerilog involves instantiating one class inside another or otherwise
providing a handle to one class that is stored inside another.
class B;
endclass

class A;
  B b;

  function new();
    b = new();
  endfunction
endclass
Class A contains a reference to class B. The constructor for class A (function
new()) calls new() on class B to create an instance of it. The member b holds a
reference to the newly created instance of B.
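The same HAS‐A relationship can be sketched in C++, where A’s constructor likewise brings the contained B to life (the class names follow the SystemVerilog example above; the id member and get_b() accessor are invented so the relationship can be exercised):

```cpp
#include <memory>

class B {
public:
    int id = 42;   // illustrative member
};

class A {
public:
    // A HAS-A B: the constructor creates the owned instance
    A() : b(new B) {}
    B& get_b() { return *b; }
private:
    std::unique_ptr<B> b;  // a pointer member realizes the HAS-A relationship
};
```

Constructing an A automatically constructs its B, and the B is destroyed when its owner goes away, which is exactly the containment the diagram expresses.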
3.3.2 IS‐A
The IS‐A relationship is most often referred to as inheritance. A new class is
derived from a previously existing class and inherits its characteristics.
Objects created with inheritance are composed using IS‐A. The derived class
is considered a subclass, a more specialized version of the parent class.
To illustrate the notion of inheritance, we will use a portion of the taxonomy
of mammals.
Figure 3‐3 IS‐A Example: Mammal Taxonomy
Animals that are members of the Cetacea, Carnivora, or primate orders are
mammals. These very different kinds of creatures share the common traits of
mammals. Yet cetaceans (whales, dolphins), carnivorans (dogs, bears, raccoons),
and primates (monkeys, humans) each have their own distinct and unmistakable
characteristics. To use OO terminology, a bear IS‐A carnivore and a carnivore
IS‐A mammal. The object bear is composed of attributes of both mammals
and carnivores plus additional attributes that distinguish it from other
carnivores.
To express IS‐A using UML, we draw a line between objects with an open
arrow head pointing to the base class. Traditionally, we draw the base class
above the derived classes, and the arrows point upward, forming an
inheritance tree (or a directed acyclic graph that can be implemented in
languages such as C++ that support multiple inheritance).
Figure 3‐4 UML for IS‐A relationship
Figure 3‐5 Example of IS‐A Relationship
Here is what this composition looks like in SystemVerilog:
class A;
  int i;
  real f;
endclass

class B extends A;
  string s;
endclass
Class B is derived from A, so it contains all the attributes of A. Any instance of
B not only contains the string s, but also the floating point value f and the
integer i.
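The equivalent composition in C++ uses public inheritance. This sketch mirrors the SystemVerilog classes above (the member names are the same; the floating‐point member is declared float here):

```cpp
#include <string>

class A {
public:
    int i;
    float f;
};

// B IS-A A: public inheritance means every B carries
// all of A's members plus its own
class B : public A {
public:
    std::string s;
};
```

Because B IS‐A A, a B object can be used anywhere an A is expected; a reference of type A& can bind directly to a B instance and access the inherited i and f.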
This is but a brief introduction to the idea of inheritance in OOP. For
references to texts with more in‐depth discussion of inheritance and other
object‐oriented concepts and techniques, see the Bibliography in the
appendixes.
3.4 Virtual Functions and Polymorphism
One of the reasons for composing objects through inheritance is to establish
different behaviors for the same operation. In other words, the behavior
defined in a derived class overrides behavior defined in a base class. The
means to do this is through virtual functions. A virtual function is one that can
be overridden in a derived class. Consider the following generic packet class.
class generic_packet;
  addr_t src_addr;
  addr_t dest_addr;

  virtual function void set_header();
  endfunction

  virtual function void set_trailer();
  endfunction

  virtual function void set_body();
  endfunction

endclass
It has three virtual functions to set the contents of the packet. Different kinds
of packets require different kinds of contents. We use generic_packet as a
base class and derive different kinds of packets from it.
Both packet_A and packet_B may have different headers and trailers and
different payload formats. The knowledge about how the parts of the packet
are formatted is kept locally inside the derived packet classes. The virtual
functions set_header(), set_trailer(), and set_body() are implemented
differently in each subclass based on the packet type. The base class
generic_packet establishes the organization of the class and the types of
operations that are possible, and the derived classes can modify the behavior
of those operations.
Virtual functions are used to support polymorphism: a single class with many
forms. For example, some processing of packets may not need to know what
kind of packet is being processed. The only information necessary is that the
object is indeed a packet, that is, the current packet is related to the base class
packet via the IS‐A relationship. Virtual functions are the mechanism by
which we can code alternate behaviors for different variations of a packet.
To look a little deeper at how virtual functions work, let’s consider three
classes related to each other by the IS‐A relationship.
Figure 3‐6 Three Classes Related with IS‐A
 1 class figure;
 2
 3   virtual function void draw();
 4     $display("figure::draw");
 5   endfunction
 6
 7   function void compute_area();
 8     $display("figure::compute_area");
 9   endfunction
10
11 endclass
12
13 class polygon extends figure;
14
15   virtual function void draw();
16     $display("polygon::draw");
17   endfunction
18
19   function void compute_area();
20     $display("polygon::compute_area");
21   endfunction
22
23 endclass
24
24
25 class square extends polygon;
26
27   virtual function void draw();
28     $display("square::draw");
29   endfunction
30
31   function void compute_area();
32     $display("square::compute_area");
33   endfunction
34
35 endclass
36
Each function prints out its fully qualified name in the form
class_name::function_name. We can write a simple program that calls each
of these functions to understand how the virtual functions are bound.
37 program top;
38 figure f;
39 polygon p;
40 square s;
41
42 initial begin
43 s = new();
44 f = s;
45 p = s;
46
47 p.draw();
48 p.compute_area();
49 f.draw();
50 f.compute_area();
51 s.draw();
52 s.compute_area();
53 end
54 endprogram
Here is what happens when we run this program:
square::draw
polygon::compute_area
square::draw
figure::compute_area
square::draw
square::compute_area
From the printed output, we can conclude that the functions are bound
according to the following table:
Call                     Bound Function
p.draw()                 square::draw()
p.compute_area()         polygon::compute_area()
f.draw()                 square::draw()
f.compute_area()         figure::compute_area()
s.draw()                 square::draw()
s.compute_area()         square::compute_area()
In all cases, compute_area() was bound to the particular compute_area()
function specified by the type of the reference that called it—p is a reference
to a polygon and so polygon::compute_area() is bound. This is because
compute_area() is non‐virtual. The compiler can easily figure out which
version of the function to call based simply on the declared type of the handle.
Because draw() is virtual, it is not always possible for the compiler to figure
out which function to call. The decision is made at run time using a virtual
table, a table of function bindings. We will not discuss the mechanics of
virtual tables here. A good reference is Inside the C++ Object Model by Stanley
B. Lippman.
Notice that even though p is a polygon, the call to p.draw() results in
square::draw() being called, not polygon::draw(), as you might expect.
The same thing happens with f—f.draw() is bound to square::draw(). The
object we originally instantiated is a square, and even though we assign
handles of different types, the fact that it is a square is not forgotten. This
works only because square is derived from polygon, which in turn is derived
from figure, and because draw() is declared as virtual. A compile‐time error
about type incompatibility would occur if you tried to assign s to p and
square were not derived from polygon.
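The same experiment can be run in C++, and the bindings match the table above. In this sketch the class hierarchy follows the listing, but each function returns its qualified name as a string rather than printing it, so the bindings are easy to check:

```cpp
#include <string>

class figure {
public:
    virtual ~figure() {}
    virtual std::string draw() { return "figure::draw"; }
    // non-virtual: bound by the declared type of the handle
    std::string compute_area() { return "figure::compute_area"; }
};

class polygon : public figure {
public:
    std::string draw() override { return "polygon::draw"; }
    // hides, rather than overrides, figure::compute_area()
    std::string compute_area() { return "polygon::compute_area"; }
};

class square : public polygon {
public:
    std::string draw() override { return "square::draw"; }
    std::string compute_area() { return "square::compute_area"; }
};
```

Calling draw() through a polygon& or figure& bound to a square still reaches square::draw(), because draw() is virtual and dispatched at run time; compute_area() is non‐virtual, so each call resolves at compile time to the version belonging to the handle’s declared type.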
3.5 Generic Programming
An implication of separating concerns is that each concern is represented only
once. Duplicating code violates the principle. In practice, many problems are
quite similar and their solution requires code that is similar, but not identical.
Intuitively, we want to take advantage of code similarity to write code that
can be used in as many situations as possible. This leads us to writing generic
code, code that is highly parameterized so that it can be easily reused in a
wide variety of situations.
Details of generic code are supplied at compile time or run time instead of
hard coding them. The most obvious kind of generic code is the
parameterized function. Consider the fixed (not parameterized) trivial
function doubler():
  function int doubler();
    return 2 * 3;
  endfunction
This function always doubles three, returning six. This particular function
isn’t very interesting because it always returns the same value—the double of
three. A more generic version takes a parameter:
  function int doubler(int x);
    return 2 * x;
  endfunction
This new version of doubler() will double any value provided as an
argument, for example:
doubler(2) -> 4
doubler(37) -> 74
Clearly, the new doubler() is more generic than the old one.
Making programs generic is consistent with the OOP goal of separating
concerns. Thus OOP languages provide facilities for building generic code.
C++ has templates and SystemVerilog has parameterized classes. Using these
facilities, you can parameterize types as well as values. For example:
SystemVerilog:

  class maximizer #(type T = int);
    function T max(T a, T b);
      if(a > b)
        return a;
      else
        return b;
    endfunction
  endclass

C++:

  template <typename T>
  class maximizer
  {
  public:
    T max(T a, T b)
    {
      return (a > b) ? a : b;
    }
  };
There is a subtle but important difference between the SystemVerilog and C++
implementations of maximizer. The SystemVerilog version will work only on
built‐in types. If T is a class type, then a and b will be passed in to max() as
references to class objects. Attempting to compare the relative magnitude of
two references is nonsensical. The C++ version will work on any class or type
for which operator>() is defined. A compile time error will occur if
operator>() is not defined.
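To see the C++ behavior concretely, here is a minimal sketch (max_of and packet are invented names for the example): the template works on built‐in types out of the box, and on any class that supplies operator>().

```cpp
// Generic max: compiles for any T that defines operator>
template <typename T>
T max_of(T a, T b) {
    return (a > b) ? a : b;
}

// A user-defined type made usable with the generic code
// simply by providing operator>
struct packet {
    int length;
    bool operator>(const packet& other) const {
        return length > other.length;
    }
};
```

If packet lacked operator>(), the call max_of(p1, p2) would fail to compile, which is exactly the compile‐time error the text describes.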
To make a version of maximizer that will return the largest of two class
objects, we need to define a method in each class that will compare objects.
This presumes that the type parameter T is really a class, not a built‐in type
such as int or real. Further, it presumes that T has a function called comp(),
which is used to compare itself with another instance. The AVM library
contains a parameterized component called avm_in_order_comparator#(T),
which is used to compare streams of transactions. This component will be
discussed in detail in Chapter 7. It has two variants, one for comparing
streams of built‐in types, avm_in_order_builtin_comparator#(T), and one
for comparing streams of classes, avm_in_order_class_comparator#(T).
The reason we need two in‐order comparator classes is exactly the same
reason we need two maximizers—SystemVerilog does not support operators
that can operate on either classes or built‐in types.
Our stack is not particularly generic. It has a fixed stack size of 20 and the data
type of the items kept on the stack is fixed to be int. Below is a more generic
form of stack that changes these fixed characteristics to parametrized
characteristics.
class stack #(type T = int);
  local T stk[];
  local int size;
  local int stkptr;
  local int tp;

  function new(int size);
    this.size = size;
    stk = new[size];
    clear();
  endfunction

  function bit pop(output T data);
    if(is_empty())
      return 0;
    data = stk[stkptr];
    stkptr = stkptr - 1;
    return 1;
  endfunction

  function bit push(T data);
    if(is_full())
      return 0;
    stkptr = stkptr + 1;
    stk[stkptr] = data;
    return 1;
  endfunction

  function bit is_full();
    return stkptr >= size - 1;
  endfunction

  function bit is_empty();
    return stkptr < 0;
  endfunction

  function void clear();
    stkptr = -1;
  endfunction

  function void traverse_init();
    tp = stkptr;
  endfunction

  function bit traverse_next(output T data);
    if(tp < 0)
      return 0;
    data = stk[tp];
    tp = tp - 1;
    return 1;
  endfunction
endclass
The generic stack class is parameterized with the type of the stack object. The
parameter T contains a type. In this case T can be either a class or a built‐in
type because we are not using operators directly on objects of type T. Any
place in the class where we previously used int as the stack type, we now use
T. For example push() now takes an argument of type T. Class parameters,
such as T, are compile‐time parameters. This means that the value is
established at compile time. Creating an instance of a class with a particular
set of values for the parameters is called specialization. To specialize
stack#(T) we instantiate it with a specific value for the type. For example:
  stack#(bit[31:0]) big_stack;
  stack#(bit[31:0]) little_stack;
The size of the stack is no longer fixed at 20. We use a dynamic array to store
the stack, whose size is specified as a parameter to the constructor. Unlike T,
size is a run‐time parameter—its value is specified when the program runs.
This lets us create multiple stacks, each with a different size.
...
big_stack = new(2048);
little_stack = new(6);
big_stack and little_stack are of the same type. They use the same
specialization of stack#(T). However, they are each instantiated with
different size parameters.
In making stack generic, we made another change. We replaced dump() with
traverse_init() and traverse_next(). dump() relies on the type of the
stack elements, which is not known until compile time. We need to be able to
traverse the stack and format each element no matter what the element type
is. It could be an int, or it could be a complex class with multiple members.
We don’t know what it will be. To keep stack#(T) generic, we must resist all
temptation to establish any reliance on the type of the stack elements.
Whereas dump() would run through the stack elements and print them in
order, traverse_init() sets an internal traversal pointer (tp) to point to the
top of the stack, and traverse_next() hands the current element (as pointed
to by tp) back to the caller and decrements tp. The stack maintains some state
information about the traversal. The state information is reset when
traverse_init() is called.
By making stack#(T) generic, removing reliance on hardcoded types and
sizes, we have made this component highly reusable.
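A comparable generic stack can be sketched with C++ templates, with T as a compile‐time type parameter and the size as a run‐time constructor argument (the class name gstack and its internals are illustrative, not library code):

```cpp
#include <vector>

template <typename T>
class gstack {
public:
    // size is a run-time parameter: different instances, different capacities
    explicit gstack(int size) : max_size(size) {}

    bool push(const T& d) {
        if (is_full()) return false;
        stk.push_back(d);
        return true;
    }
    bool pop(T& d) {
        if (is_empty()) return false;
        d = stk.back();
        stk.pop_back();
        return true;
    }
    bool is_full() const { return static_cast<int>(stk.size()) >= max_size; }
    bool is_empty() const { return stk.empty(); }

    // Traversal in the spirit of traverse_init()/traverse_next():
    // walk from the top of the stack down without modifying it,
    // leaving the formatting of each element to the caller.
    void traverse_init() { tp = static_cast<int>(stk.size()) - 1; }
    bool traverse_next(T& d) {
        if (tp < 0) return false;
        d = stk[tp--];
        return true;
    }

private:
    int max_size;          // run-time capacity
    int tp = -1;           // traversal pointer
    std::vector<T> stk;    // storage for elements of type T
};
```

Because the traversal hands elements back to the caller instead of printing them, the class itself never relies on what T is, mirroring the replacement of dump() with traverse_init()/traverse_next().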
3.6 Objects as Components
Interestingly, HDLs, such as Verilog and VHDL, though not considered
object‐oriented languages, are built around concepts quite similar to classes
and objects. Module instances in Verilog, for example, are objects, each with
its own data space and set of tasks and functions. Just like objects in OO
programs, each instance of a module is an independent copy. All instances
share the same set of tasks and functions and the same interfaces, but the data
contained inside each one is independent from all other instances. Modules
are controlled by their interfaces. Verilog modules do not support inheritance
(that is, the ability to form IS‐A relationships) or type parameterization, and
they are static, which makes them unsuitable for true OOP.
The similarity between classes and modules opens up an opportunity for us
to use class objects in a hardware context. We can create verification
components as instances of classes, giving us the flexibility of classes along
with the connection to hardware elements. The designers of SystemVerilog
have capitalized on this relationship when extending Verilog with classes,
providing the capability for a class to work a lot like modules.
The table below compares features of classes in Verilog, SystemVerilog, and
C++.

Feature                  Verilog modules   SystemVerilog classes   C++ classes
Inheritance (IS‐A)       no                yes                     yes (multiple)
Type parameterization    no                yes                     yes (templates)
Dynamic creation         no                yes                     yes
A class can also connect to pin‐level activity through a virtual interface (a
virtual handle to a SystemVerilog interface construct). We can write a class
containing references to items inside an interface that doesn’t yet exist (that
is, one that isn’t yet instantiated).
When the class is instantiated, the virtual interface is connected to a real
interface. This makes it possible for a class object to both drive and respond to
pin activity. SystemC modules are implemented as classes and allow for pins
to be in the port list, providing the same sort of structure.
Sidebar: Simula 67
The relationship between class objects and hardware simulation has been
around for quite some time. Simula 67,1 one of the earliest OOP
languages, was developed explicitly for the purpose of building discrete
event models. Simula 67 has the notion of class objects and a simulation
kernel. It even has a kind of PLI for connecting in external Fortran
programs. Simula provides DETACH and RESUME keywords, which
allow processes to be spawned and reconnected, sort of a fork/join. It has
a special built‐in class called SIMULATION, which provides event list
features.
Even though the terms object and object‐oriented are not used at all in
Simula 67, all modern object‐oriented programs can trace their lineage to
this early programming language. Discrete event simulation languages
also can trace their genesis to Simula 67. For many, bringing together the
ideas of OOP and hardware simulation seems new; but in fact, the two
ideas were born together and only later parted ways. Using OOP with a
discrete event simulator brings us full circle.
Simula 67 still is being used many places around the world, but
its main impact has been through introducing one of the main
categories of programming, more generally labelled object‐ori‐
ented programming. Simula concepts have been important in the
discussion of abstract data types and of models for concurrent
program execution, starting in the early 1970s. Simula 67 and
modifications of Simula were used in the design of VLSI circuitry
(Intel, Caltech, Stanford). Alan Kay's group at Xerox PARC used
Simula as a platform for their development of Smalltalk (first lan‐
guage versions in the 1970s), extending object‐oriented program‐
ming importantly by the integration of graphical user interfaces
and interactive program execution. Bjarne Stroustrup started his
development of C++ (in the 1980s) by bringing the key concepts
of Simula into the C programming language. Simula has also
inspired much work in the area of program component reuse and
the construction of program libraries.
1. Lamprecht, Gunther, “Introduction To Simula 67,” Vieweg, 1983
2. http://heim.ifi.uio.no/~kristen/FORSKNINGSDOK_MAPPE/F_OO_start.html
3.7 OOP and Verification
Building an object‐oriented program and building a testbench are not very
different things. A testbench is a network of interacting components. OOP
deals with defining and analyzing networks of interacting objects. Objects can
be related through IS‐A or HAS‐A and they communicate through interfaces.
OOP just naturally fits the problem of building testbenches.
HDLs, such as Verilog and VHDL, lack many OOP facilities, and thus are not
well suited for building testbenches. The fundamental unit of programming
in most HDLs is the module, which is a static object. Modules come into
existence at the very beginning of the program and persist unmodified until
the program completes. They are syntactically static as well. The syntactic
means to modify a module to create a variant are limited. Verilog allows you
to parameterize scalar values, but not types. Often you are reduced to cutting
and pasting code then making local modifications. If you have ten different
variations you need in a particular design you need to paste ten copies in
appropriate locations and then locally modify each one. Should the template
module change, the one that you pasted around to create the variants, you’ll
have to locate each instance and make those same changes in each one. This
process is not all that different from what our assembly language
programmers had to do fifty years ago.
Languages such as SystemC/C++ and SystemVerilog, which do provide OOP
facilities, are better suited to testbench construction. Using dynamic
classes, parameterized classes, inheritance, and parameterized constructors
you can build highly reusable components. Spending a little extra time to
build a generic component can result in a large productivity gain when that
component is reused in different ways in many places.
4
Introduction to
Transaction-Level Modeling
The process of designing an electronic system involves taking an abstract idea
and successively replacing the abstractions with concrete details until you
reach a representation that can be manufactured in silicon. Some abstractions
have been carefully defined and codified, and have become the medium in
which designs are rendered. RTL is a common abstraction used to create
designs. There are many tools based on the RTL abstraction that make it a
convenient way to initiate the design and verification process.
However, as designs get larger and more complex, it becomes increasingly
convenient to represent them using abstractions higher than RTL.
Transaction‐level modeling (TLM) is becoming popular as a way to create the
first incarnation of a design that can be simulated and analyzed.
4.1 Abstraction
In their book System Design with SystemC, Grötker et al. discuss models of
computation. They define a model of computation as having three
components:
A model of time
Methods of communication between concurrent processes
Rules for process activation
In comparison, transaction‐level models can be timed or untimed and use
channels to communicate between processes. Instead of sending individual
bits back and forth, the processes communicate by sending transactions to
each other through function calls.1 The world of TLM encompasses a range of
models of computation with different time, communication, and process
activation models. In each case, however, the contents of the communication
are at a higher level of abstraction than individual bits. Thus, a transaction‐
level model is at a higher level of abstraction (it is more abstract) than an RTL
model. Combining the notions of abstraction and models of computation, we
can see that making an abstract model means abstracting time, data, and
function. The following sections discuss these elements in detail.
Abstract time. The time abstraction in a simulator refers to how often the
entire design state is consistent. Models that run in event‐driven simulators
(for example, logic simulators) use a discrete notion of time, meaning events
happen at specific time points. Events usually (although not always) cause a
process of some sort to be invoked. As more events occur in a simulation,
more processes are invoked, and more process invocations mean slower
simulation runs. Abstracting time means reducing the number of points
where the design must be consistent, and therefore, the total number of
events and process activations that must occur. For example, in an RTL
model, every net must be consistent after every change. In cycle‐accurate
abstraction, the design must be consistent only on the clock edges,
eliminating all the events that occur between clock edges. In a transaction‐
level model, the design state must be consistent at the end of each transaction,
each of which might consume many clock cycles.
Abstract data. Data refers to the objects communicated between components.
In RTL models, the data refers to individual bits that are passed via nets
between components. In transaction‐level models, data is in the form of
transactions, heterogeneous structures that contain arbitrary collections of
elements.
Consider a packet in a communications device. At the lowest level of detail,
the packet contains start and stop bits, a header, error correction information,
payload size, payload, and a trailer. In an abstract model, only the payload
and size might be needed; the other fields play no part in the calculations
being performed.

1. You could easily make the case that a transfer of a single bit is a transaction in the
most general sense. Even though TLM is a proper superset of RTL modeling (all
RTL models are also transaction-level models), we restrict ourselves to cases
involving higher levels of abstraction than bits when discussing transactions.
Abstract function. The function of a model is the set of all things it must do at
each event. Abstracting function means reducing that set or replacing groups
with simpler calculations. For example, in an ALU, you might choose to use
the native multiplication operation supplied in your modeling language
instead of coding the complete algorithm for a shift‐and‐add multiplier. The
latter may be part of the implementation, but at the higher level, the details of
the shift‐and‐add algorithm are unimportant. The primitives that are part of
the language define how you can abstract function. In a gate‐level language,
for example, you build complex behaviors from gates. In an RTL language,
you build behaviors around arithmetic and logical operations on registers. In
TLM, you implement design functionality with function calls of arbitrary
complexity.
Specific ways of abstracting time, data, and function are called abstraction
levels. RTL is an abstraction level that has far less detail than gate‐level
models or transistor‐level models. For the purposes of functional verification,
RTL is the lowest‐level abstraction that we need to consider. Since
synthesizers can effectively convert RTL to gates, we don’t need to concern
ourselves with lower levels of detail. Besides, anything lower gets into
electrical issues that are beyond the scope of logic design.
4.2 Definition of a Transaction
To effectively talk about TLM in greater detail, we must step back and define
transactions.
A transaction is a quantum of activity that occurs in
a design bounded by time.
This is the most general definition of transaction. It says that a transaction is
everything that occurs in a design (or a module or subsystem within a design)
between two time points. While that is accurate, it is so general that it doesn’t
lead to practical application. A more useful definition is:
A transaction is a single transfer of control or data
between two entities.
This is the hardware‐oriented notion of a transaction. When looking at a piece
of hardware, you can easily identify entities between which control or data is
transferred. In a bus‐based design, reads and writes on a bus can be
considered transactions. In a packet‐based communication system, sending a
packet is a transaction.
A third definition is:
A transaction is a function call.
4.3 Communicating Components
In this section we’ll review the basic means of transmitting a transaction
between components. We’ll examine put, get, and transport forms of
transaction communication. These examples do not use the AVM library, as
they are intended to illustrate the essential mechanics of transaction‐level
communication with minimal overhead. In the next section we’ll look at a
more complete example that uses the AVM library for communication.
4.3.1 Put
Using TLM nomenclature, we say that the initiator puts transactions to the
target.
Figure 4‐1 Put
Figure 4‐1 indicates that A puts transactions to B. The initiator has a port
drawn as a square box, and the target has an export drawn as a circle. The
flow of control is from box to circle, that is, A will call B. The arrow shows the
direction of the data flow, and in this case, it indicates that data will move
from A to B.
We’ll illustrate the code for these components with a producer and a
consumer using both SystemC and SystemVerilog. The producer is the
initiator and the consumer is the target. We must build these components in
such a way that they do not know about each other a priori. To do that, we use
a pure virtual interface to define the function that will be used to transmit
data between the initiator and the target. First, let’s take a look at the
SystemVerilog version of the producer.
class producer;

  put_if put_port;

  task run;

    int randval;

    for(int i = 0; i < 10; i++)
      begin
        randval = $random % 100;
        $display("producer: sending %4d", randval);
        put_port.put(randval);
      end

  endtask

endclass : producer
file: cookbook/04_tlm/01_put_sv/put.sv
60 Communicating Components
The producer is a class, implying that it is created dynamically. It has two key
elements, a run() task and a put_port. The run() task is a simple task that
loops 10 times and puts 10 transactions. To keep things simple, our
transactions are integers. In practice, a transaction can be an arbitrarily
complex object such as a struct or a class.
To put transactions, the producer calls put() on the put_port. What is a
put_port? It is not a port in the traditional Verilog sense. It is a reference to a
put_if. What is a put_if? put_if is the virtual interface class shared
between the initiator (producer) and target (consumer).
put_if is a class with a pure virtual task, meaning the task has no
implementation. Without an implementation of all of its tasks and functions,
a virtual class cannot be instantiated by itself. It must be the base class of
another class that is instantiated. In our case, the class derived from the pure
virtual put_if is consumer.
consumer contains an implementation of put(), the pure virtual task defined
in put_if. The put() task implementation accepts the argument passed to it
and prints it. put_if plays a pivotal role in connecting the producer to the
consumer. A reference to it on the producer side, which we call a port,
establishes the requirement that there must be an implementation of the
functions and tasks in the interface to which this object will be bound. The
consumer is derived from the interface and therefore must implement the
pure virtual task satisfying the requirement.
The top‐level module binds the producer to the consumer.
70 module top;
71
72   producer p;
73   consumer c;
74
75   initial begin
76     // instantiate producer and consumer
77     p = new();
78     c = new();
79     // connect producer and consumer through the put_if
80     // interface class
81     p.put_port = c;
82     p.run;
83   end
84 endmodule : top
Notice line 81:
81 p.put_port = c;
It forms the linkage between the producer and the consumer. When new() is
called on p to create a new instance of producer, the member put_port has
no value. A run‐time failure will occur if put_port.put() is called prior to
the linkage assignment. Assigning c to p.put_port gives the port a reference
to the consumer which contains an implementation of the interface task
put().
The SystemC implementation of the put configuration is similar to its
SystemVerilog counterpart. A pure virtual interface class serves as the link
between the initiator and the target. We use an sc_export to connect the pure
virtual interface to the producer.
      put_port->put(randval);
    }
  }
};
file: cookbook/04_tlm/01_put_sc/put.cc
Like the SystemVerilog consumer, the SystemC consumer is derived from the
pure virtual interface and provides an implementation of put().
The pure virtual interface class provides definitions of the functions used to
communicate between the producer and the consumer.
The top‐level module connects the producer and consumer.
file: cookbook/04_tlm/01_put_sc/put.cc
In SystemVerilog, we use an assignment statement to associate a port and an
export. SystemC overloads the function operator() to do binding. The
constructor binds the port to the export with the call p.put_port(c).
4.3.2 Get
The complement to put is get. In this arrangement, the initiator receives a
transaction from the target. The flow of control is the same—from initiator to
target—but the direction of the data flow is opposite. The initiator gets a
transaction from the target. In this case, the consumer is the initiator and the
producer is the target. The consumer initiates a call to the producer to retrieve
a transaction.
Figure 4‐2 Get Configuration
Figure 4‐2 is very similar to Figure 4‐1. The only difference is that here the
arrow points from the target to the initiator instead of the other way around.
This indicates that the data flows from the target to the initiator. The
following is the SystemVerilog consumer (initiator).
class consumer;

  get_if get_port;

  task run;
    int randval;
    for(int i = 0; i < 10; i++)
      begin
        get_port.get(randval);
        $display("consumer: receiving %4d", randval);
      end
  endtask
endclass
file: cookbook/04_tlm/02_get_sv/get.sv
The consumer has a task, run(), which iterates 10 times to get 10 transactions.
Like the producer in the put example, the consumer here has a port. Also like
the put example, the port is a reference to a pure virtual interface, in this case
it is called get_if.
get_if is a pure virtual interface class that defines the task get(). The target
(producer) is constructed in a similar fashion to the target in the put example.
It contains an implementation of the interface task. This producer produces a
random value between zero and 99.
The connection at the top level will look very familiar.
module top;

  producer p;
  consumer c;

  initial begin
    // instantiate producer and consumer
    p = new();
    c = new();
    // connect producer and consumer through the get_if
    // interface class
    c.get_port = p;
    c.run;
  end
endmodule : top
file: cookbook/04_tlm/02_get_sv/get.sv
After creating instances of the producer and consumer by calling new(), the
two components are connected using a linkage assignment.
In SystemC, the consumer looks as you would expect with an sc_export to
connect to the producer.
The pure virtual interface class contains no surprises.
The top‐level module connects the consumer and the producer.
4.3.3 Transport
The master (A) does both a put and a get in a single function call. As we saw
in previous sections, the put() and get() tasks each take one argument, the
argument they are putting or getting. The transport() task, however, takes
two arguments, a request and a response. It sends the request and returns
with a response. The slave (B) accepts the request and replies with a
response.
Let’s first look at the pure virtual interface.
The master creates a request and sends it to the slave by calling
transport(). It then processes the response that is returned.
class master;

  transport_if port;

  task run;

    int request;
    int response;

    for(int i = 0; i < 10; i++)
      begin
        request = $random % 100;
        $display("master: sending request %4d", request);
        port.transport(request, response);
        $display("master: receiving response %4d", response);
      end

  endtask
endclass : master
file: cookbook/04_tlm/03_transport_sv/transport.sv
The slave implements the transport() task. In our example, it does some
trivial processing of the request to create a response.
The top‐level linkage between master and slave works the same way the put
and get examples work.
module top;

  master m;
  slave s;

  initial begin
    // instantiate the master and slave
    m = new;
    s = new;

    // connect the master and slave through
    // the port interface
    m.port = s;
    m.run;
  end

endmodule : top
file: cookbook/04_tlm/03_transport_sv/transport.sv
The linkage assignment makes the connection between the master and the
slave. After the assignment completes, the master can use the connection to
directly call functions in the slave.
Now we’ll look at the SystemC implementation of the transport
configuration. By now, the pattern is probably looking familiar. The master
and slave are connected through an export and a common interface.
The transport interface has a single transport function. Notice that instead of
two arguments, the transport function has one argument and a return value.
The request is sent to the slave through the argument list, and the response is
returned to the master via the return value. The reason for this difference is
that SystemVerilog does not allow tasks to return values.
The master is straightforward. It has a single port through which it can call
transport().
The slave implements the transport function. Like the SystemVerilog version,
it performs a trivial transformation of the request to create a response.
  int transport(int request)
  {
    cout << "slave: receiving request " << request << endl;
    int response = -request;
    cout << "slave: responding with " << response << endl;
    return response;
  }
};
file: cookbook/04_tlm/03_transport_sc/transport.cc
4.3.4 Blocking vs. Nonblocking
The interfaces that we have looked at so far are blocking. That means that the
functions and tasks block execution until they complete. They are not allowed
to fail. There is no mechanism for any blocking call to terminate abnormally
or otherwise alter the flow of control. They simply wait until the request is
satisfied. In a timed system, this means that time may pass between the time
the call was initiated and the time it returns.
In the put configuration, we have two components, producer and consumer.
The producer generates a random number and sends it to the consumer via
put(). Before put() is called, there is no activity in the consumer. The call to
put() causes activity in the consumer, which prints the value of the
argument. During the time that the consumer is active, the producer is
waiting. This is the nature of a blocking call. The caller must wait until the call
finishes to resume execution.
The pure virtual interface that connects the nonblocking slave to the master
looks much like the other pure virtual interfaces we’ve seen. The significant
difference is that the nb_get() function returns a status value instead of a
transaction.
The master (consumer) must check the status return from nb_get() to
determine whether the function successfully completed. Notice also that
we’ve introduced time into the model. The consumer checks every four ns to
see if a value is available.
class consumer;

  get_if get_port;

  task run();
    int randval;
    int ok;

    for(int i = 0; i < 20; i++)
      begin
        #4;
        if(get_port.nb_get(randval))
          $display("%t: consumer: receiving %4d", $time, randval);
        else
          $display("%t: consumer: no randval", $time);
      end
  endtask
endclass
file: cookbook/04_tlm/04_nonblocking_sv/nbget.sv
The producer is organized as a function and a task. The task will be forked
(spawned) to run as a continuous process. It generates new random values
that the consumer will grab. However, each random value is only available
for two ns out of a seven‐ns cycle. The function is an implementation of
nb_get that returns the value generated periodically by the run() task.
        #5;
        randval = $random % 100;
        rand_avail = 1;
        #2;
        rand_avail = 0;
      end
  endtask

endclass : producer
file: cookbook/04_tlm/04_nonblocking_sv/nbget.sv
When we run the example, we see that not every nb_get() call succeeds.
4: consumer: no randval
8: consumer: no randval
12: producer: sending -99
12: consumer: receiving -99
16: consumer: no randval
20: producer: sending -39
20: consumer: receiving -39
24: consumer: no randval
28: producer: sending -9
28: consumer: receiving -9
32: consumer: no randval
36: consumer: no randval
40: producer: sending 57
40: consumer: receiving 57
44: consumer: no randval
48: producer: sending -71
48: consumer: receiving -71
52: consumer: no randval
56: producer: sending -14
56: consumer: receiving -14
60: consumer: no randval
64: consumer: no randval
68: producer: sending 29
68: consumer: receiving 29
72: consumer: no randval
76: producer: sending 18
76: consumer: receiving 18
80: consumer: no randval
The blocking get configuration had only one process—the consumer that
continually made requests to the producer to send a new value. The
nonblocking variant has two processes: the consumer regularly polls the
producer to see if it has a value to grab, and the producer generates new
values asynchronously with respect to the consumer. Our nonblocking
producer makes a random value available every 7ns. It waits 5ns and then
generates a new value, and the new value is valid for 2ns. The flag
rand_avail is set when a valid random value is available and cleared when
none is available.
The implementation of nb_get() for this example must check rand_avail to
see if there is indeed something to send. If not, it returns a zero to indicate
that the request failed. If there is something available, then it sends it and
returns a one to indicate success.
Blocking interfaces are useful for operating two components synchronously.
Blocking calls wait patiently until the requested operation completes, no
matter how long that might take. On the other hand, nonblocking interfaces
are useful for communicating asynchronously. They do not wait and can be
used to poll targets as in the example we’ve shown.
4.4 Isolating Components with Channels
In the previous section, we discussed simple mechanisms for moving a
transaction between two processes. In each, the initiator and target were
tightly synchronized by the transaction interface task call. In this section, we
examine the case where the initiator and target are more loosely coupled. The
decoupling is achieved by using a channel, in this case a FIFO, to manage the
synchronization between the initiator and the target, rather than relying on
the two components to synchronize themselves. Here we have two
components, an initiator A and a target B, plus a FIFO connecting the two
components.
Figure 4‐3 Two Components Isolated with a FIFO
In our previous examples, one component had a port and the other an export.
The component with the port makes calls to the component with the export.
Here both A and B have ports. Instead of the initiator calling the target
directly, now we have both the initiator and the target calling the FIFO
channel. The channel provides the functions required by both the initiator
and the target.
The initiator uses a blocking put() to send transactions to the FIFO, and the
target uses a blocking get() to retrieve transactions from the FIFO. The FIFO
buffers the transactions and serves as a synchronizer. The initiator can
continue putting transactions into the FIFO until it is full. Since the initiator
uses a blocking put(), the initiator process will block when the FIFO is full.
Likewise, the target uses a blocking get() and will block when the FIFO is
empty. Essentially, the producer in this example is like the producer in the
blocking put example, and this consumer is like the consumer in the blocking
get example. The FIFO replaces the target and provides the tasks necessary to
satisfy the interface requirements created by the ports on the producer and
consumer.
Let’s look at the code. This is the first example that uses the AVM library. The
AVM library includes a FIFO, called tlm_fifo, which is a parameterized class
with a variety of interfaces to support blocking and nonblocking operations.
This producer looks a lot like the producer in the blocking put example. It has
a process, run(), that loops 10 times generating 10 random values and
sending them to the target via the put_port.
There are two new things to notice. First, the component is derived from
avm_threaded_component, a base class in the AVM library that provides
essential services for components. It allows components to be connected into
the hierarchy of named components, and it provides process control for the
run task. The run task is forked at startup time and can be suspended or
resumed at will.
The other thing to notice is how put_port is declared. In our simple examples
above, we created our own pure virtual interface to connect the initiator to the
target. The AVM library supplies a collection of port and export objects,
which are wrappers around pure virtual interface references. The port and
export objects, which are themselves named components, provide a
connect() function for establishing associations between ports and exports.
This is a nicer use model compared to using assignment statements.
The consumer is not much different than the consumer in the blocking get
example.
To connect the producer, consumer, and FIFO, we use an environment. An
environment serves as the top of the hierarchy of named components, and it
orchestrates the hierarchy construction and testbench execution.
The connect() function makes the association between the ports on the
producer and consumer and the corresponding exports on the FIFO. The
run() task is responsible for controlling testbench execution. In this simple
example, we let the testbench run for 100ns and then terminate.
4.5 Forming a Transaction‐Level Connection
To form a transaction‐level connection, you must specify three elements: the
control flow, the data flow, and the transaction data type. Declaring a
connection as a port or export identifies the control flow—control flows from
ports to exports. That is, a port initiates activity and an export responds to it.
The interface identifies the data flow. A put interface indicates that data flows
from the initiator (port side) to the target (export side), a get interface
indicates that data flows from the target back to the initiator, and a transport
or request/response interface indicates a bidirectional data flow.
In SystemC, we capture these three elements in the port and export
declarations. For example:
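The original listing is not reproduced in this text; based on the surrounding description, the SystemC declaration would resemble the sketch below. The transaction type name trans is taken from the prose; treat the exact spelling as an assumption.

```cpp
// Sketch only: an initiator-side port over the nonblocking put
// interface, carrying objects of type trans.
sc_port<tlm_nonblocking_put_if<trans> > put_port;
```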
We declare put_port as a port, so we know the device in which this port is
declared is an initiator. The interface type is tlm_nonblocking_put_if<>,
Summary 77
which is one of the put interfaces defined in the TLM library. This port is an
egress for data objects. Finally, the data type of the object being sent is trans.
In SystemVerilog using AVM, port and export declarations capture the same
three elements. Here is the same example rendered in SystemVerilog:
The suffix of the object type is _port indicating this is a port object. Exports
use the suffix _export. The interface type is identified by the name between
the avm_ prefix and the _port (or _export) suffix. In this case, that name is
nonblocking_put, which refers to tlm_nonblocking_put_if. As in the
SystemC example, the data type of the object being sent is trans. See "AVM
Encyclopedia" on page 217 for a complete mapping between port and export
types and interface types.
We have shown a producer and a consumer, each of which uses blocking
tasks to send and retrieve transactions. The blocking tasks reside in the FIFO,
an object that serves as an intermediary between the two components,
otherwise known as a channel. The channel transfers data between the two
components, and it serves as a synchronizing agent.
Putting a FIFO between two components to buffer and synchronize transfers
is a common idiom in TLM. We will see this idiom frequently in the
transaction‐level testbenches we build using the AVM.
4.6 Summary
Put, get, and transport are fundamental means for synchronizing parallel
processes and for communicating transaction‐level information between
those processes. These ideas are used extensively in the AVM to build
transaction‐level testbenches. In section 4.4 we illustrated transaction‐level
communication using AVM facilities. In the next chapter we will delve deeper
into the AVM to show how to build arbitrary hierarchies of class‐based
verification components connected with transaction‐level interfaces.
5
AVM Mechanics
Up to this point, we’ve discussed the essentials of verification and
transaction‐level modeling. Now we will put those ideas to use in the AVM.
The AVM library provides facilities for connecting components using
transaction‐level interfaces and channels. Here we explain in some depth how
to use these facilities. This includes building hierarchies of named
components and connecting them through transaction‐level interfaces. This
chapter sets the groundwork for the examples in subsequent chapters, which
use transaction‐level communication in testbenches. In addition, we’ll discuss
how you can use the AVM reporting mechanism to produce and filter
messages.
5.1 Interfaces
The term “interface” is used in many contexts in any discussion of the AVM.
There are at least four specific meanings for interface, which we will use
throughout the rest of this book. The four meanings are distinct and not to be
confused with each other. They are:
Pin interfaces on RTL modules
SystemVerilog interfaces
SystemVerilog virtual interfaces
Pure virtual interface classes
A pin interface on an RTL module is the set of pins exposed to the outside
world (that is, pin‐level ports). Data and control flow through this interface.
A SystemVerilog interface is a construct in the SystemVerilog language.
Finally, a pure virtual interface class is a term used in object‐oriented
programming, independent of SystemVerilog or SystemC. It is a class that
contains pure virtual functions and nothing else.
When we use the term interface and you cannot glean the specific meaning
from the context, we’ll qualify it.
5.2 Connecting Components
Components in the AVM are connected using transaction‐level interfaces and
channels. Recall from Chapter 4 that transaction communication is performed
using function and task calls.
Because components in AVM are constructed from classes and classes are
dynamic, connectivity must be created dynamically. In modules, connectivity
is defined declaratively and the elaborator uses the instantiation statements to
form a complete hierarchy. On the other hand, class‐based components must
be formed into a hierarchy by some other means since classes are not
instantiated at the time the elaborator runs. The AVM dynamically constructs
a hierarchy of class‐based components in an elaboration phase.
You provide information to the elaboration phase through three virtual
functions in avm_named_component—export_connections(), connect(),
and import_connections(). The environment runs the mini‐elaborator,
which first executes export_connections() in all components, then
connect() in all components, and finally, import_connections() in all
components. In subsequent sections, we will describe how to write the three
functions to elaborate your hierarchy of class‐based components.
5.2.1 Ports and Exports
The elements of a transaction‐level connection are a port, which is the requires
side of the connection; an export, which is the provides side of the connection;
and a means to associate the two. Ports and exports are objects that are
instantiated inside a component. The connect() method on the port allows it
to be connected to a compatible export.
Diagrammatically, a port is shown as a square and an export is shown as a
circle. Control flow moves from box to circle, from port to export. An arrow
shows the direction of data flow between components. The direction of the
data flow may be different than the direction of control flow.
data flow may be different than the direction of control flow.
Figure 5‐1 Connection between a Port and an Export
To form a proper connection, the type of the port and export must be the
same. The type of a port or an export is determined by the pure virtual
interface from which it is derived. The difference between ports and exports is
that exports provide implementations of the interface functions and ports do
not.
Here is an example of a component with a port called get_port.
...
endclass
component. The reason is that ports and exports are named components, that
is, they are derived from avm_named_component.
5.2.2 Peer‐to‐Peer Connectivity
To connect a port to an export at the same level of hierarchy, you must build a
connect() function in the component that is the parent of the components to
be connected. Furthermore, you must populate the function with statements
that make the connections.
Let’s consider a producer/consumer pair of components similar to what was
discussed in the previous chapter.
Figure 5‐2 Producer/Consumer Design
The hierarchy for the producer/consumer design is shown in Figure 5‐3.
Figure 5‐3 Producer/Consumer Hierarchy
All three of the components, the producer, the FIFO channel, and the
consumer, are at the same level of hierarchy. To create this design, the
component top must instantiate the three sub‐components and connect them
using the connect() method. Here is the code to do this:
    super.new(name, parent);
    p = new("producer", this);
    c = new("consumer", this);
    f = new("fifo", this);
  endfunction

  function void connect();
    p.put_port.connect(f.blocking_put_export);
    c.get_port.connect(f.blocking_get_export);
  endfunction

endclass
Component top is an avm_named_component. That means it will have a name
and a place in the hierarchy of components. The class top contains the three
objects we are instantiating and connecting. The constructor instantiates all
the components by calling new() for each one and passing in a name and a
handle to the parent component. All of the instantiated components take this
as the parent handle because they are all immediately subordinate to the
component whose constructor we are executing, namely top.
To connect components that reside at the same level in the hierarchy, we use
the connect() function. Our design has two connections, one between the
producer and the FIFO and the other between the FIFO and the consumer.
Since that is the case, we need two statements in the connect() function, one
to forge each of those connections.
Although not shown here, the producer contains a port called put_port,
which is declared as:

avm_blocking_put_port #(packet) put_port;

And we are assuming that the consumer contains a get_port declared as:

avm_blocking_get_port #(packet) get_port;

(The transaction type packet stands in for whatever type the FIFO channel
carries.)
When connecting components at the same level of hierarchy, you always
connect ports to exports. To represent the connections, all the statements in
connect() will be of the form:
component.port.connect(other_component.export);
5.2.3 Hierarchical Connectivity
Making connections across hierarchical boundaries involves some additional
issues, which we’ll discuss in this section. Consider the hierarchical design
shown in Figure 5‐4. We’ve labeled the connections so we can refer to them in
the discussion.
Figure 5‐4 Simple Hierarchical Design
The hierarchy of this design is shown in Figure 5‐5. top contains two
components, producer and consumer. producer contains three components,
gen, fifo, and conv. consumer contains two components, fifo and bfm. The
two FIFOs are both unique instances of the same FIFO component.
Figure 5‐5 Design Hierarchy
Recall the discussion about connecting components at the same hierarchy
level. To make connections A and B, we populate the connect() method in
producer. To make connection F, we populate the connect() method in
consumer. And we populate the connect() method in top to make
connection D.
Connections C and E are of a different sort than we’ve seen previously.
Connection C is a port‐to‐port connection, and connection E is an export‐to‐
export connection. These two kinds of connections are necessary to complete
hierarchical connections. Connection C imports a port from the outer
component to the inner component. Connection E exports an export upwards
in the hierarchy from the inner component to the outer one. Ultimately, every
transaction‐level connection must resolve so that a port is connected to an
export. However, the port and export terminals do not need to be at the same
place in the hierarchy. We use port‐to‐port and export‐to‐export connections
to bring connectors to a hierarchical boundary to be accessed at the next‐
higher level of hierarchy.
We create port‐to‐port connections in the function import_connections(),
and export‐to‐export connections in the function export_connections().
Export‐to‐export connections look like this:
export.connect(subcomponent.export);
In this statement, export is a reference to an export in the current component
and subcomponent.export is a reference to an export inside subcomponent.
export.connect() is the method that makes the association between the two
exports by reaching down into the hierarchy and pulling
subcomponent.export up to the current level.
We create the port‐to‐export connection F between sibling components in the
function connect() and we make the export‐to‐export hierarchical
connection E in the function export_connections().
Conversely, port-to-port connections look like this:

subcomponent.port.connect(port);

Here we are working in the opposite direction. We are making an association
between a port at a lower level of hierarchy and a port at the current level of
hierarchy. In this way, we are pushing the port down one level. In the
producer, for example, connection C is made in import_connections() by
connecting the put_port of conv to the put_port on the producer's own
boundary:

conv.put_port.connect(put_port);

producer and consumer are connected in top, where connection D is made in
function connect().
The elaboration phase will take care of calling the functions connect(),
import_connections(), and export_connections() at the right time. You
just need to populate these functions appropriately to forge the necessary
connections.
The following table summarizes connection types and elaboration functions:

Connection Type     Elaboration Function    Form
port-to-export      connect()               component.port.connect(other_component.export);
port-to-port        import_connections()    subcomponent.port.connect(port);
export-to-export    export_connections()    export.connect(subcomponent.export);

Note that the argument to the port.connect() method may be either an export
or a port, depending on which elaboration function is being used. The
argument to export.connect() is always an export, and it may only be
called from within export_connections().
5.3 Building an Environment
The environment is the topmost container of an AVM testbench. It contains all
the components that comprise the testbench and orchestrates all of their
activity and execution.
The environment for our hierarchical example is simple, yet it illustrates the
major features of an environment object.
The environment is derived from avm_env, the environment base class.
avm_env itself is a named component and thus takes a name in its
constructor. The default constructor for avm_env supplies the default name
"env", so you can leave out the name argument in the call to super.new() if
you don't want to use a different name. Since the environment is the root of
the hierarchy of class-based components, the full path name will begin with
the name of the environment, "env" by default. The path name to the
component gen in our hierarchical example is env.top.producer.gen.
The environment base class provides a set of virtual functions and tasks you
can use to control the testbench. The task do_test() (which is also virtual)
executes them in a specific order. The order of execution in do_test() is:
1. print banner
Call a local (non‐virtual) function that prints a banner identifying
the AVM library.
2. elaborate
Invoke the AVM elaboration process. To connect objects at the top
level (environment), code all the connections in the function
connect().
3. configure
Call configure() in all named components. Since the
environment is a named component, configure() is called in the
environment as well. This is a hook that you can use for any
startup processing such as initializing or configuring components
or opening files.
4. run all
Fork the run() task in all threaded components. This effectively
starts all the threads running.
5. run
Call run() in the environment. run() is not forked; it is called in
a blocking fashion. When this task returns, do_test() starts its
shutdown sequence.
6. kill all
To shut down the testbench, do_test calls kill() on all
threaded components. This terminates all the threads that are still
executing.
7. report
The final step is to call report() in all named components.
report() is intended to be a place where users can print
summary information. You can use it as a post simulation hook to
do any final shutdown processing such as closing files.
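Putting the steps above together, a top-level module typically constructs the
environment and calls do_test(); the module and class names here are
illustrative:

module testbench_top;

  env e = new;      // environment derived from avm_env

  initial begin
    e.do_test();    // banner, elaborate, configure, run all, run,
                    // kill all, report
  end

endmodule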
5.4 Connecting Hardware
To verify an RTL design, the testbench needs to connect to one or more of its
pin interfaces. Connecting class objects to modules using pins is somewhat
different from connecting modules to each other using pins. We use a virtual
interface to connect a class to a SystemVerilog interface that in turn connects to
the pin interface on the DUT. To show how this works in SystemVerilog, we’ll
consider an example of a driver and a DUT. The driver is an
avm_threaded_component and the DUT is a Verilog module. The two are
connected through a SystemVerilog interface.
Figure 5‐6 A Class‐Based Verification Component Connected to an RTL Module
through a SystemVerilog Interface
The SystemVerilog interface is a module‐like construct that contains pins that
can be connected to modules and classes. The simplest way to think of a
SystemVerilog interface is as a bundle of pins, although that doesn’t describe
the object entirely since it’s possible to also put tasks and functions in
interfaces. For use with AVM verification components, we recommend that
you just use them as wire bundles.
SystemVerilog interfaces can have modports (short for "module ports"),
which provide signal directions. Components connect to an interface via its
modports, and the particular modport you use depends on the direction of the
signal flow in or out of the component. Modports are optional; however, we
recommend that you use them consistently to help ensure correct signal
connectivity.
Here is an example of a SystemVerilog interface with three modports. (The
signal declarations at the top complete the listing; the 8-bit widths are
illustrative.)

interface pin_if;

  bit clk;
  bit rst;
  bit [7:0] address;
  bit [7:0] wr_data;
  bit [7:0] rd_data;
  bit rw;
  bit req;
  bit ack;
  bit err;

  modport master_mp(
    input clk,
    input rst,
    output address,
    output wr_data,
    input rd_data,
    output req,
    output rw,
    input ack,
    input err );

  modport slave_mp(
    input clk,
    input rst,
    input address,
    input wr_data,
    output rd_data,
    input req,
    input rw,
    output ack,
    output err );

  modport monitor_mp(
    input clk,
    input rst,
    input address,
    input wr_data,
    input rd_data,
    input req,
    input rw,
    input ack,
    input err );

endinterface
When we connect the DUT to the interface, we’ll use the slave_mp modport.
That modport specifies the direction for signals on the DUT. When we
connect a driver to the interface, we’ll use the master_mp. That modport has
the correct signal directions for a device driving the DUT. The monitor_mp
specifies that the signals are all inputs, which is what we need for a monitor:
a monitor only listens; it never drives.
We need to connect the interface to the DUT and to the driver. The DUT is a
module that can have an interface in its port list. The driver is a class that uses
virtual interfaces to gain access to an interface. Here’s the driver:
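A minimal sketch of such a driver follows; the class name and the run() body
are illustrative, and pif is the virtual interface member discussed below:

class driver extends avm_threaded_component;

  // a handle to a pin_if instance; assigned by the environment
  virtual pin_if pif;

  function new(string name, avm_named_component parent);
    super.new(name, parent);
  endfunction

  task run();
    forever begin
      @(posedge pif.clk);   // all pin access goes through the virtual interface
      // drive address, wr_data, req, and rw here
    end
  endtask

endclass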
We declared pif as a virtual pin_if, which means it is a reference to a
pin_if. Signals in the interface are referenced by prefixing the signal names
with the name of the virtual interface, pif. For example, pif.clk references
the clk pin in the pin_if interface.
The top‐level module instantiates the DUT, the clock generator, the
SystemVerilog interface, and the testbench environment. The DUT, the clock
generator, and the interface, all being modules or interfaces, are connected
using the usual means in SystemVerilog.
The environment, being a class, is instantiated using new(). The constructor
to the environment takes the virtual interface pin_if as an argument along
with a name for the environment. To connect a verification component with a
virtual interface, the environment first stores the virtual interface locally.
Then, in the connect() function, it connects the driver to the virtual interface
using an assignment statement. Executing the connect() function gives the
driver access to the virtual interface pif, which in turn gives it access to the
real interface instantiated in top. Since the real interface is also connected to
the DUT, the class-based driver now has a pin-level connection to the DUT.
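The hand-off just described can be sketched as follows; the class and member
names are illustrative:

class env extends avm_env;

  driver drv;
  virtual pin_if pif;   // locally stored copy of the virtual interface

  function new(virtual pin_if pif_arg, string name = "env");
    super.new(name);
    pif = pif_arg;
    drv = new("driver", this);
  endfunction

  function void connect();
    drv.pif = pif;   // a simple assignment connects the driver to the pins
  endfunction

endclass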
Virtual interfaces are another kind of connection that works much like ports.
We can augment our table of connection types to include virtual interfaces:

Connection Type      Elaboration Function    Form
virtual interface    connect()               component.vif = vif;
5.5 Reporting
The AVM provides a rich set of classes and functions for generating and
filtering messages. The AVM reporting facility contains three kinds of
functionality:
- Displaying messages in a uniform way to various destinations
- Filtering messages
- Altering control flow as a result of a message being printed
5.5.1 Basic Messaging
The four reporting functions, avm_report_message(), avm_report_warning(),
avm_report_error(), and avm_report_fatal(), correspond to the four
severities. Each of these four functions issues a message that has several
components: a severity, a verbosity level, an id, a message, a filename, and a
line number.
Severity. The severity of the message can be one of MESSAGE, WARNING, ERROR,
or FATAL. The choice of severity changes the final text that is printed to
include an indication of the severity. It also affects how the message is
processed. For example, a call to avm_report_fatal terminates the
testbench. Other ways in which severity affects message processing are
discussed in Section 5.5.2.
id. The id of a message is an arbitrary string that we use as a message
identifier. The identifier is printed as part of the message text and it also
affects how messages are processed.
Message. The message is the body of the message text.
Verbosity. The verbosity level of a message is an arbitrary number that is
relative to the current setting of the verbosity threshold. Messages whose
verbosity level is at or below the threshold will be printed, and those above
will be ignored. This is a way to filter messages. You can make your testbench
more verbose by raising the threshold or less verbose by lowering the
threshold. The function for changing the verbosity threshold is
set_report_verbosity_level(int verbosity).
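Putting filtering together with a report call looks like this sketch; the
two-argument avm_report_message() call matches the monitor code later in
this chapter, and treating a message's verbosity as a simple cutoff against
the threshold is as described above:

set_report_verbosity_level(200);               // print messages at or below 200
avm_report_message("drv", "sent transaction"); // subject to the threshold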
5.5.2 Message Actions
Associated with each message is an action that determines exactly how it is
processed. The action is a bit vector with each bit representing one possible
action. You can specify multiple actions by turning on one or more bits in the
vector. So that you don't have to remember which bit is which, the AVM
provides an action enum that you can use to specify actions. The following
table describes the
possible actions:
Action       Definition
NO_ACTION    Do not execute an action
DISPLAY      Display the message on the standard output device
LOG          Send the message to a file
COUNT        Increment quit_count; when quit_count reaches a
             predetermined threshold, terminate the testbench
EXIT         Terminate the testbench immediately
CALL_HOOK    Call the appropriate hook function
quit_count and max_quit_count are stored in a global location. You can
change max_quit_count with the following function:
set_max_quit_count(int q);
The combination of a particular message's severity and id determines the
action it takes. The message handler keeps a set of tables that define actions and file
destinations for messages by id and severity. (We’ll see shortly how those
tables are set up.) First, the message handler looks to see if there is an action
specified for the combination of id and severity for the message. If there is
none, then the message handler looks to see if there is an action specified just
for the id. If it finds none, then it looks for actions by severity. The AVM
message facility guarantees that there is always an action for each severity.
The default actions are:

Severity    Default Action
MESSAGE     DISPLAY
WARNING     DISPLAY
ERROR       DISPLAY | COUNT
FATAL       DISPLAY | EXIT
The only default actions are those by severity in the table above. You must set
any other action by id or the combination of id and severity with functions
designed for just that purpose.
5.5.3 Message Files
To send messages to a file, you must first open the file and change the
appropriate message actions to LOG. A handy place to do this is in the
configure() method of named components, for example (the file name is
arbitrary):

FILE f;

function void configure();
  f = $fopen("messages.log");
  set_report_severity_action(ERROR, DISPLAY | LOG);
  set_report_severity_file(ERROR, f);
endfunction
Later, when the testbench terminates, you can close the file:

$fclose(f);
5.5.4 Message Handlers
Each named component has its own report handler (labeled rh in Figure 5-7)
that stores that component's reporting settings.

Figure 5-7 Hierarchical Design with Report Handlers
To change reporting characteristics for an individual component, you need to
change only its report handler. For example, issuing this call in the
component bfm:
set_report_id_action("fsm", LOG);
causes all the messages whose id is “fsm” to be logged to a file. This call
affects only messages issued from bfm. Messages issued from any other
component in this testbench are not affected, even if they also have the "fsm"
id. To make a similar change on an entire sub‐hierarchy you can issue the
same call on each component or you can call the hierarchical equivalent of the
set_report_id_action() method. In this case, you would call:
set_report_id_action_hier("fsm", LOG);
If you make this call in consumer, you will affect consumer and all of the
components in the hierarchy underneath it. The shaded report handlers are
those affected.
Figure 5‐8 Report Handlers Affected by a Call to set_report_id_action_hier
The following table identifies all the methods for changing report actions and
files and their hierarchical equivalents.
Local Method Hierarchical Method
set_report_verbosity_level set_report_verbosity_level_hier
set_report_default_file set_report_default_file_hier
set_report_severity_action set_report_severity_action_hier
set_report_id_action set_report_id_action_hier
set_report_severity_id_action set_report_severity_id_action_hier
set_report_severity_file set_report_severity_file_hier
set_report_id_file set_report_id_file_hier
set_report_severity_id_file set_report_severity_id_file_hier
5.5.5 Altering the Flow of Control
Most of the time when you issue a report, the report is displayed or sent to a
file, and then control resumes at the next sequential statement. There are
occasions when it is desirable to alter the flow of control based on a message
that is issued. The most obvious case is terminating the testbench. The EXIT
action terminates the testbench immediately after the message is sent to its
final destination. The action COUNT increments quit_count, and the testbench
terminates when quit_count reaches max_quit_count. Typically, you will
use these actions to do things like preventing an errant program from looping
indefinitely in an error state or preventing cascading error messages from
obfuscating the source of an error.
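For instance, a configure() function along the lines of this sketch arranges
all of that (the log file name is illustrative):

FILE f;

function void configure();
  f = $fopen("errors.log");
  set_max_quit_count(10);
  set_report_severity_action(ERROR, DISPLAY | LOG | COUNT);
  set_report_severity_file(ERROR, f);
endfunction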
This configure() function sets max_quit_count to 10 and instructs the
report handler so that each time an error is issued (that is,
avm_report_error() is called) the message displays on the screen, goes to a
log file, and increments quit_count. The tenth time an error is issued, the
testbench terminates.
Another way to alter the flow of control when a report is issued is through
report hooks, a set of virtual functions provided by the report client. They are a
place where you can gain control when any report is issued or a report of a
specific severity is issued to do additional filtering, counting, sanity checking,
and so forth. The AVM report client provides five report hooks, one for each
severity and a “catch‐all” hook that is called no matter what the severity of
the report.
The first thing you might notice is that these functions take exactly the same
arguments as the avm_report_* functions. The reason is that all of the
arguments passed to avm_report_* are passed to the hooks as well.
The other thing to notice is that each of these functions returns a value, a
single bit. Processing continues only if both hooks return 1. The default
hooks, the hooks in the base class that are called when you don’t explicitly
supply one, always return 1. If the return value is 0, then processing
terminates, and it is as though the report was never issued. Through the
return code of the hooks, you can do fine‐grained filtering of messages. As an
example of how you might use this, let’s say that you don’t want to see
messages from your testbench during initialization, which takes 250
microseconds. After initialization is complete, you want to see all messages.
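A sketch of such a catch-all hook follows; the hook name, its exact argument
list, and the timescale are assumptions consistent with the discussion of hook
arguments above:

// suppress every report during the first 250 us of simulation;
// assumes a time unit of 1 ns
virtual function bit report_hook(string id, string message,
                                 int verbosity, string filename, int line);
  return ($time > 250_000);
endfunction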
The catch‐all hook is called first, and then the severity‐specific hook is called.
To enable hooks, you must turn them on by setting the action to CALL_HOOK. A
convenient place to do that is in the configure() function, for example:

function void configure();
  set_report_severity_action(ERROR, DISPLAY | CALL_HOOK);
endfunction
Hooks are run in the named component in which they are implemented. Just
as each named component has its own set of methods, they also have their
own hooks. If you want to run the same hook in different components, you’ll
have to implement it in each component. A straightforward way to do this is
to create your own component base class that inherits from
avm_named_component or avm_threaded_component and has your hook
implementations.
5.6 Summary
Making a transaction‐level connection using the AVM involves associating a
port with an export. The export makes functions available externally, which
can be bound to a device with a port of a matching type. In addition to port‐
export connections, port‐port and export‐export connections are used to
make a connection on a lower level of hierarchy available at a higher level.
Virtual interfaces, a unique SystemVerilog construct, make it possible to
connect static hardware to dynamic class-based components. The AVM
messaging system, in addition to providing a way to uniformly generate
messages, also provides means to selectively filter and control them.
Chapter 6: Testbench Fundamentals

The primary role of a testbench is to control and observe the operation of the
DUT. Control means manipulating the input interfaces on the DUT. Observe
means to capture all the activity on the DUT interfaces, both inputs and
outputs. The agent of control is the stimulus generator, and the agent of
observation is the monitor. These two types of components appear in
virtually all testbenches. In this chapter we’ll look at how to build and use
these devices.
6.1 A Simple Memory Design
To explore stimulus generators and monitors we’ll use a simple memory
design as our DUT. It has an addressable memory space that can be read or
written through the external interface. Here we’ll briefly describe the
interface and the operation of the device. The schematic symbol shows all the
pins on our memory device:
Figure 6‐1 Schematic Symbol for Memory Device
The definition and function of each pin is listed in the following table:

Pin       Width   Function
clk       1       Clock input
reset     1       Enter RESET state when asserted
addr      8       Address bits
rd_data   8       Read data, data read from the memory
wr_data   8       Write data, data to be written into the memory
rw        1       Read/write bit; a value of 1 indicates a write request
req       1       When asserted, signifies the beginning of a memory request
ack       1       Asserted by the memory device to signify the completion
                  of a response
err       1       Signifies an error when asserted by the memory device
Our memory device has separate buses for read data and write data.
Separating the data bus into two parts by direction eliminates the need for
bidirectional buses, which require tri‐state devices to implement in hardware.
Operation of the device is initiated by asserting req. When req is asserted, the
device consults the rw pin to determine if the request is for a read or a write.
The address for the request is obtained from the addr pins. When the device
has completed responding to the request, either by writing a new value to the
memory or by retrieving a value and placing it on the read data bus, then the
ack pin is asserted.
To build this device, we first start with the pin interfaces. In SystemVerilog,
we’ll use the interface construct to encapsulate the pin interfaces:
interface mem_pins_if;

  parameter int ADDRESS_WIDTH = 8;
  parameter int DATA_WIDTH = 8;

  typedef bit[ADDRESS_WIDTH-1:0] address_t;
  typedef bit[DATA_WIDTH-1:0] data_t;

  address_t address;
  data_t wr_data;
  data_t rd_data;
  bit clk, rst;
  bit req, rw;
  bit ack, err;
The first part of the interface declares the objects that are part of the interface.
This includes the types and sizes of each one. The second part contains a set
of modports.
  modport master_mp(
    input clk, rst,
    output address, wr_data,
    input rd_data,
    output req, rw,
    input ack, err);

  modport slave_mp(
    input clk, rst,
    input address, wr_data,
    output rd_data,
    input req, rw,
    output ack, err);

  modport monitor_mp(
    input clk, rst,
    input address, wr_data, rd_data,
    input req, rw,
    input ack, err);

endinterface
The three modports each contain the same set of signals with different I/O
directions to support different connection modes. The device itself uses the
slave_mp modport, which identifies the direction of rd_data as output and
wr_data as input, for example. A device driving the memory (such as the
stimulus generator) uses the master_mp modport. It has rd_data as input and
wr_data as an output. All of the pins in the monitor_mp modport are inputs
as is required by monitors. In subsequent sections, we’ll see how interfaces
and modports are used to connect stimulus generators and monitors.
SystemC uses interfaces in much the same manner as SystemVerilog. In
contrast, C++ does not have a specific interface construct; instead, a pin
interface is a class that contains port definitions. Each port definition uses
sc_in<>, sc_out<> or sc_inout<> to identify the direction of the signal flow.
The parameter to each port template is the data type of the signal. Since our
memory is a pin‐level device, we represent the signals in the interface using
bool for single bits and sc_uint for grouped bits such as addr. Here’s the
master pin interface:
class mem_master_if
{
public:
  sc_in<bool> clk;
  sc_in<bool> rst;

  sc_out<ADDRESS_TYPE> address;
  sc_out<DATA_TYPE> wr_data;
  sc_in<DATA_TYPE> rd_data;
  sc_out<bool> rw;
  sc_out<bool> req;
  sc_in<bool> ack;
  sc_in<bool> err;
};
file: cookbook/06_testbench_fundamentals/mem_sc/mem_pin_if.h
The slave interface, like its SystemVerilog counterpart, is used by the device
itself to communicate externally. The directions of the signals are relative to
the memory device.
class mem_slave_if
{
public:
  sc_in<bool> clk;
  sc_in<bool> rst;

  sc_in<ADDRESS_TYPE> address;
  sc_in<DATA_TYPE> wr_data;
  sc_out<DATA_TYPE> rd_data;
  sc_in<bool> rw;
  sc_in<bool> req;
  sc_out<bool> ack;
  sc_out<bool> err;
};
Finally, the monitor interface has only input ports, as is appropriate for
connecting to a monitor.
class mem_monitor_if
{
public:
  sc_in<bool> clk;
  sc_in<bool> rst;

  sc_in<ADDRESS_TYPE> address;
  sc_in<DATA_TYPE> wr_data;
  sc_in<DATA_TYPE> rd_data;
  sc_in<bool> rw;
  sc_in<bool> req;
  sc_in<bool> ack;
  sc_in<bool> err;
};
The memory protocol can be described with the simple state machine in
Figure 6‐2.
Figure 6‐2 FSM for Memory Protocol
The device comes up in the WAIT REQ state where it waits for req to be
asserted. During the next clock cycle after req is asserted, the state changes to
SEND ACK, and data is transferred to or from the rd_data or wr_data buses
(depending on the type of request being processed). ack is asserted,
indicating the response is complete. Finally, the state is set back to WAIT REQ
to wait for another request. The code for the DUT is below:
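The following is a sketch of such a DUT, consistent with the two-state FSM
just described; the memory array size and the reset behavior are assumptions:

module mem_dut(mem_pins_if.slave_mp bus);

  typedef enum { WAIT_REQ, SEND_ACK } state_t;
  state_t state = WAIT_REQ;

  bit [7:0] mem [0:255];   // backing storage for the memory

  always @(posedge bus.clk) begin
    if (bus.rst) begin
      state   <= WAIT_REQ;
      bus.ack <= 0;
    end
    else begin
      case (state)
        WAIT_REQ: begin
          bus.ack <= 0;
          if (bus.req)
            state <= SEND_ACK;
        end
        SEND_ACK: begin
          if (bus.rw)                        // rw == 1 means write
            mem[bus.address] <= bus.wr_data;
          else
            bus.rd_data <= mem[bus.address];
          bus.ack <= 1;                      // signal completion
          state   <= WAIT_REQ;
        end
      endcase
    end
  end

endmodule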
The heart of the DUT is an implementation of the protocol FSM. The clock in
an “always @” block drives the FSM, so it can only change state once per clock
cycle. The FSM is built around a case statement with two cases, one for each
state in the FSM.
To exercise this memory device, we’ll use a simple testbench that has a
stimulus generator and a monitor. All three devices, the stimulus generator,
the monitor, and the memory device DUT are connected on the same bus.
Figure 6‐3 Simple Testbench
The three components are connected, along with a clock generator, in a top‐
level module. Notice that all the connections are made through a
SystemVerilog interface. The stimulus generator, which we are calling a
master, is connected via the master_mp modport, and the DUT is connected
via the slave_mp modport.
module top;

  mem_pins_if #(.ADDRESS_WIDTH(8),
                .DATA_WIDTH(8)) pins_if();

  mem_master master(pins_if.master_mp);
  mem_dut dut(pins_if.slave_mp);

  // ... (the monitor and the clock generator are connected here
  //      in the same manner, using pins_if.monitor_mp)

endmodule
file: cookbook/06_testbench_fundamentals/01_mem_sv/top.sv
6.2 Stimulus Generators
Traditionally, a stimulus generator is a free‐running device that drives values
on the DUT’s input pins. It usually contains a state machine to manage a
handshaking protocol. In this example, the stimulus generator is connected
directly to the bus. Shortly, we’ll look at stimulus generators that are
connected indirectly to the bus. Since this stimulus generator is connected to
the bus, it must exercise the bus protocol, and it does so through an FSM.
The FSM for the stimulus generator is strikingly similar to the FSM for the
DUT itself. This is not surprising when you consider that the role of the
stimulus generator is to exercise the DUT. Actually, the two FSMs are
complementary. The DUT FSM waits for a req, and after it receives one,
completes the requested operation and then sends an ack. The stimulus
generator, being complementary, does the opposite. It sends a req and then
waits for an ack.
First, here’s the code for our stimulus generator.
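This is a sketch consistent with the three-state FSM described below;
generating random addresses and data via $urandom is an assumption about
what the stimulus values look like:

module mem_master(mem_pins_if.master_mp bus);

  typedef enum { RESET, SEND_REQ, WAIT_FOR_ACK } state_t;
  state_t state = RESET;

  always @(posedge bus.clk) begin
    if (bus.rst)
      state <= RESET;
    else begin
      case (state)
        RESET: begin
          bus.address <= 0;             // initialize the buses
          bus.wr_data <= 0;
          bus.req     <= 0;
          state       <= SEND_REQ;
        end
        SEND_REQ: begin
          bus.address <= $urandom;      // generate new stimulus values
          bus.wr_data <= $urandom;
          bus.rw      <= $urandom_range(1);
          bus.req     <= 1;             // post the request on the bus
          state       <= WAIT_FOR_ACK;
        end
        WAIT_FOR_ACK: begin
          if (bus.ack) begin            // DUT has completed the request
            bus.req <= 0;
            state   <= SEND_REQ;
          end
        end
      endcase
    end
  end

endmodule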
This code implements a state machine with three states—RESET, SEND_REQ,
and WAIT_FOR_ACK. The RESET state initializes the address and data
busses and sets up a request. The SEND_REQ state generates some stimulus
in the form of new values for address and data, and then posts the request to
the bus and transitions to the WAIT_FOR_ACK state. The WAIT_FOR_ACK
state does exactly as the name suggests—it waits for the DUT to acknowledge
112 Monitors
the request. Once an acknowledge is received, then the FSM transitions back
to SEND_REQ.
6.3 Monitors
The role of the monitor is to passively watch the bus and recognize specific
transactions that occur. The monitor for our simple memory protocol will
recognize read and write operations. Not surprisingly, the FSM for the
monitor looks much like the FSMs for the stimulus generator and the memory
device.
WAIT WAIT
REQ ACK
Figure 6‐4 Monitor FSM
The monitor FSM has two states, one waits for requests and the other waits
for responses. The code for the FSM also looks familiar. The FSM is in a loop
controlled by the clock. At each clock cycle, the case statement executes the
code associated with the current state. It may cause the state to change so that
on the next clock cycle, the code for a different state executes.
forever begin

  @( posedge pins_if.monitor_mp.clk );

  if( pins_if.monitor_mp.rst ) begin
    state = WAIT_FOR_REQ;
    continue;
  end

  case( state )

    WAIT_FOR_REQ : begin
      if( pins_if.monitor_mp.req ) begin
        request = read_request_from_bus();
        avm_report_message( "Saw Mem Request",
                            request.convert2string() );
        state = WAIT_FOR_ACK;
      end
    end

    WAIT_FOR_ACK : begin
      if( pins_if.monitor_mp.ack ) begin
        response = read_response_from_bus();
        avm_report_message( "Saw Mem Response",
                            response.convert2string() );
        transaction = new( request, response );
        avm_report_message( "Saw Mem Transaction",
                            transaction.convert2string() );
        ap.write( transaction );
        state = WAIT_FOR_REQ;
      end
    end

  endcase
end
file: cookbook/06_testbench_fundamentals/mem_sv/mem_monitor.svh
6.4 Three State Machines
Before we continue on to discuss drivers, it is instructive to review the three
state machines we’ve used—the stimulus generator, the DUT, and the
monitor—to compare them from an intuitive point of view. The mathematics
behind the FSMs is beyond the scope of this text.
Figure 6‐5 FSMs for DUT, Stimulus Generator, and Monitor
Notice that the stimulus generator and the DUT are complementary. The
stimulus generator sends requests and the DUT waits for requests; the DUT
sends acks and the stimulus generator waits for acks.
When the system is initialized, both the DUT and monitor are in the
WAIT_REQ state, waiting for a request. The stimulus generator starts in
SEND_REQ state, causing a request to be posted on the bus. Once the request
is posted, the stimulus generator and the monitor both move to the
WAIT_ACK state to wait for a response from the DUT. Once the DUT
completes the requested operation, it sends an ack that triggers both the
monitor and the stimulus generator to return to their initial states, and the
cycle repeats. This is typical of nonpipelined request/response protocols. The
requestor and responder each ping‐pong back and forth—the requestor sends
a request and waits for a response, the responder waits for a request and
sends a response—and the monitor ping‐pongs back and forth in sync with
the requestor and responder.
With each protocol, the details will be different, but typically, the three state
machines will be similar. The requestor and responder are typically
complementary to each other, and the monitor follows the activity of both,
recognizing when a request is sent and when the response is returned.
6.5 Drivers
The stimulus generator that we’ve been using thus far performs two tasks—it
generates interesting stimulus, and it orchestrates the pin‐level activity
necessary to place the stimulus on the bus. In practical applications, it is
desirable to separate those two functions, having one device generate
stimulus and another device manage the pins. The reason is that either of
those functions could change, and it’s much easier to make changes to just one
without having to worry about the other.
The agent that manages the protocol is called a driver. It is responsible for
operating the pin‐level protocol to communicate with other RTL devices. It
receives its instructions in the form of transactions. In effect, the old
stimulus generator is split into two parts: one part, still called the
stimulus generator, generates stimulus transactions; the other part, the
driver, converts those transactions to pin-level protocol.
Besides untangling the functions to improve maintainability, separating them
greatly improves the reusability of both. The exact details of the pin‐level
protocol are independent of the nature of the stimulus that will be
communicated using the protocol. By keeping them separate, you can change
either one without affecting the other. You can replace the pin‐level protocol
and not change the way stimulus is generated or vice versa.
To achieve further independence, the driver and the stimulus generator are
separated by a channel, a transaction‐level component that buffers
transactions sent between components. The channel that we typically use in
the AVM is a FIFO channel. The FIFO establishes the semantic that
transactions are retrieved by the consumer in the same order they were sent
by the producer. In this case, transactions retrieved by the driver are in the
same order they were sent by the stimulus generator. This is an application of
the FIFO buffering described in "Isolating Components with Channels" on
page 73. Here's how a testbench with a separate driver and stimulus
generator is organized:
Figure 6‐6 Testbench with Separate Stimulus Generator and Driver
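As a sketch of this organization, the environment fragment below wires a tlm_fifo between the stimulus generator and the driver. This is a hypothetical illustration, not the book's exact code: the class names, port names, and constructor bodies are assumptions.

```systemverilog
// Hypothetical environment fragment: the FIFO decouples the stimulus
// generator (producer) from the driver (consumer) while preserving
// transaction order. Names are illustrative.
class mem_env extends avm_env;

  mem_stimulus          m_stimulus;
  mem_driver            m_driver;
  tlm_fifo #(request_t) m_fifo;

  virtual function void connect;
    // The producer puts into the FIFO; the consumer gets from it, so
    // the driver sees transactions in the order they were generated.
    m_stimulus.put_port.connect( m_fifo.blocking_put_export );
    m_driver.request_port.connect( m_fifo.get_export );
  endfunction

endclass
```

Because the FIFO sits between them, either side can be replaced without touching the other, which is the reuse benefit described above.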
The stimulus generator is greatly simplified, and the code for generating
addresses and data values is no longer buried amongst the code needed to
manipulate the pins connected to the bus. The core of our new transaction‐
based stimulus generator is the task generate_stimulus().
This task generates sequences of writes followed by a sequence of reads. It
uses a convenience layer to package the transactions and send them out. The
construction of the convenience layer is entirely dependent on the protocol.
We’ve chosen to provide one task for each possible operation in the protocol,
in this case read() and write(). Each convenience function uses its
arguments to fill a transaction object and then it sends the transaction
downstream through initiator_port.
task write( input address_t addr, input data_t data );

  request_t request = new( addr, MEM_WRITE, data );
  response_t response;
  string write_str;

  $sformat( write_str, "%d %d", addr, data );

  avm_report_message( "about to do write", write_str );
  initiator_port.transport( request, response );
  avm_report_message( "just done write", write_str );

endtask

task read( input address_t addr, output data_t data );

  request_t request = new( addr, MEM_READ );
  response_t response;
  string read_str;

  $sformat( read_str, "ADDR=%d", addr );
  avm_report_message( "about to do read", read_str );

  initiator_port.transport( request, response );
  data = response.m_rd_data;

  $sformat( read_str, "%d %d", addr, data );
  avm_report_message( "just done read", read_str );

endtask
file: cookbook/06_testbench_fundamentals/mem_sv/
mem_bidirectional_stimulus.svh
The communication is done with the blocking call transport(). This causes
the stimulus generator to wait for the response before continuing on with
generating the next transaction.
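The pacing this creates can be seen in a small fragment (a sketch only, using the types from the listing above):

```systemverilog
// Each call blocks until the responder delivers its response, so the
// stimulus generator cannot race ahead of the DUT.
request_t  request = new( addr, MEM_READ );
response_t response;

initiator_port.transport( request, response ); // blocks here
// response is valid only after transport() returns
```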
The driver looks a lot like the old stimulus generator in that it is built around
an FSM that manages the pin‐level protocol. The code that generates new
values to be placed on the bus has been removed, and the details of managing
the bus have been delegated to two functions: write_request_to_bus() and
read_response_from_bus().
forever begin
  @( posedge pins_if.master_mp.clk );

  if( pins_if.master_mp.rst ) begin
    avm_report_message( "mem_driver", "doing reset" );
    state = WAIT_FOR_REQ;
    continue;
  end

  case( state )
    WAIT_FOR_REQ : begin

      if( request_port.try_get( request ) ) begin

        avm_report_message( "Sending Request",
                            request.convert2string() );

        write_request_to_bus( request );
        state = WAIT_FOR_ACK;
      end
    end

    WAIT_FOR_ACK : begin

      pins_if.master_mp.req <= 0;

      if( pins_if.master_mp.ack ) begin

        response = read_response_from_bus();

        if( !response_port.try_put( response ) ) begin
          avm_report_error( "mem_bidirectional_driver",
                            "Cannot put response" );
        end

        state = WAIT_FOR_REQ;
      end

    end
  endcase
end
file: cookbook/06_testbench_fundamentals/mem_sv/
mem_bidirectional_driver.svh
The driver uses try_put(), a nonblocking form of put(), to return responses
to the stimulus generator.
6.6 Summary
We've reviewed the fundamental components of testbenches: transaction-level
stimulus generators, drivers, and monitors. Drivers and monitors are
complementary—drivers convert transaction streams to pin wiggles and
monitors convert pin wiggles into transaction streams. Transaction‐based
stimulus generators provide a separation between the process of generating
interesting stimulus for the DUT and managing the stimulus on the pin‐level
bus. In the next chapter, we’ll look at how to build complete testbenches using
these three components as a foundation.
7
Complete Testbenches
Drivers, monitors, and stimulus generators by themselves are not enough to
make a complete testbench. Although these components provide the
controllability and observability necessary in any testbench, they do not
provide analysis or stimulus modification. We’ll talk about these aspects in
detail in this chapter.
Stimulus generators and drivers cause a DUT to operate, and monitors
produce streams of transactions representing both the inputs and outputs of
the DUT. From information provided in those streams, we need to answer the
questions, “Does it work?” and “Are we done?” How can we answer those
questions from the streams of transactions moving about the testbench? The
simple‐minded way to analyze the transaction streams is to print them out or
save them in a file and then comb through them manually to glean their
meaning. Discounting the labor intensiveness of this approach, its reliability
is entirely dependent on the thoroughness and power of concentration of the
person doing the analysis. The engineer might have to go through thousands
or millions of transactions. Even if the sample space is small, if the engineer is
doing the work late in the evening after the caffeine has worn off or on a
Friday afternoon before a long weekend, that engineer is quite likely to miss
important markers or patterns.
A more reliable, repeatable, and reusable approach is to write code that
analyzes the transaction streams. Encapsulating the code as verification
components makes it easy to insert it in a transaction‐based testbench.
Components whose role it is to analyze transaction streams are called
analysis components. The collection of analysis components in a testbench are
referred to as the analysis domain.
7.1 Analysis Ports and Analysis Components
Analysis ports form the boundary between the operational domain and the
analysis domain in a testbench. The analysis domain is the collection of
components in the testbench responsible for analyzing the behavior observed
by a monitor. Analysis components receive their input from analysis ports. A
monitor sends transactions through an analysis port to an analysis
component.
[Figure: a monitor's analysis port (the publisher) forwarding write(tr) to subscribers sub[0], sub[1], and sub[2]]
Figure 7‐1 Analysis Port Organization
Before the test begins, each subscriber must register itself with the publisher.
The publisher maintains a list of subscribers. At some time during its
operation the device that contains the analysis port, such as a monitor, calls
write(), passing in a transaction object. The analysis port forwards the write
call to each subscriber, passing a copy of the transaction object to the
subscriber.
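A minimal subscriber might look like the following sketch. The class name, message text, and constructor signature are assumptions; avm_subscriber is the AVM base class whose write() the analysis port calls, as described later in this chapter.

```systemverilog
// Each registered subscriber gets write() called with a copy of the
// transaction. write() is a function, so it must execute in zero time.
class txn_printer extends avm_subscriber #(mem_transaction);

  function new( string name, avm_named_component parent );
    super.new( name, parent );
  endfunction

  virtual function void write( input mem_transaction t );
    avm_report_message( "txn_printer", t.convert2string() );
  endfunction

endclass
```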
An analysis FIFO is a TLM FIFO with unbounded size and a write()
interface instead of the usual put() and get() interfaces. By having the
subscriber of an analysis port be an unbounded FIFO, we can guarantee that
it will never fill up in a single delta cycle and block, and that the analysis
component can access all the transactions sent in the same delta cycle.
Some monitors may deliver more than one transaction through an
analysis port in a single delta cycle. write() must return immediately, but the
subscriber may do anything, including consume time. The consequence is
that data can be lost if a subscriber is not prepared to deal with multiple
transactions in one delta cycle. In this case, you can use an analysis FIFO to
serve as a FIFO buffer between the analysis port and the analysis component.
An analysis FIFO is a tlm_fifo with an analysis interface (that is, write()).
The analysis component, instead of having an analysis interface, connects to
the analysis FIFO in the same way any component connects to a FIFO. It uses
get() or try_get() (nb_get() in SystemC) to retrieve transactions.
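Sketched below is this buffering arrangement; the analysis FIFO class name, export name, and helper task are assumptions based on the description above.

```systemverilog
// The monitor's analysis port writes into the unbounded FIFO; the
// analysis component drains it at its own pace. (Illustrative names.)
analysis_fifo #(mem_transaction) m_af;

task run;
  mem_transaction t;
  forever begin
    m_af.get( t );   // blocking get, in arrival order
    analyze( t );    // may consume time without losing transactions
  end
endtask
```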
7.2 Scoreboards
A typical use of analysis ports, as we see in Figure 7‐2, is to communicate
from a monitor to a scoreboard. The monitor contains the analysis port, and
the scoreboard is a subscriber. The role of a scoreboard is to determine
whether the DUT functions correctly, in other words, it answers does‐it‐work
questions. A scoreboard taps off the transaction streams from the DUT’s pin
interfaces. It can then perform the computations necessary to determine if the
DUT responded correctly to its inputs.
The scoreboard in this sequence of examples is an in‐order comparator. It
accepts two streams of transactions, the before and after streams, and compares
them for equivalence. For each transaction pulled from the after stream, it
pulls a transaction from the before stream and compares the two to see if they
are equivalent. The in‐order comparator is so named because it assumes that
transactions in the two streams being compared are in the same order.
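The comparison loop described here can be sketched as follows; the FIFO handles and match counter are assumptions, and comp() is the transaction comparison function covered below.

```systemverilog
// Wait for a transaction on the after stream, then pull the matching
// transaction from the before stream and compare. (Sketch only.)
task run;
  p2s_transaction before_t, after_t;
  forever begin
    m_after_fifo.get( after_t );
    m_before_fifo.get( before_t );
    if( before_t.comp( after_t ) )
      m_matches++;
    else
      avm_report_error( "in_order_comparator",
                        after_t.convert2string() );
  end
endtask
```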
[Figure: the in_order_comparator with an analysis FIFO on each of its before and after inputs]
Figure 7‐2 Organization of the in_order_comparator
124 Scoreboards
The in‐order comparator has an analysis FIFO on each input. The comparator
process waits until a transaction appears in the after FIFO. When it does, it
gets the next transaction from the before FIFO. SystemC uses operator==()
to compare the two transactions (which, of course, must be of the same type).
SystemVerilog uses the comparison policy, either the built‐in (for built‐in
types such as float and int) or the class comparator (for user‐defined class
objects).
Our DUT for these examples is a parallel‐to‐serial (P2S) converter. It takes a
stream of bytes and converts it to a serial stream of bits. To verify the
functionality of this device we’ll use these components:
Stimulus generator
The stimulus generator generates a stream of packets that the
driver will convert to pin wiggles.
Driver
Notice in Figure 7‐3 that no FIFO is shown between the stimulus
generator and driver. The reason is that the FIFO is inside the
driver. This is a common idiom, which we recommend.
Monitor
The monitor reconstitutes the serial bit stream into packets.
Scoreboard
The scoreboard, which in this case is an in_order_comparator,
compares packets from the driver with packets from the monitor.
[Figure: the stimulus generator feeding the driver, the parallel-to-serial DUT between driver and monitor, and the driver and monitor analysis ports feeding the in-order comparator scoreboard]
Figure 7‐3 Testbench for P2S Design
Scoreboards 125
The components are declared and instantiated in the environment.
The constructor takes two arguments, which are the virtual interfaces to the
DUT. This DUT has two pin interfaces, an input and an output interface.
These are stored in the environment as virtual interfaces m_input_bus and
m_output_bus. All of the components are instantiated by calling new() and
passing in this to refer to their parent, which in this case is the environment.
To generate stimulus, we use the AVM library component
avm_random_stimulus. We’ll look at how this component works in more
detail in Chapter 10. In the meantime, all you need to know is that this
component will generate a stream of randomized transactions of a specified
type.
m_stimulus.blocking_put_port.connect(
  m_driver.blocking_put_export );
m_driver.ap.connect( m_comparator.before_export );
m_bit_monitor.ap.connect( m_comparator.after_export );
endfunction
file: cookbook/07_complete_testbenches/01_scoreboard_sv/
p2s_env.sv
The virtual interfaces are pushed down the hierarchy using assignment
statements. The transaction‐level connections are made using the connect()
function on the ports. (Notice that analysis ports are connected using the
same technique as ports. This is different from previous versions of the AVM,
which used a special register() function. connect() on analysis ports does
the same thing as register() did in the previous versions.)
The environment will start the run function to execute the testbench.
task run;
  p2s_transaction gen = new;
  m_stimulus.generate_stimulus( gen, 10 );
  # 100;
endtask
file: p2s_env.sv
This run function instructs the random stimulus generator to generate 10
transactions, wait 100 ns, and then terminate. The stimulus generator
generates transactions of type p2s_transaction. This is a simple object;
however, it is nonetheless representative of transaction objects you would
build to pass around the testbench between stimulus generators and drivers,
monitors and scoreboards, or anywhere a transaction object must be passed
between components.
    avm_report_message( t.convert2string(),
                        convert2string() );
    return t.data == data;
  endfunction

  function string convert2string;
    string s;
    $sformat( s, "Parallel Transaction %d", data );
    return s;
  endfunction

endclass
file: cookbook/07_complete_testbenches/p2s_tr_sv/
p2s_transaction.svh
A minimum transaction object has four functions—copy(), clone(), comp(),
and convert2string(). The following table describes each of these
functions.
copy(t)
    Copies the argument object into the current object.
clone()
    Calls new() to allocate memory for a new object, uses copy() to
    copy the current contents into the new object, and then returns
    the new object.
comp(T t)
    Compares the current object with the argument object. Returns 1
    if they are equivalent, 0 otherwise.
convert2string()
    Converts the object into a string suitable for printing and
    returns the string. Typically, $sformat is used to create the
    string.
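Putting the four functions together, a minimal transaction class might look like this sketch. The class name and field are illustrative, not the book's p2s_transaction, and the clone() return type is an assumption.

```systemverilog
class my_transaction extends avm_transaction;

  int data;

  function void copy( input my_transaction t );
    data = t.data;             // copy the argument into this object
  endfunction

  virtual function avm_transaction clone;
    my_transaction t = new;    // allocate...
    t.copy( this );            // ...copy current contents...
    return t;                  // ...and return the new object
  endfunction

  function bit comp( input my_transaction t );
    return t.data == data;     // 1 if equivalent, 0 otherwise
  endfunction

  virtual function string convert2string;
    string s;
    $sformat( s, "my_transaction: data = %0d", data );
    return s;
  endfunction

endclass
```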
The avm_transaction base class, from which all transactions are derived,
specifies clone() and convert2string() as virtual, thus
enforcing the requirement that these functions are present in derived
transaction classes. comp() and copy() are not specified in the base class
because the signature of these functions requires the type of this (the derived
class), which the base class does not know. Technically these functions are not
required since they are not specified in the base class. However, we strongly
recommend that you get in the habit of providing these functions. Even if you
don’t need them right away, you might find that as you expand your
testbench with new functionality, you’ll be glad they are there.
In SystemC, you use operator==() to perform a comparison between objects,
and you use operator=() and the copy constructor to copy and clone objects.
To print objects, you must write an operator<<() function to convert the
transaction object to a string and send it to an output stream.
We’ve looked at how the scoreboard is constructed. Now let’s look at how the
driver and monitor are constructed.
The driver FSM has two states—WAIT and BURST.
WAIT BURST
Figure 7‐4 P2S Driver FSM
The FSM must stay in the wait state a fixed number of clock cycles. After the
FSM waits the proper number of clock cycles, it switches to the BURST state.
In this state, the FSM sends a burst of bytes, one each clock cycle.
The primary structure for the FSM is a forever block controlled by the clock.
Inside the forever loop is a case statement that switches on the current state
(m_state).
forever begin
  @( posedge m_bus_if.master_mp.clk );

  if( m_bus_if.master_mp.rst == 1 ) begin
    m_wait_count = 0;
    m_burst_count = 0;
    avm_report_message( "doing reset", "", 500 );
    continue;
  end

  case( m_state )

    WAIT : begin
      m_bus_if.master_mp.en <= 0;
      if( m_wait_count++ == m_wait_length ) begin
        m_wait_count = 0;
        m_state = BURST;
      end
    end

    BURST : begin
The BURST state first checks to see if the burst has completed. If it has not,
then it gets a transaction from the input and sends the transaction to the bus.
This small fragment of code that gets a transaction and sends it to the bus
represents the fundamental function of the driver and is an important
part of its organization. Just like the memory driver we saw in the last
chapter, this driver uses try_get(), the nonblocking form of get(), to
retrieve a transaction. It uses an internal task, send_transaction_to_bus(),
to cause the transaction to be played on the bus.
send_transaction_to_bus() is a task instead of a function in order to allow
nonblocking assignments to be used to convert transaction information into
bits on a bus; however, it should not consume time. Any time‐consuming
activity should be part of the FSM. In practice, send_transaction_to_bus()
may be called multiple times to move a transaction out to the bus, with each
call sending part of the transaction. For example, a burst write may call
send_transaction_to_bus() several times, each time writing a single byte
or word as part of a larger transfer.
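As an illustration, a zero-time send_transaction_to_bus() for this driver might look like the sketch below; the signal names follow the earlier listings, but the exact fields are assumptions.

```systemverilog
// A task only so that nonblocking assignments may be used; it
// contains no delays or event controls, so it consumes no time.
// Any time-consuming behavior stays in the FSM that calls it.
virtual task send_transaction_to_bus( input p2s_transaction t );
  m_bus_if.master_mp.en   <= 1;
  m_bus_if.master_mp.data <= t.data;
endtask
```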
7.3 Coverage
To answer the are‐we‐done questions, we need to collect coverage
information as the simulation proceeds. In the most technical sense, coverage
refers to the number of states of a state machine that have been reached
relative to the total number of states contained in the FSM. For example, if a
design contains 8,274,388 states and a test reaches 7,488,329 of them, then we
can claim 90.5% coverage. Instead of trying to enumerate the number of states
in a design, we’re going to use a more informal but infinitely more practical
notion of coverage. Instead of enumerating which states have been reached,
we’ll ask if the design has done all the things it’s expected to do. We do this by
collecting the coverage information necessary to answer the are‐we‐done
questions.
Two additional components are needed in our testbench to deal with
coverage: a coverage collector and a test controller. The coverage collector
receives transactions from a monitor via an analysis port. It then performs
some data reduction operation on the transaction stream to extract a coverage
metric, a measure of coverage. The metric is compared against a threshold to
determine whether all the required activity has taken place, that is, whether
the coverage goal has been reached.
The simplest data reduction activity is to count the transactions. For example,
we can say we’re done if 500 transactions have passed by. In most
circumstances, this is insufficient. Typically, the transaction stream consists of
different types of transactions, each type representing a different kind of
activity. A common coverage metric is to count each expected type of
transaction. The metric is the number of types whose count is greater than
zero. If that number matches the number of expected types, then we know
that the system has exercised at least one of each transaction type. Assuming
the scoreboard has not located any functional errors, we can claim that we’re
done!
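The count-each-type metric can be sketched with an associative array; the op_t type and op field are assumptions for illustration.

```systemverilog
// Count each transaction type as it passes by; the metric is the
// number of types seen at least once.
int unsigned m_count [op_t];

function void record( input mem_transaction t );
  m_count[ t.op ]++;
endfunction

function bit is_covered( input int n_expected_types );
  // Done when every expected type has been observed at least once.
  return ( m_count.num() == n_expected_types );
endfunction
```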
Of course, coverage metrics can be quite complex. The nature of the DUT and
the set of are-we-done questions will determine the complexity of the
coverage metrics required.
The other component that we are introducing here is the test controller. It
receives information from the coverage collector about whether the coverage
threshold has been reached. When the controller is notified that the threshold
has been reached, it can take any desired action. Commonly, test controllers
will change random constraints to cause new kinds of activity to occur in the
DUT (hopefully reaching more corner cases and increasing coverage), switch
to a new test, or simply shut off the stimulus generator.
The controller completes the loop that includes a random stimulus generator
and the coverage collector. You can turn on a random stimulus generator (not
an arbitrary stimulus generator but a stimulus generator that produces a
stream of randomly generated transactions) and let it run freely. The coverage
collector and the test controller act as the governor, preventing the testbench
from running indefinitely.
For example, consider the case where the coverage collector is programmed
to make sure that a particular memory space is covered. It will tell the
controller that we’re done when it sees at least one transaction for each
address in the space. Suppose the DUT, possibly because of a design flaw,
never exercises every address. At some point during the simulation, all the addresses
that are possible for the DUT to exercise will have been exercised, and yet the
space is not fully covered. To avoid the simulation spinning on endlessly, the
coverage collector can check periodically, say every 100 cycles or whatever
interval is appropriate, whether the coverage metric has increased. If the
answer is no, then it instructs the controller to stop the simulation. This sort of fail‐safe
mechanism prevents runaway simulations.
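Such a fail-safe might be sketched as follows; the covergroup handle, clock, and stop flag are assumptions.

```systemverilog
// Every check_interval cycles, compare the coverage metric against
// its previous value; if it has stopped increasing, request a stop.
task watch_for_stall( input int check_interval );
  real previous = 0.0;
  forever begin
    repeat( check_interval ) @( posedge m_clk );
    if( m_cov.get_coverage() <= previous ) begin
      avm_report_warning( "coverage", "metric has stalled" );
      m_stop_request = 1;   // the test controller acts on this
      return;
    end
    previous = m_cov.get_coverage();
  end
endtask
```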
Below is the testbench augmented with a coverage collector and a test
controller. Note that the monitor feeds transactions to the scoreboard as well
as to the coverage collector. Also notice that the communication between the
coverage collector and the test controller and between the test controller and
the stimulus generator is done at the transaction level.
[Figure: the P2S testbench of Figure 7-3 augmented with a coverage collector fed by the monitor and a test controller connected to the coverage collector and the stimulus generator]
Figure 7‐5 P2S Testbench with Coverage Collector and Test Controller
covergroup byte_cov;
  top_four    : coverpoint data[7:4];
  bottom_four : coverpoint data[3:0];
endgroup
file: cookbook/07_complete_testbenches/p2s_transactors_sv/
p2s_byte_coverage.svh
The coverage collector performs its work when it receives a transaction.
Because it is an avm_subscriber, it will spring to life when write() is called.
file: cookbook/07_complete_testbenches/p2s_transactors_sv/
p2s_byte_coverage.svh
The first thing write() does is extract the relevant data from the input
transaction. Then it samples the coverage, causing the appropriate bins in the
covergroup to be incremented. Next it asks if the coverage metric, as
calculated by the covergroup, has reached the threshold. (In this case, the
threshold is 95%.) If it has reached the threshold, it sets m_is_covered to 1 to
indicate that the coverage threshold has been reached.
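Based on that description, the collector's write() might look like this sketch. We have not reproduced the file's exact code; the names follow the covergroup above.

```systemverilog
// Extract, sample, then test the threshold, all in zero time.
virtual function void write( input p2s_transaction t );
  data = t.data;                       // extract the relevant data
  byte_cov.sample();                   // update the covergroup bins
  if( byte_cov.get_coverage() >= 95.0 )
    m_is_covered = 1;                  // coverage goal reached
endfunction
```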
The test controller in our testbench is not a separate component, rather it is
the run task in avm_env.
task run();
  fork
    begin
      m_stimulus_process = process::self;
      m_stimulus.generate_stimulus;
    end
    terminate_when_covered;
  join
endtask

task terminate_when_covered();
  wait( m_byte_coverage.m_is_covered );
  m_stimulus_process.kill;
  avm_report_warning( "terminate_when_covered", "" );
endtask
run() forks the task terminate_when_covered(), which simply waits until
m_is_covered is modified in the coverage collector (m_byte_coverage).
When m_is_covered changes, then terminate_when_covered() kills the
stimulus generator and then terminates, allowing run() to terminate, which
in turn allows the testbench to shut down.
7.4 Error Injection
The world is not always as well behaved as we would like it to be. Wires are
noisy, data gets corrupted, and from time to time, systems fail. To deal with
the real world, most designs have the means built in for handling bad data or
noisy lines. To claim a DUT works, we must not only show that it works on
good, well‐formed, well‐behaved data, we must also show that it behaves
properly on malformed data.
There are a number of ways to get bad data. One way is to have the stimulus
generator generate bad data. It is not always possible to generate all the kinds
of bad data the DUT might encounter because the data generated by the
stimulus generator is at a higher level of abstraction than what the bus sees. A
better way is to inject errors onto the bus by modifying the behavior of the
driver.
However, we don’t want to have to rewrite the driver. Drivers can be quite
complicated and a rewrite is not only tedious, redundant work, it quite likely
can result in subtle errors in the verification process. If the error‐injecting
driver does not exhibit the exact same behavior as the good (non‐error‐
injecting) driver, then when the error‐injecting driver is not injecting
intentional errors, it could be injecting unintentional errors by causing
different behavior to occur on the bus.
We can avoid this problem by deriving the error‐injecting driver from the
good driver. The object‐oriented facilities in SystemVerilog and SystemC
allow us to do this.
The header of error_driver shows that it is extended from p2s_driver, the
good driver. The error driver inherits all of the methods and members from
p2s_driver, including the virtual interfaces and transaction‐level ports and
exports. The constructor initializes some internal variables used to control
error injection. The only real difference is the send_transaction_to_bus()
task.
send_transaction_to_bus() is a virtual function. That means it is eligible to
be overridden by derived classes. It also means that the task in the derived
class must have the same type signature—function name, arguments, and
return type—as the base class. The FSM in run() will not have to change. It
will call send_transaction_to_bus() as it always has. However, in the
derived error‐injecting driver, send_transaction_to_bus() will have some
different behavior. Specifically, it will cause errors to be injected.
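A sketch of such a derived driver is shown below; the corruption scheme (flipping the data on every Nth transaction) and the member names are purely illustrative.

```systemverilog
class error_driver extends p2s_driver;

  int unsigned m_error_interval = 10;  // inject every Nth transaction
  int unsigned m_sent;

  function new( string name, avm_named_component parent );
    super.new( name, parent );
  endfunction

  // Same signature as the base class version; the inherited FSM in
  // run() calls it polymorphically and needs no changes.
  virtual task send_transaction_to_bus( input p2s_transaction t );
    if( ++m_sent % m_error_interval == 0 )
      t.data = ~t.data;                   // deliberate corruption
    super.send_transaction_to_bus( t );   // reuse the good behavior
  endtask

endclass
```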
Here’s the new testbench that includes the error‐injecting driver and a new
coverage collector to count errors.
[Figure: the P2S testbench with the error driver substituted for the good driver, together with the stimulus generator, monitor, coverage collector, and in-order comparator scoreboard]
Figure 7‐6 Complete Testbench with Error Injection
task run;
  fork
    begin
      m_stimulus_process = process::self;
      m_stimulus.generate_stimulus;
    end
    terminate_when_covered;
  join
endtask

task terminate_when_covered;
  wait( m_byte_coverage.m_is_covered &&
        m_comparator_coverage.m_is_covered );
  m_stimulus_process.kill;
  avm_report_message( "terminate_when_covered", "" );
endtask
7.5 Summary
The basic testbenches discussed in the previous chapter illustrated means for
generating stimulus and obtaining information about what goes on in the
simulation—control and observation. In this chapter, we added the means to
analyze the activity in the system through the inclusion of scoreboards,
coverage collectors, and test controllers. These kinds of components
transform a testbench from a simulation‐results generator to a true self‐
checking testbench.
8
Stepwise Refinement
Stepwise refinement is the process of taking a high‐level design and, through
various transformations and substitutions, turning it into a detailed
implementation. In a stepwise refinement flow, also known as a top‐down
flow, the initial incarnation of the design is built at the transaction level.
Subsequent transformations and substitutions will change some or all of the
design to lower‐level TLMs (that is, timed models or even cycle‐accurate
models) or RTL. TLM is used first because it enables designers to understand
the general characteristics of a device, such as algorithm correctness,
performance, and throughput.
In a large design with many components designed by different designers or
different design teams, it is often the case that not all of the RTL is finished at
once. It’s desirable to substitute portions of the TLM with RTL components as
they become available. Performing this kind of substitution requires users to
insert transactors to connect transaction‐level components to pin‐level
components.
We can construct a testbench for a TLM to answer the are‐we‐done questions,
but not the does-it-work questions. The reason is that, since it is the first
model in a design chain, there is no reference model to compare against.
Determining the answer to the does‐it‐work questions for a highly abstracted
model is typically done by manual means.
Creating testbench components at any level of abstraction, including at the
transaction level, is a non‐trivial task, so it’s important to be able to reuse
them to whatever extent possible. In this chapter we will show how to do just
that. We will show how to reuse a stimulus generator and a coverage collector
at both the transaction level and in the RTL implementation. We'll also show
how to reuse the TLM as a golden model and compare results between it and
the RTL model at run time.
The model that we’ll use to illustrate this flow is a floating point unit (FPU).
The FPU takes in pairs of floating point numbers and an opcode representing
an arithmetic operation. The FPU executes the function associated with the
opcode using the two floating point numbers as inputs. It returns a floating
point result and a set of flags indicating overflow and so forth.
8.1 Transaction‐Level Design
To exercise our FPU at the transaction level, we’ll need a master to generate
stimulus requests and process responses, and we’ll need a coverage collector
to count specific activities and to establish a threshold of completeness, and a
test controller to switch off the master when the coverage threshold has been
reached. Figure 8‐1 shows the transaction‐level design and testbench.
[Figure: the master connected to the FPU TLM at the transaction level, with request and coverage taps]
Figure 8‐1 Transaction‐Level Model
A transaction‐level model of an FPU is quite straightforward. The core of the
design is centered around a forever loop that continually retrieves
transactions from the input, computes a response, and sends the response
back.
forever begin
  slave_get_export.get( m_req );
  request_ap.write( m_req );

  m_rsp = compute( m_req );

  slave_put_export.put( m_rsp );
  response_ap.write( m_rsp );
end
file: cookbook/08_stepwise_refinement/fpu_sv/fpu_tlm.sv
The compute() function takes a request object (m_req), performs the specified
floating point computation, and then creates a response object (m_rsp) that
contains the result of the computation. compute() is structured around a case
statement that switches on the opcode in the request.
The details of floating point arithmetic are beyond the scope of this text, but
we’ll touch on a few relevant issues. Certain values of the operands will result
in an exception for some operations. A divisor of zero will cause a divide‐by‐zero exception, and square root will fail if given a negative operand. In each of these cases, we want to detect the situation and return a well‐defined value instead of allowing the exception to occur.
Detecting the divide‐by‐zero case requires testing whether the divisor is zero. Comparing two floating point
numbers for equality (for example, a == b, where both a and b are floating
point types) is rarely a good idea. Instead it’s better to see if their difference is
close to zero. Arbitrarily, we define any quantity whose absolute value is less than 1.0e‐38 as “effectively zero.” To avoid a divide‐by‐zero exception, we
first check to see if the divisor is effectively zero. To avoid an imaginary
number exception (taking the square root of a negative number) we need to
see if the operand is less than zero. When either of these cases is detected we
return the special constant NaN (not a number). This special constant is
defined in the IEEE 754 standard for representing floating point numbers.
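A sketch of this guard logic, with assumed type, field, and opcode names (not the actual cookbook source), might look like:

```systemverilog
// Hypothetical sketch of the guards in compute(); fpu_request,
// fpu_response, and the field/opcode names are assumptions.
function bit effectively_zero(real x);
  return (x < 1.0e-38) && (x > -1.0e-38);
endfunction

function fpu_response compute(fpu_request req);
  fpu_response rsp = new();
  case (req.op)
    DIV:  if (effectively_zero(req.b))
            rsp.result = 0.0 / 0.0;   // return NaN instead of raising
          else                        // a divide-by-zero exception
            rsp.result = req.a / req.b;
    SQRT: if (req.a < 0.0)
            rsp.result = 0.0 / 0.0;   // NaN for a negative operand
          else
            rsp.result = $sqrt(req.a);
    // ADD, SUB, MUL cases omitted
  endcase
  return rsp;
endfunction
```

The 0.0/0.0 idiom yields the IEEE 754 NaN value, matching the behavior the text describes.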
The FPU has two analysis ports—request_ap for posting requests and
response_ap for posting responses. We’ll connect these later to the coverage
collector and golden model.
Notice that the transaction‐level representation of the FPU has no timing in it.
Time never advances past 0. In this model, we are interested in whether the
FPU computes results correctly, not when the events occur. Timing will
become important when we look at the RTL implementation.
The stimulus generator (which we call a master in this case because it is
responsible for fielding responses as well as generating requests) must
generate interesting sequences of operands and opcodes to send to the FPU.
We use a simple mechanism to control the execution of the master. The run()
task forks two subordinate threads, go() and controller(). go() is the
thread that generates new stimuli.
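A minimal sketch of such a run() task, assuming nothing beyond the go() and controller() names from the text:

```systemverilog
// Sketch of the master's run() task: go() generates stimulus while
// controller() shuts it down once the coverage goal is reached.
task run();
  fork
    go();          // generate request/response traffic
    controller();  // stop go() when a STOP transaction arrives
  join
endtask
```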
controller() will cause the stimulus generator (go() thread) to stop once
the coverage threshold has been reached. It gets control transactions from the
coverage collector using a blocking get(). By blocking, the controller doesn’t
have to poll, it just waits until a control transaction arrives.
119 m_ctrl_fifo.get(m_ctrl);
123
124 if( m_ctrl.stop_start == STOP) begin
125 m_stop_start = STOP;
128 end
129 end
130 endtask
file: cookbook/08_stepwise_refinement/fpu_sv/fpu_master.sv
When the coverage goal is reached, a STOP control transaction is created and
sent out via the control analysis port. The control analysis port is connected to
the master and completes the control loop between the master, DUT, and
coverage collector.
8.2 RTL Substitution
The transaction‐level model can tell us whether our algorithm works
correctly. Checking the answer, however, is a largely manual process. If there
is any automation to be used, then it is outside of the testbench. For example,
we could write all the requests and responses to a file and compare them with
requests and responses generated by a standalone C model.
The next step in our stepwise refinement process is to replace the TLM with
an RTL model. When we make this substitution, we will reuse the master and
the coverage collector without modification. The master and coverage
collector have transaction interfaces. We use transactors to connect them to an
RTL DUT. We’ll use a driver to convert a stream of stimulus transactions to
pin‐level activity and a monitor to convert pin‐level activity on the bus to a
stream of transactions.
[Figure: the master and coverage collector, unchanged from Figure 8‐1, now connect through a driver and a monitor to the clocked RTL FPU.]
Figure 8‐2 RTL Model with Transaction‐Level Testbench
The sub‐assembly that contains the RTL DUT, the driver, and the monitor
(surrounded by the larger box in the figure above) has the same interfaces as
the transaction‐level FPU. Because they have the same interfaces with the
same connections, the same inputs and outputs, and the same semantics, the
sub‐assembly and the transaction‐level FPU are functionally equivalent.
Since the DUT is modeled at the RTL, it has a clock, and thus is a timed
model. The only difference visible at the interfaces between the RTL FPU and
the transaction‐level model is time. The sequence of responses from the RTL
DUT will be spaced out in time whereas the responses from the transaction‐
level DUT will be returned immediately. The coverage collector and the
master don’t know anything about time, so they can handle timed as well as
untimed responses. The master has no dependency on the driver or the DUT,
so the sequences of requests it produces will be the same no matter which
device it drives. Thus, the sequence of responses will be identical as well.
The RTL DUT is a synchronous pipelined device with a pipeline depth of one.
The schematic symbol for the RTL FPU is in Figure 8‐3.
[Figure: schematic symbol of the FPU showing the operand inputs, the opcode input, CLK, and the result and exception outputs.]
Figure 8‐3 Pin‐Out for FPU
The response appears on the OUT port one clock cycle after the request is
posted on the input ports. To avoid any synchronization problems, the driver
is not pipelined. Instead, it synchronizes responses with requests by waiting
until the ready signal is asserted, indicating a response is ready.
71 forever begin
73 master_chan_slave.get(m_req);
76 issue_request(m_req);
79 wait(m_fpu_pins.ready == 1);
80
81 m_rsp = collect_response(m_req);
82
84 master_chan_slave.put(m_rsp);
85
86 @(posedge m_fpu_pins.clk);
87
88 end
89 endtask // run
file: cookbook/08_stepwise_refinement/fpu_sv/fpu_driver.sv
Unlike the driver, the monitor does not have the luxury of ignoring pipelining. It must respond to the protocol on the bus, which in this case involves some pipelining. If it doesn’t recognize this feature of the
protocol, it risks missing or improperly reporting some transactions. The
main loop of the monitor responds to both the start and ready signals.
37 task run();
38 forever
39 @(posedge m_fpu_pins.clk) fork
40 if(m_fpu_pins.start == 1'b1) begin
42 monitor_request();
43 end
44 if(m_fpu_pins.ready == 1'b1) begin
46 monitor_response();
47 end
48 join
49 endtask // run
file: cookbook/08_stepwise_refinement/fpu_sv/fpu_monitor.sv
When start is asserted, then monitor_request() is called to capture the
request on the bus. When ready is asserted, monitor_response() is called to
capture the response.
Because we are reusing the master and the coverage collector, we can reuse
the transaction level environment by extending it and adding in the new
components.
The call to super.connect() causes the master and coverage collector to be
connected to each other, as they were in the base environment. The
connections to the fpu_tlm are simply ignored, since the component was
removed. We then connect the master to the driver and the coverage collector
to the monitor. Both the driver and monitor must also be connected to the
virtual pin‐level interface, which occurs in import_connections().
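As a sketch of the extended environment’s connect() (the component and export names here are assumptions, not the cookbook source):

```systemverilog
// Hypothetical connect() in the RTL environment, extending the
// transaction-level environment described in the text.
virtual function void connect();
  super.connect();  // master <-> coverage wiring from the base env
  // route the master's transaction stream through the driver
  m_master.master_port.connect(m_driver.master_chan_slave_export);
  // feed the coverage collector from the monitor instead of the TLM
  m_monitor.request_ap.connect(m_coverage.analysis_export);
endfunction
```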
8.3 TLM as Golden Model
In the first step of the refinement process we built a transaction‐level model of
an FPU. We used this model to prove the feasibility of the device and the
correctness of the algorithm. We built testbench components needed to
stimulate the device and to tell when we’ve exercised it sufficiently. In the
second step, we replaced the transaction‐level device with an RTL device. If
the RTL device is implemented faithfully, then the results from the tests with
the RTL device should be identical to the test results using the transaction‐
level device. How can we tell if the results are identical? In this step of the
refinement process, we’ll show how to reuse the transaction‐level device as a
golden model and how to automate the comparison of the two devices.
To construct this testbench, we’ll again reuse the master and the coverage
collector. As noted, we’ll also reuse the transaction‐level FPU. We’ll modify
the monitor to produce a stream of responses as well as requests. We’ll add an
adapter to connect the transaction‐level FPU and a comparator device to
compare the streams of responses. We’ll also add a new coverage collector to
count matches and comparison failures.
[Figure: the RTL testbench of Figure 8‐2 extended with the FPU TLM as a golden model, fed through a TLM adapter; the monitor and adapter response streams are compared.]
Figure 8‐4 RTL and Transaction‐Level Testbench with TLM as Golden Model
The transaction‐level FPU is constructed with a single slave port that both
sends and receives transactions. In our testbench, we have a monitor that has
two analysis ports, one for requests and one for responses, and we need to
route requests to the request coverage collector and responses to the response
comparator. To manage all this additional plumbing, we need an adapter. The
adapter converts the single bidirectional port on the TLM into two separate
streams.
47 task run;
49 forever begin
50 request_fifo.get(m_req);
53 master_port.put(m_req);
54
55 master_port.get(m_rsp);
58 response_ap.write(m_rsp);
59 end
60 endtask // run
file: cookbook/08_stepwise_refinement/fpu_sv/fpu_tlm_adapter.sv
The adapter gets requests from the request_fifo and sends them to the
TLM. It then waits for a response and, upon receiving one, sends it out
through the analysis port response_ap.
The response comparator is an instance of the avm_in_order_comparator we
saw used as the scoreboard in the parallel‐to‐serial examples in the previous
chapter. We use it here to compare responses from the monitor with
responses from the TLM. Since both the golden model TLM and the RTL DUT
receive the same stream of requests, their response streams should be
identical. The avm_in_order_comparator automatically performs the
comparison.
These new components are added by extending the RTL environment.
43 m_ta.master_port.connect(
44 m_golden.blocking_master_export);
45 m_ta.response_ap.connect(m_comp.after_export);
46 m_monitor.request_ap.connect(m_ta.request_export);
47 endfunction // void
file: cookbook/08_stepwise_refinement/fpu_env_sv/fpu_golden_env.sv
8.4 Summary
In this chapter, we demonstrated an important notion of reuse across
abstraction levels. We built a master (stimulus generator) and a coverage
collector, which we used to exercise a TLM of an FPU. We then used the same
verification components to exercise the RTL equivalent of the FPU. Finally, we
used the same components again, adding the transaction‐level FPU as a
golden model to functionally verify the RTL.
We were able to accomplish this feat of reuse because the reusable devices
included well‐defined interfaces. The interfaces on the master, driver,
monitor, and coverage collector are based on a foundation of transaction‐level
interfaces and channels. Using these interfaces consistently enables us to
reuse these components in other situations.
9
Modules in Testbenches
In the previous chapters, we demonstrated how to create various class‐based
verification components. A question often arises: “Should I use classes or
modules to create my verification components?” The answer is—it depends.
In general, class‐based components are easier to customize and maintain by
means of inheritance. They are easier to randomize than modules, and they
allow flexible instantiation. Yet, there are still a few situations that require
module‐based verification components—or justify their use.
One example is when you create a monitor that is built from assertions to
detect black‐box errors (that is, errors outside of the DUT) as well as
interesting coverage scenarios. The reason you might do this is that
SystemVerilog does not permit temporal assertions within a class. For this
situation, you must encapsulate your higher‐level (black‐box) assertions
inside a module, and then integrate this module‐based monitor into your
testbench (along with your class‐based verification components).
Another example where modules are required is when you reuse legacy bus
functional models (BFMs) written as modules. For example, to manage your
current project resources, you might be required to reuse an existing module‐
based BFM from a previous project—or even purchase new third‐party,
module‐based BFMs for your current project. You might also have existing
PLI‐ or DPI‐based models that require module‐based encapsulation. In these
situations, the ability to mix existing module‐based verification components
with your newly developed AVM class‐based verification components is
critically important to many projects.
Our first example presents a module‐based assertion monitor for our nonpipelined bus, which consists of a set of SystemVerilog assertions and is used to validate bus activity. In addition, our module‐based
assertion monitor converts nonpipelined bus pin‐level activity back into a
stream of transactions for analysis by other components within the testbench
(such as a coverage collector). We demonstrate how to form a complete
testbench consisting of our module‐based monitor along with various AVM
class‐based verification components.
Our second example builds on our first by demonstrating how to reuse a
legacy module‐based BFM along with various AVM class‐based verification
components to form a complete testbench. For this example, we replace the
class‐based driver (used in our first example) with a module‐based BFM
along with a class‐based BFM wrapper.
9.1 Nonpipelined Bus Example
In this section, we introduce a simple nonpipelined bus design (illustrated in
Figure 9‐1), which we use as a reference example to demonstrate our modules
in testbenches concepts. For our nonpipelined bus example, all signal
transitions relate only to the rising edge of the bus clock (clk).
[Figure: MASTER and SLAVE connected by the bus signals clk, rst, sel, en, write, addr, rdata, and wdata.]
Figure 9‐1 Simple Nonpipelined Bus Design
The table below provides a summary of the bus signals for our simple
nonpipelined bus example.
Name         Description
clk          All bus transfers occur on the rising edge of clk.
rst          An active‐high bus reset.
sel          Indicates that a slave has been selected. Each slave has its
             own select (for example, sel[0] for slave 0); for our simple
             example, we assume a single slave.
en           Strobe for the active phase of the bus.
write        When high, write access; when low, read access.
addr[7:0]    Address bus.
rdata[7:0]   Read data bus, driven when write is low.
wdata[7:0]   Write data bus, driven when write is high.
To help you understand the operation of our nonpipelined bus example (for a
single slave), we have created a conceptual state machine that describes the
valid bus transitions of the sel and en control signals.
[Figure: three conceptual states: INACTIVE (sel==0, en==0), START (sel==1, en==0), and ACTIVE (sel==1, en==1), with transitions labeled setup, transfer, and no transfer.]
Figure 9‐2 Simple Nonpipelined Bus Conceptual States
After a reset (that is, rst==1'b1), our simple nonpipelined bus is initialized to
its default INACTIVE state, which means both sel and en are de‐asserted. To
initiate a transfer, the bus moves into the START state, where the master
asserts a slave select signal, sel, selecting a single slave component.
The bus only remains in the START state for one clock cycle and will then
move to the ACTIVE state on the next rising edge of the clock. The ACTIVE
state only lasts a single clock cycle for the data transfer. Then, the bus will
move back to the START state if another transfer is required, which is
indicated when the selection signal remains asserted. However, if no
additional transfers are required, the bus moves back to the INACTIVE state
when the master de‐asserts the slave’s select and bus enable signals.
The address (addr[7:0]), write control (write), and transfer enable (en)
signals are required to remain stable during the transition from the START to
ACTIVE state. However, it is not a requirement that these signals remain
stable during the transition from the ACTIVE state back to the START state.
9.1.1 Basic Write Operation
Figure 9‐3 illustrates a basic write operation for our simple nonpipelined bus
interface involving a bus master and a single slave.
[Figure: write‐transaction waveform over clocks 0 through 4 showing write, sel, en, addr (ADDR 1), and wdata (DATA 1).]
Figure 9‐3 Non‐Burst Write Transaction
At clock one, since both the slave select (sel) and bus enable (en) signals are
de‐asserted, our bus is in an INACTIVE state, as we previously defined in our
conceptual state machine (see Figure 9‐2) and illustrated in Figure 9‐3. The
state variable in Figure 9‐3 is actually a conceptual state of the bus, not a
physical state implemented in the design.
The first clock of the transfer is called the START cycle, which the master
initiates by asserting one of the slave select lines. For our example, the master
asserts sel, which is detected at the rising edge of clock two. During the
START cycle, the master places a valid address on the bus and in the next
cycle, places valid data on the bus. This data will be written to the currently
selected slave component.
The data transfer (referred to as the ACTIVE cycle) actually occurs when the
master asserts the bus enable signal. In our case, it is detected on the rising
edge of clock three. The address, data, and control signals all remain valid
throughout the ACTIVE cycle.
When the ACTIVE cycle completes, the bus enable signal (en) is de‐asserted by the bus master, which completes the current single‐cycle write operation. If the master has finished transferring all data to the slave (that is,
there are no more write operations), then the master de‐asserts the slave select
signal (for example, sel). Otherwise, the slave select signal remains asserted,
and the bus returns to the START cycle to initiate another write operation. It is not necessary for the address and data values to remain valid during the transition from the ACTIVE cycle back to the START cycle.
[Figure: read‐transaction waveform over clocks 0 through 4 showing write, sel, en, addr (ADDR 1), and rdata (DATA 1).]
Figure 9‐4 Non‐Burst Read Transaction
9.1.2 Basic Read Operation
Figure 9‐4 illustrates a basic read operation for our simple nonpipelined bus interface involving a bus master and a single slave (sel).
Just like the write operation, since both the slave select (sel) and bus enable
(en) signals are de‐asserted at clock one, our bus is in an INACTIVE state, as
we previously defined in our conceptual state machine (see Figure 9‐2). The
timing of the address, write, select, and enable signals is the same for the
read operation as they were for the write operation. In the case of a read, the
slave must place the data on the bus for the master to access during the
ACTIVE cycle, which Figure 9‐4 illustrates at clock three. Like the write
operation, back‐to‐back read operations are permitted from a previously
selected slave. However, the bus must always return to the START cycle after
the completion of each ACTIVE cycle.
9.1.3 Nonpipelined Bus Requirements
Prior to creating the set of assertions for our module‐based monitor, we must
identify a comprehensive list of natural language requirements. We begin by
classifying the requirements into categories, as shown in the table below.
Assertion name Summary
Bus legal state
a_state_reset_inactive INACTIVE is the initial state after reset
a_valid_inactive_trans INACTIVE or START follows INACTIVE
a_valid_start_trans ACTIVE state follows START
a_valid_active_trans INACTIVE or START follows ACTIVE
a_no_error_state sel and en must be in valid state
Bus select
a_sel_stable From START to ACTIVE, sel is stable
Bus address
a_addr_stable From START to ACTIVE, addr is stable
Bus write control
a_write_stable From START to ACTIVE, write is stable
Bus data
a_wdata_stable From START to ACTIVE, wdata is stable
[Figure: the stimulus generator, driver, DUT, module‐based assertion monitor, coverage collector, and test controller, connected by coverage and error status analysis ports.]
Figure 9‐5 Mixed Class‐Based and Module‐Based Testbench
9.2 Module‐Based Assertion Monitor
Our first modules in testbenches example demonstrates a module‐based
assertion monitor that serves two roles. First, it is a verification component
whose purpose is to determine whether the DUT is functioning as intended.
Second, our module‐based assertion monitor is a verification component
whose purpose is to convert pin‐level activity into a sequence of transactions
that other verification components within a testbench can use—such as a
coverage collector.
What makes our module‐based monitor unique is that we use SystemVerilog
properties and sequences to identify and verify events occurring on the bus
instead of an FSM modeled with procedural code. Next, our module‐based
monitor uses an analysis port to broadcast information to any subscribing
verification components within the testbench. Finally, our module‐based
assertion monitor uses another analysis port to transfer error status
information directly to a test controller, which then can take appropriate
actions related to the specific error it detects.
9.2.1 Class‐Based and Module‐Based Integrated Testbench
Figure 9‐5 illustrates a mixed class‐based and module‐based integrated
testbench example. Our module‐based assertion monitor is instantiated
within the testbench top.sv module (as demonstrated by the following
code), while all our class‐based components are instantiated within the
tb_env class (which is an extension of avm_env).
26 module top;
27
28 import avm_pkg::*;
29 import tb_tr_pkg::*;
30 import tb_env_pkg::*;
31
32 clk_rst_if clk_rst_bus();
33
34 clock_reset clock_reset_gen( clk_rst_bus );
35
36 pins_if #(.DATA_SIZE(8),.ADDR_SIZE(8))
37 nonpiped_bus (
38 .clk( clk_rst_bus.clk ),
39 .rst( clk_rst_bus.rst)
40 );
41
42 // class-based components within environment
43
44 tb_env env;
45
46 // module-based component
47
48 tb_monitor_mod tb_monitor(
49 .bus_if( nonpiped_bus )
50 );
51
52 initial begin
53 env = new( nonpiped_bus,
54 tb_monitor.cov_ap,
55 tb_monitor.status_ap );
56 fork
57 clock_reset_gen.run;
58 join_none
59 env.do_test;
60 $finish;
61 end
62
63 endmodule
file: cookbook/09_modules/01_monitor_sv/top.sv
We use a pin‐level SystemVerilog interface (as previously discussed in Section
5.4) to establish a connection between the driver, responder, and our monitor,
as illustrated in Figure 9‐6.
[Figure: the driver, the responder, and the module‐based assertion monitor all connect through the same SystemVerilog interface.]
Figure 9‐6 SystemVerilog Interface
[Figure: the monitor's coverage analysis port feeds the coverage collector, and its error status analysis port feeds the test controller.]
Figure 9‐7 AVM Analysis Ports
Upon detecting a nonpipelined bus error, the error status analysis port
broadcasts the specific error condition via an error status transaction object.
The error status transaction class, tb_status, is an extension of the avm_transaction base class. It includes an enum that identifies the various types of bus errors and a set of methods to uniquely identify the specific error detected.
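The tb_status source is not reproduced in this excerpt; a minimal sketch, assuming hypothetical enum literals and matching the setter names used by the assertion action blocks later in this chapter, might look like:

```systemverilog
// Hypothetical sketch of the error status transaction class.
class tb_status extends avm_transaction;
  typedef enum { ERR_TRANS_RESET, ERR_TRANS_INACTIVE,
                 ERR_STABLE_SEL,  ERR_STABLE_ADDR } err_t;
  err_t m_err;

  function void set_err_trans_reset();
    m_err = ERR_TRANS_RESET;
  endfunction
  function void set_err_stable_sel();
    m_err = ERR_STABLE_SEL;
  endfunction
  // ...one setter per error type, called from the assertion else clauses
endclass
```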
50 p_nonpiped_bus = nonpiped_bus;
51 p_cov_ap = cov_ap;
52 p_status_ap = status_ap;
53
54 p_driver = new("driver ", this);
55 p_responder = new("responder", this);
56 p_coverage = new("coverage ", this);
57 p_stimulus = new("stimulus ", this);
58
59 p_status_af = new("status_fifo");
60 endfunction
61
file: cookbook/09_modules/01_monitor_sv/tb_env.sv
The environment class constructor constructs all the other class‐based components needed to create the testbench.
The final connection between our module‐based monitor and the class‐based driver and responder verification components is established within the connect method of the environment class.
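As a sketch, assuming the member names visible in the constructor above together with hypothetical export names on the class‐based components, that connect method might resemble:

```systemverilog
// Hypothetical connect(): wire the pin-level interface and the
// module-based monitor's analysis ports to class-based components.
virtual function void connect();
  // driver and responder share the pin-level virtual interface
  p_driver.m_bus_if    = p_nonpiped_bus;
  p_responder.m_bus_if = p_nonpiped_bus;
  // monitor's coverage analysis port feeds the coverage collector
  p_cov_ap.connect(p_coverage.analysis_export);
  // monitor's status analysis port feeds the test controller's fifo
  p_status_ap.connect(p_status_af.analysis_export);
endfunction
```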
Similarly, in our testbench example, the class‐based coverage collector and
test controller components subscribe to our module‐based monitor’s coverage
analysis ports within the environment’s connect method. This completes the
connection of our module‐based monitor to our AVM class‐based
components.
9.2.2 Module‐Based Assertion Monitor Implementation
To simplify writing our assertion (and improve clarity), we create modeling
code within our monitor to map the bus sel and en control signals to our
previously defined conceptual states. We then write a set of assertions to
detect protocol violations by monitoring illegal bus state transitions related to
these conceptual states.
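As a sketch, this mapping can be written as continuous assignments (bus_reset, bus_inactive, and bus_start appear in the monitor’s properties; bus_active and bus_error are assumed names):

```systemverilog
// Map the control signals onto the conceptual states of Figure 9-2,
// using the monitor's modport signal names.
wire bus_reset    =  bus_if.monitor_mp.rst;
wire bus_inactive = !bus_if.monitor_mp.sel && !bus_if.monitor_mp.en;
wire bus_start    =  bus_if.monitor_mp.sel && !bus_if.monitor_mp.en;
wire bus_active   =  bus_if.monitor_mp.sel &&  bus_if.monitor_mp.en;
wire bus_error    = !bus_if.monitor_mp.sel &&  bus_if.monitor_mp.en;
```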
Now we are ready to write assertions for our nonpipelined bus requirements
defined in Section 9.1.3.
90 // ---------------------------------------------
91 // REQUIREMENT: Bus legal states
92 // ---------------------------------------------
93
94 property p_reset_inactive;
95 @(posedge bus_if.monitor_mp.clk)
96 disable iff (bus_reset)
97 $past(bus_reset) |-> (bus_inactive);
98 endproperty
99 assert property (p_reset_inactive) else begin
100 status = new();
101 status.set_err_trans_reset();
102 status_ap.write(status);
103 end
104
105 property p_valid_inact_trans;
106 @(posedge bus_if.monitor_mp.clk)
107 disable iff (bus_reset)
108 ( bus_inactive) |=>
109 (( bus_inactive) || (bus_start));
110 endproperty
111 assert property (p_valid_inact_trans) else begin
112 status = new();
113 status.set_err_trans_inactive();
114 status_ap.write(status);
115 end
116
117 property p_valid_start_trans;
Assertion a_state_reset_inactive states that after a reset, the bus must be
initialized to an INACTIVE state (which means the sel and en signals are de‐
asserted). Similarly, we write assertions for each natural‐language requirement listed in Section 9.1.3.
Reporting errors through the status analysis port, rather than halting simulation inside the monitor, enables us to support advanced testbench features (such as error injection) without modifying the monitor.
Our final set of requirements (defined in Section 9.1.3) specifies that the bus
control, address, and write data signals must remain stable between a bus
START and bus ACTIVE state transition.
150 // ---------------------------------------------
151 // REQUIREMENT: Bus must remain stable
152 // ---------------------------------------------
153
154 property p_bsel_stable;
155 @(posedge bus_if.monitor_mp.clk)
156 disable iff (bus_reset)
157 (bus_start) |=> ($stable(bus_sel));
158 endproperty
159 assert property (p_bsel_stable) else begin
160 status = new();
161 status.set_err_stable_sel();
162 status_ap.write(status);
163 end
164
165 property p_baddr_stable;
166 @(posedge bus_if.monitor_mp.clk)
167 disable iff (bus_reset)
168 (bus_start) |=> $stable(bus_addr);
169 endproperty
170 assert property (p_baddr_stable) else begin
171 status = new();
172 status.set_err_stable_addr();
173 status_ap.write(status);
174 end
175
176 property p_bwrite_stable;
177 @(posedge bus_if.monitor_mp.clk)
178 disable iff (bus_reset)
179 (bus_start) |=> $stable(bus_write);
180 endproperty
181 assert property (p_bwrite_stable) else begin
182 status = new();
183 status.set_err_stable_write();
184 status_ap.write(status);
185 end
186
187 property p_bwdata_stable;
188 @(posedge bus_if.monitor_mp.clk)
189 disable iff (bus_reset)
190 (bus_start) && (bus_write) |=>
191 $stable(bus_wdata);
192 endproperty
193 assert property (p_bwdata_stable) else begin
The psize local variable defined in the sequence above increments each time
a new data word is transferred for a given bus transaction burst. In the next
code sample, you’ll see that the burst size and type (for example, read or
write), is then passed to the build_tr function.
The build_tr function broadcasts the detected burst size and type
information through the module‐based assertion monitor’s coverage
transaction analysis port, which any subscribing verification component can
then use within the testbench (in our case, that might be a test controller or
coverage collector).
9.3 Bus Functional Model (BFM)
One of our goals, in terms of BFM reuse, is to increase productivity by
leveraging previously validated complex code with no modifications. Hence,
our second modules in testbenches example demonstrates how to integrate an
existing module‐based BFM into a complete testbench with various AVM
class‐based verification components. For this example, we replace the class‐
based driver (used in our previous example shown in Figure 9‐5) with a
module‐based BFM and introduce a class‐based BFM wrapper, which
communicates between our testbench’s stimulus generator and the BFM
tasks, as shown in Figure 9‐8.
[Figure: the testbench of Figure 9‐5 with the class‐based driver replaced by a module‐based BFM plus a class‐based BFM wrapper between the stimulus generator and the BFM.]
Figure 9‐8 Testbench with Module‐Based BFM
9.3.1 Module‐Based BFM Testbench Integration
In general, a BFM consists of a set of Verilog tasks—often encapsulated in a
module. Each task performs a specific action (for example, bus read or bus
write) and is responsible for driving the appropriate bus controls and timing
defined by the protocol onto our pin‐level nonpipelined bus.
For our BFM example, we introduce a class‐based BFM wrapper (shown in
Figure 9‐8), whose role is to accept a stream of transactions from a stimulus
generator, interpret the type of transaction (for example, a write or a read),
and then call the appropriate BFM task to perform the bus’ pin‐level action.
Recall that our goal is to directly reuse a given module‐based BFM within our
AVM testbench without any modifications. However, our class‐based BFM
wrapper needs to access the appropriate tasks embedded in the module‐
based BFM to drive the bus. To solve this problem without introducing a new
interface into the module‐based BFM, we create a bfm_xref class to form a
binding mechanism between the module‐based BFM and our class‐based
transactor. The bfm_xref class is then encapsulated within an interface,
which is passed into our class‐based BFM wrapper.
27
28 interface bfm_if (input rst);
29
30 parameter int DATA_SIZE = 8;
31 parameter int ADDR_SIZE = 8;
32
33 modport master_mp (
34 input rst,
35 input bfm_xref_h
36 );
37
38 class bfm_xref;
39 virtual task bfm_inactive();
40 top.tb_bfm.bfm_inactive();
41 endtask
42 virtual task bfm_write(
43 input bit [ADDR_SIZE-1:0] addr,
44 input bit [DATA_SIZE-1:0] data);
45 top.tb_bfm.bfm_write( addr, data);
46 endtask
47 virtual task bfm_read(
48 input bit [ADDR_SIZE-1:0] addr);
49 top.tb_bfm.bfm_read( addr );
50 endtask
51 endclass
52
53 bfm_xref bfm_xref_h = new;
54
55 endinterface
56
file: cookbook/09_modules/mod_bfm_sv/bfm_if.sv
This technique lets us isolate the hierarchical references to our module‐based
BFM tasks from our class‐based wrapper BFM implementation.
Our class‐based BFM wrapper accesses the module‐based BFM tasks through the interface’s bfm_xref class handle.
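The wrapper’s main loop might be sketched as follows; apart from bfm_xref_h and the BFM task names, the member names here are assumptions:

```systemverilog
// Hypothetical main loop of the class-based BFM wrapper: pull a
// transaction from the stimulus stream and dispatch to a BFM task.
task run();
  forever begin
    m_request_fifo.get(m_req);   // transaction from the generator
    if (m_req.m_kind == WRITE)
      m_bfm_if.bfm_xref_h.bfm_write(m_req.m_addr, m_req.m_data);
    else
      m_bfm_if.bfm_xref_h.bfm_read(m_req.m_addr);
  end
endtask
```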
For completeness, we show the bfm_write task contained in our module‐
based BFM, which is called from our class‐based BFM wrapper:
56 task bfm_write;
57 input bit[7:0] addr;
58 input bit[7:0] data;
59 begin
60 string m_trans_str;
61
62 driver_mp.sel <= 1;
63 driver_mp.en <= 0;
64 driver_mp.addr <= addr;
65 driver_mp.write <= 1;
66 driver_mp.wdata <= data;
67 @(posedge driver_mp.clk);
68
69 $sformat( m_trans_str ,
70 "Bus type = WRITE , data = %d , addr = %d" ,
71 driver_mp.wdata , driver_mp.addr );
72 avm_report_message(" BFM sending " ,
73 m_trans_str );
74 driver_mp.sel <= 1;
75 driver_mp.en <= 1;
76 @(posedge driver_mp.clk);
77 end
78 endtask
file: cookbook/09_modules/mod_bfm_sv/bfm_mod.sv
Finally, the next code sample shows the bfm_read task contained in our
module‐based BFM, which is called from our class‐based BFM wrapper.
34 task bfm_read;
35 input bit[7:0] addr;
36 begin
37 string m_trans_str;
38
39 driver_mp.sel <= 1;
40 driver_mp.en <= 0;
41 driver_mp.addr <= addr;
42 driver_mp.write <= 0;
43 @(posedge driver_mp.clk);
44
45 driver_mp.sel <= 1;
46 driver_mp.en <= 1;
47 $sformat( m_trans_str ,
48 "Bus type = READ , data = %d , addr = %d" ,
49 driver_mp.rdata , driver_mp.addr );
50 avm_report_message(" BFM receiving " ,
51 m_trans_str );
52 @(posedge driver_mp.clk);
53 end
54 endtask
file: cookbook/09_modules/mod_bfm_sv/bfm_mod.sv
9.4 Summary
This chapter showed the process of integrating modules in testbenches. First, we
showed how to form a complete testbench consisting of a module‐based
assertion monitor along with various AVM class‐based verification
components. Then we showed how to reuse a legacy module‐based BFM to
form a complete testbench along with various AVM class‐based verification
components.
10
Randomization
Verification engineers write tests that provide stimuli to a design to test
different pieces of functionality and check that it responds as expected. Each
of these tests takes time to write, debug, and run. Once a test is working, it
becomes part of a regression suite, and the user moves on to write a new test,
which also takes time to write, debug, and run. Such an approach requires the
test writer to create an explicit test for each feature of the design, so that more
tests (and more time) are required as more features are added to the design.
Rather than require the verification engineer to write tests to check each
feature individually, constrained‐random verification (CRV) effectively allows a
single test to check multiple features. With this methodology, each “test”
actually describes a set of possible scenarios, and the simulator itself chooses
a specific scenario for each invocation. This can be an extraordinarily
powerful verification methodology, but it is one that is not supported well by
either Verilog or VHDL. SystemVerilog has been designed specifically to
support this methodology.
10.1 Overview of CRV Methodology
In this section, we look at elements of a CRV methodology, including an
introduction to directed and constrained‐random testing. Then we’ll look at
how we use these building blocks to produce a CRV methodology.
10.1.1 Directed Testing
Because of the straightforward nature of directed tests, they are fairly easy to
write. Unfortunately, by definition, they only address the explicit scenarios
predicted by the verification engineer. As designs get more complex, it
becomes harder to write directed tests to cover all of the possible scenarios
and corner cases, both because the expected response becomes harder to
predict and because the corner cases become harder to hit, if they can be
predicted at all. Such tests can be improved somewhat by adding
randomization, such as writing random values to a memory in addition to, or
instead of, a walking‐ones pattern. These tests are still inherently directed.
Data that is irrelevant to the features being tested may still use random
number generators as filler for content. The Verilog system functions $random
and $dist_uniform provide a simple way of filling bits with random
numbers. You can also use them to randomize delays or repetition counts.
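For example, a directed test might randomize its filler data and delays along these lines (a sketch; the seed and range values are arbitrary):

```systemverilog
integer seed = 42;       // seed variable; updated in place by each $dist_uniform call
logic [31:0] filler;

initial repeat (4) begin
  filler = $random;                  // random filler data
  #($dist_uniform(seed, 1, 10));     // random delay between 1 and 10 time units
end
```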
At some point, adding significant amounts of randomization to a directed
testing methodology will not sufficiently address the testing issues. The
procedural nature of the test code itself limits its effectiveness and limits the
quantity of new tests you can derive from it. Starting with such a
methodology precludes the possibility of building a truly randomized
environment in which more corner cases can be exercised and from which
new tests can be derived with a minimum of coding changes.
10.1.2 Constrained‐Random Verification (CRV)
At first glance, the idea of feeding totally random stimuli into a design seems
inefficient, and it would be if there were no constraints on the random
numbers the generators are allowed to produce. The idea behind CRV is that
both the data and the transactions generated by the test are chosen at random
from a set of valid, or constrained, possibilities.
For example, instead of having to write many different tasks for each
operation, only one command task is created and the operation is encoded as
data that is passed as an argument to a single task, as you’ll see in the
example below.
begin
write(address[1],data[1]);
write(address[2],data[2]);
read(address[3],data[3]);
write(address[4],data[4]);
end
command(op[i],address[i],data[i]);
Eventually, all the arguments to the command task will be encapsulated in a
single object with different fields for all the choices that can be randomized.
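Such an object might look like this (a sketch; the type and field names are illustrative):

```systemverilog
typedef enum {READ, WRITE, NOP} op_t;

// One object carries every randomizable choice for a single command
class command_xact;
  rand op_t     m_op;        // the operation, encoded as data
  rand bit [7:0] m_address;
  rand bit [7:0] m_data;
endclass
```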
The task of the verification engineer is to take what has been documented as
legal input to a design and turn that information into a set of constraints. Note
that errors in the input stimulus are legal if the specification for the design
says that the design is supposed to handle those errors.
Writing constrained‐random tests and testing the design is only part of a
complete verification methodology. Functional coverage reporting of the
randomly generated tests against a test plan is also necessary to know that the
design has been adequately verified.
10.1.3 Directing Tests from Constrained Random
In a constrained‐random environment, a directed test is achieved by tightly
constraining the choices so that a single scenario is exercised. Thus, a “sanity
test” in this environment can be achieved by constraining the test to generate
a single write to a specified address, followed by a single read from the same
address. Once this sanity test is validated, you have proved that the read/
write interface works properly. You then remove the constraints, allowing a
full broad‐spectrum test to occur. In this test, all of the registers are read/
written in random order, with random data, in all different modes. When a
problem occurs, it is easy to add new constraints to the test to focus on the
particulars that caused the problem so you can debug it.
This is not to say that directed tests are not useful in some circumstances. On
the contrary, for certain scenarios it might be easier to write a directed test to
guarantee that the design reaches a certain state quickly, rather than rely on
random behavior to achieve the desired results. For example, when verifying
an interrupt mask register, the test must set each mask bit one at a time,
generate the corresponding interrupt, and then clear the mask bit. The chance
of such a behavior being achieved randomly is practically zero; so, in this
instance, it makes much more sense to write a directed test.
10.1.4 Basics of CRV Technology
The technology behind constrained‐random verification can be divided into
two basic components. One component is the actual random number generation
(RNG) process. Although this is largely an academic discussion, a few
properties of the RNG process are worth understanding, as described below.
The other component of this technology is the actual constraint solver. The
solver determines what set of possible values, if any, would satisfy the given
constraints. This set of possible values is called the solution space. In practice,
the solver computes the solution space first and then uses the random
number generator to select one of those solutions.
Random Number Generation. The process of random number generation on
a computer is not really random at all. In fact, it is a predictable sequence of
numbers, called a pseudo‐random number. The generator appears to be random
because the sequence takes so long before it repeats. Integer RNG
implementations represent a sequence with an internal state register,
typically 96 bits wide. Thus, after generating 2^96 32-bit values, the sequence will
repeat. Generating a random value for a variable having a greater number of
bits than the internal state register means that the sequence will repeat before
reaching all possible values. The RNG process steers a number of random
facets of the testbench, including constraints and random procedural
statements. For now, let’s consider the system function $urandom that
SystemVerilog provides to access the RNG directly. The example below
displays a sequence of five pseudo‐random numbers.
int A;
initial repeat (5)
begin
A =$urandom;
$display(A);
end
Without any modifications to the code, the RNG delivers the same sequence
of random numbers each time the simulation executes. This is an important
property because, if a problem is found in the DUT, you can modify the
design and verify it using the identical test.
int A;
initial
begin
A=$urandom(`mySEED);
repeat (5)
begin
A =$urandom;
$display(A);
end
end
You can change the value mySEED as needed to generate a different sequence
of five pseudo‐random numbers.
As designs become more complex, one sequence of generated random
numbers might become disturbed by another generated sequence of random
numbers. The sequence of random numbers for A will be disturbed by the
additional call to $urandom, in the code fragment below.
int A, B;
initial repeat (5)
begin
A =$urandom;
B =$urandom;
$display(A);
end
If the original example produced the sequence 6, 35, 41, 3, 27 for A, the new
sequence will start as 6, 41, 27, and so forth because the generated numbers,
35 and 3, will become part of the sequence for B.
One possible way to avoid this disturbance is to manually seed each call to
$urandom with a different seed variable. Each variable must also have a
unique value; otherwise each RNG would generate the same sequence of
random values.
Another way to address this issue is by defining a scheme for automatically
generating independent seeds for each RNG. This scheme, together with the
reproducible sequences of random values, gives the system random stability.
SystemVerilog defines such a scheme: each static thread gets its own seed
derived from its position in the design hierarchy. Within each static thread,
dynamic child threads get a derived
seed in the order that the threads are procedurally started. Thus, each thread,
whether it is static or dynamic, has its own seed. Calls to $urandom in each
thread produce a stable sequence of random values as long as the relative
thread relationships are maintained with declarative ordering for static
threads and procedural ordering for dynamic threads.
In the example below, the sequence of random numbers generated for A is
not disturbed because the two static threads formed by the initial blocks are
independent and have their own unique random seeds.
int A, B;
initial repeat (5)
begin
A =$urandom;
$display(A);
end
initial repeat (5)
begin
B =$urandom;
end
The size of a solution space is the number of combinations of values of the
jointly constrained random variables that satisfy their constraints. The solver
will create an order to the possible solutions and use the RNG to pick one of
the ordered solutions.
For example, a single 8‐bit scalar random variable, A, has the constraint 2<A
&& A<9. Its solution space has six possible solutions (3–8), and we can reduce
the size of that random variable to four bits.
Figure 10-1 Single Variable Constraint: the unconstrained 8-bit variable A ranges over 0-255 (256 solutions); the constraint 2<A && A<9 reduces the solution space to 6 solutions (3,4,5,6,7,8).
Now suppose there are two 8‐bit random variables, A and B. In the absence of
any constraints, there are two solution spaces, each with 256 possible values
for A and B. When we add the constraint that A!=B, we create just one
solution space with 65,280 solutions. (The number of combinations of A and B is
2^(8+8) = 65,536, minus the 256 solutions where A==B.) The solution space is
effectively that of a single 16-bit random variable.
Figure 10-2 Constraint with Multiple Variables: A and B each range over 0-255 (256 solutions apiece); the constraint A != B merges them into a single solution space.
The amount of time needed to solve a constraint increases in direct
correlation to the total number of interrelated bits of random variables that
must be solved for one solution space. When writing constraints, it is best to
keep the total number of bits of related random variables down to the
absolute minimum to maximize run time performance. Try to break up the
interdependencies of random variable constraints by randomizing in stages.
For example, if you want to generate a sorted list of 100 random variables,
you can create an iterative constraint along the lines of A[i] < A[i+1]. However,
this would create a single jointly constrained random variable of 3200 bits, most
likely unsolvable in
any reasonable amount of CPU time. A better solution is to generate 100
independent 32‐bit random numbers and then sort them procedurally
afterwards.
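The staged approach might look like this (a sketch; the class name is illustrative):

```systemverilog
class sorted_list;
  rand int unsigned m_values[100];   // 100 independent 32-bit random numbers

  function void post_randomize();
    m_values.sort();                 // sort procedurally instead of constraining A[i] < A[i+1]
  endfunction
endclass
```

Each element is solved independently, so the solver never sees a 3200-bit joint solution space.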
10.2 Adding Randomization to Classes
In SystemVerilog, random variables, random number generators, and
constraints are integrated into the object‐oriented class system. The function
randomize() is a built‐in method that is available in all classes.
SystemVerilog uses the rand modifier to distinguish the random variables
from the non‐random variables. A constraint is added as a named list of
expressions, declared using the constraint keyword.
class Packet;
  rand bit [7:0] m_address;
  rand bit [31:0] m_data;
  constraint addr_range {
    m_address < 132;
  }
endclass : Packet
The RNG and constraint solver are invoked by using a built‐in method of the
class, randomize(), which can be called after the class has been constructed.
In this example, calling P.randomize() will randomize m_address using the
constraint addr_range, and it will randomize m_data with no constraints.
Packet P;
initial begin
  P = new();
  if (!P.randomize()) $stop;
  $display(P.m_address);
end
Always check the return value from randomize(). If the constraints placed on
the random variables in a class have no solution, randomize() returns zero. It
is critical to check the return value so that any problems can be reported to the
user. An immediate assertion can be useful for reporting these problems.
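For example, an immediate assertion can wrap the call (a minimal sketch):

```systemverilog
initial begin
  Packet P = new();
  assert (P.randomize())
    else $fatal(1, "Packet constraints have no solution");
end
```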
10.3 Layering Constraints Using Inheritance
Constraints may be added or overridden when extending a class, just like
properties and methods. All of the existing constraints in the base class remain
in effect unless they are overridden. For example, assume the Packet class
in the previous example is extended:
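The extended class is not reproduced in this excerpt; a representative sketch, assuming a word-alignment constraint named word_align, might be:

```systemverilog
class Word_Packet extends Packet;
  // addr_range (inherited from Packet) and word_align both apply
  constraint word_align { m_address[1:0] == 2'b00; }
endclass : Word_Packet
```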
When randomizing an instance of Word_Packet, both the addr_range and
word_align constraints apply. When overriding a random property, always
ensure that existing constraints on the parent property are also overridden.
Existing constraints have no effect on any properties added to a class by
extension. A constraint only applies to the properties within the scope of the
class hierarchy where they are defined. Suppose, for example, Word_Packet
is defined as:
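The definition is not reproduced in this excerpt; a sketch of what re-declaring the random property might look like:

```systemverilog
class Word_Packet extends Packet;
  rand bit [7:0] m_address;   // shadows Packet::m_address; addr_range no longer applies here
  constraint word_align { m_address[1:0] == 2'b00; }
endclass : Word_Packet
```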
The only constraint on m_address in the derived class is word_align.
super.m_address is still in the base class with the addr_range constraint in
effect.
Sometimes it might be helpful to fix the name of the most derived class first
and then work backwards through the class hierarchy: document the class
name Word_Packet as the publicly available type. In cases where additional
constraints are not needed, extend the class without providing a body.
10.4 Dynamically Modifying Constraints
Constraints may refer to non‐random variables that may also be either
properties of the class or external global variables. As the test executes,
modifying the variable dynamically modifies the constraint.
class Packet;
  rand addr_t m_address;
  rand data_t m_data;
  addr_t high_address = '1;
  addr_t low_address = 0;
  constraint addr_range {
    m_address <= high_address;
    m_address >= low_address;
  }
endclass : Packet

Packet P;
initial begin
  P = new();
  P.high_address = 10;
  assert (P.randomize()) else $stop; // range is 0-10
  $display(P.m_address);
  P.high_address = '1;
  P.low_address = 250;
  assert (P.randomize()) else $stop; // range is 250-255
  $display(P.m_address);
end
Constraints may also be procedurally turned on or off by constraint name.
The addr_range constraint may be turned off by calling
P.addr_range.constraint_mode(0).
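For example, continuing with the Packet handle P from above:

```systemverilog
P.addr_range.constraint_mode(0);     // disable addr_range
assert (P.randomize()) else $stop;   // m_address is now unconstrained
P.addr_range.constraint_mode(1);     // re-enable it
```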
10.5 Over Constraining
Given enough time, the constraint solver will find the solution space, if one
exists. As mentioned earlier, the amount of CPU time increases in direct
correlation to the total number of interrelated bits of random variables that
must be solved for one solution space.
You could organize the random variables into separate classes and then
procedurally randomize each class, using the results of the first
randomization as state variables for the second. However, this approach
defeats the purpose of the object‐oriented model and makes it more difficult
to maintain. SystemVerilog provides a few alternatives that keep all of the
random variables in one class.
Variables that are normally just derivatives of other random variables should
use function calls. Function calls embedded in a constraint provide a way to
separate the solution spaces of random variables. The return value of a
function call becomes a state variable. That means the random variables that
are arguments to the function are solved and selected before calling the
function.
class Packet;
  rand bit [7:0] m_address;
  rand bit [31:0] m_data;
  rand bit m_data_parity;
  constraint addr_range {
    m_address < 132;
  }
  constraint gen_data_parity {
    m_data_parity == even_parity(m_data);
  }
  function bit even_parity(bit [31:0] d);
    return (~^d);
  endfunction
endclass : Packet
Since the function call is inside a constraint, the constraint can be turned on or
off. If turned off, m_data_parity returns to being an independent random
variable.
Variables that are always just derivatives of other random variables should
use the post_randomize() method. SystemVerilog automatically calls a user‐
defined pre_randomize() and post_randomize() method as part of the
execution of the randomize() method. In the example below, m_data_parity
is no longer a random variable and is set by the post_randomize() method.
class Packet;
  rand bit [7:0] m_address;
  rand bit [31:0] m_data;
  bit m_data_parity;
  constraint addr_range {
    m_address < 132;
  }
  function void post_randomize();
    m_data_parity = ~^m_data;
  endfunction
endclass : Packet
10.5.1 Implication
In the example below, there is an implied constraint in the enum variable
kind where m_op must be one of READ, WRITE, or NOP.
class Packet;
  rand addr_t m_address;
  rand bit [31:0] m_data;
  rand kind m_op;
  constraint data_range {
    (m_op == READ) -> m_data inside {[1:100]};
    (m_op == WRITE) -> m_data inside {[101:300]};
    (m_op == NOP) -> m_data inside {0};
  }
endclass : Packet
10.5.2 Distributions and Solving Order
class Packet;
  rand addr_t m_address;
  rand bit [31:0] m_data;
  rand kind m_op;
  constraint data_range {
    (m_op == READ) -> m_data inside {[1:100]};
    (m_op == WRITE) -> m_data inside {[101:300]};
    (m_op == NOP) -> m_data inside {0};
  }
  constraint op_dist { m_op dist {READ := 2, WRITE := 2, NOP := 1}; }
endclass : Packet
As a general rule, it is a good idea to use a distribution constraint on only one
random variable in a set of interrelated random variables. It is difficult to
calculate the probability of choosing a value for a random variable when there
are multiple, conflicting distribution constraints. Distribution constraints are
used after the solution space is created and are not guaranteed to be satisfied.
10.6 Set Membership
The inside operator is a powerful construct for dynamically creating sets of
constrained values. The inside operator tests for set membership. A range of
values is a kind of set and we use the inside operator here to test whether a
value is in a valid range. We use it in two ways. If the size of the memory
array is zero (m_addr_q.num() == 0), then we constrain m_op to be a NOP or a
WRITE. If the memory size is not zero and the operation is a READ, then we
constrain m_addr to be one of the addresses already written. The example
below shows how to constrain a read transaction to only those addresses that
have been written.
module memq;
typedef bit [7:0] addr_t;
typedef enum {READ,WRITE,NOP} op_t;

class Packet_Xaction;
  rand addr_t m_addr;
  rand op_t m_op;
  addr_t m_addr_q[addr_t]; // associative array of addresses already written
  constraint read_only_written {
    if (m_addr_q.num() == 0)
      m_op inside {NOP,WRITE};
    else
      m_op == READ -> m_addr inside {m_addr_q};
  }
  function void post_randomize();
    if (m_op == WRITE) m_addr_q[m_addr] = m_addr;
  endfunction
endclass

Packet_Xaction X = new;
10.7 Dynamically Sized Arrays
Normally, dynamically sized arrays are treated as fixed-size arrays when
they are declared as random variables. In certain cases, if the size of the
dynamic array is referenced in a constraint, the size is treated as a random
variable. The constraint solver solves for the size before resizing the array, and
the solver applies any remaining constraints on the individual elements,
treating the size as a state variable at that point.
class array_pre;
  rand Packet array[];
  constraint size_range { array.size() inside {[1:10]}; } // size is solved as a random variable
endclass : array_pre
As an alternative, the following example uses the post_randomize() method
to construct the dynamic array.
class array_post;
  Packet array[]; // Note: not rand!
  rand int m_array_size;
  rand enum {kind_LITTLE, kind_MEDIUM, kind_BIG} m_kind;
  constraint data_range
  {
    (m_kind == kind_LITTLE) -> m_array_size inside {[1:10]};
    (m_kind == kind_MEDIUM) -> m_array_size inside {[11:19]};
    (m_kind == kind_BIG) -> m_array_size inside {[20:23]};
  }
  function void post_randomize();
    array = new[m_array_size]; // construct the array after the size is solved
    foreach (array[i]) array[i] = new;
  endfunction
endclass : array_post
10.8 Per Design/Per Test Configuration
A package provides a handy mechanism for collecting global definitions that
rarely change and for swapping in a set of constants on a per test basis. Below
is an example that uses a package to set knobs.
package Config;
parameter int width=16;
parameter int num_tests=20;
endpackage
You can substitute another Config package, like the one shown below, for the
Config package above.
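The alternative package is not shown in this excerpt; it would declare the same parameters with different values (the values below are purely illustrative):

```systemverilog
package Config;
  parameter int width = 64;        // hypothetical per-test value
  parameter int num_tests = 1000;  // hypothetical per-test value
endpackage
```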
10.9 Design Constraints
The design requires certain constraints, called design or valid constraints, to
function properly. These constraints must never be turned off or overridden.
They represent physical limitations or errors that the design is not able to
handle. A complementary or one-hot encoded input might be such a case.
Give design constraints a unique prefix to identify them as being fixed. A
suggested constraint block prefix is: assert_<constraint_name>.
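For instance, a one-hot select input could be protected with a prefixed constraint block (a sketch; the class and signal names are illustrative):

```systemverilog
class bus_stim;
  rand bit [3:0] m_sel;
  // design constraint: the interface physically requires a one-hot select
  constraint assert_sel_one_hot { $onehot(m_sel); }
endclass
```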
10.10 Class Factories
Most testbench environments need to generate massive amounts of random
data and repeatedly call the constraint solver. Repeatedly randomizing a
single instance of a class and then copying it has some key benefits. Each time
a new class instance is constructed, the solution space must also be
constructed before the call to randomize() can begin. Instead of calling new()
many times to create new objects and randomizing each new object, call
randomize() repeatedly on a single instance of that class and make copies of
the randomized object to send it to other portions of the test. Calling
randomize() repeatedly on the same instance might also save a lot of extra
computation. The procedural process of creating class instances is part of an
object‐oriented concept called a class factory.
Make the single handle to the class being randomized statically available to
the rest of the test environment. A static handle is globally visible to the entire
testbench. If a class has dynamically modifiable (state-dependent) constraints,
the handle can be modified from anywhere in the testbench. Typically,
random generators are distributed throughout the testbench, and all of the
generators can be controlled from a central location. In the example below, the
stimgen module creates a static handle, and that module is instantiated twice
in the root module top. Another root module, test, controls all the stimulus
generators from the initial block.
module top;
  addr_t a1,a2;
  data_t d1,d2;
  stimgen t1(a1,d1);
  stimgen t2(a2,d2);
  DUT U1(a1,a2,d1,d2); // definition not shown
endmodule : top
module stimgen(output addr_t a, output data_t d);
  Packet handle = new; // static handle, visible to the entire testbench
  // randomization loop driving a and d not shown
endmodule : stimgen
module test;
initial begin
top.t1.handle.high_address = 10;
top.t1.handle.low_address = 5;
top.t2.handle.high_address = 5;
top.t2.handle.low_address = 1;
#10 $finish;
end
endmodule : test
The class factory, together with inheritance, makes the stimulus generator a
reusable piece of code. Instead of simply calling a static new constructor, call
a function to construct a class; this is an abstract factory. That function can
return an instance of the base class or any derived class. We can rewrite the
previous example to call an abstract factory that chooses between Packet and
Word_Packet:
Packet handle;
initial stimulus_generator(Abstract_Packet1());
Abstract_Packet1 may also be written as a method of a class in a different
class hierarchy from Packet, which can be constructed by another abstract
factory. The class instance returned by the abstract factory is called a factory
pattern, because it serves as the blueprint for all the copies of the classes
created by that class factory. This kind of structure also serves to separate the
generation of the pattern from the generation of stimulus, making both more
reusable.
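Abstract_Packet1 itself is not shown in this excerpt; a representative factory function, assuming the Packet and Word_Packet classes from earlier, might be:

```systemverilog
function automatic Packet Abstract_Packet1();
  // choose which class to construct; the selection criterion here is illustrative
  if ($urandom_range(1)) begin
    Word_Packet wp = new;
    return wp;  // derived handle returned through the base-class type
  end else begin
    Packet p = new;
    return p;
  end
endfunction
```

Because the return type is the base class Packet, the caller randomizes and copies whatever pattern object the factory produced, without knowing its concrete type.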
10.11 Example of State‐Dependent Constraints
Even though constraints are declarative, they do not need to be viewed as
static. Because constraints can be based on expressions, it is possible to
declare a constraint that changes throughout the simulation based on the
value of specific variables. Consider the following:
class Xaction;
  int m_counter = 0;
  rand enum {READ, WRITE, NOP} m_busop;
  constraint startwriting {
    if (m_counter < 10)
      m_busop == WRITE;
    else
      m_busop dist {NOP := 1, READ := 2};
  }
endclass : Xaction
Similarly, you can write a constraint to explicitly change the constrained
values much like a finite state machine. The state variable is updated using
the pre_randomize() method as in the example below.
class myState;
  typedef enum {INIT, REQ, RD, WR /*...*/} State_t;
  rand State_t m_state = INIT;
  State_t prev_state;
  bit req;
  bit rdwr;
  constraint fsm {
    if (prev_state == INIT)
      { m_state == REQ; req == 1; }
    if (prev_state == REQ && rdwr == 1)
      m_state == RD;
    // ...
  }
  function void pre_randomize();
    prev_state = m_state; // copy state value before randomizing
  endfunction
endclass
10.12 The AVM Random Stimulus Generator
The AVM library contains a utility component called avm_random_stimulus,
which generates streams of randomized transactions.
41 class avm_random_stimulus
42 #(type trans_type=avm_transaction)
43 extends avm_named_component;
44
48 avm_blocking_put_port #( trans_type ) blocking_put_port;
file: libraries/systemverilog/avm/utils/avm_random_stimulus.svh
The class is parameterized with the type of transaction that will be generated
by instances of avm_random_stimulus. The component contains an
avm_blocking_put_port, which it uses to send out generated transactions.
Since the port has a blocking interface, it will use blocking puts to send
transactions.
The generated transaction type, given by the trans_type parameter, is derived
from avm_transaction. If the requested number of transactions is zero, which
is the default, the generator will run indefinitely or until it is stopped by
calling stop_stimulus_generation(), which sets m_stop to one.
Each iteration of the loop randomizes the transaction by calling
t.randomize(), which will cause all the randomizable fields to receive new
random values. The new values adhere to any constraints defined in the
transaction class. Next, the loop creates a clone of the transaction. A clone
object is an identical copy of the original object except that it occupies new
memory. The clone() function is a virtual function in the avm_transaction
base class, and our derived class for which this component is generating
randomized instances must contain an implementation of it. clone() is
expected to allocate new memory, copy the contents of the object from which
it is called into the new memory, and return a handle to it. Finally, the loop
sends the randomized, cloned transaction downstream using the blocking call
put().
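As an example of the clone() contract described above, a transaction class might implement it like this (a sketch; the class and field names are illustrative, and the clone() signature is assumed from the text):

```systemverilog
class my_xaction extends avm_transaction;
  rand bit [7:0] m_addr;
  rand bit [31:0] m_data;

  virtual function avm_transaction clone();
    my_xaction t = new;  // allocate new memory
    t.m_addr = m_addr;   // copy the contents of this object
    t.m_data = m_data;
    return t;            // return a handle to the copy
  endfunction
endclass
```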
At first glance, it might seem odd that generate_stimulus() takes an object
to randomize as an argument. The component class avm_random_stimulus is
parameterized with the type of transaction to randomize. Why would we
need to specify it again? Well, we don’t if we choose not to. The default value
for the argument t is null, and when t is null, new is called to allocate a new
object of type trans_type. However, if the argument t is not null, then the
passed‐in object is randomized (rather than allocating a new one). This
handling allows objects that are derived from trans_type to be passed in to
generate_stimulus(). Objects derived from trans_type may have different
constraints than the base type. This is a simple but powerful capability.
The fundamental tenets of the AVM are language independent. Building
components with well‐defined interfaces, using transaction‐level modeling
techniques, building testbenches as networks of verification components, and
so forth can be implemented in any language. We’ve chosen to implement
them in both SystemVerilog and SystemC because these are the dominant
design and verification languages today, and they will likely remain so for the
foreseeable future. Even though the AVM principles are the same no matter
what language you use for implementation, there are linguistic differences
that you should understand when writing testbenches. We will discuss these
differences in this chapter.
There are many reasons for choosing a particular language for building your
testbenches. Everything from available tool support and the technical skill set
of the development team to local cultural biases plays a role in language choice.
There is no one right answer for which language you choose. We are not
attempting to make any value judgements as to which language is better.
Instead, the goal of this chapter is to make you aware of the differences and
various issues associated with using either SystemC or SystemVerilog so you
can make your own choice.
First, we present a quick summary of some of the differences between
SystemC and SystemVerilog. We’ll discuss each of these differences in the
remainder of the chapter.
11.1 Object Model
Both SystemC and SystemVerilog are object‐oriented languages and support
classes. Each has a slightly different model of classes and objects.
11.1.1 Types of Objects
SystemVerilog has a variety of different objects you can use to contain
elements of a design or testbench. These include modules, program blocks,
interfaces, and classes. Modules are typically used for RTL code representing
hardware. Modules are statically elaborated. Interfaces, like modules, are also
statically elaborated. These contain collections of pins, modports, and tasks.
Interfaces are used to connect modules and to connect class‐based verification
components to modules (through virtual interfaces). Program blocks are not
hierarchical; they exist only at the top level. They are also statically
elaborated; that is, they come into existence prior to time zero. Finally, classes
are dynamic objects. Class instances are not created by the elaborator. Instead,
they are created and destroyed as simulation proceeds.
SystemC has only classes. Modules, channels, interfaces, and so on are all
created out of classes. Having all objects in a SystemC design based on a
single kind of object provides semantic uniformity across all objects.
11.1.2 Single vs. Multiple Inheritance
11.2 Object Support
Since classes are dynamic in both SystemC and SystemVerilog, both
languages provide static (declarative) and dynamic (run time) facilities for
managing objects.
The C++ object model provides syntax and semantics for comparison of
objects, I/O, and object copying (operator==(), operator<<(), and
operator=(), respectively). SystemVerilog does not provide these features
automatically. In the AVM these are emulated in avm_transaction by
requiring implementation of virtual functions convert2string() and
clone() and non‐virtual functions copy() and comp().
C++ provides multiple ways of passing data—pass by value, pass by
reference, and pass by pointer. SystemVerilog provides only pass by handle,
where a handle is a reference counted pointer.
11.3 Encapsulating Behaviors
SystemC and SystemVerilog have different ways for encapsulating behaviors.
The differences have to do with how each manages time.
Tasks. SystemVerilog has tasks. A task is much like a function except that it
can consume time. Like functions, tasks can pass values back through their
argument lists. Tasks can be used to build blocking behaviors. SystemC has no
direct equivalent to a task.
Tasks can be “forked,” spawned as separate threads that operate in parallel
with each other.
SystemC processes (methods and threads) can schedule events to be executed
in the future, including triggering their own execution. Methods and threads
can be scheduled for execution by the scheduler, but ordinary functions (in
SystemC) cannot be scheduled.
Although less frequently used, you can also spawn functions as threads using
sc_spawn(). Functions spawned as threads may consume time and will run
until they return or are killed.
11.4 Randomization
SystemVerilog provides excellent support for randomizing class members
and for defining independent multiple random number generators. Any class
member can be augmented with rand to indicate it can be randomized; the
randomize() method is available for all classes. Each thread, whether
statically defined or created dynamically, contains its own independent
random number generator.
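The notion of multiple independent random number generators can be sketched with the C++ standard library. stim_gen below is a hypothetical wrapper, not an AVM or SCV class: each instance owns a private stream, so draws from one generator never disturb another, and a repeated seed reproduces the same sequence.

```cpp
#include <random>

// Each stim_gen instance carries its own, independent random stream,
// analogous to each SystemVerilog thread owning its own generator.
struct stim_gen {
    std::mt19937 rng;   // this generator's private state
    explicit stim_gen(unsigned seed) : rng(seed) {}
    int next(int lo, int hi) {
        std::uniform_int_distribution<int> dist(lo, hi);
        return dist(rng);
    }
};
```

Seeding generators independently is what makes a test reproducible: rerunning with the same seeds replays exactly the same stimulus.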
In SystemC, much of this functionality is available through the SystemC
Verification library (SCV). SCV supports creation of multiple independent
random number generators and provides a constraint solver. However, to
randomize an object, the nature of the object must be described in a way to
allow the randomizer to understand it. SCV uses a technique called
introspection to allow objects to interrogate themselves about their types, sizes,
and so forth. Users must code this information using introspection wrappers,
an additional step that some consider overly verbose.
11.5 Instantiation and Elaboration
SystemVerilog has a pre‐execution step called elaboration. In that step, all of
the static components—modules, program blocks, and interfaces—are created
and initialized. When the first initial block starts executing, the entire design
is in place.
SystemC also has an elaboration step. However, since SystemC is built as a
runtime library in C++, and all objects are dynamically allocated classes,
elaboration is not as hidden as in SystemVerilog. Elaboration is accomplished
by chaining constructors. Constructors are chained when child components are
instantiated in the constructor of their parent. As an example, we’ll review the
top level of our memory testbench from Chapter 6.
The top‐level module declares a number of signals along with a number of
components.
class top : public sc_module
{
public:
  SC_HAS_PROCESS( top );

  top(sc_module_name name);

private:
  void run();

  sc_signal<ADDRESS_TYPE> address;
  sc_signal<DATA_TYPE> wr_data;
  sc_signal<DATA_TYPE> rd_data;
  sc_signal<bool> rw;
  sc_signal<bool> req;
  sc_signal<bool> ack;
  sc_signal<bool> err;

  sc_clock clk;
  sc_signal<bool> rst;

  mem_dut m_dut;
  tlm_transport_channel<mem_request, mem_response> m_channel;
  mem_bidirectional_driver m_driver;
  mem_monitor m_monitor;
  mem_bidirectional_stimulus m_stimulus;
  reset_gen m_reset_gen;
};
The constructor, shown below, does essentially two things: it instantiates all
of its child components (those declared in the class definition above), and it
connects them. As each child component is instantiated, its constructor is
called. Those constructors may in turn instantiate and connect their own
components.
top::top(sc_module_name name) :
  sc_module(name) ,
  m_dut("dut") ,
  m_stimulus("stimulus") ,
  m_channel("channel") ,
  m_driver("driver") ,
  m_monitor("monitor") ,
  m_reset_gen("reset_gen") ,
  clk("clk", 10, SC_NS)
{
  m_reset_gen.rst( rst );

  m_dut.clk( clk );
  m_dut.rst( rst );
  m_dut.address( address );
  m_dut.wr_data( wr_data );
  m_dut.rd_data( rd_data );
  m_dut.rw( rw );
  m_dut.req( req );
  m_dut.ack( ack );
  m_dut.err( err );

  m_driver.clk( clk );
  m_driver.rst( rst );
  m_driver.address( address );
  m_driver.wr_data( wr_data );
  m_driver.rd_data( rd_data );
  m_driver.rw( rw );
  m_driver.req( req );
  m_driver.ack( ack );
  m_driver.err( err );

  m_monitor.clk( clk );
  m_monitor.rst( rst );
  m_monitor.address( address );
  m_monitor.wr_data( wr_data );
  m_monitor.rd_data( rd_data );
  m_monitor.rw( rw );
  m_monitor.req( req );
  m_monitor.ack( ack );
  m_monitor.err( err );

  m_stimulus.initiator_port( m_channel.target_export );

  m_driver.request_port( m_channel.get_request_export );
  m_driver.response_port( m_channel.put_response_export );

  SC_THREAD( run );
}
The body of the constructor is a sequence of binding operations. It binds the
components instantiated in the constructor's initializer list to the signals
declared in the class and to the exports of child components.
When you use AVM in SystemVerilog, components are elaborated in the same
way, through chained constructors and sequences of binding operations.
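Stripped of the SystemC library, the chaining mechanism looks like this in plain C++; the component, driver, and top classes below are illustrative stand-ins, not SystemC or AVM types.

```cpp
#include <string>
#include <vector>

// A bare-bones component base: a name and a list of children.
struct component {
    std::string name;
    std::vector<component*> children;
    explicit component(std::string n) : name(std::move(n)) {}
};

struct driver : component {
    explicit driver(std::string n) : component(std::move(n)) {}
};

// The parent's initializer list constructs its children, so their
// constructors run as part of the parent's constructor. Each child
// could in turn construct grandchildren, chaining all the way down:
// when the outermost constructor returns, the whole hierarchy exists.
struct top : component {
    driver m_driver;
    driver m_monitor;
    explicit top(std::string n)
        : component(std::move(n)),
          m_driver("driver"),
          m_monitor("monitor") {
        children.push_back(&m_driver);
        children.push_back(&m_monitor);
    }
};
```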
11.6 Transaction‐Level Connections
As discussed in “Forming a Transaction‐Level Connection” on page 76,
SystemC uses sc_ports and sc_exports to form transaction‐level
connections. When you use the AVM in SystemVerilog, you use AVM ports and
exports (avm_*_port and avm_*_export).
11.6.1 Analysis Ports and Subscribers
Analysis ports in both languages can be unbound, bound to a single
subscriber, or bound to many subscribers.
SystemC does not have a built‐in subscriber object, but you can easily create
one. A subscriber is an sc_module that also inherits from analysis_if.
You can bind a subscriber to an analysis port using SystemC binding syntax,
just as you would bind any other port to a channel.
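A minimal plain-C++ sketch of that pattern follows, assuming invented names (analysis_if, analysis_port, coverage_collector) rather than the actual SystemC or AVM classes. It shows both ideas from this section: a subscriber built by multiple inheritance, and an analysis port that may be unbound or bound to many subscribers.

```cpp
#include <vector>

// The analysis interface: subscribers implement write().
template <typename T>
struct analysis_if {
    virtual void write(const T &t) = 0;
    virtual ~analysis_if() = default;
};

// An analysis port broadcasts write() to 0..n bound subscribers.
// An unbound port is legal: write() simply does nothing.
template <typename T>
struct analysis_port {
    std::vector<analysis_if<T>*> subs;
    void bind(analysis_if<T> &s) { subs.push_back(&s); }
    void write(const T &t) {
        for (auto *s : subs) s->write(t);
    }
};

struct component_base {};  // stands in for the module base class

// The subscriber inherits from both the component base and the
// interface -- the multiple-inheritance pattern described in the text.
struct coverage_collector : component_base, analysis_if<int> {
    int count = 0;
    void write(const int &) override { ++count; }
};
```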
11.7 Execution Phases
The primary container for components in SystemC is the class sc_module.
The primary container for components in a SystemVerilog testbench based on
the AVM is the named component: a class derived from avm_named_component.
In their respective languages, these classes contain the component processes
and interconnections. Each component must be initialized, have processes
started and stopped, and finally, be shut down. The execution phases of a
SystemC design or testbench and a SystemVerilog testbench built with AVM
closely correspond. The table below shows how they correspond.
     SystemC                         SystemVerilog
  1  elaboration via chained         elaboration via chained
     constructors                    constructors
  2  port binding in constructors    export_connections(),
                                     connect(),
                                     import_connections()
  3  end_of_elaboration()            end_of_elaboration(),
                                     configure()
  5  end_of_simulation()             report()
Once elaboration is complete and before simulation begins, the function
end_of_elaboration() is called for all modules in a SystemC design.
Similarly, in SystemVerilog, configure() is called by do_test() for all
named components. When simulation completes, after all the component
processes have terminated, end_of_simulation() is called in all
sc_modules, and report() is called in all SystemVerilog named components.
The functions end_of_elaboration() and end_of_simulation() in
SystemC, and configure() and report() in SystemVerilog are virtual
functions. The default implementation of all these functions does nothing. In
addition, sc_modules or named components can provide their own
implementations to perform customized pre‐ or post‐simulation processing.
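The phase hooks and their traversal orders can be sketched in plain C++. The orderings follow the Encyclopedia entries for do_configure() (top-down) and do_report() (bottom-up); the comp class and its logging are illustrative only.

```cpp
#include <string>
#include <vector>

// A component tree whose phase methods record the order in which they
// run. configure() visits parent before children (top-down), while
// report() visits children before parent (bottom-up).
struct comp {
    std::string name;
    std::vector<comp*> children;
    std::vector<std::string> &log;   // shared trace of phase execution
    comp(std::string n, std::vector<std::string> &l)
        : name(std::move(n)), log(l) {}
    void configure() { log.push_back("cfg:" + name); }
    void report()    { log.push_back("rpt:" + name); }
    void do_configure() {            // parent first, then children
        configure();
        for (auto *c : children) c->do_configure();
    }
    void do_report() {               // children first, then parent
        for (auto *c : children) c->do_report();
        report();
    }
};
```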
11.8 Building Complete Testbench Structures
Throughout this book, we have used a graphical schematic‐like notation for
describing testbenches. The notation uses filled‐in boxes to represent pin
interfaces, open boxes to represent ports, open circles to represent exports,
and diamonds to represent analysis ports. Arrows fill out the notation by
indicating data flow. From a diagram drawn using this notation, you can
easily construct code in either SystemC or SystemVerilog. The table below
lists some of the key constructs in SystemC and SystemVerilog necessary to
build AVM‐style testbenches.
  SystemC                            SystemVerilog
  components
    sc_module                        avm_named_component
                                     avm_threaded_component
                                     avm_subscriber
  ports
    sc_port<tlm_put_if<T> > port     avm_put_port#(T) port
  exports
    sc_export<tlm_put_if<T> > export avm_put_export#(T) export
  analysis ports
    analysis_port<T>                 avm_analysis_port#(T)
  binding
    operator()                       connect()
                                     (See "Connecting Hardware" on
                                     page 90.)
11.9 SystemVerilog or SystemC?
Both SystemC and SystemVerilog are powerful, general‐purpose
programming languages, and they are quite suitable for building
sophisticated, reusable verification components and scalable testbenches.
How do you choose which one is best for you? To help you answer that
question, we conclude this chapter by suggesting some criteria you can use in
your decision‐making process.
Legacy IP
Do you have a deep library of legacy verification IP? If your VIP
is mostly RTL, it is probably easier to integrate RTL VIP using
SystemVerilog.
Tool availability
Lean toward the language for which you have better access to tools.
Team skill sets
Is your team comfortable with OO concepts and techniques? Are
they more comfortable with C++ or with Verilog/VHDL? Which
language allows them to be most productive?
Abstraction
Do you need to reuse your testbench across abstraction levels? If
you are writing testbenches in SystemC to drive a SystemC
transaction‐level model, then it makes sense to reuse your
SystemC testbenches for the refined RTL design.
A
Graphic Notation
Throughout the cookbook we illustrate examples with diagrams that show
verification components and their interconnections. We use a schematic‐like
notation for these diagrams that combines both data flow and control flow
concepts.
A.1 Components
A component is represented using a box.
Figure A‐1 Component Symbol
Components are objects such as modules (in SystemC) or modules, interfaces,
program blocks, or classes (in SystemVerilog) that can be instantiated.
Components often have free running threads. Sometimes, the location of
threads in a design or testbench is important to understanding the design. To
show a component that has one or more threads, we use a circular arrow.
Figure A‐2 A Component with a Thread
A.2 Interfaces
Interfaces are the externally visible connections to components. All of a
component’s behavior is accessible and visible only through its interfaces.
First is the familiar pin interface.
Figure A‐3 A Component with a Pin Interface
The black boxes on the right side of the component represent pins.
Whereas pin interfaces move data represented at the bit level between
components, transaction interfaces move high‐level data between
components.
Figure A‐4 Transaction‐Level Interfaces
Figure A‐4 represents two variations of transaction interfaces: a port and an
export (in SystemC parlance). The component on the left has a transaction
port and the component on the right has an export. An export makes an
interface visible, and a port is a requirement to connect to an export. A good
way to think about transaction ports is as a set of unresolved function calls
that are resolved by exports. Ports and exports are complements of each
other; ports connect to exports. You cannot connect an export to an export or a
port to a port.
The port/export notation identifies the flow of control between components.
Since a port interface calls functions on an export, flow of control moves from
ports to exports.
A.3 Interconnect
Just like with traditional schematics, we use lines between interfaces to show
the interconnection amongst components. The addition of arrow heads
allows us to represent data flow.
Figure A‐5 Pin‐Level Data Flow
Arrows between pins show the direction data flows between components.
The figure above shows, from top to bottom, flow from A to B, bidirectional
between A and B, and flow from B to A.
Figure A‐6 Transaction Data Flow
Figure A‐6 illustrates two configurations, each with the same transaction
interfaces but with different data flow. In both configurations, a function in B
is invoked by A: A initiates activity in B. A is the initiator and B is the target.
In the top configuration, A moves data to B. This is called a put operation. In
the bottom configuration, A moves data from B back to itself. This is called a
get operation.
A.4 Channels
Transaction‐level components often communicate through channels. A
channel is a component that defines the semantics of the communication. One
of the most common channels used is a FIFO. FIFOs are used to throttle
communication between two transaction‐level components. To show a FIFO in a
diagram, we draw a small box between the components.
Figure A‐7 Two Components Communicating through a FIFO
Figure A‐7 shows two components, each with its own thread, and each with a
transaction port that connects to an intervening channel. Component A puts
transactions into the FIFO channel, and component B gets transactions from
the same channel. The data flow arrows, in addition to the transaction ports,
tell us which components are doing gets and which are doing puts. A has a
thread, a transaction port (as opposed to an export), and an arrow leading
away from it. That tells us that A is putting transactions into the channel. B
also has a thread and a transaction port, but the data flow arrow is leading
into the component instead of away from it. That tells us that B is getting
transactions from the channel.
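The FIFO channel semantics can be sketched in plain C++ without threads. fifo_channel and its try_put/try_get methods are invented stand-ins for a blocking FIFO interface: where a threaded testbench would block the producer, this sketch reports failure instead.

```cpp
#include <cstddef>
#include <queue>

// A channel defines the semantics of the communication. This bounded
// FIFO delivers transactions in order and throttles the producer once
// the bound is reached.
template <typename T>
class fifo_channel {
    std::queue<T> q;
    std::size_t bound;
public:
    explicit fifo_channel(std::size_t max_depth) : bound(max_depth) {}
    // Fails when the FIFO is full; a blocking put() would suspend the
    // producing thread here instead.
    bool try_put(const T &t) {
        if (q.size() >= bound) return false;
        q.push(t);
        return true;
    }
    // Fails when the FIFO is empty; a blocking get() would wait.
    bool try_get(T &out) {
        if (q.empty()) return false;
        out = q.front();
        q.pop();
        return true;
    }
};
```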
A.5 Analysis Ports
Analysis ports are a kind of transaction‐level port used for communicating
information between components involved in the operation of the DUT and
components used to analyze activity. Analysis ports are discussed fully in
Chapter 7. The symbol for an analysis port is a diamond. Analysis ports are
connected to a component with an analysis interface; this could be an
analysis FIFO or any other component that implements the interface.
Figure A‐8 Analysis Port Connected to an Analysis Interface
A.6 Summary
The AVM graphic notation is an extension to traditional RTL schematic
notation. The extensions let us show transaction interfaces, channels, and data
flow between components. Using this notation, we can combine transaction‐
level and RTL components on the same diagram, which is important for
diagramming testbenches.
B
Naming Conventions
Good quality code has a consistent look and feel. One of the ways to achieve a
consistent look and feel is to use a consistent naming scheme. This section
documents the one we use for our examples.
We have the somewhat unusual problem of creating a naming convention
that is consistent between two languages, SystemC and SystemVerilog. For
the most part, both languages are well behaved when it comes to naming, so
we were able to build a convention that works for both languages with just a
few minor differences. All the rules apply equally to both languages except in
places where it is explicitly noted otherwise.
A name is constructed from three parts: the prefix, the main part, and the
suffix. The main part of the name may consist of one or more words. All the
parts—prefix, suffix, and words in the main part—are separated by
underscores. Some sample names are as follows:
tlm_fifo
This name has a main part of fifo and a prefix of tlm.
m_parent_p
This name has a main part of parent, a prefix of m, and a suffix of p.
finite_state_machine
This name has a main part with three words, but no suffix or prefix.
top
This name also has no suffix or prefix, but the main part, top, consists of only
one word.
The SystemVerilog AVM library is contained in a single package called
avm_pkg. The classes inside that package are prefixed with either avm_ or
tlm_. The reason for the tlm_ prefix is to match names in the OSCI TLM‐1.0
standard which, of course, is rendered in SystemC.
class names
Class names use the general naming scheme. Classes that are part
of a specific package or library should use the same prefix for all
members of the package or library.
avm_analysis_port
tlm_fifo
local variables
Local variables use the general naming scheme but have no
prefix. They may have suffixes depending on the kind of object
being named.
integer indexes
Use i, j, k for integer indexes. This is one place where single
letter variable names are acceptable.
int i;
int j;
for(i = 0; i < last; i++)
{
  for(j = i; j < last; j++)
    matrix[i][j] = compute_entry(i, j);
}
class members
Class members are another form of local variable. Instead of
being local to a function or task, they are local to a class. To
distinguish class members from local variables in a function, task,
or method, use the local variable convention and add a prefix of
m_.
class pc_bus_request
  addr_t m_address;
  data_t m_data;
  request_t m_request_type;
endclass : pc_bus_request
class methods
For class methods, use a different prefix from the enclosing class,
and group related methods under a common prefix.
class pc_bus_request
  addr_t m_address;
  data_t m_data;
  function void set_addr(addr_t a);
  endfunction
  function void set_data(data_t d);
  endfunction
endclass : pc_bus_request
local variables with suffixes
Readability improves greatly when you can tell something about the
type or kind of an object from its name in an expression, without
having to refer back to its declaration.
pointers
Pointers appear in SystemC and not in SystemVerilog. Pointers
use the local variable naming scheme and have a suffix of _p.
handles
Handles appear in SystemVerilog and not in SystemC. All
instances of a class in SystemVerilog are referenced using class
handles. Use the local variable naming scheme, and if no other
suffix applies, add the _h suffix.
type names
For type names that are created with typedef, use the local
variable convention and add the _t suffix.
typedef struct {bit [7:0] value} data_t;
function/task/method names and formal arguments
Functions, tasks, and methods and their formal arguments use
the same convention as local variables—no prefixes or suffixes. A
formal argument may be abbreviated if the abbreviation is
derived from its type.
macros
Macros are all uppercase letters and words are separated by
underscores. We distinguish between a macro and a named
constant in SystemC. Macros are simply text to be substituted at
the appropriate point in a program using a preprocessor. Named
constants are constant values with a name known to the compiler
and to the debugger.
parameters
Parameters are all uppercase letters, and words are separated by
underscores. An abbreviation may be used if it is derived from its
type. In SystemVerilog, parameters or localparams are preferred
over macros to reduce order‐of‐compilation issues.
enumeration types and enumeration members
Enums need a suffix only if used as a defined type; in that case,
use _e. The members of enumerated types should have a common prefix
that indicates their type. For example:
typedef enum {BUS_READ, BUS_WRITE} bus_op_e;
interfaces
modports have the _mp suffix
interfaces have the _if suffix
virtual interfaces have the _vif suffix
interface bus_if;
...
endinterface : bus_if
packages
Packages have the _pkg suffix. The main part of the package
name should be the basis for the prefix of the names within the
package.
ports
Transaction‐level ports should use the same naming conventions
as formal arguments to a function. Transaction‐level ports should
use the suffix _port or _export, as appropriate. Use _ap as a
suffix for analysis ports. If you have only one analysis port in a
module, which is quite common, just name it ap with no suffix or
prefix.
sc_export<control_if> ctrl_export;
analysis_port error_ap, good_ap;
C
AVM Encyclopedia
The AVM Encyclopedia documents all of the classes in the AVM library. The
classes are organized in related groups. For each class there is a description of
the class and what it is used for along with a listing of all the members and
methods. The methods and members are described as well.
To use the Encyclopedia, look up a class name in the class index to find the
page that has the complete description and the file name in the library where
the class is defined. You can also peruse the class groups to understand how
the classes work together.
Class Index
Table 1 is a complete list of all the class definitions in alphabetic order, the file
in which each definition resides, and a reference to a page number for the
complete description of the class.
Classes for Components
Components form the foundation of the AVM. Components encapsulate
behavior of transactors, scoreboards, and other objects in a testbench.
avm_named_component is the base class from which all component classes are
derived.
Figure C‐1 UML Diagram for Components: avm_named_component is the base
class, with parent (0..1) and children (0..*) associations to itself;
avm_random_stimulus, avm_subscriber, avm_threaded_component, avm_env,
and avm_connector_base (parameterized by IF) are derived from it.
avm_connection_phase_e m_connection_phase =
AVM_CONSTRUCTION_PHASE
Used to determine which phase of elaboration is currently being
executed. This enum is used by avm_connector_base (see page 249)
to check that the correct connections are being done in the correct
subphase of the connect phase.
internal members:
local process m_run_process
The run process is the process id of the run() task.
methods:
function new(string name="env")
This is the constructor. name is the first argument of the normal
avm_named_component constructor. It is not necessary to specify the
parent argument, since avm_env does not have a parent.
virtual function void configure()
Allows configuration of verification components before the simulation
starts, although it may be that back door memory accesses are also done
here. It is virtual so that it can be overloaded in subclasses of avm_env. It
is a function, so it cannot consume time. If there is some time‐consuming
initialization that is needed before the test starts, then it must be done
in the run phase. configure() is executed after elaborate() and
before run().
virtual function void connect()
A virtual method that gets overridden in the user‐defined subclass to
specify the connections between top‐level components in the
environment.
function void do_kill_all()
Kills all run() tasks, whether these were created by avm_env or
avm_threaded_component. It is called by do_test() between the
run phase and the report phase. It can also be called manually to stop all
processes.
virtual task do_test()
The main user‐visible method in avm_env. It runs through the AVM
phases as described above.
virtual task execute()
This method is deprecated. It exists to support backward compatibility
with AVM 2.0.
virtual function void kill()
Kills the run() task and any of its children. It can be overloaded to do
additional work before or after killing these processes. However, it is
necessary that the overloaded method call super.kill().
avm_named_component
methods:
function new(string name, avm_named_component parent=null,
bit check_parent=1)
We must always provide a local name. For most components, a parent
is supplied and checked. The only exceptions to this are avm_env, which
must not have a parent, and components such as tlm_fifo (see page
264), which might not have a parent if they are being used outside of an
avm_env, such as in a module. If a component instantiation does not
have a parent, then check_parent should be set to 0.
function avm_named_component absolute_lookup(string name)
Returns a handle to the component with the full hierarchical name
specified by name, if there is such a component, and will return null
otherwise.
virtual function void configure()
An empty implementation in avm_named_component. It should be
overloaded if required in a subclass. It is called by the avm_env using
do_configure().
protected virtual function void connect()
Called by avm_env after export_connections and before
import_connections. It should be overloaded in subclasses of
avm_named_component so that a child port that requires an interface
can obtain it from an export of another sibling child. Connections in
connect() should be of the form child1.port.connect(
child2.export ).
virtual function void do_display(int max_level=-1,
int level=0,
bit display_connectors=0)
A debugging method that is used to recursively display the hierarchical
names of this component and its children. max_level is used to control
the depth of the recursion—the default value of ‐1 means that the
recursion will always carry on to the lowest level in the hierarchy of the
testbench. It is not expected that the normal testbench code will supply a
value other than zero to the second argument. The third argument is
used to control whether ports, exports, and implementations are
displayed. The default value of zero indicates that they are not
displayed; a value of 1 ensures that connectors are displayed.
virtual function void do_kill_all()
Kills all the run() tasks in the current instance of
avm_named_component and any tasks spawned by this instance and
any child component instances. It is called by avm_env after its run()
task has finished executing.
function void do_flush()
Calls the virtual flush() method for this component and all its
children using a bottom‐up ordering. It is not called automatically by
avm_env, so it needs to be called explicitly when required.
virtual function void end_of_elaboration()
A virtual function whose default implementation is empty. It is called by
avm_env at the end of elaboration and before configure(). It can be
overloaded in a subclass, and is a useful place to put debugging code
that can display interesting aspects of the testbench hierarchy or
connectivity.
protected virtual function void export_connections()
Called by avm_env at the beginning of the connect() phase. It should
be overloaded in subclasses of avm_named_component to make
avm_*_exports and avm_*_imps defined in children of this
component externally visible. Connections in export_connections
should be of the form export.connect(child.export).
virtual function void flush()
An empty implementation in avm_named_component. It should be
overloaded if required in a subclass. It is called from normal testbench
code by do_flush().
protected virtual function void import_connections()
Called by avm_env at the end of the connect() phase. It should be
overloaded in subclasses of avm_named_component so that a child port
that requires an external interface can obtain it from a port of this
component. Connections in import_connections should be of the
form child.port.connect( port ).
function bit is_removed()
Returns 1 if this component has been removed, otherwise returns 0.
function avm_named_component relative_lookup(string name)
Looks up the child of this component whose name relative to this
component is name. For example, if this component’s name is “i1.i2” and
name is “i3.i4,” then this method will return the handle to the
component with name “i1.i2.i3.i4” if there is such a component, and will
return null otherwise.
virtual function void remove()
Removes all trace of a component from the various AVM data structures.
It is virtual to allow subclasses to delete this component from their data
structures if that is necessary. remove() can only be called during the
avm_env construction phase.
virtual function void report()
Calls add_to_debug_list() for each child of this component.
virtual function void check_connection_size()
Called by avm_env at the end of elaboration to check that the minimum
number of interfaces has been supplied to each connector, as defined by
the min_size argument in the constructor of avm_connector_base.
function void do_configure()
Called by avm_env after the elaboration phase but before the run()
phase. It calls configure() on this component and all its children
using a top‐down ordering.
function void do_connect()
Called by avm_env after do_export_connections() and before
do_import_connections(). Calls connect() in this component
and all its children using an undefined ordering.
function void do_end_of_elaboration()
Used by avm_env to ensure top‐down ordering of
end_of_elaboration() methods.
function void do_export_connections()
Called by avm_env at the beginning of the connect() phase. Calls
export_connections() in this component and all its children using a
bottom‐up ordering.
function void do_import_connections()
Called by avm_env after do_connect(). Calls
import_connections() in this component and all its children using a
top‐down ordering.
function void do_report()
Called by avm_env after the run() method terminates. Calls report on
this component and its children using a bottom‐up ordering.
virtual task do_run_all()
Spawns all the run() tasks in this component and all its children. Called
by avm_env after the configure() phase and immediately before it
spawns its own run() method.
function void do_set_env(avm_env e)
Called by avm_env after construction and before the connection phase.
It sets m_env in all the children of the avm_env.
local function void extract_name()
A utility used by the absolute and relative look‐up methods.
local function void no_parent_message()
An internal method that prints out some AVM 2.0 to AVM 3.0 migration
messages.
avm_random_stimulus
This is a general purpose unidirectional random stimulus generator. It is
a very useful component in its own right, but can also either be used as a
template to define other stimulus generators, or it can be extended to add
additional stimulus generation methods to simplify test writing.
The avm_random_stimulus class generates streams of trans_type
transactions. These streams may be generated by the randomize()
method of trans_type, or the randomize() method of one of its
subclasses, depending on the type of the argument passed into the
generate_stimulus() method. The stream may go indefinitely, until
terminated by a call to stop_stimulus_generation(), or you may
specify the maximum number of transactions to be generated.
By using inheritance, we can add directed initialization or tidy‐up
sequences to the random stimulus generation.
file: utils/avm_random_stimulus.svh
virtual: no
parameters:
type trans_type=avm_transaction
Specifies the type of transaction to be generated.
members:
avm_blocking_put_port #(trans_type) blocking_put_port
The port through which transactions come out of the stimulus generator.
local bit m_stop=0
Indicates whether the stimulus generator should stop before issuing the
next transaction.
methods:
function new(string name, avm_named_component parent)
This is the standard AVM 3.0 constructor.
The constructor displays the string obtained from get_randstate()
during construction. set_randstate() can then be used to regenerate
precisely the same sequence of transactions for debugging purposes.
virtual task generate_stimulus(trans_type t=null,
int max_count=0)
The main user‐visible method. If t is not specified, we will generate
random transactions of type trans_type. If t is specified, we will use
the randomize() method in t to generate transactions—so t must be a
subclass of trans_type. max_count is the maximum number of
transactions to be generated. A value of zero indicates no maximum: in
that case, generation continues until stop_stimulus_generation() is
called.
avm_stimulus
This is deprecated in AVM 3.0 in favor of avm_random_stimulus. It is in
the library to ensure backward compatibility with AVM 2.0.
file: deprecated/avm_stimulus.svh
virtual: no
parameters:
type trans_type = avm_transaction
members:
tlm_blocking_put_if #(trans_type) blocking_put_port
local bit m_stop=0
methods:
function new(string name, avm_named_component parent=null)
virtual task generate_stimulus(trans_type t=null,
int max_count=0)
virtual function void stop_stimulus_generation()
avm_subscriber
methods:
function new(string name, avm_named_component p)
This is the standard AVM 3.0 constructor.
pure virtual function void write(T t)
A pure virtual method that needs to be defined in a subclass.
task do_run_all()
Used by the avm_env to spawn the run() method.
Classes for Comparators
A common function of testbenches is to compare streams of transactions for
equivalence. For example, a testbench may compare a stream of transactions
from a DUT with expected results. The AVM library provides a base class
called avm_in_order_comparator and two derived classes, which are
avm_in_order_built_in_comparator for comparing streams of built‐in
types and avm_in_order_class_comparator for comparing streams of class
objects. avm_algorithmic_comparator also compares two streams of
transactions; however, the two streams may be of different transaction
types. It uses a user‐written transformation function to convert one type
to the other before performing the comparison.
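As a sketch, the TRANSFORMER policy class for avm_algorithmic_comparator needs only a transform() method. The before_t, after_t, and before_to_after classes and their fields below are hypothetical; the parameter order in the usage comment follows the this_type typedef shown in the entry below:

```systemverilog
// Hypothetical BEFORE and AFTER transaction types.
class before_t;
  int payload;
endclass

class after_t;
  int payload;
endclass

// The transformer encapsulates the user-written conversion algorithm.
class before_to_after;
  function after_t transform(before_t b);
    after_t a = new;
    a.payload = b.payload;  // user-defined conversion goes here
    return a;
  endfunction
endclass

// Construction, inside an avm_env (names illustrative):
//   before_to_after xform = new;
//   avm_algorithmic_comparator #(before_t, after_t, before_to_after)
//     comp = new(xform, "comp", this);
```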
Figure C‐2 UML Diagram for Comparator Classes (avm_algorithmic_comparator
and avm_in_order_comparator, the latter parameterized by T, comp, convert,
and pair_type, derive from avm_threaded_component, itself an
avm_named_component; avm_in_order_class_comparator and
avm_in_order_built_in_comparator derive from avm_in_order_comparator.)
file: utils/avm_algorithmic_comparator.svh
virtual: no
parameters:
type AFTER = int
The type of the transaction against which the transformed BEFORE
transactions will be compared.
type BEFORE = int
The type of incoming transaction to be transformed prior to comparing
against the AFTER transactions.
type TRANSFORMER = int_transform
The type of the class that contains the transform() method.
members:
typedef avm_algorithmic_comparator
#(BEFORE, AFTER, TRANSFORMER) this_type
comp is the comparator used to compare the transformed BEFORE
stream with the AFTER stream.
local TRANSFORMER m_transformer
m_transformer encapsulates the algorithm that transforms BEFOREs
into AFTERs.
methods:
function new(TRANSFORMER transformer, string name,
avm_named_component parent)
The constructor takes a handle to an externally constructed
transformer, a name, and a parent. The last two arguments are the
normal arguments for an AVM 3.0 named component constructor.
We create an instance of the transformer (rather than making it a genuine
policy class with a static transform method) because we might need to
do reset and configuration on the transformer itself.
function void export_connections()
This is the standard AVM method for making exports and
implementations of subcomponents visible externally.
function void write(BEFORE b)
This method handles incoming BEFORE transactions. It is usually
accessed via the before_export, and it transforms the BEFORE
transaction into an AFTER transaction before passing it to the
in_order_class_comparator.
avm_in_order_comparator
Compares two streams of transactions. These transactions may either be
classes or built‐in types. To be successfully compared, the two streams of
data must be in the same order. Apart from that, there are no assumptions
made about the relative timing of the two streams of data.
file: utils/avm_in_order_comparator.svh
virtual: no
parameters:
type T = int
Specifies the type of transactions to be compared.
type comp = avm_built_in_comp #(T)
The type of the comparator to be used to compare the two transaction
streams.
type convert = avm_built_in_converter #(T)
A policy class to allow convert2string() to be called on the
transactions being compared. If T is an extension of avm_transaction,
it uses T::convert2string(). If T is a built‐in type, the policy
provides a convert2string() method for the comparator to call.
Classes for Connectors
Connectors are the ports and exports used to form transaction‐level
connections between components or between components and channels.
Figure C‐3 UML Diagram for Connectors (avm_port_base and
avm_connector_base, both parameterized by IF, derive from
avm_named_component and are associated with the tlm_*_if interface
classes.)
avm_*_export
parameters:
type T = int
The type of transaction to be communicated across the export.
members: <none>
methods:
function new(string name, avm_named_component parent,
int min_size=1, int max_size=1)
name and parent are the standard AVM 3.0 constructor arguments.
min_size and max_size specify the minimum and maximum number
of interfaces that must have been supplied to this port by the end of
elaboration.
The AVM library contains one export for each tlm_*_if interface class as
shown in Table 2.
Interface Export
analysis_if avm_analysis_export
tlm_blocking_get_if avm_blocking_get_export
tlm_blocking_get_peek_if avm_blocking_get_peek_export
tlm_blocking_master_if avm_blocking_master_export
tlm_blocking_peek_if avm_blocking_peek_export
tlm_blocking_put_if avm_blocking_put_export
tlm_blocking_slave_if avm_blocking_slave_export
tlm_get_if avm_get_export
tlm_get_peek_if avm_get_peek_export
tlm_master_if avm_master_export
tlm_nonblocking_get_if avm_nonblocking_get_export
tlm_nonblocking_get_peek_if avm_nonblocking_get_peek_export
tlm_nonblocking_master_if avm_nonblocking_master_export
tlm_nonblocking_peek_if avm_nonblocking_peek_export
tlm_nonblocking_put_if avm_nonblocking_put_export
tlm_nonblocking_slave_if avm_nonblocking_slave_export
tlm_peek_if avm_peek_export
tlm_put_if avm_put_export
tlm_slave_if avm_slave_export
tlm_transport_if avm_transport_export
avm_*_imp
avm_*_imp provides a tlm_*_if to ports and exports that require it. The
actual implementations of the methods that comprise tlm_*_if are
defined in an object of type IMP (e.g., tlm_fifo #(T)), which is passed in
to the constructor.
file: tlm/avm_imps.svh
virtual: no
parameters:
type T = int
Type of transactions to be communicated across the underlying
interface.
type IMP = int
Type of the parent of this implementation.
internal members:
local tlm_*_if #(T) m_if
Handle back to avm_*_imp.
local IMP m_imp
Handle to the component that implements the methods conveyed in the
tlm_*_if description.
methods:
function new(string name, IMP imp)
name is the normal first argument to an AVM 3.0 constructor. imp is a
slightly different form for the second argument to the AVM 3.0
constructor, which is of type IMP and defines the type of the parent.
Since it is the purpose of an “imp” class to provide an implementation of
a set of interface tasks and functions, the particular set of tasks and
functions available for each avm_*_imp class is dependent on the type
of the interface it implements, i.e., the particular TLM interface it
extends.
Table 3 lists all the avm_*_imp classes and the interfaces each
implements. The set of tasks and functions implemented is listed in the
description of the interface classes.
Implementation Interface
avm_analysis_imp analysis_if
avm_blocking_get_imp tlm_blocking_get_if
avm_blocking_get_peek_imp tlm_blocking_get_peek_if
avm_blocking_master_imp tlm_blocking_master_if
avm_blocking_peek_imp tlm_blocking_peek_if
avm_blocking_put_imp tlm_blocking_put_if
avm_blocking_slave_imp tlm_blocking_slave_if
avm_get_imp tlm_get_if
avm_get_peek_imp tlm_get_peek_if
avm_master_imp tlm_master_if
avm_nonblocking_get_imp tlm_nonblocking_get_if
avm_nonblocking_get_peek_imp tlm_nonblocking_get_peek_if
avm_nonblocking_master_imp tlm_nonblocking_master_if
avm_nonblocking_peek_imp tlm_nonblocking_peek_if
avm_nonblocking_put_imp tlm_nonblocking_put_if
avm_nonblocking_slave_imp tlm_nonblocking_slave_if
avm_peek_imp tlm_peek_if
avm_put_imp tlm_put_if
avm_slave_imp tlm_slave_if
avm_transport_imp tlm_transport_if
avm_*_port
Port Interface
avm_analysis_port analysis_if
avm_blocking_get_port tlm_blocking_get_if
avm_blocking_get_peek_port tlm_blocking_get_peek_if
avm_blocking_master_port tlm_blocking_master_if
avm_blocking_peek_port tlm_blocking_peek_if
avm_blocking_put_port tlm_blocking_put_if
avm_blocking_slave_port tlm_blocking_slave_if
avm_get_port tlm_get_if
avm_get_peek_port tlm_get_peek_if
avm_master_port tlm_master_if
avm_nonblocking_get_port tlm_nonblocking_get_if
avm_nonblocking_get_peek_port tlm_nonblocking_get_peek_if
avm_nonblocking_master_port tlm_nonblocking_master_if
avm_nonblocking_peek_port tlm_nonblocking_peek_if
avm_nonblocking_put_port tlm_nonblocking_put_if
avm_nonblocking_slave_port tlm_nonblocking_slave_if
avm_peek_port tlm_peek_if
avm_put_port tlm_put_if
avm_slave_port tlm_slave_if
avm_transport_port tlm_transport_if
avm_analysis_port
avm_analysis_port is used by a component such as a monitor to publish
a transaction to zero, one, or more subscribers. Typically, it will be used
inside a monitor to publish a transaction observed on a bus to
scoreboards and coverage objects.
file: tlm/avm_ports.svh
parameters:
type T = int
The type of transaction to be written by the analysis port.
members:
typedef avm_port_base #(analysis_if #(T)) port_type
methods:
function new(string name, avm_named_component parent)
This is the standard AVM 3.0 constructor. parent should be null for
analysis ports defined in a static scope, e.g., in a module‐based monitor.
virtual function void connect(port_type provider)
Used to connect an analysis port to another analysis port, an analysis
export, or an analysis implementation; e.g., in a flat hierarchy, we will
typically use
monitor.ap.connect(coverage_object.analysis_export) to
connect a monitor to a coverage object observing the transactions being
emitted by the monitor.
function void register(analysis_if #(T) _if)
Provides backwards compatibility with AVM 2.0.
function void write(T t)
Publishes transaction t to all subscribers.
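A typical publisher is a monitor. The sketch below assumes a user-defined my_transaction class and subscriber components that expose an analysis_export; all names are illustrative:

```systemverilog
class my_monitor extends avm_named_component;
  avm_analysis_port #(my_transaction) ap;

  function new(string name, avm_named_component parent);
    super.new(name, parent);
    ap = new("ap", this);
  endfunction

  // Called by the bus-watching code when a transaction is observed.
  function void observed(my_transaction t);
    ap.write(t);  // publish to zero, one, or more subscribers
  endfunction
endclass

// In the enclosing env's connect() method (subscriber names assumed):
//   monitor.ap.connect(coverage_object.analysis_export);
//   monitor.ap.connect(scoreboard.analysis_export);
```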
avm_blocking_slave_imp
A blocking slave implementation allows a single or a pair of components
that implement put(response), get(request), and peek(request) to
export a single interface that allows a slave to get or peek requests and
put responses.
file: tlm/avm_imps.svh
virtual: no
parameters:
type REQ = int
Type of transactions to be received by this slave.
type RSP = int
Type of transactions to be sent out by this slave.
type IMP = int
Type of the parent of this implementation.
type REQ_IMP = IMP
Type of the object that implements the request side of the interface.
type RSP_IMP = IMP
Type of the object that implements the response side of the interface.
internal members:
local tlm_blocking_slave_if #(REQ, RSP) m_if
Handle back to the blocking slave implementation itself.
local REQ_IMP m_req_imp
Handle to the object that implements get(request), can_get,
peek(request), and can_peek. By default, it is the parent of the
blocking slave implementation.
local RSP_IMP m_rsp_imp
Handle to the object that implements put(response) and can_put.
By default, it is the parent of the blocking slave implementation.
methods:
function new(string name, IMP imp,
REQ_IMP req_imp=imp, RSP_IMP rsp_imp=imp)
name is the normal first argument to an AVM 3.0 constructor. imp is a
slightly different form for the second argument to the AVM 3.0
constructor, which is of type IMP and defines the type of the parent.
req_imp and rsp_imp are optional. If they are specified, then they
must point to the underlying implementation of the request and
response methods; e.g., in tlm_req_rsp_channel (see page 266),
req_imp and rsp_imp are the request and response FIFOs.
task put(input RSP rsp)
task get(output REQ req)
task peek(output REQ req)
See the documentation for tlm_blocking_slave_if (see page 274)
for a description of these methods.
Lists the types of connectors allowed in the AVM.
typedef enum {AVM_CONSTRUCTION_PHASE,
AVM_EXPORT_CONNECTIONS_PHASE,
AVM_CONNECT_PHASE,
AVM_IMPORT_CONNECTIONS_PHASE,
AVM_DONE_CONNECTIONS_PHASE}
avm_connection_phase_e
Lists the phases executed during the elaboration of an avm_env.
avm_connector_base
virtual: no
parameters:
type IF = int
A placeholder for the type of interface being required or provided by
this connector.
internal members:
typedef avm_connector_base #(IF) connector_type
local IF m_if_list[$]
Holds the interfaces that (should) satisfy the connectivity requirements
of this connector. At the end of elaboration, an error will be reported if
the size of this list is not between m_min_size and m_max_size
(inclusive).
local int m_max_size
The maximum number of interfaces that this connector can have at the
end of elaboration. This value is checked during elaboration.
local int m_min_size
The minimum number of interfaces that this connector can have at the
end of elaboration. This value is checked at the end of elaboration.
local avm_port_type_e m_port_type
Indicates whether this connector is a port, export, or implementation.
local avm_connector_base #(IF) m_provided_by[string]
An associative array of connector bases that have supplied their
interfaces to satisfy the connectivity requirements of this
avm_connector_base. It is indexed by the name of the connector to
make debugging easier. All the interfaces of the
avm_connector_bases in this.m_provided_by are copied into
this.m_if_list.
local avm_connector_base #(IF) m_provided_to[string]
An associative array of avm_connector_bases to which this connector
base has supplied its interfaces. It is indexed by the name of the
connector to make debugging easier. All the interfaces of this
avm_connector_base are copied into the m_if_list of each connector in
this.m_provided_to.
methods:
function IF lookup_indexed_if(int i=0)
Looks up the ith interface supplied to this connector. It is typically used
to access the various interfaces bound to a multiport.
function int max_size()
Returns the maximum number of connected interfaces.
function int min_size()
Returns the minimum number of connected interfaces.
function int size()
Returns the number of connected interfaces (i.e., the number of elements
in the m_if_list).
avm_port_base
The first two arguments are the normal AVM 3.0 constructor arguments.
The port_type is port, export, or implementation. min_size and
max_size specify the minimum and maximum number of interfaces
that must be supplied to this port base by the end of elaboration.
parent is usually non‐null, in which case check_parent should take
its default value of 1. The rare exception to this (usually, analysis ports
defined outside of an avm_env) should set the value of parent to null
and check_parent to 0.
function void connect(this_type provider)
Connects a port or export that requires interfaces of type IF to a port,
export, or implementation that provides interfaces of type IF.
function void connect_to_if(IF _if)
Connects directly to an interface by delegating the call to m_connector.
The main use for this method is to enable backward compatibility with
AVM 2.0.
function void debug_connected_to(int level=0, int max_level=-1)
Prints out information on the connectors that have supplied interfaces to
this connector, by delegating the call to m_connector.
function void debug_provided_to(int level=0, int max_level=-1)
Prints out information on the connectors that this connector has supplied
interfaces to, by delegating the call to m_connector.
function IF lookup_indexed_if(int i=0)
Gets the ith interface that has been provided to this port base, by
delegating the call to m_connector.
function void remove()
Delegates the method call to m_connector.
function int size()
Gets the number of interfaces that have been provided to this port base
by delegating the call to m_connector.
avm_slave_imp
virtual: no
parameters:
type REQ = int
Type of transactions to be received by this slave.
type RSP = int
Type of transactions to be sent out by this slave.
type IMP = int
Type of the parent of this implementation.
type REQ_IMP = IMP
Type of the object that implements the request side of the interface.
type RSP_IMP = IMP
Type of the object that implements the response side of the interface.
internal members:
local tlm_slave_if #(REQ, RSP) m_if
Handle back to the slave implementation itself.
local REQ_IMP m_req_imp
Handle to the object that implements get(request),
try_get(request), can_get, peek(request),
try_peek(request), and can_peek. By default, it is the parent of the
slave implementation.
local RSP_IMP m_rsp_imp
Handle to the object that implements put(response),
try_put(response), and can_put. By default, it is the parent of the
slave implementation.
methods:
function new(string name, IMP imp,
REQ_IMP req_imp=imp, RSP_IMP rsp_imp=imp)
name is the normal first argument to an AVM 3.0 constructor. imp is a
slightly different form for the second argument to the AVM 3.0
constructor, which is of type IMP and defines the type of the parent.
req_imp and rsp_imp are optional. If they are specified, then they
must point to the underlying implementation of the request and
response methods; e.g., in tlm_req_rsp_channel (see page 266),
req_imp and rsp_imp are the request and response FIFOs.
task put(input RSP rsp)
function bit try_put(input RSP rsp)
function bit can_put()
task get(output REQ req)
function bit try_get(output REQ req)
function bit can_get()
task peek(output REQ req)
[Deprecated in AVM‐3.0. Use avm_analysis_imp instead.]
file: deprecated/tlm_imps.svh
virtual: no
members:
local IMP m_imp
function new(IMP i)
methods:
function void write(input T t)
analysis_port
members:
local analysis_if #(T) if_list[$]
local avm_reporter r
methods:
function new()
function void register(input analysis_if #(T) i)
function void write(input T t)
methods:
static function analysis_port #(T) get_analysis_port(string
name)
tlm_*_imp
Table 5 lists the tlm_*_imp classes deprecated in AVM 3.0.
Implementation Interface
tlm_blocking_get_imp tlm_blocking_get_if
tlm_blocking_get_peek_imp tlm_blocking_get_peek_if
tlm_blocking_master_imp tlm_blocking_master_if
tlm_blocking_peek_imp tlm_blocking_peek_if
tlm_blocking_put_imp tlm_blocking_put_if
tlm_blocking_slave_imp tlm_blocking_slave_if
tlm_get_imp tlm_get_if
tlm_get_peek_imp tlm_get_peek_if
tlm_master_imp tlm_master_if
tlm_nonblocking_get_imp tlm_nonblocking_get_if
tlm_nonblocking_get_peek_imp tlm_nonblocking_get_peek_if
tlm_nonblocking_master_imp tlm_nonblocking_master_if
tlm_nonblocking_peek_imp tlm_nonblocking_peek_if
tlm_nonblocking_put_imp tlm_nonblocking_put_if
tlm_nonblocking_slave_imp tlm_nonblocking_slave_if
tlm_peek_imp tlm_peek_if
tlm_put_imp tlm_put_if
tlm_slave_imp tlm_slave_if
tlm_transport_imp tlm_transport_if
Classes for Channels
The AVM supplies a FIFO channel and a variety of interfaces to access it. The
interfaces have both blocking and nonblocking forms. Because SystemVerilog
does not support multiple inheritance, the FIFO has a collection of “imps”,
implementations of the abstract interfaces, through which the FIFO is
accessed. The FIFO is a named component and thus has a name and a
location in the component hierarchy.
Figure C‐4 UML Diagram for Channels (the channel classes, including
analysis_fifo and tlm_transport_channel, are avm_named_components that
implement the tlm_*_if interfaces; the request/response channels are
parameterized by REQ and RSP.)
analysis_fifo
members:
avm_analysis_imp #(T, analysis_fifo #(T)) analysis_export
analysis_export provides the write method to other components.
Calling ap.write(t) on a port bound to this export is the normal
mechanism for writing to an analysis FIFO.
methods:
function new(string name, avm_named_component parent=null)
This is the standard AVM 3.0 avm_named_component constructor.
name is the local name of this component. parent should be left
unspecified when this component is instantiated in statically elaborated
constructs and must be specified when this component is a child of
another AVM component.
function void write(input T t)
Writes transaction t into the unbounded FIFO; this operation is
guaranteed to succeed.
task peek(output T t)
Does a blocking peek() and then publishes the peeked transaction
using get_ap. Succeeds when there is something in the FIFO available
to be peeked. peek() is not consuming. When it succeeds, t is still in
the FIFO.
task put(input T t)
Inserts transaction t into the internal mailbox and publishes the
transaction to the put_ap when it is successful. Succeeds when there is
room in the FIFO.
function int size()
This returns m_size.
function bit try_get(output T t)
Will get a transaction from the FIFO. If the FIFO contains a transaction,
then it publishes the transaction across get_ap and returns 1.
Otherwise, it returns 0. try_get() is consuming. When it succeeds, t is
no longer in the FIFO.
function bit try_peek(output T t)
Will peek a transaction from the FIFO. If the FIFO contains a transaction,
then it publishes the transaction across get_ap and returns 1. Otherwise,
it returns 0. try_peek() is not consuming. When it succeeds, t is still in
the FIFO.
function bit try_put(input T t)
Will put t into the FIFO if there is room, then publish the transaction
across put_ap and return 1. Otherwise, it returns 0.
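A common pattern is to put an analysis_fifo between a monitor and a scoreboard so the scoreboard can consume observed transactions at its own pace. The sb_env, my_monitor, my_transaction, and check() names below are assumptions, and the blocking get() is inherited from the underlying FIFO:

```systemverilog
class sb_env extends avm_env;
  analysis_fifo #(my_transaction) af;
  my_monitor monitor;  // hypothetical monitor with an analysis port 'ap'

  function new;
    af      = new("af", this);
    monitor = new("monitor", this);
  endfunction

  function void connect();
    // The monitor's analysis port writes into the FIFO's analysis_export.
    monitor.ap.connect(af.analysis_export);
  endfunction
endclass

// Scoreboard side: drain the FIFO in order with the blocking get().
//   forever begin
//     af.get(t);  // blocks until a transaction is available
//     check(t);   // user-defined checking routine
//   end
```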
tlm_req_rsp_channel
members:
typedef tlm_req_rsp_channel #(REQ, RSP) this_type
protected tlm_fifo #(REQ) m_request_fifo
The internal FIFO that stores the REQs.
protected tlm_fifo #(RSP) m_response_fifo
The internal FIFO that stores the RSPs.
avm_blocking_put_export #(REQ) blocking_put_request_export
avm_nonblocking_put_export #(REQ)
nonblocking_put_request_export
avm_put_export #(REQ) put_request_export
The exports make the put, blocking put, and nonblocking put interfaces
of the request FIFO externally visible. Through these interfaces, a master
can put requests into the request FIFO.
avm_blocking_get_peek_export #(REQ)
blocking_get_peek_request_export
avm_blocking_get_export #(REQ) blocking_get_request_export
avm_blocking_peek_export #(REQ) blocking_peek_request_export
avm_get_peek_export #(REQ) get_peek_request_export
avm_get_export #(REQ) get_request_export
avm_nonblocking_get_peek_export #(REQ)
nonblocking_get_peek_request_export
avm_nonblocking_get_export #(REQ)
nonblocking_get_request_export
avm_nonblocking_peek_export #(REQ)
nonblocking_peek_request_export
avm_peek_export #(REQ) peek_request_export
These nine request get exports export the blocking, nonblocking, and
combined get, peek, and get_peek interfaces of the request FIFO. These
allow slaves to get or peek requests from the request FIFO.
avm_blocking_put_export #(RSP) blocking_put_response_export
avm_nonblocking_put_export #(RSP)
nonblocking_put_response_export
avm_put_export #(RSP) put_response_export
These three response put exports export the put, blocking put, and
nonblocking put interfaces of the response FIFO. These allow a slave to
put responses into the response FIFO.
avm_blocking_get_peek_export #(RSP)
blocking_get_peek_response_export
avm_blocking_get_export #(RSP) blocking_get_response_export
avm_blocking_peek_export #(RSP) blocking_peek_response_export
avm_get_peek_export #(RSP) get_peek_response_export
avm_get_export #(RSP) get_response_export
avm_nonblocking_get_peek_export #(RSP)
nonblocking_get_peek_response_export
avm_nonblocking_get_export #(RSP)
nonblocking_get_response_export
268 tlm_req_rsp_channel
avm_nonblocking_peek_export #(RSP)
nonblocking_peek_response_export
avm_peek_export #(RSP) peek_response_export
These nine response get exports export the blocking, nonblocking, and
combined get, peek, and get_peek interfaces of the response FIFO.
These allow masters to get or peek responses from the response FIFO.
avm_analysis_port #(RSP) response_ap
response_ap publishes an RSP whenever a put() or try_put() to
the response FIFO succeeds.
avm_analysis_port #(REQ) request_ap
Publishes a REQ whenever a put() or try_put() to the request FIFO
succeeds.
avm_master_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) master_export
Exports a single interface that allows a master to put requests and get or
peek responses.
avm_slave_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) slave_export
Exports a single interface that allows a slave to get or peek requests and
put responses.
avm_blocking_master_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) blocking_master_export
Exports a single blocking interface that allows a master to put requests
and get or peek responses.
avm_blocking_slave_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) blocking_slave_export
Exports a single blocking interface that allows a slave to get or peek
requests and put responses.
avm_nonblocking_master_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) nonblocking_master_export
Exports a single nonblocking interface that allows a master to put
requests and get or peek responses.
avm_nonblocking_slave_imp #(REQ, RSP, this_type,
tlm_fifo #(REQ), tlm_fifo #(RSP)) nonblocking_slave_export
Exports a single nonblocking interface that allows a slave to get or peek
requests and put responses.
methods:
function new(string name, avm_named_component parent=null,
int request_fifo_size=1,
int response_fifo_size = 1)
name and parent are the standard AVM 3.0 constructor arguments.
parent must be null if this component is defined within a statically
elaborated construct such as a module, program block, or interface, and
it must take a non‐null value if it is defined inside an avm_env. The last
two arguments specify the request and response FIFO sizes, which have
default values of one.
internal methods:
function void create_master_slave_exports()
Creates the bidirectional exports for both master and slave.
function void create_response_exports()
Creates the unidirectional response exports for both master and slave.
function void export_response_connections()
Connects the response FIFO to the appropriate exports.
function void export_request_connections()
Connects the request FIFO to the appropriate exports.
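The exports above are typically wired up in an env's connect() method. The my_env, my_req, my_rsp, master, and slave names and port members below are assumptions about the user's testbench, not AVM library names:

```systemverilog
class my_env extends avm_env;
  tlm_req_rsp_channel #(my_req, my_rsp) chan;
  my_master master;  // hypothetical user components
  my_slave  slave;

  function new;
    chan   = new("chan", this);
    master = new("master", this);
    slave  = new("slave", this);
  endfunction

  function void connect();
    // Master pushes requests and pulls responses.
    master.put_request_port.connect(chan.blocking_put_request_export);
    master.get_response_port.connect(chan.blocking_get_response_export);
    // Slave pulls (or peeks) requests and pushes responses.
    slave.get_request_port.connect(chan.blocking_get_peek_request_export);
    slave.put_response_port.connect(chan.blocking_put_response_export);
  endfunction
endclass
```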
tlm_transport_channel
A tlm_transport_channel is a tlm_req_rsp_channel that implements
the transport interface. It is useful when modeling a nonpipelined bus
at the transaction level. Because the requests and responses have a tightly
coupled one‐to‐one relationship, the request and response FIFO sizes
must be one.
file: tlm/tlm_req_rsp.svh
virtual: no
parameters:
type REQ = int
Type of transactions to be passed to/from the request FIFO.
type RSP = int
Type of transactions to be passed to/from the response FIFO.
members:
typedef tlm_transport_channel #(REQ, RSP) this_type
avm_transport_imp #(REQ, RSP, this_type) transport_export
The mechanism by which external components gain access to the
transport() task.
methods:
function new(string name, avm_named_component parent=null)
name and parent are the standard AVM 3.0 constructor arguments.
parent must be null if this component is defined within a statically
elaborated construct such as a module, program block, or interface, and
it must take a non‐null value if it is defined inside an avm_env.
task transport(input REQ request, output RSP response)
Calls put(request) followed by get(response).
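A master bound to transport_export can then perform a paired request/response in a single call. Here transport_port, my_req, and my_rsp are assumed names in the user's master component:

```systemverilog
// Master side, with an avm_transport_port #(my_req, my_rsp) transport_port
// connected to channel.transport_export in the env's connect() method.
task do_one_transfer(my_req req);
  my_rsp rsp;
  // Equivalent to put(req) followed by get(rsp) on the channel:
  // the request and response are tightly paired, one response per request.
  transport_port.transport(req, rsp);
endtask
```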
TLM Interfaces
The TLM interfaces are a collection of pure virtual classes that define the way
transaction objects move between components. Each interface class supplies a
set of one or more tasks and function prototypes. Interface implementations
(imps), ports, exports, and channels use the TLM interfaces to define the set of
functions and tasks that each needs to implement.
pure virtual task get( output RSP rsp )
Blocks until the callee is able to supply a response transaction. This is a
consuming method, so subsequent calls to get() return a different
transaction (or a new copy of the same transaction).
pure virtual task peek( output RSP rsp )
Blocks until the callee is able to supply a response transaction. This is a
nonconsuming method, so subsequent calls to peek() or the next call to
get() return the same transaction.
type T = int
Type of transactions to be handled by this interface.
members: <none>
methods:
Pure virtual methods must have an implementation specified in a
subclass.
Returns immediately and supplies a transaction t, if one is available. If
successful, it returns 1 (and t will still be available). Otherwise, it returns
0 (and t will be undefined).
methods:
Pure virtual methods must have an implementation specified in a
subclass.
methods:
Pure virtual methods must have an implementation specified in a
subclass.
Returns immediately. If the callee can accept a transaction, it returns 1,
otherwise, it returns 0.
pure virtual task put(T t)
Blocks until the callee is able to accept a transaction.
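A component that wants to provide put() to the outside world typically pairs the interface with an avm_blocking_put_imp (described earlier in this appendix). The my_consumer and my_transaction names below are illustrative:

```systemverilog
class my_consumer extends avm_named_component;
  // The imp forwards put() calls from the outside to this component.
  avm_blocking_put_imp #(my_transaction, my_consumer) put_export;

  function new(string name, avm_named_component parent);
    super.new(name, parent);
    put_export = new("put_export", this);
  endfunction

  // Blocks the caller only for as long as this body takes to execute.
  task put(my_transaction t);
    $display("consumed: %s", t.convert2string());
  endtask
endclass
```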
Transactions
avm_built_in_converter
This policy class is used to convert built‐in types to strings. It is used to
build generic components that will work with either classes or built‐in
types.
file: vbase/avm_policies.svh
virtual: no
parameters:
type T = int
The type of the item to be converted.
members: <none>
methods:
static function string convert2string(input T t);
Returns the value of t as a string.
avm_built_in_pair
This class represents a pair of built‐in types.
file: utils/avm_pair.svh
virtual: no
parameters:
type T1 = int
The type of the first element of the pair.
type T2 = T1
The type of the second element of the pair. By default, the two types are
the same.
members:
typedef avm_built_in_pair #(T1, T2) this_type
T1 first
The first element of the pair.
T2 second
The second element of the pair.
methods:
virtual function string convert2string()
function bit comp(this_type t)
function void copy(input this_type t)
function avm_transaction clone()
Since avm_built_in_pair is a transaction class, it provides the four
compulsory methods as defined by AVM 3.0.
virtual: no
members: <none>
methods:
static function avm_transaction clone(input T from)
This method returns from.clone().
avm_class_pair
This class represents a pair of classes.
file: utils/avm_pairs.svh
virtual: no
members:
typedef avm_class_pair #(T1, T2) this_type
T1 first
This is the first element in the pair.
avm_transaction 289
T2 second
This is the second element in the pair.
methods:
function new(input T1 f=null, input T2 s=null)
A constructor, with optional arguments for first and second. No cloning
is performed for nondefault values.
function string convert2string
function bit comp(this_type t)
function void copy(input this_type t)
function avm_transaction clone
Since avm_class_pair is a transaction class, it provides the four
compulsory methods as defined by AVM 3.0.
avm_transaction
This is the base class for all AVM transactions.
file: vbase/avm_transaction.svh
virtual: yes
members: <none>
methods:
pure virtual function avm_transaction clone
This virtual method returns a handle to a clone of this transaction. Since
it is virtual, the clone is deep in relation to the inheritance hierarchy,
although it may be shallow or deep in relation to members of subclasses
that are themselves handles.
pure virtual function string convert2string
This method converts the transaction into a string. Since it is virtual, it is
also deep, in relation to the inheritance hierarchy.
In addition to the two methods described above, any transaction T that is
a subtype of avm_transaction must also define the following two
methods.
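A minimal subclass sketch providing the four compulsory methods (convert2string, clone, comp, and copy, matching the method set shown for the pair classes above); the my_transaction name and addr field are illustrative:

```systemverilog
class my_transaction extends avm_transaction;
  bit [31:0] addr;

  virtual function string convert2string();
    return $sformatf("addr=%0h", addr);
  endfunction

  // Deep with respect to the inheritance hierarchy; shallow or deep for
  // handle-valued members (there are none here).
  virtual function avm_transaction clone();
    my_transaction t = new;
    t.copy(this);
    return t;
  endfunction

  function bit comp(my_transaction t);
    return (t != null) && (t.addr == addr);
  endfunction

  function void copy(my_transaction t);
    addr = t.addr;
  endfunction
endclass
```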
Reporting
The reporting classes provide a facility for issuing reports with different
severities and ids, and to different files. The primary interface to the reporting
facility is avm_report_client.
Figure C‐5 UML Diagram for Reporting Classes (avm_reporter and
avm_named_component are avm_report_clients; each avm_report_client
delegates to an avm_report_handler, and the handlers share the central
avm_report_server obtained through the singleton
avm_report_global_server.)
avm_report_client
avm_report_client is a base class from which all components that want
to use the AVM reporting facility must inherit. It provides methods to
issue messages, change the action associated with these messages,
associate files with messages, and execute hook methods as a result of
these messages.
All of the state information relating to actions and files associated with
different types of messages is held in an avm_report_handler. Most of
the methods in this class are delegated to a report handler, which in turn
delegates the actual formatting and production of messages to a central
avm_report_server.
file: reporting/avm_report_client.svh
virtual: yes
members:
protected avm_report_handler m_rh
Handle to a report handler, which stores all the state information about
actions and files. It may be unique to this report client or shared with
other clients.
local string m_report_name
The name of the report handler. This name is printed out at the
beginning of each message.
methods:
function new(string name="")
The constructor requires a name, and creates a new report handler that is
unique to this client.
function void avm_report_error(string id, string message,
int verbosity_level=100,
string filename="", int line=0)
One of the four core reporting methods, it issues a report of severity
ERROR. If the verbosity level of this report is higher than the maximum
verbosity level of the report handler, this report is simply ignored. By
default, an error is displayed on the command line, logged in a file if
one has been set, and counted. If the error count in any report handler
exceeds its maximum quit count, then the die() method is called. The
default verbosity level for an error is 100.
function void avm_report_fatal(string id, string message,
                               int verbosity_level=0,
                               string filename="", int line=0)
One of the four core reporting methods, it issues a report of severity
FATAL. If the verbosity level of this report is higher than the maximum
verbosity level of the report handler, the report is simply ignored. By
default, a fatal error is displayed on the command line, and then the
die() method is called. The default verbosity level for a fatal report is 0.
function void avm_report_message(string id, string message,
                                 int verbosity_level=300,
                                 string filename="", int line=0)
One of the four core reporting methods, it issues a report of severity
MESSAGE. If the verbosity level of this report is higher than the
maximum verbosity level of the report handler, the report is simply
ignored. By default, a message is displayed on the command line and
logged in a file, if one has been set. The default verbosity level for a
message is 300.
function void avm_report_warning(string id, string message,
                                 int verbosity_level=200,
                                 string filename="", int line=0)
One of the four core reporting methods, it issues a report of severity
WARNING. If the verbosity level of this report is higher than the
maximum verbosity level of the report handler, the report is simply
ignored. By default, a warning is displayed on the command line and
logged in a file, if one has been set. The default verbosity level for a
warning is 200.
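The verbosity filtering described for the four core methods can be sketched as follows, using the documented avm_reporter class (the instance name and message ids are arbitrary):

```systemverilog
// Sketch: only reports at or below the handler's maximum verbosity
// are processed.
initial begin
  avm_reporter c = new("c0");
  c.set_report_verbosity_level(150);
  c.avm_report_error  ("E1", "processed: default verbosity 100 <= 150");
  c.avm_report_warning("W1", "ignored: default verbosity 200 > 150");
  c.avm_report_message("M1", "processed: explicit verbosity 120", 120);
end
```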
function avm_report_handler get_report_handler()
Provides public access to the report handler, which stores all the state
information.
function string get_report_name()
Provides public access to the report name.
virtual function void report_header(FILE f=0)
Prints version and copyright information. This information will be sent
to the command line if f is 0, or to the file descriptor f if it is not 0. This
method is called by avm_env immediately after the construction phase
and before the connect phase.
virtual function bit report_hook(string id, string message,
int verbosity,
string filename, int line)
Called only if the CALL_HOOK bit is specified in the action associated
with the report. By default, it does nothing other than return 1, but it can
be overloaded in a subclass. If this method returns 0, the report will not
be processed by the report server.
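A minimal sketch of overriding report_hook follows; the class name and the "NOISY" id are hypothetical, and note that CALL_HOOK must be part of the report's action for the hook to run:

```systemverilog
// Sketch: suppressing reports with a particular id via report_hook.
class filtered_reporter extends avm_reporter;
  function new(string name = "filtered");
    super.new(name);
  endfunction

  virtual function bit report_hook(string id, string message,
                                   int verbosity, string filename, int line);
    return (id != "NOISY");  // returning 0 stops processing of the report
  endfunction
endclass

// usage:
//   filtered_reporter fr = new;
//   fr.set_report_id_action("NOISY", DISPLAY | CALL_HOOK);
//   fr.avm_report_message("NOISY", "suppressed by the hook");
```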
virtual function bit report_error_hook(string id,
string message,
int verbosity,
string filename, int line)
Called only if the CALL_HOOK bit is specified in the action associated
with an error report. By default, it does nothing other than return 1, but
it can be overloaded in a subclass. If this method returns 0, the error will
not be processed by the report server.
virtual function bit report_fatal_hook(string id,
string message,
int verbosity,
string filename, int line)
Called only if the CALL_HOOK bit is specified in the action associated
with a fatal report. By default, it does nothing other than return 1, but it
can be overloaded in a subclass. If this method returns 0, the fatal will
not be processed by the report server.
virtual function bit report_message_hook(string id,
string message,
int verbosity,
string filename, int line)
Called only if the CALL_HOOK bit is specified in the action associated
with a message. By default, it does nothing other than return 1, but can
be overloaded in a subclass. If this method returns 0, the message will
not be processed by the report server.
function void report_summarize(FILE f=0)
Produces statistical information on the reports issued by the central
report server. This information will be sent to the command line if f is 0,
or to the file descriptor f if it is not 0.
virtual function bit report_warning_hook(string id,
string message,
int verbosity,
string filename, int line)
Called only if the CALL_HOOK bit is specified in the action associated
with a warning. By default, it does nothing other than return 1, but can
be overloaded in a subclass. If this method returns 0, the warning will
not be processed by the report server.
function void reset_report_handler()
Reinitializes the client’s report handler to the default settings.
function void set_report_handler(avm_report_handler hndlr)
Sets the report handler, thus allowing more than one client to share the
same report handler.
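The sharing arrangement can be sketched as follows (instance names are arbitrary):

```systemverilog
// Sketch: two clients sharing one report handler, so configuration set
// through either client applies to reports issued by both.
initial begin
  avm_reporter a = new("a0");
  avm_reporter b = new("b0");
  b.set_report_handler(a.get_report_handler());
  a.set_report_verbosity_level(50);  // now also filters b's reports
end
```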
function void set_report_max_quit_count(int m)
Sets the value of the max_quit_count in the report handler to m. When
the number of COUNT actions reaches m, the die() method is called.
The default value of 0 indicates that there is no upper limit to the
number of COUNTed reports.
function void set_report_name(string s)
Sets the report name.
function void set_report_severity_action (severity s,
                                          action a)
Sets the action associated with a severity. An action can take the value
NO_ACTION (5'b00000) or can be composed of the bitwise OR of any
combination of DISPLAY, LOG, COUNT, EXIT, or CALL_HOOK.
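Composing actions with bitwise OR, as described above, can be sketched like this (instance name arbitrary):

```systemverilog
// Sketch: configuring per-severity actions.
initial begin
  avm_reporter c = new("c0");
  // Display, log, count, and run the hook for every warning.
  c.set_report_severity_action(WARNING, DISPLAY | LOG | COUNT | CALL_HOOK);
  // Ignore messages entirely.
  c.set_report_severity_action(MESSAGE, NO_ACTION);
end
```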
function void set_report_verbosity_level(int verbosity_level)
Sets the maximum verbosity level for the client’s report handler. If the
verbosity of any report exceeds this maximum value, then the report is
ignored.
function void set_report_id_action (string id, action a)
This method sets the action associated with an id. An action associated
with an id takes priority over an action associated with a severity. An
action can take the value NO_ACTION (5'b00000) or can be composed
of the bitwise OR of any combination of DISPLAY, LOG, COUNT, EXIT, or
CALL_HOOK.
function void set_report_severity_id_action (severity s,
                                             string id,
                                             action a)
This method sets the action associated with a (severity,id) pair. An action
associated with a (severity,id) pair takes priority over an action
associated with either the severity or the id alone. An action can take the
value NO_ACTION (5'b00000) or can be composed of the bitwise OR of
any combination of DISPLAY, LOG, COUNT, EXIT, or CALL_HOOK.
function void set_report_default_file (input FILE f)
This method sets the file descriptor associated by default with any report
issued by this client’s report handler. The default value is 0, which means
that even if the action includes a LOG attribute, the report is not sent to a
file.
function void set_report_severity_file (severity s,
FILE f)
This method sets the file descriptor associated with a severity. A file
descriptor associated with a severity takes priority over the default file
descriptor.
function void set_report_id_file (input string id, input FILE f)
This method sets the file descriptor associated with an id. A file
descriptor associated with an id takes priority over the default file
descriptor and a file descriptor associated with a severity.
function void set_report_severity_id_file (severity s,
string id,
FILE f)
This method sets the file descriptor associated with a (severity,id) pair. A
file descriptor associated with a (severity,id) pair takes priority over the
default file descriptor, a file descriptor associated with a severity, or a file
descriptor associated with an id.
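The file-descriptor priority rules above can be sketched as follows. FILE is the file-descriptor type used throughout the reporting classes; the log-file names and message id are arbitrary:

```systemverilog
// Sketch: file-descriptor priority, from lowest to highest.
initial begin
  avm_reporter c = new("c0");
  FILE all_f = $fopen("all.log");
  FILE err_f = $fopen("errors.log");
  c.set_report_default_file(all_f);                    // lowest priority
  c.set_report_severity_file(ERROR, err_f);            // beats the default
  c.set_report_id_file("BUS", err_f);                  // beats severity
  c.set_report_severity_id_file(ERROR, "BUS", err_f);  // highest priority
  // A report only reaches a file if its action includes LOG:
  c.set_report_severity_action(ERROR, DISPLAY | LOG | COUNT);
end
```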
function void dump_report_state()
This method dumps the internal state of the report handler. This
includes information about the maximum quit count, the maximum
verbosity, and the action and files associated with severities, ids, and
(severity,id) pairs.
virtual function void die()
This method is called by the report server if the quit count reaches the
maximum quit count or if a report has an EXIT action associated with it
(EXIT is part of the default action for a fatal error).
If this method is called in a client that is actually a named component
defined in an avm_env, then all the avm_env’s run() tasks are killed
and the avm_env goes through the report phase, which by default, calls
report_summarize(). In this case, any other avm_envs in the
simulation are not affected.
If die() is called in a report client that is not an
avm_named_component, or in an avm_named_component defined
outside of an avm_env, then report_summarize() is called and the
simulation terminates with $finish.
avm_report_handler
avm_report_handler is the class to which many of the methods in
avm_report_client are delegated. None of its methods are intended to
be called directly from normal testbench code.
It stores the maximum verbosity, actions, and files that affect the way
reports are handled. The relationship between report clients and report
handlers is usually one to one, but it can, in theory, be many to one. If a
report needs processing, it passes it on to the central report server. The
relationship between report handlers and report servers is many to one.
file: reporting/avm_report_handler.svh
virtual: no
members:
avm_report_server m_srvr
This is the central report server that actually processes the reports.
int m_max_verbosity_level
This is the maximum verbosity of reports that this report handler
forwards to the report server. The default value is 10000.
action severity_actions[severity]
This is the array that contains the actions associated with each severity.
The default values are given by the table below (ERROR reports are
counted, and FATAL reports exit, as described under the core reporting
methods and die()).

Severity   Actions
MESSAGE    DISPLAY
WARNING    DISPLAY
ERROR      DISPLAY | COUNT
FATAL      DISPLAY | EXIT
id_actions_array id_actions
This is the array of actions associated with each string id. By default,
there are no entries in this array.
id_actions_array severity_id_actions[severity]
This is an associative array of associative arrays. If it exists, then
severity_id_actions[s][i] contains the actions associated with
the (severity,id) pair (s,i). By default, there are no entries in this array.
FILE default_file_handle
This is the default file handle for this report handler. By default, it is set
to 0, which means that reports are not sent to a file even if a LOG attribute
is set in the action associated with the report.
FILE severity_file_handles[severity]
This array contains the file handle associated with each severity.
id_file_array id_file_handles
This array contains the file handle associated with each string id.
id_file_array severity_id_file_handles[severity]
This associative array of associative arrays contains the file descriptor
associated with each (severity,id) pair, if there are any.
methods:
function new()
The constructor.
function void set_max_quit_count(int m)
See avm_report_client::set_report_max_quit_count (see
page 293).
function void summarize(FILE f=0)
See avm_report_client::report_summarize (see page 293).
function void report_header(FILE f=0)
See avm_report_client::report_header (see page 292).
function void initialize()
This method is called by the constructor to initialize the arrays and other
variables described above to their default values.
virtual function bit run_hooks(avm_report_client client,
severity s, string id,
string message, int verbosity,
string filename, int line)
run_hooks is called if the CALL_HOOK attribute is set for this report. It
calls the client's report_hook and the severity-specific hook method. If
either returns 0, the report is not processed.
local function FILE get_severity_id_file(severity s, string id)
This method looks up the file descriptor associated with reports with
this severity and id.
function void set_verbosity_level(int verbosity_level)
See avm_report_client::set_report_verbosity_level (see
page 293).
function action get_action(severity s, string id)
This method looks up the action associated with this severity and id.
function FILE get_file_handle(severity s, string id)
This method looks up the file descriptor associated with reports with
this severity and id.
function void report(severity s, string name, string id,
string mess,
int verbosity_level=0,
avm_report_client client=null)
This is the basic reporting method, which is called by the four core
reporting methods avm_report_client::avm_report_message
(see page 291), avm_report_client::avm_report_warning (see
page 292), avm_report_client::avm_report_error (see page
291), and avm_report_client::avm_report_fatal (see page 291).
See the descriptions of these methods for their detailed behavior.
function string format_action(action a)
This method returns a string that describes the action.
function void set_severity_action(severity s, action a)
See avm_report_client::set_report_severity_action (see
page 293).
function void set_id_action(string id, action a)
See avm_report_client::set_report_id_action (see page 294).
function void set_severity_id_action(severity s, string id,
                                     action a)
See avm_report_client::set_report_severity_id_action
(see page 294).
function void set_default_file(FILE f)
See avm_report_client::set_report_default_file (see page
294).
function void set_severity_file(severity s, FILE f)
See avm_report_client::set_report_severity_file (see page 294).
function void set_id_file(string id, FILE f)
See avm_report_client::set_report_id_file (see page 294).
function void set_severity_id_file(severity s, string id,
FILE f)
See avm_report_client::set_report_severity_id_file (see
page 294).
function void dump_state()
See avm_report_client::dump_report_state (see page 294).
avm_report_server
avm_report_server is a global server that processes all the reports
generated by an avm_report_handler. None of its methods are intended
to be called by normal testbench code, although in some circumstances
the virtual methods process_report and/or compose_message may be
overloaded in a subclass.
file: reporting/avm_report_server.svh
virtual: no
members:
static avm_report_server global_report_server=null
This is an internal avm_report_server singleton.
local int max_quit_count
This specifies the maximum number of COUNT actions that can be
tolerated before a COUNT action is treated as an EXIT action. The
default value is 0, which is treated as specifying no upper bound.
local int quit_count
This is the actual number of COUNT actions sent to the server.
local int severity_count[severity]
This counts the number of messages for each severity.
local int id_count[string]
This counts the number of messages for each string id.
methods:
function new()
The constructor is protected to enforce a singleton.
static function avm_report_server get_server()
This method returns a handle to the singleton.
function int get_max_quit_count()
This method gets the value of max_quit_count.
function void set_max_quit_count(int m)
This method sets the value of max_quit_count.
function void reset_quit_count()
This method resets the value of quit_count to 0.
function void incr_quit_count()
This method increments the value of quit_count.
function int get_quit_count()
This method gets the value of quit_count.
function bit is_quit_count_reached()
This method returns 1 if the value of quit_count has reached its upper
bound, if there is one, and returns 0 otherwise.
function void reset_severity_counts()
This method resets the values in the severity_count array.
function int get_severity_count(severity s)
This method gets the number of reports with severity s since the last
reset.
function void incr_severity_count(severity s)
This method increments the severity count for this severity.
function void set_id_count(string id, int n)
This method resets the value in the id_count array for an id to n.
function int get_id_count(string id)
This method gets the number of reports with this id.
function void incr_id_count(string id)
This method increments the number of reports with this id.
function void summarize(FILE f=0)
See avm_report_client::report_summarize (see page 293).
function void f_display(FILE f, string s)
This method sends string s to the command line if f is 0 and to the file(s)
specified by f if it is not 0.
function void dump_server_state()
See avm_report_client::dump_report_state() (see page 294).
virtual function void process_report( severity s, string name ,
string id, string message,
action a,
FILE f,
string filename , int line,
avm_report_client client )
This method calls compose_message to construct the actual message to
be output. It then takes the appropriate action according to the values of
action a and file f. This method can be overloaded by expert users so that
the report system processes actions differently from the way described
in avm_report_client and avm_report_handler.
virtual function string compose_message(severity s, string name,
string id, string message)
This method constructs the actual string sent to the file or command line
from the severity, component name, report id, and the message itself.
Expert users can overload this method to change the formatting of the
reports generated by avm_report_client.
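A minimal sketch of overloading compose_message follows; the class name is hypothetical, and how the custom server is installed in place of the singleton is not shown here:

```systemverilog
// Sketch: a custom server that changes the report format.
class my_report_server extends avm_report_server;
  virtual function string compose_message(severity s, string name,
                                          string id, string message);
    return $sformatf("[%0t] %s(%s): %s", $time, name, id, message);
  endfunction
endclass
```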
avm_reporter extends avm_report_client
avm_reporter is a reporter that can be used by objects that are not
avm_named_components to issue reports.
file: reporting/avm_report_client.svh
virtual: no
members: <none>
methods:
function new(string name="reporter")
The constructor has a default name of "reporter".
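Typical use is to embed an avm_reporter in an ordinary object; the class and id names below are hypothetical:

```systemverilog
// Sketch: a plain object (not an avm_named_component) issuing reports
// through an embedded avm_reporter.
class packet;
  avm_reporter reporter = new("packet_reporter");

  function void validate(int len);
    if (len <= 0)
      reporter.avm_report_warning("PKT_LEN", "non-positive packet length");
  endfunction
endclass
```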
D
Apache License
Apache License, Version 2.0.
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by the
copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation source, and
configuration files.
"Object" form shall mean any form resulting from mechanical transformation
or translation of a Source form, including but not limited to compiled object
code, generated documentation, and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or Object form,
made available under the License, as indicated by a copyright notice that is
included in or attached to the work (an example is provided in the Appendix
below).
"Derivative Works" shall mean any work, whether in Source or Object form,
that is based on (or derived from) the Work and for which the editorial
revisions, annotations, elaborations, or other modifications represent, as a
whole, an original work of authorship. For the purposes of this License,
Derivative Works shall not include works that remain separable from, or
merely link (or bind by name) to the interfaces of, the Work and Derivative
Works thereof.
"Contribution" shall mean any work of authorship, including the original
version of the Work and any modifications or additions to that Work or
Derivative Works thereof, that is intentionally submitted to Licensor for
inclusion in the Work by the copyright owner or by an individual or Legal
Entity authorized to submit on behalf of the copyright owner. For the
purposes of this definition, "submitted" means any form of electronic, verbal,
or written communication sent to the Licensor or its representatives,
including but not limited to communication on electronic mailing lists, source
code control systems, and issue tracking systems that are managed by, or on
behalf of, the Licensor for the purpose of discussing and improving the Work,
but excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity on
behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non‐exclusive, no‐charge, royalty‐free,
irrevocable copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the Work and
such Derivative Works in Source or Object form.
3. Grant of Patent License.
Subject to the terms and conditions of this License, each Contributor hereby
grants to You a perpetual, worldwide, non‐exclusive, no‐charge, royalty‐free,
irrevocable (except as stated in this section) patent license to make, have
made, use, offer to sell, sell, import, and otherwise transfer the Work, where
such license applies only to those patent claims licensable by such
Contributor that are necessarily infringed by their Contribution(s) alone or by
combination of their Contribution(s) with the Work to which such
Contribution(s) was submitted. If You institute patent litigation against any
entity (including a cross‐claim or counterclaim in a lawsuit) alleging that the
Work or a Contribution incorporated within the Work constitutes direct or
contributory patent infringement, then any patent licenses granted to You
under this License for that Work shall terminate as of the date such litigation
is filed.
4. Redistribution.
You may reproduce and distribute copies of the Work or Derivative Works
thereof in any medium, with or without modifications, and in Source or
Object form, provided that You meet the following conditions:
You must give any other recipients of the Work or Derivative Works a
copy of this License; and
You must cause any modified files to carry prominent notices stating
that You changed the files; and
You must retain, in the Source form of any Derivative Works that You
distribute, all copyright, patent, trademark, and attribution notices
from the Source form of the Work, excluding those notices that do not
pertain to any part of the Derivative Works; and
If the Work includes a "NOTICE" text file as part of its distribution,
then any Derivative Works that You distribute must include a readable
copy of the attribution notices contained within such NOTICE file,
excluding those notices that do not pertain to any part of the Derivative
Works, in at least one of the following places: within a NOTICE text file
distributed as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or, within
a display generated by the Derivative Works, if and wherever such
third-party notices normally appear. The contents of the NOTICE file are
for informational purposes only and do not modify the License. You
may add Your own attribution notices within Derivative Works that You
distribute, alongside or as an addendum to the NOTICE text from the
Work, provided that such additional attribution notices cannot be
construed as modifying the License.
You may add Your own copyright statement to Your modifications and may
provide additional or different license terms and conditions for use,
reproduction, or distribution of Your modifications, or for any such
Derivative Works as a whole, provided Your use, reproduction, and
distribution of the Work otherwise complies with the conditions stated in this
License.
5. Submission of Contributions.
Unless You explicitly state otherwise, any Contribution intentionally
submitted for inclusion in the Work by You to the Licensor shall be under the
terms and conditions of this License, without any additional terms or
conditions. Notwithstanding the above, nothing herein shall supersede or
modify the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks.
This License does not grant permission to use the trade names, trademarks,
service marks, or product names of the Licensor, except as required for
reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.
7. Disclaimer of Warranty.
Unless required by applicable law or agreed to in writing, Licensor provides
the Work (and each Contributor provides its Contributions) on an ʺAS ISʺ
BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
express or implied, including, without limitation, any warranties or
conditions of TITLE, NON‐INFRINGEMENT, MERCHANTABILITY, or
FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for
determining the appropriateness of using or redistributing the Work and
assume any risks associated with Your exercise of permissions under this
License.
8. Limitation of Liability.
In no event and under no legal theory, whether in tort (including negligence),
contract, or otherwise, unless required by applicable law (such as deliberate
and grossly negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special, incidental, or
consequential damages of any character arising as a result of this License or
out of the use or inability to use the Work (including but not limited to
damages for loss of goodwill, work stoppage, computer failure or
malfunction, or any and all other commercial damages or losses), even if such
Contributor has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability.
While redistributing the Work or Derivative Works thereof, You may choose
to offer, and charge a fee for, acceptance of support, warranty, indemnity, or
other liability obligations and/or rights consistent with this License. However,
in accepting such obligations, You may act only on Your own behalf and on
Your sole responsibility, not on behalf of any other Contributor, and only if
You agree to indemnify, defend, and hold each Contributor harmless for any
liability incurred by, or claims asserted against, such Contributor by reason of
your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]" replaced with
your own identifying information. (Don't include the brackets!) The
text should be enclosed in the appropriate comment syntax for the
file format. We also recommend that a file or class name and description
of purpose be included on the same "printed page" as the copyright
notice for easier identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied. See the License for the specific language governing permissions
and limitations under the License.
Standards
1. IEEE Standard 1800-2005, "IEEE Standard for SystemVerilog Unified
Hardware Design, Specification, and Verification Language",
November 2005.
2. IEEE Standard 1666-2005, "IEEE Standard SystemC Language
Reference Manual", March 2006.
3. OSCI TLM-1.0 Transaction Level Modeling Standard. SystemC
kit with white paper available at http://www.systemc.org.
Functional Verification
4. Janick Bergeron, "Writing Testbenches: Functional Verification of
HDL Models", Second Edition, Kluwer Academic Publishers, 2003.
5. Andreas S. Meyer, "Principles of Functional Verification",
Elsevier Science, 2004.
6. Harry D. Foster, Adam C. Krolnik, David J. Lacey, "Assertion-Based
Design", Second Edition, Kluwer Academic Publishers, 2004.
7. Chris Spear, "SystemVerilog for Verification: A Guide to Learning
the Testbench Language Features", Springer, 2006.
SystemC
8. Thorsten Grotker, Stan Liao, Grant Martin, Stuart Swan, "System
Design with SystemC", Kluwer Academic Publishers, 2002.
9. David C. Black, Jack Donovan, "SystemC: From the Ground Up",
Kluwer Academic Publishers, 2004.
10. J. Bhasker, "A SystemC Primer", Star Galaxy Publishing, 2002.
11. Frank Ghenassia (ed.), "Transaction-Level Modeling with SystemC:
TLM Concepts and Applications for Embedded Systems",
Springer, 2005.
C++ and Object-Oriented Programming
12. Stanley B. Lippman, "Inside the C++ Object Model", Addison-Wesley,
1996.
13. Bjarne Stroustrup, "The C++ Programming Language", Third
Edition, Addison-Wesley, 1997.
14. Gregory Satir, Doug Brown, "C++: The Core Language", O'Reilly
& Associates, Inc., 1995.
15. Erich Gamma, Richard Helm, Ralph Johnson, John Vlissides,
"Design Patterns: Elements of Reusable Object-Oriented Software",
Addison-Wesley, 1995.
16. Andrei Alexandrescu, "Modern C++ Design: Generic Programming
and Design Patterns Applied", Addison-Wesley, 2001.
Programming Style
17. Steve McConnell, "Code Complete", Second Edition, Microsoft
Press, 2004.
18. Herb Sutter, Andrei Alexandrescu, "C++ Coding Standards: 101
Rules, Guidelines, and Best Practices", Addison-Wesley, 2005.
Miscellaneous
19. Niklaus Wirth, "Algorithms + Data Structures = Programs",
Prentice-Hall, Inc., Englewood Cliffs, New Jersey, 1976.
Index
A rationale for SystemC and
SystemVerilog 195
avm prefix, for classes 212
abstract definition
avm_*_export class
data 56
definition of 241
function 57
role of 30
time 56
avm_*_imp class 243
abstract factory 190
avm_*_port class
abstraction levels, definition of 57
definition of 244
abstraction, role in programming 34
role of 30
analysis component
avm_algorithmic_comparator class
communicating with 162
236
coverage collector 26
avm_analysis_port class 246
definition of 25, 121
avm_blocking_master_imp class 247
role of 122
avm_blocking_slave_imp class 248
scoreboard 26
avm_built_in_clone class 286
analysis domain
avm_built_in_comp class 286
definition of 121
avm_built_in_converter class 286
role of 122
avm_built_in_pair class 287
analysis FIFO
avm_class_clone class 287
definition of 27
avm_class_comp class 288
role of 122
avm_class_converter class 288
analysis ports
avm_class_pair class 288
definition of 27
avm_connector_base class 249
description of 201
avm_env
notation 209
definition of 224
role of 122
example of 88
analysis_fifo class 263
role of 30
analysis_if class 271
avm_in_order_built_in_ compar-
analysis_imp class 260
ator class 238
analysis_port class 261
avm_in_order_class_comparator
are‐we‐done questions
class 238
and coverage collector 131
avm_in_order_comparator class 239
automating answers 11
avm_master_imp class 253
collecting coverage data 130
avm_named_component class
developing 7
definition of 226
driving verification flow with 4
role of 28
assertions
avm_nonblocking_master_imp class
integrating with AVM 165
254
separating detection from action 166
avm_nonblocking_slave_imp class
AVM
255
description of xii, 23
avm_port_base class 257
including modules 153
avm_random_stimulus class 232
310
avm_report_client class C
definition of 290
functions of 93
channel class 263
avm_report_handler class
definitions of 263
definition of 96, 295
tlm_fifo 264
description of 96
tlm_req_rsp_channel 266
example of 96
tlm_transport_channel 269
avm_report_server class 298
channels
avm_reporter class 300
communication 73
avm_slave_imp class 258
in SystemC 196
avm_stimulus class 233
notation 209
avm_subscriber class 233
choosing a programming language 203
avm_threaded_component class
class extension, effect of constraints on
definition of 234
181
example of 29
class factories
avm_threaded_component classes
copying 188
role of 29
definition of 189
avm_transaction class 289
using factory pattern with 190
avm_transport_imp class 260
with inheritance, for reuse 189
avm_verification_component class
class members, naming convention for
235
212
class methods, naming convention for
213
B class parameter
introduction to 48
specialization of 50
BFM
classes
example 169
See also specific class names
reusing 153
cloning 287
wrapper example 170
comparing 288
bidirectional data flow 76
comparison of features in Verilog,
bidirectional interface, master and slave
SystemVerilog, and C++ 51
66
converting to strings 288
blocking call
definition of 35
definition of 70
distinguishing from objects 35
put() 192
example of 36
transport() 117
in SystemC 196
blocking interfaces
in SystemVerilog 196
and random stimulus generator 191
index of names and definition files
definition of 70
218
purpose of 73
interface and implementation of 38
tlm_transport_if 285
naming convention for 212
building examples xvi
role in testbench 28
built‐in types
uses of 36
cloning 286
clock generators
comparing 286
connecting 109
converting to strings 286
example of 19
311
H

handles, naming convention for 213
HAS-A relationship 39

I

id, for messages 94
immediate assertion, using to check randomize() 180
implementation class, related interfaces 244
implication operator 183
import_connections(), role in elaboration 80
inheritance
    in SystemC 196
    in SystemVerilog 196
    rationale for 42
    using with class factory 189
    with IS-A relationship 40
initiator
    definition of 208
    example of 63
in-order comparator, description of 123
inside operators, description of 185
installing cookbook kit xvi
integer indexes, naming convention for 212
integrated testbench
    BFM example 170
    establishing monitor-driver connection 164
    establishing pin-level connections 163
    example of 161
    using modules with AVM 153
    using Verilog tasks in 170
intent, designer's 3
interconnect, notation 207
interfaces
    control and configuration, role of 27
    in SystemC 196
    in SystemVerilog 196
    meanings of 79
    naming convention for 214
    notation of 206
    related exports 242
    related implementation 244
    related ports 245
    virtual, definition of 51
introspection, in SCV 198
IS-A relationship
    definition of 40
    example of 41, 42

L

layered testbench architecture 23
layering constraints 181
legal input, using as basis for constraints 175
library, definition of 34
line number, for messages 94
Lippman, Stanley 46
lm 278
local variable
    example of 168
    naming convention for 212, 213

M

macros, naming convention for 214
master
    definition of 25
    example of 67, 69
    linkage to slave 68
    role of 66
messages
    base class for 290
    body 94
    functions for issuing 93
    generating and filtering 93
    id 94
    id and associated actions 95
    line number 94
    printing 94
    saving to file 96
    severity 94
N

named component, example of 28
naming conventions

O

object-oriented programming
    class, role of 35
    definition of 33
    HAS-A relationship, definition of 39
    importance of object relationships 39
    IS-A relationship, definition of 40
    Simula 67 53
objects
    cloning 128
    comparing 128
    copying 128
    distinguishing from classes 35
    printing 128
observe, definition of 103
observer pattern, definition of 122
OOP
    class, role of 35
    definition of 33
    HAS-A relationship, definition of 39
    importance of object relationships 39
    IS-A relationship, definition of 40
    Simula 67 53
operational components 25
operational domain 26
operators, inside 185
over constraining 182
overloading, operator and function 197

P

P2S converter, example of 124
packages
    constraints in 187
    naming convention for 215
parallel-to-serial converter, example of 124
parameterized function
    example of 47
    role in generic programming 47
parameters, naming convention for 214
parametrized characteristics, introduction to 48
pin interfaces, definition of 79
pin-level ports, purpose of 79
pointer, naming convention for 213
policy class, transactions 286
polymorphism, definition of 43
port/export notation 207
ports
    connection to export 81
    definition of 80
    naming convention for 215
    related interfaces 245
port-to-port connection
    example of 86
    need for 85
post_randomize() method
    constructing dynamic array with 187
    example of 183
pre_randomize() method
    and state variable 191
    calling 183
primary container, in SystemC and SystemVerilog 201
printing objects 128
procedural programming, limitations of 33
producer xiv
program blocks, in SystemVerilog 196
programming language, choosing 203
pruning, to achieve coverage 14
pseudo-random number, definition of 176
publisher, role of 27, 122
pure virtual interface class
    definition of 80
    example of 59
put
    definition of 58, 208
    example of 59
put_port, definition of 60

R

rand
    definition of 180
    in layering example 181
rand_avail, example of 73
random constraints, and test controllers 131
random number generator
    as key element in testbench 11
    definition of 26
    example of 124
    in first testbench 12
    in three-bit counter 18
    in-order comparator 123
    purpose of 12
    role of 123
    structure 12
SCV 198
seed
    derived, and root 177
    value 176
semantic uniformity, in SystemC 196
separation of concerns, and OOP 35, 46
sequential devices, verification of 14
set membership, application of 185
severity, for messages 94
Simula 67 53
single inheritance, definition of 196
slave
    definition of 25
    example of 67, 69
    linkage to master 68
    role of 66
solution space
    definition of 178
    in CRV 176
solving order, constraints 184
specialization, for class parameters 50
stack
    definition of 36
    example of 37
state-dependent constraints
    definition of 178
    example of 190
stepwise refinement, definition of 139
stimulus generator
    and error injection 133
    and test controllers 131
    as agent of control 103
    as key element in testbench 11
    definition of 25
    example code for 110, 116
    example of 12, 110, 124
    example of BFM testbench 169
    exploring through example 103
    relationship to DUT and monitor 114
    reuse via class factory 189
    role of 110, 121
stimulus, random 174
subroutine, importance of 33
subscriber
    See also analysis component
    in data communication 27
    role of 122
synchronous (sequential) designs, verification of 14
SystemC
    choosing a programming language 203
    comparing to SystemVerilog 195
SystemC Verification library 198
SystemVerilog
    and CRV 173
    choosing a programming language 203
    comparing to SystemC 195
    definition of available objects 196
SystemVerilog classes, role in testbench 28
SystemVerilog interfaces
    definition of 79
    description of 90
    example of 90
    using pin-level connections 162
SystemVerilog virtual interfaces, definition of 80

T

target
    definition of 208
    example of 64
tasks
    forked, definition of 197
    in SystemVerilog 197
temporal assertions, in modules 153
test controller
    as governor 131
    role in testbench 131
test plan
    developing are-we-done questions 7
    developing does-it-work questions 6
testbench
    basics of 10
    building 1, 121
    example 124, 135
    example code for 125
    example, with coverage collector and test controller 132
    layers of 23
    list of key constructs 203
    mixed-base example 160
    module-based example 154, 169
    primary role of 103
testbench architecture, building with AVM 21
testing, directed 173
threaded component, role of 29
threads 197
TLM adapter, constructing 149
TLM interface classes
    analysis_if 271
    general description of 271
    tlm_blocking_get_if 271
    tlm_blocking_get_peek_if 272
    tlm_blocking_master_if 273
    tlm_blocking_put_if 273
    tlm_blocking_slave_if 274
    tlm_get_if 274
    tlm_get_peek_if 275
    tlm_master_if 276
    tlm_nonblocking_get_if 277
    tlm_nonblocking_get_peek_if 278
    tlm_nonblocking_master_if 279
    tlm_nonblocking_peek_if 280
    tlm_nonblocking_put_if 281
    tlm_nonblocking_slave_if 281
    tlm_peek_if 282
    tlm_put_if 283
    tlm_slave_if 284
    tlm_transport_if 285
tlm prefix, rationale for 212
TLM transaction, definition of 57
TLM, introduction to 55
tlm_*_imp, deprecated implementations 261
tlm_blocking_get_if class 271
tlm_blocking_get_peek_if class 272
tlm_blocking_master_if class 273
tlm_blocking_put_if class 273
tlm_blocking_slave_if class 274
tlm_fifo class 264
tlm_get_if class 274, 275
tlm_master_if class 276
tlm_nonblocking_get_if class 277
tlm_nonblocking_get_peek_if class 278
tlm_nonblocking_master_if class 279
tlm_nonblocking_peek_if class 280
tlm_nonblocking_put_if class 281
tlm_nonblocking_slave_if class 281
tlm_peek_if class 282
tlm_put_if class 283
tlm_req_rsp_channel class 266
tlm_slave_if class 284
tlm_transport_channel class 269
tlm_transport_if class 285
top, example of class 13
top-down flow, definition of 139
transaction
    control, definition of 143
    converting pin-level activity into 160
    definition of 57
    operational, definition of 143
transaction communication
    forms of 58
    get 63
    put 58
    transport 66
transaction data type 76
transaction object, functions of 127
transaction stream, analyzing with reusable code 121
transaction-level components 25
transaction-level connections
    description of 76
    elements of 80
    SystemC 76
    SystemVerilog 77
    three elements of 81
    with AVM ports and exports 201
transaction-level modeling
    definition of 56
    introduction to 55
    transaction, definition of 57
transaction-level ports and exports, communicating through 30
transactions
    policy classes
        AVM transactions, base class for 289
        avm_built_in_clone 286
        avm_built_in_comp 286
        avm_built_in_converter 286
        avm_built_in_pair 287
        avm_class_clone 287
        avm_class_comp 288
        avm_class_converter 288
        avm_class_pair 288
        general description of 286
transactor
    See also transactor types (responder, driver, monitor) 121
    and the analysis domain 26
    definition of 24
    driver
        definition of 24
        example of 115
    monitor
        definition of 24
        example of 112
    responder, definition of 24
transports
    definition of 66
    example of 67, 68
two-loop verification flow, implementing 8
type names, naming convention for 213

V

valid constraints, definition of 188
verbosity, for messages 94
verification
    basic principles of 10
    module-based, rationale for 153
verification components
    analysis component 121
    assertion-based monitor 160
    communicating with 162
    levels of 23
verification flow
    two-loop flow 8
    using are-we-done questions 4
    using does-it-work questions 4
verification plan 15
Verilog task, in integrated testbench 170
virtual function
    and polymorphism 43
    example of 44
    example program with 45
    role of 42
virtual interfaces
    connecting testbench with 90
    connection types and functions 93
    definition of 51
    example of 60

W

WARNING message 94
Wirth, Niklaus 34
write(), in analysis domain 27