
Verification Compiler Platform

User Guide
L-2016.06-SP1, September 2016
Copyright Notice and Proprietary Information
© 2016 Synopsys, Inc. All rights reserved. This Synopsys software and all associated documentation are proprietary to Synopsys,
Inc. and may only be used pursuant to the terms and conditions of a written license agreement with Synopsys, Inc. All other use,
reproduction, modification, or distribution of the Synopsys software or the associated documentation is strictly prohibited.

Destination Control Statement


All technical data contained in this publication is subject to the export control laws of the United States of America.
Disclosure to nationals of other countries contrary to United States law is prohibited. It is the reader's responsibility to
determine the applicable regulations and to comply with them.
Disclaimer
SYNOPSYS, INC., AND ITS LICENSORS MAKE NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, WITH
REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
Trademarks
Synopsys and certain Synopsys product names are trademarks of Synopsys, as set forth at
http://www.synopsys.com/Company/Pages/Trademarks.aspx.
All other product or company names may be trademarks of their respective owners.
Third-Party Links
Any links to third-party websites included in this document are for your convenience only. Synopsys does not endorse
and is not responsible for such websites and their practices, including privacy practices, availability, and content.

Synopsys, Inc.
690 E. Middlefield Road
Mountain View, CA 94043
www.synopsys.com

Contents

Prologue . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
New Integration Features in This Release . . . . . . . . . . . . . . . . . . 15
Other Integration Features in This Release . . . . . . . . . . . . . . . . . 15

SoC Design and Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

SoC-IP Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

1. Static and Formal Verification


Unified Verdi Debug Platform for Static Verification . . . . . . . . . . . 27
Unified Compile for Verdi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Verdi Integration: Default Source Viewer. . . . . . . . . . . . . . . . . 29
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
Adding a Locator Function to the nSchema GUI . . . . . . . . . . . 49
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Verdi Integration: LP/CDC Source View . . . . . . . . . . . . . . . . . 66
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Identifying Incorrect Waivers on Certain Violations . . . . . . . . . 68

The analyze_waiver_correctness Command . . . . . . . . . . . 69
Supported Violation Tags . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
Unified Verdi Debug Platform for Formal Verification . . . . . . . . . . 74
Support for VHDL in VC Formal. . . . . . . . . . . . . . . . . . . . . . . . 75
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Enabling Formal Verification Mode in GUI. . . . . . . . . . . . . . . . 76
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Backward Compatibility . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Replay of Verdi-VC Static Commands. . . . . . . . . . . . . . . . . . . 83
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Renaming and Reorganizing Reported Run-Status Fields . . . 84
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
Support for Explore Property . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
Verification Tasks Management in GUI . . . . . . . . . . . . . . . . . . 116
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Limitations of Verification Tasks. . . . . . . . . . . . . . . . . . . . . 123
Enhancements to VC-Static Grid Functionality . . . . . . . . . . . . 124
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Managing Progress Reports in GUI . . . . . . . . . . . . . . . . . . . . . 129
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Bounded Coverage Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 140
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
SEQ Debug Flow in GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

SoC-IP Verification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

2. Verification Planning
Native LP Planner Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
Viewing LP Elements in the HVP Hierarchy Page . . . . . . . . . . 166
Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Cell and Image Selection for Specification Linking . . . . . . . . . . . . 168
Cell-Based Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
Table Cell Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Image Selection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
Examples for Image Selection . . . . . . . . . . . . . . . . . . . . . . 175
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
Integrated Planning for VC VIP Coverage Analysis . . . . . . . . . . . 180
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

3. Testbench Creation
Unified UVM Library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
Transaction/Message Recording in DVE/Verdi With VCS . 188

4. Validating Testbench: Functional Qualification


Concurrent Fault Qualification With Certitude and VCS Vectorization . . . 195
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
VCS and Certitude Integration . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Loading Design Automatically in Verdi with Native Certitude . . . . 209
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209
Key Points to Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
Dumping and Comparing Waveforms in Verdi for SystemC Designs 211
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
Key Points to Note . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

5. Verification Execution
Compile Turnaround Time. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
Unified Compile Front End. . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
Generating Verdi KDB With Unified Compile . . . . . . . . . . . 217
Support for VHDL LRM Features . . . . . . . . . . . . . . . . . . . . 221
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
Native VIP Solutions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
Optimized VC VIP Compilation Performance With Partition
Compile and Precompiled IP . . . . . . . . . . . . . . . . . . . . 226

6. Verification Debug
VC Formal Coverage With Verdi Coverage . . . . . . . . . . . . . . . . . 235
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Collecting VC Formal Results in the Coverage Database . 235
Measuring VC Formal Assert Status in HVP . . . . . . . . . . . 239
Generating VCS Coverage Database From Design and FSDB . . 244
Data Input and Output Requirements . . . . . . . . . . . . . . . . . . . 246

Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
Prepare an Initial VDB for covsim . . . . . . . . . . . . . . . . . . . 247
Prepare FSDB for covsim in the Extraction Phase. . . . . . . 248
Calculate the Coverage Data From FSDB by covsim in the
Simulation Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 250
The covsim Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 252
Power Model L1/Utility/Applications . . . . . . . . . . . . . . . . . . . . . . . 253
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
Value Traverse-Based Netlist . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
Enhancing NPI Applications in the Verification Compiler Platform
Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Unified Verdi Debug Platform for Interactive and Post-Simulation Debug . . . 283
Interactive Simulation Debug Mode . . . . . . . . . . . . . . . . . . . . . 284
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 285
Rebuilding and Restarting Interactive Simulation Debug in Verdi . . . 286
Post-Simulation Debug Mode . . . . . . . . . . . . . . . . . . . . . . . . . 287
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 287
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
Saving/Loading Verdi Elaboration DB Library to/from Disk . . . . . . 289
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 290
Interactive Simulation Debug Flow. . . . . . . . . . . . . . . . . . . 290

Post-Simulation Debug Flow . . . . . . . . . . . . . . . . . . . . . . . 291
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 295
Attaching a Running Simulation in Verdi . . . . . . . . . . . . . . . . . . . . 296
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 298
Detaching Verdi From Simulation Process. . . . . . . . . . . . . 304
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305
Reverse Interactive Debug . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 306
Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 308
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 309
Before Invoking Verdi Interactive Simulation Debug Mode . . . 309
Enable the Verdi Reverse Interactive Simulation Debug Mode . . . 309
Using Reverse Simulation Control Commands . . . . . . . . . . . . 311
Run/Continue Reverse Simulation Control Command . . . . 313
Step and Next Reverse Simulation Control Commands . . 314
New UCLI Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
Going to Previous/Next Value Assignment . . . . . . . . . . . . . . . 317
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319
Unified UCLI Command for FSDB Dumping . . . . . . . . . . . . . . . . . 321
Default Dump Type . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Default Dump File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
Enhanced UCLI Options for FSDB Dumping . . . . . . . . . . . . . . 324
dump -file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
dump -add . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 325
dump -close . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328

dump -deltaCycle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
dump -flush. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
dump -switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329
dump -power . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
dump -powerstate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
New UCLI Options for FSDB Dumping . . . . . . . . . . . . . . . . . . 331
dump -suppress_file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
dump -suppress_instance . . . . . . . . . . . . . . . . . . . . . . . . . 332
dump -enable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
dump -disable. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 333
dump -glitch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
dump -opened . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 334
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
AMS-Debug Integrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Unified Dumping of Analog Signals in FSDB in VCS-CustomSim
Cosimulation Flow . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Use Model for FSDB Dumping . . . . . . . . . . . . . . . . . . . . . . 337
Enabling Dumping of the Analog/Digital Signals in the FSDB File . . . 338
Enabling Merge Dumping. . . . . . . . . . . . . . . . . . . . . . . . . . 340
Usage Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
Verdi Interactive Simulation Debugging With Analog Mixed-Signal
Designs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 341
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 342
Compiling Mixed-Signal Design With VCS. . . . . . . . . . . . . 342
Compiling With Verdi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 343

Compiling and Importing SPICE Design . . . . . . . . . . . . . . 343
Compiling With SPICE Unified Front-End Flow . . . . . . . . . 345
Enabling Verdi Interactive Simulation Debug Mode . . . . . . 345
Interactive Simulation Debugging With Mixed-Signal Design . 347
Variable Observation in the Watch Tab . . . . . . . . . . . . . . . 347
Annotation Value . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 349
Dumping Analog Signals Into FSDB . . . . . . . . . . . . . . . . . 349
Force/Release Node Voltage . . . . . . . . . . . . . . . . . . . . . . 350
Save/Restore the Simulation Session/State. . . . . . . . . . . . 351
Interactive Console. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 351
Unified Transaction Debug With Native Verdi Protocol Analyzer . 353
Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
Hierarchy Tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
Quick Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Global Pane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 356
Object Pane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Call Stack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 357
Search Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Limitation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 358
Memory Debug With Native Verdi Protocol Analyzer . . . . . . . . . . 358
Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360
Memory Array View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
Detail View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362

History View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Summary View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Search Results View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
VC APPs Protocol Analyzer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 363
Prerequisite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 365
Capturing Protocol Data During Simulation . . . . . . . . . . . . 368
Creating or Importing Protocol Extension Definition. . . . . . 369
Protocol Analyzer: Native Performance Analyzer for Transactions 370
Performance Analyzer Use Model . . . . . . . . . . . . . . . . . . . . . . 372
Hierarchy Tree Pane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
Performance Report Pane . . . . . . . . . . . . . . . . . . . . . . . . . 382
Details Pane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 388
Summary of Usage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
Optimized Performance of Gate-Level Designs Using Native FSDB
Gates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 392
Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393

7. Closing Coverage Gaps


NPI Coverage Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Object Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 399
Assert Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 400
Testbench Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 401
Power Metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 403

APIs for C Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 404
APIs for Tcl Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 406
VC Formal Coverage Analyzer Features . . . . . . . . . . . . . . . . . . . 409
Saving Formal Covered/Uncoverable Coverage Goal Into a Coverage
Database . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 410
Saving Coverage Database and Exclusion File When Database is Not
Imported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 414
Use Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415

Prologue

Today's electronic consumer market is driven by a huge demand for mobility, portability, and reliability. Additional functionality, performance, and bandwidth are very important for maximizing semiconductor sales, in addition to faster time-to-market and product quality. The evolution of applications such as cellular phones, laptops, PDAs, computers, mobile multimedia devices, and portable systems has brought exponential growth in battery-operated systems.

The increase in design complexity and shrinking technologies, where more and more functionality is added into a smaller chip area, has brought a new set of challenges to System-on-Chip (SoC) verification. With the adoption of advanced techniques and sophisticated tools that help verify SoC connectivity, signal integrity, power management, and the functionality of analog components, hardware-software co-verification has become inevitable.

This creates the need for a unified and integrated verification environment with a seamless flow and reuse of information across different domains and levels to achieve faster results.

Verification Compiler Platform is a next-generation verification solution that provides a scalable environment in which sophisticated tools work seamlessly with each other throughout the flow to accomplish various verification tasks through the integration of technologies. It optimizes design iterations and recompilations, shortens debug cycles, and enables steady integration and interoperability between individual verification tools.

Note: The technology platform inherent in the Verification Compiler product (now named "Verification Compiler 2014") will be referred to as the "Verification Compiler Platform" throughout the documentation.
This chapter consists of the following two sections:

New Integration Features in This Release


Other Integration Features in This Release
Note: Verification Compiler 2014 capabilities are supported only for use with Synopsys verification tools and technologies.

New Integration Features in This Release

This document discusses the following new integrations offered by the L-2016.06-SP1 release of Verification Compiler Platform:

VC Formal Coverage With Verdi Coverage

Other Integration Features in This Release

The following integrations are also available with the Verification Compiler L-2016.06-SP1 release:

Static and Formal Verification


- Unified Verdi Debug Platform for Static Verification
- Unified Verdi Debug Platform for Formal Verification
Verification Planning
- Native LP Planner Support
- Cell and Image Selection for Specification Linking
- Integrated Planning for VC VIP Coverage Analysis
Testbench Creation
- Unified UVM Library
Validating Testbench: Functional Qualification
- Concurrent Fault Qualification With Certitude and VCS Vectorization
- VCS and Certitude Integration

- Loading Design Automatically in Verdi with Native Certitude
- Dumping and Comparing Waveforms in Verdi for SystemC Designs
Verification Execution
- Unified Compile Front End
- Optimized VC VIP Compilation Performance With Partition Compile and Precompiled IP
Verification Debug
- Generating VCS Coverage Database From Design and FSDB
- Power Model L1/Utility/Applications
- Value Traverse-Based Netlist
- Enhancing NPI Applications in the Verification Compiler Platform Application
- Unified Verdi Debug Platform for Interactive and Post-Simulation Debug
- Saving/Loading Verdi Elaboration DB Library to/from Disk
- Attaching a Running Simulation in Verdi
- Reverse Interactive Debug
- Unified UCLI Command for FSDB Dumping
- AMS-Debug Integrations
- Unified Transaction Debug With Native Verdi Protocol Analyzer
- Memory Debug With Native Verdi Protocol Analyzer

- VC APPs Protocol Analyzer
- Protocol Analyzer: Native Performance Analyzer for Transactions
- Optimized Performance of Gate-Level Designs Using Native FSDB Gates
Closing Coverage Gaps
- NPI Coverage Model
- VC Formal Coverage Analyzer Features

SoC Design and Verification

A typical SoC design and verification flow encompasses everything from architecting a design and verification environment, performing integrated static and formal verification, and executing functional verification, to achieving design signoff with functional and formal coverage closure.

An SoC design consists of creating both hardware intellectual property (IP) blocks and software IP blocks. Generally, an SoC is built on prequalified hardware IP blocks. The SoC design aims to develop these hardware IPs and software IPs in parallel and to cosimulate them. In SoC verification, the SoC is verified for its functionality and logical correctness in parallel with the hardware and software cosimulation cycle.

A typical SoC design and verification is illustrated in the following figures:

Figure 1-1 SoC Design

[Figure: starting from the SoC specification, hardware IPs and software IPs feed into determining the platform architecture and the hardware/software partition. Hardware IPs are customized for the application and integrated into the SoC platform while software, including the OS, is integrated for the application; the two sides are simulated and cosimulated, with emulation. The flow then proceeds through physical design and layout, application software development and testing, board development, and manufacture.]

Figure 1-2 SoC Verification

[Figure: starting from the SoC specification, system design draws on hardware and software IP libraries and feeds system-level verification. After hardware/software partitioning, SoC hardware RTL undergoes functional verification while SoC software development undergoes software verification. The hardware side continues through synthesis with chip plan and design, netlist verification, functional and timing verification, and physical verification and testing, to design sign-off.]

SoC-IP Design

A substantial portion of the product development cycle consists of building IPs. Creating a hardware IP block is a complex process that involves verification of multiple IPs. Preverified IPs are used to reduce project time. However, each hardware IP that is not preverified must go through the complete development cycle, resulting in an extended schedule. This includes developing RTL code, checking the structural correctness of the RTL, checking for connectivity and compliance, and providing error-free RTL along with model correctness.

An IP design is illustrated in the following figure:

Figure 2-1 IP Design

[Figure: the IP design flow proceeds from RTL development through lint checks (structural), connectivity checks, adding clocks and state machines, integration checks, CDC checks and model checking, LP lint checks, and functionality, protocol, and compliance checks, until the IP is ready for verification.]

Verification Compiler Platform aids the IP design as described in the following chapters:

Static and Formal Verification

1. Static and Formal Verification
Traditionally, simulation-based dynamic verification techniques have been the mainstay of functional verification. As modern-day SoC designs become more complex, the adoption of static verification techniques becomes increasingly important.

Synopsys' VC static and formal verification solution offers the next-generation comprehensive VC Formal verification solution, VC Lint, VC Clock Domain Crossing Checker (VC CDC), and the VC Low Power verification solution.

VC Formal verification offers property checking, which applies mathematical techniques to test properties or assertions to ensure the correct functionality of RTL designs. For more information, see the Synopsys VC Formal Verification User Guide on SolvNet.

VC Lint, a static verification tool, performs system-to-netlist
verification using prepackaged rules to check Verilog or
SystemVerilog designs against various coding standards and design
rules. After you elaborate your design in the VC Lint environment,
you can use built-in Tcl queries, prepackaged checks, and a set of
predefined procedures to run interactive queries on your design. For
more information, see the Synopsys VC Lint User Guide on SolvNet.

RTL code is verified for connectivity correctness between two nodes of a design using the VC Formal Connectivity Checking solution. For more information, see the Synopsys VC Formal Connectivity Checking User Guide on SolvNet.

As part of clock domain crossing verification, all clock domains are identified, crossings are created, and synchronizers are detected on flat SoCs at the RTL level using VC Clock Domain Crossing Checker. For more information, see the Synopsys VC CDC User Guide on SolvNet.

The VC Low Power verification solution verifies RTL for low-power functionality to ensure that it adheres to the correct power intent policy. Low-power verification helps in checking multivoltage and static low-power rules. For more information, see the Synopsys VC LP User Guide on SolvNet.

RTL is further verified for functionality and policy compliance. The model checking technique exhaustively and automatically checks whether a model adheres to a given specification and verifies the correct properties of finite-state systems. For more information, see the Synopsys VC Formal Verification User Guide on SolvNet.

Synopsys' static and formal verification solution combines best-in-class technologies for improved ease of use, accuracy, and performance. It also provides low violation noise and excellent debug capabilities. This solution enables designers and verification engineers to quickly and easily find and fix bugs in RTL before simulation, thus reducing the time needed before software bring-up, hardware emulation, and prototyping.

This release of the VC Static and Formal solution (VC Formal, VC Lint, VC CDC, and VC Low Power) unifies the debug environment to be natively Verdi-based; that is, the unified GUI and debug capabilities are now based on the Verdi debug platform. VC Formal pioneered the use of the Verdi debug platform in prior releases. With the unified support for Verdi-based debug in this release, designers and verification engineers can access the combined power of formal-specific and static-specific debug features and use Verdi's de facto industry-standard workflow, interface, and powerful debug capabilities.

The unified Verdi-based debug platform in the VC Static products offers automation features that dramatically reduce debug time. These integration features include the following:

- Easy-to-understand schematic views with check-specific annotations, coloring, and partitioning
- Powerful waveform viewers and analyzers
- Advanced cross-probing capabilities that allow fast and easy switching between views
- Search and trace capabilities that enable rapid identification of the root cause of violations

- Highly differentiated, high-value advanced debug features, such as two-design sequential equivalence checking debug with temporal flow analysis
In addition to the new natively-integrated debug capabilities, this
release of the VC Static solution offers many improvements to core
features, performance, capacity, and stability.

This chapter discusses the following VC Static and Formal Verification solutions offered by Verification Compiler Platform with this release:

- Unified Verdi Debug Platform for Static Verification
- Unified Verdi Debug Platform for Formal Verification

Unified Verdi Debug Platform for Static Verification

The following features are available with the unified debug platform
for static verification:

Unified Compile for Verdi
Verdi Integration: Default Source Viewer
Adding a Locator Function to the nSchema GUI
Verdi Integration: LP/CDC Source View
Identifying Incorrect Waivers on Certain Violations

Unified Compile for Verdi

The default values of the enable_verdi_kdb and
enable_verdi_debug application variables are set to true. By
default, VC Static uses Verdi for debug: the VC Static first
mode, view source, view UPF, and view schematic use Verdi to load
a design and show its source file and schematic.

You are not required to set the environment variable
VERDI_VCSTATIC_BETA to 1 (VC Static first mode) to use Verdi for
debug. By default, VCS compilation (vlogan/vhdlan/vcs) uses the
-kdb -lca options to generate KDB. Verdi depends on KDB to import
the design information for debugging. Verdi (-simflow) reads
simv.daidir/simv.kdb, which enables it to read
synopsys_sim.setup to get all KDB directories. This helps you to
load the design properly.
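As a hedged sketch of this default flow (the design file and top-module names below are illustrative, not from the manual), the compile and debug steps look like the following:

```shell
# Illustrative only: design.sv and top are assumed names.
# -kdb -lca are the options described above; VCS generates KDB
# alongside the simulation executable.
vlogan -full64 -kdb -lca -sverilog design.sv
vcs -full64 -kdb -lca top -o simv

# Verdi (-simflow) then reads simv.daidir/simv.kdb and
# synopsys_sim.setup to locate the KDB directories and load the design.
verdi -simflow &
```

These commands require a VCS/Verdi installation; check the exact option spelling against the release notes for your version.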

You can change the default value of the enable_verdi_kdb and
enable_verdi_debug application variables using the following
command:

set_app_var enable_verdi_debug false

The enable_verdi_kdb variable must be set before the analyze,
elaborate, and read_file commands; otherwise, KDB is not
generated completely.
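For example, a run.tcl can be generated with the required ordering as follows (a sketch; the design file and top-module names are placeholders):

```shell
# Sketch: generate a run.tcl where enable_verdi_kdb is set before any
# design-read command (analyze/elaborate/read_file). File and module
# names are placeholders.
cat > run.tcl <<'EOF'
set_app_var enable_verdi_kdb true
analyze -format sverilog {design.sv}
elaborate top
EOF
```

The generated script would then be executed with vc_static_shell -full64 -f run.tcl.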

Note:
When enable_verdi_kdb is set to true and
enable_verdi_debug is set to false, KDB files are not
generated, because Verdi does not need these files for debugging.

The following is sample output after debug in Verdi is completed.

Command Line Message Output:

---------------------------------------------------------------------------

Info: Simon call complete

Info: Exiting after Simon Analysis

Info: Simon VCS Finished

Top Level Modules:

top

TimeScale is 1 ns / 1 ns

Verdi KDB elaboration done

---------------------------------------------------------------------------

Limitations
This feature has the following limitation:

Verdi does not support the dw_analyze command.

Verdi Integration: Default Source Viewer

You can make Verdi the default source viewer for all VC Static
applications and use the debug features provided by Verdi to
debug formal and static issues.

VC Static automatically converts VC Static design import commands
into Verdi commands, so that a design can be loaded into Verdi for
debug.

Use Model
The following sections describe the capabilities of this feature:

The enable_verdi_debug Application Variable
Debug Solution for VC LP, CDC and Lint Using Verdi
Opening Multiple nSchema Windows
VCS Unified Front End Support
Identifying Incorrect Waivers on Certain Violations
Backward Compatibility
Limitations

The enable_verdi_debug Application Variable
The enable_verdi_debug application variable is set to true by
default. This enables you to use Verdi as the default source viewer
for RTL designs instead of using the native source viewer of VC
Static. However, netlist designs still continue to use the native source
viewer of VC Static. The native schematic and UPF viewers of VC
Static are still available.

Debug Solution for VC LP, CDC and Lint Using Verdi

You can use the following capabilities of Verdi for RTL and netlist
designs using two modes: Verdi first mode and VC Static first mode.

Verdi source and hierarchy viewer
Verdi schematic viewer
Verdi UPF source and hierarchy viewer
Verdi UPF power annotated schematic viewer
Verdi First Mode

In this mode, you can use the -verdi option in the VC Static shell to
open Verdi from VC Static.

vc_static_shell -verdi

Figure 1-1, Figure 1-2, Figure 1-3, Figure 1-4, and Figure 1-5 show
Verdi - VC Static Integrated View, Verdi Schematic View, Verdi
Source View, Verdi UPF View, and Verdi UPF Schematic View when
you use the Verdi First mode.

Figure 1-1 Verdi - VC Static Integrated View

Figure 1-2 Verdi Schematic View

Figure 1-3 Verdi Source View

Figure 1-4 Verdi UPF View

Figure 1-5 Verdi UPF Schematic View

VC Static First Mode


You can set the following environment variable to open the Verdi
debug environment in VC Static:

setenv VERDI_VCSTATIC_BETA 1

vc_static_shell -f <tcl_file>

vc_static_shell> view_activity

Figure 1-6, Figure 1-7, Figure 1-8, Figure 1-9, and Figure 1-10 show
VC Static Activity View, Verdi Schematic View, Verdi Source View,
Verdi UPF View and Verdi UPF Schematic View when you use the
VC Static First mode.

Figure 1-6 VC Static Activity View

Figure 1-7 Verdi Schematic View

Figure 1-8 Verdi Source View

Figure 1-9 Verdi UPF View

Figure 1-10 Verdi UPF Schematic View

Opening Multiple nSchema Windows


VC Static can open multiple nSchema windows for each violation
report and node in LP, and for each violation path, clock root object,
and pin and port instance in CDC. You can also add any instance
or signal to the active window.

The following sections provide the details for adding/viewing
schematic entries in LP and CDC:

VC LP Schematic Entries
VC CDC Schematic Entries

VC LP Schematic Entries

To add/view schematic entries in VC LP, perform any of the following
steps:

Click New Schematic Path for each violation report to open a
new Verdi-nSchema window.
Figure 1-11 New Schematic Path

Click Add Schematic Path for each violation report to open the
current active flatten window. If there is no nSchema flatten
window or the active one is not a flatten window, then a new
Verdi-nSchema window is opened.

Figure 1-12 Add Schematic Path

Click Schematic for each node to open a new Verdi-nSchema
window.
Click Add to Schematic for each node to open the current active
flatten window. The last nSchema window you click is the active
window; the active window caption appears in the window title.
If there is no flatten window or the active one is not a flatten
window, then a new Verdi-nSchema window opens.

Figure 1-13 Add to Schematic Window

VC CDC Schematic Entries

To add/view schematic entries in VC CDC, perform any of the
following steps:

Click New Violation Schematic for each violation path to open
a new Verdi-nSchema window with the violation path result.
Click Add Violation Schematic for each violation path to add the
violation path result to the current active flatten window. If there
is no flatten window or if the active window is not a flatten window,
then a new window is opened.

Figure 1-14 New Violation Schematic and Add Violation Schematic for
CDC Violation Reports

Click View Item in Schematic for a clock root object to open a
new Verdi-nSchema window with its fanout result.
Click Add to Schematic for a clock root object to open the current
active flatten window with its fanout result. If there is no flatten
window or the active one is not a flatten window, a new
Verdi-nSchema window opens.

Figure 1-15 New Violation Schematic and Add Violation Schematic for
Clock Root Object

Click View Item in Schematic for a control path pin or data path
pin to open a new Verdi-nSchema window with the instance that
the pin belongs to.
If you click Add to Schematic for a control path pin or data path
pin, then the current active flatten window opens with the instance
that the pin belongs to. If there is no flatten window or the active
one is not a flatten window, a new Verdi-nSchema window opens.

Figure 1-16 View Item in Schematic and Add to Schematic for a Control
Path Pin or Data Path Pin

VCS Unified Front End Support
To reduce the complexity of dual compilation and improve the overall
performance (CPU Time/Memory), VC Static provides a new
application variable called enable_verdi_kdb.

When the enable_verdi_kdb variable is enabled, VC Static
implicitly passes the -kdb option to vlogan/vhdlcom/vcs to
enable the VCS compiler to generate KDB for Verdi. Thus,
vericom/vhdlcom is no longer called. By default, this variable
is set to false.

You can set this application variable to true when there are
VERDI_COMPILE errors and the design cannot be loaded into Verdi
properly.

When you use the set_app_var enable_verdi_kdb true
command in the Verdi first mode, Verdi may sometimes issue the
following error and fail to load the design:

verdi: Please import design first!

In this case, copy <rtdb>/.internal/design/
synopsys_sim.setup to the work directory and try again. This
file helps Verdi load the KDB generated by VCS.
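A sketch of this workaround follows; <rtdb> is a placeholder for the run-time database directory of your session, and the run script name is illustrative:

```shell
# Workaround sketch for the "Please import design first!" error in
# Verdi first mode: copy the generated setup file into the work
# directory so Verdi can find the VCS-generated KDB.
# Replace <rtdb> with your actual rtdb directory.
cp <rtdb>/.internal/design/synopsys_sim.setup .

# Relaunch Verdi first mode (script name is a placeholder):
vc_static_shell -verdi -f run.tcl
```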

Backward Compatibility
To use the VC Static native GUI, set the enable_verdi_debug
application variable to false.

set_app_var enable_verdi_debug false

Limitations
If the source or sink is an MDA, the schematic view may not
display the complete path schematics or may not add locators.
Some schematic views may take more time to display the path
schematics.
The schematic may not be displayed properly, because Verdi's
RTL extraction is different from the VC Static RTL extraction.
Currently, the view_schematic -clear command does
nothing.
There is a schematic issue if a generate block does not have
instances under it.
In cases such as concatenation coding in instantiation for a bus,
the displayed schematic may be different from the schematic
displayed by the native GUI. This is because Verdi uses full bus
mode schematic display, while the VC Static native GUI splits the
bus into single bits.
VC Static supports the remove_upf/load_upf flow that can
load a different UPF file for a design. Verdi does not support this
flow, hence, you may need to close Verdi and open it again from
VC Static to make use of the new UPF file.
If the save_session command is executed after the
report_lp command, the restored session cannot update the
activity view properly. To avoid this issue, once the session is
restored, execute the save_db command in the VC Static shell.
This loads the violations in the activity view properly.
In the schematic, selecting and viewing property view for the
unconnected PG pins is not supported.

Adding a Locator Function to the nSchema GUI

nSchema provides support for the locator function. You can add a
locator from the VC Static CDC-LP menu entry to the
Verdi-nSchema window. You can also add a locator in the nSchema
window for any instance or pin.

Use Model

Adding a Locator From the VC Static CDC-LP Menu Entry to the
nSchema Window
You can add a locator from the VC Static CDC-LP menu entry to the
nSchema window by performing the following steps:

1. Click Add Locators when the new schematic window is open as
shown in Figure 1-17.
Figure 1-17 Adding Locator When New Schematic Window is Open

2. Add a locator using the Locate in Schematic option from the VC
Static menu entry as shown in Figure 1-18.

Figure 1-18 Locator From VC Static Menu Entry

Verdi-nSchema always adds a locator to the current active flatten
window. If the current flatten window does not contain the object
or there is no flatten window, then Verdi issues the following
messages:
- Failed to find any available nSchema window, please
open new schematic window for this object first.
- Failed to find any object in current active nSchema
window, please open schematic window for this object
first.

3. You can add a locator for any primitive instance/instance port in
the nSchema window for the selected object from the
Schematic->Add Locator menu as shown in Figure 1-19 or from
the RMB menu Add Locator as shown in Figure 1-20.

Figure 1-19 Adding Locator in the nSchema Window

The Add Locator menu is enabled when you select an object in
the current window. The menu opens the Add Locator Form
window, where you can set the locator name.

Figure 1-20 Add Locator From RMB Menu

Figure 1-21 shows Add Locator Form.

Figure 1-21 Add Locator Form

Hiding/Displaying Locators
You can use the View->Show Locator option to display or hide
locators in the nSchema window as shown in Figure 1-22.

Figure 1-22 Hiding/Showing Locator Option

Viewing Hierarchy
You can use the View->Hierarchy View option from the Window
menu to switch between the hierarchy and flatten modes as shown
in Figure 1-23.

Figure 1-23 Hierarchy View Option

Finding Locator Form

nSchema supports a locator find form, as shown in Figure 1-24, to
search for and highlight a locator. Once you select an item in this
form, the nSchema window jumps to the corresponding design
object, similar to the behavior when you double-click the locator in
the drawing area.

Figure 1-24 Find Locator Form

Aligning Locator Form to Right

You can align all the locators to the right using the Align Locator
Right menu as shown in Figure 1-25. When you use this option,
nSchema displays all the locators on the right without overlapping.
The locator lines are not drawn in this condition until you
double-click a locator to jump to the corresponding object as shown
in Figure 1-26.

Figure 1-25 Align Locator Right Menu

Figure 1-26 Align Locator Right

Capture Window Applications

The Capture Window includes the locators as shown in Figure 1-27.

Figure 1-27 Capture Window

Property Window
nSchema supports the Properties window, as shown in Figure 1-28,
to display library and UPF-related power information for the selected
cell or pin. The Properties window can be opened from the
Property RMB menu. Information for all the properties is obtained
from VC Static.

Figure 1-28 Property Window

You can drag and drop (DnD) the design object and power design
object from this property window to Verdi's other components for
further debugging as shown in Figure 1-29.

Figure 1-29 DnD the Power Strategy to Verdi-nPowerManager

Clock Domain Coloring (VC Static-CDC)


You can select a color for each clock domain. Verdi-nSchema uses
these colors to highlight the register instances. You can use the
Schematic->Highlight Clock Domains option to turn on or turn off
the coloring for every clock domain as shown in Figure 1-30.

Figure 1-30 Highlight Clock Domains Menu

When you click the Highlight Clock Domains menu,
Verdi-nSchema opens the Highlight Clock Domains form to turn
on or turn off clock domain coloring and to select a color for every
clock domain as shown in Figure 1-31. All the clock domains in this
Verdi-nSchema window are listed in the clock domain list.

Figure 1-31 Highlight Clock Domains

Figure 1-32 shows the Select Color dialog box.

Figure 1-32 Select Color Dialog Box

Auto Collapse Unreferenced Paths (VC Static-CDC Only)


You can use the View->Auto Collapse Unreferenced Paths option
to turn on or turn off abstracting combinational logic as clouds, as
shown in Figure 1-33. By default, this option is set to ON.

When you turn on this option, all combinational instances are
abstracted as a cloud instance, which keeps the input/output
information for the original instance.

When you turn off this option, all instances that belong to cloud
instances are displayed and the cloud instances are removed.

Figure 1-33 Auto Collapse Unreferenced Paths Option

Verdi Integration: LP/CDC Source View

The analysis of the VC Static tools is performed on the NLDM data
model. The NLDM data model can have several inferred design
objects that have a name generated in the model itself, but have no
corresponding named object reference in the RTL that you create.
As a consequence, if these generated internal names were reported
in the analysis output of the applications, you might not be able to
interpret them in reference to the RTL. Hence, VC Static always
reports user names, that is, names present in the RTL.

However, VC Static applications work on NLDM objects irrespective
of whether they have a named object reference in the RTL itself or
not. Therefore, the application's analysis data is converted to data
that has a named reference to RTL with implicit objects. To enable
this conversion, it is mandatory for the reporting feature to use only
RTL names.

Use Model

Finding RTL Name

The following steps show how the RTL name is found for a given
starting design point p:

If p is a port bit or pin bit, p's name is reported.
If p is a net bit of a user-defined net, p's name is reported.
If p is a pin bit of a sequential operator, p's name is reported.
If p is an input pin bit of a combinational operator and the driving
net is user-defined, then the driving net bit's name is reported.

If p is an output pin bit of a combinational operator and the driven
net is user-defined, then the driven net bit's name is reported.
If none of the above holds true, then a fanout is done from p to find
the next design point, and the above checks are repeated.
Note:
For multiple fanouts from a net, a fanout path is chosen randomly.
If there is a hanging net in a fanout path (this scenario can happen
for internal nets), then the next fanout path is explored. If
no fanout path leads to an RTL name (which possibly will never
happen), p's name is reported. If p is a word-level object, p's name
is reported.

Identifying Incorrect Waivers on Certain Violations

Consider a scenario where an incorrect UPF is detected by
corruption in NLP, and violations such as ISO_STRATEGY_MISSING
are reported in VC LP. The design team declares this scenario as
safe and therefore, you write a waiver in VC LP to remove this
violation. However, the diagnosis on the failed chip confirms the
violation was a real one, which should not have been waived.

VC LP provides the analyze_waiver_correctness command
to enable you to identify such waivers that should not have been
waived. Using this command, you can generate a simulation
assertion for each unique power state rejected due to a waiver on a
violation. If one of these assertions triggers during RTL simulation,
this indicates the waiver(s) must be more carefully reviewed.

The analyze_waiver_correctness command can be used only
for the violations that describe a combination of power states which
exists in the design and that can cause an electrical problem. For
example, ISO_STRATEGY_MISSING is reported when a source
supply is off and a sink supply is on, which causes an electrical
problem. In VC LP, some waivers are created to waive-off one of
these power state violations, because you think that the combination
of these power states cannot actually occur in NLP. The
analyze_waiver_correctness command can be used to verify
the correctness of such waivers.

There are other violations which do not describe any power state
combinations. For example, PG_PIN_UNCONN shows a structural
problem, where a PG pin in a netlist is left floating. You cannot use
the analyze_waiver_correctness command for identifying
incorrect waivers on such violations.

The analyze_waiver_correctness command performs the
following:

1. Find all the waived violations which describe combinations of
power states.
2. Extract the unique set of power state combinations which are
covered by the waived violations.
3. Write a file containing simulation assertions. For each unique
power state combination, write one assertion, and a comment
describing which waivers and violations caused this assertion to
be generated.
With the results of the analyze_waiver_correctness
command, perform the following steps to identify incorrect waivers:

4. (Optional) You may want to review the comments in the file, and
decide to edit the file to remove certain assertions if they were
created for other reasons.
5. Run RTL simulations with the assertions.
6. If any assertion triggers, use the comments in the file to determine
which waiver(s) need to be carefully reviewed.
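The simulation steps above can be sketched as follows; the file names (design.sv, foo.sva) and options are illustrative assumptions, not from the manual:

```shell
# Hedged sketch of steps 5-6: compile the generated assertion file
# together with the design and run RTL simulation.
vlogan -full64 -sverilog design.sv foo.sva
vcs -full64 top bind_file -o simv
./simv

# If any awc_* assertion fires during simulation, use the comments in
# foo.sva to identify the waiver(s) that need review.
```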

The analyze_waiver_correctness Command

The analyze_waiver_correctness command enables you to
identify the incorrect waivers on certain violations in VC LP. This
command performs extensive analysis before writing output.

The command looks at all the waivers and violations in the current
run, which you should set up before executing the command. The
command takes a filename as its input, where the output must be
written. The command has the -verbose option which causes a
much larger output file to be written. By default, a short comment is
written for each assertion with a single violation ID and waiver name.
If you use the -verbose option, the entire list of violation IDs and
waivers is written. This can be large; however, it is helpful for
debugging.
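As a usage sketch based on the description above (foo.sva is the output file name used in the example later in this section):

```shell
# Write the generated assertions to foo.sva; -verbose emits the full
# list of violation IDs and waivers in each assertion's comment.
vcst_shell> analyze_waiver_correctness -verbose foo.sva
```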

Example

Suppose you have the following violations and waiver. (Only the
relevant fields of the violation are shown.)

ISO_STRATEGY_MISSING
  ViolationID LP:72
  StrategyNode b1/p1
  SourceInfo
    PowerNet
      NetName vdda
  SinkInfo
    PowerNet
      NetName vdda
  SinkInfo
    PowerNet
      NetName vdd

Waiver w1

vcst_shell> waive_lp -add w1 -tag ISO_STRATEGY_MISSING \
  -filter {sourceinfo:powernet:netname==vdda}

Then you use the following command:

vcst_shell> analyze_waiver_correctness foo.sva

VC static generates the following file as output.

import UPF::*;

module generated_assertions;

// Variable declarations for supplies
UPF::supply_net_type local_vdd;
UPF::supply_net_type local_vdda;

initial begin
  $upf_mirror_state ("vdd", local_vdd);
  $upf_mirror_state ("vdda", local_vdda);
end

// Assertions
// Violations: 1, first is 1; Waivers: 1, first is w1
awc_1 : assert #0 (!((local_vdd.voltage != 0) &&
                     (local_vdda.voltage == 0)));

endmodule

module bind_file;

bind top generated_assertions generated_assertion_inst (.*);

endmodule

The upf_mirror_state command highlighted in red is a new
proprietary command in VCS. It is required to link, or mirror, a
UPF supply to a simulation variable which can be used in an
assertion.

Supported Violation Tags
The following is the list of tags supported along with the details of
how the assertion for the particular tag is generated. For example,
each waived violation of ISO_BACKUP_STATE generates an
assertion for the state where the source supply is off, and the sink
supply is on.

ISO_BACKUP_STATE {{Source OFF} {Sink ON}}
ISO_INST_MISSING {{Source OFF} {Sink ON}}
ISO_SINK_STATE {{Isolation OFF} {Sink ON}}
ISO_SOURCE_STATE {{Source ON} {Isolation OFF}}
ISO_STRATEGY_MISSING {{Source OFF} {Sink ON}}
ISO_STRATEGY_NOISO {{Source OFF} {Sink ON}}
ISO_STRATSUPPLY_INCORRECT {{Source OFF} {Sink ON}}
RAIL_BUFINV_FUNC {{Source ON} {Instance OFF} {Sink ON}}
RAIL_BUFINV_LEAKAGE {{Driver OFF} {Instance ON}}
RAIL_BUFINV_STATE {{Instance OFF} {Sink ON}}
RAIL_ISOINPUT_FUNC {{Source ON} {Isolation OFF} {Sink ON}}
RAIL_LSINPUT_FUNC {{Source ON} {Level_Shifter OFF} {Sink ON}}
RAIL_ELSINPUT_FUNC {{Source ON} {ELS OFF} {Sink ON}}
RAIL_ISOOUTPUT_FUNC {{Source ON} {Isolation OFF} {Sink ON}}

RAIL_LSOUTPUT_FUNC {{Source ON} {Level_Shifter OFF} {Sink ON}}
RAIL_ELSOUTPUT_FUNC {{Source ON} {ELS OFF} {Sink ON}}
RAIL_ISOINPUT_LEAKAGE {{Driver OFF} {Isolation ON}}
RAIL_LSINPUT_LEAKAGE {{Driver OFF} {Level_Shifter ON}}
RAIL_ELSINPUT_LEAKAGE {{Driver OFF} {ELS ON}}
RAIL_ISOOUTPUT_LEAKAGE {{Driver OFF} {Isolation ON}}
RAIL_LSOUTPUT_LEAKAGE {{Driver OFF} {Level_Shifter ON}}
RAIL_ELSOUTPUT_LEAKAGE {{Driver OFF} {ELS ON}}
RAIL_ISOINPUT_STATE {{Isolation OFF} {Sink ON}}
RAIL_LSINPUT_STATE {{Level_Shifter OFF} {Sink ON}}
RAIL_ELSINPUT_STATE {{ELS OFF} {Sink ON}}
RAIL_ISOOUTPUT_STATE {{Isolation OFF} {Sink ON}}
RAIL_LSOUTPUT_STATE {{Level_Shifter OFF} {Sink ON}}
RAIL_ELSOUTPUT_STATE {{ELS OFF} {Sink ON}}

Limitations
The analyze_waiver_correctness command generates
assertions for power nets only and not for ground nets.
The analyze_waiver_correctness command is hidden.

Unified Verdi Debug Platform for Formal Verification

The following features are available with the unified debug platform
for formal verification:

Support for VHDL in VC Formal
Enabling Formal Verification Mode in GUI
Replay of Verdi-VC Static Commands
Renaming and Reorganizing Reported Run-Status Fields
Verification Tasks Management in GUI
Enhancements to VC-Static Grid Functionality
Managing Progress Reports in GUI
Bounded Coverage Analysis
SEQ Debug Flow in GUI
Note: Examples of Tcl command scripts that show how to write
assertions, configure the run, generate reports on properties,
run checks, and debug falsifications for Formal applications
can be found in the VC installation area at the following locations:
$VC_HOME/static/doc/vcst/examples/FPV

$VC_HOME/static/doc/vcst/examples/FCA

$VC_HOME/static/doc/vcst/examples/AEP

$VC_HOME/static/doc/vcst/examples/SEQ

$VC_HOME/static/doc/vcst/examples/UNR

$VC_HOME/static/doc/vcst/examples/CC

The TCL regression scripts and the related documents are
available at the following locations:

$VC_HOME/auxx/ctg/tcl/library/proc

$VC_HOME/auxx/ctg/tcl/library/doc

Support for VHDL in VC Formal

VHDL is supported for the following features:

Formal Property Verification (FPV)
Formal Coverage Analyzer (Line, Condition)
Automatically Extracted Properties (Bus Checks)

Limitations
The following are the limitations for this feature:

Condition coverage for loops in VHDL can adversely impact
performance. To resolve the performance issue, ensure that in
the read_file command, you use the
initCmCondVarsForVhdl option in the -simonOptionFile
file.
vc_static_shell>read_file -simonOptionFile
simonopt.f <other options>

%cat simonopt.f

...

initCmCondVarsForVhdl

....

Counterexamples cannot be viewed in the Verdi waveform when
VHDL configurations are elaborated in VC Formal.
The procedure hdl_xmr in VHDL is not supported.
Record types cannot be viewed in the Verdi waveform.
Abstraction of FIFO is not supported.

Enabling Formal Verification Mode in GUI

Verification Compiler Platform provides the Verdi Formal Verification
mode, where the VC Static GUI is integrated with Verdi so that you
can use Verdi to perform formal and static analysis and debug in one
integrated tool, and even use Verdi as a cockpit for static debug.

The option to enable the Formal Verification mode is added in the
Verdi GUI as shown in Figure 1-34.

The Verdi GUI start-up page (welcome page) has a work mode
selection icon (Briefcase) that lists the work modes available in
Verdi for different debugging purposes. A new Formal Verification
Mode option is added on this page. You can enable this mode using
the setenv VERDI_VCSTATIC_BETA 1 command.

When you select the Formal Verification Mode, select Tools -->
VCST Apps --> Run VCST to open the window (see Figure 1-35).

Figure 1-34 Verdi Work Mode Selection

Figure 1-35 Initial Layout for Formal Verification Mode

Use any of the following commands to enter this mode directly as
shown in Figure 1-36 and Figure 1-37:

verdi -workMode formalVerification


$VC_STATIC_HOME/bin/vc_static_shell -full64 -f
vsi.tcl -verdi

Figure 1-36 Accessing VC Static From Verdi

Figure 1-37 Starting/Stopping VC Static From Verdi

If VC_STATIC_HOME is not available, Verdi issues the following
warning message:

VC_STATIC_HOME is not available - Unable to start

vc_static_shell.

The following sections provide a detailed description of this feature:

Use Model
Backward Compatibility

Use Model

VC Static Menu/Toolbar
There are new Verdi menu/toolbar items added for the Formal
Verification mode.

For VC Formal SEQ, new menu and toolbar items are added as
shown in Figure 1-38.
Snip driver has new RMB menu items in the SignalList and
Source views as shown in Figure 1-39 and Figure 1-40
respectively.
Figure 1-38 New Menu/Toolbar for SEQ

More VC Static Apps are added as Verdi supports more special
features of different VC Static applications.

Note:
Hiding Verdi menus/toolbars in the Formal Verification mode is
not supported.

Figure 1-39 New Menu for Snip Driver

Figure 1-40 Signal Snip Menu Items

Schematic
VC Static has its own Schematic that displays the schematic based
on the Netlist Model (NLDM). Verdi's nSchema displays the
schematic based on the design imported in Verdi, which might be
different from NLDM.

Backward Compatibility
As Activity View is embedded into Verdi, all the VC Static features in
VC Static views work. Verdi depends on Verdi Knowledge Database
(KDB) and Fast Signal Database (FSDB) generated by VC Static in
this phase, hence, Verdi debug features work in the same way as
earlier.

Replay of Verdi-VC Static Commands

In the VC Formal use flow, you can perform formal analysis on your
design after executing an existing run.tcl file in the GUI mode.
Using GUI capabilities, you can add functionality for formal analysis,
such as:

Adding constraints through GUI/shell
Creating tasks
Disabling constraints

Use Model
VC Formal also lets you invoke Verdi to debug properties and
violations. However, the vc_static_shell only logs its own Tcl
commands in the vcst_command.log file, while Verdi logs the
Verdi commands in the novas.cmd file. This makes it difficult for you
to reproduce the scenarios for replay or reproduce the issue.

There are the following three command log files:

vcst_command.log: This file logs the sourcing of the -f script
as well as the commands you type in the vc_static_shell
command line.
vcst_rtdb/.internal/GUI/uiAction.log: The
uiAction.log logs the Activity View actions. It uses action and
waitfor commands to make the GUI action execution blocking.
vcst_rtdb/.internal/verdi/novas.cmd: The
novas.cmd logs Verdi commands including the commands sent
from VC Static and also user operations.

You can copy the commands from the vcst_command.log file and
append them to the original run.tcl file. Re-executing this updated
run.tcl file takes you to the same point/snapshot both in GUI and
shell, enabling you to understand and debug the issue easily. For
more information on the manual operation history on GUI, see the
vcst_rtdb/.internal/GUI/uiAction.log and the
vcst_rtdb/.internal/verdi/novas.cmd files.
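The replay procedure above can be sketched as follows; the script name run.tcl is a placeholder for your original run script:

```shell
# Replay sketch: append the shell commands logged in
# vcst_command.log to the original run.tcl, then re-execute the
# updated script to reach the same snapshot in both GUI and shell.
cat vcst_command.log >> run.tcl
vc_static_shell -full64 -f run.tcl -verdi
```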

Note:
Some Verdi actions, such as snip point that is also applied to VC
Formal are logged. However, Verdi-related debug operations that
do not affect VC Formal are not logged (for example, show source,
zoom in waveform).

Renaming and Reorganizing Reported Run-Status Fields

VC Static provides enhancements to the Results Table of the
Activity View. As part of this enhancement, the order of the columns
for properties in the Activity View is improved to present the most
useful information, thus improving usability.

For example:

• The default column order of the Results Table for VC Formal is
  adjusted. You can drag and drop to change the column order, and
  use the right-mouse-button (RMB) menu on a column header to show
  or hide columns.
• The Prop ID (property ID) column is moved to the right (before
  Message ID).
For Formal Property Verification, the status can be falsified,
proven, or another state. Vacuous proofs are shown in red in the
top-left corner and also have an indicator in the tooltip that
appears when the mouse hovers over the Name column.

Use Model
The enhancements are explained in more detail in the following list:

• The Toggle: Search control works for non-record-type columns. It
  can automatically fill in the selected table cell in the Results
  Table, and it supports live search (filtering without pressing
  Enter). The search filters the results and shows the matched items.
Figure 1-41 Right-Mouse-Button Menu on Column Header

Figure 1-42 RMB Menu on Property

Figure 1-43 Search Toolbar on the Status Bar

If there is no selection in the result table, the default search type
is the name column.

To improve usability, some adjustments are made to the column order
for the different Formal Verification applications.

For Formal Property Verification, the first six columns are status,
depth, name, vacuity, witness, and engine. The default column order
is as shown in Figure 1-44.

Figure 1-44 Formal Property Verification Enhancements

For VC Formal Coverage Analyzer/FAST/COV, the columns are status,
name, engine, and type. The default column order is as shown in
Figure 1-45.

Figure 1-45 Formal Property Verification Default Column Ordering

For SEQ, the columns are status, depth, name, vacuity, witness,
engine, expression, usage, and type. The default column ordering
is as shown in Figure 1-46.

Figure 1-46 Formal Coverage Analyzer Default Column Ordering

Task support and bounded coverage may impose further incremental
enhancement requirements on the Results Table.

Figure 1-47 SEQ Column Ordering

Besides the column order and tooltip, you can double-click a
grouping, including the predefined groups such as *Cov-Types, *AEP,
and *Scopes, to create and view the groups in one shot.

Figure 1-48 Double-Click to Add and View Grouping

Note:
The underlined buttons and links in the Activity View indicate that
they are clickable for additional functions.

Status of the Goals

You can view the status of the goals in the GUI as well as through
the report commands.

Note:
VC Formal does not support vacuity checks on assume properties
and cover properties.
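As a quick sketch of querying goal status from the shell, using commands that appear later in this guide (the status name falsified is an assumed filter value; see the command reference for the full list):

```tcl
# Report all goals and their run status in the current task
report_fv -list

# Collect goals with a given status for further processing
set failed [get_props -status {falsified}]
```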

Terminology
In this section, the following terms are used:

• Property: SVA properties specified in RTL or specified within the
  tool through a run script or the shell.
• Goals/Targets: Targets for verification. They can be SVA assert,
  SVA cover, script assert, script cover, AEP, extracted coverage
  for line-cond, SEQ-generated miters, CC goals from CSV, and so on.
• Assumptions: Constraints for verification. They can be SVA assume
  or script assume.
• Application: Formal Verification applications, invoked by
  separate commands - Formal Property Verification (check_fv),
  Coverage (check_cov), CC (check_cc), and SEQ (check_seq).

Usage Status Indicator

The following separate usage fields are available:

• Assert: The property (including CC connections) is used in the
  current task as an assertion to be solved.
• Assume: The property is used in the current task as a constraint.
• Cover: The property (including extracted ones) is used in the
  current task as a cover property.
• Unused: The property is not used in the current task. This is
  equivalent to disabled.

Run Status Indicators
Constraints and unused properties do not have a run status.

Goals can also have vacuity and witness sub-goals; hence, goals or
targets have multiple run status fields. The run status fields,
separately and collectively, reflect the primary status, progression,
and sub-goal status for the goal.

The following status fields are available:

• Status: Primary status
• Depth: Progression (bounded or falsification) depth
• Vacuity: Vacuity sub-goal primary status
• Witness: Witness sub-goal primary status

Status (Primary) Field

Each goal has a primary status that reflects the current known
status. The field shows the initial, intermediate, and terminal run
status. The primary status does not reflect the relative progression.
The status values for this field are dependent on the application,
the type of goal, and the status of sub-goals (such as vacuity),
wherever applicable.

The values for this field are as follows:

Initial Status Values

NOTRUN: Denotes that the goal has not yet started running - that is,
the goal has not been attempted by any of the engines.

This status applies to all goals and sub-goals irrespective of
application and type; every enabled goal or sub-goal starts with this
status.

Intermediate Status Values
CHECKING: Denotes that the goal is running in a formal engine - that
is, the goal has been attempted at least once.

This status applies to all goals and sub-goals irrespective of
application and type; it is shown as an intermediate status until a
terminal status is reached. During this phase, the progress is
captured in the Depth field.

Terminal Status Values

The terminal status reflects the final status for a goal. It depends
on the application and the type of goal.

Table 1-1 Terminal Status Values

Value         Explanation                                      Application
FALSIFIED     Counterexample found for the goal                Formal Property Verification, SEQ
PROVEN        Goal has been proven exhaustively                Formal Property Verification, SEQ
VACUOUS       Goal has been proven vacuously. The vacuity      Formal Property Verification, CC
              status for such cases also shows VACUOUS
COVERED       Trace has been found for the coverage goal       Formal Property Verification, Coverage
UNCOVERABLE   No trace exists that reaches the coverage goal   Formal Property Verification, Coverage
CONNECTED     Connection/path exists between design nodes      CC
UNCONNECTED   No connection/path found between design nodes    CC
TIMEDOUT      Goal was not solved because the resource limit   All applications
              was reached
TERMINATED    Goal was not solved because the run was stopped  All applications
              by the user

Depth Field
The Depth field, taken together with the primary status, provides
information about the current state and progression.

• For goals whose terminal status is FALSIFIED or COVERED, the depth
  value represents the falsification or reachability depth (of the
  trace).
• For goals whose terminal status is PROVEN, VACUOUS, or
  UNCOVERABLE, the depth value is empty, representing infinite depth.
• For goals whose current status is NOTRUN, the depth value is empty.
• For goals whose current status is CHECKING, the depth value is an
  integer (including 0) depicting the bounded proof depth. This depth
  typically increases with the run, depicting bounded proof progress.
  - If the terminal status becomes PROVEN, VACUOUS, or UNCOVERABLE,
    the depth value is cleared (set to empty).
  - If the terminal status becomes FALSIFIED or COVERED, the depth
    value is updated to depict the trace depth.

Table 1-2 Depth Field Values

Value        Depth   Explanation
NOTRUN               No status available
CHECKING             Formal Property Verification, SEQ
CHECKING     0       Formal Property Verification, CC
CHECKING     5       Reset state check. Goal hasn't converged - run
                     in progress
...                  Can lead to one of the following
PROVEN               Proven. Depth information cleared (also for
                     VACUOUS status)
FALSIFIED    6       CEX of depth 6
TIMEDOUT     5       Resource limit reached. Last known depth shown
TERMINATED   5       User stopped the run. Last known depth shown

Vacuity Field
The Vacuity field represents the primary status of a vacuity goal. A
vacuity (assertion antecedent) goal is a sub-goal of an assertion (if
applicable) and comes into being if vacuity checks are enabled. The
vacuity status is independent of the primary status of the assertion;
however, it impacts the primary status of the assertion. An
assertion's terminal status is either PROVEN or VACUOUS, depending on
the status of its vacuity sub-goal.

Note:
Since there is no separate Usage field for vacuity, the Vacuity field
can reflect both usage and run status.

Table 1-3 Vacuity Field Values

Value         Explanation
Usage Status
<Empty>       Vacuity is not applicable, or vacuity checks have not
              been enabled
Initial Run Status
NOTRUN        Vacuity checking has not been tried
Progression Run Status
CHECKING      Vacuity check is in progress
Terminal Run Status
VACUOUS       Vacuity check failed. No witness is possible. The
              goal's primary status shows the terminal status VACUOUS
NONVACUOUS    Vacuity check was successful
TIMEDOUT      Vacuity check could not complete because the resource
              limit was reached
TERMINATED    Vacuity check could not complete because the run was
              stopped by the user

Witness Field
The Witness field represents the status of the witness check for a
goal (if applicable). The witness status depends on the vacuity
status: there is no witness if the vacuity check result is VACUOUS.

Figure 1-49 Witness Field Values

Value         Explanation
Usage Status
<Empty>       The witness check is not applicable, or vacuity checks
              have not been enabled
Initial Run Status
NOTRUN        Witness checking has not been tried
Progression Run Status
CHECKING      Witness check is in progress
Terminal Run Status
COVERED       Witness check successful. Trace available
UNCOVERABLE   No witness exists
TIMEDOUT      Witness check could not complete because the resource
              limit was reached
TERMINATED    Witness check could not complete because the run was
              stopped by the user

Support for Explore Property

In certain phases of the verification flow, you must refine
constraints, understand the design behavior, or debug an assertion
failure. In these scenarios, the explore and replot capabilities
enable you to do what-if analysis and observe the design behavior -
graphically defining new constraints, covers, and asserts, observing
the effects of these additions, and repeating the process. This
capability is called Explore Property. With this support, you can
perform the following:

1. Explore a problem in nWave in Verdi.
2. View a graphical dialog in the primary nWave that contains many
   features related to the property.
3. View information about the currently debugged property. The
   information contains the primary property, the referenced clock,
   the property list, and so on.
4. Construct new or modified constraints, assertions, or covers. The
   key to this feature is that it interacts with nWave - getting
   signals, ranges, and the signal value at the cursor position.
5. Query VC Static to solve a new problem based on the primary
   property; this is called Replot.

This section consists of the following subsections:

• Use Model
• Limitations

Use Model
You can invoke Explore Property from the VC Static Activity View and
from Verdi, and perform the following functions:

1. Add, edit, or delete a newly-created property. For more details,
   see section Adding or Editing the Property Form.
2. Freeze signals' values. For more details, see section Freezing
   Signals.
3. Extend specific cycles based on a property, or extend cycles until
   an expression is satisfied. For more details, see section
   Extending Traces of Assert or Cover Property.
4. Regenerate and load a new FSDB, which is a counterexample
   (primary goal: assert property) or a coverage trace (primary goal:
   cover property). For more details, see section Replotting.
5. Export a currently-created property in SVA or Tcl format to a
   file. For more details, see section Exporting Properties.

This section consists of the following subsections:

• Invoking Explore Property From VC Static Activity View
• Invoking Explore Property From Verdi
• Explore Property Dialog Box
• Adding or Editing the Property Form
• Freezing Signals
• Replotting
• Exporting Properties

• Extending Traces of Assert or Cover Property
• Closing Explore Property

Invoking Explore Property From VC Static Activity View

To invoke the Explore Property dialog box from the VC Static Activity
View, perform the following steps:

1. Select a falsified property.
2. Click Explore: Property or choose RMB->Explore Property.

The Explore Property dialog box appears (see Figure 1-50).

Figure 1-50 Invoking Explore Property From VC Static Activity View

Invoking Explore Property From Verdi
To invoke Explore Property from Verdi, perform the following steps:

1. Select a falsified property in the VC Static Activity View.
2. Click View: Property or RMB->View Trace.
3. Select a property in nWave.
4. Click nWave menu bar -> Tools -> Explore Property, or click the
   Explore Property icon on the toolbar.

The Explore Property dialog box appears (see Figure 1-51).

Figure 1-51 Invoking Explore Property From Verdi

Note:
The nWave menu bar -> Tools -> Explore Property option and the
toolbar button are enabled in the primary nWave only when VC Static
is invoked. They are disabled once Explore Property starts
successfully.

Explore Property Dialog Box

You can open the Explore Property dialog box as described in sections
Invoking Explore Property From VC Static Activity View and Invoking
Explore Property From Verdi. Figure 1-52 shows the Explore Property
dialog box.

Figure 1-52 Explore Property Dialog Box

The Explore Property dialog box provides the following capabilities:

• Indicates the current task name in the window title.
• Indicates the primary property.
• Indicates the reference clock of the primary property.
• Displays the property list and provides buttons to add, edit, or
  delete a newly-created property.
  - A newly-created property is displayed in blue.
  - The edit and delete buttons are grayed out if you select a
    property that existed at the beginning of Explore Property. These
    properties are displayed in black.
  - A related property is deleted if you edit or delete its parent
    property.
• Provides buttons to enable or disable a property. All properties
  are enabled by default after they are inserted in the property
  list.
• Provides the Replot button to trigger VC Static to regenerate the
  FSDB according to the current selection in the property list.
  - You can select either an assert or a cover in the property detail
    window and use the button to perform Replot. The Primary Property
    label changes when you click the button.
  - The button is disabled when you select multiple properties.
  - The button is disabled when you select a property of type Assume.

• Provides an export button to output all or selected new properties
  in SVA or Tcl format.
  - By default, all user-created properties are exported. You can
    save selected properties in a property list from the Option menu.
• Provides several options to close Explore Property.

Adding or Editing the Property Form

To add or edit a property, click the add or edit button in the
Explore Property dialog box. The last created assert or cover is
selected in the property list immediately after it is created
successfully. You can then use the Replot button to replot.

Figure 1-53 shows the Add/Edit Property dialog box.

Figure 1-53 Add Property Dialog Box

The Explore Property window provides the fields required to add or
edit a property.

To add a new property, perform the following steps:

1. Name: In the Name box, type a name for the newly-created property.
   - The new property is created under the top scope, traffic, as
     shown in the figure.
   - The name can only start with [a-z], [A-Z], or _, followed by
     zero or more characters from [a-z], [A-Z], [0-9], and _.
2. Clock: From the Clock list, select the reference clock of this
   newly-created property. The default reference clock is Traffic.
   You can select another reference clock, if needed.
3. Trigger Condition: Not supported.
4. Type: Select a type for the newly-created property. By default,
   assert is selected.
5. Expression: The SVA expression for the newly-created property.
   Note that a runtime signal cannot be used in the expression.
6. Click the corresponding button to insert an overlapping (|->) or
   non-overlapping (|=>) implication.
7. Click the sequence operator button to insert ## at the current
   cursor position in the expression, by default. Three options are
   available: Insert Sequence Op., Insert Sequence Range Op., and
   Insert Sequence Range Op. with Cursor/Marker.
8. Click the add-signals button to add the signals selected in the
   primary nWave.
   - By default, the signal name is added, and a disjunction is
     performed when multiple signals are selected.
   - Use the Add Selected Signals and Values option to add both the
     signal name and the current value of the signals at the cursor
     time. The signal name and value are concatenated with ==.

   - The Concatenate Signals by and Concatenate Signals and Values by
     sub-menus are available. These enable you to choose the
     concatenation for Add Selected Signals and Add Selected Signals
     and Values separately.
9. Functions: Select functions from the Functions box.
   The Functions box provides the following functions:
   - $rose, $fell, $changed, $stable, $onehot, $onehot0, $countbits,
     $countones, $driver, $past
   - When you double-click a function, the function is inserted into
     the expression box. The text cursor is set inside the function,
     such as $rose(|).
   - You can then use the add-signals button to add the signals
     selected in the primary nWave to the expression. For example, if
     you insert $rose() into the expression, you can add a signal
     arb.a that is selected in the primary nWave. The final
     expression is $rose(arb.a).
10. Select the Evaluate at time 0 only option if you want the
    property to be evaluated once at time 0. This option is the same
    as the from_zero option in VC Static.
11. Click OK. After VC Static completes the command evaluation, the
    form is closed and the results are reflected in the property list
    with the selection. Alternatively, an error message is issued if
    there are issues with the entered details.

Freezing Signals
When you turn on Explore Property, the
RMB on signal pane->Explore Property->Freeze Signal Value
option is shown in the primary nWave. It is enabled when you select a
digital signal that exists in the design.

Some of the fields that are common to the Add/Edit Property form are
filled automatically. A freeze expression is formed at the start and
whenever the clock is changed.

Figure 1-54 shows freezing signals.

Figure 1-54 Freezing Signals

1. Name: The default name is freeze_prop. If the property already
   exists in the property list, then freeze_prop_<idx> is used.
2. Clock: By default, Verdi uses the default reference clock.
   However, if the default reference clock cannot express all value
   changes of a signal in integer cycles, a warning message appears
   stating that this operation did not succeed with the reference
   clock.

3. Trigger Condition: Posedge. This field cannot be modified.
4. Type: By default, the value is Assume.
5. Expression: Verdi forms an expression automatically according to
   the currently specified clock. The expression changes
   correspondingly with the Clock field.
6. Evaluate at time 0 only: By default, this option is enabled.

Freeze Multiple Signals

You can freeze multiple digital design signals simultaneously. Verdi
adds several assume properties with the default name and the default
reference clock. You can later edit each property individually.

Note:
The property name cannot be changed when you edit a freeze property.

Replotting
The Replot button enables you to regenerate a new FSDB according to
the current primary property you set after several operations.

Verdi can continue operations while VC Static generates the FSDB;
however, the features related to Explore Property are disabled. After
VC Static completes generating the FSDB, Verdi displays a message
that the new FSDB is ready to load. If you click OK, Verdi loads the
new FSDB. If the replot fails, Verdi displays a message with the
reason for the failure.

Figure 1-55 shows replotting results.

Figure 1-55 Replotting Results

After you click Replot, the following happens:

1. Verdi sends the check_fv -property <PrimaryProperty> command to
   VC Static and disables all Explore Property related features.
2. VC Static notifies Verdi that a new FSDB is ready.
3. If you confirm, Verdi loads the new FSDB.
4. An evaluation is performed on a design property if the task
   involves the design property, to get the correct value.
5. Verdi adds the new property and the associated signals into
   <PrimaryNWave>.
   - Verdi adds the property and signals into a new group at the
     bottom, with the group name Replot-Result#idx.
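For reference, the command that Verdi issues in step 1 has the form below; the property name is illustrative, borrowed from the traffic example used elsewhere in this section:

```tcl
# Re-solve the task using the currently selected primary property
check_fv -property traffic.assert_no_both_green
```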

Limitation
If you open multiple FSDB files in the primary nWave and add signals
from those files, those signals might be unrecognized after a
restore-signals operation is performed.

Exporting Properties
Click the export button to export all or the selected new properties
in Tcl or Report format. If a selected property has an extended
property, the extended property is also exported.

The default file format is Report format (*.rpt). You can optionally
select Tcl format (*.tcl) (see Figure 1-56).

Figure 1-56 Selecting File Format

Extending Traces of Assert or Cover Property
Click the extend button to extend a trace of an assert or cover
property for a specific number of cycles or until a condition is
satisfied (see Figure 1-57). This icon is enabled when you select an
assert or cover property.

Verdi inserts the newly-created cover property into the property
list, and the property belongs to the original property - for
example, traffic.assert_no_both_green_extend (see Figure 1-52).

The Add/Edit Property form has the following fields:

• Name: The default name is <Original Property Name>_extend. An
  index is appended if the name conflicts with a property in the
  property list - that is, <Original Property Name>_extend_idx.
• Clock: Set to the same clock as the extended property - clk2 in
  this case.
• Type: Set to Cover Property.
• Evaluate expression only at time 0: Disabled in this mode.
• Expression: Filled and selected automatically as
  <Number of extended cycles> |<Cond>.

Note:
Using the Edit Property form, you cannot change the clock, trigger
condition, type, and evaluate expression at time 0.
Figure 1-57 shows extend traces.

Figure 1-57 Extend Traces of Assert or Cover Property

Closing Explore Property
You can use any of the following operations in Verdi to close
Explore Property:

1. Closing the primary nWave
2. Changing the primary nWave
3. Clicking Close in the Explore Property dialog box

Performing any of these actions prompts the appearance of a dialog
box (see Figure 1-58). Select the Save selected new properties to
parent task check box to save newly-created properties to a parent
task. To quit Explore Property, click Quit. After you confirm
quitting, Verdi sends explore_end through an event to VC Static to
close the Explore Property session.

Note:
For operations 1 and 2, the dialog box does not appear and a new
property is not promoted to a parent task.

Figure 1-58 Closing Explore Property

Limitations

• Tcl commands input from the VC Static shell that create or modify
  properties are not synced with the Explore Property dialog box.
• If you open multiple FSDB files in the primary nWave and add
  signals from those files, those signals may be unrecognized after
  a restore-signals operation.
• A modified trigger condition is not supported.
• A runtime signal cannot be used in an expression.
• The Explore Property feature is not supported from a restored
  session.
• An explore session must end before a session can be saved.

Verification Tasks Management in GUI

A verification task encapsulates the setup information for a
particular invocation of the check_fv or check_cov command. There is
always one default verification task, and the initial setup commands
apply to the default task.

When a new task is created, some or all of the setup information is
copied into the new task. Subsequent setup commands apply to the new
task; the previous tasks retain their prior setup information and
are not modified unless the current task selection is reset.

Once multiple verification tasks are created, you can switch between
them to modify setups, run checks, view results, and debug
counterexamples. The design data and any properties compiled as part
of the design are common to all tasks. The initial state is global
to all tasks.

Each verification task has a unique set of results. When a new task
is created, results from the predecessor task (property or coverage
status, for example) are not copied into the new task. Results are
created in a task only when a check_* command is run in that task. A
specific design property may be proven in one task and falsified in
another because of differences between the constraints enabled in
each task.

Advantages of Using Verification Tasks

During design verification with formal tools, there are many
situations where the problem setup must be partitioned or temporarily
modified. Some examples include the following:

• Partitioning a group of properties into subsets to be solved
  separately.
• Assume-guarantee methodology, where a property is proven in one
  formal run and then used as a constraint to help prove other
  properties.
• Temporary over-constraint to improve formal engine performance or
  to validate property correctness. In other runs, these constraints
  may be removed.
• Trials of multiple formal engine selections or resource
  limitations.

All of these processes can be performed by copying and editing VC
Static project scripts, but it is much more convenient and efficient
to use the Verification Task feature to encapsulate multiple
derivative verification scenarios in a single script.

The following sections provide a detailed description of this
feature:

• Use Model
• Limitations of Verification Tasks

Use Model
The default initial task is named _default and cannot be deleted. It
can be renamed using the following command:

set_app_var fml_default_task_alias <aliasname>

A new task can be created with the following command:

fvtask -create <taskName>

The new task becomes the current task, and subsequent setup commands
apply to this task. It is an error to create a task with the same
name as another task. By default, a new task inherits all of the
configuration data from the previous task, but a variety of switches
are provided to control exactly which optional data are copied. For
example, the following command limits the assertions enabled in the
new task to those matching a specific name:

fvtask -create <taskName> -asserts top_check*

Though the default source of setup information for a new task is the
previous current task, you can copy setup data from a different task
with the -copy option, as follows:

fvtask -create <taskName> -copy <task_name>

A number of other options, listed in the command reference manual or
the command help, limit or select the data to be copied from the
source task.

When a (non-default) task is no longer needed, it can be deleted with
the following command:

fvtask -delete <taskName>

If you delete the current task, the predecessor becomes the current
task. You can always retrieve the current task with the following
command:

get_fvtask

The get_fvtask command also accepts switches and allows selection of
multiple tasks matching the specified criteria. See the reference
manual for details. The following command prints a summary of the
tasks matching the selection criteria:

report_fvtask *

Formal Verification Tasks
Task Name        Status
-------------------------
_default
mycov
myprops          (active)

The following is an example of the Verification task use model:

#-- global app vars
set_app_var fml_mode_on true

#-- compile and load the design
read_file -top dut -sva -format -sverilog -vcs

#-- clocking and reset
create_clock clk -period 100
create_reset resetn -low

#-- generate the reset state
sim_run -stable
sim_save_reset

#-- set up task to handle assertions and run
fvtask prop_task -create -copy _default -asserts * -assumes * \
    -attributes {{fml_max_time 1H} {fml_witness_on true}}
set_grid_usage -type RSH=6

check_fv -block
report_fv -list

#-- create a new task to look at unconverged properties,
#-- add a new pin constraint, and then run with new resources
set unconv [get_props -status { timed_out bounded }]
fvtask prop_unconv -create -copy prop_task -assumes * \
    -asserts $unconv \
    -attributes {{fml_max_time 8H} {fml_orc_tactic prop_heavy}}
set_constant free_config -value 0
check_fv -block
report_fv -list

Verification Tasks in the GUI

Verification tasks can be created, selected, and managed in the VC
Static GUI as well as from the Tcl script interface. As new tasks are
created, they appear in the Activity View hierarchy tree on the left
side of the window. Selecting the FORMAL tree entry brings up the
task summary, as shown in Figure 1-59. The hand-pointer icon
indicates the currently selected task. The Create New Task link opens
a dialog box for creating new tasks and selecting the properties and
constraints to include in each task.

Figure 1-59 GUI for Creating Verification Task

Controlling Task-Specific Variables and Attributes

Any formal application variables that are set using the set_app_var
command are global to all tasks. To enable task-specific variable
values, the set_fml_var command is provided. For example, to turn off
vacuity checking in the current task, use the following command:

set_fml_var fml_vacuity_on false

The get_fml_var and report_fml_var commands are provided to query a
specific variable or to generate a report on all task variables.
Table 1-4 lists the task-specific control variables.

Table 1-4 Task-Specific Variables

Attribute Name              Value Type  Default Value  Constraints
fml_conflict_check_depth    int         4
fml_conflict_debug_minimal  bool        true
fml_conflict_debug_reset    bool        false
fml_cov_enable_llk          string      off            {off low medium high exact}
fml_cov_fast_mode           bool        false
fml_cov_gen_trace           string      default
fml_max_mem                 string      4GB
fml_max_time                string      -1H
fml_min_unk                 int         0
fml_quiet_trace             bool        true
fml_orc_tactic              string      prop_check
fml_vacuity_on              bool        true
fml_witness_on              bool        false
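A quick sketch of querying these variables, following the argument form of the set_fml_var example above (exact output formats are tool-dependent):

```tcl
# Query one task-specific variable in the current task
get_fml_var fml_vacuity_on

# Report all task-specific variables for the current task
report_fml_var
```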

It is also possible to access task variables as attributes on the
task object. For example, the following command returns the value of
the fml_vacuity_on variable:

get_attribute [get_fvtask] fml_vacuity_on

To get the complete list of attributes for a task, use the following
command:

list_attributes -application -class task

It is also possible to use the fvtask command to set attributes on a
task. For example, the following command sets the current task to
task2 and sets the maximum memory available for formal searches to
2GB:

fvtask task2 -attributes {{fml_max_mem 2GB}}

Limitations of Verification Tasks

The following are the limitations of verification tasks:

• Verification tasks are supported only for formal properties,
  formal coverage, and sequential equivalence. The other static
  applications in VC Static do not support tasks.
• All verification tasks share the same design data. A single
  invocation of vc_static_shell can have only one read_file command.
• All verification tasks share the same initial state.
• The script properties created by fv_assume -env and
  fv_assume -stable are global to all tasks.
• Script property names must be unique across all tasks.
• Clock specifications are global to all tasks.
• Only one task at a time can run check_fv, check_seq, or check_cov.

Enhancements to VC-Static Grid Functionality

VC Static provides several enhancements to grid support with this
release. Earlier, VC Static Formal Verification and VC Formal
Coverage Analyzer supported the following grid functionality:

- Grid submission for [LSF|SGE|RSH], where RSH is for
  multiprocessing on the local host.
- If an error occurs during grid submission,
  check_[fv|cc|cov] stops and sends an acknowledgment.

The grid support in VC Static [Formal Verification|Formal Coverage
Analyzer] is enhanced to support the following functionalities:

- Use the grid control in the GUI
- Enhanced grid submission error debugging
- Improved visibility of grid jobs
- Ability to terminate grid jobs

This feature can be classified into three categories: grid
configuration, grid report, and grid control. You can use this feature
in the VC Static shell and the GUI.

Use Model
The following sections provide a detailed description of this feature:

Grid Configuration
Grid Report
Grid Control

Grid Configuration
You can use the set_grid_usage commands to configure a VC
Formal and VC Formal Coverage Analyzer run to launch workers
(executing solver tasks) inside server farms, and then retrieve the
status of grid submissions.

You can configure an [LSF|SGE|RSH] run in the shell using one of
the following commands:

set_grid_usage -type [LSF|SGE]=<#_of_worker> -control {<submission_cmd/opt>}

or

set_grid_usage -type RSH=<#_of_worker>

No control string is needed for RSH. For [LSF|SGE], the grid
submission commands and options depend on the setup of the
computing farm and differ from farm to farm. Therefore, the VC
Static GUI displays the following:

- The current grid usage setting, which you can edit
- Control strings commonly used for [LSF|SGE]

The following are commonly used control strings:

LSF: bsub -q bnormal -R arch==glinux -R rusage[mem=4000]

SGE: qsub -P bnormal -V -l arch=glinux,mem_free=4G
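For example, a set_grid_usage call combining a worker count with one of
these control strings might look as follows (the queue name and resource
options are site-specific placeholders):

set_grid_usage -type LSF=8 -control {bsub -q bnormal -R arch==glinux -R rusage[mem=4000]}

This requests eight LSF workers, each submitted with the given bsub options.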

Grid Submission Error Report


Earlier, if an error occurred during grid submission, the
check_[fv|cc|cov] command was terminated and an
acknowledgment was received. However, the error message did not
provide complete details on how and why the grid submission failed.

The following features are provided to add more visibility into the
status of the grid job submission:

- If a grid submission is rejected, the VC Static [shell|GUI]
  displays the exact message returned by the computing farm and
  then suggests the next step, if possible.
- If workers are queued, the VC Static [shell|GUI] does not notify
  you automatically. However, you can find out whether any worker
  is queued using the Solver Task View. See section Verdi
  Integration: Default Source Viewer. This feature displays the
  status of solver tasks in the VC Static GUI and also allows you
  to query the same in the VC Static shell.

Grid Report
The VC Static [shell|GUI] provides the status of workers and tasks,
giving you a better understanding of the utilization of the grid.

Alive Workers Status
Once the grid job spawns, VC Static [shell|GUI] reports the
location (host name) of all the worker processes running inside the
computing farm as well as the machine load for each of the host
names.

The machine load statistics include the following:

- Total number of computing slots
- Number of slots occupied
- Total physical memory
- Amount and percentage of available memory
For example, Table 1-5 shows that the quality of the result of this
ongoing run could be affected by a loaded machine inside the
computing farm. If this run is for performance comparison, you must
reconfigure the grid setup to avoid the busy machine or obtain
symmetric machines.

Table 1-5 Quality of the Result

                                  CPU (%)            Memory (GB)
Worker ID  Hostname   Status      User  System+nice  Total  Used
1          vgintsb72  alive       12    2            32     20
2          vgintsb72  alive       12    2            32     20
3          vgintsb60  alive       99    12           128    122
4          vgintsb61  terminated  -     -            -      -
5          vgintsb62  alive       15    2            128    66
...        (up to the number of workers specified via
           set_grid_usage -type [LSF|SGE]=<#_of_worker>)

As another example, Table 1-6 shows that all workers are allocated
to the same big-memory machine inside the computing farm. You
can then reconfigure the grid setup to avoid this situation. For
example, in SGE, you can control the number of CPU slots assigned
to each worker using the qsub -pe mt <number of cpu slots
per worker> command. If each machine contains sixteen CPU
cores but only 32GB of physical memory, the qsub -pe mt 4
command ensures that the farm can assign at most four workers to
each machine, so that each worker has 8GB of memory to use.

Table 1-6 Quality of the Result - Example 1

                                   CPU (%)            Memory (GB)
Worker ID  Hostname    Status      User  System+nice  Total  Used
1          vgintwm175  alive       20    3            128    127
2          vgintwm175  alive       20    3            128    127
...
20         vgintwm175  alive       20    3            128    127
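Following the 16-core/32GB example above, a hedged SGE configuration
sketch (the project name and worker count are placeholders):

set_grid_usage -type SGE=20 -control {qsub -P bnormal -V -pe mt 4 -l arch=glinux,mem_free=4G}

Reserving four slots per worker caps each 16-core host at four workers,
leaving 32GB / 4 = 8GB of memory per worker.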

Queued Versus Running Tasks


See Section Verdi Integration: Default Source Viewer for more
details.

Grid Control
With the Progress Reporting feature, the VC Static shell and the GUI
report the status of solver tasks. For each solver task, you can see
the goal and the engine assigned to the task. If you are an advanced
user who wants control over which engines are running and which
goals the tool is trying to resolve, the following section describes
how you can control the queued and running tasks.

Canceling Queued/Running Tasks


The [cancel task <solver_task_id>] command removes a
task if it is still queued, or terminates it if it is running.

Managing Progress Reports in GUI

The progress reporting feature provides more visibility into VC
Formal orchestration and engines. Currently, VC Formal reports
information such as the number of proven, falsified, and bounded
properties (along with depth).

With this feature, you can obtain the following additional information
that helps you to understand and debug convergence issues:

- Which solvers are working on a given goal, and their status:
  - You can see which engines are unproductive and which is the
    best engine for a given problem. For hard goals, you can see
    the bounded depth reached by different engines.
- Which goals are being worked on by orchestration at any given
  time:
  - You can see how goals are being prioritized by orchestration.
    For example, you can see that safety goals are given more
    priority than liveness goals. This enables you to instruct
    orchestration to solve goals in a different order by creating
    multiple user-level tasks.
- Whether the solvers are running out of memory, returning
  inconclusive results, or hitting errors:
  - You can see which engines are running out of memory or
    crashing. For memory issues, bigger-memory machines can
    be provided.
- The number of workers available or in use:
  - You can see how many grid resources are actually allocated
    and how the resources change over time. For example, if you
    asked for 100 slots and only got 10, you can take appropriate
    action, such as asking for more resources or simplifying the
    problem to get acceptable runtimes.
- The progress of the engine run on a goal.

A GUI is available to display this information.

Each check_* command creates a Verification Problem (VP) that
is handed to the orchestration. Each verification problem is a proof
object and has its own set of constraints (assumptions) and
assertions (which can include different types, such as SVA asserts,
vacuity checks, and SVA covers). Each verification problem has a
unique integer ID to distinguish it from other VPs.

The orchestration solves a VP by creating multiple SolverJobs (SJs).
Each SJ has the following characteristics:

- It is a unit of work dispatched on a local/remote machine or grid.
- It runs on a single core and has a unique integer ID.
- It has a label (not unique) and can employ one or more engines,
  but they run sequentially.
- It has a subset of the goals in the VP.
- It eventually gets assigned to a worker on some machine.

Figure 1-60 Two Verification Problems VP1 and VP2 and the Solver Jobs
Working on Those Verification Problems

Use Model
You can use the following two commands to view the details of
orchestration:

- Goal view: report_fml_engines
- Solver job view: report_fml_jobs

Both these commands report information about the latest check_*
command executed in the current verification task. For more details
on the commands, see the VC Static Command Reference Guide.

Goal View
The goal view (using the report_fml_engines command)
reports, for each goal, the different SJs in which the goal is present
and the status of the goal in each of those SJs. This is displayed in
a tabular format for each goal. An example of the output is shown in
Figure 1-61.

By default, the report_fml_engines command prints a summary


for the latest check_* command in the current verification task.

The summary includes the total number of goals in a verification
problem and the numbers of proven, falsified, and bounded goals.
The summary by itself does not provide any more information than
report_fv or report_seq.

Figure 1-61 Goal View for a Particular VP

The table in Figure 1-61 shows information about goals 0 to 11. Goal
0 corresponds to the vacuity component of the
traffic.chk.assert_green_no_waiting_first property,
while goal 1 corresponds to the property itself. Goal 0 was falsified
using engine s1 in 3 clock cycles (FDepth) using solver job number
2. You can filter the table based on column values.

The syntax of the report_fml_engines command is as follows:

report_fml_engines [-no_summary] [-list] [-verbose] \
    [-depthCounts] [-depthVsTime] \
    [-status <proven, falsified, bounded>] \
    [-jobId <num>] [-goalId <num>] [-engine <string>] \
    [-subtype <vacuity,witness,->] \
    [-jobStatus <scheduled,running,completed>]
Where,

-no_summary: Does not print the summary information.

-list: When this option is specified, one row is printed for each
goal. The row contains a conclusive result proven/falsified for a
property, if available. If no conclusive result is available, it contains
the best bounded proof depth (if available). If this is not available
it contains no_status. The entries in the row are as follows:
GoalId, Status, Engine name, Time, SDepth,
FDepth, JobId, SubType, PropertyName

- The PropertyName column contains the name of the property.


For a given property, there can be multiple goals generated -
one corresponding to the actual property, one for witness (if
enabled) and one for vacuity (if enabled). The GoalId column
contains a unique ID assigned to each goal arising from a
property. The SubType field contains - for the actual property,
witness for witness version of the property, and vacuity for
vacuity check corresponding to the property.

- The JobId field gives the number of the solver job that got a
proven/falsified result for the goal or the best bounded result.
In addition, the JobId field has an indicator to indicate the status
of the job: (s) for scheduled, (r) for running, (c) for completed.
- The Engine column gives the engine name that got proven/
falsified result for a goal or best bounded result.
- Status can be one of the following:
no_status, proven, falsified, bounded

- SDepth is the safe depth of the goal. This is the depth for which
  the goal has a bounded proof. FDepth is the depth at which a
  goal is falsified.

- The Time column is the sum of the time taken by all engines
  working on this property, in seconds. For the SEQ application,
  sub-proofs named sdcp_* and edcp_* are created to simplify
  the problem. The Engine column for SEQ can contain names of
  these sub-proofs. The runtime for a sub-proof is a sum total of
  all engines that were run in that sub-proof. It is not wall-clock
  time.
-verbose: Prints multiple lines per goal. Each line corresponds
to one engine working on a goal. The format of the line is the same
as for the -list option.
Note:
In the verbose report, the status, time, SDepth, and FDepth
of that particular engine are reported.

-depthCounts: Displays two tables, one for falsified goals and


one for bounded goals. Each row in the table has two columns.
The first column is the depth and the second column contains the
number of properties that are falsified/bounded at that depth.
-depthVsTime: Reports the change in BMC depth with respect
to time for a particular goal (for each engine that worked on that
goal). This option works only when the application variable
fml_orc_bmc_depth_profile is set to true and the -goalId
option is also given.
-status <val>: Lists the goals that have a particular status
proven, falsified and bounded.
-jobId <id>: Shows goals where solver job <id> provided a
conclusive result or provided the best bounded depth.
-goalId <gid>: Shows information for a particular goal id.
-engine <name>: Lists goals where the engine with the given
name provided a conclusive result or the best bounded depth.

-subtype <stype>: Lists only goals with a matching subtype
(-, vacuity, or witness). The meaning is as follows:
- "-" refers to the original assertion
- vacuity refers to the vacuity component of an assertion
- witness refers to the witness component of an assertion
-jobStatus <js>: Shows only rows where the corresponding
job has the specified status <js>, where <js> can be scheduled,
running, completed.
Note:
With the -list option, the job that has provided best result
so far is shown.
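As a usage sketch, the options above can be combined as follows (the
goal ID is illustrative):

report_fml_engines -list -status falsified
report_fml_engines -verbose -goalId 0
report_fml_engines -depthCounts -no_summary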

Solver Job View


The report_fml_jobs command provides the SJ view, reporting
the status of each solver job. It displays the SJ status (scheduled/
running/completed), machine name, time, memory, and the number
of goals assigned to that SJ in a table format, as shown in
Figure 1-62.

Figure 1-62 Example of the Output Task View for a Particular VP

In Figure 1-62, information is listed for solver jobs 0 to 15. Solver job
1 corresponds to the cc engine. All jobs ran on the godel machine
and completed normally. You can filter the table based on particular
column values.

The syntax of the report_fml_jobs command is as follows:

report_fml_jobs [-no_summary] [-list] [-proof proofname] \
    [-status <completed, running, scheduled>] \
    [-reason <crash, memout, misc_error, normal,
              orc_killed, sig_killed, start_fail, timeout>] \
    [-engine string] [-host string] [-jobId <id>]
where,

-no_summary: Does not print summary.

-list: Displays one line per SJ. The line contains the solver job
ID, status (scheduled/running/completed), reason (normal/
timeout/memout/orc_killed/sig_killed/crash/start_fail/
misc_error) for completed SJs, wall-clock time in seconds,
memory consumption in MB, number of goals in the SJ, engine
ID in the SJ, engine name in the SJ, timestamp for when the SJ
was started, the ID of the physical machine where the SJ is
running, and the machine name.
-status <val>: Shows solver jobs matching a particular status.
-reason <val>: Shows solver jobs matching a particular
reason for completion.
-engine <eng>: Shows solver jobs that are using a specified
engine name.
-host <m>: Shows solver jobs that were run or are running on
a particular host.
-jobId <id>: Shows the solver job <id>.
Note:
When a job (engine) is started, the initial number of goals
assigned to that job is reported in the #Goals column. For
certain engines, it is possible that the engine can work on more
goals than it was assigned to improve QoR. In such cases, the
#Goals reported in the report_fml_jobs -jobId <j>
command may not match with the number of rows reported in
the report_fml_engines -verbose -jobId <j>
command.
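Similarly, the report_fml_jobs options can be combined as follows (the
job ID is illustrative):

report_fml_jobs -list -status completed -reason memout
report_fml_jobs -jobId 1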

Bound Statistics for a Goal
The goal bound statistics report how the proof depth changes for a
goal over time. To enable the profiling mode, set the following
application variable to true:

set_app_var fml_orc_bmc_depth_profile true

Use the -depthVsTime option of the report_fml_engines
command to get the profile depth information.

An example of the output is shown in Table 1-7.

Table 1-7 Bound Statistics for Goals


Goal ID 0
Time Depth
10 1
20 2
30 3
40 4
50 5
60 6
70 7
80 8
90 9
100 10

Bounded Coverage Analysis

In a formal verification flow, when the verification runs are complete,


you can measure the code coverage targets that are hit within the
proof bounds achieved with the same formal setup - that is, under
the same constraints and abstractions for which proof bounds were
computed. The code coverage targets are identical to the targets
used by simulation tools, such as line, condition, toggle or Finite
State Machine (FSM) coverage.

Generally, you first run the formal assertions and determine the proof
bounds reported by the formal verification tool. Then, within the
same verification task, the smallest proof bound is supplied as an
input and the formal coverage tool reports the code coverage targets
reached within that bound.

VC Formal Coverage Analyzer now lets you find regions of the


design that have not been activated during the bounded exploration
of the formal assertions. This helps you to identify potential holes as
well as determine whether a certain bounded proof is good enough
to sign off the property or if more work is still needed.

Use Model
To enable bounded coverage analysis, the following setup is
required:

1. Specify the bounded coverage depth:


set_app_var fml_max_proof_depth <value>
where, value is the bounded proof depth for coverage analysis.

Note:
To get the minimum proof depth of all the properties in a formal
run (for example, check_fv), use the report_fv
-min_proof_depth command.

The value returned from this command must be used as the input
to the fml_max_proof_depth application variable.

2. Specify the desired coverage metrics for the bounded coverage
analysis:
read_file -cov <metric_type> -sva -top
<top_module_name> -vcs {<vcs_command_line>}
3. Run the bounded coverage analysis:
check_cov
4. Save the coverage database:
save_covdb <options>
5. Review results from bounded coverage analysis:
To get a list:
report_cov -list
From GUI:
Invoke view_activity and click Invoke Verdi Coverage.
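Putting steps 1 through 5 together, a minimal bounded-coverage session
might look like the following sketch (the depth, metric type, module
name, and file list are illustrative placeholders, and the exact
read_file options depend on your design setup):

set_app_var fml_max_proof_depth 12
read_file -cov line -sva -top dut_top -vcs {-f filelist.f}
check_cov
save_covdb -new bounded_12
report_cov -list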

Figure 1-63 Bounded Coverage Flow Diagram

Figure 1-64 Coverage Visualization in Verdi (via view_coverage)

You can also compare two bounded coverage results for further
analysis. More details on different scenarios are described in the
following sections.

Bounded Coverage Analysis for Inconclusive Properties From


Formal Verification
Given a formal verification task consisting of constraints, assertions,
line/cond/tgl/fsm coverage targets, snip points and so on, you can
invoke the following:

Formal Verification Phase
- Invoke the check_fv command and complete the run.
- The tool analyzes the results from the formal verification phase
  and finds a minimum bounded depth (K, with respect to the
  reference clock) across the assertions whose results are
  bounded proofs.
- The report_fv -status inconclusive command gives
  the list of assertions whose results are inconclusive (a bounded
  proof is also reported as inconclusive).
- The report_fv -min_proof_depth command gives the
  minimum bounded depth (K).

Bounded Coverage Analysis Phase
- Set the proof depth for the tool to K. This also causes the tool
  to use BMC/proof engines for further analysis.
- Invoke the check_cov command on line/cond/tgl/fsm targets
  and complete the run.
- The report_cov command reports the following:
  - Covered within bound K (no further action; good result):
    primary status is covered. The Depth field indicates the
    number of cycles covered (<= K).
  - Unreachable (implies further action by the user): primary
    status is uncoverable. The Depth field is empty, meaning
    exhaustive coverage.
  - Inconclusive (goals not solved up to depth K because
    resource limits were reached).
  - Terminated (not solved because terminated by the user).

The main points of interest here are the goals that have been
detected as unreachable by the tool and the goals that have not yet
converged up to the user-specified depth.

For terminated (with cycle depth less than K) or inconclusive goals,
you can run them again with an increased resource limit to see if
they fall into the first three categories, as follows:

- Invoke the save_cov_exclusion command to dump the
  unreachable coverage goals into an exclusion file.
- Save the inconclusive goals (if needed) that have not yet
  converged up to the specified depth K in separate exclusion files
  using the following set of commands:

  set a [get_props -status inconclusive -filter
      {trace_depth==<k>}]
  save_cov_exclusion -file inconclusive.el -targets $a

  These goals have the annotation
  VC_COV_INCONCLUSIVE (depth = <val>).
- Save the coverage database from the bounded coverage run. For
  example: save_covdb -new bounded_12
  A coverage database with the name bounded_12 is created in
  the CWD for the unreachable targets.

Viewing Verdi Coverage GUI From VC Formal Coverage Analyzer


Invoke the view_activity command and click Invoke Verdi
Coverage to view the coverage database and the exclusion file
for the unreachable targets.

By default, when you click Invoke Verdi Coverage, the coverage
database imported using the read_covdb command and the
latest exclusion file saved using the save_cov_exclusion
command are loaded. In the absence of an imported coverage
database, the latest database generated using the save_covdb
command is loaded.

If you want to override this default behavior, use the
view_coverage command. The -cov_input and -elfile
options of the view_coverage command enable you to specify
the path of the coverage database to be imported and the
elfile(s) that must be loaded along with the coverage database.

Figure 1-65 Invoke Verdi Coverage

Bounded Coverage Analysis For User Estimated Proof Depth


This flow is similar to the flow described in section Bounded
Coverage Analysis for Inconclusive Properties From Formal
Verification. The only difference is that in this mode, you decide
and specify the depth for which the tool needs to perform bounded
coverage analysis. This mode does not require you to perform the
formal verification phase to extract the bound depth.

Using Verification Task to Manage Bounded Coverage Flow
If you want to run the property verification in the first phase to get
the minimum proof depth, and then run the bounded coverage flow
after setting the minimum proof depth, use verification tasks as
follows:

- In the parent task, call the read_file command enabling both
  the sva and cov <desired_metrics> options.
  - Run the formal verification phase in the parent task.
  - Get the minimum proof depth.
- Create a child of the parent task, setting the minimum proof
  depth as follows:
  - fvtask <boundedFlowTaskName> -create copy
    <parentTaskName> -attributes {
    {fml_max_proof_depth <min_proof_depth>} }
  - Run the coverage analysis in this task.
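A hedged end-to-end sketch of this parent/child task flow (the task
names, depth value, and read_file arguments are placeholders):

fvtask parent_task
read_file -cov line -sva -top dut_top -vcs {-f filelist.f}
check_fv
report_fv -min_proof_depth
fvtask bounded_task -create copy parent_task -attributes { {fml_max_proof_depth 12} }
check_cov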

Importing Coverage Database in Bounded Coverage


You can import the coverage database to prune off covered targets
using the read_covdb command. In this case, the bounded
coverage phase targets only the uncovered goals from the imported
database.

Comparing Coverage Between Two Bounded Runs


Create two children tasks with two bounded proof depths and
compare the coverage results between the two.

Methods to Compare
- Save the database for covered/unreachable targets and the
  exclusion file for inconclusive status across multiple runs,
  generate the URG report with these, and compare them.
- Generate textual reports across the different runs and diff them.
Figure 1-66 Coverage Database Visualization in Verdi for Condition
Coverage (via view_coverage)

Figure 1-67 Exclusion File for Uncoverable Condition Coverage Targets

SEQ Debug Flow in GUI

VC Formal SEQ lets you debug counterexamples. When SEQ
detects a failure, it issues a message as shown in Figure 1-68.

Figure 1-68 VC SEQ Debug Error Message

Use Model
The following sections provide a detailed description of this feature:

Results Reporting
Activity View

Results Reporting
Result reporting in SEQ is the same as in other formal applications.
All assertions during a check_seq run start in the checking state.
Once an engine completes running, it updates the result as proven,
falsified, or bounded. When a check_seq run times out, the results
of the inconclusive assertions remain inconclusive and an
appropriate message is issued. If no conclusive result is found for
a particular assertion and the check_seq command completes, the
result becomes inconclusive.

In SEQ, the original assertions are automatically decomposed into


many potentially simpler internal equivalence points. Progress on
the internal equivalence points can be observed using the
report_proofs command.

Activity View
Along with the various general changes to activity_view, one
important change for SEQ is that activity_view has a SEQ tab.
All SEQ-specific assertions generated using the map_by_name
command are part of this tab. All assumptions (SEQ or otherwise)
are part of the environment tab.

At this point, the GUI is launched with the view_activity


command as shown in Figure 1-69.

Figure 1-69 Activity View

Click the falsified tree node in the Activities pane.

Click falsified in the Properties pane. The details of the falsified
property are displayed in the Details pane at the bottom of the
screen, as shown in Figure 1-70.

Figure 1-70 Select Falsified Property

Click the Property link displayed next to Debug in the Details pane.
This action invokes Verdi and the screen shown in Figure 1-71 is
displayed.

Figure 1-71 Verdi Debug Screen

Note the following:

The RTL hierarchy is shown in the VC Static Instance pane at
the top left. Here, the seq_top module contains an instance of
the specification design (spec) and an instance of the
implementation design (impl). The impl version has the cg2,
cg3, and cg4 modules; these are the clock gaters.
The other two panes at the top show the source code for the two
designs. The signal that failed in the output comparison is
highlighted.

The pane at the bottom shows the waveforms. The script property
_map_out_product failed and the two signals it compares are
shown under Support-Signals. The origins of these signals are
marked with S (for specification) and I (for implementation).
You can right-click the failing signal and select Show Driver/Load
Signals -> Driver Signals to pull the signals driving the failed
signals into the waveform display. This action brings in the driver
signals from both the spec and the impl designs as shown in
Figure 1-72.

Figure 1-72 Driver Signals in Different Stages

As shown in the figure, the signal stage4 is different between the


spec and the impl designs. You can right-click stage4 and trace
back further using the same method of displaying driver signals
described earlier, resulting in the screen as shown in Figure 1-73:

Figure 1-73 Tracing Back Signal - Stage4

Tracing back, you can see that stage3 in the impl is different, as
shown in Figure 1-74. Tracing back further, you can see that stage2
is the same for both designs. Thus, it is determined that the
discrepancy begins at stage3. Now, you can easily determine the
root cause of the discrepancy.

Figure 1-74 Tracing Back Signal - Stage3

The code and waveform windows are linked; hence, selecting a
signal in the code pane and pressing Ctrl-W adds the spec and impl
versions of that signal to the display. Figure 1-75 shows this for the
8-bit signal ll:

Figure 1-75 8 Bit Signal - ll

Several configuration options are available for SEQ debugging in
Verdi, as shown in Figure 1-76.

Figure 1-76 Configuration Options Available for SEQ Debugging in Verdi

SoC-IP Verification

Verification is one of the biggest challenges in the design of modern
SoC and IP blocks. Verification engineers spend tremendous effort
on learning the details of each IP design, developing complex
testbenches and test cases for each IP, and debugging IP behavior.

IP verification is illustrated in the following figure:

Figure 2-1 IP Verification
[Flow diagram: an IP with lint and CDC complete, model checking,
and links to specs feeds verification planning; testbench creation
(support for UVM, SystemC, SV, V2k, assertions, UPF integration);
coverage model creation (code coverage, covergroups, cover
directives, AMS coverage, formal coverage); verification execution
(simulation/LP protocol compliance, constrained-random tests,
testbench validity and checker quality/quantity checks); regression
analysis with merging, reporting (URG, Verdi), coverage gap
closure, and exclusion management; ending in handoff to
integration.]

Verification Compiler Platform aids IP verification, as described
in the following chapters:

Verification Planning
Testbench Creation
Validating Testbench: Functional Qualification
Verification Execution
Verification Debug
Closing Coverage Gaps

2
Verification Planning
Typically, verification starts with creating a plan that covers what
needs to be tested within a defined time frame. It also defines the
coverage criteria used to determine that verification is complete.
Some basic verification, such as checking a design against various
coding standards and design rules, is performed on design blocks.
This basic verification is performed before the design is handed off
to verification engineers for sub-system and chip-level verification.

This chapter describes the following capabilities that are available as


part of the L-2016.06-SP1 release of Verification Compiler Platform:

Native LP Planner Support


Cell and Image Selection for Specification Linking
Integrated Planning for VC VIP Coverage Analysis

Native LP Planner Support

Verification Planner supports adding Low Power (LP) plans into a
top-level HVP plan as sub-plans. This feature helps provide a
complete overview of all the metrics being considered for coverage.
The added LP sub-plan is an instance in a top-level HVP plan. The
score calculated for the plan is the average score of all metrics,
including LP metrics. You can then view the LP elements in the URG
reports.

Verification Planner provides a predefined LP sub-plan named
snps_LP, which is defined in the simv.vdb/
lp_coverage_metric.hvp plan file. This LP sub-plan cannot be
edited.

You can add the LP sub-plan snps_LP to any HVP plan without
including the definition plan file in your top-level HVP plan, as shown
in the following example:

plan top;
  feature feat_line;
    measure Line Measure_1;
      source = "tree: tb.i_top.level_1_test_a_dut";
    endmeasure
  endfeature
  feature feat_lp_plan;
    subplan snps_LP;
  endfeature
endplan
Verification Planner automatically finds the plan definition for the
snps_LP plan from the simulation coverage database (simv.vdb),
when it needs to be annotated. If simv.vdb does not contain the
plan definition, an empty plan node is used as shown in the following
example:

urg -dir simv.vdb -plan plan_w_lp_subplan.hvp
plan top;
feature feat_line;
measure Line Measure_1;
source = "tree: tb.i_top.level_1_test_a_dut";
endmeasure
endfeature
feature feat_lp_plan;
subplan snps_LP;
endfeature
endplan
By default, the LP metric is not defined in the top-level HVP plan.
When an LP object is added to an existing HVP plan, the LP object
is treated as low-level cover group object and is given a group metric
score instead of an LP metric score.

Note:
If you provide only the -metric option and do not specify the
group, the annotated report does not capture LP objects.
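For example, to keep LP objects in the annotated report when restricting metrics, include group in the metric list. The following invocation is a sketch (the exact metric list depends on your coverage setup):

urg -dir simv.vdb -plan plan_w_lp_subplan.hvp -metric line+group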

This section consists of the following subsections:

Viewing LP Elements in the HVP Hierarchy Page
Limitation

Viewing LP Elements in the HVP Hierarchy Page

You can view the features of the LP plan in the HVP Hierarchy page
of the URG report, as shown in the following figure:

Figure 2-1 LP Subplan in HVP Hierarchy Page

You can see the LP sub-plan feature details in the UPF LP page, as
shown in the following figure:

Figure 2-2 LP Subplan Feature Details

Limitation

This feature functions under the following limitation:

LP plan integration is not supported in the HVP Excel and Word
flow.

Cell and Image Selection for Specification Linking

Verdi provides the capability to perform a cell-based selection of
running text in paragraphs or tables, and an image selection for
specification linking.

Previously, when you selected the content of a table cell that spans
multiple lines, the tool highlighted the contents of the selected cell
as well as the contents of the adjacent cells, as shown in the
following figure:

There was no option to select the contents of only one cell in a table
and link it to a plan.

With the enhancements to the cell-based selection described in
the section Cell-Based Selection, the selection of the content of a
single table cell is now possible.

Images are important elements in a specification as they capture
pivotal details, such as timing diagrams, block diagrams, register bit
assignments, and FSM charts.

Previously, Verdi did not provide a method to detect images and
therefore, you could not link images in a specification to a plan. This
resulted in missing out on important information from the
specification.

Verdi now provides the capability of image selection as described
in the section Image Selection, using which you can select images
and link them to a plan.

Cell-Based Selection

The selection mode for specification linking is text only by default. To
enable cell-based selection, set the following environment variable:

setenv ENABLE_SPECLINK_CELL_IMAGE_SELECT true

After setting the environment variable, you can perform cell-based
selection of text by selecting the Cell-Based Selection icon in the
toolbar (see Figure 2-3):

Figure 2-3 Cell-Based Selection Icon

You can perform cell-based selection of text by dragging the mouse
over the text you want to select (see Figure 2-4).

The cell-based selection of text feature functions under the
limitations specified in the Limitations section.

Figure 2-4 Cell-Based Selection of Text

Note:
Verdi uses the following rules to determine what text is part of
the selection:
Verdi draws a rectangular box from the start point to the end
point.
- The selection is character based and not word based.
- Characters outside the box are excluded from the selection.
- Characters completely inside the box are included in the
selection.
- Characters partly inside the box (their boundary intersects
the outline of the box) are excluded from the selection.
Figure 2-5 illustrates when the character is included in the box.

Figure 2-5 Text Selection

After the text is linked into the HVP planner, the linked text in the
specification is enclosed, as shown in the following figure:

Figure 2-6 Linked Section

You can now use the normal flow of linking the selected cell to a
feature by clicking New feature or Link as a sibling feature to
create a new feature or link to an existing selected feature,
respectively. The abstract of this link is displayed in the text box if it
is shorter than 1000 characters. For text that is longer than 1000
characters, only a part of it is displayed in the text box.

Figure 2-7 Linking Selected Cell to Feature

Table Cell Selection
You can select a cell in a table and link it to a plan, as shown in
Figure 2-8.

Figure 2-8 Selecting a Cell in a Table

Image Selection

Similar to cell-based selection, you need to set the
ENABLE_SPECLINK_CELL_IMAGE_SELECT environment
variable to enable image selection.
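This is the same environment variable shown earlier for cell-based selection (csh syntax):

setenv ENABLE_SPECLINK_CELL_IMAGE_SELECT true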

After the environment variable is set, you can select an image in two
ways. The detailed steps are as follows:

Approach 1

1. Press and hold the right mouse button.

2. Drag the mouse over the image to the end point, and then release
the button. The image is selected.

Figure 2-9 Selecting an Image

Approach 2

1. Move the mouse over the image you want to select.
2. Double-click on the image to select it.
When you select images and text together and link them as a new
feature, the feature name and abstract of the link are the same as
for a link without any image selected, as shown in Figure 2-10 and
Figure 2-11.

When you select only images without any text, the feature name
and the abstract are the combined identifier string of the images. For
example, if you select two images with IDs image1 and image2,
the generated feature name is image1_image2, as is the abstract
of the linked section, as shown in Figure 2-12.

Figure 2-10 Abstract for Text and Image Selection

Figure 2-11 Abstract for Text Only Selection

Figure 2-12 Abstract for Multiple Image Selection

Examples for Image Selection

The following are examples that describe how sections containing
images are handled during spec-linking. Consider the following two
cases for a linked section containing images:

A section contains both images and text.
A section contains image(s) only.
These two cases are handled in different ways during review and
when updating the section to a new document during specification
linking.

Case 1: A section contains both images and text
In this case, spec-linking follows this process:

Review and update the text to the new section in the new
document.
Find the images in the updated section in the new document.
Generate a new section based on the text section and images.
The following decision table lists all the possibilities and review
results.
Table 2-1 Review Results for Text and Image Selection

Text \ Image    Matching    Missing     New
Matching        matching    conflict    conflict
Conflict        conflict    conflict    conflict
Missing         conflict    missing     conflict
New             conflict    conflict    conflict

The review result for an image depends on the updated text section
in the new document. There can be three results for image
comparison: Matching, Missing, New. The result conflict is not
applicable for images.

Matching: The image in the old document is found in the new
document and it is present in the updated text section. In this
case, "image is found" means that the image in the old and new
documents is the same.
Missing: The image is not found in the new document, or it is
found but it is NOT in the updated text section.
New: The image is in the updated text section but not in the old
document. In Verdi, New is treated as a Conflict.

The image and text work as a single link in the review process, so
whatever action (accept or reject) you choose to apply, the action
is applied to both the image and the text. You cannot treat
the text and the image as separately reviewable objects.

Case 2: A section contains only image(s)

Because the text is not considered in the review process, the search
scope of an image is not limited to the updated text section in the
new document. The whole document is within the search scope. For
example, if an image at page 10 was selected and linked as a section
in the old document, the Verdi spec-linking feature searches for the
image at page 10 and then around page 10, that is, page 11, page 9,
page 12, page 8, and so on, until the image is found.

If the image is found, then it is marked as matching; otherwise, it is
marked as missing. The review result types new and conflict are
not applicable in this case.

Limitations

The specification-linking feature functions under the following
limitations:

Linking of multiple columns of a document at one time is not
supported if the selection is done as shown in the following figure.

Figure 2-13 Cell-Based Selection Limitation

Selecting text in shapes from Word/Excel/PPT is supported;
however, selecting a group of shapes is not supported. For
example, you cannot select a group of images as shown in the
following figure:

Selecting text from an MS Visio diagram is supported; however,
selecting the whole diagram is not supported.
Selecting an image as a whole is supported.
It is recommended that you do not use cell selection to select both
text and an image. This might result in a wrong review result
if the image in the updated document is moved to another location.
Figure 2-14 Text and Image Selection

Rotated text is not well supported; you may not be able to select it.
This also impacts line coverage. For example, in the following
figure, the selected text covers two lines.

When the text is rotated as shown in the following figure, the text
selection would have to cover many lines instead of two lines as
shown in the previous figure. This might impact line coverage.

Integrated Planning for VC VIP Coverage Analysis

Verification Compiler Platform offers integrated verification planning
that enables verification engineers to define a verification plan and
allows them to author, annotate, and maintain a dynamic and
executable verification plan.

Previously, to leverage Verdi planning and management solutions,
VIP verification plans in the .xml format had to be converted to
the .hvp format manually. Now, VC VIPs provide an executable
verification plan in the .hvp format together with the .xml format. You
can now load VIP verification plans easily to Verdi verification and
management solutions in a single step. Further, you can easily
integrate these plans to the top-level verification plan by a single-
click drag-and-drop.

Coverage results are annotated to the plan, which helps to map
verification completeness on a feature-by-feature basis at the
aggregate level.

Use Model

The Verdi coverage flow requires the .hvp files that capture the
verification plan to be loaded in the Verdi Coverage graphical user
interface (GUI); the coverage is then annotated within the GUI.

To leverage a verification plan, perform the following steps:

1. Navigate to the example directory using the following command:
cd <intermediate_example_dir>
2. Invoke the gmake command with the name of the test you want
to run, as follows:
gmake USE_SIMULATOR=vcsvlog <test_name>
3. Invoke the Verdi GUI using the following command:
verdi cov &
4. Copy the Verification Plans folder from the installation path to the
current work area, as follows:
\cp $VC_VIP_HOME/vip/svt/<VIP_name>/
<VIP_release_version>/doc/VerificationPlans .

Note that <VIP_name> is a generic name for VC VIPs that are
supported with Verification Compiler Platform.
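For example, with a hypothetical VIP name and release version (substitute the values from your installation):

\cp $VC_VIP_HOME/vip/svt/ethernet_svt/L-2016.06/doc/VerificationPlans .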

5. Select Load Plan from the Plan menu.

Figure 2-15 Open Plan

6. Load the coverage database (simvcssvlog.vdb) from the
output folder in the current directory using the Verdi menu, as
shown in Figure 2-16.

Figure 2-16 Verdi GUI

7. Click the Recalculate button to annotate coverage results, as
shown in Figure 2-17.

Figure 2-17 Recalculate

3
Testbench Creation
Testbench creation includes the creation of a coverage model
(generally, using UVM methodologies), which is a method for
verification accountability.

VCS and Verdi use separate UVM libraries with respective
instrumentation to record transactions and catch messages. This
necessitates a recompilation of a design with the Verdi UVM library
for debugging the design, in case of simulation failure. The compile-
time associated with a typical SoC is sizable. So, the recompilation
required for debugging has a cascading effect on the verification
closure. UVM libraries used during simulation with VCS and
debugging with Verdi are different; hence, more time and effort are
spent to overcome these disparities.

Testbench Creation
185
The following solution is available as part of the L-2016.06-SP1
release of Verification Compiler Platform:

Unified UVM Library

Unified UVM Library

Verification Compiler Platform provides a unified UVM library that
integrates instrumented UVM libraries of VCS and Verdi. With the
introduction of the unified UVM library, VCS and Verdi transaction
recorder and message catcher now coexist and are compiled
together. You can directly use the Unified UVM library with the Verdi-
provided recording mechanism during simulation and for debugging
with Verdi. Thus, the overall verification cycle is accelerated. The
Unified UVM library also improves the debug productivity while
debugging UVM-based environments with VCS and Verdi, as both
the tools use the same UVM library. This eliminates the disparity
between simulation and debug libraries.

Single compilation, UUM, and UVM-VMM interoperability flows are
supported in the unified UVM library. The unified UVM library can
also be qualified and validated using Synopsys VIPs.

UVM libraries are available at the following paths:

$VC_HOME/etc/uvm-1.1: This is the path for the unified UVM 1.1d
library.
$VC_HOME/etc/uvm-1.2: This is the path for unified UVM 1.2
library.
$VC_HOME/etc/uvm: This path is symbolically linked to the
$VC_HOME/etc/uvm-1.1 directory.

You can select the following paths either for Verdi or for DVE
transaction recording during simulation:

$VC_HOME/etc/uvm[-<version>]/verdi: Directory for the
Verdi transaction recorder and message catcher.
$VC_HOME/etc/uvm[-<version>]/vcs: Directory for the DVE
transaction recorder and message catcher.
Starting with the J-2014.12-SP1 release, Verification Compiler
Platform uses Verdi's transaction recorder and message catcher by
default to record the transactions and messages in the UVM
environment. In other cases, if SNPS_SIM_DEFAULT_GUI is set to
verdi, Verdi's recorder and catcher are used by default when the
recording is enabled by the +UVM_TR_RECORD and/or
+UVM_LOG_RECORD options.

Use Model

The following sections describe how you can use the unified UVM
library with VCS:

Transaction/Message Recording in DVE/Verdi With VCS

Transaction/Message Recording in DVE/Verdi With VCS
The following sections describe how you can use the unified UVM
library with Verdi or DVE transaction recorder and message catcher
for VCS:

Compilation
Simulation

Compilation
The following sections describe the different compilation flows that
you can perform with the unified UVM library.

Single Compile (VCS Two-Step Flow)

Add the -ntb_opts uvm compile-time option with the
-debug_access option to automatically compile the files and the
paths of transaction recorders and message catchers of both Verdi
and DVE.

+incdir+$VC_HOME/etc/uvm \
$VC_HOME/etc/uvm/uvm.sv \
$VC_HOME/etc/uvm/dpi/uvm_dpi.cc \
+incdir+$VC_HOME/etc/uvm/vcs \
$VC_HOME/etc/uvm/vcs/uvm_custom_install_vcs_recorder.sv \
+incdir+$VC_HOME/etc/uvm/verdi \
$VC_HOME/etc/uvm/verdi/uvm_custom_install_verdi_recorder.sv

For example,

%> vcs -debug_access+all -sverilog -ntb_opts uvm \

<compile_options> <user source files using UVM>

For more information about the -debug_access option, see the
VCS/VCS MX LCA Features Guide.

UUM Compile (VCS Three-Step Flow)

Execute the following steps to run the UUM Compile flow:

1. Analyze the UVM library.
Add the -ntb_opts uvm parse option to automatically analyze
the unified library including the files and the path of transaction
recorders and message catchers of both Verdi and DVE.
The following files are automatically included:
+incdir+$VC_HOME/etc/uvm \
$VC_HOME/etc/uvm/uvm.sv \
+incdir+$VC_HOME/etc/uvm/vcs \
$VC_HOME/etc/uvm/vcs/uvm_custom_install_vcs_recorder.sv
\
+incdir+$VC_HOME/etc/uvm/verdi \
$VC_HOME/etc/uvm/verdi/uvm_custom_install_verdi_recorder.sv

For example,
%> vlogan -sverilog -ntb_opts uvm

2. Analyze the user source code.
This step is unchanged. For example,
%> vlogan -sverilog -ntb_opts uvm\
<parse_option> <user source files using UVM>

3. Elaboration
Add the -ntb_opts uvm compile-time option with the
-debug_access option to enable transaction and message
recording for both Verdi and DVE. The
uvm_custom_install_recording and
uvm_custom_install_verdi_recording top modules as
well as the following files are included automatically in elaboration:
$VC_HOME/etc/uvm/dpi/uvm_dpi.cc \
uvm_custom_install_recording \
uvm_custom_install_verdi_recording

For example,
%> vcs -debug_access+all -ntb_opts uvm \
<elab_options> <top module name>

Key Points to Note

Use -ntb_opts uvm-1.2 for the UVM 1.2 library.
If you want to specify a different UVM library, you can use the
VCS_UVM_HOME environment variable.
If -debug_pp | -debug | -debug_all is used during VCS
compilation instead of -debug_access, you need to use the -P
option to link the novas.tab and pli.a files of the FSDB dumper
during elaboration. For example,
- In single compile flow
%> vcs -sverilog -ntb_opts uvm -debug_pp \

-P $NOVAS_HOME/share/PLI/VCS/LINUX/novas.tab \
$NOVAS_HOME/share/PLI/VCS/LINUX/pli.a \
<compile_options> <user source files using UVM>

- In UUM compile flow
%> vlogan -sverilog -ntb_opts uvm

%> vlogan -sverilog -ntb_opts uvm\


<parse_option> <user source files using UVM>

%> vcs -debug_all -ntb_opts uvm \


-P $NOVAS_HOME/share/PLI/VCS/LINUX/novas.tab \
$NOVAS_HOME/share/PLI/VCS/LINUX/pli.a \
<elab_options> <top module name>
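For example, to point VCS at a different UVM library installation via VCS_UVM_HOME, you might use the following (the path here is hypothetical):

%> setenv VCS_UVM_HOME /tools/uvm/custom_uvm_install
%> vcs -debug_access+all -sverilog -ntb_opts uvm \
<compile_options> <user source files using UVM>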

Simulation
The following sections describe how you can perform simulation
using the unified UVM library.

Enabling Transaction/Message Recording in Verdi

Add the following runtime options to enable Verdi transaction
recorder and message catcher:

+UVM_VERDI_TRACE=<Argument>
Enables Verdi flow when added during simulation.

You can use any of the following values as an input to the
<Argument> parameter:
UVM_AWARE|RAL|TLM|MSG|HIER|PRINT

For more details, see the Verdi Application Note
V3_new_transaction_debug_platform.doc.

+UVM_TR_RECORD
Enables Verdi transaction recorder.

+UVM_LOG_RECORD
Enables Verdi message catcher.

Note:
In the Verification Compiler environment, where Verdi is set as
the default GUI (or the value of the SNPS_SIM_DEFAULT_GUI
environment variable is set to verdi), the Verdi recorder is enabled
to record messages and transactions into FSDB files. So, you do
not need to use the +UVM_VERDI_TRACE option to enable the Verdi
UVM flow. However, you still need to add the +UVM_TR_RECORD or
+UVM_LOG_RECORD option to enable transaction recording or
message recording accordingly.

For example,
//Use UVM_VERDI_TRACE and enable both transaction
recording and message catching

%> ./simv +UVM_VERDI_TRACE +UVM_TR_RECORD \
+UVM_LOG_RECORD

//Use SNPS_SIM_DEFAULT_GUI and only enable transaction
recording

%> setenv SNPS_SIM_DEFAULT_GUI verdi

%> ./simv +UVM_TR_RECORD

Enabling Transaction/Message Recording in DVE

Add the following runtime options without the +UVM_VERDI_TRACE
option to enable DVE transaction recorder and message catcher:

+UVM_TR_RECORD
Enables DVE transaction recorder.

+UVM_LOG_RECORD
Enables DVE message catcher.

For example,
%> ./simv +UVM_TR_RECORD +UVM_LOG_RECORD

4
Validating Testbench: Functional Qualification
Testbench quality is commonly measured using coverage metrics,
such as code coverage, toggle coverage, and functional coverage,
which are enabled during simulation. These metrics reflect how well
a design is activated by test vectors, but are insensitive to error
detection capabilities of the testbench. It becomes imperative to
perform functional qualification of the testbench to improve the
testbench quality.

Mutation-based techniques used by the functional qualification tool
help improve the testbench by identifying flaws in it.
Generating coverage metrics using simulation and performing
testbench qualification are sequential activities. This requires you to
perform redundant steps to set up the tool, once for simulation
and a second time for functional qualification of the testbench, which
involves generation of the database and the fault-aware simulation
executable.

Validating Testbench: Functional Qualification


194
The following solutions are available as part of the L-2016.06-SP1
release of Verification Compiler Platform:

Concurrent Fault Qualification With Certitude and VCS
Vectorization
VCS and Certitude Integration
Loading Design Automatically in Verdi with Native Certitude
Dumping and Comparing Waveforms in Verdi for SystemC
Designs

Concurrent Fault Qualification With Certitude and VCS
Vectorization

As design complexity continues to increase, higher verification effort
is needed. Correspondingly, the number of faults injected by Certitude
becomes extremely high. The purpose of this solution is to enable
multiple combinations of faults within a single simulation to improve
the qualification runtime.

Certitude is a functional and safety qualification tool able to run
multiple jobs in parallel; however, each job can only simulate a single
combination of faults. The integration of Certitude and VCS extends
simulation to run different faults concurrently in each job. The
vectorization thus improves runtime significantly for a high number of
injected faults (see Figure 4-1 and Figure 4-2).

Figure 4-1 Certitude Standalone Mode

Figure 4-2 Certitude Concurrent Mode

The focus of this integration is the propagation-only qualification for
all injected faults.

The propagation-only mode for safety-critical applications can
quickly identify faults that do or do not propagate with the specified
test cases.

The following sections provide a detailed description of this feature:

Use Model
Limitations

Use Model

To use the Certitude concurrent mode, perform the following steps:

1. Specify the Certitude mode using the following setting in the
certitude_config.cer configuration file:
setconfig -ConcurrentFaultQualification=true
Note:
A new model phase is not mandatory for the new setting of the
ConcurrentFaultQualification configuration option to be
effective.

You can specify other Certitude Tcl commands in the
certitude_config.cer file as needed.
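For example, a minimal certitude_config.cer for the concurrent flow might combine this setting with the options shown elsewhere in this chapter (a sketch, not a verified configuration):

# certitude_config.cer
setconfig -ConcurrentFaultQualification=true
setconfig { -TopName = dut_top }
setconfig { -FaultSet = Connectivity }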
2. Create certitude_hdl_files.cer, certitude_compile,
certitude_execute, and certitude_testcases.cer files
in the same way as done in standalone Certitude.

3. Invoke Certitude and execute commands. This step is executed
similar to standalone Certitude:
>certitude
cer> model
cer> activate
cer> DetectConcurrent
cer> report
cer> viewreport

Key Points to Note

Supports full backward compatibility with the standard flow. Users
do not need to run the model phase while switching from the
standard flow to the Certitude concurrent flow and vice versa.
Supports module-based and instance-based qualifications.

Limitations

The following limitations apply for this feature:

Supports Verilog and SystemVerilog only.
Supports netlist-like construction only (that is, a flat or
hierarchical design that contains instantiations of library primitives
only; the netlist-like construction can be the result of a synthesis
tool).
Supports propagation only.
Does not support fault-impact-related features.

Supports StuckAt0/StuckAt1 faults along with Connectivity input
and output faults only. The supported fault sets are as follows:
- OutputPortStuckAt0
- OutputPortStuckAt1
- OutputPortBitStuckAt0
- OutputPortBitStuckAt1
- SignalStuckAt0
- SignalStuckAt1
- SignalBitStuckAt0
- SignalBitStuckAt1
- InputPortConnectionStuckAt0
- InputPortConnectionStuckAt1
- InputPortConnectionBitStuckAt0
- InputPortConnectionBitStuckAt1
- OutputPortConnectionStuckAt0
- OutputPortConnectionStuckAt1
- OutputPortConnectionBitStuckAt0
- OutputPortConnectionBitStuckAt1

VCS and Certitude Integration

Verification Compiler Platform maximizes your productivity by
providing a tighter integration of the compiled code simulator, VCS,
and Certitude Functional Qualification System. With VCS and
Certitude integration, Certitude runs in the Native mode (Native
Certitude) and you can generate VCS and Certitude databases with
the same compilation.

This seamless integration offers the following benefits:

Allows you to generate the fault-aware simulation executable
without having to prepare the design for testbench qualification
separately, thus eliminating the dependency on different parsers.
Simplifies the setup requirements of Certitude environment with
VCS and generates one or more simulation executables that
contain all the instrumentation necessary to activate and detect
faults.
With this integration, VCS generates the following database and
executables:

A Certitude database that corresponds to the Certitude model
command.
One or more simulation executables that contain all the
instrumentation necessary to run activate, detect, regress,
and testscript commands.
Certitude Functional Qualification System requires a specific set of
configuration files for a qualification run. These configuration files
describe the verification environment. You must create these files
before running a qualification phase.

A complete Certitude qualification run for a verification environment
requires the following configuration files to be present and
functioning:

certitude_config.cer - File required at compile time
certitude_hdl_files.cer - File required at compile time
certitude_testcases.cer - File containing the names of all
test cases used by the flow
certitude_compile - VCS compile script
certitude_execute - File containing information about the
criteria for a test case to pass or fail
This section consists of the following subsections:

Use Model
Limitations

Use Model

This section describes the use model of Native Certitude as
illustrated in Figure 4-3.

Figure 4-3 Native Certitude Use Model

To use Native Certitude, perform the following steps:

1. Create the certitude_compile file as your default build script.
You can invoke this file using the following Certitude Tcl
commands:
- model
- activate -compile
- detect -compile
- regression -compile
- testscript -noinstrumentation

Note:
The certitude_compile file should be the same as your VCS
compilation script.

2. Prepare the input for fault-search and fault-aware simv
generation using the following steps:
- Specify the file for qualification
Specify the qualify command in the
certitude_hdl_files.cer file to indicate the design
files for qualification.

For example,

qualify -verilog -file=dut_bot.sv

- Specify the faults to qualify
Specify the faults for qualification through the existing
configuration option in the certitude_config.cer file.

For example,

setconfig { -FaultSet = Connectivity }

- Specify the mode
Specify the Native mode using the following setting in the
certitude_config.cer configuration file:

setconfig -Simulator=native

You can specify other Certitude Tcl commands in the
certitude_config.cer file as needed.

Note:
These Tcl commands are a subset of the commands required
to use Certitude as a standalone product.

3. Create certitude_testcases.cer and
certitude_execute files in the same way as done in
standalone Certitude.
4. Invoke Certitude and execute commands that are compatible with
the Certitude activation and detection phases. This step is
executed similar to standalone Certitude:
>certitude
cer> model
cer> activate
cer> detect
cer> report
cer> viewreport
Note:
This use model is applicable to single-compile and single-simv
user environments only. Multiple-compile and multiple-simv
user environments are not supported with this release.

Key Points to Note

Certain commands, configuration options, and environment
variables are obsolete and are ignored in the Native mode. For
example, the SimulatorBitWidth configuration option is
ignored. For a detailed list, see Unsupported Commands and
Options in Native Certitude.

You need not create a setup for DUT reconstruction for the model
phase. Similarly, you need not add explicit PLI arguments to
compile commands.
Starting from K-2015.09, VCS and Certitude integrated capability
is enhanced to support instance-based fault modeling.
Low Power Faults With Certitude
VCS and Certitude integrated capability is enhanced to support
low-power simulations. That is, simulations that include a Unified
Power Format (UPF) file are supported in the Native mode.
Certitude injects and qualifies faults in the original RTL code and
ensures compatibility with logic changes implied by the UPF file.

Examples

The following is an example of the certitude_hdl_files.cer
configuration file:
# DUT files extracted from the vcs command line
qualify -verilog -file=dut_bot.sv
qualify -verilog -file=dut_top.sv

The following is an example of the certitude_config.cer
configuration file:
# Set simulator Mode
setconfig -Simulator=native
# Specify the TopName
setconfig { -TopName = dut_top}
# Specify the fault type
setconfig { -FaultSet = Connectivity }

The following is an example of the certitude_compile
configuration file in the two-step flow:
#!/bin/sh -e
# VCS compile script
vcs -sverilog tb_top.sv dut_top.sv dut_bot.sv -debug

Note:
Any file that needs to be qualified must be analyzed by
specifying it in the certitude_compile file.

For more information about Certitude configuration files, see the
Certitude User Manual. After the Certitude configuration files are
available, you can run Certitude as usual.
For example, execute the > certitude command to start
Certitude in the interactive mode. It opens the Certitude Tcl shell
and the cer> prompt appears. You can now execute a complete
Certitude qualification run as follows:
cer> model
cer> activate
cer> detect
cer> report
cer> viewreport

For more information on Certitude commands, see the Certitude
Reference Guide.

Note:
Certitude communicates with VCS internally through a
combination of environment variables and Certitude database.
This process is transparent.

Limitations

The following are the limitations with this feature:

Fault instrumentation is only supported in a V2K or SystemVerilog
DUT. VHDL instrumentation is not supported.
Multiple-compile and multiple-simv are not supported.
Simprofile with Native Certitude is not supported. Simprofile
continues to work, but does not have a separate section for
Certitude in the report.
Partition compile and Precompiled IP technologies are not
supported.
Any file that needs to be qualified must be analyzed by specifying
it in the certitude_compile file.
Faults on SystemVerilog interfaces are not supported.

Unsupported Commands and Options in Native Certitude


The following Certitude commands and configuration options are not
relevant in the Native Certitude mode and are ignored as No
Operation (NoOp):

Model phase commands


- addverilog (replace it with the qualify command)
- addsystemverilog
- vdefine
- vundefine

Note:
The addverilog and addsystemverilog commands are
not always ignored. If you use these commands with the
-qualify option, they are not ignored and behave the same
as the qualify command.

VHDL related commands


- addvhdl
Configuration options
- SimulatorBitWidth
- TopVhdlGenerics
- TopVerilogParameters
- DefineSimulatorNativeMacro
- IncludePaths
- InstrumentOnTop
- instrumentationDirectory
- VhdlCerPackageFile
- VhdlCerPackageLibrary
Environment variables
- CER_INSTRUMENT_DIR
Other commands
- availablelicenses

Loading Design Automatically in Verdi with Native
Certitude

Verification Compiler Platform offers the integration of Certitude,
VCS, and Verdi. With this integration, designs can be loaded
automatically in the Verdi GUI without setting the Certitude
VerdiInitCommand configuration option.

This section consists of the following subsections:

Use Model
Key Points to Note

Use Model

To use this feature, perform the following steps:

1. Specify the Native mode using the following setting in the
certitude_config.cer configuration file:
setconfig -Simulator=native
2. Specify the -kdb option in the certitude_compile
configuration file.
In the two-step flow, specify the -kdb option in the command line
as follows:
#!/bin/sh -e
# VCS compile script
vcs -kdb -sverilog tb_top.sv dut_top.sv dut_bot.sv -debug

In the UUM flow, specify the -kdb option in all the vcs/vlogan/
vhdlan command lines as follows:
#!/bin/sh -e
# VCS compile script
vlogan -kdb -sverilog tb_top.sv dut_top.sv dut_bot.sv
vcs -kdb -debug top

3. Leave the VerdiInitCommand configuration option empty
(its default value).

Key Points to Note

The following points must be noted for using this feature:

During the model phase, the certitude_compile file is
executed once and information about the KDB design is collected.
The KDB design is then automatically loaded when the Verdi GUI
is launched by Certitude. The KDB design cannot be loaded
automatically if the model phase has not been executed. The
feature is effective only after the model phase.
If the design is not loaded automatically in the Verdi GUI, it may
be due to one of the following:
- The -kdb option is not applied correctly in the
certitude_compile file.
- The -kdb option is applied but the KDB design is not compiled
and generated correctly.
- The VerdiInitCommand configuration option is set by the
user, and Certitude applies the user setting.

Dumping and Comparing Waveforms in Verdi for
SystemC Designs

Verification Compiler Platform provides the integration of Certitude,
VCS, Verdi, and CBug. This seamless integration offers the following
benefits:

Dump the waveform for SystemC designs run on a specific
testcase, with or without an injected fault.
Compare the reference waveform with the faulty waveform for
SystemC designs.
Generate Runtime Information Database (RIDB) for loading
SystemC designs in Verdi.
This section consists of the following subsections:

Use Model
Key Points to Note

Use Model

To use this feature, perform the following steps:

1. Specify VCS as the simulator using the following setting in the
certitude_config.cer configuration file:
setconfig -Simulator=vcs
In the certitude_compile configuration file, compile the
design using VCS. For example:
#!/bin/sh -e

# VCS compile script
syscan $CER_SYSCAN_OPTIONS $SRC/top.cpp
vcs -sysc sc_main $CER_VCS_SC_OPTIONS

2. Set the WaveUseEmbeddedDumper configuration option to true
in the certitude_config.cer configuration file to use the
embedded dumper for dumping waveforms:
setconfig -WaveUseEmbeddedDumper=true
3. Invoke Certitude and execute commands for the model, activation
and detection phases.
>certitude
cer> model
cer> activate
cer> detect
4. Execute the dumpwaves and verdiwavedebug commands
accordingly to dump and compare waveforms. Examples are as
follows:
cer> dumpwaves -fault=10 -testcaselist=fir_rtl
cer> verdiwavedebug -fault=10 -testcase=fir_rtl
Note:
For more details on dump and compare waveforms with Certitude,
see the Certitude User Manual.

5. Generate an RIDB file with the original source code and load the
design automatically in Verdi with the verdistart or
verdisourcedebug command. For example:
verdidumpridb -testcase=fir_rtl

Key Points to Note

The following points must be noted for using this feature:

A simulation executed by the dumpwaves command is killed
if the simulation CPU timeout is reached. However, the simulation is
not killed if the dumpwaves command is executed immediately
after the model command.

5
Verification Execution
Simulation is the dominant technique in verification. A verification
engineer compiles a design and runs simulations that either pass or
fail. If a simulation encounters a bug, the engineer needs to debug
the failure, fix the design or test, and rerun the simulations until all
the tests pass. Compile turnaround time is therefore extremely
important to engineers' productivity during this process. Verification
execution encompasses the tasks from design compilation and
simulation to debugging failures, as described in the following
section:

Compile Turnaround Time

Verification Execution
214
Compile Turnaround Time

The increasing complexity of today's electronic devices, and the
demand for quality and fast time to market, have brought in a new
set of challenges. Typically, the majority of a verification engineer's
time is spent on debug. Simulation and debug are sequential
activities: once a simulation fails, the failure must be debugged. You
use your preferred debug tool to debug and fix the failure, and may
end up executing redundant steps to set up the tool for debugging
after the simulation is complete. For example, Verdi and VCS use
different compilers to compile the HDL design and testbench, which
requires you to use different compiler scripts and compile the design
twice to simulate and debug it. In addition, the supported HDL
subsets and the options of the two tools are different, so you need
to spend additional time and effort to overcome these disparities.

This section consists of the following subsections:

Unified Compile Front End


Native VIP Solutions

Unified Compile Front End

Verification Compiler Platform offers the Unified Compile front end,
which is the integration of the VCS and Verdi compilers to unify
compilation for simulation and debugging. Unified Compile uses
VCS compiler scripts to compile the Knowledge Database (KDB) for
Verdi. Consequently, only one common compiler script needs to be
maintained for both VCS and Verdi, ensuring consistency between
the two databases.

Note: When using the VCS simulator, it is recommended that you use
the Unified front-end flow to generate the KDB. Otherwise, you
may encounter a few issues with unnamed blocks. Because the
naming of unnamed blocks, unnamed assertions, and c-units
differs between KDB and FSDB, you may encounter annotation,
drag-and-drop, and other debug functionality issues in Verdi.
The benefits offered by Unified Compile are as follows:

Single VCS and Verdi compilation


Consistent HDL language support
Consistency in utilizing or handling VCS and Verdi options
This section consists of the following subsections:

Use Model
Limitations

Use Model
The following sections describe how to use the unified compile front
end to generate Verdi KDB and read the compiled design:

Generating Verdi KDB With Unified Compile
Reading Compiled Design With Verdi

Generating Verdi KDB With Unified Compile


Unified Compile is supported in both VCS two-step and
three-step flows. In the VCS two-step flow, add the -kdb option to
the command line to generate KDB. In case of VCS three-step flow,
add the -kdb option in all the vlogan/vhdlan/vcs command
lines.

When you specify the -kdb option, Unified Compile creates the
Verdi KDB and dumps the design into the libraries specified in the
synopsys_sim.setup file.

For example,

// Compile design using VCS and generate both the VCS database
// and the Verdi KDB

// -kdb in VCS two-step flow

% vcs -kdb <compile_options> <source files> -lca

// -kdb in VCS three-step flow

% vlogan -kdb <vlogan_options> <source files>

% vhdlan -kdb <vhdlan_options> <source files>

% vcs -kdb <top_name> -lca

To generate only the Verdi KDB and skip the simulation database
generation, specify the following argument with the -kdb option:

-kdb=only

Generates only the Verdi KDB that is needed for both post-process
and interactive simulation debug with Verdi.

This option is supported only in the VCS two-step flow; it is not
supported in the VCS three-step flow.

In the VCS two-step flow, this option does not generate VCS compile
data or an executable, and does not disturb the existing VCS
compile data and executables.

For example,

% vcs -kdb=only <compile_options> <source files> -lca

Reading Compiled Design With Verdi


To read a compiled design, add the -simflow option to the Verdi
command line. This imports the KDB compiled by the Unified
Compile and enables Verdi and its utilities to use the library mapping
from the synopsys_sim.setup file. It is also used to import the
design from the KDB library paths.

You can perform the same operations through the Verdi GUI as
follows:

1. Choose File > Import Design.
2. In the Import Design form, select the From Library tab.
3. In the From field, select the VC/VCS Native Compile option, as
shown in Figure 5-1.

Figure 5-1 Import Design Form

You can also add the -simdir <path> option to the Verdi
command line to ensure that VCS and Verdi use the same data from
the synopsys_sim.setup file. The <path> argument points to the
library directory from where VCS is compiled. Use this option to
invoke Verdi from a working directory that is different from the VCS
working directory.

You can also use the -top option with the -simdir option to specify
the top module in the specified library directory. For example,

%> verdi -simflow -simdir <path> -top <your top module>

If the -top option is not specified, the design top is used by default.

Example

Filename: 01_vhtop.vhd

library IEEE,STD;
use IEEE.STD_LOGIC_1164.all;
use std.textio.all;
use IEEE.STD_LOGIC_TEXTIO.all;

entity vh_top IS
end vh_top;
architecture arch of vh_top is
signal s1: std_logic_vector(3 downto 0);
signal s2: std_logic_vector(3 downto 0);

component VH1
port (s1: out std_logic_vector(3 downto 0);
s2: out std_ulogic_vector(3 downto 0));
end component;
begin
vh_inst : VH1 port map(s1,s2);
process(s1,s2)
variable L:line;
begin
write(L,NOW);
write(L,string'(" s1= "));
write(L,s1);
writeline(output,L);
write(L,string'(" s2= "));
write(L,s2);
writeline(output,L);
end process;
end arch;
library IEEE;
use IEEE.STD_LOGIC_1164.all;

ENTITY VH1 IS
port (s1: out std_logic_vector(3 downto 0);
s2: out std_ulogic_vector(3 downto 0));
END VH1;
architecture arch of VH1 is

begin
P: process
begin
wait for 0 ns;

s1 <= "1111";
s2 <= "0011";
wait for 2 ns;
s1 <= "1010";
s2 <= "1110";
wait for 3 ns;
s1 <= "Z1X1";
s2 <= "XX10";
wait for 4 ns;
s1 <= "XXZ1";
s2 <= "1010";
wait;
end process P;
end arch;

Filename: synopsys_sim.setup

work > default


default: work

Command Lines

% vhdlan -kdb 01_vhtop.vhd


% vcs vh_top
% ./simv

Support for VHDL LRM Features


This section lists the following VHDL Language Reference Manual
features that are supported by the VHDL Common Analyzer:

VHDL Languages, Syntax, and Semantics


Support for Encryption Mechanism
Support for VHDL-AMS and PSL

VHDL Languages, Syntax, and Semantics
Following is the list of supported VHDL languages, syntax, and
semantics:

VHDL-87 - VHDL 87 is completely supported and can be enabled
using the -vhdl87 option or by adding VHDL_MODE=87 in
the synopsys_sim.setup configuration file.
VHDL-93 - VHDL 93 is completely supported in the default mode.
VHDL-2002 - VHDL 2002 is supported for protected types and
can be enabled using -vhdl02 or -vhdl08 options, or by adding
VHDL_MODE=08/VHDL_MODE=02 in the synopsys_sim.setup
file. The protected types in subprograms are not supported.
VHDL-2008 - VHDL 2008 can be enabled using the -vhdl08
option or by adding VHDL_MODE=08 in the
synopsys_sim.setup configuration file and the following
features are supported:
- Predefined data types and operators
- integer_vector, boolean_vector, real_vector,
time_vector, array logic operators, and so on
- C-style comments (enabled by default)
- Enhanced use clauses and aliases
- Standard environment package (std.env)
- External names
- The all keyword in process statements sensitivity list
- Logical unary reduction operator (for one dimensional array of
bit or boolean)

- Matching relational operators (for bit and std_ulogic)
- Non-static expressions in port map
- Various enhancements to packages std_logic_1164,
numeric_std, numeric_std_unsigned, fixed point and
floating point are supported
- VHDL 1076-2008 encryption (IEEE encryption)

Support for Encryption Mechanism


You can parse the following three types of encrypted files in VCS MX
using vhdlan:

gen_ip encryption
128 bit VCS-only encryption
IEEE encryption

Support for VHDL-AMS and PSL


VHDL-AMS and Property Specification Language (PSL) are not
supported by vhdlcom and KDB. Therefore, these languages are
not supported by vhdlan in the Common Analyzer mode. Instances
of Verilog modules or configurations instantiated in VHDL are
expected to be resolved by Novas at elaboration time; however,
vhdlan does not perform any binding operations and only treats
them as normal component instances. KDB creation is turned
off at this time to avoid creating redundant KDB files.

Key Points to Note
The vericom utility exists in Verdi. For VCS users, the Unified
Compile flow is recommended to generate the KDB, for data
consistency and better performance. For third-party simulator
users, compilation does not change and continues to use
vericom. When loading the compiled design library
(KDB) from the GUI (loading from the command line stays the
same), ensure that the vericom option is selected in the From
field under the From Library tab of the Import Design form.
As VCS and vericom are different Verilog compilers, there are
some behavioral differences between them. In such cases,
Unified Compile follows the behavior of VCS for consistency
reasons. The supported language subset also follows the
supported subset of VCS.
All the compilation information including the compile log of Verdi
KDB is logged to the regular VCS compiler log file.
The library mapping information is obtained from the
synopsys_sim.setup file in VCS three-step flow. The library
mapping information in the novas.rc resource file is ignored in
the VCS three-step unified compile flow.
The Unified Compile does not apply to the import-from-file flow of
Verdi. Import-from-file continues to use the vericom parser to
read in the Verilog source code directly. It uses the library mapping
information from the novas.rc resource file, which is similar to
the Verdi behavior.
In the VCS two-step flow, the VCS generated KDB (work.lib++)
is saved in the work directory in the current working directory.

In the VCS three-step flow, the vlogan -work <work>
generated KDB (work.lib++) is saved in the same working
directory as AN.DB and the physical directory path of the library
is picked as per the mapping present in the
synopsys_sim.setup file. You can use the verdi -simflow
-lib option to specify the working directory to load the KDB.
When the vhdlan command is executed in the VHDL Common
Analyzer mode to generate both intermediate files and KDB files,
the following impacts are expected:
- More CPU time is consumed to generate KDB.
- Higher peak memory is consumed to accommodate KDB
resident in memory and auxiliary data structures.

Limitations
The following are the limitations with Unified Compile:

Verilog-AMS (AMS) and Property Specification Language (PSL)


are not supported. Verdi can parse the constructs successfully
without an error message. However, Verdi has a limited support
for debug functionality for AMS and PSL.
Parallel compilation is not supported.
Fault tolerance compilation is not supported.
Both VC Apps Text and Design Manipulation models are not
supported.

Native VIP Solutions

In design verification, every compilation and recompilation of a
design and testbench contributes to the overall project schedule. A
typical SoC design may have one or more VIPs where changes are
performed in the design or the testbench outside of VIPs. During the
development cycle and the debug cycle, a complete design along
with VIP is recompiled, which leads to increased compilation time.

This section consists of the following subsections:

Optimized VC VIP Compilation Performance With Partition Compile and Precompiled IP

Optimized VC VIP Compilation Performance With Partition Compile and Precompiled IP
Verification Compiler Platform offers the integration of VIPs with
Partition Compile (PC) and Precompiled IP (PIP) technologies. This
integration offers a scalable compilation strategy that minimizes VIP
recompilations and thus, improves compilation performance. This
further reduces the overall time to market of a product during the
development cycle and improves productivity during the debug
cycle.

PC and PIP features in Verification Compiler Platform provide the
following solutions to optimize the compilation performance:

Partition compile technology creates partitions (of modules,
testbenches, or packages) for a design and recompiles only the
changed or modified partitions during an incremental compile.

PIP technology allows you to compile a self-contained functional
unit separately in a design and a testbench. A shared object file
and a debug database are generated for a self-contained
functional unit. All of the generated shared object files and debug
databases are integrated in the integration step to generate a
simv executable. Only required PIPs are recompiled with
incremental changes in the design or the testbench.
For more information on Partition compile and Precompiled IP
technologies, see the VCS/VCS MX LCA Features Guide.

Use Model
You can use three new simulation targets in the Makefiles of VIP
UVM examples to run the examples in Partition compile or
Precompiled IP technologies. In addition, Makefiles allow you to run
the examples in back-to-back VIP configurations. The VIP UVM
examples are located in the following directory:

$VC_HOME/examples/vl/vip/svt/vip_title/sverilog

For example,

/project/vc_install/examples/vl/vip/svt/usb_svt/sverilog

/project/vc_install/examples/vl/vip/svt/amba_svt/sverilog

Each VIP UVM example includes a configuration file called the
pc.optcfg file. This configuration file contains predefined partitions
or precompiled IPs for the SystemVerilog packages that are used by
the VIP. The predefined partitions are created using the following
heuristics:

Separate partitions are created for packages that are common to
multiple VIPs.
VIP-level partitions are defined such that all the partitions compile
in a similar amount of time. This enables the optimal use of
parallel compile with the -fastpartcomp option.
You can modify the pc.optcfg configuration file to include
additional testbench or DUT-level partitions.

This section consists of the following subsections:

The vcspcvlog Simulator Target in Makefiles


The vcsmxpcvlog Simulator Target in Makefiles
The vcsmxpipvlog Simulator Target in Makefiles
Partition Compile and Precompiled IP Implementation in
Testbenches With VIPs
The vcspcvlog Simulator Target in Makefiles

The vcspcvlog simulator target in the Makefiles of VIP UVM
examples enables compilation of the examples in the two-step
partition compile technology. The following partition compile options
are used:

-partcomp +optconfigfile+pc.optcfg -fastpartcomp=j4 -lca

One partition is created for each line specified in the pc.optcfg
configuration file. The -fastpartcomp=j4 option enables parallel
compilation of partitions on different cores of a multicore machine.
You can incorporate the partition compile options listed above in your
existing VCS command line.

In Partition compile technology, changes in testbench, VIP, or DUT
source code trigger recompilation in required partitions only. You
must ensure that the Verification Compiler compilation database is
not deleted between successive recompilations.
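As a sketch, a two-step partition-compile command line incorporating these options might look as follows; the file names, UVM options, and test name are placeholders for your own build:

```
#!/bin/sh -e
# Hypothetical two-step flow: add the PC options to an existing vcs line
vcs -sverilog -ntb_opts uvm -f dut_files.f tb_top.sv \
    -partcomp +optconfigfile+pc.optcfg -fastpartcomp=j4 -lca
./simv +UVM_TESTNAME=base_test
```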

The vcsmxpcvlog Simulator Target in Makefiles

The vcsmxpcvlog simulator target in the Makefiles of VIP UVM
examples enables compilation of the examples in the three-step
partition compile technology. The following partition compile options
are used:

-partcomp +optconfigfile+pc.optcfg -fastpartcomp=j4 -lca

There is no change in the vlogan commands. One partition is
created for each line specified in the pc.optcfg configuration file.

The -fastpartcomp=j4 option enables parallel compilation of
partitions on different cores of a multicore machine. You can
incorporate the partition compile options listed above in your existing
VCS command line.

In partition compile technology, changes in testbench, VIP, or DUT
source code trigger recompilation in required partitions only. You
must ensure that the Verification Compiler compilation database is
not deleted between successive recompilations.
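The three-step variant can be sketched as follows; only the vcs elaboration step adds the partition compile options, and the file lists and top name are placeholders:

```
#!/bin/sh -e
# Hypothetical three-step (UUM) flow with partition compile
vlogan -sverilog -ntb_opts uvm -f vip_files.f   # unchanged
vlogan -sverilog -f tb_files.f                  # unchanged
vcs tb_top -partcomp +optconfigfile+pc.optcfg -fastpartcomp=j4 -lca
```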

The vcsmxpipvlog Simulator Target in Makefiles

The vcsmxpipvlog simulator target in the Makefiles of VIP UVM
examples enables compilation of the examples in PIP technology.
There is no change in the vlogan commands. One PIP compilation

command with the -genip option is created for each line specified
in the pc.optcfg configuration file. The -integ option is used in
the integration step to generate the simv executable.

In PIP technology, changes in testbench, VIP, or DUT source code
trigger recompilation in required PIPs only. You must ensure that the
Verification Compiler compilation database is not deleted between
successive recompilations.
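A minimal PIP flow might be sketched as follows. The unit and top names are placeholders, and the exact form of the -genip argument should be checked against the VCS/VCS MX LCA Features Guide:

```
#!/bin/sh -e
# Hypothetical PIP flow: one -genip compile per pc.optcfg entry,
# then a single -integ step that produces simv
vlogan -sverilog -f tb_files.f
vcs -genip svt_uvm_pkg -lca
vcs -genip my_vip_pkg -lca
vcs -integ tb_top -lca
```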

Partition Compile and Precompiled IP Implementation in Testbenches With VIPs

You can use the Makefiles in VIP UVM examples as a template to set up
the Partition compile or PIP technology in your design and
verification environment by performing the following steps:

Modify the pc.optcfg configuration file to include user-defined
partitions. The recommendations are as follows:
- Create four to eight overall partitions (DUT and VIP combined).
- Some VIP packages may include separate packages for
transmitter and receiver VIPs. If only a transmitter or a receiver
VIP is required, then the unused package can be removed from
the configuration file.
- Continue to use separate partitions for common packages, such
as uvm_pkg and svt_uvm_pkg, as defined in the VIP
configuration file.
Incorporate partition compile or precompiled IP command-line
options documented in previous sections or issued by Makefile
targets into the VCS command lines.

For more information on partition compile and precompiled IP
options, such as -sharedlib and -pcmakeprof, see the VCS/
VCS MX LCA Features Guide.

Example
The following are the steps to integrate VIPs into the partition
compile and PIP technologies:

1. Once you set the VC_HOME variable, the VC_VIP_HOME variable
is automatically set to the following location:
$VC_HOME/vl
2. Check the available VIP examples using the following command:
$VC_VIP_HOME/bin/dw_vip_setup -i home
3. Install the example.
For example, to install the UART SVT UVM Basic example, use
the following command:
$VC_VIP_HOME/bin/dw_vip_setup -e uart_svt/tb_uart_svt_uvm_basic_sys -svtb
cd examples/sverilog/uart_svt/tb_uart_svt_uvm_basic_sys
4. Run the tests present in the tests directory in the example.
For example,
To run the ts.base_test.sv test in the VCS two-step flow with
Partition compile, use the following command:
gmake base_test USE_SIMULATOR=vcspcvlog

To run the ts.base_test.sv test in the VCS UUM flow with
Partition compile, use the following command:
gmake base_test USE_SIMULATOR=vcsmxpcvlog
To run the ts.base_test.sv test in the VCS UUM flow with
Precompiled IP, use the following command:
gmake base_test USE_SIMULATOR=vcsmxpipvlog
5. To modify or change partitions, you must change the pc.optcfg
file for the example.

6
Verification Debug
Nearly 40% of verification time is spent on debugging, which makes
it the longest phase in the verification cycle. The time spent on
debugging is an indication of the number of defects and points to
inadequacies in the current approach. Simple defects can degrade
performance and can turn into critical defects that take more time
to detect at the subsystem or chip level. For block-level verification,
formal verification technology can be used to find corner-case
defects and improve design quality in the most efficient way.

This chapter describes the following integration offered by
Verification Compiler Platform (L-2016.06-SP1):

VC Formal Coverage With Verdi Coverage


This chapter also describes the following capabilities and
integrations offered by Verification Compiler Platform that help you
to significantly improve debug productivity:

Verification Debug
233
Generating VCS Coverage Database From Design and FSDB
Power Model L1/Utility/Applications
Value Traverse-Based Netlist
Enhancing NPI Applications in the Verification Compiler Platform
Application
Unified Verdi Debug Platform for Interactive and Post-Simulation
Debug
Saving/Loading Verdi Elaboration DB Library to/from Disk
Attaching a Running Simulation in Verdi
Reverse Interactive Debug
Unified UCLI Command for FSDB Dumping
AMS-Debug Integrations
Unified Transaction Debug With Native Verdi Protocol Analyzer
Memory Debug With Native Verdi Protocol Analyzer
VC APPs Protocol Analyzer
Protocol Analyzer: Native Performance Analyzer for
Transactions

VC Formal Coverage With Verdi Coverage

This section explains how VC Formal coverage is integrated with the
Verdi coverage reporting flow. The two primary links between Verdi
and VC Formal are displaying VC Formal results in Verdi, and linking
VC Formal results into your verification plan using Verdi Planner.

Use Model

This section describes how to use Verdi coverage to display and link
to VC Formal results. This section consists of the following
subsections:

Collecting VC Formal Results in the Coverage Database


Measuring VC Formal Assert Status in HVP

Collecting VC Formal Results in the Coverage Database


To display VC Formal results in the coverage report, you must set the
VC_STATIC_HOME environment variable.
For example:

setenv VC_STATIC_HOME /tools/synopsys/vcst

where VC_STATIC_HOME is an environment variable that must be set
to point to the VC Formal installation directory.

To start the VC Formal tool, use the following command:

vcf -f test.tcl -verdi

where,

vcf is a command to start the VC Formal tool with an interactive
shell. The vcf shell is a shell that calls vc_static_shell
internally. The vcf shell supports all the options supported by
vc_static_shell. The vcf shell automatically runs in the 64-bit
mode, unless you explicitly specify the -mode32 option. For details
on vc_static_shell, see the VC Formal Verification User Guide.
-f specifies your VC Formal execution script.

You must provide your own tcl script to run VC Formal. To enable
collection and display of coverage data in Verdi, the following two
commands must be included in the tcl script:

1. The command to run VC Formal must include the all flag. If you
want to target assertions (not just cover properties), also include
-cm assert as a flag to VCS:
read_file all -format verilog -sva -top $top -vcs "-cm assert -sverilog $testDir/test.v -sva"

2. The command to save the results to the coverage database, for
example, my_covdb:
save_covdb -name my_covdb assert+cover
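Putting the two commands together, a minimal test.tcl might look like the following sketch; the top name and source path are placeholders for your own design:

```
# Hypothetical test.tcl sketch for a VC Formal run that saves coverage
set top fsm
read_file all -format verilog -sva -top $top \
    -vcs "-cm assert -sverilog ./test.v -sva"
save_covdb -name my_covdb assert+cover
```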

This section consists of the following subsections:

Verdi GUI for VC Formal


VC Formal Coverage in Verdi

Verdi GUI for VC Formal
To display the Verdi GUI for VC Formal (see the figure below), use
the vcf command with the -verdi option.

Figure 6-1 Verdi GUI for VC Formal

The properties are categorized according to the usage fields in VC
Formal as follows:

assert: The property is specified as an assertion to be solved. Its
values are proven, inconclusive, vacuous, and falsified.
assume: The property is specified as a constraint.
cover: The property is specified as a cover property. Its values are
covered, inconclusive, and uncoverable.
unused: The property is disabled.

VC Formal Coverage in Verdi
To load the coverage database generated with VC Formal in Verdi,
use the following command:

verdi -cov -dir my_covdb.vdb

where,

-cov: Starts Verdi in the coverage mode.

-dir: Opens the coverage database.

my_covdb.vdb: The name of the coverage database that gets
opened. This is specified in the test.tcl script with the
save_covdb command.

The command invokes the following window:

Figure 6-2 Verdi Coverage Assert Pane

The VC Formal information is displayed in the Assert tab in the
FVtype, FVstatus, and FVdepth columns. This information is the same
as the VC Formal results. The green color represents Proven or
Covered, the red color represents Falsified, Uncoverable, or
Vacuous, and the yellow color represents Inconclusive.

The FVtype column indicates the usage field. The value in this
column can be either Assert, Cover, or Assume.

The FVstatus column indicates the run status of VC Formal. Its
possible values depend on FVtype:

If FVtype is Assert, the value of FVstatus can be Proven,
Inconclusive, Falsified, or Vacuous.
If FVtype is Cover, the value of FVstatus can be Covered,
Inconclusive, or Uncoverable.
The FVdepth column is interpreted as follows:

If FVstatus is Proven, Vacuous, or Uncoverable and its value is
-1, it represents infinite depth.
For other FVstatus values, N>=0 represents the depth of the trace.

Measuring VC Formal Assert Status in HVP


VC Formal results can be annotated automatically onto features in
your verification plan. Using these results, you can measure your
expectations with FVstatus. The figure below shows VC Formal
results in a verification plan with attributes and metrics:

Figure 6-3 VC Formal Results in HVP

To measure the VC Formal Assert status in HVP, perform the
following steps:

1. Set the expected FV Assert status with FV attribute values using
fvassert_expected_status.

feature f_default;

//These values are same as default value, no need


//assignments again here.

// fvassert_expected_status = proven;

// fvassert_expected_mindepth = -1;

// fvcover_expected_status = covered;

// fvcover_expected_maxdepth = 0;

measure Assert, FVAssert m1;

source = "property: fsm.*";

endmeasure

endfeature

feature f_expected_assigned;

fvassert_expected_status = inconclusive;

fvassert_expected_mindepth = 50;

Verification Debug
240
fvcover_expected_maxdepth = 50;

measure Assert, FVAssert m1;

source = "property: fsm.a2", "property:


fsm.a_loop_break", "property: fsm.c2";

endmeasure

endfeature

2. Add measures with property-type sources and reference the
   FVAssert metric.

   The new FVAssert built-in metric is added to score the FV Assert
   status against the user's expectation. If FVstatus is the same as
   fvassert_expected_status, then the assertion/property is
   considered covered.

   For example:

   measure Assert, FVAssert m1;
     source = "property: top.a1*";
   endmeasure

3. Calculate FVAssert metric scores.

   After finding the matching region in the coverage database, the
   covered/coverable status is extracted from that database and
   added as a ratio to the FVAssert metric score. For the FVAssert
   metric, you can get FVstatus from the FV annotations in the
   coverage database and compare it with
   fvassert_expected_status, which is set in the fvassert*
   attributes. If FVstatus is the same as
   fvassert_expected_status, then the assertion/property is
   considered covered.

   For example, the matching property top.a11 has FVtype=assert
   and FVstatus=Proven. If the expectation is
   fvassert_expected_status = proven, then the FVAssert metric
   score becomes 1/1. If the expectation is
   fvassert_expected_status = inconclusive and
   fvassert_expected_depth = 60, then the FVAssert metric score
   becomes 0/1.
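As a rough sketch of the comparison described above (the real metric calculation is internal to HVP; `fvassert_score` is a hypothetical name used for illustration only):

```python
# Illustrative sketch: an FVAssert score is the ratio of matched
# properties whose annotated FVstatus equals the expected status.
def fvassert_score(fvstatuses, expected_status):
    covered = sum(1 for s in fvstatuses if s == expected_status)
    return covered, len(fvstatuses)
```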

Also, consider the following figure, which shows six cases of
measuring the VC Formal Assert status in your verification plan:

- Case1: fsm.a2: FVtype is "assert", FVstatus is "falsified",
  FVdepth is "6", and the property is excluded. In the
  top.f_default.m1 feature, the source "property: fsm.*" matches
  the excluded property; therefore, no score is added.
- Case2: fsm.a_loop_break: FVtype is "assert", FVstatus is
  "inconclusive", and FVdepth is "70". In the
  top.f_expected_assigned feature, fvassert_expected_status is
  "inconclusive" and fvassert_expected_mindepth is "50", which
  are set by the user. This meets the user's expectations, and the
  FVAssert metric score becomes 1/1.

- Case3: fsm.a_trans: FVtype is "assume". In the top.f_assume
  feature, fvassume_status is "reviewed", which is set by the user.
  The FVAssert metric score becomes 1/1.

- Case4: fsm.c_blk_cnt: FVtype is "cover", FVstatus is "covered",
  and FVdepth is "1". In the top.f_default feature, with the default
  attribute values, fvcover_expected_status is "covered" and
  fvcover_expected_maxdepth is "0". FVstatus is compared with the
  expected attribute values; because the expectation is not met,
  the FVAssert metric score becomes 0/1.
- Case5: fsm.c_trans has no FV annotation. In the
  top.f_default.m_no_fv_ann measure, which matches fsm.c_trans
  without an FV annotation, the FVAssert metric becomes 0/1
  because the FVAssert metric is referenced explicitly in the
  measure.
- Case6: dummy source. In the top.f_default.m_dummy measure, all
  metrics with no matching region score "0".

Generating VCS Coverage Database From Design and
FSDB

If you use the VCS verification flow, you can obtain a VDB and use it
as the main coverage repository. However, in non-VCS verification
flows, such as FPGA prototyping and emulation, you obtain an FSDB
rather than a VDB. If you still have to manage coverage, you need to
obtain the coverage database from the flow and merge it into the
VDB.

To calculate the VCS coverage data from FSDB and then generate
the VCS coverage database from FSDB, use the covsim utility. The
coverage data supported by covsim includes the following:

- FSM coverage, including state and transition coverage
- Toggle coverage, including MDA

Figure 6-4 The covsim Utility Generates VCS Coverage Database From
FSDB

This section consists of the following subsections:

- Data Input and Output Requirements
- Use Model
- The covsim Commands

Data Input and Output Requirements

Before generating the VCS coverage database from a design and
FSDB, specify the FSDB input requirements in the configuration file.
The format to set input variables in the configuration file is as
follows:

set VARIABLE_NAME = VALUE

The following is an example of the configuration file:

set FSDB = ./dump.fsdb
set FSDB_MAPFILE = ./rtl_to_fpga.map

The variables have the following meanings:

- FSDB: Contains the signal values that are required to calculate
  the coverage data.
- FSDB_MAPFILE: Specifies how to read the FSDB data if the
  hierarchy of the coverage database and the FSDB differ.

Before using the covsim command, prepare the coverage database.
To prepare the initial empty coverage database with VCS, use the
following command:

vcs -sverilog -f run.f -cm fsm+tgl -cm_tgl mda

where,

- -cm fsm+tgl enables the FSM and toggle coverage.
- -cm_tgl mda enables the toggle coverage for Verilog 2001 and
  SystemVerilog unpacked multidimensional arrays.

Use Model

You can perform the following steps to generate the VCS coverage
database from a design and FSDB:

1. Prepare an Initial VDB for covsim.
2. Prepare FSDB for covsim in the Extraction Phase.
3. Calculate the Coverage Data From FSDB by covsim in the
   Simulation Phase.

Prepare an Initial VDB for covsim

Compile a design with the -cm option to generate the VDB using the
following steps:

1. Prepare an initial VDB by using the VCS command to generate
   the VCS VDB named simv.vdb:

   vcs -f run.f -cm tgl...

2. Specify the VDB directory:

   vcs -f run.f -cm tgl... -cm_dir init.vdb

Prepare FSDB for covsim in the Extraction Phase

The extraction phase extracts the essential signals for covsim. Note
that this phase is optional. It is useful if you use FPGA prototyping
or emulation, since dumping all signals to FSDB is time-consuming.
If the FSDB is already configured for the simulation phase, the
extraction phase can be skipped.

Figure 6-5 Use Model - Extraction Phase

The essential signals are extracted for the FSDB dumping that is
required by the simulation phase. In the extraction phase, the input
coverage database must be specified. To specify the input coverage
database, use the following options in the covsim command:

- -simBin <path>: Specifies the path of the simulation binary
  file. If this option is specified, covsim finds the corresponding
  .vdb file as the input. For example, if -simBin vcssimv is
  specified, covsim treats vcssimv.vdb as the input.
- -dir <dir>: Specifies the input coverage database. If the
  -dir option is specified, the path specified with the -simBin
  option is ignored. For example, if -dir target.vdb is specified,
  then covsim treats target.vdb as the input.

If neither the -simBin option nor the -dir option is specified, the
default input coverage database is simv.vdb.
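The option precedence described above can be summarized in a short sketch (illustrative only, not covsim code; the function name is hypothetical):

```python
# Illustrative sketch of how covsim selects its input coverage
# database: -dir wins over -simBin, and simv.vdb is the fallback.
def input_coverage_db(sim_bin=None, cov_dir=None):
    if cov_dir:
        return cov_dir
    if sim_bin:
        return sim_bin + ".vdb"
    return "simv.vdb"
```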

After the extraction phase, the signal lists are generated in the
working directory covsim.work++.

For the FSM coverage, the FSM state signals are generated in the
fsm.list file:

memsys_test_top.dut.Urrarb.state
memsys_test_top.dut.Umem.state

For the toggle coverage, the toggle signals are generated in the
tgl.list file. The format of tgl.list is:

[Instance]
memsys_test_top.dut.u3
[Port]
memsys_test_top.dut.u3.ce_N
memsys_test_top.dut.u3.rdWr_N
memsys_test_top.dut.u3.ramAddr
memsys_test_top.dut.u3.ramData
[Signal]
memsys_test_top.dut.u3.chip (MDA)

[Instance]

[Port]

[Signal]

Calculate the Coverage Data From FSDB by covsim in
the Simulation Phase

The simulation phase generates the calculated coverage database
from FSDB.

Figure 6-6 Use Model - Simulation Phase

In the simulation phase, the input coverage database must be
specified. To specify the input coverage database, use the following
two options in the covsim command:

- -simBin <path>: Specifies the path of the simulation binary
  file. If this option is specified, covsim finds the corresponding
  .vdb file as the input. For example, if -simBin vcssimv is
  specified, covsim treats vcssimv.vdb as the input.
- -dir <dir>: Specifies the input coverage database. If the
  -dir option is specified, the path specified with the -simBin
  option is ignored. For example, if -dir target.vdb is specified,
  then covsim treats target.vdb as the input.

If neither the -simBin option nor the -dir option is specified, the
default input coverage database is simv.vdb.

Another required input of the simulation phase is the FSDB file.

The FSDB_MAPFILE is an optional input. Specify FSDB_MAPFILE if the
hierarchy of the coverage database and the FSDB differ.

The format of FSDB_MAPFILE is:

source_scope_name => fsdb_scope_name

The following is an example of FSDB_MAPFILE:

memsys_test_top.dut.u0 => memsys_test_top.fpga.u0
memsys_test_top.dut.u1 => memsys_test_top.fpga.u2

After the simulation phase, the calculated coverage database is
generated. You can specify the output database with the covsim
command.
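The scope remapping that FSDB_MAPFILE implies can be illustrated with a short sketch (a hypothetical helper, not covsim code):

```python
# Illustrative sketch: parse an FSDB_MAPFILE ("src => dst" per line)
# and remap a coverage-database scope to its FSDB scope by the
# longest matching source prefix. Not part of covsim itself.
def parse_mapfile(text):
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=>" not in line:
            continue
        src, dst = (s.strip() for s in line.split("=>"))
        mapping[src] = dst
    return mapping

def remap_scope(scope, mapping):
    # Prefer the longest source prefix so nested mappings win.
    for src in sorted(mapping, key=len, reverse=True):
        if scope == src or scope.startswith(src + "."):
            return mapping[src] + scope[len(src):]
    return scope
```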

The covsim Commands

The covsim command is a Perl script installed in
$VERDI_HOME/bin/. The following covsim options are supported:

Table 6-1 VC Apps covsim Commands

covsim <options>
    Runs covsim with the following options:

-h, -help
    Prints the usage.

-config <config_file>
    Specifies the configuration file.

-extract fsm|tgl [-scope <scope>] [-simBin <path>] [-dir <dir>]
    Enables the extraction phase, which extracts the essential
    signals for covsim. This phase is optional. The essential
    signals are extracted for FSDB dumping.

-simBin <path>
    Specifies the path of the simulation binary file. The default
    path is simv.

-dir <dir>
    Specifies the input coverage database. If -dir is not specified
    but -simBin is specified, the directory is set to
    <simBin_path>.vdb. The default directory is simv.vdb.

-sim fsm|tgl [-simBin <path>] [-dir <dir>] [-output <db_name/test_name>]
    Enables the simulation phase. Use the plus (+) character as a
    delimiter between arguments to monitor multiple coverage
    metrics.

-output <db_name/test_name>
    Specifies the output test where the coverage result is stored.
    The default is covsim.vdb/covsim_test.

Power Model L1/Utility/Applications

The VC Applications power model L1 APIs are used to access the
power model and traverse the status of the power model, including
the power domain crossing and the power supply network. These VC
Apps power model L1 APIs are created based on the VC Apps power
model L0.

Use Model

Before using the VC Apps power model L1 APIs to access the power
model and traverse the status of the power model, you need to
import the HDL design and the UPF file. After the desired VC Apps
power model L1 APIs are applied to access the power model, you
can check the results of the power domain crossing and the power
supply network.

The following figure shows the use flow of the VC Apps power model
L1 APIs:

Figure 6-7 Use Flow - VC Apps Power Model L1 APIs

The VC Apps power model L1 API applications include the following
groups for different scenarios:

- Power domain crossing
- Power supply network

Both of these groups provide Tcl and C++ interfaces.

The following tables list the features of the VC Apps power model L1
APIs for these scenarios:

Table 6-2 VC Apps Power Model L1 APIs for Power Domain Crossing

npi_pw_get_domain_crossing_pd
    Gets the domain crossing power domain based on the NPI Power
    Model.
npi_pw_get_domain_crossing_path
    Gets the domain crossing path based on the NPI Power Model.
npi_pw_get_domain_crossing_path_through_ports
    Gets the domain crossing path through two ports based on the
    NPI Power Model.
npi_pw_get_domain_crossing_path_by_pd
    Gets the domain crossing path for the specified power domain
    based on the NPI Power Model.
npi_pw_get_domain_crossing_path_through_pds
    Gets the domain crossing path between the two specified power
    domains based on the NPI Power Model.
npi_pw_get_domain_crossing_path_by_pd_hdl
    Gets the domain crossing path for the specified power domain
    based on the NPI Power Model.
npi_pw_get_domain_crossing_path_through_pd_hdls
    Gets the domain crossing path between the two specified power
    domains based on the NPI Power Model.
npi_pw_get_all_domain_crossing_path
    Gets all the domain crossing paths based on the NPI Power
    Model.
npi_pw_domain_crossing_path_by_pd_dump
    Dumps the domain crossing path for the specified power domain
    based on the NPI Power Model.
npi_pw_domain_crossing_path_through_pds_dump
    Dumps the domain crossing path between the two specified power
    domains based on the NPI Power Model.
npi_pw_domain_crossing_path_by_pd_hdl_dump
    Dumps the domain crossing path for the specified power domain
    based on the NPI Power Model.
npi_pw_domain_crossing_path_through_pd_hdls_dump
    Dumps the domain crossing path between the two specified power
    domains based on the NPI Power Model.
npi_pw_all_domain_crossing_path_dump
    Dumps all the domain crossing paths based on the NPI Power
    Model.

Table 6-3 VC Apps Power Model L1 APIs for Power Supply Network

npi_pw_get_pd_from_inst
    Gets the power domain handle for the specified instance name.
npi_pw_get_primary_power_net_from_inst
    Gets the primary power net handle for the specified instance
    name.
npi_pw_get_primary_ground_net_from_inst
    Gets the primary ground net handle for the specified instance
    name.
npi_pw_get_boundary_inst_from_pd
    Gets the instance handles of boundary instances for the
    specified power domain handle.
npi_pw_trace_supply_driver
    Finds the drivers of a given supply port or supply net based on
    the Power Model.
npi_pw_trace_supply_load
    Finds the loads of a given supply port or supply net based on
    the Power Model.
npi_pw_get_supply_network_path
    Finds the paths in the power supply network by tracing drivers
    and loads of the given from/through/to supply net or supply
    port handle vectors based on the Power Model.
npi_pw_get_primary_power_network_path
    Finds the paths of the primary power supply network for the
    specified instance name.
npi_pw_get_primary_ground_network_path
    Finds the paths of the primary ground supply network for the
    specified instance name.
npi_pw_supply_network_path_dump
    Dumps the supply network path vector.
npi_pw_primary_power_ground_network_path_dump
    Dumps the paths of the primary power and primary ground supply
    network for the specified instance name.
npi_pw_get_pd_from_primary_power_net
    Gets the power domain handles for the specified primary power
    net.
npi_pw_get_pd_from_primary_ground_net
    Gets the power domain handles for the specified primary ground
    net.

Value Traverse-Based Netlist

The VC Applications value traverse netlist L1 APIs are provided to
traverse and propagate values. The value traverse netlist L1 APIs
are created mainly based on the netlist model and have the
following features:

- Specify a top scope for the value propagation.
- Set values on signals, depending on how these values are
  propagated, and obtain values from signals.
- Trace the driver/load and fan-in/fan-out cone based on
  propagated values. This action is also called the active trace.

These APIs help you obtain a deeper understanding of how values
are evaluated between cells in SystemVerilog (SV) or VHDL designs.
Meanwhile, you can save the design as a graph in memory to
increase the speed of tracing.

To propagate values through signals, first specify a scope as the
top scope of propagation. Signals outside the top scope are not
considered. Under a specified top scope, you can set values on
ports, instports, or nets, either by bus or by bits. Each bit of
instports and ports is taken as one node, and values are set on
each node by bit. After propagating, the fan-out cone of the set
signals is generated. The graph structure (Figure 6-8) stores the
driver/load relation. You can then propagate values according to the
topologically sorted order, and obtain values or trace drivers/loads
of signals under the top scope after propagating.
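The set/propagate flow described above can be sketched as a tiny topological evaluation (an illustrative Python sketch of the idea, not the NPI API; the gate names and 4-state encoding here are assumptions for the example):

```python
# Illustrative sketch: propagate 4-state values through a small
# combinational graph in topological order, the way the value
# traverse model evaluates nodes after values are set.
from graphlib import TopologicalSorter

def and2(a, b):
    # 4-state AND: a 0 on either input dominates; unknowns give X.
    if a == "0" or b == "0":
        return "0"
    if a == "1" and b == "1":
        return "1"
    return "X"

# gates: output node -> (function, input nodes); user-set nodes
# ("a", "b", "c") have no gate and hold forced values.
gates = {"n1": (and2, ("a", "b")), "out": (and2, ("n1", "c"))}
values = {"a": "1", "b": "X", "c": "0"}

deps = {out: set(ins) for out, (_, ins) in gates.items()}
for node in TopologicalSorter(deps).static_order():
    if node in gates:
        fn, ins = gates[node]
        values[node] = fn(*(values[i] for i in ins))

# values["out"] is "0" because the forced 0 on c dominates the AND
```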

Figure 6-8 Top Boundary Scope

This section consists of the following subsections:

- Use Model
- Limitations

Use Model

Before using the VC Apps value traverse netlist L1 APIs to set
values on signals and propagate the values, you need to import the
design and initialize the VC Apps value traverse netlist L1 APIs.
After propagating the values, you can obtain the values.

The following figure shows the use flow of the VC Apps value
traverse netlist L1 APIs:

Figure 6-9 Use Flow - VC Apps Value Traverse Netlist L1 API

The following example shows how this use flow works in the C++
environment:

Figure 6-10 VC Apps Value Traverse Netlist L1 APIs Use Flow in C++

The VC Applications value traverse netlist L1 API applications
include two groups of settings:

- Basic setting
- Active trace

The VC Apps value traverse netlist L1 APIs are created based on the
NPI Netlist L1 and L0 models and are provided for both Tcl and C++
interfaces. Table 6-4 lists the features of the VC Apps value
traverse netlist L1 APIs for these settings:

VC Apps Value Traverse Netlist L1 APIs for Basic Settings

The VC Apps value traverse netlist L1 APIs provide a series of APIs
that present the process of propagating a value from the driver to
the load and from the input port to the input instport of a
register. You should first pick some signals to set values on, and
then perform propagation to see how these values change and how
they affect their loads while passing through different cells. You
can obtain values from the target signals or figure out the exact
bit of the register that is hit after a value is propagated.

Note: A value traverse netlist model cannot function without calling
the APIs in the following order: begin, set_value, propagate,
get_value/trace/other, and then end.

Table 6-4 VC Apps Value Traverse Netlist L1 APIs for Basic Setting

npi_vanl_begin
    Initializes the VANL control center.
    Sets the top boundary scope. If NULL, all scopes are included.
    When a specific scope is set, any signals outside the scope are
    not considered.
    Finds all literal nets under the top boundary scope and then
    calls npi_vanl_set_value to set values on the corresponding
    nodes.
    Sets the default value that is assigned to a newly created node
    if no value is set.
    Returns 1 on success.

npi_vanl_end
    Deletes all nodes, pins, and instances in the control center.
    Deletes the value traverse netlist L1 API control center.
    Returns 1 on success.

npi_vanl_set_value
npi_vanl_set_value_by_hdl
npi_vanl_set_value_from_file
    Set values on the specified signal. The signal can be a net,
    port, or instport handle, or a pseudo net, port, or instport.
    Create the corresponding nodes of these signals with the
    default value defined in npi_vanl_begin.
    Once set, those values are forced and do not change after the
    next npi_vanl_propagate_value call.
    Return the total number of nodes being set.

npi_vanl_propagate_value
    Traces loads from the user-set nodes and creates the
    corresponding nodes with the default value.
    Topologically sorts all user-set and generated nodes.
    Propagates and evaluates values according to the topological
    sort order.
    Returns 1 on success.

npi_vanl_get_value
npi_vanl_get_value_by_hdl
npi_vanl_dump_value_to_file
    Return the value string of the specified signal. The format can
    be npiNlBinFormat, npiNlOctFormat, npiNlHexFormat, or
    npiNlDecFormat.
    If npi_vanl_dump_value_to_file is called, it dumps every
    declaration net under the top boundary scope with its name and
    value to a specified file. This API returns the total number of
    declaration nets being dumped.

npi_vanl_clear_all_value
    Clears the values of all nodes and sets all nodes to the
    default value. Nodes can then be set or queried again.
    Returns 1 on success.

The following example shows the usage of the VC Applications value
traverse netlist L1 APIs in the C++ interface:

Figure 6-11 Example - VC Apps Value Traverse Netlist L1 APIs in C++

The following example shows the usage of the VC Applications value
traverse netlist L1 APIs in the Tcl interface:

Figure 6-12 Example - VC Apps Value Traverse Netlist L1 APIs in Tcl

VC Apps Value Traverse Netlist L1 APIs for Active Trace

The VC Apps value traverse netlist L1 APIs provide a way to trace
the driver/load with a value. This tracing action is called the
active trace. You can trace the driver/load after the value is set
and then conveniently find the possible signals that caused the
value change of the target signal.

The following table lists the VC Apps value traverse netlist L1 APIs
for the active trace.

Table 6-5 VC Apps Value Traverse Netlist L1 APIs for Active Trace

npi_vanl_driver
npi_vanl_driver_by_hdl
npi_vanl_driver_dump
npi_vanl_driver_by_hdl_dump
    Actively trace the drivers of a signal based on the NPI Netlist
    Model. An active trace only happens when tracing from a cell
    output. It considers the cell type and the values on both the
    output and the input.
    The APIs do not traverse across the module boundary; they pass
    through the assign cell, which is treated as a primitive cell.
    Return the total number of drivers found.

npi_vanl_load
npi_vanl_load_by_hdl
npi_vanl_load_dump
npi_vanl_load_by_hdl_dump
    Actively trace the loads of a signal based on the NPI Netlist
    Model. An active trace only happens when tracing from a cell
    input. It considers the cell type and the values on both the
    output and the input.
    The APIs do not traverse across the module boundary; they pass
    through the assign cell, which is treated as a primitive cell.
    Return the total number of loads found.

npi_vanl_fan_in_reg
npi_vanl_fan_in_reg_by_hdl
npi_vanl_fan_in_reg_dump
npi_vanl_fan_in_reg_by_hdl_dump
    Find all fan-in register objects (instances) of a signal using
    the NPI VANL graph under active trace.
    Register types in the Netlist model are: npiNlFlipFlopCell,
    npiNlLatchCell, npiNlExternalRamCell, npiNlFSMCell, and
    npiNlInfLatchCell.
    Return the total number of fan-in register instances found.

npi_vanl_fan_out_reg
npi_vanl_fan_out_reg_by_hdl
npi_vanl_fan_out_reg_dump
npi_vanl_fan_out_reg_by_hdl_dump
    Find all fan-out register objects (instances) of a signal using
    the NPI VANL graph under active trace.
    Register types in the Netlist model are: npiNlFlipFlopCell,
    npiNlLatchCell, npiNlExternalRamCell, npiNlFSMCell, and
    npiNlInfLatchCell.
    Return the total number of fan-out register instances found.

While tracing the active driver, each instType has its own rule. For
npiNlSymbolLibInst, the APIs extract its npiNlFunc, npiNlXFunc, and
npiNlThreeStateFunc. The active drivers are evaluated according to
these function strings. For example:

RTL code: "FN04D2 oai2 (ZN, A1, A2, B1, B2)"

npiNlFunc: !((A1&A2)|(B1&B2))

When A1=1, A2=0, B1=0, and B2=1, the active drivers of ZN are A2
and B1, because the output value of ZN is caused by the inputs
(A1&A2)=0 and (B1&B2)=0, and each of these terms is held at 0 by
A2 and B1, respectively.
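The controlling-value reasoning in this example can be sketched for the OAI22 function (an illustrative Python sketch of the idea, not how the API is implemented; the function name is hypothetical):

```python
# Illustrative sketch of active drivers for !((A1&A2)|(B1&B2)):
# when an AND term evaluates to 0, the inputs holding it at 0 are
# the active drivers; otherwise all inputs of the term matter.
def active_drivers_oai22(vals):
    drivers = []
    for term in (("A1", "A2"), ("B1", "B2")):
        zeros = [p for p in term if vals[p] == 0]
        drivers += zeros if zeros else list(term)
    return drivers
```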

For npiNlUDPInst, the VC Apps value traverse netlist L1 APIs extract
the first table entry that matches the current input values. The
value traverse netlist L1 APIs treat the UDP symbol ? as a "do not
care" bit. This means the active drivers should not include the bits
whose symbol matches ?. In the following example, the active driver
result of this UDP is the same as that of npiNlAndCell:

primitive m_and (out, dataA, dataB);
  output out;
  input dataA, dataB;
  table
  // dataA dataB : out
     0     0     : 0 ;
     0     ?     : 0 ;
     ?     0     : 0 ;
     1     1     : 1 ;
  endtable
endprimitive
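The first-match behavior with ? as a don't-care can be sketched as follows (an illustrative Python sketch; `udp_active_drivers` is a hypothetical helper, not an NPI API):

```python
# Illustrative sketch of UDP table matching: the first matching row
# wins, and inputs whose row symbol is "?" (don't-care) are excluded
# from the active drivers.
def udp_active_drivers(rows, names, values):
    for syms, _out in rows:
        if all(s == "?" or s == v for s, v in zip(syms, values)):
            return [n for n, s in zip(names, syms) if s != "?"]
    return []

# Rows of the m_and primitive above: (input symbols, output).
rows = [("00", "0"), ("0?", "0"), ("?0", "0"), ("11", "1")]
# dataA=0, dataB=1 matches row "0?", so only dataA is an active driver.
```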

For npiNlRTLInst and npiNlHierInst, the VC Applications value
traverse netlist L1 APIs evaluate the active drivers according to
the npiNlCellType. The following are the active driver rules for
each cellType. For an instance whose cellType is not listed in the
following tables, the active drivers are equal to all drivers of the
instance:

CellType: npiNlAndCell, npiNlNandCell
Example: A = B & C

  B     C     Driver
  0     0     B, C
  X     X     B, C
  0     1, X  B
  1, X  0     C
  1     1     B, C

CellType: npiNlOrCell, npiNlNorCell
Example: A = B | C

  B     C     Driver
  0     0     B, C
  X     X     B, C
  0     1, X  C
  1, X  0     B
  1     1     B, C

CellType: npiNlNotCell
Example: A = !B

  B[I]  B[J]  B[K]  Driver
  0     0     0     B
  0     0     1, X  B[K]
  0     1     X     B[J]
  1     1     1     B
  X     X     X     B
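The npiNlAndCell rule above can be expressed as a small function (an illustrative Python sketch consistent with the table; `and_cell_drivers` is a hypothetical name):

```python
# Illustrative sketch of the npiNlAndCell active-driver rule for
# A = B & C: a lone controlling 0 makes that input the sole active
# driver; in every other case both inputs are active drivers.
def and_cell_drivers(b, c):
    if b == "0" and c != "0":
        return ["B"]
    if c == "0" and b != "0":
        return ["C"]
    return ["B", "C"]
```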

CellType: npiNlAndReduCell, npiNlNandReduCell
Example: A = & B

  B[I]  B[J]  B[K]  Driver
  0     0     0     B
  0     0     1, X  B[I], B[J]
  0     1     X     B[J]
  1     1     1     B
  X     X     X     B

CellType: npiNlOrReduCell, npiNlNorReduCell
Example: A = | B

  B[I]  B[J]  B[K]  Driver
  0     0     0     B
  0     0     1, X  B[K]
  0     1     X     B[J]
  1     1     1     B
  X     X     X     B

CellType: npiNlMuxCell, npiNlInfLatchCell
Example:

  always @(sel or r1 or r2) begin
    if (sel) r3 = r1;
    else r3 = r2;
  end

  r1  r2  sel  Driver
  ?   ?   0    r2, sel
  ?   ?   1    r1, sel
  ?   ?   X    r1, r2, sel
CellType: npiNlLogAndCell
Example: A = B && C

  B[I]  B[J]  B[K]  C[i]  C[j]  C[k]  Driver
  0     0     0     0     0     0     B, C
  0     0     1, X  0     0     0     C
  0     0     X     0     0     0     C
  1     1     1     0     0     0     C
  X     X     X     0     0     0     C
  0     0     0     X     X     X     B
  0     0     X     X     X     X     B[K], C
  0     1     X     X     X     X     C
  1     1     1     X     X     X     C
  X     X     X     X     X     X     B, C
  0     0     0     1     1     1     B
  0     0     X     1     1     1     B[K]
  0     1     X     1     1     1     B[J], C
  1     1     1     1     1     1     B, C
  X     X     X     1     1     1     B

CellType: npiNlLogOrCell
Example: A = B || C

  B[I]  B[J]  B[K]  C[i]  C[j]  C[k]  Driver
  0     0     0     0     0     0     B, C
  0     0     1, X  0     0     0     B[K]
  0     0     X     0     0     0     B[J]
  1     1     1     0     0     0     B
  X     X     X     0     0     0     B
  0     0     0     X     X     X     C
  0     0     X     X     X     X     B[K], C
  0     1     X     X     X     X     B[J]
  1     1     1     X     X     X     B
  X     X     X     X     X     X     B, C
  0     0     0     1     1     1     C
  0     0     X     1     1     1     C
  0     1     X     1     1     1     B[J], C
  1     1     1     1     1     1     B, C
  X     X     X     1     1     1     C
CellType: npiNlTriBufCell
Example: bufif0 buf1 (out1, D, E)

  D  E     Driver
  ?  0     D, E
  ?  1, X  E

CellType: npiNlTriCell
Example: notif0 n1 (out3, D, E)

  D  E     Driver
  ?  1     D, E
  ?  0, X  E

CellType: npiNlEQCell, npiNlAOICell, npiNlOAICell
Analyzed by their npiNlFunc, npiNlXFunc, and npiNlThreeStateFunc.

CellType: npiNlUDP
Checks whether any table row matches the values of the drivers.
Active drivers: the inputs whose corresponding row symbol is not "?".
Example:

  A  B  OUTPUT  Driver
  0  0  0       A, B
  0  ?  0       A
  ?  0  0       B
  1  1  1       A, B

The following example illustrates how to trace a driver with a value
in the C++ interface:

The following example illustrates how to trace a driver with a value
in the Tcl interface:

The program recursively traces the driver with a value, and the
value of each node is shown in Schematic1. At the first stage, the
active driver of top.w1 should be top.w2. Note that top.w3 is
dropped. The reason is that the value of top.w1 is X, and the only
possible driver to cause this is top.w2 with value X. For a similar
reason, the active drivers of top.w2 should be top.w7 and top.w9,
as the value X of top.w2 is caused by the value of top.w7 and the
control pin top.w9, which are X and 1, respectively.

As for the active trace driver of top.w3, which has the value 1 and
the instance npiNlOrCell, the possible drivers that cause this 1
are the signals whose value is also 1. Thus, you obtain top.w5 and
top.w6.

Limitations

The following are the limitations of the VC Apps value traverse
netlist L1 APIs:

- The detailed mode with a sufficient level (greater than 4) should
  be turned on. The level determines how far Verdi can decompose
  npiNlOpCell into the simple logic cells that the VC Apps value
  traverse netlist L1 APIs support.
- For the RTL type, the following cell types are not supported:
  - npiNlNonSynCell
  - npiNlLatchComboCell
  - npiNlGateClkCell
  - npiNlCounterCell
  - npiNlMosCell
  - npiNlEncoderCell
  - npiNlComboCell
  - npiNlOpCell
  - npiNlBusKeeperCell
  - npiNlMacroCell
  - npiNlInitCell
  - npiNlBiDirectionCell
  - npiNlFlipFlopComboCell
  - npiNlTranCell
- The value goes straight through npiNlModuleCell.
- The value does not propagate through registers. The VC Apps
  value traverse netlist L1 APIs are only applicable for propagating
  values between combinational cells. Thus, the register output is
  not evaluated during the propagation.
- If a cell input or wire is driven by two or more drivers with
  different values, the value is resolved according to the following
  rule:

    Input  0  1  X  Z
    0      0  X  X  0
    1      X  1  X  1
    X      X  X  X  X
    Z      0  1  X  Z
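The resolution table can be expressed as a small function (an illustrative sketch consistent with the table above; `resolve` is a hypothetical name, not an NPI API):

```python
# Illustrative sketch of the multi-driver resolution rule: equal
# values resolve to themselves, Z defers to the other value, and
# any remaining conflict (including anything with X) resolves to X.
def resolve(a, b):
    if a == b:
        return a
    if a == "Z":
        return b
    if b == "Z":
        return a
    return "X"
```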

- The following instance and cell types are not supported:
  - instType: npiNlFSMInst

Enhancing NPI Applications in the Verification Compiler
Platform Application

SystemVerilog (SV) interface ports are becoming more and more
popular in modern designs. Now, you can use the enhanced NPI
Language models and the corresponding API libraries to obtain
better driver/load tracing results on SV interface ports.

This section consists of the following subsections:

- Use Model
- Limitations

Use Model

The following NPI library APIs (based on the NPI Language model)
are enhanced to support driver/load tracing on SV interface ports:

- npi_trace_driver2
- npi_trace_driver_by_hdl2
- npi_trace_driver_dump2
- npi_trace_driver_by_hdl_dump2
- npi_trace_load2
- npi_trace_load_by_hdl2
- npi_trace_load_dump2
- npi_trace_load_by_hdl_dump2

The original behavior of the NPI language model was not enough to
support the tracing of interface ports. This enhancement extends
the NPI language model to support the tracing of interface ports.

The following descriptions outline the enhancements of the object
diagram in the NPI Language model:

- The new object npiMpPort represents the port of a modport. It
  provides the "direction" information and the name it indicates
  for the module ports.

  Figure 6-13 npiMpPort Provides Direction Information and Name Indicates

- The property npiDefName of npiPort shows the definition name of
  the SV interface port. This property returns NULL when
  npiPortType is neither npiInterfacePort nor npiModportPort.

Figure 6-14 Property npiDefName Shows Definition Name of SV Interface
Port

- For npiRefObj:
  - Applying the method npiRefObj to another reference object
    obtains the reference object whose actual is the same as the
    declaration of the reference handle. In the following example,
    the npiRefObj for the var bit top.i1.a[0] returns the
    reference object top.u1.intf.a.

    interface ii;
      reg [5:0] a;
    endinterface
    module m(interface intf);
      initial
        intf.a[3] = intf.a[1];
    endmodule
    module top;
      ii i1();
      m u1 (i1);
    endmodule

  - For the npiUse method, the resultant objects are not checked
    for overlap with the reference handle.

Figure 6-15 Apply Method npiRefObj to Another Reference Object

Based on the enhancements of the NPI Language model illustrated
above, the concept of the equivalent object is introduced for the
APIs of the trace driver/load 2 series. That is, all the npiRefObj
objects and the npiActual objects of a reference object are
included before tracing the driver/load. All equivalent objects of
the user's input signal are treated as source signals.

Take the following HDL code as an example:

[CASE1]
 1 interface I;
 2   logic [7:0] r;
 3   int x=1;
 4   bit R;
 5   modport A (output .P({r[3:1],r[0]}), input x, .S(R));
 6   modport B (input .P(r[3:0]), output x);
 7 endinterface
 8
 9 module M2 ( interface i, output reg clk, input [31:0] varInt);
10
11   always@(*) begin
12     clk = i.P[2];
13     i.x = varInt ;
14   end
15 endmodule
16
17 module M1 ( interface i);
18   always@(*) begin
19     i.P = i.x;
20   end
21 endmodule
22
23 module top;
24   reg CLK2;
25   wire [31:0] input2;
26   I i1 ();
27   M1 u1 (i1.A);
28   M2 u2 (i1.B, CLK2, input2);
29   assign input2[2] = CLK2;
30 endmodule

The npiMpPort top.i1.B.P and the npiRefObj top.u2.i.P are


equivalent signals. Therefore, if you want to trace the driver or load
of top.i1.B.P, top.u2.i.P acts as a starting signal as well.

The following is an example of the results of applying the API
npi_trace_load_dump2 to CASE1:

%> ::npi_L1::npi_trace_load_dump2 {top.i1.B.P[2]}

npiMpPortBit, P[2], (null) /* results of trace load */
Need pass through
<1> source: i.P[2], scope: top.u2
<L> npiAssignment, clk = i.P[2];, {itf_port.v : 12}
npiReg, top.u2.clk, {itf_port.v : 9}

The source signal top.u2.i.P[2] is the equivalent object of the input
signal top.i1.B.P[2]. This API uses top.u2.i.P[2] as another
input signal and then traces the loads. Therefore, the result
top.u2.clk is returned.

Below is a Tcl script that shows how to recursively trace drivers
through an SV interface port.

[traceDriver.tcl]
viaSetupL1Apps

proc trace_driver { hdl fileHdl } {
    if { [info exists ::hdlSet($hdl)] } {
        return
    }
    set ::hdlSet($hdl) 1
    set resList {}
    ::npi_L1::npi_trace_driver_by_hdl2 $hdl "resList"
    set size [llength $resList]
    puts $fileHdl "trace from [::npi_L1::npi_ut_get_hdl_info $hdl 1]"
    set nextSigList {}
    for { set i 0 } { $i < $size } { incr i } {
        set dlStruct [lindex $resList $i]
        set sigList [lindex $dlStruct 5]
        set sigSize [llength $sigList]
        for { set l 0 } { $l < $sigSize } { incr l } {
            puts $fileHdl " [::npi_L1::npi_ut_get_hdl_info [lindex $sigList $l] 1]"
            lappend nextSigList [lindex $sigList $l]
        }
    }
    puts $fileHdl "------------------------------------------------------"
    set sigSize [llength $nextSigList]
    for { set i 0 } { $i < $sigSize } { incr i } {
        trace_driver [lindex $nextSigList $i] $fileHdl
    }
}
debImport -sv itf_port.v

array set hdlSet {}
set fileHdl stdout
set hdl [npi_handle_by_name -name top.CLK2 -scope ""]
trace_driver $hdl $fileHdl
debExit

On the Verdi Tcl command entry, source the script; the following
log is then obtained for CASE1:

%> source traceDriver.tcl

trace from npiReg, top.CLK2, {itf_port.v : 24}
npiRefObj, top.u2.i.P[2], {itf_port.v : 9}
npiReg, top.u2.clk, {itf_port.v : 9}
------------------------------------------------------
trace from npiRefObj, top.u2.i.P[2], {itf_port.v : 9}
npiRegBit, top.i1.r[2], {itf_port.v : 2}
------------------------------------------------------
trace from npiRegBit, top.i1.r[2], {itf_port.v : 2}
npiMpPortBit, P[2], (null)
------------------------------------------------------
trace from npiMpPortBit, P[2], (null)
npiRefObj, top.u1.i.x[2], {itf_port.v : 17}
------------------------------------------------------
trace from npiRefObj, top.u1.i.x[2], {itf_port.v : 17}
npiBitSelect, top.i1.x[2], {itf_port.v : 3}
------------------------------------------------------
trace from npiBitSelect, top.i1.x[2], {itf_port.v : 3}
npiMpPortBit, x[2], (null)
------------------------------------------------------
trace from npiMpPortBit, x[2], (null)
npiNetBit, top.u2.varInt[2], {itf_port.v : 9}
------------------------------------------------------
trace from npiNetBit, top.u2.varInt[2], {itf_port.v : 9}
npiNetBit, top.input2[2], {itf_port.v : 25}
npiReg, top.CLK2, {itf_port.v : 24}
------------------------------------------------------
trace from npiNetBit, top.input2[2], {itf_port.v : 25}
npiReg, top.CLK2, {itf_port.v : 24}
------------------------------------------------------
trace from npiReg, top.u2.clk, {itf_port.v : 9}

npiRefObj, top.u2.i.P[2], {itf_port.v : 9}
------------------------------------------------------
As shown in the log, when the input signal is top.CLK2, the script
recursively traces through the SV interface, crossing the modport
port top.i1.B.P[2], and then obtains its driver top.i1.A.P[2].
Afterwards, the script follows the tracing loop back to module
top.u1, crossing the signal top.u1.i.P[2]. It repeats this
behavior until duplicated signals are found.

Apparently, the signals top.CLK2 and top.input2[2] are connected
by the SV interface port. Therefore, the NPI library APIs can report
the connected results easily.

Limitations
This enhancement only focuses on the SV interface port. The
npiRefObj of the reference port is not supported.

The import/export task functions in modports are not supported.

For the use method of npiRefObj, the result is NULL if the actual
is not a signal.

Unified Verdi Debug Platform for Interactive and Post-
Simulation Debug

To debug a simulation failure in a design and bring up the desired
debugger GUI, you need to remember and explore different options,
resulting in a lot of time being spent on setting up the debugging
tools rather than on the real debugging. Additionally, you need to
manually configure Verdi to perform interactive simulation debugging
in Verdi with VCS. You also need to manually load the design into
Verdi to perform post-simulation debugging.

Verification Compiler Platform maximizes debug productivity by
providing a unified debug platform based on Verdi that is tightly
integrated with VCS. The unified debug platform provides ease of use
and ease of migration, and helps avoid the time spent on redundant
setups.

After the Verdi Knowledge Database (KDB) is generated using
Unified Compiler, the Unified Debug solution allows you to invoke
Verdi with KDB in a single step for the following debug modes:

Interactive Simulation Debug Mode
Post-Simulation Debug Mode
The following is the prerequisite to perform the interactive simulation
debugging using the Unified Debug solution:

Generate the Verdi KDB using VC Unified Compiler. For more
information, see Unified Compile Front End.
The following are the prerequisites to perform the post-simulation
debugging using the Unified Debug solution:

Generate the Verdi KDB using VC Unified Compiler. For more
information, see Unified Compile Front End.
Specify the -debug_access+<option> compile-time option on
the VCS command line. This option automatically picks up the
Novas tab file and the Novas PLI file, so there is no need to pass
these files explicitly during compilation. For more information on
this option, see the VCS documentation.
Note: You can specify the -debug_access+all option to enable
the complete set of debug capabilities.

Enable the FSDB file dumping using the dumping tasks present
in the source file or at runtime using one of the following
commands from the UCLI command line:
dump -file <filename>.fsdb
fsdbDumpvars
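If you prefer to enable dumping from the source file rather than from UCLI, the FSDB dumping tasks can be placed in an initial block. The following is a minimal sketch; the file name, scope, and arguments are illustrative examples, not values mandated by this guide:

```systemverilog
// Hypothetical testbench fragment: dump all signals below scope `top`
// into novas.fsdb from time 0. $fsdbDumpfile/$fsdbDumpvars are the
// source-level counterparts of the UCLI dumping commands shown above.
module top;
  // ... design instances ...
  initial begin
    $fsdbDumpfile("novas.fsdb"); // FSDB output file (example name)
    $fsdbDumpvars(0, top);       // depth 0 = all hierarchy levels under top
  end
endmodule
```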

Interactive Simulation Debug Mode

To perform interactive simulation debugging in Verdi without other
configurations, you can invoke Verdi with KDB through the simulator
command-line option.

This section consists of the following subsections:

Use Model
Rebuilding and Restarting Interactive Simulation Debug in Verdi

Use Model
When executing the simv simulator executable, perform one of the
following steps to invoke Verdi or DVE within the interactive
simulation debug mode:

Add the -gui/-verdi/-gui=verdi/-gui=dve option to specify
Verdi or DVE as the VC debug tool.
For example,

// invoke Verdi
%> simv <simv_options> -verdi [-verdi_opts <verdi_options>]
%> simv <simv_options> -gui=verdi [-verdi_opts <verdi_options>]

// invoke DVE
%> simv <simv_options> -gui=dve [-dve_opt <dve_options>]

Set the SNPS_SIM_DEFAULT_GUI environment variable to verdi/dve
to specify Verdi or DVE as the VC debug tool. Verdi is the
default debug tool. For example,

// invoke Verdi
%> setenv SNPS_SIM_DEFAULT_GUI verdi
%> simv <simv_options> -gui [-verdi_opts <verdi_options>]

// invoke DVE
%> setenv SNPS_SIM_DEFAULT_GUI dve
%> simv <simv_options> -gui [-dve_opt <dve_options>]

Key Points to Note

Use the -verdi_opts and -dve_opt options to specify other
Verdi-specific and DVE-specific options.

You can perform interactive simulation from a directory that is
different from the compilation directory using the following
command:
<path of compilation directory>/<simv executable> -verdi
-i -simflow -simBin <simv_path/simv>

UVM Interactive Debug in Verdi is enabled by default while using
the Unified Debug solution.
If a design includes SystemC and default.ridb is not available
under the simv.daidir/ directory, Verdi generates it
automatically.
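Putting the prerequisites and the invocation together, a typical interactive-debug session might look like the following sketch. It assumes the three-step flow and the option names described above; the file and module names are placeholders:

```shell
# Hypothetical end-to-end interactive debug flow; names are placeholders.
vlogan -kdb design.v tb.sv          # analyze sources into the Verdi KDB
vcs -kdb -debug_access+all tb_top   # elaborate with full debug access
./simv -verdi                       # run the simulation with Verdi attached
```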

Rebuilding and Restarting Interactive Simulation Debug
in Verdi
Starting with the J-2014.12-SP1 release, support for rebuilding and
restarting interactive simulation in Verdi is available. You can use the
Simulation > Rebuild and Restart command to configure the
settings in the invoked Rebuild and Restart form to enable the
following features:

Rebuilding KDB and the simv executable.
Loading KDB into Verdi after a design is rebuilt.
Restarting a simulation with specified runtime options after a
design is rebuilt.

Post-Simulation Debug Mode

To perform post-simulation debugging, the KDB and the
synopsys_sim.setup file information can be automatically loaded
into Verdi through command-line options. You do not need to
manually specify the compiled design. VCS and Verdi have the same
information from the synopsys_sim.setup file.

This section consists of the following subsections:

Use Model
Limitations

Use Model
To automatically load the KDB compiled by Unified Compile, use the
following Verdi command-line options:

-simflow
Enables Verdi and its utilities to use the library mapping from the
synopsys_sim.setup file and also to import a design from KDB
library paths.

-simBin <simv_path/simv>
Specifies the path of the simv executable. This ensures that VCS
and Verdi have the same data from the synopsys_sim.setup
file.

For example,

%> verdi -simflow -simBin <simv_path/simv>

//import the FSDB file into Verdi

%> verdi -simflow -simBin <simv_path/simv> -ssf novas.fsdb

After specifying the path of simv, you can directly start the Verdi
Interactive Simulation Debug mode using the Tools > Run
Simulation menu command in Verdi nTrace.
If a design contains SystemC and the default.ridb file exists
in the simv.daidir/ directory, the default.ridb file is also
loaded into the KDB for SystemC debugging.
When the -simflow and -simBin options are used together, all
other options related to importing KDB are ignored.
If you are trying to perform post-simulation debug from a directory
different than the compilation directory, you must specify the
absolute physical path mapping in the synopsys_sim.setup
file.
-simdir <path of compilation directory> -top <top
module name>
Specifies the top-level module name to load a design in Verdi. You
can use these options if you do not want to load the design from
the real design top. For more information, see Unified Compile
Front End .
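Combining the prerequisites and options above, a typical post-simulation flow might look like the following sketch. The file and module names are placeholders, and the FSDB file is assumed to be dumped during the simv run:

```shell
# Hypothetical post-simulation debug flow; names are placeholders.
vlogan -kdb design.v tb.sv                      # compile into the Verdi KDB
vcs -kdb -debug_access+all tb_top               # elaborate with debug access
./simv                                          # run; dumps novas.fsdb
verdi -simflow -simBin ./simv -ssf novas.fsdb   # debug with KDB + FSDB
```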

Limitations
The following are the limitations when performing power debug with
UPF:

A UPF file needs to be manually imported into Verdi for interactive
and post-simulation debug:
- In the interactive-simulation debug, add the -upf <UPF
file> option to import your UPF file.

For example,

%> vlogan -kdb <compile_options> <source files>
%> vcs -kdb -upf <UPF file>
%> simv -gui -upf <UPF file>

- In the post-simulation debug, add the -upf <UPF file> option
to import your UPF file.
For example,

%> vlogan -kdb <compile_options> <source files>
%> vcs -kdb -upf <UPF file>
%> simv
%> verdi -ssf novas.fsdb -simflow -simBin <simv_path/simv> -upf <UPF file>

Saving/Loading Verdi Elaboration DB Library to/from
Disk

Verdi used to import the unelaborated KDB from the disk and perform
design elaboration during loading when the GUI is invoked. Now,
Verdi offers an elaboration process that loads the design using the
Unified Compile Front-End flow. Verdi elaborates the design and
saves the elaboration database to the disk at batch time. You can
then reimport the elabDB file from the disk, which speeds up invoking
the GUI.

This section consists of the following subsections:

Use Model
Limitations

Use Model

The following sections provide a detailed description of creating/
generating the elabDB flow:

Interactive Simulation Debug Flow
Post-Simulation Debug Flow

Interactive Simulation Debug Flow

You can perform the interactive simulation debugging while creating/
generating the elabDB flow. The following sections describe how
you can elaborate your design and load the generated elabDB into
Verdi with the interactive simulation debug mode enabled:

Generating Verdi Elaboration Database Using VCS
Loading Verdi Elaboration Database Into Verdi

Generating Verdi Elaboration Database Using VCS

The flow for creating/generating elabDB is supported in both the VCS
two-step and three-step flows with Unified Compile Front End;
specify the -kdb=elab option on the VCS command line to
generate the elabDB of your design. For example,

// VCS two-step flow
%> vcs -kdb=elab <compile_options> <source files> <elaborate_options> -lca
// VCS three-step flow
%> vlogan -kdb <vlogan options> <source files>
%> vhdlan -kdb <vhdlan options> <source files>
%> vcs -kdb=elab <top_name> -lca

The generated KDB and elabDB files are saved in the work.lib++
and kdb.elab++ directories. The work.lib++ directory is saved
in the current working directory and the kdb.elab++ directory is
saved in the simv.daidir directory.

Loading Verdi Elaboration Database Into Verdi

When the elabDB is generated as described in the Interactive
Simulation Debug Flow section and the simv simulator executable
is generated, you can invoke Verdi in the interactive simulation
debug mode using the -gui/-verdi/-gui=verdi options.

For example,

// Example1:
%> simv <simv_options> -verdi [-verdi_opts <verdi_options>]
// Example2:
%> simv <simv_options> -gui=verdi [-verdi_opts <verdi_options>]

The elabDB file is automatically imported into the invoked Verdi in
the interactive simulation debug mode.

Post-Simulation Debug Flow

You can perform post-simulation debugging while creating/
generating the elabDB flow. The following sections describe how
you can elaborate your design and load the generated elabDB into
Verdi with or without the FSDB file:

Generating Verdi Elaboration Database With Unified Compile
Front End
Generating elabDB Using VCS Elaboration Command
Generating elabDB Using the elabcom Utility
Loading Verdi Elaboration Database Into Verdi

Generating Verdi Elaboration Database With Unified Compile
Front End
Creating/generating the elabDB flow is supported in both the VCS
two-step and three-step flows with Unified Compile Front End. The
following are the methods to generate the elabDB:

You can generate elabDB during VCS elaboration (see the
Generating elabDB Using VCS Elaboration Command section).
You can generate elabDB independently using the Verdi utility,
elabcom (see the Generating elabDB Using the elabcom Utility
section).

Generating elabDB Using VCS Elaboration Command

The -kdb=elab option is provided on the VCS command line to
generate the KDB and elabDB of your design.

For example,

// VCS two-step flow
%> vcs -kdb=elab <compile_options> <source files> <elaborate_options> -lca

// VCS three-step flow
%> vlogan -kdb <vlogan options> <source files>
%> vhdlan -kdb <vhdlan options> <source files>
%> vcs -kdb=elab <top_name> -lca

The generated KDB and elabDB are saved as work.lib++ and
kdb.elab++ directories. The work.lib++ directory is saved in
the current working directory and the kdb.elab++ directory is
saved under the simv.daidir directory.

Generating elabDB Using the elabcom Utility

The elabcom utility is provided to generate the elabDB of your design
after your KDB is generated. The elabcom utility works in both the
VCS-UFE compile flow and the Verdi vericom and vhdlcom flows. The
options for the Verdi - Importing Design From Library feature are
available. For example,

//After the KDB is generated
%> elabcom -lib work -top top

The generated elabDB is saved as kdb.elab++/ in the current
working directory, by default.

You can also specify the path name of the elabDB using the -elab
option. For example,

%> elabcom -lib work -top top -elab ./myelab/mydesign

The generated elabDB is saved in the following file:

./myelab/mydesign.elab++.

Using the -saveLevel option increases the DB size since it copies
all the source files. You can then remove the KDB (work.lib++) to
free some disk space.

For example, the current working directory is /AAA/BBB/CCC/

//Example 1: To display the result without the -saveLevel option
when creating elabDB using elabcom
%> elabcom -lib work

//Example 2: To display the result with the -saveLevel option
when creating elabDB using elabcom
%> elabcom -saveLevel -lib work

Both the elabcom -lib work and elabcom -lib work -saveLevel
commands create the /AAA/BBB/CCC/kdb.elab++ and
/AAA/BBB/CCC/kdb.elab++/work.lib++ files.

The difference between executing the command with or without the
-saveLevel option is as follows:

Without the -saveLevel option: All .tdc files at /AAA/BBB/
CCC/kdb.elab++/work.lib++ are symbolic links, not
physical files.
With the -saveLevel option: All .tdc files at /AAA/BBB/CCC/
kdb.elab++/work.lib++ are physical files.
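The distinction can be checked with standard file tools. The sketch below does not run elabcom; it fabricates stand-in .tdc files only to show how symbolic links (the default flow) differ from physical copies (the -saveLevel flow):

```shell
# Illustrative only: the elabDB contents here are fabricated stand-ins.
mkdir -p demo/kdb.elab++/work.lib++
touch demo/src.tdc
ln -sf ../../src.tdc demo/kdb.elab++/work.lib++/linked.tdc  # default flow
cp demo/src.tdc demo/kdb.elab++/work.lib++/copied.tdc       # -saveLevel flow
# List only the symbolic-link .tdc files:
find demo/kdb.elab++/work.lib++ -name '*.tdc' -type l
# -> demo/kdb.elab++/work.lib++/linked.tdc
```

Dropping `-type l` would list both files, which is what you would see after an elabcom run with -saveLevel.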

Loading Verdi Elaboration Database Into Verdi

When the elabDB is generated, you can import it into Verdi using the
-elab <elabDB path without the .elab++ postfix> option.
For example,

//Example 1: load the ./kdb.elab++ elabDB
%> verdi -elab kdb

//Example 2: load the ./myelab/mydesign.elab++ elabDB
%> verdi -elab ./myelab/mydesign -ssf novas.fsdb

Limitations

The following are the limitations with this feature:

Both the Verdi Verilog and VHDL KDB library formats are changed
with this release. Backward compatibility for this database version
is not available. You need to recompile your designs to create a
new Verdi KDB library.
The following runtime options are not supported:
- -dynaconfig
- -impConf
The netlistcom Verdi utility is not supported.
VHDL +vtop usage is not supported.
Loading Verdi elabDB in the Import GUI form is not supported.

Attaching a Running Simulation in Verdi

Simulations can be executed in batch mode or by executing a
command on the command-line interface. Sometimes, you may
need to debug a running simulation interactively while the simulation
is executing. Also, if a running simulation encounters an error or an
assertion, the simulation gets terminated. To debug such a scenario,
you need to invoke the Verdi GUI and rerun the simulation from time 0
to reach the target time where the error was encountered.

You can independently invoke Verdi, attach a simulation process
ID to it, and start to visualize and debug the design as if it was
started in GUI mode. Verdi provides support to dynamically attach to
a VCS simulation process that was started outside of Verdi. This
gives you the flexibility to debug a running simulation in Verdi as
follows:

You can start the interactive-simulation debugging in Verdi at the
runtime simulation stage instead of starting it at the beginning of
the simulation.
You do not need to restart the simulation with Verdi and navigate to
the same point manually to debug.
After seamlessly setting up the connection channel with the currently
running simulation, you can attach or detach Verdi to a
simulation that is in progress to start the interactive simulation
debugging session. Additionally, if the simulation is running in UCLI
mode, you can easily invoke Verdi and connect to the currently
running simulation to debug in Verdi. You can also switch back to the
UCLI mode from the GUI mode and continue debugging in UCLI,
thus providing more flexibility and a better debug experience.

This section consists of the following subsections:

Prerequisites
Use Model
Limitations

Prerequisites

The following are the prerequisites to enable this feature:

The simulation must start with the -ucli2Proc VCS runtime
option.
The -debug_access/-debug_pp/-debug/-debug_all VCS
elaboration options must be added to enable the VCS debugging
feature.
This feature supports attaching to a simulation on the same
machine for the same user. If the simulation is running on a farm,
you can use the puts [exec hostname] Tcl command to fetch the
hostname in the log file and start Verdi on the same machine to
attach the simulation.

Use Model

Two flows are provided to debug the running simulation in Verdi at
the current running stage:

Attach Verdi to a simulation process in Verdi (see the Attaching
to Simulation Process in Verdi section).
Start Verdi in UCLI mode (see the Invoking Verdi in the UCLI
Mode section).
After attaching, you can also detach the simulation from Verdi (see
the Detaching Verdi From Simulation Process section).

Attaching to Simulation Process in Verdi

Perform the following steps to attach Verdi to a running simulation
with a process ID:

1. Use Simulation > Attach to Simulation (see Figure 6-16) to
invoke the Attach to Simulation dialog box.

Figure 6-16 Attach to Simulation Command

2. In the Attach to Simulation dialog box, all the simulation
processes for the current user are listed. From the list view, select
a single simulation process and click Attach. After you select a
simv process to attach from the dialog box, press Enter in the
UCLI terminal to accept the attach.
Figure 6-17 Attach to Simulation Dialog Box

The Attach to Simulation dialog box also provides the following
features:

- Shows all processes in the dialog box when you enable the
Show all simulation processes option.
- Filters all the column texts when you specify a string in the text
filter field.
- Refreshes up-to-date processes in the current working machine
when you click the Refresh button.
3. When the selected simulation cannot be attached due to one of
the following situations, a warning message is displayed:
- When the selected process is not a valid simulation process
(not the correct version, or a suspended process)
- When the simulation is running in the GUI mode (either Verdi
or DVE)
- When the simulation is running in Specman
Figure 6-18 Warning Message for Unable to Attach Selected Process

4. If the selected simulation can be connected, the simulation is
paused and the connection channel with the simulation is set up.
You can then use Verdi to perform the interactive simulation
debugging, similar to a simulation started within Verdi.

Figure 6-19 Enabled Verdi Interactive Simulation Debug Mode

After the connection between the simulation and Verdi is built, all
frames in the Verdi Interactive Simulation Debug mode show the
corresponding information for the current simulation state.

Key Points to Note

If the simulation is compiled by Unified Compile (with the -kdb
option), the corresponding design is loaded into Verdi.
If the simulation is not compiled by Unified Compile, a warning
message is displayed to prompt you to load your design manually.
You must load the design in Verdi before attaching Verdi to a
simulation.
The FSDB file that the simulation is using is used in Verdi. If there
is no FSDB file used before attaching, the inter.fsdb file is
generated and used for FSDB dumping.

If the -l option is used to specify the log file in UCLI mode after
Verdi attaches to simv, the same log file is used to debug in Verdi.
If no log file is specified, the verdiLog/sim.log file is not
generated.
The following features are supported in Verdi:
- Restore State
- Checkpoint and Rewind
- Breakpoints
Verdi can only attach to one simulation process at a time.

Invoking Verdi in the UCLI Mode

If the simulation is running in the UCLI mode, you can use the
start_verdi UCLI command to launch Verdi. The -verdi_opts
option is also provided to pass runtime options to Verdi.

Perform the following steps to launch Verdi using the UCLI command
line:

1. When the simulation stops in the UCLI command line, use the
start_verdi UCLI command to invoke Verdi. You can also use
the -verdi_opts option to pass runtime options to Verdi. For
example,
# invoke Verdi
ucli% start_verdi
# invoke Verdi with Verdi options
ucli% start_verdi -verdi_opts <Verdi Options>

2. After Verdi is launched, you can perform the interactive simulation
debugging in Verdi. In UCLI command mode, the UCLI prompt is
blocked until Verdi detaches from the simulation.

Figure 6-20 Run Verdi in UCLI Command Mode

You can detach the simulation in Verdi and return back to the UCLI
command line mode (see the Detaching Verdi From Simulation
Process section for the usage).

Invoking Verdi in the Tcl File

You can add the start_verdi UCLI command in a Tcl file. Verdi is
invoked and the simulation control is transferred to the user for
interactive simulation debugging. If the simulation is already
attached by Verdi or is running in Verdi or DVE, the start_verdi
command is skipped and a warning message is displayed. The
following are several examples of invoking Verdi in Tcl mode.

A run.tcl Tcl script contains run 1000;step;start_verdi:
# Execute the simulation in command line
%> simv -ucli -ucli2Proc -i run.tcl

# While the simulation executes the start_verdi command,
# Verdi is launched and the interactive simulation debug
# mode is enabled

Launch Verdi if the breakpoint hits:
# The Tcl script contains the following command
stop -file design.v -line 111 -command {start_verdi};run

Set configurations in UCLI for when the specified simulation
stage or error is met. Invoking Verdi on some simulation events is
supported.

Detaching Verdi From Simulation Process

You can also detach the simulation from Verdi. After detaching, the
control of the simulation returns to the original simulation controller
and continues to run or stop in the UCLI mode.

Perform one of the following methods to detach a simulation:

Detach a simulation with the Simulation > Detach Simulation
command:
Use the Simulation > Detach Simulation command to detach
the simulation directly without opening any form (see Figure 6-21).
Figure 6-21 Detach Simulation Command

Detach a simulation using the Attach to Simulation form:
1. Use the Simulation > Attach to Simulation command to invoke
the Attach to Simulation form (see Figure 6-17).
2. In the Attach to Simulation form, the attached process is marked
with the attached icon. Select the connected process and click the
Detach button (see Figure 6-22).
Figure 6-22 Detach to Simulation Form

After detaching, you can attach another simulation in this form.

Limitations

The following are the limitations with this feature:

If the simulation is not compiled with the -kdb option, Verdi cannot
load the design hierarchy automatically.
While attaching a simulation process in Verdi, after you select a
simv process to attach from the dialog box, you need to press
Enter in the UCLI terminal to accept the attach.
The following features are not supported in Verdi after a simulation
is attached:

- Restart
- Restore Session

Reverse Interactive Debug

The interactive simulation debug mode does not allow you to debug
backwards in time. When a simulation fails and issues an error, you
need to rerun the simulation to debug the problem by setting
checkpoints at different time intervals. While debugging a failure, you
cannot save the debug scenario and restore it later. If you need to
perform what-if analysis or explore a past state, you cannot go back
in time as the future is destroyed. Also, there is no provision to pass
the debug session to another user (in a persistent state form).

The new reverse interactive debug feature allows you to run
interactive simulation backwards in time. You can start debugging at
the symptom of the problem and systematically go back in time along
the bug propagation cause-effect chain. The divide-and-conquer
debugging method is much more efficient with reverse debugging.
For example, if the simulation is stopped before some function call
and you are not sure whether the function returns the correct value
or not, you can step over this function call and check the returned
result. If the result is wrong, you can perform the reverse next
command, step into the function, and investigate what causes the
wrong result. Without reverse debugging, this requires a very costly
restart of debugging and playing with breakpoints to reach the same
simulation state.

The following are the new VCS simulation control commands for
reverse executing the simulation in Verdi:

- Run/Continue Reverse
- Next Reverse
- Step Reverse
- Step Out Reverse
- Step in Thread Reverse
- Step in Testbench Reverse
- Next in Thread Reverse
You can also easily reverse/advance the simulation to the previous/
next value assignment of a signal or a variable by invoking a
command for the selected one. For a DUT design, you can achieve
this by dumping all signals and using the traced information to figure
out where to go. For a testbench, this approach does not work. Also,
it should work without all signals traced. You can use this new feature
to go to the desired assignment of the object.

Furthermore, you can keep the future when going back in simulation
time during reverse debugging (for example, while reversing a
simulation, the time and the information generated from an active
point, Point A, back to a previous point, Point B, is termed the future).
The following are the benefits of keeping the future:

Better performance during the rewinding operation and reverse
debugging.

During the debugging, you can bookmark interesting points using
checkpoints and later quickly return to them, even after reverse
executing to a time before these checkpoints. The checkpoints (in
the future) are preserved, and you can easily go to a recorded
future checkpoint from the past. For example, consider that there
are four checkpoints: A, B, C, and D. If you remove checkpoint C,
the values at checkpoint C are preserved. This allows you to go
back to checkpoint C at a later point during the debugging process.
The following sections provide a detailed description of these
features:

Prerequisite
Use Model
Using Reverse Simulation Control Commands
Going to Previous/Next Value Assignment

Prerequisite

To use these Reverse Debug features in the Interactive Simulation
Debug mode, the -debug_access+all+reverse VCS
elaboration option must be used before the Interactive Simulation
Debug mode is enabled.

Use Model

The following sections describe how to enable the Reverse Debug
features in detail:

Before Invoking Verdi Interactive Simulation Debug Mode
Enable the Verdi Reverse Interactive Simulation Debug Mode

Before Invoking Verdi Interactive Simulation Debug
Mode
You need to generate the simv simulator executable with the
-debug_access+all+reverse debugging option on the VCS
command line. For example:

% vcs -sverilog example.sv -debug_access+all+reverse -kdb
-lca <compilation/elaboration options>

Enable the Verdi Reverse Interactive Simulation Debug
Mode
You can invoke the Verdi Interactive Simulation Debug mode by
using the following command:

% simv -verdi

Refer to the VCS manual for details on the simulation options, and
refer to the Verdi manual for details on how to enable the Verdi
Interactive Simulation Debug mode.

Perform the following steps to enable the Reverse Debug features
in the Interactive Simulation Debug mode:

1. Use the Tools > Preferences command to invoke the
Preferences form and select the Interactive Debug > Reverse
Debug page.
2. Enable the Reverse Debug option.
Figure 6-23 Enable Reverse Debug

3. Click Apply or OK in the Preferences form. As shown in
Figure 6-24, the new toolbar for the reverse simulation control
commands appears in the Verdi main window.
Figure 6-24 Reverse Simulation Control Commands Toolbar

Using Reverse Simulation Control Commands

The reverse simulation control commands provide the ability to move
to an earlier simulation state from the current interactive simulation
debugging. All commands bring the simulation back in time to a
completely functional execution state with full visibility. The
information in the Interactive Simulation Debug frames (for example,
the Local and Object tabs) is reversed and updated accordingly.

As shown in Figure 6-25 and Figure 6-26, you can use the
commands by clicking the command icons or selecting them from
the Simulation command menu to direct the simulator how to
reverse-execute the program.

Figure 6-25 Simulation Control Commands in Toolbar

Figure 6-26 Simulation Control Commands in Simulation Command Menu

For example, you can reverse the simulation to the previously
executed statement using the Step Reverse option (by clicking the
Step Reverse icon or selecting the Simulation > Step Reverse
option). The Local and Watch tabs are refreshed accordingly.

This section consists of the following subsections:

Run/Continue Reverse Simulation Control Command
Step and Next Reverse Simulation Control Commands
New UCLI Commands

Run/Continue Reverse Simulation Control Command
You can specify an amount of time and use the Run/Continue
Reverse command (by clicking the icon or selecting the
Simulation > Run/Continue Reverse command) to go back in
time by the specified amount, using the same time entry field as the
Run/Continue command (see Figure 6-27). All the current
breakpoints are respected and the simulation stops at the most
recent breakpoint hit (counted back from the current execution
state).

Figure 6-27 Run/Continue Reverse the Simulation

When the time entry field is empty and no breakpoints are hit, the
simulation is rewound back to the start. If the Reverse Debug feature
is enabled some time after the simulation starts, the Run/Continue
Reverse command (as well as the other reverse commands) stops
at the point where the feature was enabled. In this case, a
confirmation dialog box is displayed with the Rewind to first
checkpoint at time <time>? message.

The runtime of the Run/Continue Reverse command depends on
how far back in time it goes. If executing the command takes a long
time, you can interrupt it by clicking the Stop icon in the toolbar.

Step and Next Reverse Simulation Control Commands
The following reverse commands are available to reverse the
simulation by clicking the command icons or selecting them from the
Simulation command menu:

Next Reverse: Goes back one SystemVerilog line, stepping over
task/function calls. It might eventually stop on a breakpoint inside
a task/function called at the previous line.
Step Reverse: Goes back one SystemVerilog source code line.
Step Out Reverse: Goes back to the source code line where the
current function was called.
Next in Thread Reverse: Goes back one source code line in the
current thread, stepping over task/function calls.
Step in Thread Reverse: Goes back one source code line in the
current thread.
Step in Testbench Reverse: Goes back one source code line in
the testbench code.
In the source code frame, you can use the Reverse Run to Source
Line right-click command to reverse the simulation to the selected
source line.

Figure 6-28 Reverse Run to Source Line Command

New UCLI Commands


The new -reverse option is added to the step, next, and run
UCLI commands to execute the corresponding reverse execution
commands, which are available only in the Verdi Interactive
Simulation Debug mode.

Table 6-6 lists the corresponding Verdi simulation control
commands.

Table 6-6 New UCLI Commands and Corresponding Verdi Command

New UCLI Command                          Corresponding Simulation Control
                                          Command in Verdi
step reverse                              Step Reverse
step reverse thread                       Step in Thread Reverse
step reverse tb                           Step in Testbench Reverse
next reverse                              Next Reverse
next reverse thread                       Next in Thread Reverse
next reverse end                          Step Out Reverse
run reverse                               Run/Continue Reverse
run reverse [time [unit]]                 Run/Continue Reverse with the
                                          specified time
run reverse -absolute | -relative         Run/Continue Reverse with the
<time>                                    specified absolute or relative time
run reverse -line <line#> [-file          Run/Continue Reverse to the
<file>] [-instance <nid>]                 specified line in the specified
[-thread <tid>]                           source code file
For example, execute the following command in the Verdi
Interactive_Console frame:

SimCmd> run reverse 20ns

The simulation goes back 20 ns and the interactive debugging
frames are updated accordingly.

Figure 6-29 Example of UCLI Command Usage

Going to Previous/Next Value Assignment

When you want to find the previous assignment for a signal or
variable, select the object in the Watch, Local, or source code
frames and invoke the Go to Previous Value Assignment
right-click option (or click the corresponding icon) to reverse the
simulation to the assignment point.

For example, in Figure 6-30, at 280 ns simulation time, the value of
the read_write variable changes from Write to READ. You can
also see the cursor pointing to line 152 of the
ubus_master_monitor.sv file in the source code frame, which is
the current debug position.

Figure 6-30 Before Go to Previous Value Assignment

Select the read_write variable in the Watch frame, then invoke
the Go to Previous Value Assignment right-click command (refer
to Figure 6-31) or click the icon.

Figure 6-31 Invoke Go to Previous Value Assignment

In Figure 6-32, you can see that the simulation time is reversed to
170 ns, where the previous assignment of the read_write
variable happened. The value of the read_write variable is
changed to Write. You can also see the debug cursor pointing to
line 186 of the ubus_slave_monitor.sv file in the source code
frame, which is the current debug position.

Figure 6-32 After Go to Previous Value Assignment

You can also use the Go to Next Value Assignment right-click
command (or click the command icon) to find the next assignment
for the selected signal or variable, which advances the simulation to
the assignment point.

Limitations

The following are the limitations of the Reverse Debug feature:

VCS design-level parallelism (DLP) is not supported.
SystemC Debug is not available when Reverse Debug is activated.
The following actions of PLI code are not supported:
- IPC communication using sockets, pipes, or shared memory
- Multithreading
- Performing file seek operations and then writing at a new
position (that is, it is assumed that the simulation only appends
data to the output files)
Simulation with Specman is not supported.
Analog-digital co-simulation (using NanoSim) is not supported.
The Reverse Debug commands are not supported for VHDL
source code. For example, using the Step Reverse command
moves to the previous Verilog source code line, ignoring all VHDL
code in between.
When the design is compiled with the -simprofile switch for
simulation profiling, reverse debug is not possible.

The following is the limitation of the Go to Previous/Next Value
Assignment feature:

Several object types do not allow setting a value-change
breakpoint. When the breakpoint cannot be set, an error message
is generated and the command has no effect.

Unified UCLI Command for FSDB Dumping

The UCLI dump command is enhanced to dump the Fast Signal
Database (FSDB) file by default. You can now use the UCLI dump
command to dump the FSDB file instead of calling the FSDB system
tasks ($fsdbDumpfile/$fsdbDumpvars) through the UCLI call
command or using the FSDB commands (fsdbDumpfile/
fsdbDumpvars) at the UCLI command prompt.

You can also use the dump command to open multiple FSDB dump
files simultaneously and manage them individually.

This section consists of the following subsections:

Default Dump Type
Default Dump File
Use Model
Enhanced UCLI Options for FSDB Dumping
New UCLI Options for FSDB Dumping
Limitations

Default Dump Type

In the VC mode, the default GUI is Verdi and the default dump type
is FSDB. In the VCS mode, the default GUI is DVE and the default
dump type is VPD.

You can use the following environment variable to control both the
default dump type (FSDB or VPD) and the default GUI (Verdi or
DVE) in the VCS mode and the VC mode:

% setenv SNPS_SIM_DEFAULT_GUI <verdi or dve>

In the VCS mode, use the following setting to make Verdi the default
GUI and FSDB the default dump type:

% setenv SNPS_SIM_DEFAULT_GUI verdi

Similarly, in the VC mode, use the following setting to make DVE the
default GUI and VPD the default dump type:

% setenv SNPS_SIM_DEFAULT_GUI dve

Default Dump File

The default dump file for FSDB is the inter.fsdb file.

Use Model

The following steps describe the use model for FSDB dumping:

1. Set the NOVAS_HOME variable as follows:

% setenv NOVAS_HOME <novas_path>

2. Compile your design with the -debug_access option, as
follows:

% vcs -debug_access <file_name>

or

Compile your design with a debug option (that is, -debug,
-debug_pp, or -debug_all), as follows:

% vcs <debug_option> -p $NOVAS_HOME/share/PLI/VCS/LINUX/
novas.tab $NOVAS_HOME/share/PLI/VCS/LINUX/pli.a test.v

Note:
If you use the -debug, -debug_pp, or -debug_all options,
you must specify the novas.tab and pli.a files on the VCS
command line. The -debug_access option automatically sets
the novas.tab and pli.a files.

Key Points to Note

If a single dump file is open, you are not required to specify the
-fid argument with the dump commands that follow the
dump -file command.

ucli% dump -file test.fsdb (This command returns FSDB0)
ucli% dump -add / -depth 0

If multiple dump files are open, you must specify the -fid
argument with the dump commands that follow the second
dump -file command.

ucli% dump -file test1.fsdb (This command returns FSDB1)
ucli% dump -add / -depth 0 -fid FSDB1 (This command
dumps into the test1.fsdb FSDB file)

During simulation, if the number of open dump files returns to one,
you can exclude the -fid argument. An error message is issued
if you specify a dump command without the -fid argument when
multiple dump files are open.
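The key points above can be illustrated with a single hypothetical UCLI session. This is a sketch only: test.fsdb and test1.fsdb are placeholder file names, and the returned IDs follow the FSDB0/FSDB1 numbering shown in the examples.

```shell
ucli% dump -file test.fsdb              # returns FSDB0; one file open
ucli% dump -add / -depth 0              # -fid can be omitted (single open file)
ucli% dump -file test1.fsdb             # returns FSDB1; two files now open
ucli% dump -add / -depth 0 -fid FSDB1   # -fid is now mandatory
ucli% dump -close                       # closes all open dump files
```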

Enhanced UCLI Options for FSDB Dumping

The following are the enhanced UCLI dump options:

dump -file
dump -add
dump -close
dump -deltaCycle
dump -flush
dump -switch
dump -power
dump -powerstate

dump -file
Opens a specific type of file for dumping.

Syntax:

dump -file <filename> -type <file_type>

where <file_type> is FSDB.

This command returns a file ID, <fid>, which is a unique string that
identifies the opened file.

For example:

ucli% dump -file test.fsdb -type FSDB

This command returns FSDB0.

dump -add
Adds design objects to the dump file.

Syntax:

dump -add <list_of_nids> [-fid <fid>] -depth <levels>
[-aggregates] [-ports|-in|-out|-inout]
[-filter=<filter_list>] [-power]

This command returns an integer value that increments after each
call.

Note:
You must specify the -fid argument if multiple dump files are
open; otherwise, an error message is issued.

For a dump file of the FSDB type:

A warning message is issued if the port direction is specified with
the -filter argument.
The -aggregates argument dumps both SVA and MDA signals.
This argument combines the functionality of the $fsdbDumpSVA
and $fsdbDumpMDA system tasks.
If no dump file is opened using dump -file, an FSDB file is
opened and its file ID is returned.

For example,

ucli% dump -add top.a -aggregates -fid FSDB0

The dump -add command dumps the signals in the default dump
file. For example,

ucli% dump -file a.dump

Creates a dump file based on the default dump type.

ucli% dump -add / -aggregates

Dumps signals in the default dump file. The default dump file for
FSDB is inter.fsdb.

Support for the $fsdbDumpvars Options

The dump -add command supports the $fsdbDumpvars system
task options using the -fsdb_opt argument, as shown in the
following command:

dump -add <object> -fsdb_opt <+option> [-fid <fid>]

The -fid argument must specify a valid FSDB file ID; otherwise, an
error message is issued.

For example,

dump -add . -fsdb_opt +mda+packedmda+struct -fid FSDB0

Table 6-7 lists the options supported for the -fsdb_opt argument.
For more information on these options, see the Linking Novas Files
with Simulators and Enabling FSDB Dumping User Guide.

Table 6-7 Supported Options

Option                      Description
+mda                        Dumps memory and MDA signals in all scopes.
                            This does not apply to VHDL.
+packedmda                  Dumps packed signals.
+struct                     Dumps structs.
+skip_cell_instance=mode    Enables or disables cell dumping.
+strength                   Enables strength dumping.
+parameter                  Dumps parameters.
+power                      Dumps power-related signals.
+trace_process              Dumps VHDL processes.
+no_functions               Disables the dumping of functions.
+fsdb+<filename>            Specifies the dump file name. The default
                            name is novas.fsdb.
                            Note: This option is ignored if the file ID
                            is present.
+sva                        Dumps assertions.
+Reg_Only                   Dumps only reg-type signals.
+IO_Only                    Dumps only I/O port signals.
+by_file=<filename>         Specifies a file listing the objects to add.
+all                        Dumps memories, MDA signals, structs, unions,
                            power, and packed structs.

dump -close
Closes the opened dump files.

Syntax:

dump -close

You can use the dump -close command to close all opened dump
files. The FSDB API does not support the closing of specific open
FSDB files.

dump -deltaCycle
Enables or disables the delta cycle dumping.

Syntax:

dump -deltaCycle <on|off> [-fid <fid>]

For FSDB dump files, you must execute this command before
dumping starts.

Note:
You must specify the file ID if multiple dump files are open;
otherwise, an error message is issued.

dump -flush
Forces the contents of the value change buffer to be written to the
disk file.

Syntax:

dump -flush [-fid <fid>]

Here, <fid> specifies the file ID and follows these rules:

If the file ID is of an FSDB file, this option flushes the FSDB dump
file.
If the file ID is not specified and there is only one open file, this
option flushes the open dump file.

dump -switch
Closes the current file and opens a new file with the given name. The
new file retains the hierarchy of the closed file.

Syntax:

dump -switch <new_name> [-fid <fid>]

Note:
- You must specify the file ID if multiple dump files are open;
otherwise, an error message is issued.
- The new file inherits the file ID of the closed file.

dump -power
Globally enables or disables the dumping of the low power scopes
and nodes.

Syntax:

dump -power <on|off> [-fid <fid>]

For FSDB dumping, the dump -power on command uses the
$fsdbDumpvars +power system task. There is no corresponding
procedure to stop FSDB dumping; that is, you cannot stop the
dumping of power signals into the FSDB dump file after it has
started.

Note:
You must specify the file ID if multiple dump files are open;
otherwise, an error message is issued.

dump -powerstate
Globally enables or disables the dumping of the low power domain
state signals, PST signals, and PST supply signals.

Syntax:

dump -powerstate <on|off> [-fid <fid>]

For FSDB dumping, the dump -powerstate on command uses
the $fsdbDumpvars +power system task. There is no
corresponding procedure to stop FSDB dumping; that is, you cannot
stop the dumping of the power signals into the FSDB dump file after
it has started.

Note:
You must specify the file ID if multiple dump files are open;
otherwise, an error message is issued.

New UCLI Options for FSDB Dumping

The following are the new UCLI dump options that support FSDB
dumping:

dump -suppress_file
dump -suppress_instance
dump -enable
dump -disable
dump -glitch
dump -opened

dump -suppress_file
Specifies a file that lists the scopes that are not dumped into the
FSDB file.

Syntax:

dump -suppress_file <file_name>

This command returns a string.

Note:
- You must use this command before dumping a file. An error
message is issued if this command is specified after the
dump -add command.
- This command is supported only for the FSDB dump file and is
global to all FSDB files.

dump -suppress_instance
Specifies the list of instances that are not dumped into the FSDB file.

Syntax:

dump -suppress_instance <list_of_instances>

This command returns a string.

Note:
- You must use this command before dumping a file. An error
message is issued if this command is specified after the
dump -add command.
- This command is supported only for the FSDB dump file and is
global to all FSDB files.

dump -enable
Enables dumping again if it has been disabled.

Syntax:

dump -enable [-fid <fid>]

This command returns the state as on or off.

The functionality of the dump -enable command is similar to the
$fsdbDumpon system task.

Note:
- You must specify the file ID if multiple dump files are open;
otherwise, an error message is issued.
- This command takes precedence over the $fsdbDumpvars
system task.

dump -disable
Disables the dumping of all dumped signals.

Syntax:

dump -disable [-fid <fid>]

This command returns the state as on or off.

The functionality of the dump -disable command is similar to the
$fsdbDumpoff system task.

Note:
- You must specify the -fid argument if multiple dump files are
open; otherwise, an error message is issued.
- This command takes precedence over the $fsdbDumpvars
system task.

dump -glitch
Enables or disables the dumping of glitches.

Syntax:

dump -glitch <on|off> [-fid <fid>]

This command returns the state as on or off. By default, it is set
to off.

The functionality of the dump -glitch command is similar to the
$fsdbDumpon(+glitch) system task.

Note:
You must set the NOVAS_FSDB_ENV_MAX_GLITCH_NUM
environment variable to 0 to enable the dumping of glitches in the
FSDB file.

This command is supported only for FSDB dump files.

dump -opened
Displays all opened dump files and their file type.

Syntax:

dump -opened

The output format of this command is FID Name.

The following is a sample output when three dump files of different
types are open:

VPD0 inter.vpd
FSDB1 novas.fsdb
EVCD2 verilog.dump

Limitations

The following are the limitations of this feature:

The dump -close command does not work on a specified FSDB
file ID; you can only close all FSDB files.
The dump -power on and dump -powerstate on commands
use the $fsdbDumpvars +power system task for FSDB dumping,
with no corresponding procedure to stop the dumping. That is, you
cannot stop the dumping of the power signals into the FSDB dump
file after it has started.
FSDB dumping is not supported for the dump -power off and
dump -powerstate off commands.
The dump -enable and dump -disable commands do not
support time-unit arguments.

AMS-Debug Integrations

An engineer working on a mixed-signal verification flow for a
complex analog design typically dumps the analog nodes and the
digital design signals into separate databases. To debug such a
mixed-signal simulation environment, you have to switch between
two different databases in a debugger GUI, which is very time
consuming. This can have a significant impact on productivity and
schedules, especially for long-running simulations. It also forces you
to maintain internal scripts or manually change the individual analog
and digital dump commands whenever you replace a few analog
blocks with digital ones, or vice versa.

Verification Compiler Platform offers an integrated environment
where the analog nodes and the design database can be dumped
into a single FSDB file and then loaded into the Verdi GUI to view
the analog and digital values in sync with each other, thereby
improving usability and debug productivity.

The following AMS-Debug integration features are available with this
release:

Unified Dumping of Analog Signals in FSDB in VCS-CustomSim
Cosimulation Flow
Verdi Interactive Simulation Debugging With Analog Mixed-
Signal Designs

Unified Dumping of Analog Signals in FSDB in VCS-
CustomSim Cosimulation Flow

The UCLI dump command is enhanced to dump analog signals into
the FSDB file in the VCS-CustomSim cosimulation environment.

You can now use the -msv UCLI dump option to enable dumping of
the analog signals into the FSDB file.

With this enhancement, for an object specified in the design, the
UCLI dump command supports dumping of a hierarchy scope with
mixed digital and analog modules.

This section consists of the following subsections:

Use Model
Usage Example

Use Model

Use Model for FSDB Dumping

The following steps describe the use model for FSDB dumping:

1. Set the NOVAS_HOME variable as follows:

% setenv NOVAS_HOME <novas_path>

2. Compile your design with the -debug_access option, as follows:

% vcs -debug_access <file_name>

This section consists of the following subsections:

Enabling Dumping of the Analog/Digital Signals in the FSDB File
Enabling Merge Dumping

Enabling Dumping of the Analog/Digital Signals in the
FSDB File
The following steps describe the use model to dump the digital
signals, analog signals, or both analog and digital signals in the
FSDB file:

1. Use one of the following ways to invoke the Verdi dumper on
analog signals:

ucli% dump -msv[=on|off]
ucli% dump -file analog_mixed_signal.fsdb -type fsdb

OR

ucli% dump -file analog_mixed_signal.fsdb -type fsdb -msv[=on|off]

Note:
- You can use the -msv option to enable (on) or disable (off)
dumping of analog signals throughout the simulation. By
default, this option is enabled if on or off is not specified.
- The analog targets are ignored if the -msv option is not
specified.
- Once an analog scope is enabled with the dump -msv on
command, it cannot be disabled for dumping throughout the
simulation using the dump -msv off command.
- If -type is not specified, you can use the following command
to set the default dump type as FSDB:
% setenv SNPS_SIM_DEFAULT_GUI verdi

2. Use the dump -add UCLI command to dump analog signals,
digital signals, or both analog and digital signals in the FSDB file.
Example 1: dump -msv on|off is not specified

The -msv option is enabled by default when on or off is not
specified. Consider the following example:

ucli% dump -msv -type fsdb -file analog_mixed_signal.fsdb
ucli% dump -add top.a -fid FSDB0

This example dumps all the analog and digital signals of the
top.a scope.

Note:
You must specify the -fid argument if multiple dump files are
open; otherwise, VCS issues an error message.

Example 2: dump -msv off is specified

ucli% dump -msv off -type fsdb -file analog_mixed_signal.fsdb
ucli% dump -add top.U0 -fid FSDB0

This example dumps all the digital signals of the top.U0 scope
and all the hierarchies under it, excluding all analog signals in the
hierarchy.

For more information on the dump -add and dump -file UCLI
commands, see the Integrating VCS MX with Verdi chapter in the
VCS User Guide.

Enabling Merge Dumping

Use the set_waveform_option CustomSim configuration file
command, as shown below, to enable merge dumping:

set_waveform_option -format fsdb -file merge

This command dumps all the digital and analog signals into the
target FSDB file. If the target FSDB file is not specified, both analog
and digital signals are dumped into the default FSDB file,
novas.fsdb.

If the -file merge option is not used in the
set_waveform_option command, the analog signals are
dumped into a separate file called xa.fsdb and the digital signals
are dumped into the default FSDB file, novas.fsdb.

Note:
If any CustomSim probe command is invoked on a SPICE signal,
its wave is dumped into the target FSDB file. For more information
on the CustomSim configuration commands, refer to the
CustomSim Command Reference User Guide.
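As a sketch of the merge-dumping setup, the CustomSim configuration entry and a matching VCS invocation might look like the following. The design file name and vcsAD.init are placeholders; only the options shown earlier in this section are used.

```shell
# In the CustomSim configuration file, request merged analog/digital
# dumping into a single target FSDB file:
#   set_waveform_option -format fsdb -file merge

# Compile the mixed-signal design; the AD file points VCS at the
# CustomSim mixed-signal setup
% vcs digital_design_file -ad=vcsAD.init -debug_access
```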

Usage Example

If the -msv option is set to on, the dump -add a.b.c command
exhibits the following behavior:

If a.b.c is an analog net, its voltage is dumped.
If a.b.c is an analog sub-circuit, all the ports and internal nets of
the sub-circuit are dumped.
If a.b.c is a digital net, its digital value is dumped.
If a.b.c is a digital instance, the signals inside this scope are
dumped.
If a.b.c is a digital or analog instance where c contains mixed-
signal hierarchies, both the digital and analog signals of c and its
hierarchies are dumped.

Verdi Interactive Simulation Debugging With Analog
Mixed-Signal Designs

Verification Compiler Platform provides an approach to perform
Analog Mixed-Signal (AMS) Interactive Simulation Debugging that
enables you to debug complex AMS designs during simulation.

This section consists of the following subsections:

Prerequisites
Use Model
Interactive Simulation Debugging With Mixed-Signal Design
Limitations

Prerequisites

To perform AMS debugging in the Interactive Simulation Debug
mode, you must first compile your mixed-signal design with VCS
and Verdi, then configure the simulation settings and enable the
Interactive Simulation Debug mode. The subsequent sections
describe how to enable these features.

Use Model

Perform the following steps to debug analog mixed-signal designs:

Compiling Mixed-Signal Design With VCS
Compiling With Verdi

Compiling Mixed-Signal Design With VCS

Perform the following steps to compile your mixed-signal design with
VCS and CustomSim:

1. Specify the set_waveform_option -format fsdb
command in the CustomSim command file to have CustomSim
generate an FSDB output with analog signals. (Refer to the
CustomSim User Guide for details on the configuration
commands.)

2. Specify the -ad option on the vcs command line to compile your
mixed-signal design and generate the simv simulator
executable. (Refer to the Mixed-Signal Simulation User Guide for
details on the compilation and simulation options.) For example:

%> vcs digital_design_file -ad[=mixed-
signal_control_file] [vcs options]

Compiling With Verdi
Only the Import Design from Library feature is supported to load
your mixed-signal design into the Verdi platform. Therefore, you
need to first compile your design into the Verdi KDB with both the
digital and analog hierarchy/signals. The spicom compiler allows
you to compile the analog modules.

The following sections describe how to compile your design with
spicom and how to load the KDB into Verdi:

Compiling and Importing SPICE Design
Compiling With SPICE Unified Front-End Flow
Enabling Verdi Interactive Simulation Debug Mode

Compiling and Importing SPICE Design

When your mixed-signal design includes SPICE format, you need to
use the spicom utility to compile the analog modules of your design.

Perform the following steps to compile and load your design into
Verdi:

1. Use the vericom utility to compile the Verilog modules in your
design. For example:

%> vericom <your_verilog_design> [vericom_options]

2. Use the spicom utility to compile the analog modules in your
design. You need to specify one of the following required options
on the spicom command line:
- -ad: Loads the simulation setup file.
- -runfile: Loads the simulation runfile that is used during
simulation. spicom automatically compiles the SPICE and AD
files recorded in the runfile.

Additionally, the following options are optional:

- -topname: Specifies a name for the SPICE top cell. The default
value is topcell.
- -lib: Specifies the library in which to save your design. The
default value is work.

For example:

// Use the AD file as the input
%> spicom -ad vcsAD.init
// Use the runfile as the input
%> spicom -runfile simu.run

The netlist-type data in the design is created. The analysis
statements for simulation (for example, ALTER, PRINT, DATA, and
so on) are not compiled.

3. Use the vhdlcom utility to compile the VHDL modules in your
design. For example:

%> vhdlcom <your_vhdl_design> [vhdlcom_options]

4. Add the -msv option to the Verdi command line to enable the
mixed-signal debug features. For example:

%> verdi -top <your_design_top> -msv

5. Importing multiple libraries is supported when you have one or
more Verdi libraries compiled with the vericom utility. For example:

%> verdi -top <your_design_top> -lib work -L <libA> -L
<libB> -msv

Compiling With SPICE Unified Front-End Flow
Earlier, while using VCS, you needed to parse the analog
mixed-signal designs with spicom to debug them with Verdi.

By invoking vlogan, spicean, vhdlan, or VCS and specifying
the -kdb option along with the AD file, you can now compile the
SPICE designs for Verdi seamlessly.

Add the -kdb option on the VCS command line as follows:

%> vcs -kdb -ad=vcsAD.init

VCS internally invokes spicom with this command line to analyze the
vcsAD.init file, and the KDB model of the SPICE design is
generated.

For the UFE flow, the $NOVAS_HOME variable must be set.
Enabling Verdi Interactive Simulation Debug Mode

Perform the following steps to set the VCS simulation configurations
and enable the Interactive Simulation Debug mode in Verdi:

1. Use the Tools > Preferences command to invoke the
Preferences form and select the Simulation page.
2. Select VCS-AMS as the Simulator, add the required simulator
options in the Options field to set up the simulation, and click OK.
3. Use the Tools > Run Simulation command to enable the
interactive simulation debug mode.

You can now perform interactive simulation debugging with your
mixed-signal design. However, the simulation is controlled by VCS
and there are some limitations in VCS/CustomSim.

Interactive Simulation Debugging With Mixed-Signal
Design

The following sections describe the supported interactive simulation
debug features:

Variable Observation in the Watch Tab
Annotation Value
Dumping Analog Signals Into FSDB
Force/Release Node Voltage
Save/Restore the Simulation Session/State
Interactive Console

Variable Observation in the Watch Tab

Analog signals, ports, or nodes and their voltage values at the
current simulation time are now shown in the Watch tab. You can
add analog signals, ports, or nodes to the Watch tab by using the
drag-and-drop method or the Add to Watch right-click option for a
selected signal.

You can also drag and drop a SPICE instance into the Watch tab. All
analog ports and nodes that are defined in this SPICE instance are
added to the Watch tab. For example, if jj.inst1 is a SPICE
instance, you can drag and drop all its ports and nodes into the
Watch tab.

The following information is provided for analog variables:

Name: Specifies the variable name.
Value: Specifies the voltage value at the current simulation time,
for example, 3.30V.
Type: Specifies analog signal, port, or node.
Scope: Specifies the full hierarchical scope.

The following options are disabled for analog designs:

Set Breakpoint
Set Radix
Add Object to Watch Tab
Show Declaration, Show in Class Browser, View Object
References
Dump Object to FSDB file, Dump, Add Object to Waveform

Annotation Value
Current voltage values can be annotated to the analog signals on the
source code and nSchema frames by enabling the Source Active
Annotation option.

Dumping Analog Signals Into FSDB


You can dump the dump voltage of analog signals to the
inter.fsdb FSDB file by default. You can then check the
waveform of the dumped signals in the nWave frame.

When the voltage value of an analog signal is in the form of v(),
you can dump its voltage value in the following ways:

- Drag and drop an analog signal from the source code frame to the
  nWave frame.
- Use the Add Signal(s) to Waveform right-click option for a
  selected analog signal in the Verdi main window.
- Use the Dump to FSDB File right-click option for selected analog
  signals and instances in the Verdi main window. When analog
  signals are selected, the voltage values of the selected analog
  signals are dumped. When analog instances are selected, the
  voltage values of all analog signals under the selected analog
  instances are dumped.

You can use the nWave File -> Restore Signals command and
select the stored *.rc file to dump the signals stored in the
specified *.rc file in batch.

Force/Release Node Voltage

You can force voltage values of SPICE signals using the Force
Signal Value -> Set Force right-click command.

The following fields are provided in the invoked Force Value dialog
box for SPICE signals:

- Signal Name: The signal name is specified automatically when this
  dialog box is invoked using the right-click option. You can also
  drag and drop a signal to the Signal Name field to specify the
  signal name.
- Value: Specifies the forced voltage value in volts.
- Slope: Specifies the slope value in seconds/volt. The slope value
  is 1ps/Volt by default. You can also specify a time unit, such as
  1ms, 1us, or 1s.

Note:
If the unit is not specified in this field, seconds is used as the unit.
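As an illustrative sanity check of the Slope semantics (plain arithmetic, not a Verdi command): the time a forced value takes to ramp equals the voltage change multiplied by the slope in seconds/volt.

```python
def ramp_time_s(current_v, target_v, slope_s_per_v=1e-12):
    """Ramp duration for a forced voltage, slope in seconds/volt.

    The 1e-12 default mirrors the documented 1 ps/Volt default slope.
    """
    return abs(target_v - current_v) * slope_s_per_v

# Forcing a node from 0 V to 3.3 V with the default slope ramps in ~3.3 ps.
print(ramp_time_s(0.0, 3.3))
```

With a slope of 1ms/Volt, the same 3.3 V swing would instead ramp over roughly 3.3 ms.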

Save/Restore the Simulation Session/State
You can save or restore the simulation session/state for your
mixed-signal design.

The Simulation -> Save State and File -> Save Session commands are
available, and their usage is the same as when debugging your
digital design.

Note:
Only the voltage values of analog signals in the form of v() are
supported for restore.

Interactive Console
You can specify CustomSim interactive commands in the Interactive
Console frame by prefixing the command with ace. However, some
CustomSim interactive commands that affect the simulation progress
are disabled.
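For example, a CustomSim dumping command such as probe_waveform_voltage (one of the interactive commands named under Limitations in this chapter) would be entered with the ace prefix; the signal path and exact argument syntax here are placeholders — see the CustomSim command reference for the real syntax:

```
ace probe_waveform_voltage v(top.xi1.net5)
```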

Limitations

- Verilog-A is not supported.
- The simulation is controlled by VCS, and the limitations in VCS/
  CustomSim when performing AMS interactive debug are as follows:
- Performing step or next in analog design is not supported
- Breakpoints cannot be added into analog design
- Rewind is not supported

- Some of the menu commands and the right-click commands
are disabled
Watch tab
- Current value is not supported in the Watch tab.

Annotation value
- If the signal was not dumped before, changing the analog format
  and precision for the annotated value is not supported.
- The Annotation option in Preferences -> Spice has no effect.
Dumping analog signals
- If analog signals are dumped by SPICE statements, the CustomSim
  configuration file, or CustomSim interactive dumping commands, the
  analog signals are not dumped into the Verdi interactive FSDB
  file. For example: .print, .probe, probe_waveform_voltage,
  iprobe_waveform_current

Forcing value
- The force/release icons are not shown for analog signals in the
  nWave frame.
- CustomSim/PLI does not dump force events for analog signals into
  the FSDB file.

Unified Transaction Debug With Native Verdi Protocol Analyzer

Verification Compiler Platform offers the integration of Verdi and
Protocol Analyzer using the Unified Debug solution that maximizes
debug productivity. The new Verdi Protocol Analyzer allows you to
view high-level transaction information along with waveform and
source code in a synchronized window. Verdi Protocol Analyzer
supports both post-processing and interactive modes of debug. It
addresses usability issues: a single-process solution provides a
user-friendly platform to debug issues using waveform, transaction
data, log, and source code. This also enables more efficient
protocol-level analysis and signal-level debugging of issues, and
increases the productivity of the debug process.

Additionally, with this release, VC VIPs offer support for dumping
both signal-level and transaction-level FSDB files directly during
simulation. The FSDB files can be used in both Protocol Analyzer
and Verdi.

The following VC VIPs support the unified native FSDB dumping:

- AMBA VIP (AXI, AHB, and APB)
- CAN VIP
- DP VIP
- DFI VIP
- Ethernet VIP
- HBM VIP
- JTAG VIP

- OCP VIP
- PCIe VIP
- MIPI VIPs: DigRF, M-PHY, UFS, UniPro, and SoundWire
- Memory VIPs: DDR and LPDDR
- SWD VIP
- UART VIP
- USB PD and USB VIP

Prerequisite

The following is the prerequisite to use Protocol Analyzer features
in Verdi:

- Capture the VIP transaction data during simulation and store the
  data in an FSDB file for interactive debug feature validation.

Use Model

To invoke Protocol Analyzer in Verdi, go to Tools -> Transaction
Debug -> Protocol Analyzer.

The Protocol Analyzer window appears as shown in Figure 6-33.

Figure 6-33 Protocol Analyzer Interface

Protocol Analyzer offers the following features:

Hierarchy Tree
Hierarchy Tree enables you to view different protocol layers and the
exact number of transactions in each layer which are grouped using
the protocol definition file.

Quick Filter
Using Quick Filter, you can filter the added Hierarchy Tree contents
by setting the filter condition and view only the required streams by
applying the right filter.

Protocols
The Protocols tab displays all available VIPs installed within
DESIGNWARE_HOME or VC_HOME. The Protocols tab displays the list
of protocols from the custom location where Protocol Definition files
are present for the proprietary protocol. To set the proprietary
protocol location, use the PA_CUSTOM_VIP_PATH environment
variable.

Note:
You can view the protocol-specific Class Reference Guide and User
Guide from the Hierarchy Tree in the Protocols tab.

Global Pane
The Global pane enables you to select a particular viewable area
and view the selected area in the Object pane.

In the Global pane, the default visible range is the full length of the
simulation. You can select the visible range and view the range
accordingly in the Object pane.

Object Pane
The Object pane displays the objects that belong to different layers
within the simulation. Similar to the Protocol Analyzer object, the
Object pane reads the object color, object name, and column name
from the protocol definition file and displays these properties for each
object. Also, you can change the color and override the color
selection for each object with Preference settings.

When an object is selected in the Object pane, the corresponding
parent-child relation is highlighted in yellow. You can select the
object and zoom to the full view.

Details
The Details tab displays the attribute and value for the selected
object. The Details tab updates the attribute and value details when
the simulation stops during interactive debug. You can use the filter
to search the desired attributes and values.

The Details tab also provides a link to the documentation for a
selected field; you can right-click the selected object and open
the document in the embedded browser or an external browser based
on Preference settings.

Call Stack
The Call Stack tab includes the File Name and Line columns to
display the complete call stack of the selected object. You can check
the call hierarchy. If the class design is loaded into nTrace, you can
double-click the particular call stack and view the corresponding call
stack code in nTrace.

Search Results
The Search Results tab displays all objects that match the search
criteria. You can traverse through the objects to find details of
any selected object. Search results are updated with the matching
search criteria if the simulator stops during interactive debug.

For more information on the menu commands, synchronization with
nWave, nTrace, SMART Log, tBrowser, and other features, see the
Verdi documentation.

Limitation

You may encounter performance issues when Quick Filter is applied
on large data.

Memory Debug With Native Verdi Protocol Analyzer

Synopsys Memory VIPs are based on the SystemVerilog technology
(SVT) infrastructure that includes SystemVerilog classes and the
Memory Server. The Memory Server is one of the key elements of the
SVT Memory VIP infrastructure because the actual memory values are
stored in the Memory Server instead of a SystemVerilog array. The
primary advantage of this approach is that the performance of the
Memory VIP does not rely on the quality of each simulator's
implementation of an equivalent sparse array server. For multi-GB
memories, the simulation accesses only a small address range within
the overall memory. In addition, this approach supports a virtual
pattern where memory can be initialized without incurring any
significant memory overhead.
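The storage idea can be illustrated with a small conceptual Python sketch. This models the principle only — it is not the SVT Memory Server implementation or its API: written addresses are stored sparsely, and unwritten addresses fall back to an initialization pattern at no storage cost.

```python
class SparseMemory:
    """Conceptual sparse memory: only written words consume storage;
    untouched words return a background fill ("virtual pattern")."""

    def __init__(self, size_words, fill=0):
        self.size_words = size_words  # can model a multi-GB space
        self.fill = fill              # initialization pattern
        self.words = {}               # sparse backing store

    def write(self, addr, data):
        assert 0 <= addr < self.size_words
        self.words[addr] = data

    def read(self, addr):
        assert 0 <= addr < self.size_words
        return self.words.get(addr, self.fill)

mem = SparseMemory(size_words=1 << 34, fill=0xDEADBEEF)  # 16 G-word space
mem.write(0x1000, 0x42)
print(hex(mem.read(0x1000)))  # 0x42
print(hex(mem.read(0x2000)))  # 0xdeadbeef (pattern, no storage used)
print(len(mem.words))         # 1 entry backs the whole space
```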

In a Memory VIP, the protocol-level transactions and memory actions
are stored in an FSDB file and the .mempa file, respectively. The
Memory Server can be configured to record all memory actions to
view the memory-related data in the Verdi Protocol Analyzer. With
the information recorded by the Memory Server, you can determine
the value of any memory address at any time during simulation.

The SVT Memory Server is enhanced to store the memory actions
associated with the server in an FSDB file as transactions. The
streams of these transactions are tagged with hidden attributes so
that the memory instances can be recognized by the Verdi
environment and viewed in the new Memory Array browser. In
addition, you can view the memory actions of the Memory Server in
the tAnalyzer or tBrowser windows.

For performance considerations, each instance of a Synopsys Memory
VIP must be configured to dump data to the FSDB file. By default,
this capability is not enabled. Synopsys Memory VIPs provide
controls for dumping protocol-level transactions, the status of
internal state machines, and the memory actions performed by the
Memory Server.

This section consists of the following subsections:

- Prerequisite
- Use Model
- Limitations

Prerequisite

The following is the prerequisite to use the Memory Protocol
Analyzer feature in Verdi:

- The Verdi Memory Protocol Analyzer feature is dependent on the
  Verdi FSDB dumper.

Use Model

To invoke Memory Protocol Analyzer in Verdi, go to Tools ->
Transaction Debug -> Memory Protocol Analyzer.

The Memory Aware Debug window appears as shown in Figure 6-34.

Figure 6-34 Memory Aware Debug Interface

Memory Protocol Analyzer offers the following features:

Memory Array View

The Memory Array view is the main view for examining memory address
values. This view displays the value of an address location at any
particular time. The Display Range field enables you to control the
address range so that you can avoid viewing the entire memory for
large transactions.

Detail View
The Detail view displays attributes and values associated with the
most recent memory action for a selected memory address.

History View
The History view enables you to display all memory actions that have
been applied to a selected address for the specified time.

Summary View
The Summary view displays memory actions and memory errors for
the selected address.

Search Results View

The Search Results view enables you to display memory actions as
per the specified search criteria. You can traverse through the
list of memory actions and select any memory action to investigate
the details of that item.

For more information on menu commands, see the Verdi documentation.

Limitations

The following are the limitations of this feature:

- Performance issues are expected with large data while loading the
  Memory Array View.
- There is currently no checkpoint and restore support for the
  Memory Server.

VC APPs Protocol Analyzer

The protocols utilized by an SoC comply with standards defined by
industry consortiums to provide predictable interoperability
between SoC blocks and sub-systems. The standard protocol
specifications facilitate third-party development of VIPs to
shorten SoC development and verification time and also reduce the
risks associated with the design of complex SoCs and sub-systems.

The growth in complexity and the increasing number of protocols
used on SoCs create a verification challenge: correlating
information from different sources and performing root cause
analysis. Traditional debug methodologies include a combination of
loosely connected waveforms, log files, messages, and documentation
that is insufficient for productive debugging of protocol-related
issues of third-party VIPs.

The VC APPs Protocol Analyzer feature of the Verdi Protocol
Analyzer provides you an interface to integrate third-party VIPs
into the Verdi environment. The objective of this feature is to
standardize Verdi Protocol Analyzer as the common debug platform
for Synopsys VIPs and third-party VIPs. The VC APPs Protocol
Analyzer SystemVerilog API supports components that can generate
complex (hierarchical) protocol-style transactions as well as
components that incorporate memory operations (including complex
memory operations, such as front-door and back-door access
methods).

Verdi Protocol Analyzer utilizes protocol-specific data that is
stored in a common FSDB database, along with the protocol knowledge
provided by the Synopsys VIP, to create a high-level, interactive
display of the protocol-related activity with comprehensive
analysis and debug capabilities.

Similarly, for integrating third-party VIPs into the Verdi environment,
you must capture the protocol-specific activity and store that data in
an FSDB database. You must have access to the VIP source code
to capture the protocol-specific activities. If you have restricted
access to the source code due to contract obligations, the only
available mechanism for capturing protocol-specific activity is
directly from the signal changes that occur on the protocol bus. In
addition, you must specify the protocol information that is required to
facilitate the protocol-specific debug features available in the Verdi
environment.

The protocol specification defines the activity that occurs in each
layer along with its associated data and the manner in which the data
communicates between the layers.

This section consists of the following subsections:

- Prerequisite
- Use Model

Prerequisite

The following is the prerequisite to use the VC Apps Protocol
Analyzer feature in Verdi:

- The Verdi FSDB dumper must be installed to use VC Apps Protocol
  Analyzer.

Use Model

This section describes the use model of the VC Apps Protocol
Analyzer integration process for custom or third-party VIPs.

The Verdi Protocol Analyzer processes the data captured during
simulation with the protocol extension definition data to enable
the protocol-oriented debug and analysis capabilities of Verdi.

The VC Apps Protocol Analyzer integration process for custom or
third-party VIPs is shown in Figure 6-35.

Figure 6-35 VC APPs Protocol Analyzer Integration process

You must capture the protocol-related data along with other design
information, such as waveforms, source code, stack traces, into an
FSDB file as shown in Figure 6-35.

Verdi Protocol Analyzer uses the protocol-related data and other
design information in conjunction with protocol knowledge provided
by the Synopsys VIP or created as part of the VC APPs Protocol
Analyzer integration process to create a higher-level, interactive
display of the protocol-related activity, and provide comprehensive
protocol-oriented analysis and debug capabilities. The protocol-
related activity can also be synchronized with other design-related
debug information, such as signal activity and simulation log
messages to enable an easy correlation of errors with the protocol
activity.

Figure 6-36 System Topology With Integration of Custom VIPs

The VC APPs Protocol Analyzer integration process involves the
following steps:

- Capturing Protocol Data During Simulation
- Creating or Importing Protocol Extension Definition

Capturing Protocol Data During Simulation
Perform the following steps to capture protocol-related data:

- Modify the SystemVerilog source code with calls to the VC Apps
  Protocol Analyzer SystemVerilog API. You can annotate the source
  code using the verdi_pa_writer class that is delivered as part of
  the Verdi installation. The SystemVerilog API enables the
  recording of protocol-related activity to an FSDB file during
  simulation.
  Note:
  Before importing the writer, you must analyze the structure of the
  VIP and make the necessary modifications.
- Identify the objects to capture and store them in an FSDB file.

The following steps illustrate the SystemVerilog code annotation
flow shown in Figure 6-37:

1. Create the object.
2. Add relationships with other objects.
3. Set the object attributes.
4. End the object at the appropriate location in the VIP code.
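The four steps can be modeled conceptually in Python as follows. This is an illustrative model of the create/relate/set-attributes/end life cycle only — the real API is the SystemVerilog verdi_pa_writer class, and its actual method names and signatures (which differ from the ones invented here) are documented with Verdi.

```python
class PaObject:
    """Conceptual transaction object following the annotation flow.
    All names here are hypothetical; they only mirror steps 1-4."""

    def __init__(self, name, stream, begin_time):
        self.name = name             # step 1: create the object
        self.stream = stream
        self.begin_time = begin_time
        self.parent = None
        self.attrs = {}
        self.end_time = None

    def add_relation(self, parent):  # step 2: parent-child relationship
        self.parent = parent

    def set_attr(self, key, value):  # step 3: object attributes
        self.attrs[key] = value

    def end(self, end_time):         # step 4: end the object
        self.end_time = end_time

# Steps 1-4 for a hypothetical write transaction:
burst = PaObject("write_burst", "bus.master0", begin_time=100)
beat = PaObject("write_beat", "bus.master0", begin_time=110)
beat.add_relation(burst)
beat.set_attr("addr", 0x1000)
beat.set_attr("data", 0xCAFE)
beat.end(120)
burst.end(150)
print(beat.parent.name, beat.attrs["addr"])  # write_burst 4096
```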

Figure 6-37 SystemVerilog Code Annotation Flow

Creating or Importing Protocol Extension Definition

You can create or import the protocol extension definition to
describe the specific characteristics of the protocol information
that is captured during simulation.

Perform the following steps to create the protocol extension
definition:

1. Identify the object types and hierarchies you want to capture.
2. Identify the fields that are associated with each object type.
3. Create a protocol extension definition structure for the objects
   and fields.
4. Validate the structure and contents of the initial protocol
   extension definition.

Note:
To validate the structure of protocol extension definition, set the
PA_CUSTOM_VIP_PATH environment variable and examine the
protocol definition in the Protocols tab of Verdi Protocol Analyzer.
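A protocol extension definition essentially enumerates object types, their fields, and their layer hierarchy. As a rough conceptual sketch (the real definition format is Verdi-specific and not shown here; all type and field names below are invented), the structure from steps 1-4 might be modeled as:

```python
# Hypothetical object types and fields for illustration only.
protocol_def = {
    "name": "my_custom_protocol",
    "objects": {
        "packet":  {"parent": None,     "fields": ["kind", "length"]},
        "payload": {"parent": "packet", "fields": ["data", "crc"]},
    },
}

def validate(defn):
    """Step 4: basic structural validation -- every parent must exist."""
    objs = defn["objects"]
    for name, obj in objs.items():
        parent = obj["parent"]
        if parent is not None and parent not in objs:
            raise ValueError(f"{name}: unknown parent {parent}")
    return True

print(validate(protocol_def))  # True
```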

For more information on VC Apps Protocol Analyzer, see the Verdi
documentation.

Protocol Analyzer: Native Performance Analyzer for Transactions

The performance analyzer is an interactive graphical application
that provides performance, statistics, and transaction analysis
results for the VC Verification IP in graphical forms, including
pie charts, line charts, and bar charts. It provides a convenient
way to measure the performance of protocols.

The performance analyzer is based on the protocol analyzer. It
works as a feature of the protocol analyzer and provides
protocol-oriented analysis, compliance checking, report generation,
and report comparison. In addition to accelerating the
investigation of protocol behavior, the performance analyzer also
defines several extendable interfaces and commands that can be used
to integrate the performance analyzer into the regression system or
other VIP tools.

The performance analyzer is a protocol-aware platform that supports
different protocols with different versions. It also provides extendable
interfaces to support new protocols in the future. By default, the
performance analyzer provides common performance metrics and
report layouts. VIP developers and end users can also define their
own performance metrics and report layouts for different protocols.

The following are the key features of the Performance Analyzer:

- Protocol-aware platform: The performance analyzer is a general
  protocol-aware platform that supports different protocols with
  different versions. The performance analyzer provides extendable
  interfaces to support new protocols in the future.
- Performance metric customization: You can customize the
  performance metrics for different protocols. In addition to the
  metric definition, the performance analyzer also provides
  convenient ways to manage and configure performance metrics.
- Performance report customization: The Performance Analyzer uses
  WYSIWYG visual designer and layout tools to create performance
  reports. You can easily add metrics and UI controls using the
  drag-and-drop feature, change various properties of a visual
  component using a property editor, and apply compliance check
  rules.
- Graphical view: The Performance Analyzer provides different types
  of charts and controls, such as bar chart, pie chart, and line
  chart, to display the performance data. Related charts and
  controls can be grouped into the section control.
- Multiple format report: Performance reports can be exported in
  different formats, such as HTML and Excel files. You can also
  export the simulation performance data to a CSV file to perform
  data analysis using Microsoft Excel.
- Batch mode capability: Most functions in the Performance Analyzer
  can be run in the non-GUI mode. You can call functions by typing
  the commands in the console view. Full batch mode capability can
  also be used in regression tests.

This section consists of the following subsections:

- Performance Analyzer Use Model
- Summary of Usage

Performance Analyzer Use Model

The performance analyzer application in the Verdi platform is used
to measure the performance of protocols and provides performance,
statistical, and transaction analysis of protocols.

To invoke the performance analyzer, use the Tools -> Transaction
Debug -> Performance Analyzer command in nTrace.

Figure 6-38 Invoke Performance Analyzer from nTrace

The default pane layout of the Performance Analyzer frame is shown
in Figure 6-39. Its frame is divided into three main areas:

- The left region contains the Hierarchy Tree pane that displays
  the instance hierarchy, protocol metrics, and performance results.
- The middle region contains the Performance Report pane that
  displays the performance report.
- The right region contains the Details pane that shows the details
  associated with the selected metric result.

Figure 6-39 Performance Analyzer Default Layout

If there is a primary FSDB file, any newly-opened Performance
Analyzer frame automatically opens the primary FSDB file.
Otherwise, the newly opened Performance Analyzer frame is empty.
You can then open an FSDB file with the File -> Open command or
with the corresponding toolbar icon.

The selection is synchronized between the Performance Report pane
and the Metrics tab. The Details tab shows the metric results and
constraint violations for the selected metric.

Hierarchy Tree Pane
The Hierarchy Tree pane contains the Hierarchy Tree, Metrics, and
Results tabs.

Hierarchy Tree Tab

The Hierarchy Tree tab is used to display all VIP instances in the
design. The hierarchy is extracted from the FSDB file and is the
same hierarchy that is displayed for the Protocol Analyzer.
However, note that only the instances can be selected. The
hierarchy view is used to select the instances for which
performance metrics are to be calculated.

Figure 6-40 Hierarchy Tree Tab

The following right-click commands are available in the Hierarchy
Tree tab:

- Expand All: Expands all items.
- Collapse All: Collapses all items.

- Show in Result View: Shows the selected instance in the result
  view.
- Show Report: Evaluates the enabled performance metrics for the
  selected instances and displays the performance report.
- Show Report in New Window: Evaluates the enabled performance
  metrics for the selected instances and displays the performance
  report in a new Performance Analyzer frame. Only one performance
  report is displayed in the new Performance Analyzer frame.

If instances from multiple protocols have been selected for
performance evaluation, only the instances from the first protocol
are used, and a warning message pops up; you can show the
performance report for the other instances in a new Performance
Analyzer frame.

Metrics Tab
The Metrics tab displays the metrics and associated constraints for
all protocols in the design. Even if there are multiple instances
of the same protocol in the design, there is still only one root
node for that protocol. This tab allows you to enable/disable
metrics, define/edit custom metrics and constraints, or
add/change/remove metric constraints.

Figure 6-41 Metrics Tab

For each protocol, the tree of performance metrics has two
sub-nodes: one node for the instance performance metrics and the
other node for the multi-instance performance metrics. The
corresponding metrics appear as leaf-level tree nodes within each
of these sub-nodes.

To enable or disable performance metrics, click the check boxes in
the hierarchy tree. Use the check box for the instance folder and
protocol folder to enable/disable all metrics in the folder.

Pre-defined metrics are differentiated from custom metrics by the
icon of the metric name. You cannot delete or edit the pre-defined
metrics.

Figure 6-42 Enable Performance Metrics by Clicking Check Box

The following right-click commands are available in the Metrics
tab:

- Expand All: Expands all items.
- Collapse All: Collapses all items.
- Add Metric: Displays a dialog for adding a custom metric.
- View Metric: Shows details of the selected metric in the metric
  dialog.
- Edit Metric: Displays the Add/Edit Metric form to edit a custom
  metric. Only custom metrics can be edited. Pre-defined metrics
  cannot be edited, but they can be copied with the Copy Metric
  command.
- Copy Metric: Displays a dialog to copy the selected metric. All
  fields are the same as the selected metric, except for the metric
  name.
- Delete Metric: Deletes the selected metric.

- Set Chart: Displays a dialog to set the chart option for the
  selected metric.
- Save Configuration: Saves the current metric/constraint
  configuration into a Tcl file.
- Load Configuration: Loads a metric/constraint configuration from
  a Tcl file.
- Reset Constraint: Resets the constraints for the protocol of the
  selected metric. You are prompted to confirm the operation.
- Reset Configuration: Resets both metrics and constraints for the
  protocol of the selected metric. You are prompted to confirm the
  operation.

If the metric definition contains any syntax errors, a warning
message is displayed when you attempt to create or modify the
metric.

Any runtime error associated with the metric definition, such as an
incorrect table name or column name, can only be detected when the
metric is being evaluated. This class of errors is annotated in the
Results tab.

Custom performance metrics can be defined with the New Metric and
Edit Metric right-click commands. Invoke one of these commands to
open the Add/Edit Metric form where you can specify or modify the
performance metric definition.

Figure 6-43 Add/Edit Metric Form

In the Metrics tab, double-click the Min and Max columns of the
metric to set the minimal and maximal constraints for that metric.

Figure 6-44 Set Minimal/Maximal Constraint for Metric
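The Min/Max constraint check itself is simple range logic; the following Python sketch is purely illustrative (the sample metric values are invented, and real metrics are defined through the Add/Edit Metric form):

```python
def check_constraints(values, min_c=None, max_c=None):
    """Return the metric values that violate the configured Min/Max
    constraints, mirroring how violations are flagged in the results."""
    return [v for v in values
            if (min_c is not None and v < min_c)
            or (max_c is not None and v > max_c)]

# Hypothetical latency samples for one instance, constrained to [2, 10]:
violations = check_constraints([4, 6, 15, 5, 1], min_c=2, max_c=10)
print(violations)  # [15, 1]
```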

The Set Chart command can be used to set the metric display chart
option.

Figure 6-45 Set Chart Type Form

Results Tab
The Results tab is a tree view that displays the performance
analysis results. The root folder for the tree is the protocol name.
This folder contains a sub-item that contains the results of evaluating
all multi-instance metrics and separate sub-items for each instance
that was evaluated.

The next level in the tree hierarchy displays the individual metrics
(instance or multi-instance) that have been evaluated. Constraint
violations are indicated with an alternate background color for the
violating metrics.

Figure 6-46 Results Tab

The Count column shows the number of results generated for the
metric. If a runtime error occurs when evaluating the metric, NA is
shown and the tooltip provides details of the error.

A metric that has constraint violations is highlighted in a
different color: purple for maximal constraint violations, yellow
for minimal constraint violations, and red for both.

Selections in the Results tab are synchronized automatically with
the corresponding item in the Performance Report pane. For example,
if the trans_write_latency metric is selected for instance2, the
Performance Report pane scrolls to the section for instance2 and
highlights the chart for the trans_write_latency metric
automatically.

The following right-click commands are available in the Results
tab:

- Expand All: Expands all items.
- Collapse All: Collapses all items.
- Show in Hier. Tree: Shows the selected instance in the Hierarchy
  Tree tab.
- Show in Metrics View: Shows the selected metric in the Metrics
  tab.

Performance Report Pane

The Performance Report pane displays the performance report of the
specified protocol. It is a WebKit-based view that displays
HTML/CSS/JavaScript-based reports.

After the performance evaluation is completed for the selected
instances, an HTML-based metric report is generated and displayed
in the Performance Report pane.

The first section of the report contains general information about
the performance evaluation, such as the FSDB file, the selected
instances, the metrics that have been evaluated, and any constraint
violations.

As previously explained, multi-instance metrics are evaluated only
once on all input instances, whereas instance metrics are evaluated
on every input instance. The performance report includes a separate
section for each input instance and a summary section for all input
instances. The instance section includes performance results for
all instance performance metrics. The summary section only includes
performance results for multi-instance performance metrics.

A performance metric can generate a result that has only one record
or contains multiple records. The result can also have only one
column or can include multiple columns. Note that if a performance
result has multiple columns, the first column is taken as the value of
the metric.

Figure 6-47 Performance Report

For a metric result with only one record, the result is shown as a
label.

Figure 6-48 Result as Label

If there is a constraint violation for the metric, the result is
shown in a different color.

Figure 6-49 Result in Different Color in Label

For the metric result with multiple records, the result is shown either
as a bar chart or as a line chart. If there are constraints associated
with the metric, the constraint values are also displayed.

Use the middle mouse button to scroll up/down or to zoom in/out of
the chart. To move the chart left and right, drag the mouse left
and right. The scroll bar can also be used to move the chart
left/right and to zoom the chart in and out.

For line and bar charts, some details may be lost if there are too
many items in the visible range. Because some data items may
overlap, zoom in on the chart several times to make all items in
the visible range visible.

Figure 6-50 Line Chart of Performance Report

Figure 6-51 Bar Chart of Performance Report

For a pie chart, only the top ten items are shown as separate
slices. Other items are grouped into a single item named others.
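That grouping rule can be sketched as follows (an illustrative Python model of the described behavior, not Verdi code):

```python
def pie_slices(items, top_n=10):
    """Keep the top_n largest items as their own slices and fold the
    rest into a single 'others' slice, as the pie chart does."""
    ranked = sorted(items.items(), key=lambda kv: kv[1], reverse=True)
    top = dict(ranked[:top_n])
    rest = sum(v for _, v in ranked[top_n:])
    if rest:
        top["others"] = rest
    return top

# 12 hypothetical result categories -> 10 slices plus "others".
data = {f"cat{i}": i for i in range(1, 13)}
slices = pie_slices(data)
print(len(slices), slices["others"])  # 11 3
```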

Figure 6-52 Pie Chart of Performance Report

Note:
The selection in the pie chart does not synchronize to the Details
and Results panes. This is because the pie chart shows the ratio of
various performance analysis results, but the Details and Results
panes show the detail and result of only the selected performance
analysis result.

The Performance Report pane includes a section for every input
instance and a summary section for all input instances. The
following figure shows a portion of an instance section:

Figure 6-53 Performance Report - Instance Section

The following figure shows a portion of a summary section:

Figure 6-54 Performance Report - Summary Section

The following right-click commands are available in the Performance
Report pane:

- Save Report in HTML: Saves the report in the HTML format.
- Save Report in CSV: Saves the report in the CSV format.

You can save the report in the HTML format that can be opened in
any web browser that supports JavaScript and CSS3. You can also
export the metric results into a CSV file that can be opened in
Excel.

Figure 6-55 Performance in CSV Format
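As a sketch of post-processing an exported CSV report outside the GUI, the following Python snippet averages a hypothetical excerpt. The column names (metric, instance, value) are illustrative assumptions; the columns of a real export depend on the metrics configured in Performance Analyzer.

```python
import csv
import io
from statistics import mean

# Hypothetical excerpt of an exported performance report.
sample_csv = """metric,instance,value
throughput,top.a0,120
throughput,top.a1,80
latency,top.a0,15
latency,top.a1,25
"""

def average_by_metric(csv_text):
    """Group rows by the metric column and average their values."""
    groups = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        groups.setdefault(row["metric"], []).append(float(row["value"]))
    return {name: mean(vals) for name, vals in groups.items()}

print(average_by_metric(sample_csv))
# {'throughput': 100.0, 'latency': 20.0}
```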

Details Pane
For any metric result selected in the Performance Report pane, the
result data and any other data specified in the SQL metric expression
are shown in the Details pane. Metric constraint violations are also
annotated in this pane.

Details Tab
The Details tab displays the result data for any metric result selected
in the performance report pane and includes all data selected in the
SQL metric expression.

Figure 6-56 Details Tab

The first column of the table displays the metric result. Any vector
result (a result that has multiple records) can be shown as a bar chart
in the HTML report using the values from the first column as the data
for the y-axis and the values from the second column as the data for
the x-axis.

If the table includes a column of transaction IDs, the information can
be used to aid the debugging process. For example, the transaction
IDs can be used to trace the ultimate cause of constraint violations
or to synchronize selections with the Performance Analyzer,
Transaction Browser, and Transaction Analyzer frames. In
addition, records that include transaction IDs can be dragged and
dropped to another frame to be viewed.

Summary of Usage

Perform the following steps to execute the performance analysis:

1. Open the Performance Analyzer frame.
2. Open an FSDB file. Alternatively, use the primary FSDB file if one
has been loaded automatically.
3. Select instances from the Hierarchy Tree pane.
4. Set performance metrics and constraints (disable/enable metrics;
set constraint values).
5. Perform the metric evaluation with New Metric and Edit Metric
commands.
6. Open the performance report.
7. Check the performance report and debug any constraint
violations.
8. Save the performance report in the HTML or CSV format.
9. Open the HTML report in a web browser and review the report.
10. Open the CSV report in an Excel file, show charts on the data,
and review the report.

Optimized Performance of Gate-Level Designs Using
Native FSDB Gates

RTL simulation is the basic requirement, and verification engineers
prefer gate-level simulations to improve the quality of the netlist
before handing off a design to an integration team. For large
gate-level designs, simulation is time consuming and the size of the
Fast Signal Database (FSDB) file is huge.

Verification Compiler Platform offers the integration of VCS with the
Verdi dumper to optimize FSDB gate-level dumping without
Standard Delay Format (SDF) information for Verilog designs.

The FSDB gate acceleration feature helps to reduce the dumped
FSDB file size and optimizes the VCS simulation time for specific
coding styles and forced signal flows. This improves simulation
performance. This feature can be enabled during simulation using a
simple runtime option.

The FSDB gate acceleration feature directs VCS to analyze
essential signals and the netlist information, and it uses the FSDB
Dumper to dump only the essential signals in an FSDB file.
Applications such as the Waveform Viewer and FSDB Reader
retrieve the data stored in the FSDB file. The VCS computation
engine uses the retrieved data to generate complete signal data
during debugging.

Note:
This feature is supported with FSDB Reader 5.2 (for users of the
FSDB Reader API). If the FSDB Reader API libraries are used to
read an FSDB file in the new format, a Verdi license is required.

This section consists of the following subsections:

Use Model
Limitations

Use Model

If your design includes force events, you need to enable the VCS
force capability using the -debug, -debug_all, or -debug_access+f
options during compilation.

Use the +fsdb+gate runtime option in the VCS simulation
command line to enable this feature.

For example,

%> ./simv +fsdb+gate

Alternatively, set the following environment variable before starting
the simulation:

%> setenv FSDB_GATE 1
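Putting the compile-time and runtime pieces together, a typical invocation might look like the following sketch. The design file name is a placeholder; only the options shown are taken from this guide.

```shell
# Compile with force visibility enabled (needed only if the design
# includes force events).
%> vcs -debug_access+f design.v -o simv

# Run with FSDB gate acceleration via the runtime option ...
%> ./simv +fsdb+gate

# ... or via the environment variable (csh syntax, as in this guide).
%> setenv FSDB_GATE 1
%> ./simv
```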

Key Points to Note

After simulation, a new format of the FSDB file is generated.
Expect higher simulation speed for SystemVerilog gate-level
designs without SDF.
FSDB reading performance (CPU or memory) when using Verdi
debug might be impacted.

Limitations

The following are the limitations of the FSDB gate acceleration
feature:

The +fsdb+gate option is disabled with a warning message if you
add any of the following FSDB Dumper options in the simulation
command line or if you specify them using the setenv command:
- +fsdb+glitch=<num> (corresponding environment variable
is NOVAS_FSDB_ENV_MAX_GLITCH_NUM or FSDB_GLITCH):
If the <num> argument is not equal to 1, the +fsdb+gate
option is disabled.
- +fsdb+dumpon_glitch+time and
+fsdb+dumpoff_glitch+time
- +fsdb+region (corresponding environment variable is
FSDB_REGION)
- +fsdb+sequential (corresponding environment variable is
NOVAS_FSDB_ENV_DUMP_SEQ_NUM)
- +fsdb+strength=on (corresponding environment variable
is NOVAS_FSDB_STRENGTH)
- +fsdb+esdb (corresponding environment variable is
FSDB_ESDB)

If the +fsdb+gate option is enabled, the +strength option in
dumping tasks is ignored with a warning message.

FSDB gate acceleration does not support VCS MVSIM Native
mode for optimized performance.

FSDB utilities require many computations; a performance
slowdown is expected when using FSDB utilities.

7
Closing Coverage Gaps
For a typical SoC design, achieving coverage closure is a crucial
requirement. Coverage metrics provide a measurement of
correctness and completeness of a simulation test environment.
However, identifying which uncovered targets are actually
achievable is extremely painful and time consuming. Verification
engineers might need to reiterate the debug cycle to close targets
that are unreachable in a design.

Lack of coverage is expected due to the presence of the following
aspects in the code:

Code for error scenarios, which are almost impossible to achieve.
Code that simply cannot be executed (dead code), such as
unused functionality within re-used code.

Verification Compiler Platform offers the following coverage and
Formal Coverage Analyzer features with this release:

NPI Coverage Model
VC Formal Coverage Analyzer Features

NPI Coverage Model

VC Apps provide a native programming interface (NPI) model for
users to traverse coverage information in the Verilog Database
(VDB), such as line coverage, toggle coverage, FSM coverage,
condition coverage, and branch metrics.

VC Apps provide NPI Coverage APIs, which are used to read the
coverage database. The NPI coverage objects include assert,
testbench, and power coverage.

Figure 7-1 NPI Coverage Objects: Assert, Testbench, and Power

Multiple databases can now exist simultaneously. The
npi_cov_merge_test API can merge two different tests from the
same database or from two different databases. Note that only code
coverage data can be merged. The npi_cov_merge_test API
has an additional mapfile argument to specify the map file that maps
different database hierarchies. The format of the mapfile is similar to
the following:

Figure 7-2 Mapfile Format

This format describes how to merge the coverage data among two
or more instances that are defined in the same module. The merged
coverage data is saved to the destination instance. The instance
list is a comma-separated list of full names; it can also be the
wildcard character *, which means that all instances of the same
module are merged.
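As an illustration of the mapfile grammar described above, the following Python sketch parses a mapfile of the documented shape into plain tuples. The parsing rules are inferred from this guide's examples; the real tools may accept additional syntax.

```python
import re

def parse_mapfile(text):
    """Parse a simple NPI merge mapfile of the documented form:
    {MODULE: mod_name
     INSTANCE:
     {SRC: inst_a,inst_b
      DST: inst_c
     }
    }
    Returns a list of (module, src_instances, dst_instances) tuples.
    A '*' instance list means 'all instances of the module'.
    """
    entries = []
    pattern = (r"\{MODULE:\s*(\S+)\s*INSTANCE:\s*"
               r"\{SRC:\s*([^\n}]+)\s*DST:\s*([^\n}]+)\s*\}\s*\}")
    for m in re.finditer(pattern, text):
        module, src, dst = m.group(1), m.group(2), m.group(3)
        # SRC/DST are comma-separated lists of full hierarchical names.
        split = lambda s: [x.strip() for x in s.split(",")]
        entries.append((module, split(src), split(dst)))
    return entries

sample = """{MODULE: alu
INSTANCE:
{SRC: top.a0,top.a1
DST: top.a0
}
}"""
print(parse_mapfile(sample))
# [('alu', ['top.a0', 'top.a1'], ['top.a0'])]
```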

This section consists of the following subsections:

Use Model
Object Diagram
APIs for C Interface
APIs for Tcl Interface

Use Model

Two types of use models are available for the NPI Coverage APIs.
The first is the database-based use model, where the metric handle
is obtained from the database. The line, branch, toggle, FSM,
condition, and assert coverage databases belong to the
database-based use model.

Figure 7-3 Use Model - Database-Based NPI Coverage APIs

The second is the test-based use model, where the metric handle is
obtained from the test. The testbench and power coverage belong
to the test-based use model.

Figure 7-4 Use Model - Test-Based NPI Coverage APIs

Object Diagram

The following are the object diagrams of the NPI Coverage Model:

Assert Metric
Testbench Metric
Power Metric

Assert Metric
The assertion metric includes the coverage information about
assertions, cover properties, and sequences.

Figure 7-5 Object Diagram - Assert Metric

The following are the details of the object diagram of the assert
metric:

The assert generates the child bins: success, attempt, failure, and
incomplete. The cover property generates the child bins: success,
attempt, vacuous, and incomplete. The cover sequence generates
the child bins: success, attempt, firstmatch, and incomplete.

Cover information is shown only in the assert, cover property, and
cover sequence; the npiCoverable property of these objects is 1.
The npiCovCount property of the success bin determines whether
its scope handle, which can be an assert, cover property, or cover
sequence, is covered. The result is the value of the property
npiCovCovered.
The property npiCovCount can be set for the success, attempt,
failure, incomplete, and firstmatch bins. Similarly, the count of the
success bin can affect the properties npiCovered and
npiCovStatus of its scope handle.
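The relationship between the success bin count and the covered status of its scope can be sketched as follows. This is a conceptual Python model of the behavior described above, not the actual NPI C API; the class and attribute names are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class CovBin:
    name: str
    count: int = 0  # conceptual stand-in for the npiCovCount property

@dataclass
class AssertScope:
    """Conceptual model of an assert scope with its child bins."""
    name: str
    bins: dict = field(default_factory=lambda: {
        b: CovBin(b) for b in ("success", "attempt", "failure", "incomplete")
    })

    @property
    def covered(self):
        # The scope counts as covered once its success bin has been hit,
        # mirroring how the success bin's count drives the covered status.
        return self.bins["success"].count > 0

a = AssertScope("top.a0.check_req")
print(a.covered)            # False
a.bins["success"].count = 3
print(a.covered)            # True
```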

Testbench Metric
The testbench metric includes the coverage information to monitor
user-specified expressions during simulation.

Figure 7-6 Object Diagram - Testbench Metric

The following are the details of the object diagram of the testbench
metric:

If the option property npiCovPerInstance of the covergroup is 1,
cover instances can be generated from the covergroup.
The property npiCovCount can be set for the cover bin and
affects the properties npiCovered and npiCovStatus of the
cover bin.
The cover bin mentioned in the last point must be generated from
the cover instance or from a covergroup whose property
npiCovPerInstance is equal to 0.

Power Metric
The power metric is placed under the power domain and power
measure if the hierarchical verification plan is specified. The power
metric includes coverage information from cover groups that are
distributed over power measures.

Figure 7-7 Object Diagram - Power Metric

The following are the details of the object diagram of the power
metric:

If the coverage database generated by the simulator contains the
low-power hierarchical verification plan, power domains are
generated from the test. Otherwise, the power metric is generated
from the test directly.
For VCS, the -power=coverage+dump_hvp option must be
applied in order to generate the low-power hierarchical verification
plan.

APIs for C Interface

The following are the APIs of NPI Coverage Model for the C
interface:

npi_cov_handle_by_name
Syntax:
npiCovHandle npi_cov_handle_by_name( char *name,
npiCovHandle scope );

Description:
Obtains the instance handle by its full hierarchical name and database.
Parameters:
- name: Specifies the full hierarchical name of the instance.
- scope: Specifies the database handle.
Return:
On success, the target handle is returned.
On failure, a null pointer is returned.
Example:
npiCovHandle db1 = npi_cov_open("simv1.vdb");
npiCovHandle db2 = npi_cov_open("simv2.vdb");

// get handle "top.a0" from database db1
npiCovHandle db1Hbn = npi_cov_handle_by_name("top.a0", db1);

// get handle "top.a0" from database db2
npiCovHandle db2Hbn = npi_cov_handle_by_name("top.a0", db2);

npi_cov_merge_test
Syntax:
npiCovHandle npi_cov_merge_test ( npiCovHandle dstTest,
npiCovHandle srcTest, const NPI_BYTE8 *mapfile = NULL );

Description:
Merges the code coverage from the source test into the destination
test using the mapping file.
Parameters:
- dstTest: Specifies the destination test handle.
- srcTest: Specifies the source test handle.
- mapfile: Specifies the hierarchy mapping file name. If
dstTest and srcTest belong to different databases, this
argument is required; otherwise, it can be NULL.
Return:
The merged test handle that is equal to the destination test handle.
Mapfile Format:
{MODULE: [module name]
INSTANCE:
{SRC: [instance list]
DST: [instance list]
}
}

Note: The instance list is a comma-separated list of full names; it
can also be the wildcard character *, which means that all instances
of the same module are merged.
Example:

APIs for Tcl Interface

The following are the APIs of NPI Coverage Model for the Tcl
interface:

npi_cov_handle_by_name
Syntax:
npi_cov_handle_by_name -name Name -scope Scope

Description:
Obtains the instance handle by its full hierarchical name and database.
Parameters:
- Name: Specifies the full hierarchical name of the instance.
- Scope: Specifies the database handle.
Return:
On success, the target handle is returned.

On failure, an empty string is returned.
Example:
set db1 [npi_cov_open -dir "simv1.vdb"]
set db2 [npi_cov_open -dir "simv2.vdb"]

# get handle "top.a0" from database db1
set db1Hbn [npi_cov_handle_by_name -name "top.a0" -scope $db1]

# get handle "top.a0" from database db2
set db2Hbn [npi_cov_handle_by_name -name "top.a0" -scope $db2]

npi_cov_merge_test
Syntax:
npi_cov_merge_test -dest DestTest -src SrcTest -mapfile
Mapfile

Description:
Merges the code coverage from the source test into the destination
test using the mapping file.
Parameters:
- DestTest: Specifies the destination test handle.
- SrcTest: Specifies the source test handle.
- Mapfile: Specifies the hierarchy mapping file name. If Dest
and Src belong to different databases, this argument is
required; otherwise, it can be "".

Return:
The merged test handle that is equal to the destination test handle.
Mapfile Format:
{MODULE: [module name]
INSTANCE:
{SRC: [instance list]
DST: [instance list]
}
}

Note: The instance list is a comma-separated list of full names; it
can also be the wildcard character *, which matches all instances
instantiated from the specified module.
Example:

VC Formal Coverage Analyzer Features

VC Formal Coverage Analyzer provides an integrated mechanism
for using robust formal techniques. It determines which structural
coverage metrics are attainable and automatically excludes metrics
that cannot be reached. It performs this process automatically by
integrating with the simulation engine, coverage reporting, and
debug solutions. It can be used to find coverage properties in a
design and automatically test them. It detects unreachable targets
in the design and provides this data in the form of exclusion files.
Regression runs can be pre-enabled with unreachability analysis,
so verification engineers are not required to wait for the later stages
of the verification cycle to understand unreachable targets in the
design.

For more information, see the Synopsys VC Formal Coverage
Analyzer User Guide on SolvNet.

Verification Compiler Platform offers the following VC Formal
Coverage Analyzer features with this release:

Saving Formal Covered/Uncoverable Coverage Goal Into a
Coverage Database
Saving Coverage Database and Exclusion File When Database
is Not Imported

Saving Formal Covered/Uncoverable Coverage Goal
Into a Coverage Database

Coverage is a form of metrics with the goal of providing a clear,
quantitative, objective measure for assessing the overall verification
process and the project's progress towards its completeness goal.

As part of the signoff review process, code coverage has to meet a
certain target. When that target is not met in the beginning phase of
coverage convergence, many project managers require that the
verification team present their final coverage results and assess the
risk of not hitting a particular coverage item. The verification team
wants to minimize the effort of reporting and justifying uncovered
coverage items during the review process, especially if an
uncovered item is really unreachable.

You can use VC Formal Coverage Analyzer to find coverage
properties in your design and automatically test them.

Use Model
The use model for this feature is explained in the following sections:

Automated Coverage Closure Flow
GUI Updates

Automated Coverage Closure Flow
In the Automated Coverage Closure Flow mode, VC Formal
Coverage Analyzer imports a simulation database and targets only
the uncovered coverage goals from the database. At the end of the
run, the tool dumps an exclusion file for the coverage goals that are
found uncoverable by the tool. The exclusion file can then be read
back with the simulation database using URG/DVE to create the
exhaustive coverage report.

Previously, to dump an exclusion file, you had to maintain an extra
exclusion file in addition to the existing coverage database.
Therefore, there is a requirement to save the formal covered/
uncoverable coverage targets directly into the coverage database
itself.

To overcome this shortcoming, VC Formal now offers a new
save_covdb command to save covered/uncoverable coverage
goals into the existing or a new coverage database:

save_covdb
[-new <covDbName>]
[-update_existing]
[-status <list-of-status-attributes>]
(property status selection; values: "", covered, uncoverable)
[-line_intent]
[-no_line_intent]
[cov <cov_type>[+<cov_type>]]
where,

-new: Creates a new coverage database by copying the original
(user-provided) coverage database and adding the additional
information about the new covered/uncoverable (by formal)
coverage goals into it.
-update_existing: Updates the existing coverage database
(instead of creating a new one) by adding the additional information
about the new covered/uncoverable (by formal) coverage goals into it.
Note:
-new and -update_existing are mutually exclusive; both
cannot be provided at the same time. It is mandatory to specify
one of them while invoking the command.

-line_intent: Saves only coverage goals that are
unintentionally covered or intentionally uncoverable.
-no_line_intent: Filters out coverage goals that are
unintentionally covered or intentionally uncoverable.
-status: Controls whether covered, uncoverable, or both types of
coverage goals are saved. If the option is not specified, both types
of goals are considered.
[cov <cov_type>[+<cov_type>]]: Specifies the coverage
metric for which an exclusion file should be written out. The
following are the valid coverage property types, separated by the
plus (+) delimiter:
- line
- cond
- toggle
- fsm_state
- fsm_transition

Note:
The all keyword is supported to dump an exclusion file for all
structural coverage property types. This argument is optional; if it
is not specified, the exclusion file is dumped for all coverage
metrics.

GUI Updates
The coverage flow can be enabled in VC Formal Coverage Analyzer
using the following commands from vc_static_shell (in the
interactive mode) or using the -f <command>.tcl option of
vc_static_shell:

Enable formal application mode:
set_app_var fml_mode_on true

Import a simulation database:
read_covdb -cov_input <dbPath> -dut <instancePath>

Build Command:
read_file -cov <metric_type> -single_step -top
<top_module_name> -vcs { <vcs_command_line> }

Clock/Reset Settings:
create_clock <clockName> -period <timePeriod>
create_reset <resetName> -high/-low
sim_run <no_of_clock_cycles_to_run>

Formal Run:
check_cov

Reporting Command:
report_cov

GUI-Based Debugging:
view_activity

Saving Coverage Database:
save_covdb -new <newCovDBName>

Saving Coverage Database and Exclusion File When
Database is Not Imported

Using VC Formal Coverage Analyzer, you can import a simulation
coverage database and perform the following tasks:

Save unreachables found by the tool in an exclusion file.
Save the covered/uncoverable status found by the tool either in
the user-specified simulation database or in a completely new
database, depending on your preference.

Earlier, these tasks could be performed only when a simulation
coverage database was imported in the tool. Now, VC Formal
Coverage Analyzer is enhanced to support the situation when no
coverage database is imported in the tool. To achieve this, the
save_covdb and save_cov_exclusion commands are updated.
From the usage point of view, there is no change in the save_covdb
and save_cov_exclusion commands; you can use them exactly as
you did prior to this release. However, these two commands
previously issued an error message when called in a scenario
where no coverage database was imported. This limitation is
removed with this release.

Use Model
The use model for this feature is explained in the following sections:

Saving Exclusion File
Saving Coverage Database

Saving Exclusion File

The save_cov_exclusion command generates a coverage
exclusion file for coverage goals that have been detected as
uncoverable by VC Formal.

When the command is applied successfully, 1 is returned;
otherwise, 0 is returned. The syntax of the command is as follows:

int save_cov_exclusion
[-file <elFileName>]
[-append]

where,

[-file <elFileName>]: Specifies the name of the exclusion
file in which the uncoverable information is dumped by the tool. If
just the file name is given, the file is saved in the current working
directory.
[-append]: Specifies whether the uncoverable information
needs to be appended to an already existing exclusion file.

[-targets <collection-of-coverage-goals |
list-of-goal-names>]: By default, this command saves
unreachables found by the tool in the exclusion file.
However, if -targets is used, the tool is guided to save an
exclusion file for the list of coverage goals specified by this switch.
Note:
The exclusion manager is absent when targets excluded by an
exclusion file are marked unreachable (hard-coded) in the
database.

Saving Coverage Database

The save_covdb command saves the covered/uncoverable status
in the user-imported simulation database or in a completely new
database.

The syntax of the command is as follows:

int save_covdb
[-new <covDbName>]
[-update_existing]
[-status <list-of-status-attributes>]
(property status selection; values: "", covered, uncoverable)
[-line_intent]
[-no_line_intent]
[<cov_type>[+<cov_type>]]

where,

-new: Creates a new coverage database by copying the original
(user-provided) coverage database and adding the additional
information about the new covered/uncoverable (by formal)
coverage goals into it.
-update_existing: Updates the existing coverage database
(instead of creating a new one) by adding the additional information
about the new covered/uncoverable (by formal) coverage goals into it.
Note:
The -new and -update_existing options are mutually
exclusive; both cannot be provided at the same time. Also, it is
mandatory to specify one of them while invoking the command.
If no coverage database is imported, this switch issues an error.

-line_intent: Saves only coverage goals that are
unintentionally covered or intentionally uncoverable.
-no_line_intent: Filters out coverage goals that are
unintentionally covered or intentionally uncoverable.
-status: Controls whether covered, uncoverable, or both types of
coverage goals are saved. If it is not specified, both types of goals
are considered.
[<cov_type>[+<cov_type>]]: Specifies the coverage
metric for which an exclusion file must be written out. The following
are the valid coverage property types, separated by the plus (+)
delimiter:
- line
- cond
- toggle
- fsm_state
- fsm_transition
The all keyword is supported to dump an exclusion file for all
structural coverage property types.

Note:
The argument is optional. If you do not specify it, the exclusion file
is dumped for all coverage metrics.
