( ESNUG 485 Item 3 ) -------------------------------------------- [06/08/10]
Subject: (ESNUG 484 #1) Yet another production user of NextOp BugScope
> What we wrote in 5 lines for a particular design feature, NextOp wrote in
> 1 line. NextOp has a comprehensive way to look at all signals affecting
> an assertion, such that it can look at the root cause and produce more
> efficient code. Because NextOp's assertions have fewer lines, we don't
> need to spend 15 to 20 minutes to understand an assertion with more
> complicated code. Plus we used NextOp to check modules that were well
> verified with a comprehensive Specman "e" testbench and it still found
> RTL issues based on coverage holes we were missing.
>
> - [ The Swedish Chef ]
From: [ The Kool Aid Man ]
Hi, John,
Please keep me anonymous.
We evaluated NextOp's BugScope assertion synthesis about a year ago and
started using it for production in September 2009. Our verification team
now runs BugScope as part of our verification sign-off.
BEFORE BUGSCOPE:
We were already heavy users of assertion-based verification, and we know
how to create assertions.
We generate our stimulus based on a constrained random environment running
on Cadence NC-Verilog at both the system and block level.
We use system level checkers to cover our basic chip functionality, then use
block level checkers and assertions to specify detailed features. For some
blocks, we only use assertions to check correctness. We find approximately
10 percent of our bugs using assertions. Some of these are really good
corner-case bugs that were not exposed by the system-level checkers.
WITH BUGSCOPE:
BugScope's automatic assertion synthesis is fairly straightforward:
1. Run simulation regression with NextOp PLI. For each simulation run,
the PLI call creates a database (NDB file) that captures the dynamic
behaviors of the RTL module of interest.
2. Run BugScope with the NDB files and our RTL. BugScope merges the
   databases and generates synthesized properties in Verilog format.
   NextOp properties are expressed in mostly Verilog syntax with two new
   operators:
   |-> for implication and @ for next state.
3. Review BugScope's generated properties and classify them into assertions
and coverage properties.
For example, coverage holes BugScope exposed:
- Flush only tested for IDLE states: cover (flush && st_next != IDLE)
- Protocol corner cases such as: cover (grant && !data_taken)
And assertions BugScope generated, and what they checked:
- Proper arbitration such as: onehot0(gnt);
- Design assumptions such as: alloc |-> dat_avail;
- Resource conflict constraints: res_avail[0] || res_avail[1]
- FIFO/Counter overflow underflow conditions.
4. BugScope generates the properties in SVA or PSL format for input back
to NC-Verilog.
5. Repeat the sequence if tests are added or the RTL is modified, reviewing
   only the new properties generated (already-classified properties are
   kept separate).
RESULTS
BugScope Round 1:
Lines of RTL source code: 3,000
# of constrained random regression tests: 354
BugScope runtime: 5 hours
BugScope output: 159 properties.
We first ran BugScope when our verification was only about 1/3 complete and
a lot of our configurations had not been tested. BugScope was still able to
generate a large number of high quality assertions and coverage points for
the tested portion of the logic.
BugScope also generated an additional 167 constant properties (showing that
a signal did not toggle), flagging the configurations that had not been
tested.
It took our designer 1.5 hours to review the 159 NextOp properties and
classify them.
    Assertions:                                            108
      - whitebox assertions we had already hand-produced:   11
      - whitebox assertions not produced by hand:           97
    Coverage properties:                                     40
    Redundant properties (logically implied
      by the RTL in adjacent modules):                       11
    Exposed unexpected logic:                                 1

    Total BugScope properties classified as high quality:   140 (88%)
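The headline numbers above can be rechecked with trivial arithmetic (a
Python sketch; the variable names are ours, not BugScope's):

```python
# Quick check of the Round 1 classification counts quoted above.
hand_matched = 11     # whitebox assertions we had already hand-produced
new_whitebox = 97     # whitebox assertions not produced by hand
assertions = hand_matched + new_whitebox   # 108 assertions total
coverage = 40
redundant = 11
total = assertions + coverage + redundant  # 159 properties reviewed
high_quality = 140
pct = round(high_quality / total * 100)    # 88 (%)
```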
The NextOp PLI caused a 13% slowdown in simulation when generating NDB files.
BugScope Round 2:
We reran BugScope after adding more constrained random regression tests.
Constrained random regression tests added: 122
BugScope runtime: 7 hours
BugScope output: 6 additional properties.
The small change in the number of new properties BugScope generated meant
our random verification suite was stabilizing. The fact that we still had
significant functional coverage holes meant we still needed to adjust our
constrained random stimulus generation.
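The "stabilizing" signal described above amounts to a simple stopping rule:
rerun BugScope after each batch of new tests and watch the count of newly
generated properties. A sketch, with an illustrative threshold:

```python
def suite_stabilizing(new_props_per_round, threshold=10):
    """A random suite looks stable once the latest BugScope round
    generates fewer than `threshold` new properties."""
    return bool(new_props_per_round) and new_props_per_round[-1] < threshold

# Round 1 produced 159 properties; Round 2 added only 6 new ones.
history = [159, 6]
stable = suite_stabilizing(history)  # True: stimulus is stabilizing
```

Note that the rule says nothing about coverage: as above, few new
properties plus open functional-coverage holes means the stimulus itself
still needs adjusting.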
BUGSCOPE-GENERATED VS. MANUALLY-CODED ASSERTIONS TIME COMPARISON
NextOp assertions: Looking at Round 1, it took 6.5 hours (BugScope runtime
plus classification time) to get 140 high-quality properties, or roughly
~1 minute of engineering time plus ~2 minutes of CPU time per property.
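Those per-property figures follow directly from the Round 1 numbers (plain
Python arithmetic, nothing BugScope-specific):

```python
bugscope_runtime_h = 5.0   # Round 1 BugScope runtime
review_h = 1.5             # designer classification time
high_quality = 140         # properties classified as high quality

total_h = bugscope_runtime_h + review_h            # 6.5 hours end to end
cpu_min = bugscope_runtime_h * 60 / high_quality   # ~2.1 CPU min/property
eng_min = review_h * 60 / high_quality             # ~0.6 eng min/property
```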
Hand-coded assertions: It takes us 50 man-hours to write and debug the
assertions we develop by hand.
So BugScope was faster than manual assertion writing and extended our
assertion coverage. It is a good complement to the assertions we create
by hand.
Additionally, at first we were skeptical that NextOp could generate
meaningful assertions with just the design and the testbench, but the
results showed us that not only did BugScope catch 92% of the whitebox
assertions that we wrote by hand, but it also came up with 10X as many
assertions (108 vs. 11) as we did by hand. Apparently, NextOp is able to
find good clues in the simulation data to identify high quality properties.
For control-intensive blocks, we try to enforce an assertion/RTL ratio of
1 assertion for every 20-30 lines of RTL code. So for the 3000 lines
of RTL in our design above, this would be 100-150 assertions. At 30 min
per assertion this would take us about 60 hours, which clearly does not
scale to the full chip level, especially for our designers who are
relatively new to working with assertion-based verification. (It's easy
for them to write assertions that essentially duplicate the RTL, but not
only do those assertions not find bugs, they also slow down our sims.)
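The effort estimate above is straightforward to reproduce (again plain
Python arithmetic; the midpoint is our reading of "about 60 hours"):

```python
rtl_lines = 3000
lines_per_assertion = (20, 30)   # target density: 1 assertion per 20-30 lines
min_per_assertion = 30           # hand-coding time per assertion

n_max = rtl_lines // lines_per_assertion[0]   # 150 assertions
n_min = rtl_lines // lines_per_assertion[1]   # 100 assertions
n_mid = (n_min + n_max) // 2                  # 125 assertions
hours = n_mid * min_per_assertion / 60        # 62.5, i.e. "about 60 hours"
```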
BUGSCOPE IS NOT PERFECT:
- It does not have a pre-packaged GUI, which we think would help in
  classifying assertions. So we developed a basic one of our own.
- It generates properties within one module at a time, so it's linearly
scalable across the full chip. It would be nice to be able to generate
assertions across multiple modules.
- It focuses on whitebox properties. We still have to ask our engineers
to manually develop blackbox assertions and coverage properties based
on our functional specification.
- We'd like to somehow use BugScope with emulation, but since we can't port
our testbench (especially our non-synthesizable checker) to emulation,
we're not sure if it's doable.
Regardless, BugScope is already part of our simulation-based sign-off flow.
- [ The Kool Aid Man ]