( DAC'16 Item 1b ) ---------------------------------------------- [12/09/16]
Subject: Jasper's #1 in #1 "Best of 2016", through killer capacity & debug
WE NEEDED A BIGGER BOAT: Aside from Apps making Formal usable by the common
designer, the one remaining barrier to widespread Formal verification use by
the masses was that pesky state-space cycle-depth problem, which overwhelmed
both the convergence algorithms and CPU capacity of 20 years ago.
Well, according to the user comments in this survey (see below), the Jasper
R&D folks have apparently solved those pesky convergence/capacity problems.
Being an engineer, I asked CDNS for the hard data and specifics.
"We use 14 customer design testcases, max size 10.8M flops (101M gates)
down to 36K flops (344K gates), using the same set of properties.
Testcase 1 10,761,890 flops 101,265,523 gates
Testcase 2 9,028,718 flops 94,090,533 gates
Testcase 3 1,244,431 flops 26,713,108 gates
Testcase 4 856,865 flops 15,332,286 gates
Testcase 5 1,486,194 flops 12,897,297 gates
Testcase 6 1,603,545 flops 12,581,017 gates
Testcase 7 749,000 flops 11,000,000 gates
Testcase 8 336,691 flops 5,827,817 gates
Testcase 9 440,668 flops 3,239,592 gates
Testcase 10 81,000 flops 1,600,000 gates
Testcase 11 423,500 flops 916,589 gates
Testcase 12 77,066 flops 735,000 gates
Testcase 13 25,000 flops 474,000 gates
Testcase 14 35,956 flops 344,241 gates
Comparing the JasperGold 2016.06 release to 2015.06 release:
Time to elaborate the design and report its instances (get_design_info)
is 1.85X faster on average.
The # of determined properties within a maximum time limit of 5000 seconds:
1.4X more (an aggregate of 19,400 properties completing versus 13,880)."
Beyond extended capacity, one recurring theme in the user comments is how
much they like JasperGold's many different Formal apps (which I've broken
out as its own discussion in a later section of this report.)
"The beauty of JasperGold is that it can be used by dummies,
as well as experts."
Two other things to note in the user comments are how enthusiastically they
like JasperGold Visualize Debug, with its different ways to trace, analyze,
and show bugs -- plus how creatively they use JasperGold's 23 proof engines
(including something new called "TRIDENT").
ONE OOPS: one unstated advantage Jasper has overall is that it's a mature
tool. It's gone through its growing pains and has had its glitches worked out.
"Like a fine wine, formal tools easily take decades to mature.
Buyer beware if you hear a vendor claim that they have a
'new breakthrough' formal tool."
- Jim Hogan in ESNUG 558 #5 (DC 02/19/16)
Which is kind of ironic because the one place where the JasperGold users are
complaining is with Jasper's breakthrough new "Cover Analysis" types. (It's
confusing to the users because it works nothing like how Verilog/VHDL code
coverage is done in RTL simulation. It's hardcore Formal wizard stuff.)
In formal, it's always the new stuff that you gotta watch out for...
QUESTION ASKED:
Q: "What were the 3 or 4 most INTERESTING specific EDA tools
you've seen this year? WHY did they interest you?"
---- ---- ---- ---- ---- ---- ----
We've used JasperGold for 4-5 years now.
JasperGold has the best runtime and capacity of any formal tool that
I'm aware of, including Cadence IFV, Synopsys VC Formal, Mentor Questa
Formal. (We've continued to keep up with/evaluate the other tools,
even within the last 1-2 years.)
- JasperGold can work with a compute farm to spawn off lots of
jobs. We can launch 50-100 jobs, and it has a coordinated
process. We can also have an engine race to determine the
fastest engine for a job.
- Some of Jasper's engines can do multiple properties in parallel.
You can use proofs from previous results to help later proofs --
the downside is that if the jobs run in a different order, it can
sometimes take longer to reproduce the same counter-examples again
in JasperGold.
- Capacity-wise it's good. It can always do better of course, as
compared with an RTL simulator.
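The engine race described above can be sketched as a first-to-finish job
race. This is a conceptual sketch in Python, not Jasper's actual ProofGrid
code; the engine names, runtimes, and verdicts are all made up:

```python
import concurrent.futures as cf
import time

def run_engine(name, runtime_s, verdict):
    # Stand-in for one formal engine attempting one property.
    time.sleep(runtime_s)
    return name, verdict

def race(engines):
    """Launch all engines in parallel, take the first conclusive result,
    and cancel the rest (mirroring 'the other engines abort')."""
    with cf.ThreadPoolExecutor(max_workers=len(engines)) as pool:
        futures = [pool.submit(run_engine, *e) for e in engines]
        done, not_done = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        for f in not_done:
            f.cancel()                       # abandon the losing engines
        return next(iter(done)).result()

# Three hypothetical engines race on the same property; the fastest wins.
winner, verdict = race([("bdd", 0.30, "proven"),
                        ("sat", 0.05, "proven"),
                        ("itp", 0.20, "proven")])
```

The design point worth noting is that the race trades CPU for wall-clock
time: every engine burns a slot, but the job finishes as soon as any one
of them converges.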
JasperGold Visualize debug is best in class.
- Root-cause analysis. This is good -- you can click on something
and it tells you why. (There is literally a button labeled
'why'). Makes it easier to figure out what's going on.
- Design exploration. You can do "what-if" analysis, e.g. if you
make a change, what would happen? You change the waveform, and
JasperGold redraws it on the fly. You can also add extra
constraints, such as the signal must be high under these
conditions, etc.
- Relevant logic analysis. Jasper will give you a minimum set
of relevant waves. The waves that are not relevant to the property
being checked are shown in light gray -- unlike what you see with
a simulator -- although the newer simulation wave-viewer tools
are now trying to do this using big data.
- Quiet Trace. If some of your properties fail, you will get a
trace. But traces can be noisy. If you click on "Quiet Trace",
the Jasper engine irons/flattens it out, i.e. it reduces the
ripples in the waveform, so it's easier to figure out. (The
icon is actually an iron.) It removes irrelevant events, and
tries to give you the shortest waveform possible that exhibits
the fault.
- Minimal Trace. Generally speaking, if Jasper finds a counter-
example, it will give you a minimum trace, which is easier to
debug. Most Jasper engines will give you the minimum cycles
needed to demonstrate a counter-example, e.g. 10-20 cycles
instead of 60 cycles.
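A toy illustration of what Quiet Trace-style flattening amounts to -- my
own sketch of the idea, not Jasper's algorithm: try holding each signal
at a constant value, and keep the flattened version only if the property
still fails. The signal names and the failing property are invented:

```python
def quiet_trace(trace, fails):
    """Greedy trace quieting: for each signal, try holding it at its
    initial value; keep the flattening only if the failure survives."""
    quiet = {sig: list(vals) for sig, vals in trace.items()}
    for sig in trace:
        saved = quiet[sig]
        quiet[sig] = [saved[0]] * len(saved)   # iron out the ripples
        if not fails(quiet):
            quiet[sig] = saved                 # relevant signal: restore it
    return quiet

# Toy failing property: 'req is high while ack is low' at some cycle.
fails = lambda t: any(r == 1 and a == 0 for r, a in zip(t["req"], t["ack"]))
noisy = {"req":   [0, 1, 1, 0],
         "ack":   [0, 0, 0, 0],
         "noise": [0, 1, 0, 1]}    # ripples irrelevant to the failure
calm = quiet_trace(noisy, fails)   # 'noise' gets flattened; 'req' survives
```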
We use JasperGold's formal Cover Analysis types, but only a bit, as they
are somewhat difficult to understand, are different from simulation, and
there is no industry standard.
- Bounded proofs. Jasper lets you know you haven't gone deep
enough. This is not a full proof, i.e. it's not 'no bugs'.
There is a change in thinking for formal tools. You may be used
to thinking "bug-free", but it's often actually very difficult
because the state space is massive. For formal you can approach
it from a proof standpoint or bug hunting standpoint. Sometimes
bug hunting is good enough (and is the same approach as
constrained random simulation).
- ProofCore dynamic analysis. Requires hand-holding by the AEs,
as the output is confusing -- it's difficult to understand what
it means. It can help you decide whether you have enough
properties for your design.
- Mutation analysis. I've done this in the past; however,
JasperGold doesn't do it. It would require a lot of capacity.
- Constraint analysis. This can be useful to avoid
over-constraining the design. This was slow in the past, but
Cadence has improved it. It finds over-constraints you didn't
know about; there are still some limitations, as it doesn't know
what you intended to achieve.
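The bounded-proof idea in the first bullet can be made concrete with a
tiny explicit-state search -- my own sketch of the concept, not Jasper's
engine: the run either finds a counterexample trace, or only certifies
"no bug up to depth k", which is not the same thing as "no bugs":

```python
def bmc(init, step, bad, depth):
    """Explicit-state bounded check: search for a state satisfying `bad`
    within `depth` steps of `init`. Returns ('cex', trace) on a
    counterexample, else ('bounded-pass', depth)."""
    frontier = [(s, [s]) for s in init]
    seen = set(init)
    for _ in range(depth + 1):
        nxt = []
        for state, trace in frontier:
            if bad(state):
                return "cex", trace          # a real bug, with its trace
            for s2 in step(state):
                if s2 not in seen:
                    seen.add(s2)
                    nxt.append((s2, trace + [s2]))
        frontier = nxt
    # No bug within `depth` steps -- a *bounded* proof, not 'no bugs'.
    return "bounded-pass", depth

# Toy design: a mod-16 counter whose 'bad' state (9) sits at depth 9.
step = lambda s: [(s + 1) % 16]
shallow = bmc([0], step, lambda s: s == 9, depth=5)    # not deep enough
deep    = bmc([0], step, lambda s: s == 9, depth=12)   # finds the bug
```

This is exactly the "change in thinking" the comment describes: a
bounded pass is still useful evidence for bug hunting, even though it
proves nothing beyond the explored depth.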
I am very positive on JasperGold - both in terms of how we use it and
Cadence's support responsiveness.
---- ---- ---- ---- ---- ---- ----
We know what the JasperGold engines can do in an overnight run --
usually this is the longest we want to run. We rely on our methodology
to get us to our target proof depth, rather than by brute force (letting
the engine run for days).
The largest design blocks we tackle with it are ~40-50k gates. Instead
of attempting larger blocks, we partition them once they get very
large.
We use Jasper's proof grid parallel engine function all the time.
- We throw as many CPU's as we have allocated to our group to
solve the problem.
- If we can't reach our target proof depth after an overnight run,
we review and revise our approach.
JasperGold Visualize is an incredible debug tool. We use it for
debugging, finding root causes, and exploring. It has a number of
features that make the tasks easier, e.g. what-if analysis, relevant
logic analysis, and quiet trace.
We do bug hunting as we try to reach our target proof depth. (Bug
hunting is as important as reaching proof depth.) Jasper's B engine
lets us jump to specific clocks beyond our target proof depth.
We are not heavy users of Jasper's Coverage Analysis yet. We do pay
attention to the unreachable properties to check for over-constraints.
We use our own calculations for proof depth, and simulation as another
check for over constraints.
JasperGold has built-in function calls in the tool to traverse the
connectivity. We do reverse connectivity on our chip, and essentially
use Jasper to give us access to the connectivity data structure of the
design, and to generate a table of connectivity for review. (They have
a formal connectivity app, but we don't use it.)
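The reverse-connectivity table described above is, in essence, an
inversion of the driver-to-load map. A minimal sketch of the idea --
the pin names and data layout are made up, and this is not Jasper's
built-in API:

```python
# Hypothetical forward connectivity: driver pin -> list of load pins.
forward = {
    "u_cpu.irq_o":  ["u_intc.irq_i[0]"],
    "u_dma.irq_o":  ["u_intc.irq_i[1]"],
    "u_intc.req_o": ["u_cpu.int_i"],
}

def reverse_connectivity(fwd):
    """Invert a driver->loads map into a load->drivers map for review."""
    rev = {}
    for driver, loads in fwd.items():
        for load in loads:
            rev.setdefault(load, []).append(driver)
    return rev

rev = reverse_connectivity(forward)
# Table of connectivity for review: one 'load <- driver(s)' row each.
table = [f"{load} <- {', '.join(drv)}" for load, drv in sorted(rev.items())]
```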
I highly recommend Jasper. We find bugs with it, so we know we are
improving the quality of the design.
---- ---- ---- ---- ---- ---- ----
I first started using JasperGold formal to see if I could identify
corner case bugs that were being missed by standard vector sets for a
core that I had designed. Prior to that, I had looked at JasperGold 3
to 4 years ago; it was challenging only because I come from a
traditional design and verification background.
Most of my learning at the time was through attending AE-hosted
workshops, where the formal app was run either on cores that were not
too familiar to me or on simple blocks such as FIFOs that were not too
useful for me. However, using formal on one of my designs, and with
the Jasper AE's help, I was able to come up to speed pretty quickly.
I found JasperGold to be pretty powerful for a number of scenarios:
- I was using formal for an FSM-based packet processor where
packets can be as large as 64K, requiring 64K cycles of
permutations for the formal engine to weed through. Obviously
this is impossible for a formal engine to tackle, but I was able
to use smaller packet sizes to generate counter-example traces,
and with some clever use of the formal engines, we had
great confidence that the design would work for larger packet
sizes.
- Our design also has a large memory for which we used abstraction
techniques for proof.
- My initial runs were with the default engines but as we started
tackling larger packet sizes, we used different formal engines.
Victor (the AE) was a great help in identifying the optimal
engines for our design.
---- ---- ---- ---- ---- ---- ----
My company has been using Cadence JasperGold for several years now
(since well before Jasper was acquired by Cadence).
The areas where JasperGold wins: excellent GUI, complete TCL interface,
proof engines, engine orchestration, and cluster support.
The beauty of JasperGold is that it can be used by dummies, as well
as experts.
We like "Visualize"; it does a really good job of debugging failing
assertions.
It's usually used by our designers. Our Formal verification experts
have access to the many configuration parameters, and can hand select
their preferred proof engines. This is a differentiation factor against
other tools, where the EDA vendors are usually afraid to let users
configure their tools.
Cadence releases new versions on a frequent basis (quarterly for main
versions, about monthly for minor revisions). This is good to quickly
get bug fixes, and the new features we requested.
Cadence support on JasperGold is also usually top-notch.
---- ---- ---- ---- ---- ---- ----
Our team's been using JasperGold formal heavily for about 2 years.
We use it to verify our interconnect fabric application. We build
formal test benches for our entire environment, both at the top level
and at the unit/block level. Then engineers work through counter
examples in Jasper.
- JasperGold has made big strides in capacity. You can take a big
design and get proofs or counterexamples for its properties --
showing which are true, false, or indeterminate -- in a
reasonable amount of time.
For us, this means an overnight run.
- Overall, we are happy with debug using JasperGold Visualize; it
gave us good traces.
- We went 80 cycles deep for bug hunting -- that's a deep trace.
To give you an idea as to how difficult that problem was: our
design team wanted to see a simulation, too. Once we had the
trace from Jasper, it took 2 weeks of handcrafting the Verilog
RTL simulation to trace it.
- We did some specialized bug hunting post-silicon, and found bugs
with JasperGold that we didn't find in RTL simulation.
- Running multiple proof engines in parallel is potentially very
powerful -- however, it's not obvious how best to exploit it.
You need in-depth knowledge of which engines are best for
proving different things, and which are best for bug hunting.
Only the expert Jasper AEs know this.
Our recent focus was to see whether we could write our own intelligent
proof kit for our unique internal protocols; and how much effort it
would take us. We were successful. We figured out a good, regimented
methodology to structure and layer the code. The initial effort was a
lot, but we expect to leverage it over time. If we can carry our
"formal VIP" forward in time with our design evolution, we will get
some time savings.
We've been happy. Cadence JasperGold has good apps, good support and
it's pretty productive.
---- ---- ---- ---- ---- ---- ----
We visited Cadence at DAC, and are JasperGold users.
You have to have the right expectations of the performance of formal
tools. A hierarchical approach is typically used. Formal is run on
blocks one-level down from where simulation is run. Formal can fully
cover most or all of the features at the sub-block level. Simulation
then covers the integration of the blocks at the sub-system level.
Emulation covers the full system level with S/W integration. Blending
verification tools to exploit their strengths in this way gives the best
results in terms of verification efficiency and design quality.
Jasper's debugger is highly optimized for the FPV app. Engineers can
quickly home in on the root cause of failures. However, the GUI for
some of the other apps isn't quite as slick. For example, performance
and usability are compromised with the automatic formal lint (AFL) app
when tens of thousands of properties are automatically generated by the
tool.
The ability to manually craft a bug hunting flow has been in Jasper for
some time and it works well. (For example, using a cover point state as
a launch point for deeper formal analysis vs. starting from reset as
our analysis launch point. But Jasper R&D recently added that as an
automatic bug hunting feature, so the jury is still out on that one.)
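The cover-point launch idea in the paragraph above can be sketched as a
two-stage search -- a conceptual sketch, not Jasper's flow: first reach
a cover state, then bug-hunt from there instead of from reset. The
state space and the bug location are invented for illustration:

```python
def reach(init, step, goal, depth):
    """Breadth-first search for a state satisfying `goal` within
    `depth` steps of `init`; returns the state found, or None."""
    frontier, seen = list(init), set(init)
    for _ in range(depth + 1):
        for s in frontier:
            if goal(s):
                return s
        nxt = []
        for s in frontier:
            for n in step(s):
                if n not in seen:
                    seen.add(n)
                    nxt.append(n)
        frontier = nxt
    return None

# Toy state space: a counter mod 100; the bug lives at state 60.
step = lambda s: [(s + 1) % 100]
bug  = lambda s: s == 60

from_reset = reach([0], step, bug, 30)             # too shallow from reset
cover      = reach([0], step, lambda s: s == 40, 45)  # reach a cover state
from_cover = reach([cover], step, bug, 30)         # bug found from there
```

The payoff is the same as in the flow described above: the cover state
buys you depth that a reset-launched bounded run could not afford.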
---- ---- ---- ---- ---- ---- ----
We use JasperGold primarily for unit level functionality for CPUs,
mostly control. (Model-checking is less ideal for data path.) Formal
cleans up shallower bugs/corner cases which may be hard to reach with
simulation.
Jasper's performance keeps improving over time. We've used it on block
sizes as large as 1M to 5M gates, though we do not converge on all the
proofs.
ProofGrid orchestrates Jasper's various engines; it runs a bunch of jobs
and schedules property-sets on them. It's a race -- if one engine wins,
then the other engines abort. This works pretty well; Cadence has made
improvements, and it scales reasonably well.
Visualize Debug is one of Jasper's strong points. It has quite an
intuitive UI that lets you trace the root cause of a failure from the
counter-example trace. It can also minimize the number of toggling
signals (i.e. quiet the trace). It's good for design exploration,
e.g. you can specify an arbitrary condition.
---- ---- ---- ---- ---- ---- ----
JasperGold has exceptional performance and capacity limits compared to
other tools we have used -- specifically, Cadence IEV and Synopsys
Magellan.
- JasperGold's runtimes are better.
- JasperGold worked well for us with blocks/sub-systems
larger than 1 million gates.
We have been able to get conclusive proofs on assertions with and
without abstraction. The Jasper engine documentation is good
and the ProofGrid window is perfect for visualizing the progress
and effectiveness of each engine on any given property. Using
ProofGrid we were able to tailor the Jasper engine configurations
for different proofs to boost our overall regression throughput.
Also Jasper has a unique algorithm called 'TRIDENT' which breaks the
logic into different pieces and automatically uses the most suitable
available engine for each required piece. We have seen some good
convergence using this algorithm.
Jasper's debug is similar to other formal tools. The biggest issue is
that our engineers must learn ANOTHER debug tool, which is frustrating.
The
What-if and Why analysis in Jasper are interesting but we are still
learning how to better use them.
One unique aspect of JasperGold is that it can report code coverage
based on just the assertions being proven. IEV and VC Formal can't
do this.
Jasper Coverage Pros:
- A very good metric for the verification being performed using
assertions
- Support for three different types of coverage: stimuli,
structural, and proof coverage, each with its own advantages
Jasper Coverage Cons:
- Only supports block coverage at the moment
- Formal coverage cannot be merged with simulation coverage
- It's not code coverage the way most engineers think of code
coverage. It covers assertions -- not lines of Verilog/VHDL
design source code.
We see Jasper as best in class and highly recommend it, whether you
are new to formal verification or struggling with your existing tools.
---- ---- ---- ---- ---- ---- ----
We use Cadence JasperGold. We find it useful for clock domain crossing
analysis on non-trivial cases and for post-silicon debugging, i.e.
trying to reproduce some misbehavior found in the field.
Kudos to JasperGold's debugging:
- We rank its what-if analysis, root-cause analysis, and
quiet tracing as productivity boosting features.
- The Tcl API is consistent.
- On-the-fly creation of assertions and constraints with
a SVA-like syntax makes using Jasper easier.
- Aids for debugging constraint conflicts are helpful.
We recommend it.
---- ---- ---- ---- ---- ---- ----
We are using Cadence JasperGold for the following reasons:
1) Easy-to-use intuitive Debug UI
Its GUI is significantly helpful, especially the what-if
and root cause analysis features.
2) "Trident" formal engine
Jasper inherits Cadence IEV's powerful engine "Trident". It
orchestrates multiple engines on multiple cores and hence it
can prove highly complex properties very quickly.
3) Sequential Equivalence Checking
Its formal engines are optimized for SEC, thus it is much
faster and much easier to use than VC Formal or IEV.
JasperGold also has some weaknesses:
1) Jasper's price is high
Currently, each application requires an individual license.
Although FPV (the Formal Property Verification app) itself is
not so expensive, once some other apps are added, the end
price is much higher than other Formal competitors'.
2) Jasper's Coverage Metrics are confusing
The Jasper Coverage app shows us the logic core (what can be
touched by the formal engines) and the proof core (what is
actually touched), but these are different from Verilog/VHDL
simulation's coverage metrics (line/condition/branch). They
are hard to unify.
In contrast, Synopsys proposes unified sign-off metrics using their
fault-injection tool, Certitude. If an injected fault changes the
result of a simulation or formal test, that cover point is regarded as
covered. Therefore, if every injected fault changes some simulation or
formal test result, those tests appear to cover all the logic.
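The Certitude-style metric described above is essentially mutation
scoring. A conceptual sketch -- Certitude's actual mechanics differ,
and the toy design and faults here are invented: a fault counts as
covered when some test distinguishes the faulty design from the
original.

```python
def mutation_score(original, mutants, tests):
    """Fraction of injected faults (mutants) that some test detects.
    A mutant no test kills suggests a verification hole."""
    killed = 0
    for mutant in mutants:
        if any(test(mutant) != test(original) for test in tests):
            killed += 1
    return killed / len(mutants)

# Toy design-under-test: a saturating 8-bit adder, plus two faults.
good   = lambda a, b: min(a + b, 255)
fault1 = lambda a, b: a + b               # saturation dropped
fault2 = lambda a, b: min(a + b, 127)     # wrong saturation limit

# Each test exercises the design with one stimulus and records its output.
tests = [lambda f: f(1, 2), lambda f: f(200, 100)]
score = mutation_score(good, [fault1, fault2], tests)
```

Here the second test (an overflowing sum) kills both faults, so the
score is 1.0; dropping that test would leave both faults alive and
expose the coverage hole.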
3) Jasper's Automatic Formal Linting is weak
Beginners tend to use auto check; however, Jasper's auto check
has fewer features than competitors'. (Mentor's Questa AutoCheck
is the best: it covers a much wider range of checks, such as
combinatorial loops and liveness. ** -- We evaluated the Jasper
Automatic Formal Linting app a few years ago, so I cannot
confirm its current status.)
4) Jasper's GUI doesn't match CDNS Indago
Jasper's debug UI is its own, not the Cadence Indago Debug
Analysis platform. Even though Jasper's GUI is easy to use,
engineers have to learn two different tools. In comparison,
Synopsys' VC Formal shares the Verdi-based platform.
In conclusion, Jasper is an excellent independent formal verification
tool. However, it is not fully integrated into the Cadence Incisive
Verification Platform. Also, it is not suitable for casual users, who
only use simple tools like Mentor Questa AutoCheck.
---- ---- ---- ---- ---- ---- ----
Related Articles
How engineers feel about Formal verification vs. RTL simulation
Jasper's #1 in #1 "Best of 2016", through killer capacity & debug
OneSpin's #2 in #1 "Best of 2016", the "we try harder" company
And how JasperGold compares to OneSpin in the Formal Apps biz