Editor's Note: Well, according to most measures, my landlord's "3rd Annual
Mini-Woodstock Revivial" was a resounding success. The local newspapers
this week reported twelve kids were arrested for under-age possession of
alchohol; I've had three friends tell me a story (I didn't see this one
myself) about how another guy was dragged out of the party in handcuffs by
the cops because he was stupid enough to roll a joint right in front of
them; and one of the morning radio shows on WFNX 101.7 even discussed our
party in detail. I guess we've arrived, party-wise! Now, the only problem
is that it appears the local authorities may not allow a "4th Annual
Mini-Woodstock Revival" next year.... :^(
On a more personal note, my whirlwind de-Bachelorization_Clean-Up_Of_My_
Living_Space_Because_My_New_Girlfriend_Is_Coming_Over_For_The_First_Time
went splendidly. I'd like to thank Karen Tyrrell for her last-minute tips:
From: Karen@vitalcompr.com (Karen Tyrrell)
Subject: Clean Up Thoughts - Do Read This, John!
Very important - a clean bathroom w/toilet seat down.
Also important:
(optional) - Fresh flowers on the table
(not optional) - Napkins
- and telling her she looks... beautiful,
or that dress looks wonderful on you
(Or some such gushes)
Good luck!
K a r e n T y r r e l l
VitalCom Public Relations
And I got a bit of a laugh from a Tim Davis "tip", too!
From: Tim Davis <timdavis@tdcon.com>
John,
I understand that Cadence Spectrum Services has a special branch to
handle the "cleanup" problems you are experiencing. They will come
in and tell you exactly what needs to be cleaned up, draw up a plan,
statement of work, etc. All that for only $300/hr. (You of course
have to do the actual cleanup work yourself. Their simulation of
the cleanup process guarantees that your girlfriend will be
completely happy with the result, however.)
Tim Davis
Timothy Davis Consulting Broomfield, CO
+-------------------------------------------------------------------+
: Engineering is like having an 8 a.m. class and a late afternoon :
: lab every day for the rest of your life. :
+-------------------------------------------------------------------+
( ESNUG 297 Item 1 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 295 #12 296 #1) VERA vs. Specman: VERA Is A Subset Of Specman
> I am open to comparing specific features of SpecMAN and VERA if there's a
> SpecMAN expert out there who is willing to have a friendly comparison.
>
> [ Long, Detailed Technical Description Of VERA Snipped ]
>
> I just like VERA and hope it will continue to grow. ...
>
> - Rudolf Usselmann, Consultant
> Logic One, Inc.
From: cbotta@taux01.nsc.com (Boaz Tabachnik)
Hello, John,
I've been using Specman for almost four years (since the early versions).
Like this user, I haven't used the other tool (VERA in my case), but I feel
'Specman expert' enough to take up the comparison you proposed. However, I
think the comparison can't really be made, since VERA is only a _subset_
of Specman.
Let's assume you're generating tests for a general-purpose CPU. Let the
address of the next CPU instruction be a large (but simple to code)
function of the current CPU instruction and the machine's state (register
values, etc.). Now let's define the full range of 'legal' instructions --
those that will not jump to an 'illegal' address space (e.g. if you choose
an indirect jump, then only certain pairs of registers are legal, according
to their current values; and let's not allow long instructions, say because
we are near the end of a block...).
This is usually hard to program, and it drives automatic test generators to
be less 'aggressive' than they could be.
Using Specman you just need to supply the function that computes the next
instruction's address, then give a constraint like:
"keep next_address in range [ 0x0..0x7fff, 0x10000..0x10100 ];"
and Specman will do the rest. Besides the ability to code test environments
(high-level, OO, string manipulation, regexps, debugger, etc.), which both
VERA & Specman supply, Specman is able to GENERATE tests using its built-in
constraint solver. This is the core of the tool, and the reason we use it.
(Theoretically you could use Specman to just generate the tests, then run
the tests using VERA -- or any other tool -- but, there's probably no
reason to do it that way.)
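To make that concrete, here's a rough Python sketch of the generate-and-
filter job you'd otherwise hand-code (the instruction fields and the
next_address() function here are invented for illustration; Specman's
solver works on the constraints directly, without this rejection loop):

  import random

  # Hypothetical stand-in for the user-supplied function that computes
  # the next instruction's address from the instruction + machine state.
  def next_address(instr, regs):
      if instr["op"] == "jmp_indirect":
          return regs[instr["rs1"]] + regs[instr["rs2"]]
      return instr["pc"] + instr["length"]

  LEGAL_RANGES = [(0x0000, 0x7fff), (0x10000, 0x10100)]  # the "keep"

  def is_legal(addr):
      return any(lo <= addr <= hi for lo, hi in LEGAL_RANGES)

  def gen_instruction(regs, pc):
      # Naive generate-and-test; a constraint solver needs no such loop.
      while True:
          instr = {"op": random.choice(["add", "jmp_indirect"]),
                   "rs1": random.randrange(8), "rs2": random.randrange(8),
                   "pc": pc, "length": random.choice([2, 4])}
          if is_legal(next_address(instr, regs)):
              return instr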
- Boaz Tabachnik
National Semiconductor Tel-Aviv, Israel
[ Editor's Note: The July 27 "EE Times" (pg. 1) reports that Synopsys has
acquired System Science (w/ VERA) and the CEO of Verisity (which has
SpecMan) claims Synopsys tried to acquire Verisity but failed. - John ]
( ESNUG 297 Item 2 ) ---------------------------------------------- [7/98]
From: [ "Stanley Tweedle, 4th Level Security Guard" ]
Subject: WATCH OUT! -- Synopsys Test Compiler Gives BAD Coverage Numbers
Hi John,
Please keep me anonymous.
I generated scan vectors using Test Compiler, and it reported a 94% coverage
number. I thought to myself: what better pattern to collect characterization
data on the pins than a pattern that supposedly should drive or compare on
just about every pin? What I did was run the scan pattern generated by Test
Compiler on a tester and vary the input levels on the inputs and inouts to
see what the Vih and Vil numbers were for the pins. Some of the pins were
reported with no minimum or maximum Vih or Vil, which means these pins do not
affect the compare results in the pattern. I also ran the pattern and looked
at the output levels to find Voh and Vol.
Some of the pins were reported as having no minimum or maximum Voh or Vol.
This means that these pins are not compared in the pattern. The pins that
were reported as not affecting the pattern, or not being compared in it,
were always inouts. Some of the uncovered inouts were part of a bus where
6 out of 8 pins were covered.
This wouldn't have been very interesting except I decided to look in the
Test Compiler fault list to see if these pins that the tester said were
uncovered were reported as tested or untested.
To my surprise most of the missing pins were reported as tested by the
scan pattern!
I thought there might have been a problem in the vector conversion
process, so I looked at the TSSI-ASCII, TSSI-WGL, and Verilog files output
by Test Compiler, and there was no place in these patterns where the pins
reported by Test Compiler as being tested were actually tested.
I decided to run an experiment to see what the coverage number was if the
missing pins were treated as unconnected, since this is how the test vectors
treated them. The test coverage dropped to 87%.
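If you want to repeat my arithmetic, here's a toy Python sketch of the
recomputation (the fault-list format below is purely invented for
illustration -- a real Test Compiler fault list looks nothing like this):

  # Toy fault list: (pin_name, reported_as_detected_by_the_pattern).
  faults = [("io_bus[0]", True), ("io_bus[1]", True),
            ("io_bus[6]", True), ("io_bus[7]", True),
            ("reset_n",   False)]

  def coverage(faults, dead_pins=()):
      # Treat faults on pins the vectors never drive/compare as undetected.
      hit = sum(1 for pin, det in faults if det and pin not in dead_pins)
      return 100.0 * hit / len(faults)

  print(coverage(faults))                               # as reported
  print(coverage(faults, {"io_bus[6]", "io_bus[7]"}))   # inouts masked off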
This situation is rather alarming to me because the test coverage number
reported by Test Compiler is assumed by me (and probably others) to be
accurate, but my experiment showed that this is not the case.
I have read the Synopsys Test Compiler documentation and looked at SolvIt
notes but I couldn't find any mention of the problem I am having.
Has anybody else ever seen this type of thing? Has anybody else even tried
this simple experiment to see if they get similar results?
Please keep me anonymous.
- [ "Stanley Tweedle, 4th Level Security Guard" ]
( ESNUG 297 Item 3 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 295 #10) Boba Fett's View Of The Synopsys Hotline
> But in that same ESNUG 292, an anonymous Synopsys contractor (using the
> pseudonym "Boba Fett of the Evil Empire"), cyncically told the rest of the
> story with: "You guys can't imagine what a chore it is, fighting my way in
> to work at Synopsys every day through the ravening hoards of experienced
> design engineers who are vying for the very few prized openings in Synopsys
> Tech. Support, just desperate to get that choice job listening to whining,
> insulting and obnoxious engineers like the above calling them on the
> telephone all day, each caller taking the attitude that their project is
> the only one in the world that's late, that they are Synopsys' number one
> priority, and that Synopsys should send them a patch for their particular
> problem yesterday, even if it can't be reproduced. How many experienced
> design engineers do YOU know who would be willing to take on a helpline
> job, full-time? And how much would we have to pay you?"
From: Marty.Hood@smc.com (Marty Hood)
John,
I read Boba Fett's post with interest. It struck me that only an anonymous
employee of Synopsys was willing to state the painfully obvious truth:
Synopsys is unwilling to compensate Hotline employees at an appropriate
level.
I let the initial whining, then the "official" response, and finally the
"unofficial" response pass without comment because I've always felt that
"corporate America" doesn't give a rat's behind about customer service.
Most companies have decided their customers are not willing to pay for
top-notch customer service. Complaining publicly about customer service
from companies like Synopsys usually provokes lip-service like that from
Vito.
Now that I've laid bare my prejudices, here's my response. If Synopsys (or
Cadence or whoever) wants a reputation for fantastic customer service, it
should staff its front-line customer organization with experienced designers
who have excellent communication skills, then give them incentives to clear
up customer problems in a timely fashion (say, 72 hours). This will cost
big bucks. But if a company wants the reputation, it must be willing to pay
for it; i.e., there's no free lunch.
I chose to ignore the little exchange because I felt it would be a waste of
time to bash the poor Hotline folks or defend Synopsys' pitiful excuse for
customer service. But since you effectively called us wimps for not
answering the challenge, consider me one designer who thinks Synopsys has a
long way to go on customer service.
I have called or emailed the Hotline about a half dozen times. They have
never solved my problems. A local FAE, a colleague, or an ESNUG post has
always been the source of a solution or work-around. I got tired of taking
the time to document my problem just to see it disappear into the bit
bucket. I haven't bothered in a long time.
The best way to make EDA tools work in today's business climate is to join
forums like ESNUG and build up your network of colleagues. Then share
information; it will always return a dividend.
Keep up the good work,
- Marty Hood
Standard Microsystems Corp.
---- ---- ---- ---- ---- ---- ----
From: Martin Harriman <martinh@sei.com>
John,
Personally, I've never called the (telephone) hotline. I've been using the
Exciting New World of Computer Networking to log calls electronically -- and
I've been quite pleased with the results.
Once upon a time I did telephone support for a company that produced
products even weirder and more cryptic than Synopsys (namely, Ramtek, a
long-gone and not much lamented builder of strange computer graphics
devices).
You haven't lived until you've tried to debug problems at a classified
installation over the phone:
me: "What's wrong?"
them: "It just sort of sits there."
me: "What were you trying to do?"
them: "I can't tell you that."
me: "Well, were you trying to draw a rectangle or a triangle?"
them: "I can't tell you that."
me: "But you were trying to draw something?"
them: "I can't tell you that."
Anyhow, at least through e-mail, the Synopsys Support Center is coherent and
helpful -- sometimes it takes a couple of tries to communicate the problem,
but that's not too surprising given the complexity of the software. It's
certainly frustrating that there are problems they can't solve "until
version 1998.some-large-number," but that's not exactly the fault of the
support center.
(Oh, and I quit before my security clearance came through, so I never did
learn whether they were trying to draw a rectangle -- or indeed whether
they were trying to draw anything at all.)
- Martin Harriman
Silicon Engineering, Inc.
( ESNUG 297 Item 4 ) ---------------------------------------------- [7/98]
From: Raul Oteyza <r_oteyza@emulex.com>
Subject: Something More To Add To The EDA-Should-Support-Linux Debate
hello john,
veriwell used to offer a verilog simulator for the linux platform. i
received the following message from veriwell customer support:
Hello,
The funny thing is that the port is complete and we sold it for a while.
But we could no longer justify the time to maintain and support it, so
we gave up. Now we no longer have Linux running on any of our machines
(they all have some different piece of hardware that Linux doesn't
support, sigh).
Peruse ftp://ftp.wellspring.com. There still may be an older version
there that you can use.
Best Regards,
- Elliot
Elliot Mednick
Wellspring Solutions, Inc. Salem, NH
Just thought i'd let you know,
- raul oteyza
emulex corporation
---- ---- ---- ---- ---- ---- ----
From: James Manning <jmm@raleigh.ibm.com>
John,
Though it is still in its infancy, www.linuxeda.com could help
tremendously in the movement to get EDA tools actively supported on Linux.
ESNUG readers (as well as readers of Industry Gadfly - hint, hint)
could really help in making this site a focal point for our pleas.
- James Manning
IBM
( ESNUG 297 Item 5 ) ---------------------------------------------- [7/98]
Subject: PowerMill "spFactor" Error & L-U Matrix Factorization
> I am a user of the PowerMill power estimation tool. When I run PowerMill
> on my HSPICE netlist, I get this error message both on the screen and in
> the .log file:
>
> "ERROR: After spFactor Error=3"
>
> But there is nothing recorded in the .err file.
>
> Does anybody know what "spFactor" is and how I can fix this error?
>
> - Roshanak Shafiiha
> University of Southern California Los Angeles, CA
From: Steve Hamm <hamm@adttx.sps.mot.com>
spFactor is an L-U matrix factorization function in a sparse matrix
package -- evidently PowerMill uses "sparse" from Berkeley. Looking at
the version of "sparse" that I have, error=3 corresponds to a singular
matrix -- but the PowerMill authors could have changed the codes.
So, do you get any topology error messages (loops of inductors/voltage
sources, floating nodes) if you run the netlist in HSPICE? This would
be an easy way to generate a singular matrix.
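For instance, a node with no DC path to ground makes the nodal conductance
matrix singular. A toy Python sketch (my own illustration, not PowerMill's
actual formulation) shows how an L-U based solve then fails:

  import numpy as np

  # Two nodes joined by a 1k resistor, but floating w.r.t. ground:
  g = 1.0 / 1e3
  G = np.array([[ g, -g],
                [-g,  g]])    # singular conductance matrix
  I = np.array([0.0, 0.0])

  try:
      v = np.linalg.solve(G, I)        # LAPACK LU-factors G here
  except np.linalg.LinAlgError as err:
      print("solve failed:", err)      # "Singular matrix"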
PowerMill should give a bit more diagnostic help here...
- Steve Hamm
Motorola Austin, Texas
( ESNUG 297 Item 6 ) ---------------------------------------------- [7/98]
From: muzok@pacbell.net (muzo)
Subject: Help! -- Need Datapath Compiler For Fast Multiplier And Adder
I need a fast 6 by 10 multiplier and an 18-input adder. Does anyone know of
a datapath compiler for a very popular .35u process which can just give me
layouts for these things? Synthesis is not fast enough for me, even though
I have tried Booth multipliers and Wallace-tree adders. I guess I need
physical-level design output.
- Muzo
Kal Consulting
( ESNUG 297 Item 7 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 295 #4 296 #8) My Sun Crashes Once A Day -- SILENT RECALL!
> Well, after struggling with what falsely appeared to be bad memory modules
> and very few utilities to diagnose any kind of hardware problem, the Sun
> guys came out and swapped out the CPUs. They mumbled something about a CPU
> bug and a silent recall, and our workstation has been up and running for
> the past week solidly. I have no idea if I'll be charged for these CPUs
> yet, but go figure that Sun's "simulate crashes instead of crashing" ad
> campaign! I thought UNIX/Sun was immune to this kind of stuff and only
> Intel/PC's were plagued with it. Gash darn, that complexity thing is at
> it again, Vern!
>
> - Victor J. Duvanenko
> Truevision
From: John Patty <patty@rtp.ericsson.se>
We had an Ultra1 do that sort of thing about a month ago. It turned out
that the CPU fan had stopped working. The workstation would crash randomly
whenever the processor got too hot.
- John Patty
Ericsson Research Triangle Park, North Carolina
( ESNUG 297 Item 8 ) ---------------------------------------------- [7/98]
From: Gregg Lahti <glahti@sedona.intel.com>
Subject: IPO -- Got Messy Functional Equivalents When I Wanted Buffering!
John,
Anyone know how to get Synopsys *not* to change the cell type during an
-in_place optimization compile (98.02-2)? Specifically, it will change
functional equivalents like the following:
from Z = !(A*B!)
to Z = (A + B!) (with nets to A & B swapped)
(Yes, it's convoluted, but they ARE functional equivalents if you think it
out -- see the quick check below -- though this really isn't what I had in
mind.) I was intending to freeze the netlist & layout and just add buffering
or swap out cells with higher/lower drive to meet the min/hold design rules.
I had everything defaulted except
"compile_ok_to_buffer_during_inplace_opt = true". In the case above, DC
decided that the funky NAND (for lack of a better term) was slower than the
other and swapped the cell rather than add a buffer. The sizes of the cells
were identical, but the input net connections got reversed in the process
and were not reported in the change log.
Of course, this blows the LVS checks.
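For the skeptical, a quick Python truth-table check (just an illustration)
confirms the two forms really are equivalent once the input nets are
swapped:

  # Z1 = !(A * B!)  vs.  Z2 = (A + B!) with the nets to A and B swapped
  for a in (0, 1):
      for b in (0, 1):
          z1 = int(not (a and not b))   # original cell
          z2 = int(b or not a)          # replacement, inputs exchanged
          assert z1 == z2               # equal on all four input patterns
  print("functionally equivalent")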
Also, is anyone else not really happy with the change log contents? I would
think it would be more beneficial to have more info (such as the net changes
that were made in the above cell swap, as well as the start/end points of
any buffer insertion). Anyone have a better idea or methodology for
this?
- Gregg D. Lahti
Intel Corporation
( ESNUG 297 Item 9 ) ---------------------------------------------- [7/98]
Subject: (ESNUG 294 #1 296 #4) Response To Small's "Bad Software Explained"
> This simple-seeming notion of standard practice is, in fact, a crucial
> difference between technological endeavors performed by real technologists
> and the work of programmers. Consider, that for all new technological
> fields, except programming, rapid progress at exponential rates followed an
> initial period of confusion. Examples are automobiles, aviation, and
> electronics. The rapid progress is a direct result of engineers' ability
> to codify and assimilate their collective art into standard practice.
>
> But in all the time that programmers have been hacking away, poking little
> groups of characters into text files, they have not been able to accumulate
> the powerful mixture of art (practice) and science that practitioners in
> other technological fields have. For example, reusable software
> components, such as operating-system utilities and mathematical library
> programs that have been in use for more than 30 years, are still found to
> be full of bugs.
From: dchapman@goldmountain.com (Dave Chapman)
Well, now, John, this deserves a response.
First of all, anybody who has not yet read Brooks' book "The Mythical
Man-Month" needs to do so.
As a historical fact, almost all of the progress in the fields of programming
and computer science has come from purely technical advances, rather than
from social or organizational changes. These were:
1. The Von Neumann architecture (so you didn't program by using
patch panels).
2. The development of the Assembler.
3. The development of the standard library, which functioned as
the first generation of operating systems.
4. The development of the high-level language.
5. The development of the virtual machine.
6. The development of the block-structured language.
7. The development of the standardized GUI.
(Note that "matrix management" is not on the list.)
Almost all of the advances in the field of programming have come from the
fact that hardly anybody writes device drivers any more, and from the fact
that hardly anybody writes string move subroutines any more. These have
been standardized and moved into the operating system, or into the standard
syntax of the programming language.
If you want to look for standard practice, here it is: The codified art of
the assembly language programmers has been incorporated into the C language
itself, and the codified art of people like Brooks has been incorporated
into ideas like streams and redirection.
I would go further, and say that it is likely that future advances in the
art of programming will not come from sociology, but will come from the
incorporation of current practice into Better Tools. It is possible that
Better Tools will include highly visual interactive devices, but I doubt it.
The intelligence required to design a machine is clearly not the same as the
intelligence required to organize a sales team, and it is not reasonable to
expect machine designers to have the same interaction abilities as the
mainstream population. Make no mistake here: What software engineers do is
to design machines. This is not a discipline like creative writing; it
is quite different.
My overall suspicion is that future useful(*) advances in programming will
require a continued reliance on text, because text is the most concise and
unambiguous method of expressing logic, which is the raw material out of
which software is constructed. (* - Windows 98, for example, is not useful,
even if you believe that it is an advance. The world does not need any
more operating systems.)
We will know for sure in about another 20 years.
- Dave Chapman
Goldmountain
( ESNUG 297 Item 10 ) --------------------------------------------- [7/98]
Subject: (ESNUG 295 #3 296 #9) Cadence's PB-OPT Trounces reoptimize_design
> With regard to Cadence's PB-OPT, I implemented this tool flow (in a former
> life) and used it on very high performance designs. The results seemed to
> match Cadence's claims and this option enabled me to achieve a one-pass,
> timing-convergent, design flow from RTL->layout. This additional feature
> allowed me to replace my previous methodology, which was to iterate
> (sometimes > 15 times) through Design Compiler's reoptimize_design and
> Cell3/Silicon Ensemble's ECO place and route.
>
> - Brian Arnold
> Fusion Networks Corp. Longmont, CO
From: Daniel Leduc <Daniel.Leduc@matrox.com>
To: briana@innie.sitera.com (Brian Arnold)
Brian,
Your experience with PB-Opt (as described in ESNUG #296) corresponds
to the "holy grail" of optimization that we can never get our hands on.
We evaluated PB-Opt last fall, and it did not seem stable enough at that
point to be used in our design flow. More importantly, feedback we had
from many people at Cadence is that the GCF-based flow (as opposed to the
older way of doing timing-driven layout) is still not stable enough to be
incorporated in our design flow, and that very few people within the
Cadence organization are knowledgeable about GCF.
When I read the comments from the Cadence marketing guy in ESNUG 295 #3,
I was sure it was total hype, if not an outright lie, but your comments
give me some hope...
Could you tell us what version of PB-Opt you worked with, and whether
it was GCF-based (GCF being that new scheme whereby you specify global
constraints for a design, such as clock period and input arrival times,
just as you would do with a synthesis tool, as opposed to using the
older path-based constraining)?
Any additional insight (e.g. pitfalls to avoid, etc.) would be greatly
appreciated.
- Daniel Leduc
Matrox Graphics Dorval, Quebec, Canada
---- ---- ---- ---- ---- ---- ----
From: briana@innie.sitera.com (Brian Arnold)
To: Daniel Leduc <Daniel.Leduc@matrox.com>
Daniel,
I was using the latest version of PBOPT; I don't remember the exact
version number. I have since left the company where I was using it, and I
don't have access to the bits any more. I agree, the version Cadence
had last fall was brain dead, and it was called PBS (Placement Based
Synthesis) and not PBOPT. The latest version has changed substantially
and is much better. The timing reports actually correlated to what I
saw in both Synopsys and PathMill.
I was using the complete timing-based flow that took advantage of the
GCF format. However, to do this you need a TLF version of your Synopsys
library. I created my own Synopsys library, so this wasn't a problem.
Cadence has a program, syn2tlf, that will create a .tlf for your library,
but you need to have the .lib for it. To create a .gcf file for a given
block, I wrote a translator that converted a Synopsys constraints file to
a .gcf file, and it worked quite well.
Another thing: I didn't use Cadence's CTGEN tool either, as I couldn't
tolerate the amount of skew that CTGEN introduced. Instead, I created
my own clock tree solution that worked much better and introduced
near-zero skew.
As for pitfalls, it did take me a while to get the whole flow up and
running. Read the literature Cadence has on its timing-driven flow, and
make sure you input the .gcf files in the appropriate manner, as there are
different options for reading them in.
- Brian Arnold
SiTera Corp. Longmont, CO
( ESNUG 297 Item 11 ) --------------------------------------------- [7/98]
Subject: (ESNUG 296 #6) DesignWare, get_licence, and Infinite Looping Bugs
> The get_license command is already smart enough to know that you already
> have the license you are trying to get, so it should be easy to fix it to
> work correctly. Here's the message from the while loop if you already
> have the license:
>
> Information: You already have a 'DesignWare-Foundation' license. (UI-31)
> Information: You already have a 'DesignWare-Foundation' license. (UI-31)
>
> For all of the other licenses it is possible to work around this BUG in the
> get_license command by first removing the license before trying to get it.
> This will also work for the DesignWare-Foundation license as long as it was
> not already gotten by reading in a db that required it.
>
> If this happens and you try to do a remove_license DesignWare-Foundation
> you get a message stating that you cannot remove the license until you
> remove the design.
>
> I will admit that it is possible to kludge up scripts to work with the
> get_license command, it just seems like it would make lots of users
> lives easier if Synopsys fixed the command to work as one would expect.
>
> - Paul Fletcher
> Motorola SPS Chandler, AZ
From: Paul Fletcher <paulf@chdasic.sps.mot.com>
John,
Here is the e-mail I got back from Synopsys about their bug. Hope this helps.
- Paul Fletcher
Motorola SPS Chandler, AZ
From: [ Synopsys DesignWare Support ]
Hi Paul,
Thanks for pointing out the problem. The while loop is the workaround to
ensure that compilation only starts when you have access to a DesignWare
Foundation license. I have forwarded your request to the Foundation R&D
team, and a fix will be available ASAP. We are introducing one more
dc_shell variable which will act exactly like the while loop. If you set

  synlib_wait_for_design_license = {"DesignWare-Foundation"}

then the DC compile, read, and elaborate commands will wait for a DesignWare
Foundation license rather than aborting the process. The default value of
this variable is an empty list. This feature will be available in the
1998.08 Synopsys release.
Again, set
synlib_wait_for_design_license = {"DesignWare-Foundation"}
to cause Design Compiler to wait for a DesignWare-Foundation license to
become available before proceeding with compile.
- [ Synopsys DesignWare Support ]
( ESNUG 297 Item 12 ) --------------------------------------------- [7/98]
From: janick@qualis.com (Janick Bergeron)
Subject: A Review Of The Infamous Synopsys/Mentor "Reuse Methodology Manual"
Dear John,
At the recent DAC'98, Synopsys and Mentor created an awful lot of hype
around their so-called "Reuse Methodology Manual" (and I use that term
lightly!) Here's my completely biased review of their manual from the
perspective of someone who _actually_ has to do this stuff instead of
just selling the reuse concept to managers.
First the strengths: "Reuse Methodology Manual for System-on-a-Chip
Designs", which is jointly authored by Synopsys and Mentor Graphics, is
clear, concise and to the point. Keating & Bricaud propose valuable
guidelines and rules that can only improve any design methodology. The
book's main achievement is its promotion of a systematic approach to
specification, modeling, and verification in sections 2.3 and 4.2 and
chapters 7 and 11. These are integral activities in high-quality designs.
No longer should these important tasks be relegated to the end of the
project, to be done as time or schedule allows, or by junior engineers
because they are not the fun or sexy part of the design process. These
tasks are the most critical component of the design process, the ones that
ensure that the efforts spent on the design proper are not misdirected or
wasted.
The book's greatest failure is its unfulfilled promise to cover issues
faced by the user of reusable blocks. A single chapter (10) is devoted to
this much broader audience, and it presents little beyond what is already
known from experience with silicon-compiled memories. With Synopsys
and Mentor pushing reuse so strongly, why don't they have more to offer
to users of reusable blocks?
Anyone who has followed the SNUG, VIUF, or IVC conferences for the past few
years, or has several years of experience designing ASICs using synthesis
and HDLs, will find familiar techniques, guidelines and rules outlined in
chapters 5 through 7. The book's contribution is to collect, consolidate,
and present all of the various sets of guidelines published in recent
years. It sins in making all the guidelines very Design Compiler-centric.
Verification techniques and testbench structures presented in chapter 7
concentrate on the stimulus half of the problem. The more difficult
component, the validation of the output, is briefly discussed in the very
weak section 7.2.4. This section describes a technique that relies on
comparison with a rarely available reference design.
Starting with section 4.3.1, little value is placed in behavioral models.
That is an area where I passionately disagree. Behavioral models are a
precious enabler of system-level simulation and of parallel testbench
development, and they provide an audit of the specification. Furthermore,
a behavioral model of a reusable macro could also be a formidable marketing
tool that would enable customers to functionally integrate a macro into
their design without revealing implementation details or trade secrets.
The book puts a lot of emphasis on the RTL model (rightly so), but it goes
too far in recommending in section 7.3 that, in some situations, testbenches
be written in RTL as much as possible. The book should not dismiss in so
cavalier a fashion the power of behavioral models and testbenches.
Some sections in the book show misconceptions about the VHDL and Verilog
languages. For example, in section 4.6.2, the authors recommend simulating
a macro on several simulators, "particularly important for the
VHDL simulators". It is just as important for the Verilog simulators. The
text should mention the unspecified behaviors in the Verilog language that
make it inherently non-portable to various simulators (or to the same
simulator with different command-line options). Specific guidelines are
needed to make Verilog models portable, yet none are outlined in the
subsequent chapters.
Another example is in section 5.5.4 where the text mentions only the VHDL
signal assignment and the Verilog non-blocking assignment, but it fails to
mention the VHDL variable assignment and the Verilog blocking assignment.
In addition, the "poor style" example in section 5.5.6 is very poor: the
entire guideline is completely meaningless for VHDL. Assuming that the
synthesis tool is bug-free, it is impossible for the pre- and post-synthesis
simulation behavior to be different given the semantics of the VHDL language.
The whole guideline probably comes from a literal translation of a similar
guideline for Verilog in section 5.5.5.
There are a few contradictory statements and recommendations throughout the
book. For example, section 2.4 describes a linear design process just after
a spiral design process has been recommended in section 2.2. Another example
is the relative importance given to the macro and sub-block level
verification tasks. In section 4.4.1, very high verification requirements
are put on the sub-block design, yet section 7.2.1 states that the
sub-block testbenches tend to be ad hoc, with the entire chapter focused on
the macro-level verification. Ad hoc testbenches cannot achieve high
verification requirements. Solid verification can only be reached through a
formal verification process -- which is well described in chapter 7.
Readers interested in marketing and sales material should refer to sections
6.4 and 10.4 for Module Compiler. Those interested in VMC can refer to
section 8.6.1; those interested in CBA can read section 8.7. Formality is
the only formal verification tool mentioned in sections 4.6.2, 6.2.9, and
11.5.2 with no reference to the market-leading tools from Chrysalis.
InterHDL's linting tools receive honorable mentions in sections 4.4.2 and
6.2.8, but they are not given their proper due as to the value they can
bring to a design process. The text often implies that linting is an
after-thought activity. However, if linting is an ongoing, up-front
activity, then it will detect several problems in a few seconds without a
single simulation cycle being spent.
Throughout, the book provides an excellent description of the various tasks,
tools, and activities involved in completing today's complex ASIC designs,
from specification, to verification, and to implementation and physical
design. Every company should require any new hire or any engineer with less
than two years of experience to read this book. The investment is well worth
it. Managers and design leaders should also read it. These readers should
pay special attention to the section about specification and verification.
The book would benefit from the additions of rules and guidelines
specifically targeted toward design-for-testability using full scan or BIST
techniques and toward formal verification and static timing analysis to
eliminate gate-level simulation.
But as a book about reuse methodology, it falls short in two areas:
- First, most of the book is devoted to the producer or creator of
reusable blocks. Only one chapter is directly addressed to the users
and integrators of reusable blocks.
- Second, the book does not seem to present any new information that
would not be known to companies with a lot of experience in high-
quality HDL-based design.
For example, all of the concepts, rules, and guidelines presented in this
book are in use at a large, well-known telecommunications company, yet they
have been struggling with the concept and mechanisms of design reuse for
years. To this day, very little design reuse happens there. On the other
hand, a small telecommunications component company is very successful at
design reuse because of a long-standing reuse strategy. Yet they lack modern
HDL design techniques and specification or verification procedures.
Why then, if a company does everything the book recommends, can reuse fail to
happen? In contrast, how can a company be successful at reuse, while
breaking several of the rules and guidelines presented in the book? For the
user, non-technical, non-design issues must be addressed, such as:
- What are the characteristics of an engineering culture that promotes
reuse?
- How do you reward reuse?
- How should systems be designed to make maximum use of available blocks?
- What infrastructure is required to properly support reuse?
- How far should reuse be carried?
- How does a company protect itself from legal liability issues in an
externally sourced block?
Overall Conclusion: Excellent book, wrong title. A more appropriate title
would have been "Best-In-Class ASIC Design Practices".
- Janick Bergeron
Qualis Design Corporation
P.S. John, I'm writing another, longer, very detailed technical review that
discusses the design reuse ideas they proposed and how they clashed with our
real-world experiences. (It's about 900 lines long. This review was ~150
lines.) Do you think your readers would be interested in it?