( DAC 04 Item 22 ) --------------------------------------------- [ 02/09/05 ]

Subject: Apache RedHawk-SDL vs. Sequence CoolTime

CALIENTE -- Apache has a great reputation in the dynamic IR-drop business,
especially with its vectorless claim to fame.  Cadence VoltageStorm is stuck
playing catch-up and I don't think Synopsys is in their league either.
Redhawk's only serious contender is Sequence CoolTime.  (And on the stupid
EDA dramas front, most folks by now have probably forgotten that silly
Apache-spying-on-Sequence story back in DAC 2003 from ESNUG 416 #4.)


    Apache Design Solutions created a bit of a stir last year by claiming
    that their tool, Redhawk, was able to provide dynamic power estimates
    without vectors, which would eliminate the problem of trying to define
    a vector set that creates maximum power but doesn't take too long to
    initialize.  Their technology was proprietary so they didn't reveal
    much.  This year they talked a little more.  They use SPICE on a lib
    to get profiles of VDD and VSS when cells are switching.  They take
    this new library, plus your LEF/DEF, GDSII, packaging model, and STA
    information (a window for each instance describing where within the
    cycle it might switch).  The result is a physically based estimate that
    includes on-chip and packaging parasitics.  They can also use VCD from
    the user or the tool can generate vectors itself based on switching
    activity levels.  Apache sells another tool for automatic power/ground
    busing, including suggestions for decoupling capacitors.
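
    To make the idea concrete, here's a rough Python sketch of how
    SPICE-characterized per-cell current profiles plus STA switching
    windows could be superposed into a worst-case demand waveform.
    This is only an illustration of the concept; Apache's actual
    algorithm is proprietary, and all names below are made up.

        import numpy as np

        CYCLE = 1000   # one clock cycle at an assumed 1 ps resolution

        def demand_envelope(instances):
            """instances: list of (profile, w_start, w_end), where
            profile is an array of current samples (A) from SPICE
            characterization and [w_start, w_end] is the STA window
            (in time steps, inside the cycle) where the cell may
            switch."""
            total = np.zeros(CYCLE)
            for profile, w_start, w_end in instances:
                # Pessimistic vectorless assumption: every instance
                # switches, aligned within its STA window so as to
                # maximize the running current peak.
                best = None
                for t0 in range(w_start, w_end + 1):
                    trial = total.copy()
                    n = min(len(profile), CYCLE - t0)
                    trial[t0:t0 + n] += profile[:n]
                    if best is None or trial.max() > best.max():
                        best = trial
                total = best
            return total   # worst-case current demand over the cycle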

        - John Weiland of Intrinsix Corp.


    There don't seem to be any changes in the power integrity analysis
    market, with Apache keeping the lead.  Cadence's VoltageStorm has
    added dynamic power analysis features, which makes it pretty
    competitive with Apache, although no one seems to know how to
    calculate the impact of dynamic power noise on delay and link it
    back into STA.

        - [ An Anon Engineer ]


    Dynamic IR of Apache Redhawk:

    + vectorless
    - cost

    Our dynamic IR results were obtained:

        1) by using Apache Power Libraries (APL) cells that were
           characterized for this purpose (done once per netlist),
        2) with the peak activity test - run this test, identify
           the peak frame (of one cycle) and check the currents/voltages.

    We did this experiment with and without decaps, and also tried the
    vectorless capability.  The inherent capacitance (diffusion, etc.) is
    always present and hence one cannot expect the pad current to
    completely follow the instance current.  Still, we were able to see
    the difference.

    In the run with decaps, the current contributed by the decaps was
    observed and was as expected.

    We also checked the rate of change of current in the instance and pad
    current waveforms, and the pad current gradients were cross-checked.
    The peak voltage drops were observed.  We used a utility to dump the
    voltages of any instance in the design every 10 ps over the frame
    of computation.  This was compared against the worst case reported by
    Redhawk as min and max, and they were similar.  The worst case voltage
    drop on vdd was also in the expected range.
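
    For anyone wanting to reproduce that kind of cross-check, a trivial
    Python scan over one instance's dumped waveform might look like the
    following (the sample values and the 1.2 V rail are our assumptions,
    not Redhawk's actual output format):

        def worst_case_drop(samples, vdd_nominal=1.2):
            """samples: one instance's voltages (V) at 10 ps steps.
            Returns (min V, max V, worst drop vs. nominal)."""
            lo, hi = min(samples), max(samples)
            return lo, hi, vdd_nominal - lo

        # e.g. an instance dipping to 1.08 V on a 1.2 V rail
        lo, hi, drop = worst_case_drop([1.19, 1.15, 1.08, 1.12, 1.18])
        print("min=%g V  max=%g V  drop=%.0f mV" % (lo, hi, drop * 1e3))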

    The vectorless approach is computed based on the PAR (peak-to-average
    ratio) we provide to Redhawk.  We derived this ratio as an average over
    the high activity region.  On applying this factor, the worst case
    voltage drop was 2x worse than the static value.  Also, the instances
    which were worst case in the vector-based run continued to show similar
    voltage drops, indicating they were still potential candidates for
    violation.
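
    One plausible way to derive such a PAR from a measured current
    waveform is sketched below in Python; the 50%-of-peak threshold
    used to define the "high activity region" is our own assumption,
    not Apache's definition.

        def peak_to_average(current, hi_frac=0.5):
            """current: current samples (A) over one frame.  The high
            activity region is taken, by assumption, as the samples at
            or above hi_frac * peak."""
            peak = max(current)
            region = [i for i in current if i >= hi_frac * peak]
            return peak / (sum(region) / len(region))

        # e.g. a frame peaking at 1.2 A
        print("PAR = %.2f" % peak_to_average([0.2, 0.4, 1.0, 1.2, 0.9, 0.3]))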

    The current peaks were 1.2 A in the vector-based run and 1.78 A in the
    vectorless run.  The effects of decaps were observed, and the high
    switching activity we plotted through a Redhawk utility matched the
    current peaks.  The initial currents in the frame were synced by
    adding some extra time before the actual region of interest.

    Overall, a good methodology for dynamic IR, at a premium price.

        - [ An Anon Engineer ]


    We chose RedHawk as our power signoff tool because:

    The database generated by AstroRail was too big, making it very
    difficult to analyze.  Besides, AstroRail only provides cell-based
    calculation.  For whole-chip IR-drop analysis, the Synopsys solution
    is NanoSim, which is a SPICE-level simulator and takes time.

    Cadence VoltageStorm only provides static analysis, no dynamic.

    Apache RedHawk provides both static and dynamic IR-drop analysis.
    Its running speed is pretty fast, and the tool was adopted by the
    TSMC reference flow for 90 nm and below.

        - Tina Chen of GenesysLogic, Inc.


    RedHawk's biggest strength is its fast and accurate extraction and
    analysis engine for power/ground rails on cell-based designs.  RedHawk
    works very well for power integrity signoff, including peak noise
    verification.  We have data showing good correlation between RedHawk
    simulation and silicon measurement.  Our only complaint is that we
    wish we could get RedHawk for less money.

        - Hiro Tsujikawa of Matsushita


    I've been using RedHawk-SDL for about 8 months now.  Primarily, I've
    been doing static and (statistical) dynamic IR-drop analysis on a
    couple of our chips that we're doing with a major customer.  The tool
    is still very young, but it shows good prospects and is backed by some
    very determined and capable individuals.

    Weaknesses:

    - Unfortunately, the Apache statistical approach does not give the user
      control over which instances are toggling.  This made it difficult to
      locate areas which had weakened power structures.  Apache addressed
      this issue, and will be making changes to future releases to allow
      more user controllability.  They are also promising, based on our
      requests, a further extension to RedHawk which will seek out weaker
      power structures, and automatically increase the toggle rates for
      cells located in those regions.

    - The Apache statistical approach made the various runs inconsistent
      across platforms and across versions of RedHawk, making it difficult
      to compare apples-to-apples (however, this oversight has since been
      corrected).

    - RedHawk uses a "ton" of RAM!  Upwards of 54 G for some runs.  Until
      recently, if you didn't have the RAM available, it wouldn't work
      properly if it went into swap.  This limits the machines on which
      we can run the tool.  However, the long run times on the Solaris
      machines pretty much dictate that you use a Linux box anyway (but
      you still need to load it up!).

    - RedHawk uses a "ton" of disk space!  Count on 100 G+ per iteration of
      the design (includes static and dynamic for both BC, WC analyses).

    - RedHawk is very picky about the consistency and completeness of the
      database.  It requires LEF, DEF, NIF, SPEF, GDS, and STA results for
      it to function.  If any of the files are incomplete/inconsistent, or
      if there are DRC errors, then the tool will give erroneous results.
      Also, RedHawk does not work well with a mixed flow (e.g. if your LEF
      specifies a macro which is only available as a GDSII component).
      They have work-arounds, but it can become tedious.

    - RedHawk versions come fast and furious.  Databases are not backward
      compatible, nor are they compatible across Linux vs. Solaris.

    - Every time you have a new layout for analysis, you have to regenerate
      and recharacterize all the Apache cell libraries.

    Strengths:

    - Apache customer support has been very good.  They are extremely
      responsive to our needs, and are willing to (quickly) develop
      extensions to the tool to solve point problems.  I can't say this
      about my past experiences with Synopsys.

    - Apache training has been excellent.  They use real-world examples and,
      wherever possible, use our own designs as part of the training.

    - Even though we have our own internal methodologies, they are willing
      to work within our constraints to get RedHawk to produce valid results.

    - The difference between static and dynamic analysis is "night and day".
      Dynamic-voltage-drop analysis is a staple in our ASIC flow diet.

    - RedHawk has helped us catch issues in one design before going to
      masks, and also in another design where it demonstrated a sufficient
      improvement in power structure integrity after an ECO (allowing it
      to go to mask, too).

    In summary, I will continue to use RedHawk.  Not because my company says
    so, but because I've come to know its limitations and strengths, and
    because I believe in the enhancements yet to come (and the people behind
    them).  RedHawk is young and buggy, but the great support makes it
    much easier to take.

        - [ An Anon Engineer ]


    We have used the Apache Redhawk tool on several of our ASICs, and our
    experience has been quite positive.

      - Fast runtime - the analysis even on large chips is on the
        order of a few hours.
      - Built-in current/decap characterization engine for
        Dynamic Voltage Drop (DVD) analysis.
      - Excellent customer support - bugs got fixed with updated
        release within days (as opposed to months for other tools).
      - Vectorless DVD analysis - to find hotspots without
        requiring a VCD file.

    On the negative side:

      - Data preparation takes a long time - typically 1-2 days on
        a large ASIC.  A customized flow and design-independent APL
        library support can reduce this time.
      - We would like to see more implementation features -
        to automatically fix the power grid, add decap, etc., to
        reduce DVD.

    We would also like to see more checking/debugging capabilities in
    Redhawk to identify any user errors like missing data, etc.  Some
    of this can be built into the Apache flow as we become more
    familiar with typical user errors.

        - Rajiv Sharma of Toshiba


    The Redhawk/NSPICE setup:

    The Redhawk tool requires the input setup technology files (LEFs,
    .libs, and GDSII) to be in industry standard formats, which helps to
    double check these technology data files.  The process technology,
    package, decoupling capacitance, electromigration and resistance need
    to be characterized to produce accurate results.  This is common among
    most of the voltage drop/ground bounce/cap/inductance/resistance tools.

    After the setup data has been gathered, the technology will need to be
    characterized into an APL (Apache power library).  The Redhawk tool can
    be used early on in the design, let's say at the placement stage.  To
    design an ASIC we must first do placement of the cells, insert the
    clock trees and then route the design.  The individual loads for each
    cell are then characterized.  If, in the future, the load on a cell
    changes, the APL can be re-characterized or the Redhawk tool can
    interpolate to calculate the power/load.
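
    A minimal sketch of what such load interpolation could look like (a
    simple linear scheme of our own; Apache's actual method is not
    public):

        def interp_profile(load, load_lo, prof_lo, load_hi, prof_hi):
            """Linearly interpolate between current profiles (A)
            characterized at two output load points (F)."""
            a = (load - load_lo) / (load_hi - load_lo)
            return [(1 - a) * v_lo + a * v_hi
                    for v_lo, v_hi in zip(prof_lo, prof_hi)]

        # e.g. profiles characterized at 10 fF and 50 fF, queried at 25 fF
        print(interp_profile(25e-15, 10e-15, [0.1, 0.8, 0.3],
                             50e-15, [0.2, 1.4, 0.5]))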

    To run APL for a new 6 M gate design takes 1 day on a 3 GHz Linux box.

    The ASIC design phase:

    The objective is to get to route without giving much thought to timing.
    After the first trial route, we can add fill cells with decap.
    Redhawk is used to check whether a sufficient amount of decap has been
    added.  The cells may have internal decap, but with the placement of
    the cells and the switching of the cells, more decap may be required.
    The Redhawk tool does a good job identifying insufficient-decap
    locations according to its statistical switching scenario algorithm.
    If a VCD file is used for the switching scenario, a better estimation
    of the required decap is obtained.
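
    As a back-of-the-envelope check of how much decap such a location
    needs, the standard charge-balance estimate is C >= I_peak * dt / dV
    (textbook reasoning, not Redhawk's internal algorithm):

        def min_decap(i_peak, t_switch, v_droop_budget):
            """i_peak: local peak current (A); t_switch: duration of the
            current spike (s); v_droop_budget: allowed dynamic drop (V).
            Returns the minimum local decap (F)."""
            return i_peak * t_switch / v_droop_budget

        # e.g. a 50 mA spike lasting 100 ps with a 60 mV droop budget
        print("need >= %.0f pF" % (min_decap(0.050, 100e-12, 0.060) * 1e12))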

    Redhawk doesn't look for power structure weakness itself.  The tool
    calculates a switching scenario that may or may not detect weak power
    structures.  The Apache AEs are now adding a feature into Redhawk
    that will look specifically for missing power wires/vias.  The power
    grid should be checked for strength first, before trying to meet
    timing and do timing optimizations.

    The tool identifies areas to add decap; the placement can be redone and
    the DvD rechecked to see how the ground bounce and voltage drop are
    behaving.  This analysis should start early in the design cycle to
    keep the timing closure time as short as possible.

        - [ An Anon Engineer ]


    We use RedHawk for IR drop and EM analysis.  The setup is simple (but
    hopefully can be simplified further with a Milkyway interface) and the
    results are fast and accurate.  Porting to the Opteron reduced our full
    chip run time by a factor of 4, which is great.

    The main drawbacks of Redhawk are a slow GUI at the full-chip level and
    difficulty of debug.  However, the difficulty in debug is offset by the
    excellent AE support we have received.

        - Lama Mouayad of Raza Microelectronics


    Apache Redhawk is still really the only good option for dynamic voltage
    drop analysis.  Sequence is probably a close second in terms of ability
    and capacity.  Magma and Synopsys are catching up, especially in
    overall integration inside the backend flow.

    Apache still has a lot of work to do in making Redhawk more
    user-friendly and stable, but they have been addressing these issues
    in their latest releases.  Overall accuracy in terms of IR and power
    consumption is very good and seems to correlate to our silicon, which
    is always nice.

        - [ An Anon Engineer ]


    I've used Apache's Redhawk power analysis product on two taped-out
    designs so far.  This tool is an integral part of our design planning
    flow. There are many things that I like and dislike about Redhawk,
    but IMHO it is the best post route power analysis tool on the market.

    Things I like:

    1. APL: This is their library characterization flow.  I wish all EDA
       vendors did this and left "standard library formats" to those that
       don't require accuracy.  SPICE netlists are the inputs to this flow,
       and the resulting models are much better than any proposed or
       implemented "standard" format available today.

    2. Accurate Vectorless Dynamic Analysis: This is the mode we've been
       using, and we've been able to correlate our results fairly well to
       silicon.

    3. Support: The FAEs that we've worked with are top rate, and with
       their help we were able to create a one button flow, driven by our
       P&R flow, that runs the Apache flows and generates reports within
       24 hours (both static and dynamic fast and slow corners).

    Things I disliked:

    1. Apache's use of the file system is extensive and has created some
       system configuration issues for us.  We basically had to buy a very
       big box just for Apache, w.r.t. extra big memory and swap space.
       I hear that they're coming out with a fix for this, w.r.t. local
       data caching and the use of gzipped files for inputs as well as
       outputs, but I haven't used this yet.

    2. The APL flow has to be run for every design run.  I hear they're
       moving away from this, doing a more exhaustive APL run as part
       of a library flow that doesn't have to be repeated during the
       design analysis flow, but I haven't used this yet.

    3. There is no Tcl interface to access the design data and build
       custom features.  I hear that this, too, is changing...

    Basically, Redhawk is a great power verification point tool.

        - Mike Newman of Airgo Networks


    We started an evaluation a year and a half ago.  At that time, the only
    practically usable tools were Apache RedHawk and Sequence CoolTime.  We
    compared these two tools; though we found no fatal drawback in
    CoolTime, RedHawk seemed to be slightly better in terms of ease of use.

    Now our experience with RedHawk:

      - Faster than expected.  Even for our largest design, the execution
        time is several hours, excluding library generation.
      - Software quality (number of bugs) is not yet satisfactory.
      - Easy to use.
      - Strong local (Japan) support.
      - Enhancement is still needed to fit in various chip design styles.

    Cadence and Synopsys were far behind RedHawk and CoolTime (and, I
    believe, still are); I have no experience with them.

        - [ An Anon Engineer ]


    Sequence CoolTime vs. Apache Redhawk benchmark
    ----------------------------------------------

    We evaluated both CoolTime and Redhawk on a 130 nm testchip and
    correlated the data collected to measurement data.

    We correlated average power for different frequencies and found both
    CoolTime and RedHawk correlate within 10% of silicon measurement.

    We correlated static IR drop at various points in the design and at
    different frequencies and found that in all cases, CoolTime and RedHawk
    gave IR drop values very close to silicon measurement (both simulation
    and vectorless) with the vectorless being slightly optimistic at some
    frequencies.  The results were comparable to what we got with our
    signoff tool, VoltageStorm.

    Dynamic or Instantaneous Voltage Drop:

    Here again we correlated values at various points and at different
    frequencies, including 10 MHz, for detailed voltage waveform comparison
    to SPICE data.  We found CoolTime's vectorless and simulation based
    results were excellent and within 3% of silicon measurements.  The
    shape of the waveforms also matched what we observed in silicon/SPICE.

    RedHawk correlated within 6% of silicon measurements.  The shape of the
    waveforms from RedHawk also matched silicon data.

    Runtime and Memory usage:

    It took CoolTime about 6 hrs for our design, which has about 330 K
    placed instances.  (Note that we performed analysis on 3 to 8 clock
    cycles compared to the typical 1 or 2 clock cycles.)  Static IR drop
    analysis took about 30 minutes in CoolTime.

    Apache was better for dynamic analysis, as it took 1.5 hrs on our test
    case.  Apache's runtime was comparable to CoolTime's for static IR drop
    analysis.

    Memory consumption was a bit high in CoolTime, taking about 7.2 GB for
    the dynamic analysis.  This is one thing that needs improvement going
    forward.  RedHawk took about 2.5 GB for the same analysis.

    What we liked about CoolTime, apart from the quality of results, was
    the tight integration it had with other tools such as Physical Studio
    and PrimeTime to make a seamless flow.  In addition, CoolTime
    integrates delay calculation (and the impact of IR drop on delay),
    timing, signal integrity analysis, etc., making it easier to do
    cause-and-effect analysis.

        - [ An Anon Engineer ]
