( ESNUG 494 Item 10 ) ------------------------------------------- [10/25/11]
From: [ The Watcher ]
Subject: User verifies 6-sigma SRAM with Synopsys HSPICE and Solido
Hi John,
Please don't publish my identity.
I also attended the Synopsys DAC "SPICE Up Your Chip" AMS dinner. Here's
the case study Mohamed Abu-Rahma of Qualcomm presented where he used 100
Synopsys HSPICE licenses on an LSF farm with Solido High-Sigma Monte Carlo
(HSMC) to do SRAM design.
He outlined the following trends with SRAM design:
- SRAM content is increasing ~2x per process node, and bitcell area
shrinks ~50% with every technology node, from 350 nm through 22 nm.
For example, an SRAM bitcell at 32 nm is only 0.15 um^2, which is a
very small area into which to fit the 6 transistors of a bitcell.
- Because SRAM bitcells use the smallest-area devices in the design,
they are the most sensitive to variation effects such as random
dopant fluctuation (RDF) and line edge roughness (LER).
- Low voltage failures are a big concern for SRAM design. At 28 nm,
1-sigma for threshold voltage is 45-55 mV. With supply at 1 V and
6-sigma design specs, there's little margin for SRAM bitcells.
- More and more, memory designers need to design at 5-6 sigma, and
require high-accuracy SPICE runs to hit their yield targets.
Mohamed then discussed the limits of random Monte Carlo simulations.
1. If you design a bitcell to just 3-sigma, you get very low yield
even on tiny memory arrays:
128 b ############################## 92% yield
1 kb ################ 50% yield
4 kb ## 6% yield
8 kb . 0.4% yield
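To see why yield collapses this fast, here's a quick back-of-the-envelope
sketch in Python (my own illustration, not from the talk). It assumes
independent bitcell failures and a one-sided per-cell sigma level; the
exact percentages in Mohamed's chart imply a slightly different per-cell
failure rate than a clean 3-sigma, but the collapse with array size is
the same effect.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function (stdlib only)
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def array_yield(n_bits, cell_sigma):
    # Per-bitcell failure probability at a one-sided sigma level
    p_fail = 1.0 - norm_cdf(cell_sigma)
    # The array works only if every bitcell works (independence assumed)
    return (1.0 - p_fail) ** n_bits

for n in (128, 1024, 4096, 8192):
    print(f"{n:5d} bits: {array_yield(n, 3.0):.1%}")
```

Raising the array from 128 bits to 8 kb multiplies the exponent by 64,
which is why a per-cell spec that looks comfortable at small sizes is
hopeless at memory-array sizes.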
2. SRAM design requires:
- More coverage, due to 10s to 100s of Mb of memory content.
- 100s of millions of Monte Carlo samples (5-6 sigma), even
with memory redundancy included.
- 'Golden simulator' accuracy.
QUALCOMM SRAM EXAMPLE
Mohamed then went through his SRAM design case study; his objective was to
determine the minimum supply voltage for his large SRAM caches.
Design specifics:
- 28 nm LP technology. Foundry device and mismatch statistical
models characterized and validated using HSPICE.
- Very high-sigma design: 5-6 sigma.
- Non-linear operation in the low-voltage range, which required very
high-accuracy SPICE simulation.
- Can't run 100,000 Monte Carlo simulations and extrapolate to
high-sigma, because the distribution is non-Gaussian.
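To illustrate why that extrapolation is dangerous, here's a toy sketch
(mine, not Qualcomm's data) using a lognormal stand-in for a skewed
circuit metric. Fitting a Gaussian to 100,000 samples and extrapolating
out to a "6-sigma" spec badly underestimates the true failure
probability:

```python
import math
import random

random.seed(1)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Toy stand-in for a skewed circuit metric: lognormal, NOT Gaussian
samples = [math.exp(0.5 * random.gauss(0.0, 1.0)) for _ in range(100_000)]

mu = sum(samples) / len(samples)
sd = math.sqrt(sum((x - mu) ** 2 for x in samples) / len(samples))

spec = mu + 6.0 * sd   # a "6-sigma" spec under the Gaussian assumption

# Gaussian extrapolation from the 100K samples
p_gauss = 1.0 - norm_cdf(6.0)

# True tail probability of the lognormal (known in closed form here)
p_true = 1.0 - norm_cdf(math.log(spec) / 0.5)

print(f"Gaussian extrapolation: {p_gauss:.2e}")
print(f"Actual failure prob:    {p_true:.2e}")
```

In this toy case the Gaussian fit is off by several orders of magnitude
in the tail, which is exactly why the tail has to be simulated rather
than extrapolated.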
Tools and general methodology:
- Used Solido HSMC ver 2.5 to drive transient analysis on 100 Synopsys
HSPICE licenses (ver 2010.12) over LSF grid, for 'golden accuracy'.
- Solido tool did the sampling and statistical analysis to drastically
cut the # of required HSPICE runs.
- Verified a robust write operation to 5.8 sigma. The 5.8 sigma spec
was calculated from the memory density and the available memory
redundancy.
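The talk didn't show the math behind the 5.8 sigma number, but here's a
plausible sketch of how density plus redundancy sets a sigma target.
Everything here is my assumption, not Qualcomm's method: independent
bitcell failures, a Poisson approximation for the failure count, and
made-up density/redundancy/yield figures.

```python
import math
from statistics import NormalDist

def poisson_tail(lam, k):
    # P(K > k) for K ~ Poisson(lam), via the complement of the CDF
    term, cdf = math.exp(-lam), 0.0
    for i in range(k + 1):
        cdf += term
        term *= lam / (i + 1)
    return 1.0 - cdf

def required_sigma(n_bits, repairable, target_yield):
    # Smallest per-cell sigma such that the chance of having more than
    # `repairable` failing bitcells stays below 1 - target_yield.
    # (Independent failures + Poisson approximation: my assumptions.)
    nd = NormalDist()
    for sigma in (s / 100.0 for s in range(300, 800)):
        p_cell = 1.0 - nd.cdf(sigma)
        if poisson_tail(n_bits * p_cell, repairable) <= 1.0 - target_yield:
            return sigma
    return None

# e.g. 100 Mb of bitcells, 64 repairable cells, 99% array yield target
print(required_sigma(100 * 2**20, 64, 0.99))
```

With zero repairable cells the same numbers push the target well above
6 sigma; redundancy is what pulls the per-cell requirement back down
into the 5-6 sigma range the talk described.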
Results:
- 100,000 conventional HSPICE Monte Carlo simulations: the
distribution was non-Gaussian, so it was not known how it would
continue to high-sigma. When Qualcomm extrapolated from just
100,000 Monte Carlo runs, the extrapolation predicted the design
would pass the 500 picosecond write-time specification at 6 sigma.
- Solido High-Sigma Monte Carlo ran 1 billion (1 Giga) samples, but
focused the HSPICE simulations in the area of the distribution
where the bitcell was failing spec; it did not waste simulations
in parts of the distribution away from the target. By reviewing
Solido's results, Qualcomm determined that the 500 ps spec held
only to 5.38 sigma, which didn't meet their 5.8 sigma yield
requirement. Qualcomm increased the write time from 500 ps to
650 ps to meet their 5.8 sigma yield target. They wouldn't have
known this without being able to see the distribution out to
the tail.
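Solido doesn't publish how HSMC steers its samples, but importance
sampling is one textbook way to get this "focus on the failure region"
effect. Here's a minimal sketch for a plain Gaussian tail (an
assumption for illustration only; real bitcell metrics are non-Gaussian,
as the results above showed) using far fewer samples than plain Monte
Carlo would need at 5.8 sigma:

```python
import math
import random

random.seed(42)

T = 5.8          # sigma level of interest (failure threshold)
N = 200_000      # far fewer than the ~1e9 plain-MC samples needed

# Importance sampling: draw from N(T, 1), centered on the failure
# region, and reweight each hit by the likelihood ratio
# phi(y) / phi(y - T) = exp(-T*y + T^2/2).
est = 0.0
for _ in range(N):
    y = random.gauss(T, 1.0)
    if y > T:
        est += math.exp(-T * y + T * T / 2.0)
est /= N

# Closed-form reference: P(X > T) for a standard normal
p_true = 0.5 * math.erfc(T / math.sqrt(2.0))

print(f"IS estimate: {est:.3e}")
print(f"Exact tail:  {p_true:.3e}")
```

Plain Monte Carlo would need on the order of 1/p samples, i.e. roughly
a billion, just to see a single failure at 5.8 sigma; the reweighted
sampler gets a tight estimate from 200K draws, which mirrors the
sample-count economics in the benchmark table below.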
Benchmark results:

                     plain HSPICE  Solido HSMC  Solido HSMC  Solido HSMC
                     Monte Carlo     Case 1       Case 2       Case 3
  # of Samples          100K        1 Million    8 Million    1 Billion
  Simulation Time     3.5 hours     0.5 hour     0.5 hour     3 hours
  Sigma Verification  3.5 sigma     4.1 sigma    4.6 sigma    5.9 sigma
Mohamed summarized his results by saying that using HSPICE with Solido
HSMC provided a >10,000X improvement in effective simulation time,
since it gave him 1 Billion samples vs. 100K samples in comparable
runtime; plus he got 'golden simulator' accuracy along the way.
He also warned that Solido's fast sampling techniques are limited to a
few hundred statistical variables. In most practical cases this is OK,
but it needs improvement for future applications.
Mohamed wants to see:
- statistical simulation combined with aging.
- negative-bias/positive-bias temperature instability (NBTI/PBTI)
simulation + Monte Carlo.
- Monte Carlo techniques to handle larger circuits.
- faster high-sigma Monte Carlo reducing the required number
of samples even further.
He would also like a variation-aware optimization flow for device
sizing.
- [ The Watcher ]