( ESNUG 518 Item 6 ) -------------------------------------------- [02/01/13]

   Editor's Note: After reading all the analog R&D here, I was blown
   away by the keynote from the guy who invented SPICE!  Wow!  - John

From: [ Trent McConaghy of Solido Design ]
Subject: Solido Brainiac's Trip Report on ICCAD'12 in San Jose, CA

Hi John,

On Thursday, November 8, 2012, the International Conference on Computer-Aided
Design (ICCAD) held an International Workshop on Design Automation for
Analog and Mixed-Signal Circuits at the Hilton San Jose.

It had 10 speakers, a 6-person panel, and a session with 12 posters.  The
main organizer was Xin Li (CMU), with co-organizers Chandramouli Kashyap
(Intel), Jaeha Kim (Seoul National University), Jun Tao (Fudan University),
Angan Das (Intel), and me (Trent McConaghy of Solido Design Automation). 

I will now summarize each speaker's talk, and then the panel.


JOEL PHILLIPS OF CADENCE:

Joel works on Cadence's front end simulation, analysis, and visualization
tools.  Joel described three overarching themes in his current work:

    (1) increasing scale
    (2) handling "intemperate devices" due to variability, layout dependent
        effects (LDEs), low margin due to low power / low voltage
    (3) "what happened to my schematic?" effects, such as post-layout
        parasitics, EM / IR drop, and LDEs.

These themes / challenges lead to "lots of opportunity in the analog tool 
space", which he and his team have been working on, such as

    (a) more complex checks and constraints
    (b) more powerful verification
    (c) layout-aware front-end design, such as local, quick-and-dirty 
        layouts for parasitic estimation. 

Joel also discussed how CAD theory and practice often differ, and that to
create truly useful CAD tools one must build an industrial-strength
implementation.  He gave an example from his own experience.  Joel has an
outstanding record of research in model order reduction (MOR).  Yet, while
recently working on standard cell characterization, he uncovered fundamental
flaws in MOR math.  He was able to solve the problem specific to standard
cells, but cautioned that a general solution may not be available (and 
that's ok!).


DUAINE PRYOR OF MENTOR GRAPHICS:

Duaine works on circuit simulation at Mentor Graphics.  Duaine discussed the
potential of graphics processing units (GPUs) for speeding up circuit
simulation.  To him, a "minimum interesting" simulation problem is one
that:

    - runs a long transient (versus a short one);
    - is not concerned with DC (let a CPU handle that);
    - involves ordinary circuits (not specialized ones);
    - outputs waveforms.

Therefore one needs a direct solver (i.e. direct matrix solution) with 
double precision, and the vast majority of time is in the simulation loop
(implied by the long-running transient). 

Duaine presented a thorough analysis of the different simulation operations
where GPUs might be used.  He discussed academic reports of 30x-40x speedup.
However, Duaine pointed out that these are "a lot of negative results 
reported as positive results", because the speedups evaporate when compared
with industrial state of the art.  For example, the research needs to 
measure against 8 CPU cores in parallel (versus 1), and the simulation 
operations must consider limitations due to latency and bandwidth (usually
ignored by the LU solver speedups).  Duaine reported the results of his
thorough industrial-grade benchmarking: the overall price/performance 
benefits of GPUs are <2.  In his words: "GPU is not suitable to be 
applicable to this problem", but with caveats: the numbers might change for
example with a breakthrough algorithm to solve Ax=b on a GPU.
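To see why the inner loop dominates, here is a minimal sketch (my own, not Mentor's code) of a long-running transient: backward Euler on an N-node RC ladder yields one sparse solve (C/h + G) v_new = (C/h) v_old + u per timestep, with the matrix factored once and the LU factors reused every step.  The ladder sizes and element values are illustrative.

```python
# Sketch: a transient simulation loop dominated by repeated direct
# (LU) solves, as in Duaine's "minimum interesting" problem.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N, h, steps = 60, 1e-9, 10000          # nodes, timestep, transient length
g, c = 1e-3, 1e-12                     # conductance / capacitance per stage

# Nodal conductance matrix of a resistor ladder (tridiagonal).
main = 2 * g * np.ones(N); main[-1] = g
G = sp.diags([main, -g * np.ones(N - 1), -g * np.ones(N - 1)],
             [0, -1, 1], format="csc")
C = sp.diags(c * np.ones(N), format="csc")

A = (C / h + G).tocsc()
lu = spla.splu(A)                      # direct factorization, done ONCE

u = np.zeros(N); u[0] = g * 1.0        # 1 V step source driving node 0
v = np.zeros(N)
for _ in range(steps):                 # the loop where ~all the time goes
    v = lu.solve(C.dot(v) / h + u)

print(round(float(v[-1]), 2))          # far end settles toward ~1.0 V
```

Reusing the factorization is exactly why GPU LU-solve speedups measured in isolation can mislead: the per-step triangular solves are latency- and bandwidth-bound, not compute-bound.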


MOHAN SUNDERAJAN OF SYNOPSYS:

Mohan works on the Titan ADX analog sizer / floorplanner.  Mohan has been a
key driver of the Titan convex optimization technology in its trajectory
from Barcelona Design, to Sabio Labs (which acquired the  Barcelona IP), to
Magma (which bought Sabio), and now Synopsys (which bought Magma).  Mohan
presented Titan as a systematic methodology for AMS design.  It has three
components: 

    - A model of the circuit from device to circuit to system level. 
      FlexCells are process-independent and specification-independent 
      models of circuits and systems.  Device-level models, which relate
      transconductance/voltage/current with sizes, are auto-generated
      for each process.
    - Problem representation that is amenable to a (convex) optimizer.
      This basically means that the circuit models can be convexified,
      for example as posynomials (sums of power-law terms with positive
      coefficients).  Note that some non-convex mappings can be
      converted to convex form via math tricks.
    - Equation-based (convex) optimizer, such as Geometric Programming.
      As the name implies, these solve a convex optimization problem 
      (one big hill).  Convex optimizers are fast; Steve Boyd (Stanford;
      Barcelona co-founder) suggests expecting a runtime equivalent to
      10-30 least-squares linear solves.
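The convexification trick behind geometric programming can be shown on a toy problem (my own illustration, not Titan's FlexCell models, and using scipy rather than a dedicated GP solver): after the substitution x = e^u, a posynomial program becomes convex, so any local solver finds the global optimum.

```python
# Toy GP: minimize 1/(x*y)  subject to  x + y <= 2,  x, y > 0.
# In log coordinates (u = log x, v = log y) the objective is linear and
# the constraint is a log-sum-exp, i.e. the problem is convex.
# The true optimum is x = y = 1.
import numpy as np
from scipy.optimize import minimize

def objective(z):            # log of 1/(x*y) = -(u + v): linear, convex
    return -(z[0] + z[1])

def constraint(z):           # log(2) - log(e^u + e^v) >= 0  <=>  x+y <= 2
    return np.log(2.0) - np.logaddexp(z[0], z[1])

res = minimize(objective, x0=[np.log(0.5), np.log(0.5)],
               constraints=[{"type": "ineq", "fun": constraint}],
               method="SLSQP")
x, y = np.exp(res.x)
print(round(x, 3), round(y, 3))   # both ~1.0
```

The fast, reliable convergence on this one-big-hill landscape is the practical content of Boyd's "10-30 least-squares solves" rule of thumb.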

Mohan described various flows for the Titan tool, depending on the level of
intervention needed.  Once the tool is set up for a given architecture and
an initial fab process, the user types in specs and the tool rapidly
returns sizings and a floorplan.  It is easy to switch up specs, corners, and
processes.  With more intervention, one can explore different device types 
or different architectures (via changes to Matlab architecture 
specification).  Mohan provided examples on a two-stage opamp, a pipelined
ADC (exploring various architectures), and a SERDES circuit. 


JOHN CARULLI OF TEXAS INSTRUMENTS:

TI has an interesting challenge: it has 80,000 analog products, and it must
test each chip of each product before shipping that chip.  Its status quo
testing approaches date from a time when little data and little compute or
machine learning capability were available.  John didn't mince words: this
old approach wastes a lot of money.  John's goal is to minimize the cost
of testing, while still shipping well-tested products and meeting
time-to-market goals.

John's high-level approach is to "assign an economic value to test", then to
be more proactive in testing by reconciling test with design.  Big Data and
large-scale machine learning will play an increasingly large role.


CHRIS MYERS OF THE UNIVERSITY OF UTAH:

Chris is a professor at the University of Utah.  He has spent the better
part of a decade working on formal verification of AMS circuits, nurturing
the idea from a curiosity to a key roadmap target for future AMS design
systems.  Formal verification has had success in digital design and
software design, in the form of model checking.  Model checking performs 
formal verification via non-determinism and state exploration.  Chris' 
most recent analog formal verification system is LEMA, for LPN Embedded
Mixed-signal Analyzer.  An LPN is a Labeled Petri Net, a graph-based
approach to modeling states and their transitions.  The inputs to LEMA
are a SPICE netlist and a set of analog assertions (specs, device operating
constraints, and in general "unit tests" for analog circuits).  Its formal
verification flow has two steps:

    (1) from a SPICE netlist, run simulation traces and generate a 
        verification model
    (2) from the model and analog assertions, run a satisfiability
        check, and output pass/fail. 
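The "analog assertion" idea can be sketched in miniature (this is my illustration, not LEMA itself): generate a simulation trace, then run pass/fail checks against it, the analog analogue of a unit test.  The trace here is a closed-form RC step response, and the check names and bounds are invented for illustration.

```python
# Minimal "analog unit test": a trace plus pass/fail assertions.
import numpy as np

def rc_step(tau, t):
    """Step response of a first-order RC lowpass."""
    return 1.0 - np.exp(-t / tau)

t = np.linspace(0.0, 10e-9, 1001)          # 10 ns window
v = rc_step(tau=1e-9, t=t)                 # tau = 1 ns

def check_settles(trace, final, tol, t, deadline):
    """Trace must stay within tol of final after the deadline."""
    tail = trace[t >= deadline]
    return bool(np.all(np.abs(tail - final) <= tol))

def check_no_overshoot(trace, final, limit):
    """Trace must never exceed final by more than limit."""
    return bool(np.max(trace) <= final + limit)

results = {
    "settles_within_5tau": check_settles(v, 1.0, 0.01, t, 5e-9),
    "no_overshoot":        check_no_overshoot(v, 1.0, 0.05),
}
print(results)   # both checks pass for a first-order response
```

LEMA's contribution is doing this formally: the checks become satisfiability queries over a verification model, not spot checks over one trace.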

Scalability is a big challenge.  Using simulation traces was an advance,
constraining the state space to realistic scenarios, similar in philosophy
to trajectory-based MOR approaches.  Another advance was the move from
binary decision diagrams (BDDs) to satisfiability modulo theories (SMT),
to take advantage of progress in satisfiability (SAT) solvers.  Even so,
scalability remains the biggest challenge.


JESS CHEN OF QUALCOMM:

Jess presented the next talk, entitled "A Short List of Challenges in
Mixed-Signal Verification."  His core point: the functional verification
space is too big to explore with mixed-signal simulations.

For example, in a design with thousands of 8-bit registers, how does one
verify that users cannot inadvertently lock the chip into a non-operational
mode?  Or in a low-power circuit with multiple power settings, how does one
catch static bugs like missing level shifting, or dynamic bugs like legacy
gate models that do not capture power-off settings?  

To improve speed at the expense of accuracy, one can go from (full) SPICE,
to Fast SPICE, to behavioral models, to event-driven simulation.  Jess 
focused on an approach to the latter: Real Number Modeling (RNM).  Like
fully digital simulation, RNM uses discrete time steps, avoiding the need
to solve differential equations, so it simulates fast.  In RNM, signals are
real valued, which allows analog behavior to be modeled to some fidelity.

Jess described how RNM would ideally work across the whole verification food
chain, for example from DACs to receivers to digital control and finally up
to baseband DSP.  Jess has been exploring flows like this with promising
results.  From Jess' experience, RNM simulation needs better support and 
more consistent implementations.


MARK HOROWITZ OF STANFORD UNIVERSITY:

Mark Horowitz is one of those rare people who is both a world-class analog
designer *and* a world-class tool builder, equally at home at ISSCC and at
DAC (not to mention the NASDAQ).  Mark described how to make (formal) analog
system verification possible.  His answer is the use of abstraction in the
form of linear models (as opposed to ever-faster SPICE).  Digital practices
in abstraction can help to guide analog, as follows:

    - High-level design.  By designing in a *model-first* fashion, 
      digital designers can work at extremely high levels of abstraction.
      Analog designers need to design model-first too. First, attitudes 
      need to change...

    - Attitude to models.  Boolean is a model but digital designers 
      "believe" it (or at least trust it) when working at higher levels.
      In contrast, the analog design perspective is to see models as 
      "just" approximations; or, as Mark put it, "no one would be so
      stupid as to believe a model."  

    - Base-level building blocks.  In digital, these are digital standard
      cells. Each block has well-defined Boolean behavior that is 
      sometimes static (e.g. NAND) and sometimes dynamic (e.g. flip
      flop).  We need a set of analog standard cells, with testbenches
      and assertions.  Mark has been gathering analog cells and 
      testbenches.

    - Appropriate abstraction.  Boolean is the obvious abstraction for 
      digital.  Mark argued that the appropriate abstraction for analog
      circuits is *linear* models, since every analog circuit has a 
      target of linear behavior in the appropriate domain (voltage, time,
      phase, etc).  Mark and his ex-student Jaeha Kim have publications 
      demonstrating this on PLLs, ADCs, and other common blocks.  Mark 
      even challenged the audience to come up with circuits that defied
      the linear stance.  He did acknowledge piecewise-linear (PWL) was
      OK, however.
 
    - Formally-defined functional models and verification.  Mark 
      described research being done at Stanford (his group) and at Seoul 
      National University (Jaeha's group) that takes in PWL approximations
      of signals and composes together linear analog blocks, and outputs
      PWL signals.  Mark's group has used this approach for formal 
      verification: via interval analysis and branch-and-bound, they could
      find all DC operating states of analog systems. 
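The interval-analysis / branch-and-bound idea can be illustrated on a toy dc equation (this is my sketch, not the Stanford code; the equation and tolerances are invented): find ALL solutions of f(v) = 0 by recursively splitting intervals and pruning any interval whose interval-arithmetic image provably excludes zero.  Here f(v) = tanh(2v) - v, roughly a bistable feedback pair, with three operating points: v = 0 and a symmetric pair near +/-0.957.

```python
# Find all dc operating points by interval branch-and-bound.
import math

def f_enclosure(a, b):
    """Interval enclosure of f(v) = tanh(2v) - v over [a, b].
    tanh is increasing, so tanh(2[a,b]) = [tanh(2a), tanh(2b)];
    interval subtraction then gives a sound (over-)enclosure."""
    return math.tanh(2 * a) - b, math.tanh(2 * b) - a

def find_roots(a, b, tol=1e-3, boxes=None):
    if boxes is None:
        boxes = []
    lo, hi = f_enclosure(a, b)
    if lo > 0 or hi < 0:
        return boxes                  # 0 provably not in f([a,b]): prune
    if b - a < tol:
        boxes.append((a, b))          # tiny box that may contain a root
        return boxes
    m = 0.5 * (a + b)
    find_roots(a, m, tol, boxes)      # branch
    find_roots(m, b, tol, boxes)
    return boxes

def cluster(boxes, gap=1e-3):
    """Merge adjacent surviving boxes into distinct root estimates."""
    roots = []
    for a, b in sorted(boxes):
        if roots and a - roots[-1][1] <= gap:
            roots[-1] = (roots[-1][0], b)
        else:
            roots.append((a, b))
    return [0.5 * (a + b) for a, b in roots]

roots = cluster(find_roots(-2.0, 2.0))
print([round(r, 2) for r in roots])   # three operating points
```

The pruning step is what makes the search exhaustive yet tractable: unlike a SPICE dc sweep, no operating point can be missed, which is the guarantee Mark's group exploits.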


RACHAEL PARKER OF INTEL:

Rachael is a highly experienced analog designer at Intel.  She gave a talk
on "analog design challenges in the new era of process scaling".  She 
provided analog circuit numbers that are alarming to SoC design, but perhaps
opportunity for tool builders.  First, she introduced the idea of analog 
*importance* scaling -- how analog's importance has increased with each 
process generation. On Intel microprocessor chips, each process generation 
has brought on average 4 new unique analog blocks.  As of 2010, the total 
count was 40 unique blocks, including 15 PLLs. 

The big problem is that analog does not scale.  This is illustrated by some
specific equations that influence analog circuit performance, such as:

  - threshold voltage variation inversely proportional to the square root
    of device area (Pelgrom's law)

  - oscillator jitter inversely proportional to square root of current 

Because analog doesn't scale, on key Intel chips, analog takes 30% area 
already, and it will take 50% by 2015 and 90% by 2020 if nothing is done.
Power used by analog circuitry will exceed that of core CPU circuitry.  
Analog circuits are "in a vice", because headroom has slipped away over the
last 20 years.  Rachael suggested some basic design solutions: 

    (1) increase capacitor density 
    (2) lower threshold voltage for more headroom, e.g. via FinFETs
    (3) digitally-assisted design to reduce required margin. She wondered
        aloud if there could be "a new way to do analog design with 
        digital, without pretending to be analog." 
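Rachael's area trend follows from simple arithmetic.  The sketch below assumes digital area halves every process generation while analog area stays fixed; the 2x shrink and the generation count are my simplifying assumptions, not Intel data beyond the 30% starting point.

```python
# Back-of-envelope "analog does not scale" area projection.
analog, digital = 0.30, 0.70        # fractions of die area today
for gen in range(1, 6):
    digital /= 2.0                  # digital shrinks; analog does not
    frac = analog / (analog + digital)
    print(f"generation +{gen}: analog = {frac:.0%} of die area")
```

Under these assumptions analog passes 50% of die area within two generations and exceeds 90% after five, consistent with the 2015 / 2020 projections in the talk.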

Rachael also gave her tool "wants" list:

    - Block level:  Analog and digital power-performance-area analysis;
      Digitally-assisted analog partitioning and centering; Variation
      aware and reliability-aware design

    - AMS system-level:  Block-level constraints, system-level 
      integration and validation; Behavioral modeling; Exploration among
      analog, digitally-assisted, digital block implementations; 
      Transistor-level and behavior-level optimization


BOB MULLEN OF TSMC:

Bob is in charge of recommended custom design flows at TSMC.  He outlined
the challenges for custom design at 28 nm and below, and elaborated on each:

    - Double patterning lithography (DPL).  Starting at 20nm, two offset
      mask patterns are needed for every layer, in order to get
      finer line widths.  Every physical design tool needs retooling to
      handle DPL. Higher-level tools may need to capture DPL effects too;
      for example, parasitics between the layers can be captured as a 
      set of ~15 corners.  This means a 15x increase in the total number
      of PVT corners.

    - Variability. Designs are increasingly sensitive to PVT 
      (process, voltage, temperature) conditions. Reduced Vdd (<1V) 
      limits headroom. Transistor mismatch (local variation) can be more
      pronounced. Advanced statistical methods are needed, including 3
      sigma corner extraction, variation-aware sensitivities, and fast 
      high-sigma analysis.
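Why "fast high-sigma analysis" is a tool problem in its own right can be seen from a minimal Monte Carlo sketch (my illustration; the numbers are generic Gaussian statistics, not from any TSMC flow): a 3-sigma failure rate is cheap to estimate, but at 6 sigma plain Monte Carlo would need on the order of a hundred billion simulations.

```python
# Plain Monte Carlo vs the exact Gaussian tail, at 3 and 6 sigma.
import math
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
vth_shift = rng.standard_normal(n)          # normalized mismatch samples

p_mc = np.mean(vth_shift > 3.0)             # empirical 3-sigma tail
p_exact = 0.5 * math.erfc(3.0 / math.sqrt(2))

print(f"3-sigma tail: MC {p_mc:.2e} vs exact {p_exact:.2e}")
p6 = 0.5 * math.erfc(6.0 / math.sqrt(2))    # ~1e-9
print(f"samples for ~100 failures at 6 sigma: {100 / p6:.1e}")
```

Specialized high-sigma methods (importance sampling and the like) exist precisely to shortcut that sample count.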

    - Reliability.  Devices are more susceptible to aging effects (e.g. 
      NBTI, HCI) than before; and new aging effects are emerging (e.g.
      PBTI). Aging and variability interact to cause even more difficulty.

    - RC (+Lk) parasitics. Parasitics from resistors, capacitors, and even
      inductors and mutual inductance (at high frequency) are a greater 
      concern now than ever before.
 
    - Layout dependent effects (LDEs).  Effects like well proximity
      (WPE), poly spacing (PSE), and oxide spacing (OSE) degrade Vth and
      Idsat, causing circuit failures.  They can be avoided by
      guardbanding wells, but that increases area.

    - EM and IR drop. Degradation on wires can hurt circuit performance.

    - Analog layout guidelines. The count for design rules is increasing,
      and constraints on analog layouts have become quite restrictive.

At TSMC, Bob has been working on recommended design flows, finding EDA 
vendor solutions to help handle the challenges.  TSMC's recently-announced
Custom Reference Flow has just a handful of vendors, with tools from 
traditional big players like Cadence and Synopsys, along with key components
from up-and-comers like Solido DA and Berkeley DA. (I am CTO for Solido.)

          ----    ----    ----    ----    ----    ----   ----

PANEL: Challenges in analog design and how CAD can help

The panelists were: Jim Hogan, Rachael Parker, Duaine Pryor, John Carulli,
Mohan Sunderajan, and Joel Phillips. 

    - Jim Hogan drew on his background from Cadence days where he oversaw
      the building of Virtuoso and Spectre, and his more recent 
      background as a highly successful EDA investor.  He outlined how 
      variation is causing issues in modern process nodes, with an 
      example showing traditional FF/SS corners completely missing the
      variation in the duty cycle of a PLL VCO. Simulation time and 
      simulation count is a huge concern, where desired analyses might
      take several days or more. 

    - Jim stated these challenges are giving rise to a "Custom 2.0 
      Retooling" that is both needed, and already underway.  Elements
      include vastly improved simulation speed and capacity, and fast,
      accurate variation analysis.  Jim's responses to panel questions
      could be summarized as:  SPICE-based analysis is where the money 
      is, and that's literally where he's putting his money. Jim has
      elaborated on his Si2 keynote on Custom 2.0 here in DeepChip.

    - Other panelists' stances were in line with what they'd presented
      earlier. For example, Mohan emphasized modeling, and Joel 
      emphasized getting physical early in the flow. Duaine's ideal tool
      is a simulator that models every single thing that has ever caused
      a chip to fail.

    - My favorite quote:

        "Digital is just wasteful analog.  It's analog with a whole
         bunch of harmonics you don't need."

             - Rachael Parker of Intel at ICCAD'12

          ----    ----    ----    ----    ----    ----   ----

THE ICCAD AMS WORKSHOP 2012 KEYNOTE ADDRESS:

I had the honor of introducing the last speaker of the day, Larry Nagel
of Omega Enterprises.  For DeepChip readers newer to EDA: Larry's 1975
PhD thesis was SPICE.

Since that time, Larry has continually improved the state-of-the-art in 
simulation, first at Bell Labs then as a consultant.  Larry's talk was 
entitled "Analog Design: Still Crazy After All These Years."  His talk was
organized by the circuits that had a profound effect on simulation, in 
chronological order.  I had never thought that a list of circuits could be
riveting, but Larry made it so!

    - The Bandgap Reference was invented in 1964.  It was the first
      circuit that you couldn't easily solve to first order with pen
      and paper.  It led Bill Howard to develop BIAS, the very first
      Berkeley EDA software.

    - The Switched-Mode Power Supply (1967) was the first circuit that
      required implicit integration and steady-state analysis.

    - The Op Amp (1968).  The uA741 opamp required a computer to analyze
      second-order effects.  OpAmps have a very large number of measures,
      which required a variety of analyses:  DC, OP, AC small-signal, 
      transient, noise, and distortion. Macromodeling arose out of the
      need to simulate circuits with many opamps.

    - The Gilbert Multiplier (1968) introduced the challenges of widely
      spaced time constants, steady state analysis, mixer noise, 
      and distortion.  It took 20 years before a simulator could properly
      handle these circuits, a fact that the circuit's inventor Barry
      Gilbert often reminded Larry of during that time!

    - The Phase-Locked Loop (PLL) (1969) brought challenges of widely
      spaced time constants, steady-state analysis, and jitter
      simulation.  It also took 20 years to address adequately.
 
    - Dynamic Random Access Memory (DRAM) (1970) was always large and 
      usually gigantic, and digital accuracy wasn't enough.  
      Interestingly, transient analysis solved almost all DRAM 
      simulation problems.

    - Switched-Capacitor Filters (1978) brought challenges of widely
      spaced time constants, steady-state analysis, nonlinear
      frequency-domain analysis, and noise / distortion problems.  They
      led to a new class of simulators, like SWITCAP.

    - The Delta-Sigma Modulator (1978) brought so many challenges that
      behavioral modeling became widely adopted.

    - Dynamic Logic (1982) took an easy task, simulating digital logic,
      and made it complicated, because it had to model leakage and handle 
      floating nodes. 

    - RF CMOS (1988) simulation required device models with good
      derivatives (for distortion), and good noise and distortion
      analysis in general.  It introduced a new class of steady-state
      simulators like Spectre-RF, Eldo-RF, and HSPICE-RF.

    - Since 1988, new circuits have driven development less.  Instead,
      development has been guided by other challenges like complexity,
      variation, and more.

Larry concluded with some nuggets:

    - Analog EDA tools have always been reactive since tool requirements
      are hard to predict. 

    - The need for new tool capabilities never replaces the need for
      legacy capabilities.

    - And finally... the life of an EDA tool builder will be crazy well
      into the future!

          ----    ----    ----    ----    ----    ----   ----

Overall, 12 papers were presented at this AMS EDA workshop, which closed
with the 12 author poster sessions to wrap up the day.

I hope your readers that couldn't make it to ICCAD find this summary useful.

    - Trent McConaghy
      Solido Design Automation                   Vancouver, Canada