( ESNUG 576 Item 4 ) ---------------------------------------------- [09/22/17]
Subject: Amit on Solido machine learning, lib characterization, and Siri
DAC'17 Troublemakers Panel in Austin, TX
Cooley: The deep learning stuff though, isn't that well... Amit you're
barking about deep learning, so that's Solido's new thing?
Isn't this a failure of machine learning, deep learning? I mean
this ... they [Ansys Apache RedHawk] tried it... oops.
Amit: First of all, it's not our new thing. We've been doing machine
learning for 12 years. We had it in our initial product, called
Variation Designer, which used machine learning models to reduce
the amount of SPICE simulation: being able to do variation
analysis and statistical analysis while cutting back the amount
of simulation required.
What we found in developing solutions to that problem space is
that machine learning is hard. It's hard for EDA because you
can't just pull scikit-learn libraries off the shelf, apply
them to EDA, and get the accuracy that you need, because if
you're doing prediction and the prediction isn't brute-force
accurate, your chip is going to fail.
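The idea Amit describes, training a cheap model on a limited budget of SPICE runs and then using it in place of further simulation, can be sketched as a surrogate model. The sketch below is a minimal toy in Python/numpy: `spice_delay` is a made-up stand-in for a SPICE measurement, and the quadratic least-squares fit is a stand-in for whatever proprietary model Solido actually uses; none of the names or numbers come from the source.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed, so results are repeatable

# Toy stand-in for a SPICE measurement: a delay that depends nonlinearly
# on two process-variation parameters (hypothetical, not a real model).
def spice_delay(x):
    return 1.0 + 0.3 * x[:, 0] + 0.1 * x[:, 1] ** 2 + 0.05 * x[:, 0] * x[:, 1]

# Run only 50 "expensive" SPICE simulations to get training data.
train_x = rng.standard_normal((50, 2))
train_y = spice_delay(train_x)

# Surrogate: quadratic polynomial fit by least squares (a minimal
# stand-in for an EDA-grade machine learning model).
def features(x):
    return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1],
                            x[:, 0] ** 2, x[:, 1] ** 2, x[:, 0] * x[:, 1]])

coef, *_ = np.linalg.lstsq(features(train_x), train_y, rcond=None)

# Evaluate 100,000 Monte Carlo samples with the cheap surrogate
# instead of 100,000 more SPICE runs, then check worst-case error.
mc_x = rng.standard_normal((100_000, 2))
pred = features(mc_x) @ coef
err = float(np.max(np.abs(pred - spice_delay(mc_x))))
```

Because the toy delay is exactly quadratic, the fit recovers it almost perfectly; real device behavior is not so kind, which is why Amit stresses brute-force accuracy and verifiability of the predictions.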
Cooley: Right.
Amit: You need machine learning algorithms that can scale to the
hundreds of thousands of dimensions associated with EDA
problems. You need to deal with non-Gaussian distributions and
interaction effects. And it needs to be repeatable.
Cooley: That was the repeatability problem that was biting RedHawk,
or SeaHawk, or whatever Ansys calls it now. (See ESNUG 561 #1)
Amit: The answers need to be verifiable, so that the user can trust
it's not just a result getting spit out. So we've spent over a
dozen years engineering these machine learning technologies for
our Variation Designer product line, and our customers are
coming to us and saying, "you've applied it to variation design;
can you apply it to the library characterization space?"
So earlier this year we announced our own machine learning
characterization suite. With our machine learning technologies
we're able to cut library characterization time in half, taking
it down from months to weeks, or weeks to days, all with
brute-force accuracy, repeatability, and scaling to large
dimensionality. It just takes innovation and R&D to create all
of this.
Cooley: Why are you going into a market that's already crowded? I mean I
think characterization is... who the hell... someone owns that.
Someone. Anirudh don't you own it? Liberate???
Anirudh: Cadence Altos, yeah.
Cooley: Yeah, I think Liberate has something like 70-80% market share.
Amit: Yeah, with the characterization market, the problem is that TSMC
is coming out with every flavor of process: 16FF, 16FFC, 28 low
power, 28 high performance... And now you need to characterize
all these different variants of PDKs across all of your
different process corners. For one of our customers, for
example, it takes 4,000 CPUs and 1 month to do all of their
library characterization.
So it's just a ton of time and resources. If you can do
intelligent machine learning on those kinds of data sets and
cut down that time, that's where the value is. And that's why
our users are seeing the value in our software.
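The characterization win Amit is describing, simulating only a subset of PVT corners and letting a model predict the rest, can be illustrated with a toy sketch. Everything here is hypothetical: `char_delay` is a made-up smooth delay function, not real Liberty data or any foundry's PDK, and the 1/V-and-T basis is chosen to match the toy function, not a real characterization flow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for characterizing one timing arc: delay as a smooth
# function of supply voltage and temperature (hypothetical numbers).
def char_delay(v, t):
    return 0.8 / v + 0.002 * t + 0.001 * t / v

volts = np.linspace(0.6, 1.0, 9)
temps = np.linspace(-40.0, 125.0, 12)
vv, tt = np.meshgrid(volts, temps, indexing="ij")
corners = np.column_stack([vv.ravel(), tt.ravel()])  # 108 V/T corners

# "Simulate" only a quarter of the corners...
idx = rng.permutation(len(corners))[: len(corners) // 4]
train = corners[idx]
train_y = char_delay(train[:, 0], train[:, 1])

# ...and fit a model in 1/V and T to predict all the rest.
def basis(c):
    v, t = c[:, 0], c[:, 1]
    return np.column_stack([np.ones(len(c)), 1.0 / v, t, t / v])

coef, *_ = np.linalg.lstsq(basis(train), train_y, rcond=None)

pred = basis(corners) @ coef
true = char_delay(corners[:, 0], corners[:, 1])
worst_err = float(np.max(np.abs(pred - true)))
```

The 4x reduction in simulated corners here maps onto the months-to-weeks claim only in spirit; the hard part in practice is getting that accuracy on real silicon behavior rather than a convenient closed-form toy.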
John: Jim, this machine learning... is it real or is it bullshit?
Hogan: Yeah, so go back to when characterization started, and certainly
to when we started Altos, for example. The world was using
HSPICE and doing 4 corners. That was the world. So it was not a
huge problem in terms of data. Right? The world changed a lot.
Now there are multiple corners, thousands of corners. So how do
you do that?
If you use just statistical methods, you're into weeks and
months to characterize a library. So you gotta find ways, like
these Solido guys did, to reduce the Monte Carlo runs through
intelligent means, which is machine learning.
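Hogan's "reduce the Monte Carlos through intelligent means" can be sketched as model-guided screening: use a cheap predictor to rank a huge candidate set and spend simulation budget only on the worst-looking tail. This is a toy in the spirit of published high-sigma Monte Carlo ideas, not Solido's actual algorithm; `spice_margin` is a made-up linear metric and the 1% budget is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)  # fixed seed: repeatable, which matters here

# Toy "expensive" SPICE metric: a margin that fails when it drops <= 0
# (a made-up linear model, not a real circuit).
def spice_margin(x):
    return 3.0 - x[:, 0] - 0.5 * x[:, 1]

# 1. Draw a large Monte Carlo candidate set (cheap: no simulation yet).
candidates = rng.standard_normal((200_000, 2))

# 2. Spend a small pilot budget of 500 "SPICE" runs to train a predictor.
pilot, pilot_y = candidates[:500], spice_margin(candidates[:500])
coef, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(500), pilot]), pilot_y, rcond=None)

# 3. Rank all candidates by predicted margin; simulate only the worst 1%.
pred = np.column_stack([np.ones(len(candidates)), candidates]) @ coef
worst = np.argsort(pred)[:2_000]
tail_y = spice_margin(candidates[worst])

# The rare failures cluster in the simulated tail, so roughly 2,500
# simulations stand in for 200,000.
true_fails = int(np.sum(spice_margin(candidates) <= 0.0))
caught = int(np.sum(tail_y <= 0.0))
```

In this toy the predictor is exact, so every true failure lands in the simulated tail; in practice the open question Hogan raises, how you verify the model didn't mis-rank a failure out of the tail, is exactly the trust problem he illustrates with Siri below.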
By the way, machine learning has been at Solido for a long time,
but it was invented back in the '50s, for God's sake. The real
problem is having enough data to train the system. Then Amit
mentioned one other thing that's probably worth talking about.
Let's talk about machine learning. What's an example of machine
learning? Apple Siri is an example of machine learning.
"Siri tell me what the best restaurant in Austin is?"
"Well, the Ironworks, Jim."
Ok, well that's not a bad answer, Siri. But how the hell is that
verifiable? What the hell did the machine do to verify that? By
the way, what criteria did you use to do that? Machine learning
can get off into its own little niche and it might think I like
vegetarian food or something.
This is the big problem that you have to solve. The good news
for EDA, especially simulation, either transistor-level or
RTL-level simulation, is that there's tons of data. With all
that data, you can really build intelligent training systems
that will yield results that shorten up the run time. The basic
thing in simulation, always, is to reduce the number of things
you gotta verify. Number one.
Cooley: OK, cool.
---- ---- ---- ---- ---- ---- ----
Related Articles:
Anirudh, Sawicki, Hogan go at it over the CDNS Pegasus DRC launch
Dean Drako on IC Manage PeerCache P2P caching EDA tool accelerator
Prakash sees 3X revenue in 2017 by crushing SNPS Atrenta Spyglass
Amit on Solido machine learning, lib characterization, and Siri