( ESNUG 588 Item 6 ) ---------------------------------------------- [10/01/19]
Subject: "It's results, NOT algorithms, dummy!"; Clarity vs. HFSS benchmark
I distrust all allegedly new algorithms until I see them fully vetted
first.
---- ---- ---- ---- ---- ---- ----
A matrix solver I don't understand is a matrix solver I don't use.
---- ---- ---- ---- ---- ---- ----
I don't trust secret new algorithms.
---- ---- ---- ---- ---- ---- ----
There's something off about a magic new algorithm that gets blazing
fast results and uses no approximations. Sounds an awful lot like
magic beans to me.
- 56% doubt secret "new" Clarity matrix solver (09/19/19)
From: [ Anon EDA User #1 ]
I'm the same as your 56%.
Whenever possible, I don't use algorithms I don't fully understand.
That's my mathematics degree speaking. Not my engineering job speaking.
- [ Anon EDA User #1 ]
---- ---- ---- ---- ---- ---- ----
From: [ Anon EDA User #2 ]
Your 56% are wrong.
Most medicines we use today are prescribed simply because they are known
to work. Doctors sometimes have a theory of how a medicine works, but
they don't know for sure. All they really know is that it works; hence
they prescribe it.
This has been true for the entire history of medicine, and the same goes
for engineering.
- [ Anon EDA User #2 ]
---- ---- ---- ---- ---- ---- ----
From: [ Anon EDA User #3 ]
Hi, John,
Ever since commercial SPICE simulators came about back in the 1980s, the
details of how each specific commercial SPICE (or 3-D EM FEM) tool solved
its matrices have been a closely guarded secret.
Yes, Anirudh can brag about how good his new "secret sauce" matrix solver
is. In doing so he's just following the tradition of how new commercial
SPICE tools have been launched in the past 30 years. Brag, brag, brag
till the cows come home, and then hope that the individual user benchmarks
confirm that the new matrix solution actually delivers on those brags.
In this way, if Anirudh is lying or exaggerating his claims, word will
very quickly leak out to the customer base.
To expect Anirudh to change this approach and instead publish an open
algorithm so that all can assess its efficiency is, at best, naive and,
at worst, folly.
- [ Anon EDA User #3 ]
---- ---- ---- ---- ---- ---- ----
From: [ Anon EDA User #4 ]
While 56% of your readers might have it wrong, your Reagan Cold War quote
"Trust, but verify."
- Ronald Reagan, U.S. President (1911 - 2004), an old
Russian proverb he often used in nuclear disarmament
discussions with the Soviet Union.
best describes the spirit of how engineering works. Once something is
verified to work, we use it.
It's the results, NOT the algorithms, dummy!
- [ Anon EDA User #4 ]
---- ---- ---- ---- ---- ---- ----
From: [ Anon EDA User #5 ]
Those 56% guys have it wrong. My boss and his boss and his boss above him
don't give a [expletive deleted] about the theory of how our tools work.
All they're concerned about is how well and how quickly it works.
We are engineers. We're not scientists. We make things that must work.
We leave the theories to the science boys.
- [ Anon EDA User #5 ]
---- ---- ---- ---- ---- ---- ----
From: [ Anon EDA User #6 ]
Hi, John,
I love it when you post the user comments like this!
One minor point. Understanding how an algorithm functions is nice, but most,
if not all, engineers will freely use one even if they don't understand it --
with the caveat, of course, that this comes only after extensive testing of
the algorithm. But once it passes muster, the mystery algorithm becomes part
of the engineer's tool chest.
- [ Anon EDA User #6 ]
---- ---- ---- ---- ---- ---- ----
I looked up the U.S. Patent Office applications. So far, there is
nothing filed by Cadence nor Anirudh Devgan regarding any new matrix
solving algorithms!
This means Anirudh's new matrix solution is only known within the
walls of Cadence R&D -- and that no outside academics have done a
formal proof that his algorithm actually works, much less how
accurate it is.
Caveat emptor!
- 56% doubt secret "new" Clarity matrix solver (09/19/19)
From: [ Anon EDA User #7 ]
Hi, John,
As users, we are results driven. We don't care about anything but
results. All this talk about having to understand the inner workings of
Anirudh's secret new matrix solving algorithm does not make sense to us.
Either it works and is faster than HFSS, and we use it. Or it doesn't,
and we stick with HFSS. Everything else is empty talk.
Our group, teaming up with our IC designers and customers' system designers,
covers analysis on various "IC + Package + PCB" combinations for a wide
range of chip types like DDR, high-speed AMS, and high-performance network
servers. We also analyze 3D IC packages and, sometimes, cell phone casings.
My team has been a long-time user of HFSS, which, I agree, is indeed running
out of steam for today's 10/7nm FinFET chips and our sophisticated packaging
of 8-layer flip-chip BGA and 9-layer board designs.
Though HFSS is accurate and full of features, my biggest daily headache
with it is how to include more structures in my EM analysis and how to
get runs to finish in less than days (or weeks), even when I have a big
enough machine to run them on.
When we can't simulate the entire thing, my guys are left asking: "what
segmented divide-and-conquer approach can we take to partition this in
HFSS and then run the system so that it won't mess up our accuracy
requirements?"
When we heard Cadence had a new 3DEM project with claims of "true 3D", and
"significantly higher capacity and performance", we immediately became an
early partner to test out what is now called "Clarity".
HOW WE BENCHMARKED CLARITY
After months of running Clarity on many testcases, I can say their mktg
claims mostly check out.
- Clarity's adaptive meshing works well. It does multiple iterations
until it converges to refined elements that give high accuracy,
capacity and performance.
Take a DDR with a double-layer board (10 MHz - 30 GHz) as an example:
Solution frequency: 30 GHz
Clarity meshing converged element count: 8.9 M+
After 5 iterations, Clarity refined itself to 140,982 mesh elements in the
4th run to get reasonable coverage and accuracy compared to HFSS's count
of 118,427.
Clarity gave a more granular mesh (140,982 vs 118,427 elements) in much
less time (6.75X faster) with better accuracy.
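The iterate-until-converged behavior described above is easiest to see in
miniature. Below is a generic 1-D adaptive-refinement toy of my own (with a
made-up midpoint error estimator -- NOT Clarity's or HFSS's actual meshing
algorithm): split any element whose interpolation error is still too large,
and repeat until nothing needs splitting.

```python
import math

def adaptive_refine(f, tol=1e-3, max_iters=20):
    """Toy 1-D adaptive mesh refinement: split every element whose
    midpoint interpolation error exceeds tol, iterate to convergence."""
    nodes = [0.0, 0.25, 0.5, 0.75, 1.0]          # initial coarse mesh
    for _ in range(max_iters):
        new_nodes = [nodes[0]]
        refined = False
        for a, b in zip(nodes, nodes[1:]):
            mid = 0.5 * (a + b)
            # error estimator: gap between f and its linear interpolant
            err = abs(f(mid) - 0.5 * (f(a) + f(b)))
            if err > tol:
                new_nodes.append(mid)            # split this element
                refined = True
            new_nodes.append(b)
        nodes = new_nodes
        if not refined:                          # converged everywhere
            break
    return nodes

# a sharp feature near x = 0.3 forces heavy refinement there,
# while the smooth regions keep their coarse elements
mesh = adaptive_refine(lambda x: math.tanh(50 * (x - 0.3)))
```

The point of the sketch is the economics: elements multiply only where the
solution is hard, which is why a converged adaptive mesh can carry more
elements than a uniform one yet still solve faster.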
The other thing I can confirm is:
- Clarity's smart partitioning works by multi-CPU parallel execution
1. Massively distributed processing in all stages of the
run, such as adaptive meshing, frequency sweeping and
matrix solving;
2. Near-linear performance and capacity scalability;
3. Better accuracy than HFSS, regardless of the number of
machines/CPUs.
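For point 1, the intuition is that a frequency sweep is embarrassingly
parallel: every frequency point is an independent solve, so the work
distributes cleanly across CPUs and machines. A minimal sketch (Python
threads as a stand-in for multi-machine distribution, and a made-up
one-pole response in place of a real matrix solve):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_at(freq_ghz):
    """Stand-in for one per-frequency EM solve. Each call depends on
    nothing but its own frequency, so the sweep parallelizes freely."""
    # hypothetical one-pole magnitude response, NOT a real field solve
    return freq_ghz, 1.0 / (1.0 + (freq_ghz / 10.0) ** 2)

def parallel_sweep(freqs_ghz, workers=4):
    """Farm the independent per-frequency solves out to a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(solve_at, freqs_ghz))

# sweep 0.5 GHz to 40 GHz in 0.5 GHz steps
sweep = parallel_sweep([0.5 * i for i in range(1, 81)])
```

In a real distributed solver the expensive parts are the per-point matrix
solves, so near-linear scaling holds until communication and the shared
meshing stage start to dominate.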
To benchmark performance and memory use, we took a 26-layer package with
64-ports as a testcase that's big enough to run both HFSS and Clarity.
We ran three CPU core configurations (32/64/240).
From those runs we can say that:
- Clarity is 3.5X faster in the baseline comparison at 32 CPUs;
- Clarity's performance scales to 240 CPU cores, where it is
9.5X faster than HFSS; HFSS doesn't scale well beyond
64 CPU cores;
- Clarity is able to run with an 83% smaller memory footprint:
25-to-35 GB per 8 CPU cores. This means a group of smaller,
and hence cheaper, servers can be used to analyze a larger
design, whereas HFSS only works on fully loaded
1-to-1.5 TB machines.
We also worked with the CDNS Clarity team on a large 8-layer flip-chip
BGA package that had dozens of signals and 100+ ports. With 4 machines
(of 64 CPUs each), we got this performance scalability chart:
Due to competitive sensitivities around our lab measurement data, John,
please pardon me for not sharing my accuracy correlation studies on
Clarity vs. HFSS vs. our test board with you. What I can say is that our
Clarity results matched our HFSS data -- and Clarity also gave results
consistent with our board measurements regardless of the number of
machines/CPUs deployed in the extraction and simulation.
CLARITY GOTCHAS
While the performance is impressive, this is still new/early technology.
Our results with Clarity are encouraging, but we need to do more work
with it. We noticed a few areas that are lacking:
- Clarity is weak in adaptive frequency sampling (AFS). 3D EM solvers
need to generate S-parameters over a large frequency spectrum, say
from 100 MHz up to 40 GHz. The problem here is that current methods
can produce 1000's of sampling points, which cause longer analysis
run times. There are multiple techniques to reduce the number of
sampling points. Clarity's AFS solver generates roughly 2X more
sampling points than HFSS does. We've given this feedback to CDNS,
but it's not clear if they understand how to improve this without
sacrificing accuracy.
- Clarity only supports lumped-port S-parameters, which are less
accurate than wave-port S-parameters. Wave-port support is basic
functionality expected in any system simulation tool, and for
best accuracy we need wave port models for EM simulation. CDNS
R&D plans to support them in a future release. HFSS does both.
- Clarity only runs on Linux boxes, and NOT Windows boxes. This is
bad because we have huge compute farms using Windows. HFSS runs
on either Windows or Linux.
Right now our test cases show Clarity works and it's faster than HFSS and
it uses less memory.
Concerning the "HFSS will catch up in 12 to 18 months" comments: we don't
have time to wait for HFSS to catch up. We are very satisfied with
Clarity's results now, so we have no reason to wait.
The good results *now* are what count the most for us.
- [ Anon EDA User #7 ]
---- ---- ---- ---- ---- ---- ----
Related Articles:
56% of users doubt Anirudh's secret "new" Clarity matrix solver
SCOOP! -- Anirudh goes total war on Ansys mothership at CDNlive'19!
Ansys CEO Ajei Gopal cleverly acquires Helic as #8 "Best of 2018"
Hogan and Anirudh on Ansys ANSS Redhawk/SeaScape vs. CDNS Voltus
Anirudh and Sawicki on iffy Apache IR-drop #'s vs. Voltus/Innovus
Cooley's open letter to ANSS Ajei Gopal to join Troublemaker Panel