( ESNUG 458 Item 8 ) -------------------------------------------- [11/16/06]
From: [ Alice in Wonderland ]
Subject: User evaluating Arithmatica CellMath asks if it really reduces power
Hi John,
Please keep me anonymous.
I searched DeepChip for any user comments or articles by you about the EDA
company "Arithmatica" and/or their "CellMath" tools -- but found nothing.
Arithmatica has the following tools:
1) CellMath Designer and CellMath Builder (CMD)
2) CellMath Optimizer
They claim that math-intensive designs can be synthesized to standard cells
with better power, area, and delay.  I have not run a full evaluation yet
because I first had to resolve a few environment setup issues before I could
run the tool.  (Yes, I have run a small test case to confirm that CellMath
runs and that its results are in line with what Synopsys DC Ultra gives.)
Here's what I know so far.
- These tools are supposed to improve power, area, and timing compared to
Synopsys DW.  CMD generates bit/cycle-accurate C models and performs
auto-pipelining/retiming.  It runs "on-the-fly" power simulation using
generated random vectors; simulation vectors can also be provided by the
user to help CMD estimate power.  Many parameterizable blocks are
available, much like Synopsys DW.
Arithmatica claims the improvements in timing and power are on the order
of 30% to 40% for floating point arithmetic.  For fixed point arithmetic
it is supposedly more in the 10% to 20% range.
The largest block that CMD can accept at this point is about 20K gates.
Larger blocks might be accepted, but the run time might be longer.
All of the above is what Arithmatica claims.
- What I've found is that power improvements can only be achieved on
"data path" types of blocks. Any other block with control logic will
not be improved.
Another downside is that the partitioning of the blocks needs to be done
by the designer.  In other words, it is best to isolate the data paths
down to the smallest Verilog modules and feed those to CMD (see the
hypothetical sketch below).  CMD then outputs a Verilog netlist, which is
fed to Synopsys DC for a compile with the "dont_touch" attribute set,
followed by an incremental DC compile with the "dont_touch" attribute
removed.
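To make "smallest Verilog modules of data paths" concrete, here is a
minimal sketch of my own (a hypothetical example, not from Arithmatica's
docs) of the kind of control-free arithmetic block you would carve out and
hand to CMD.  The module name and widths are made up:

    // Hypothetical datapath-only block: pure arithmetic plus one register,
    // no control logic, so the whole module is fair game for CellMath.
    module mac8 (
      input             clk,
      input             rst_n,
      input      [7:0]  a,
      input      [7:0]  b,
      output reg [19:0] acc     // 16-bit products accumulated over 16 cycles
    );
      wire [15:0] prod = a * b;            // datapath: multiply
      always @(posedge clk or negedge rst_n)
        if (!rst_n) acc <= 20'd0;
        else        acc <= acc + prod;     // datapath: accumulate
    endmodule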
Arithmatica also claims to offer some architecture trade-offs that cannot
be achieved with the current Synopsys DW.
It supports formal verification by generating functional Verilog that is
used as the golden model.  The golden Verilog model is then used for EC or
simulation (a hypothetical testbench sketch is below).
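For the simulation route, here is a rough self-checking testbench sketch of
my own (not Arithmatica's flow): it drives random vectors into the golden
mac8 above and into the CMD-produced netlist, which I have hypothetically
named mac8_opt and assumed to have identical ports:

    // Hypothetical testbench: compare golden functional model vs. CMD netlist.
    // "mac8_opt" is a made-up name for the optimized netlist with the same
    // ports as the golden mac8 module above.
    module tb_mac8;
      reg         clk = 0;
      reg         rst_n;
      reg  [7:0]  a, b;
      wire [19:0] acc_gold, acc_opt;

      mac8     golden (.clk(clk), .rst_n(rst_n), .a(a), .b(b), .acc(acc_gold));
      mac8_opt dut    (.clk(clk), .rst_n(rst_n), .a(a), .b(b), .acc(acc_opt));

      always #5 clk = ~clk;

      integer i;
      initial begin
        rst_n = 0;  a = 0;  b = 0;
        @(negedge clk) rst_n = 1;
        for (i = 0; i < 1000; i = i + 1) begin
          @(negedge clk);               // drive away from the clock edge
          a = $random;
          b = $random;
          #1;
          if (acc_gold !== acc_opt)     // outputs must match cycle by cycle
            $display("MISMATCH at %0t: gold=%h opt=%h",
                     $time, acc_gold, acc_opt);
        end
        $display("1000 random vectors done");
        $finish;
      end
    endmodule

Of course, for signoff you would lean on a real EC run rather than
simulation alone.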
There's no harm in feeding CMD a block that also contains control logic
(non-datapath).  The disadvantage is that the tool will just churn on that
control logic without improving it; only the data path portion of these
blocks gets improved.  An upcoming version that they are working on will
do auto-partitioning and take the data path blocks through the
optimization process.
The tool lets you prioritize area over power, and likewise for timing.
I am wondering about the QoR that people have been able to achieve with
Arithmatica.  Has users' experience indeed been better than what plain old
Synopsys DC Ultra + DesignWare can achieve?
Can their tools really reduce power by as much as 25%?
- [ Alice in Wonderland ]