TRtol setting. Speed vs. accuracy
Al (and others),
In my spice program (b2 Spice A/d 2000), I took parts of xspice and merged
them with spice3f5. One of the changes in xspice nearly doubles the
transient simulation time. The comment suggests the change was made to
improve simulation accuracy. I'd like your thoughts on what value it should
be set to, and whether there's a way to get the best of both convergence
and speed. Here's the code:
/* ARGSUSED */
int
CKTnewTask(ckt,taskPtr,taskName)
GENERIC *ckt;
GENERIC **taskPtr;
IFuid taskName;
{
register TSKtask *tsk;
*taskPtr = (GENERIC *)MALLOC(sizeof(TSKtask));
if(*taskPtr==NULL) return(E_NOMEM);
tsk = *(TSKtask **)taskPtr;
tsk->TSKname = taskName;
/* more stuff in here... */
/* gtri - modify - 4/17/91 - wbk - Change trtol default */
/* Lower default value of trtol to give more accuracy */
/* tsk->TSKtrtol = 7; */
tsk->TSKtrtol = 1;
/* gtri - modify - 4/17/91 - wbk - Change trtol default */
As you can see, the default TSKtrtol has been changed from 7 to 1. That
value later gets copied to ckt->CKTtrtol, which CKTterr uses to bound the
truncation error for many of the device types. In practice, the change
makes the transient analysis take much longer to settle on each next
timepoint. I don't understand this area very well, but I have profiled it
both ways and stepped through it, so I'm confident I have the general idea
right. Since this parameter plays such a big role in simulation speed, I'd
like to understand it better; if anyone can help, I'd appreciate it.
-Jon Engelbert
-----Original Message-----
From: Al Davis [mailto:aldavis@ieee.org]
Sent: Tuesday, May 01, 2001 5:38 PM
To: ng-spice-devel@ieee.ing.uniroma1.it
Subject: Re: [ng-spice-devel] convergence
On Tue, 01 May 2001, Steve Hamm wrote:
> Uh, we have a different definition of "consistent".
> When I mention consistent, I'm meaning that when the iterations
> stop, I want some solution x, such that F(x) is close to zero, and
> F and J have been evaluated at x so that conductances,
> capacitances, currents, etc. have been evaluated at x.
Define it however you want. It is all an approximation anyway. You
even say so in "F(x) is CLOSE TO zero".
What is important is that errors are bounded, with reasonable bounds,
and we have some confidence that the bounds are meaningful.
With either method, if you stop the iteration prematurely you get a
bad solution. If it hasn't converged to a reasonable tolerance it
doesn't matter whether it is "consistent" or not. It is bad anyway.
If the convergence checking is adequate, the last two iterations are
sufficiently close that the error is bounded. You can mix and match
and the error will still be bounded.
Strict attention to this "consistency" detail may let you accept a
solution one iteration sooner, but more likely it distracts us from more
significant issues.
When you throw in bypass, trace algorithms, latency exploitation,
multi-rate, .... you throw out any notion of exactness in terms of
"consistency", but with proper tolerances the error is bounded, and
can be bounded to an arbitrary tolerance.
If you don't throw in bypass, trace algorithms, latency exploitation,
multi-rate, .... you get a slow simulation that is only useful for
small circuits.
I did get some benefit from this, in the form of ideas for
maintaining some notion of "consistency" when limiting and damping are
applied.
By the way ... ACS doesn't even guarantee that all components of
F(x) are solved at the same iteration, or that all components of J(x)
are solved at the same iteration, or even that all components of x
are solved at the same iteration or by the same method. As long as
the errors are bounded, to a reasonable bound, it is ok.