Catching up
Hi Folks,
There's been a lot happening on the lists recently, and I
haven't kept up.
Firstly, thanks for all the Linux recommendations. I tried
Red Hat in lots of different versions, and was never entirely
happy with any of them. They always seemed to have annoying,
stupid bugs. I've also tried the last few Mandrakes, and
I must say they're very "sexy". But again, they had some
annoyingly obvious bugs which I felt showed a lack of
attention to detail. So I settled on Caldera, 'cos I'd
heard that they err on the side of stability rather than
variety or the latest versions. On the whole, I'd say that
seems to be true, so I'll stick with it. The idea of trying
FreeBSD or NetBSD had only fleetingly crossed my mind, and
it's an interesting idea, but I really need to get back to
work, rather than spend time "learning" even more strange
Unix dialects. Al's suggestion of using a Mac is intriguing,
but nobody mentioned BeOS. Anybody tried that ?
The only problem with Caldera is the lack of variety in
the distribution. It was while I was adding some programs
that I like, in particular RXVT, that I came across the
BS/DEL problem. At the time, it took me ages to work
round the problem by individually configuring both rxvt
and bash, and I was despairing that I'd need to do that
if I ever added any other text-based programs. I've since
found out that you can give a couple of options to the
./configure in rxvt to stop it interfering with the BS and
DEL keys, which makes it a lot easier. However, I still
think that the whole BS/DEL fiasco in Unix/Linux is only
easy to handle if you let someone else do it, i.e. use
a distribution. Incidentally, I couldn't get the DEL key
to work properly with bash either way, but I checked with
the KDE terminals, and also on the Solaris machines at
work, and none of them seem to handle DEL properly
within bash.
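For the curious, the rxvt configure options I'm thinking of are
(quoting purely from memory, so check ./configure --help in the
rxvt source before relying on this) something like:

  ./configure --disable-backspace-key --disable-delete-key

And the usual readline recipe people quote for DEL under bash is a
line in ~/.inputrc:

  "\e[3~": delete-char

though whether that sorts out all the terminals I mentioned above
is another question.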
Anyway, enough of the distribution stuff, and on with some
of the other comments that have been appearing on the list
recently, the majority of the most interesting ones coming
from Al Davis. So much has been happening that I can't
think of a way to organise it, so I'll just address them
in roughly chronological order, unless some of them
obviously fit together.
Al's comments on autoconf/makefiles
-----------------------------------
Al gives the impression that he's not that keen on autoconf,
and I'm tempted to agree. I downloaded ACS 0.28 from somewhere,
(not the CVS, cos I still can't connect with my winmodem), and
I've been looking at trying to get that to compile on Win32.
It doesn't seem to have any Makefiles to do that, but at least
I can make some progress by working with the existing ones.
Ng-spice, on the other hand, needs autoconf, which is only
"portable" between Unix-like operating systems. Even then,
I couldn't get ng-spice to compile on cygwin or solaris, so
what did autoconf actually gain us ?
However, I remember the days of Linux before autoconf, and
then almost nothing seemed to compile without some kind of
doctoring of the Makefiles. Nowadays, the "./configure, make,
make install" sequence invariably seems to work ON A LINUX
SYSTEM. It often doesn't work so simply on Solaris. So my
general impression is that autoconf only helps if the author
of the program knows, understands, and has catered for every
flavour of Unix. And who wants to be bothered with all that ?
I'm already pissed off enough trying to learn two flavours
of Unix (Solaris at work and Linux at home).
IMHO autoconf is a band-aid to try to cover the ludicrous
variety of Unices that are around, and it doesn't solve the
fundamental problem. A standard Unix structure would be much
easier for programmers to handle. In fact, a standard C/C++
programming environment would be more to the point: for most
application programs there shouldn't be any need to be closely
linked to the underlying operating system, so text-based
applications should be properly portable between Unix/Linux,
Win32, MacOS, BeOS, RISC OS, etc.
Dream on !! I was under the naive impression that GNU C
at least would be portable between itself on different
systems, but even that is too much to hope for. For example,
in the old outitf.c, I used a macro FLT_MAX (or something
like that), which worked ok in Linux, but in Cygwin and
Solaris, this needed another header file to be included.
Obviously, this header file was being included by some
other header in the Linux version, but how am I supposed
to know which headers include which other headers for
every different installation of gcc for every distribution
of Linux, Solaris, Cygwin, NetBSD, etc. ? And I'm supposed
to know this for every single "standard" C macro which I
want to use in my code. Since most ng-spice source files
have a whole bunch of ng-spice headers included at the top
of each file, and most of those include other ones, I assumed
that if the macro didn't cause an error, then one of the
ng-spice headers must already include the correct <system>
header file to pull in the macro. No need for me to re-
include it. But since this only works on Linux, and the
ng-spice headers are the same on any system, the compiler's
system header files must be different on different systems.
Now, I can see an excuse (if not a good reason) why different
compilers might have different structures of header files,
but I would have thought that the same compiler would have
the same header file structure no matter what the system.
Surely the only differences should be extra or missing
functions for different systems.
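For the record, the header the C standard actually puts FLT_MAX in
is <float.h>, so the portable fix is to #include that explicitly
rather than hope some other header drags it in. A trivial
stand-alone illustration (nothing to do with the real outitf.c code):

  #include <float.h>   /* FLT_MAX and DBL_MAX live here */
  #include <stdio.h>

  int main(void)
  {
      printf("largest float: %g\n", (double)FLT_MAX);
      return 0;
  }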
Anyway, Al, I notice that you have stuff in ACS to handle
Win32. Which compiler were you using ? I'm attempting to
get it to compile with either the freely downloadable
Borland command line compiler, or the Watcom compiler,
which, apparently, is soon to become open-source.
Prof Vincentelli, ACS, Model compilers, etc.
--------------------------------------------
I forget what the actual problem with the Berkeley license was.
But we seem to have our heads down, battering along with ng-spice
without any "decision" being made about the spice v. ACS "question".
Do we intend to make a decision ? Are we going to attempt to support
both, choose one, or start a whole new one ?
Is the main problem with ACS the lack of a bipolar model ? Can that
be easily fixed with the model compiler ? Is it worth the effort
of porting the compiler to ng-spice ? As I mentioned above, I'm
trying to get ACS to compile on a windows system, and at the same
time, I'll be taking a closer look at ACS in general, to see if
I personally want to concentrate my efforts in that direction.
What's our simulator for, anyway ? Do we really expect people to
be doing real work with it, or is it a free tool for hobbyists
and students to get started on or tinker with ? For the latter,
there's no need to have all the latest sexy models anyway. For
the former, we still have a lot of debugging work to do to make
it reliable enough to use in a commercial environment. This latest
set of bugs, where it (in)conveniently ignores temperatures and options,
shows how badly debugged it is. Or how little it's been used, if
nobody's ever noticed such fundamental problems as that before.
If we want to compete with "real", i.e. commercial, simulators,
we need to take a whole different approach to the debugging/testing
area. (Although, I think there are a few commercial ones out there
with glaring errors).
Ng-spice and SMP
----------------
I agree with Al's comments about the main CPU-time hog being
the evaluation of the device model equations, rather than the
actual matrix calculation itself. However, I don't know if I agree
with his conclusions, i.e. that the best we could hope for with
SMP would be a 3x improvement.
Surely, in a circuit with thousands of devices, it would be relatively
easy to split the model evaluation up between as many CPUs as you
like. As long as the final loading of the results into the matrix
doesn't become a bottleneck, we can get an almost arbitrary
speedup limited only by the number of CPUs. The amount of data
each CPU needs to do the evaluation will easily fit in its own
local cache, so they shouldn't be fighting over access to shared
memory.
So I'd say that the evidence leans towards the side that spice or ACS
would benefit from a bit of multi-threading in the model evaluation
area, even if we leave SMPing the matrix code till later. I'm assuming
that multi-threading it would allow the operating system to automatically
use more CPUs if they're available.
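To make the idea concrete, here's the sort of thing I mean, as a
bare-bones pthreads sketch. Everything in it (names, sizes, the toy
"model") is invented for illustration; it's not lifted from the
ngspice source. The point is just that each device evaluation is
independent, so the threads never need to lock anything, and only
the final stamping into the matrix happens serially:

  #include <pthread.h>

  #define NDEV     10000          /* pretend circuit size          */
  #define NTHREADS 4              /* pretend number of CPUs        */

  static double bias[NDEV];       /* inputs: operating point etc.  */
  static double conduct[NDEV];    /* outputs: one slot per device  */

  static void eval_model(int d)   /* stand-in for a real model     */
  {
      conduct[d] = 1.0 / (1.0 + bias[d] * bias[d]);
  }

  struct range { int first, last; };

  static void *worker(void *arg)
  {
      struct range *r = arg;
      int d;
      for (d = r->first; d < r->last; d++)
          eval_model(d);          /* independent per device: no locks */
      return 0;
  }

  int main(void)
  {
      pthread_t tid[NTHREADS];
      struct range rng[NTHREADS];
      int t, chunk = NDEV / NTHREADS;

      for (t = 0; t < NTHREADS; t++) {
          rng[t].first = t * chunk;
          rng[t].last  = (t == NTHREADS - 1) ? NDEV : (t + 1) * chunk;
          pthread_create(&tid[t], 0, worker, &rng[t]);
      }
      for (t = 0; t < NTHREADS; t++)
          pthread_join(tid[t], 0);

      /* serial phase: load conduct[] into the sparse matrix here */
      return 0;
  }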
It might be fun to try SuperLU-ing the matrix code, though ;-) although I
think we already discussed the fact that the code in spice is rather
specialist, and a general matrix solver would probably be a step
back. If one of us understood it properly, though, we might be able
to get the best of both worlds.
"Save all" is a big resource consumer
-------------------------------------
I really must beg to differ here :-) We just said that the main
time consumer was the model evaluation step, but that was actually
a simplification. As circuits get bigger, the model evaluation
effort should just go up linearly with the size of the circuit (i.e.
the number of devices which must be evaluated). However, matrix
solution goes up more than linearly with matrix size. (I can't
remember how much more than linearly.) Also, it would be reasonable
to assume that the bigger the circuit, the more times spice will have
to go round the evaluate-solve loop. So for bigger circuits, the
matrix solution becomes more significant, and the overall effort
goes up more than linearly.
The data size, though, only goes up linearly. In fact, I think I'm
correct in saying that if you just save all the node voltages, then
the data size does not even go up linearly with circuit size. Think of
supply lines and busses, for instance. Many times when you add more
devices, you don't add many new nodes. Even if you save all terminal
currents, that can only go up linearly with circuit size (i.e. number
of devices in the circuit).
Nowadays, PCs all have bus mastering IDE controllers, so the amount
of CPU time needed to save data is negligible, and the data transfer
rate to the hard disk can easily handle the rate at which spice can
generate data. So if you output the data after calculating each
time-step, you effectively get the data saved to disk for free.
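To put some (purely illustrative) numbers on that: saving 10,000
double precision values per timepoint is 80 kB per point, so even
at 100 timepoints a second that's only 8 MB/s, which a bus mastering
IDE disk can sustain without the CPU noticing.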
This assumes that you output the data immediately after it's
calculated, and don't do it the way Spice 2x used to, and some
modern brain-dead ones still do (TI spice, Adice, Spice3/ngspice
in interactive mode): attempt to save all the results in memory
and write them to disk after you've finished the simulation.
This is what I call "the workstation mentality", where the programmer
assumes the machine has an infinite amount of resources and can
handle any size of job he throws at it. This only works until you
reach the point where the hardware can't handle it.
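Just to show how little code the "write it as you go" approach needs,
here's a sketch. It's not the real outitf.c, and all the names are
made up; the point is simply that each timepoint goes straight out
through stdio, so memory use stays flat however long the run is:

  #include <stdio.h>

  /* write one timepoint: the time value, then every saved vector */
  static void emit_point(FILE *fp, double t, const double *vals, int nvals)
  {
      fwrite(&t, sizeof t, 1, fp);
      fwrite(vals, sizeof *vals, nvals, fp);
  }

  int main(void)
  {
      FILE *fp = fopen("example.raw", "wb");   /* made-up file name */
      double v[3] = { 0.0, 1.0, 2.0 };
      int i;

      if (!fp)
          return 1;
      for (i = 0; i < 10; i++)                 /* stand-in for the time loop */
          emit_point(fp, i * 1e-9, v, 3);
      fclose(fp);
      return 0;
  }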
Many moons ago, I used to use PSPICE on a PC, and various others on
Unix workstations, and contrary to popular belief, PSPICE on a PC used
to handle really big simulations better than the workstations did. There
would be a range in the middle where the workstations were better,
due to the fact that they did have much more memory than a PC at the
time, but the workstations all relied on virtual memory, and as soon
as you reach the point where you actually need that, everything goes
very slowly.
The final trick is to be able to read the data back off the disk in a
usefully quick manner, which is what my rawfile format proposal is all
about. This is where the real price is paid for the "save all" option,
at the moment. But we can fix that :-)
Proposed netlist format extension
---------------------------------
Al's proposal makes the best of a bad situation. The spice syntax
is pretty horrible, but we really need to maintain compatibility
with it. It would be nice to get a language scientist to design
us a good unambiguous syntax, but even if we did get one, we really
need access to any new features from within the old spice syntax.
Al's suggestion intuitively looks like it'll work, but we must
ensure that it doesn't break anything.
Do we have to use the dot notation for new models ? Should we keep
the dot for commands, and choose another gobbledygook symbol (in true
Unix style ;-) for the new device types ?
Currently we have * for comments, + for line continuation, and . for
commands. I suppose the _ should be a valid name character, although
since the first character on every spice line is "special", you can't
start a device name with that. How about the C-style \ character,
i.e. treat the following character as English instead of gobbledygook,
and in our case it would mean "don't consider the next character as
a device type identifier".
In fact, you could follow the \ with whatever syntax we see fit. I
would recommend one which is verbose and clear, rather than a new set
of gobbledygook. The spectre syntax looks quite clear and readable to
me, i.e. "identifier (terminal list) type" followed by a sequence of
parameter=value pairs. If we enforce the need for the () brackets
and = signs, the collection of terminal nodes and parameters can be
done by a general-purpose routine, independent of the device
type.
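To make that concrete, here's roughly what I have in mind. The first
line is ordinary spice; the second is an invented illustration of the
escaped, spectre-ish style, not a syntax anyone has agreed on:

  M1 drain gate source bulk NMOS1 W=10u L=1u
  \m1 (drain gate source bulk) nmos1 w=10u l=1u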
This doesn't allow for new commands within the \ syntax, but maybe
that is where it would be more appropriate to extend the . syntax.
Al's proposal is nice and simple, though, so maybe we should just
stick with that. We should reserve .analysis and .command for new
analyses and commands. Either way, we should think carefully
about it, 'cos we don't want to keep extending the syntax in
an ad-hoc manner, and end up with a kludgy mess.
As has been mentioned before, you can have a .option newsyntax
line, and in spectre you can arbitrarily switch syntaxes at
many points in the same file. I'm not so sure I like the idea
of writing the code to dynamically switch parsers in the
middle of a file, but ultimately this would lead to a much
cleaner-looking spice source file, as you could start the file
with a .option newsyntax line, and then exclusively use the new,
hopefully better, clearer syntax.
Presumably we want, eventually, to use a new, better syntax,
and we only need the old stuff to save the effort of converting
old spice source files. Neither Al's suggestion nor mine allows
us to get to a clean new syntax without having either \ or
. at the start of every line, i.e. gobbledygook. How about we
take Al's simple approach for the moment, but meanwhile
work on a proper, human-readable language which could be
"switched on" by the first line in the Spice file ? At the
moment, the first line is totally ignored by the spice
parser, as it's supposed to be an arbitrary title. Suppose
we say that if the title line begins with "Spice 4:" followed
by a title, then a new parser would be used. Maybe we make the
first line have to be exactly "Spice 4:" and have a specific
title command that could appear anywhere in the file.
Or something like that. Something that would eventually allow
us to discard the gobbledygook completely (i.e. arbitrary properties
attached to otherwise meaningless characters, which have different,
equally arbitrary properties in almost every other Unix program).
Possibly we choose a different suffix, in the same way that .cc
or .cpp makes gcc invoke the C++ parser instead of the C one.
(I don't really like that last idea. It smacks of gobbledygook :-)
Documentation for outitf.c
--------------------------
After spending all of Sunday afternoon writing this s**t, I'm now
going to try to get on with what I should be doing, i.e. that
documentation. If this message isn't immediately followed by the
documentation, then you'll know I wasted too much time with this.
I hope you all don't waste too much time reading it ;-)
Cheers,
Alan