Quoting Dan Cross <crossd@gmail.com>:

> On Tue, Jan 6, 2015 at 12:33 PM, Johnny Billquist <bqt@update.uu.se> wrote:
>> On 2015-01-06 17:56, Dan Cross wrote:
>>> I believe that Mary Ann is referring to repeatedly looking up
>>> (presumably different) elements in the entry. Assuming that e.g. `vi`
>>> looks up O(n) elements, where $n$ is the number of elements, doing a
>>> linear scan for each, you'd end up with quadratic behavior.
>>
>> Assuming that you'd look up all the elements of the termcap entry at
>> startup, and did each one from scratch, yes.
>
> Yes. Isn't that exactly what Mary Ann said was happening? :-)

Yes.

>> But that would beg the question: why is vi doing a repeated scan of
>> the terminal entry at startup, if not to find all the capabilities
>> and store them somewhere? And if it is looking up all of them, why
>> not scan the string from start to finish and store the information
>> as it is found? At which point we move from quadratic to linear time.
>
> I don't think she said it did things intelligently, just that that's
> how it did things. :-)
>
>> But now we're getting into the innards of vi, which I never looked
>> at anyway, and I guess is less relevant in this thread anyway.
vi does indeed look up all the various capabilities it will need,
once, when it starts up. It uses the documented interface, which
is tgetent followed by tgetstr/tgetnum/tgetflag for each capability.
tgetent did a linear search.
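
To make the cost concrete, here is a minimal sketch of that startup
pattern (not vi's actual code; it assumes the termcap API as ncurses
still ships it):

/*
 * Sketch of the documented termcap startup pattern.  In the old
 * implementation, tgetent() scanned /etc/termcap linearly for the
 * entry, and each tgetstr()/tgetnum()/tgetflag() call then scanned
 * the entry string from the beginning.  Fetching n capabilities from
 * an entry of length m therefore costs O(n*m): the quadratic behavior
 * discussed above.  Link with -ltermcap or -lncurses.
 */
#include <stdio.h>
#include <stdlib.h>
#include <termcap.h>    /* some systems declare these in <term.h> */

int
main(void)
{
    char entry[1024];                  /* historic entry buffer size */
    char strbuf[1024], *area = strbuf; /* tgetstr() copies into this */
    char *term = getenv("TERM");

    if (term == NULL || tgetent(entry, term) != 1) {
        fprintf(stderr, "no termcap entry\n");
        return 1;
    }

    /* Each of these is a fresh scan of the entry. */
    char *cl = tgetstr("cl", &area);   /* clear screen */
    char *cm = tgetstr("cm", &area);   /* cursor motion */
    int cols = tgetnum("co");          /* columns */
    int am   = tgetflag("am");         /* automatic margins */

    printf("co=%d am=%d cl:%s cm:%s\n", cols, am,
        cl ? "yes" : "no", cm ? "yes" : "no");
    return 0;
}

Scanning the entry once and filling in a table as it goes, as Johnny
suggests, would make the same work linear.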
>> The short of it (from what I got out of it) is that the move from
>> termcap to terminfo was mostly motivated by attribute names moving
>> away from fixed two-character names.
>>
>> A secondary motivation would be performance, but I don't really buy
>> that one. Since we only moved to terminfo on systems with plenty of
>> memory, performance of termcap could easily be on par anyway.
> I tend to agree with you, and I'll go one further: it seems that we
> frequently identify a problem and then go to 11 with the "solution."
> I can buy that termcap performance was an issue; I don't know that
> going directly to hashed terminfo files was the optimal solution.
> A dbm file of termcap data and a hash table in whatever library
> parsed termcap would go a long way towards fixing the performance
> issues. Did termcap have to be discarded just to add longer names?
> I kind of tend to doubt it, but I wasn't there and don't know what
> the design criteria were, so my very-much-after-the-fact
> second-guessing is just that.
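
For what it's worth, a rough sketch of the dbm idea Dan describes,
using the classic ndbm calls (the database name and key scheme here
are invented for illustration, not a historical format):

/*
 * Termcap entries stored whole in an ndbm hash file, keyed by
 * terminal name.  The tgetent() step becomes one hashed fetch
 * instead of a linear scan of /etc/termcap; a hash table inside
 * the library could then do the same for individual capabilities.
 */
#include <fcntl.h>
#include <ndbm.h>
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char entry[4096];
    DBM *db = dbm_open("termcap", O_RDONLY, 0); /* termcap.dir/.pag */

    if (db == NULL)
        return 1;

    datum key = { "vt100", 5 };      /* lookup key: terminal name */
    datum val = dbm_fetch(db, key);  /* one hashed probe */

    if (val.dptr != NULL && (size_t)val.dsize < sizeof entry) {
        memcpy(entry, val.dptr, val.dsize);
        entry[val.dsize] = '\0';
        printf("%s\n", entry);
    }

    dbm_close(db);
    return 0;
}

(A companion program would populate the database once with
dbm_store(), e.g. when /etc/termcap changes.)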
It's been 30+ years, so the memory is a little fuzzy. But as I recall,
I did measure the performance, and that's how I discovered that the
quadratic algorithm was causing a big performance hit on the hardware
available at the time. (I think I was on a VAX-11/750; this would have
been about 1982.)
I was making several improvements at the same time. The biggest one
was rewriting curses to improve the screen update optimization, so it
would use insert/delete line/character on terminals supporting it.
Cleaning up the mess termcap had become (the format was horrible to
update, and I was spending a lot of time on updates with all the new
terminals coming out) and improving startup time (curses also had to
read in a lot of attributes) were part of an overall cleanup. IIRC,
there may also have been some restrictions on parameters to string
capabilities that needed to be generalized.
Hashing could have been done differently, using DBM or some other
method. In fact, I'd used DBM to hash /etc/aliases in delivermail
years earlier (I have an amusing story about the world's slowest
email that I'll tell some other time), but at the time it seemed
best to break with termcap and go with a cleaner format.
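
The payoff of a compiled format, in miniature (this illustrates the
idea only, not the real terminfo file layout): parse the entry once
into tables indexed by fixed capability numbers, and every later
lookup is a constant-time array reference:

/*
 * Illustration only: once a one-time compile step has filled these
 * arrays, no lookup ever scans the entry text again.
 */
#include <stdbool.h>

enum boolcap { CAP_AM, NBOOLCAPS };             /* automatic margins */
enum numcap  { CAP_COLS, CAP_LINES, NNUMCAPS }; /* columns, lines */
enum strcap  { CAP_CLEAR, CAP_CUP, NSTRCAPS };  /* clear, cursor motion */

struct compiled_term {
    bool        bools[NBOOLCAPS];
    short       nums[NNUMCAPS];
    const char *strs[NSTRCAPS];   /* NULL if the terminal lacks it */
};

/* O(1), versus a scan of the entry for every tgetstr() call. */
static inline const char *
get_str(const struct compiled_term *t, enum strcap cap)
{
    return t->strs[cap];
}

A fixed numbering like this is also what frees capability names from
the two-character limit: the name only matters when compiling the
entry, not when looking a capability up.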