GNU bug report logs -
#75593
31.0.50; Faulty macro kills Emacs
> If we wanted to optimize for large lexical environments, we might be
> able to, but we've decided not to do that.
Indeed. The lack of optimization is for two reasons:
- As you say, we expect lexical envs to be small.
- We also expect that performance of interpreted code is not
very significant.
> If there's any evidence that lexenvs are large enough that scanning them
> linearly in this one place is a significant expense, we might want to
> rewrite the entire lexenv code: all of it is O(N) in the size of the
> environment: it assumes lexenvs are small enough to scan linearly, and
> so does the fixed Feval.
I suggest simply comparing the time to bootstrap with and without the
`Flength` call. That way we can base our decision on actual data rather
than just our intuition.
> (I think it would be possible to allow a hash table as the non-list
> final CDR of Feval's lexenv argument, but we'd still end up overriding
> bindings with an alist, so it'd be a bit of a Frankenstein affair).
Also, since lexenvs are mostly modified by extending them (without
affecting the base env, since it might have been captured by a closure)
and then later reverting to the unextended base, hash tables tend to
perform rather poorly.
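A minimal Python sketch of that extend-then-revert pattern (hypothetical names; Emacs's actual lexenv is a Lisp alist, not these Python pairs): extension conses onto a shared tail in O(1), reverting is just forgetting the new head, and any closure that captured the old env still sees it unchanged, while a mutated hash table would have to undo its changes explicitly.

```python
def extend(env, name, value):
    """Extend env with one binding; the old env is shared, not copied."""
    return ((name, value), env)  # O(1); reverting = keep using `env`

def lookup(env, name):
    """Linear scan, innermost binding first (as with an alist)."""
    while env is not None:
        (n, v), env = env
        if n == name:
            return v
    raise KeyError(name)
```

With this representation a closure can simply hold a reference to the env it was created in, and later extensions by the interpreter never disturb it.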
More common choices for handling large lexical environments are balanced
binary trees. Other options generally require first replacing the var
names with something else, such as small integers (e.g. De Bruijn indices),
but that requires a kind of preprocessing which we'd usually call
"compilation" (tho it can be simpler than what `bytecomp.el` does).
Stefan "who likes «Myers stacks»: like singly-linked lists
but with O(log N) time for `nthcdr`"
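For the curious, a Python sketch of the «Myers stack» mentioned in the signature (an applicative random-access stack; names here are made up): each cell carries an extra "jump" pointer, and the jump spans follow a skew-binary pattern, so `nthcdr` can skip ahead in O(log N) hops while `cons` stays O(1) and fully persistent.

```python
class Cell:
    """A cons cell augmented with a 'jump' pointer and its depth."""
    __slots__ = ("value", "next", "jump", "depth")

# Sentinel for the empty stack; it jumps to itself at depth 0.
NIL = Cell.__new__(Cell)
NIL.value, NIL.next, NIL.jump, NIL.depth = None, NIL, NIL, 0

def cons(value, rest=NIL):
    """Push in O(1).  Myers' rule: if the previous two jump spans are
    equal, fuse them into one longer jump; otherwise jump one cell."""
    c = Cell.__new__(Cell)
    c.value, c.next, c.depth = value, rest, rest.depth + 1
    if rest.depth - rest.jump.depth == rest.jump.depth - rest.jump.jump.depth:
        c.jump = rest.jump.jump
    else:
        c.jump = rest
    return c

def nthcdr(n, cell):
    """Drop n cells, taking a jump whenever it does not overshoot."""
    target = cell.depth - n
    while cell.depth > target:
        cell = cell.jump if cell.jump.depth >= target else cell.next
    return cell
```

Because pushing never mutates existing cells, old versions of the stack remain valid, which fits the extend-then-revert usage pattern of lexenvs discussed above.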