GNU bug report logs - #75593
31.0.50; Faulty macro kills Emacs


Package: emacs;

Reported by: Alexander Prähauser <ahprae <at> protonmail.com>

Date: Wed, 15 Jan 2025 19:01:01 UTC

Severity: normal

Found in version 31.0.50




From: Pip Cet <pipcet <at> protonmail.com>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: ahprae <at> protonmail.com, mattiase <at> acm.org, monnier <at> iro.umontreal.ca, 75593 <at> debbugs.gnu.org
Subject: bug#75593: 31.0.50; Faulty macro kills Emacs
Date: Thu, 16 Jan 2025 12:08:37 +0000
"Eli Zaretskii" <eliz <at> gnu.org> writes:

>> Date: Thu, 16 Jan 2025 10:38:21 +0000
>> From: Pip Cet <pipcet <at> protonmail.com>
>> Cc: monnier <at> iro.umontreal.ca, mattiase <at> acm.org, ahprae <at> protonmail.com, 75593 <at> debbugs.gnu.org
>>
>> "Eli Zaretskii" <eliz <at> gnu.org> writes:
>>
>> >> Date: Wed, 15 Jan 2025 22:20:42 +0000
>> >> From: Pip Cet <pipcet <at> protonmail.com>
>> >> Cc: Eli Zaretskii <eliz <at> gnu.org>, Alexander Prähauser <ahprae <at> protonmail.com>, 75593 <at> debbugs.gnu.org
>> >>
>> >> --- a/src/eval.c
>> >> +++ b/src/eval.c
>> >> @@ -2446,8 +2446,11 @@ DEFUN ("eval", Feval, Seval, 1, 2, 0,
>> >>    (Lisp_Object form, Lisp_Object lexical)
>> >>  {
>> >>    specpdl_ref count = SPECPDL_INDEX ();
>> >> -  specbind (Qinternal_interpreter_environment,
>> >> -	    CONSP (lexical) || NILP (lexical) ? lexical : list_of_t);
>> >> +  if (CONSP (lexical) || NILP (lexical))
>> >> +    Flength (lexical);
>> >> +  else
>> >> +    lexical = list_of_t;
>> >> +  specbind (Qinternal_interpreter_environment, lexical);
>> >>    return unbind_to (count, eval_sub (form));
>> >>  }
>> >
>> > Isn't there a cheaper way of verifying LEXICAL is a list, not a cons
>> > cell?
>>
>> I don't think there is, no.  As I explained, it's unlikely to be a
>> significant expense: lexical environments are not very large, usually,
>> and all other users of Vinternal_interpreter_environment are optimized
>> for small lists which are scanned until we find the right entry.  If we
>> wanted to optimize for large lexical environments, we might be able to,
>> but we've decided not to do that.
>
> OK, so can you then benchmark this?  E.g., by byte-compiling all the
> *.el files in the tree with and without the change?  Or maybe there's
> something appropriate in the benchmark suite?  Or both?

I'll see what I can do.  The patch certainly can be improved by checking
NILP (lexical) first and calling Flength only if it's a cons cell; nil
appears to be the common case: building Emacs calls Feval with a cons
cell only once or twice, and in those cases it's a single-item list.
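In case it helps make that reordering concrete, here is a standalone
sketch using a toy tagged-value struct; the names are hypothetical
stand-ins, not Emacs's actual Lisp_Object or Flength:

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a Lisp value: nil, a cons cell, or some other object.
   The real code works on Emacs's tagged Lisp_Object values. */
enum tag { TAG_NIL, TAG_CONS, TAG_OTHER };
struct value { enum tag tag; struct value *cdr; };

/* Reordered validation: take the nil fast path first, walk the list
   only when it is a cons, and reject everything else (in eval.c the
   caller would substitute list_of_t in that last case).  Unlike
   Emacs's Flength, this toy walk does not detect circular lists. */
static bool
lexical_ok (const struct value *v)
{
  if (v->tag == TAG_NIL)
    return true;                    /* common case: no walk at all */
  if (v->tag != TAG_CONS)
    return false;                   /* non-list: caller substitutes */
  while (v->tag == TAG_CONS)        /* proper-list walk, as Flength does */
    v = v->cdr;
  return v->tag == TAG_NIL;         /* dotted list is not a proper list */
}
```

The nil test first means the common path pays no list traversal at all.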

>> > How about calling Flength only "#ifdef ENABLE_CHECKING", and otherwise
>> > just verifying that LEXICAL is a list?
>>
>> ENABLE_CHECKING shouldn't affect code behavior in such ways, no.
>
> ?? It already does.  We discover problems and bugs earlier all the
> time when running a build with --enable-checking, because it aborts
> before it runs into a potentially fatal problem.

Calling Flength is very different from aborting, IMHO.

>> I
>> don't think a crash is acceptable here, but if people end up disabling
>> ENABLE_CHECKING and using XCAR on a non-cons, a crash is the *best*
>> possible outcome: more likely, we'd just be creating nonsensical
>> Lisp_Objects which may cause memory corruption.
>
> I think the best outcome is to signal an error, not to crash, if that
> is feasible.

Indeed.  That's what Flength does.

>> Making behavior depend on ENABLE_CHECKING in such ways will cause bugs
>> in !ENABLE_CHECKING builds that cannot be debugged because the
>> corresponding ENABLE_CHECKING build will never reach the problematic
>> code.
>
> I wrote that under the assumption that we could test for a proper list
> without calling Flength.  If that is not possible/feasible, that point
> is moot, and IMO we should only see if the run-time price is
> acceptable.

(We can use list_length, not Flength, but the difference won't be
significant.  Or Fproper_list_p, of course, which might make it clearer
that we're not interested in the length at all.)

>> (Also, this is not primarily about circular lists, it's about non-lists
>> in general).
>
> That should be easy to detect up front, no?  I thought the issue was
> about a cons cell vs a proper list.

Sorry, but I don't see a way of checking that we're looking at a proper
list that's cheaper than list_length: we need to find the last cons
cell, so we must walk the list anyway, and counting cells as we go
should not affect performance.
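To illustrate why the count is free, here is the same toy model as a
single walk that answers both questions at once (hypothetical names;
Emacs's real list_length also detects circular lists with a two-pointer
scan, omitted here):

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy model of a Lisp value: nil, a cons cell, or some other object. */
enum tag { TAG_NIL, TAG_CONS, TAG_OTHER };
struct value { enum tag tag; struct value *cdr; };

/* One pass yields both the length and the properness of the list:
   incrementing a counter adds nothing to the cost of finding the
   last cdr, which is the point made above. */
static long
walk_list (const struct value *v, bool *proper)
{
  long n = 0;
  while (v->tag == TAG_CONS)
    {
      n++;
      v = v->cdr;
    }
  *proper = (v->tag == TAG_NIL);  /* anything else means a dotted list */
  return n;
}
```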

>> > IOW, can we prevent crashes in a way that is cheap in production
>> > builds, since 'eval' is quite a hotspot?
>>
>> If there's any evidence that lexenvs are large enough that scanning them
>> linearly in this one place is a significant expense
>
> I didn't say that; it remains to be established.  Thus my suggestion
> above to benchmark.  But if there is a less expensive way, we could
> take it regardless: one doesn't have to prove that a simple test is
> less expensive than scanning the entire list structure.

I don't think there is a simple test.  I may be wrong, of course.

Pip







