GNU bug report logs - #64735
29.0.92; find invocations are ~15x slower because of ignores


Package: emacs;

Reported by: Spencer Baugh <sbaugh <at> janestreet.com>

Date: Wed, 19 Jul 2023 21:17:02 UTC

Severity: normal

Found in version 29.0.92



Message #521 received at 64735 <at> debbugs.gnu.org:

From: Dmitry Gutov <dmitry <at> gutov.dev>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: luangruo <at> yahoo.com, sbaugh <at> janestreet.com, yantar92 <at> posteo.net,
 64735 <at> debbugs.gnu.org
Subject: Re: bug#64735: 29.0.92; find invocations are ~15x slower because of
 ignores
Date: Tue, 12 Sep 2023 23:27:49 +0300
On 12/09/2023 22:35, Eli Zaretskii wrote:
>> Date: Tue, 12 Sep 2023 21:48:37 +0300
>> Cc: luangruo <at> yahoo.com, sbaugh <at> janestreet.com, yantar92 <at> posteo.net,
>>   64735 <at> debbugs.gnu.org
>> From: Dmitry Gutov <dmitry <at> gutov.dev>
>>
>>> then we could try extending
>>> internal-default-process-filter (or writing a new filter function
>>> similar to it) so that it inserts the stuff into the gap and then uses
>>> decode_coding_gap,
>>
>> Can that work at all? By the time internal-default-process-filter is
>> called, we have already turned the string from char* into Lisp_Object
>> text, which we then pass to it. So consing has already happened, IIUC.
> 
> That's why I said "or writing a new filter function".
> read_and_dispose_of_process_output will have to call this new filter
> differently, passing it the raw text read from the subprocess, where
> read_and_dispose_of_process_output currently first decodes the text and
> produces a Lisp string from it.  Then the filter would need to do
> something similar to what insert-file-contents does: insert the raw
> input into the gap, then call decode_coding_gap to decode that
> in-place.

Does the patch from my last patch-bearing email look similar enough to 
what you're describing?

The one called read_and_insert_process_output.diff

The result there, though, is that a "filter" (in the sense that 
make-process uses that term) is not used at all.
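
(To be clear, at the Lisp level the observable behavior stays the same: 
the fast path would apply to processes that rely on the default output 
handling, i.e. no :filter, just a buffer which receives the decoded 
output. A usage sketch, with made-up names, not the patch itself:

;; Sketch only: output is decoded and inserted straight into the
;; process buffer, with no Lisp filter involved.
(let ((buf (generate-new-buffer " *find-output*")))
  (make-process :name "find"
                :buffer buf
                :command '("find" "." "-type" "f")
                :sentinel (lambda (proc _event)
                            (when (memq (process-status proc) '(exit signal))
                              (message "find done: %d bytes of output"
                                       (buffer-size (process-buffer proc)))))))
)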

>>> which converts inserted bytes in-place -- that, at
>>> least, will be correct and will avoid consing intermediate temporary
>>> strings from the process output, then decoding them, then inserting
>>> them.  Other than that, the -2 and -3 variants are very close
>>> runners-up of -5, so maybe I'm missing something, but I see no reason
>>> to be too excited here?  I mean, 0.89 vs 0.92? really?
>>
>> The important part is not 0.89 vs 0.92 (that would be meaningless
>> indeed), but that we have an _asynchronous_ implementation of the feature
>> that works as fast as the existing synchronous one (or faster! if we
>> also bind read-process-output-max to a large value, the time is 0.72).
>>
>> The possible applications for that range from simple (printing progress
>> bar while the scan is happening) to more advanced (launching a
>> concurrent process where we pipe the received file names concurrently to
>> 'xargs grep'), including visuals (xref buffer which shows the
>> intermediate search results right away, updating them gradually, all
>> without blocking the UI).
> 
> Hold your horses.  Emacs only reads output from sub-processes when
> it's idle.  So printing a progress bar (which makes Emacs not idle)
> with the asynchronous implementation is basically the same as having
> the synchronous implementation call some callback from time to time
> (which will then show the progress).

Obviously there is more work to be done, including further design and 
benchmarking. But unlike before, at least the starting performance 
(before further features are added) is not worse.
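
(The read-process-output-max part of those numbers is nothing fancy, by 
the way; roughly the following, assuming the binding stays in effect 
while the output is being drained:

(let* ((read-process-output-max (* 1024 1024)) ; bigger reads, fewer intermediate strings
       (proc (make-process :name "find"
                           :buffer (generate-new-buffer " *find*")
                           :command '("find" "/usr" "-type" "f"))))
  ;; Drain the output synchronously, for timing purposes.
  (while (accept-process-output proc))
  (buffer-size (process-buffer proc)))
)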

Note that variant -5 is somewhat limited since it doesn't use a 
filter: that means no callbacks are issued while the output is 
arriving, so if it's taken as the base, whatever refreshes we want 
would have to be initiated from somewhere else, e.g. from a timer.
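
Roughly like this (a sketch with made-up names; assumes lexical-binding 
for the closures):

(defvar my-scan-refresh-timer nil)

(defun my-start-scan ()
  "Start an asynchronous scan and refresh the display from a timer."
  (let ((proc (make-process :name "find"
                            :buffer (generate-new-buffer "*scan*")
                            :command '("find" "." "-type" "f"))))
    (setq my-scan-refresh-timer
          (run-with-timer 0.2 0.2
                          (lambda ()
                            ;; Stand-in for the real refresh, e.g. updating
                            ;; an xref-style buffer from what has arrived.
                            (message "scanned %d bytes so far"
                                     (buffer-size (process-buffer proc))))))
    (set-process-sentinel
     proc
     (lambda (p _event)
       (unless (process-live-p p)
         (cancel-timer my-scan-refresh-timer))))))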

> As for piping to another process, this is best handled by using a
> shell pipe, without passing stuff through Emacs.  And even if you do
> need to pass it through Emacs, you could do the same with the
> synchronous implementation -- only the "xargs" part needs to be
> asynchronous, the part that reads file names does not.  Right?

Yes and no: if both steps are asynchronous, the final output window 
could be displayed right away, rather than waiting for the first step 
(or both) to finish. That can be a meaningful improvement for some 
users (and remains an upside of 'M-x rgrep').
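
Something along these lines (made-up names; a real version would need 
batching and more care with unusual file names):

;; Assumes lexical-binding: feed file names from 'find' to a concurrent
;; 'xargs grep' as they arrive, and show the output buffer right away.
(defun my-async-grep (regexp)
  (let* ((out (generate-new-buffer "*pipeline-grep*"))
         (grep (make-process :name "xargs-grep"
                             :buffer out
                             :command (list "xargs" "-d" "\n" "grep" "-nH" "-e" regexp)))
         (find (make-process :name "find"
                             :command '("find" "." "-type" "f")
                             :filter (lambda (_proc chunk)
                                       ;; Forward the names as they come in.
                                       (process-send-string grep chunk))
                             :sentinel (lambda (p _event)
                                         (unless (process-live-p p)
                                           (process-send-eof grep))))))
    (display-buffer out)
    (list find grep)))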

> Please note: I'm not saying that the asynchronous implementation is
> not interesting.  It might even have advantages in some specific use
> cases.  So it is good to have it.  It just isn't a breakthrough,
> that's all.

Not a breakthrough, of course, just a lower-level insight (hopefully).

I do think it would be meaningful to reduce the runtime of a real-life 
operation (one that includes other work as well) by 10-20% solely by 
reducing GC pressure in a generic facility like process output handling.

> And if we want to use it in production, we should
> probably work on adding that special default filter which inserts and
> decodes directly into the buffer, because that will probably lower the
> GC pressure and thus has hope of being faster.  Or even replace the
> default filter implementation with that new one.

But a filter must be a Lisp function, which can't help but accept only 
Lisp strings (not C strings) as its argument. Isn't that right?
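
To spell out what I mean: the convention documented for make-process is 
that the filter is called with two arguments, the process and a string, 
and by that point the string has already been decoded and consed. For 
example:

;; The filter always receives a decoded Lisp string as its second argument.
(make-process :name "find"
              :command '("find" "." "-type" "f")
              :filter (lambda (_proc string)
                        (with-current-buffer (get-buffer-create "*filter-demo*")
                          (goto-char (point-max))
                          (insert string))))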






