GNU bug report logs - #64735
29.0.92; find invocations are ~15x slower because of ignores


Package: emacs;

Reported by: Spencer Baugh <sbaugh <at> janestreet.com>

Date: Wed, 19 Jul 2023 21:17:02 UTC

Severity: normal

Found in version 29.0.92


From: Dmitry Gutov <dmitry <at> gutov.dev>
To: Eli Zaretskii <eliz <at> gnu.org>
Cc: luangruo <at> yahoo.com, sbaugh <at> janestreet.com, yantar92 <at> posteo.net, 64735 <at> debbugs.gnu.org
Subject: bug#64735: 29.0.92; find invocations are ~15x slower because of ignores
Date: Thu, 27 Jul 2023 16:30:56 +0300
On 27/07/2023 08:22, Eli Zaretskii wrote:
>> Date: Thu, 27 Jul 2023 03:41:29 +0300
>> Cc: Eli Zaretskii <eliz <at> gnu.org>, luangruo <at> yahoo.com, sbaugh <at> janestreet.com,
>>   64735 <at> debbugs.gnu.org
>> From: Dmitry Gutov <dmitry <at> gutov.dev>
>>
>>> I have modified `directory-files-recursively' to avoid O(N^2) `nconc'
>>> calls and to bypass regexp matching when REGEXP is nil.
>>
>> Sounds good. I haven't examined the diff closely, but it sounds like an
>> improvement that can be applied irrespective of how this discussion ends.
> 
> That change should be submitted as a separate issue and discussed in
> detail before we decide we can make it.

Sure.
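
For context, a rough sketch of the kind of change being described (not 
the actual patch): push file names onto the result list instead of 
`nconc'-ing each batch onto the tail of an ever-growing one, and skip 
the regexp match entirely when REGEXP is nil. The `my/' name is made up 
here.

;; Illustrative sketch only, not the patch under discussion.
(defun my/files-recursively (dir &optional regexp)
  "Collect files under DIR recursively; simplified illustration."
  (let (result)
    (dolist (file (directory-files
                   dir t directory-files-no-dot-files-regexp t))
      (if (file-directory-p file)
          ;; Splice each subdirectory's result in one pass.
          (setq result (nconc (my/files-recursively file regexp) result))
        ;; Bypass regexp matching entirely when REGEXP is nil.
        (when (or (null regexp)
                  (string-match-p regexp (file-name-nondirectory file)))
          (push file result))))
    result))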

>>> If we forget about GC, the Elisp version can get fairly close to GNU
>>> find. And if we do not perform regexp matching (which makes sense
>>> when the REGEXP is ""), the Elisp version is faster.
>>
>> We can't really forget about GC, though.
> 
> But we could temporarily lift the threshold while this function runs,
> if that leads to significant savings.

I mean, everything's doable, but if we do this for this function, why 
not for others? Most long-running code would see an improvement from 
that kind of change (the 'find'-based solutions too).

IIRC the main drawback is running out of memory under extreme 
conditions or on low-memory platforms/devices. And it's not like this 
feature is particularly well protected against that.
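
Just for reference, that suggestion could be sketched roughly like so 
(the wrapper name is made up and the threshold value is arbitrary; the 
binding is restored, and a GC may then run, when the let exits):

(defun my/files-without-gc (dir regexp)
  "Call `directory-files-recursively' with GC effectively disabled.
Illustrative sketch only."
  (let ((gc-cons-threshold most-positive-fixnum))
    (directory-files-recursively dir regexp)))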

>> But the above numbers make me hopeful about the async-parallel solution,
>> implying that the parallelization really can help (and offset whatever
>> latency we lose on pselect), as soon as we determine the source of extra
>> consing and decide what to do about it.
> 
> Isn't it clear that additional consing comes from the fact that we
> first insert the Find's output into a buffer or produce a string from
> it, and then chop that into individual file names?

But we do that in all the 'find'-based solutions: the synchronous one 
takes the buffer text and chops it into strings, the first asynchronous 
one does the same, and the third ("with-find-p") works from a process 
filter, chopping up the strings that get passed to it.

But the amount of time spent in GC is different, with most of the 
difference in performance attributable to it: if we subtract time spent 
in GC, the runtimes are approximately equal.
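
For reference, the synchronous variant amounts to roughly the sketch 
below (simplified, with made-up names and find arguments); every 
element that split-string returns is a freshly allocated Lisp string, 
which is where that consing comes from:

(defun my/find-files-sync (dir)
  "Run find synchronously and chop its output into file-name strings.
Simplified illustration, not the code under discussion."
  (with-temp-buffer
    (call-process "find" nil t nil (expand-file-name dir) "-type" "f")
    (split-string (buffer-string) "\n" t)))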

I can imagine that the filter-based approach necessarily creates more 
strings (to pass to the filter function). Maybe we could increase those 
strings' size (and thus reduce their number) by increasing the read 
buffer size? I haven't found a relevant variable for that, though.

Or, if there were some other callback that runs after each chunk of 
output arrives from the process, we could parse it from the buffer. But 
the insertion into the buffer would need to be made efficient 
(apparently internal-default-process-filter currently receives the same 
sequence of strings as the other filters do for input, with the same 
amount of consing).
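
To sketch that buffer-parsing idea (illustrative names only; note that 
the chunk strings are still allocated by the process machinery before 
the filter sees them): the filter inserts each chunk into the process 
buffer and only extracts complete lines, leaving any partial trailing 
line for the next chunk.

(defvar my/find-results nil
  "Accumulated file names; illustrative only.")

(defun my/find-filter (proc chunk)
  "Insert CHUNK into PROC's buffer and collect complete lines from it.
Assumes the process was created with a buffer."
  (with-current-buffer (process-buffer proc)
    (goto-char (point-max))
    (insert chunk)
    (goto-char (point-min))
    ;; A partial line at the end stays in the buffer for next time.
    (while (search-forward "\n" nil t)
      (push (buffer-substring (point-min) (1- (point))) my/find-results)
      (delete-region (point-min) (point)))))

Something like (make-process :name "find" :buffer (generate-new-buffer 
" *find*") :command '("find" "." "-type" "f") :filter #'my/find-filter) 
would exercise it.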



