GNU bug report logs - #66020
[PATCH] Reduce GC churn in read_process_output
Reported by: Dmitry Gutov <dmitry <at> gutov.dev>
Date: Sat, 16 Sep 2023 01:27:02 UTC
Severity: wishlist
Tags: patch
Done: Dmitry Gutov <dmitry <at> gutov.dev>
Bug is archived. No further changes may be made.
Message #58 received at 66020 <at> debbugs.gnu.org:
On 21/09/2023 20:33, Dmitry Gutov wrote:
> On 21/09/2023 17:37, Dmitry Gutov wrote:
>> We could look into improving that part specifically: for example,
>> reading from the process multiple times into 'chars' right away while
>> there is still pending output present (either looping inside
>> read_process_output, or calling it in a loop in
>> wait_reading_process_output, at least until the process' buffered
>> output is exhausted). That could reduce reactivity, however (can we
>> find out how much is already buffered in advance, and only loop until
>> we exhaust that length?)
>
> Hmm, the naive patch below offers some improvement for the value 4096,
> but still not comparable to raising the buffer size: 0.76 -> 0.72.
>
> diff --git a/src/process.c b/src/process.c
> index 2376d0f288d..a550e223f78 100644
> --- a/src/process.c
> +++ b/src/process.c
> @@ -5893,7 +5893,7 @@ wait_reading_process_output (intmax_t time_limit, int nsecs, int read_kbd,
>        && ((fd_callback_info[channel].flags & (KEYBOARD_FD | PROCESS_FD))
>            == PROCESS_FD))
>      {
> -      int nread;
> +      int nread = 0, nnread;
>
>        /* If waiting for this channel, arrange to return as
>           soon as no more input to be processed.  No more
> @@ -5912,7 +5912,13 @@ wait_reading_process_output (intmax_t time_limit, int nsecs, int read_kbd,
>        /* Read data from the process, starting with our
>           buffered-ahead character if we have one.  */
>
> -      nread = read_process_output (proc, channel);
> +      do
> +        {
> +          nnread = read_process_output (proc, channel);
> +          nread += nnread;
> +        }
> +      while (nnread >= 4096);
> +
>        if ((!wait_proc || wait_proc == XPROCESS (proc))
>            && got_some_output < nread)
>          got_some_output = nread;
>
>
> And "unlocking" the pipe size on the external process takes the
> performance further up a notch (by default it's much larger): 0.72 -> 0.65.
>
> diff --git a/src/process.c b/src/process.c
> index 2376d0f288d..85fc1b4d0c8 100644
> --- a/src/process.c
> +++ b/src/process.c
> @@ -2206,10 +2206,10 @@ create_process (Lisp_Object process, char **new_argv, Lisp_Object current_dir)
>        inchannel = p->open_fd[READ_FROM_SUBPROCESS];
>        forkout = p->open_fd[SUBPROCESS_STDOUT];
>
> -#if (defined (GNU_LINUX) || defined __ANDROID__) \
> -  && defined (F_SETPIPE_SZ)
> -      fcntl (inchannel, F_SETPIPE_SZ, read_process_output_max);
> -#endif /* (GNU_LINUX || __ANDROID__) && F_SETPIPE_SZ */
> +/* #if (defined (GNU_LINUX) || defined __ANDROID__) \ */
> +/*   && defined (F_SETPIPE_SZ) */
> +/*      fcntl (inchannel, F_SETPIPE_SZ, read_process_output_max); */
> +/* #endif /\* (GNU_LINUX || __ANDROID__) && F_SETPIPE_SZ *\/ */
>      }
>
>    if (!NILP (p->stderrproc))
>
> Apparently the patch from bug#55737 also made things a little worse by
> default, by limiting concurrency (the external process has to wait while
> the pipe is full, and by default Linux's pipe buffer is larger). Just
> commenting it out makes performance a little better as well, though not
> as much as the two patches together.
>
> Note that both changes above are just PoC (e.g. the hardcoded 4096, and
> probably other details like carryover).
>
> I've tried to make a more nuanced loop inside read_process_output
> instead (as a replacement for the first patch above), and so far it
> performs worse than the baseline. If anyone can see what I'm doing wrong
> (see attachment), comments are very welcome.
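
(For context, stripped of the Emacs specifics, the looping idea from the
first patch above boils down to something like the sketch below; the
4096-byte chunk and the names are made up, and the real code additionally
has to deal with carryover, non-blocking reads and the process filter.)

#include <unistd.h>

/* Keep reading from FD while each read fills a whole CHUNK, i.e. while
   the pipe most likely still has more data buffered.  A short read,
   EOF or an error ends the loop.  */
#define CHUNK 4096

static ssize_t
drain_fd (int fd, void (*consume) (const char *, size_t))
{
  char buf[CHUNK];
  ssize_t total = 0, n;

  do
    {
      n = read (fd, buf, CHUNK);
      if (n <= 0)
        break;
      consume (buf, n);
      total += n;
    }
  while (n == CHUNK);

  return total;
}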
This seems to have been a dead end: while looping does indeed make
things faster, it doesn't really fit the approach of the
'adaptive_read_buffering' part that's implemented in read_process_output.
And if the external process is crazy fast (while we are not so fast,
e.g. when a Lisp filter is involved), the result could be much reduced
interactivity, with this one process keeping us stuck in the loop.
But it seems I've found an answer to one previous question: "can we find
out how much is already buffered in advance?"
The patch below asks the OS for that number (how portable is this? not sure)
and allocates a larger buffer when more output has been buffered. If we
keep the OS's default pipe buffer size (64K on Linux and 16K-ish on
macOS, according to
https://unix.stackexchange.com/questions/11946/how-big-is-the-pipe-buffer),
that means auto-scaling the buffer on Emacs's side depending on how much
the process outputs. The effect on performance is similar to the
previous (looping) patch (0.70 -> 0.65), and is comparable to bumping
read-process-output-max to 65536.
So if we do decide to bump the default, I suppose the below should not
be necessary. And I don't know whether we should be concerned about
fragmentation: this way buffers do get allocated in different sizes
(almost always multiples of 4096, but with rare exceptions among larger
values).
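If fragmentation does become a concern, I guess we could round the
kernel's answer up to a fixed block size, so that the allocations come
in a small number of uniform sizes. A purely hypothetical helper (not
part of the patch below), which would be used as
readmax = MAX (readmax, round_up_to_block (available_read, 4096)):

#include <stddef.h>

/* Round NBYTES up to a multiple of BLOCK (e.g. 4096), so read buffers
   come in a handful of uniform sizes rather than arbitrary ones.  */
static size_t
round_up_to_block (size_t nbytes, size_t block)
{
  return (nbytes + block - 1) / block * block;
}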
diff --git a/src/process.c b/src/process.c
index 2376d0f288d..13cf6d6c50d 100644
--- a/src/process.c
+++ b/src/process.c
@@ -6137,7 +6145,18 @@
   specpdl_ref count = SPECPDL_INDEX ();
   Lisp_Object odeactivate;
   char *chars;
+#ifdef USABLE_FIONREAD
+#ifdef DATAGRAM_SOCKETS
+  if (!DATAGRAM_CHAN_P (channel))
+#endif
+    {
+      int available_read;
+      ioctl (p->infd, FIONREAD, &available_read);
+      readmax = MAX (readmax, available_read);
+    }
+#endif
+
   USE_SAFE_ALLOCA;
   chars = SAFE_ALLOCA (sizeof coding->carryover + readmax);
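
As for portability, a tiny standalone probe along these lines should show
what FIONREAD reports for a pipe on a given platform (just a sketch; on
some systems FIONREAD lives in <sys/filio.h> rather than <sys/ioctl.h>):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>

/* Write a few bytes into a pipe, then ask the kernel how many bytes
   are buffered on the read end.  */
int
main (void)
{
  int fds[2], available = 0;
  const char payload[] = "some pending process output";

  if (pipe (fds) != 0 || write (fds[1], payload, strlen (payload)) < 0)
    return 1;
  if (ioctl (fds[0], FIONREAD, &available) != 0)
    {
      perror ("FIONREAD");
      return 1;
    }
  printf ("%d bytes buffered\n", available);
  return 0;
}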
What do people think?