GNU bug report logs -
#78773
[PATCH] Speedup url-retrieve-synchronously for low-latency connections
> From: Steven Allen <steven <at> stebalien.com>
> Cc: rpluim <at> gmail.com, 78773 <at> debbugs.gnu.org, larsi <at> gnus.org, ian <at> iankelling.org
> Date: Mon, 16 Jun 2025 09:10:28 -0700
>
> Eli Zaretskii <eliz <at> gnu.org> writes:
>
> >     /* If we've got some output and haven't limited our timeout
> >        with adaptive read buffering, limit it.  */
> >     if (got_some_output > 0 && !process_skipped
> >         && (timeout.tv_sec
> >             || timeout.tv_nsec > READ_OUTPUT_DELAY_INCREMENT))
> >       timeout = make_timespec (0, READ_OUTPUT_DELAY_INCREMENT);
> >
> > This clearly leaves any existing wait that is shorter than 10 ms
> > unaltered.
>
> Fair enough, we can be more precise here:
>
> (a) after reading output from any process,
>
> (b) when we didn't skip any of those processes due to adaptive read
> buffering,
>
> (c) when we're not waiting on any specific process†,
>
> it waits an additional 10ms after reading the output, or the
> remainder of the timeout, whichever is shorter, before returning to
> the caller.
Agreed.
> My point is that it should reduce the timeout to 0 in this case because
> we've just read process output (got_some_output > 0) and we KNOW that we
> haven't read too little output from any of the processes we've read from
> because process_skipped is false (i.e., we did NOT skip a process
> because it had too little output available for us to read).
I think you are looking at this only from the POV of how fast should
the caller of wait_reading_process_output return. But that's not the
only use of this function: it is also used in applications where
reading large amounts of sub-process output should be as fast as
possible. In those other applications, waiting for a short while in
case there's another chunk soon might be preferable, for minimizing
the time it takes to read the entire output from the sub-process.
> >> > Can we please agree that as long as we think the problem happens due
> >> > to some specific feature of the code, such as
> >> > process-adaptive-read-buffering, the fix should be limited _only_ to
> >> > that feature, and should not affect any other uses? Because that's
> >> > the only way to make changes in this code without risking regressions.
> >>
> >> I don't think the problem happens because of
> >> `process-adaptive-read-buffering'. I think the problem happens because,
> >> in an attempt to make adaptive read buffering work again, an
> >> unconditional 10ms delay was added to `accept-process-output'.
> >
> > I'm not talking about the reason _why_ the problem happens, I'm
> > talking about which parts of the code it is okay to touch to fix the
> > problem to minimize the risk of regressions in unrelated scenarios.
> > If the problem happens only when process-adaptive-read-buffering is
> > nil, let's not change code that's used when it's non-nil, and vice
> > versa.
>
> The problem happens regardless of the value of
> process-adaptive-read-buffering.
Once again, I'm talking about which case the modifications should
affect. If url-retrieve-synchronously always uses the nil value, then
whatever changes we make should not affect the case when this variable
is non-nil, because then we are running the risk of introducing
unintended regressions.