GNU bug report logs - #78773
[PATCH] Speedup url-retrieve-synchronously for low-latency connections


Package: emacs;

Reported by: Steven Allen <steven <at> stebalien.com>

Date: Thu, 12 Jun 2025 04:10:02 UTC

Severity: normal

Tags: patch



Message #8 received at 78773 <at> debbugs.gnu.org:

From: Eli Zaretskii <eliz <at> gnu.org>
To: Steven Allen <steven <at> stebalien.com>, Robert Pluim <rpluim <at> gmail.com>
Cc: 78773 <at> debbugs.gnu.org, larsi <at> gnus.org, dick.r.chiang <at> gmail.com
Subject: Re: bug#78773: [PATCH] Speedup url-retrieve-synchronously for
 low-latency connections
Date: Thu, 12 Jun 2025 09:45:17 +0300
> Cc: Lars Ingebrigtsen <larsi <at> gnus.org>, dick <dick.r.chiang <at> gmail.com>
> Date: Wed, 11 Jun 2025 21:08:45 -0700
> From:  Steven Allen via "Bug reports for GNU Emacs,
>  the Swiss army knife of text editors" <bug-gnu-emacs <at> gnu.org>
> 
> **Present context:**
> 
> I'm running into an issue in emacs-syncthing [1] where a few localhost
> network requests take a full second (blocking Emacs) when they should
> be nearly instantaneous. While digging into
> `url-retrieve-synchronously', I found that waiting on the specific
> network process, instead of passing nil to `accept-process-output',
> made these requests effectively instantaneous (although I have no idea
> why). See the attached patch.
> 
> [1]: https://github.com/KeyWeeUsr/emacs-syncthing
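> 
> The idea, roughly (a sketch only; `proc', `retrieval-done', and
> `asynch-buffer' are the names used by the wait loop in
> `url-retrieve-synchronously'):
> 
>     (let ((proc (get-buffer-process asynch-buffer)))
>       (while (not retrieval-done)
>         ;; Wait on this request's own network process instead of
>         ;; passing nil ("any process").
>         (accept-process-output proc 0.05)))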
> 
> To test this patch, you can:
> 
> 1. Run a simple web server on localhost. I usually use
> 
>     python -m http.server --bind 127.0.0.1 8000
> 
> 2. Evaluate:  (benchmark 100 '(url-retrieve-synchronously "http://127.0.0.1:8000"))
> 
> With this patch, this form is ~16x faster on my machine. I've also
> tested this against a remote machine with a ~45ms latency and found a
> 1.25x speedup.
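> 
> For reference, a variant of the form in step 2 that also reports the
> elapsed time (assuming the same local server is running):
> 
>     ;; Returns (ELAPSED-SECONDS GC-RUNS GC-SECONDS).
>     (benchmark-run 100
>       (url-retrieve-synchronously "http://127.0.0.1:8000"))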
> 
> I've confirmed that this isn't busy-waiting by modifying this code to
> print a message each time it loops: it loops the same number of times
> with or without my patch.
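> 
> E.g. a counter along these lines (illustrative only; the real loop
> lives in `url-retrieve-synchronously'):
> 
>     (let ((iterations 0))
>       (while (not retrieval-done)
>         (accept-process-output proc 0.05)
>         (setq iterations (1+ iterations)))
>       (message "waited %d times" iterations))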
> 
> I've confirmed that this speedup is strictly due to passing the target
> process to `accept-process-output' by applying my patch and then
> changing JUST "proc" to "nil":
> 
>                     ;; ms, so split the difference.
>                     (accept-process-output proc 0.05))
> 
> to
> 
>                     ;; ms, so split the difference.
>                     (accept-process-output nil 0.05))
> 
> With this one change, this code goes back to being as slow as it was before.

What happens if you leave the 1st argument of accept-process-output at
its current nil value, but change the 2nd argument to be 0.005 instead
of 0.05 (i.e., 10 times smaller)?
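
That is, something like

  (accept-process-output nil 0.005)

in the same place in the loop.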

Also, how many other sub-processes (of any kind, not just network
processes) do you have in that session when you are testing this
issue?
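
(One quick way to check: evaluate

  (length (process-list))

in that session; `process-list' counts network connections as well as
ordinary subprocesses.)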

> **Historical context**
> 
> `url-retrieve-synchronously' was changed to wait on "nil" in
> 
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=49897
> 
> which was motivated by this bug report:
> 
> https://debbugs.gnu.org/cgi/bugreport.cgi?bug=49861
> 
> However, the patch in question also changed the rest of
> `url-retrieve-synchronously', so I'm hoping the issue lies elsewhere.

Unfortunately, I doubt that we will get any useful answers to this
question.  We need to understand better why asking for output from a
single process has such a dramatic effect in your case with localhost
requests.  If it happens only with localhost requests, perhaps we
could make some changes only for that case.

Robert, any ideas or suggestions?



