GNU bug report logs -
#71295
29.3; url-retrieve-synchronously does not timeout if initial connection hangs
On 07/06/2024 09:20, Eli Zaretskii wrote:
>> Date: Thu, 6 Jun 2024 23:41:39 +0300
>> Cc: 71295 <at> debbugs.gnu.org, azeng <at> janestreet.com
>> From: Dmitry Gutov <dmitry <at> gutov.dev>
>>
>> On 06/06/2024 14:40, Eli Zaretskii wrote:
>>>> This is probably rather naive of me, but I guess now I'm wondering why url-retrieve-synchronously actually sets url-asynchronous to nil. Is there a good reason not to use :nowait when it is available? It seems like it would be useful to have a wrapper around url-retrieve that just "does what I mean" here.
>>> Maybe. I wonder what others think about this.
>> It seems like a leaky abstraction (the caller has to be aware that what
>> happens under the covers is done in several steps, and the timeout only
>> applies to subsequent ones).
>>
>> If we could change the implementation to a more intuitive behavior, that
>> would be a win, I think. Can somebody think of adverse effects?
> Do you have a patch to consider?
This seems to work:
diff --git a/lisp/url/url.el b/lisp/url/url.el
index dea251b453b..3b4021ceca8 100644
--- a/lisp/url/url.el
+++ b/lisp/url/url.el
@@ -235,7 +235,7 @@ url-retrieve-synchronously
 TIMEOUT is passed, it should be a number that says (in seconds)
 how long to wait for a response before giving up."
   (url-do-setup)
-  (let* (url-asynchronous
+  (let* ((url-asynchronous t)
          data-buffer
          (callback (lambda (&rest _args)
                      (setq data-buffer (current-buffer))
At first I was going to suggest the patch which reduces to Aaron's
url-retrieve-synchronously-but-dont-hang implementation, but it misses
the process cleanup stuff.
The above change seems more harmless. In my brief testing anyway.
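For reference, the caller-side approach mentioned above can be sketched roughly as follows. This is only an illustration, not Aaron's actual url-retrieve-synchronously-but-dont-hang code: the function name and the deadline loop are hypothetical, and, as noted, this style of workaround skips the process-cleanup logic that url-retrieve-synchronously performs.

;; Rough sketch (names hypothetical): call the asynchronous
;; `url-retrieve', which honors `url-asynchronous' and so connects
;; with :nowait, then wait for the callback under our own deadline,
;; so the timeout also covers the initial connection.  Unlike the
;; patch above, this does not replicate url.el's process cleanup.
(require 'url)

(defun my-url-retrieve-with-deadline (url timeout)
  "Fetch URL, giving up after TIMEOUT seconds, connect time included.
Return the response buffer, or nil on timeout."
  (let ((url-asynchronous t)
        (data-buffer nil))
    (url-retrieve url
                  (lambda (&rest _) (setq data-buffer (current-buffer)))
                  nil t)
    (let ((deadline (+ (float-time) timeout)))
      (while (and (not data-buffer) (< (float-time) deadline))
        (accept-process-output nil 0.1)))
    data-buffer))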