GNU bug report logs -
#9101
timeout should use setitimer if available
Reported by: Paul Eggert <eggert <at> cs.ucla.edu>
Date: Sat, 16 Jul 2011 19:29:02 UTC
Severity: normal
Tags: fixed
Done: Assaf Gordon <assafgordon <at> gmail.com>
Bug is archived. No further changes may be made.
Message #17 received at 9101 <at> debbugs.gnu.org (full text, mbox):
On 18/07/11 17:44, Paul Eggert wrote:
> On 07/18/11 03:01, Pádraig Brady wrote:
>> I'll apply this soon.
>
> Thanks for doing that. Some comments:
>
> I see that my bug report was incoherent, as it was talking about
> nanosecond resolution and setitimer, when I meant to be writing about
> timer_settime (which *does* have nanosecond resolution). Sorry about
> that.
>
> POSIX says that setitimer is obsolescent, and that applications should use
> timer_settime. How about if we add a gnulib module for timer_settime
> and have 'timeout' use that instead? (This could be a separate patch,
> done later.)
Hmm. We don't need the extra resolution or functionality of timer_settime(),
so I prefer the simpler setitimer() interface TBH.
Currently setitimer should be more portable too, I guess.
No rush with timer_settime() I think.
>
> + struct timespec ts = dtotimespec (duration);
> + tv.tv_sec = MIN (ts.tv_sec, LONG_MAX); /* Assuming tv_sec is positive. */
>
> That second statement should be removed. time_t is not necessarily
> the same as 'long', and setitimer should work with values longer than
> LONG_MAX, if time_t is wider than 'long'.
Bah, I was confused by the Linux setitimer man page,
which explicitly states that 'long' is used for setitimer.
Also, on FreeBSD the value is truncated to 100000000.
> And parse_duration when
> combined with dtotimespec guarantees that tv_sec is nonnegative here
> (it might be zero), so that comment needs to be removed too.
Well, the comment was confirming that for the purpose of that calculation :)
>
> + if (tv.tv_sec != LONG_MAX)
>
> This LONG_MAX should be TYPE_MAXIMUM (time_t).
Could tv_sec ever be 'long', with 'long' being a different type from time_t?
I suppose we could use _GL_INT_MAXIMUM (tv.tv_sec),
but I'll assume time_t is always used for the moment.
I'll use the following:
static void
settimeout (double duration)
{
  struct timeval tv;
  struct timespec ts = dtotimespec (duration);
  tv.tv_sec = ts.tv_sec;
  tv.tv_usec = (ts.tv_nsec + 999) / 1000;  /* round up to whole microseconds */
  if (tv.tv_usec == 1000 * 1000)
    {
      if (tv.tv_sec != TYPE_MAXIMUM (time_t))
        {
          tv.tv_sec++;
          tv.tv_usec = 0;
        }
      else
        tv.tv_usec--;
    }
  struct itimerval it = { {0, 0}, tv };
  if (setitimer (ITIMER_REAL, &it, NULL) == -1)
    error (EXIT_FAILURE, errno, _("error setting timer"));
}
> Of course all of this rounding-and-checking-for-overflow business
> goes away once we go to timer_settime....
Talking about overflow, for my own reference:
I noticed that some information is lost
in the double -> 64-bit conversion:
$ strace -e setitimer src/timeout 9223372036852775425 sleep 0
setitimer(ITIMER_REAL, {it_interval={0, 0}, it_value={9223372036852775936, 0}}, NULL) = 0
Not a practical issue for `timeout` at least.
I'm marking this bug done now.
thanks for the review,
Pádraig.