On 11/08/2010 08:22 AM, Pádraig Brady wrote:
> On 08/11/10 14:33, Jim Meyering wrote:
>> Looks like I got very lucky here and hit a number of nanoseconds
>> that happened to be a multiple of 100,000:
>>
>> $ for i in $(seq 1000); do touch -d '1970-01-01 18:43:33.5000000000' 2; t=$(stat -c "%.W %.X %.Y %.Z" 2); test $(echo "$t"|wc -c) -lt 57 && echo "$t"; done
>> 0.000000 63813.500000 63813.500000 1289224045.731146
>> 0.0000 63813.5000 63813.5000 1289224047.8224
>> [Exit 1]
>>
>> I realize this is due to the way the precision estimation
>> heuristic works.  Wondering if there's a less-surprising
>> way to do that.
>
> You could snap to milli, micro, nano,
> though that would just mean it would
> happen less often.

On Cygwin, the default precision is 100 ns (that is, 7 digits).

See also gnulib/lib/utimecmp.c, which likewise sets up a hash table that
tries to determine a file system's default resolution.  In particular, it
makes a 100% correct analysis on file systems that support
_PC_TIMESTAMP_RESOLUTION (see the sketch appended below), but support for
that is still pretty rare today.  In fact, since Paul originally wrote
utimecmp.c, I'm surprised that you rewrote the coreutils hash table from
scratch rather than trying to reuse that code.

--
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org
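
For illustration, here is a minimal sketch of the pathconf() query in
question; it is not the gnulib utimecmp.c code, just the basic idea,
assuming a POSIX.1-2008 system that defines _PC_TIMESTAMP_RESOLUTION
(the file argument and messages are made up for the example):

/* Ask the file system for its timestamp resolution, where supported.
   pathconf() returns the resolution in nanoseconds, or -1 with errno
   unchanged if the value is indeterminate.  */
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  const char *file = argc > 1 ? argv[1] : ".";

#ifdef _PC_TIMESTAMP_RESOLUTION
  errno = 0;
  long res = pathconf (file, _PC_TIMESTAMP_RESOLUTION);
  if (res > 0)
    printf ("%s: timestamp resolution is %ld ns\n", file, res);
  else if (errno != 0)
    perror ("pathconf");
  else
    printf ("%s: timestamp resolution is indeterminate\n", file);
#else
  /* No _PC_TIMESTAMP_RESOLUTION: fall back to guessing, e.g. the
     utimecmp.c-style hash table heuristic.  */
  printf ("_PC_TIMESTAMP_RESOLUTION not available on this system\n");
#endif
  return 0;
}

Where pathconf() reports the resolution there is nothing to guess; on
everything else, a heuristic like the one discussed above is still needed.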