GNU bug report logs - #69535
Problem with copying an EXTREMELY large file - cmp finds a mismatch
Reported by: Brian <b_lists <at> patandbrian.org>
Date: Mon, 4 Mar 2024 04:27:02 UTC
Severity: normal
Done: Paul Eggert <eggert <at> cs.ucla.edu>
Message #11 received at 69535 <at> debbugs.gnu.org:
On 3/4/24 03:10, Paul Eggert wrote:
> Try running 'strace -o tr cp data.dat original' and then look at the
> file 'tr' (which could be quite large). Look for the syscalls near the
> start, and near the end, of the bulk copy.
>
> Quite possibly it's a bug in your Linux drivers or your firmware or
> hardware. For example, if you're using ZFS, see:
>
> https://github.com/openzfs/zfs/issues/15526
>
> The strace output might help figure this out.
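A minimal sketch of the inspection suggested above, assuming the trace is written to a file named 'tr' as in Paul's command (the -n values are arbitrary):

    strace -o tr cp data.dat original   # record every syscall cp makes into 'tr'
    head -n 50 tr                       # syscalls near the start of the bulk copy
    tail -n 50 tr                       # syscalls near the end of the copy and cleanup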
My drives are formatted using ext4. The command above did indeed
produce a large output file, almost 40 megabytes of it, but deleting
every line that started with
read(3,
or
write(4,
(there were over 300,000 pairs) got the file down to a far more
manageable 7 KB. At first glance, it doesn't make much sense to me,
but I will try going through it line-by-line tomorrow (it's silly
o'clock at the moment) and see whether anything jumps out at me.
Thanks for the help.
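For reference, a sketch of the trimming described above, again assuming the trace file is named 'tr' as in Paul's command; 'tr-trimmed' is just an illustrative output name:

    # drop the bulk-copy read/write pairs, keeping only setup and teardown syscalls
    grep -v -e '^read(3,' -e '^write(4,' tr > tr-trimmed
    wc -l tr tr-trimmed                 # line counts before and after trimming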