GNU bug report logs - #59382
cp(1) tries to allocate too much memory if filesystem blocksizes are unusual


Package: coreutils

Reported by: Korn Andras <korn-gnu.org <at> elan.rulez.org>

Date: Sat, 19 Nov 2022 09:26:03 UTC

Severity: normal

Done: Pádraig Brady <P <at> draigBrady.com>

Bug is archived. No further changes may be made.



Message #20 received at 59382 <at> debbugs.gnu.org:

From: Paul Eggert <eggert <at> cs.ucla.edu>
To: Korn Andras <korn-gnu.org <at> elan.rulez.org>
Cc: 59382 <at> debbugs.gnu.org, Pádraig Brady <P <at> draigBrady.com>
Subject: Re: bug#59382: cp(1) tries to allocate too much memory if filesystem
 blocksizes are unusual
Date: Sun, 20 Nov 2022 09:29:33 -0800
On 2022-11-19 22:43, Korn Andras wrote:
> the same file can contain records of different
> sizes. Reductio ad absurdum: the "optimal" blocksize for reading may in fact
> depend on the position within the file (and only apply to the next read).

This sort of problem exists on traditional devices as well. A tape drive 
can have records of different sizes. For these devices, the best 
approach is to allocate a buffer of the maximum blocksize the drive 
supports.

For the file you describe, the situation is different, since ZFS will 
straddle small blocks during I/O. Although there's no single "best", I 
would guess that it'd typically be better to report the blocksize 
currently in use for creating new blocks (which would be a power of two 
for ZFS), as that will map better to how programs like cp deal with 
blocksizes. This may not be perfect, but it'd be better than what ZFS 
does now, at least for the instances of 'cp' that are already out there.
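
To make that last point concrete, here is a minimal, hypothetical C
sketch of how a cp-like program might turn st_blksize into a read
buffer size. The specific bounds (a 128 KiB default and a 4 MiB cap)
and the power-of-two check are assumptions chosen for illustration,
not the actual coreutils logic:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

/* Illustrative bounds only: fall back to 128 KiB, and never trust an
   absurdly large or non-power-of-two st_blksize. */
enum { DEFAULT_BUFSIZE = 128 * 1024, MAX_BUFSIZE = 4 * 1024 * 1024 };

static size_t
choose_bufsize (int fd)
{
  struct stat st;
  if (fstat (fd, &st) != 0 || st.st_blksize <= 0)
    return DEFAULT_BUFSIZE;

  size_t blksize = st.st_blksize;

  /* A power-of-two block size maps cleanly onto aligned reads; treat
     anything else, or anything huge, as a hint to be ignored.  */
  if (MAX_BUFSIZE < blksize || (blksize & (blksize - 1)) != 0)
    return DEFAULT_BUFSIZE;

  return blksize < DEFAULT_BUFSIZE ? DEFAULT_BUFSIZE : blksize;
}

int
main (int argc, char **argv)
{
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s FILE\n", argv[0]);
      return 1;
    }
  int fd = open (argv[1], O_RDONLY);
  if (fd < 0)
    {
      perror (argv[1]);
      return 1;
    }
  printf ("would copy %s in chunks of %zu bytes\n",
          argv[1], choose_bufsize (fd));
  close (fd);
  return 0;
}

With a filesystem that reports a power of two such as 131072, this
reads in 128 KiB chunks; with an unusual value like the ones in this
report, it falls back to the default rather than sizing a buffer to
match.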




