GNU bug report logs - #59382
cp(1) tries to allocate too much memory if filesystem blocksizes are unusual


Package: coreutils;

Reported by: Korn Andras <korn-gnu.org <at> elan.rulez.org>

Date: Sat, 19 Nov 2022 09:26:03 UTC

Severity: normal

Done: Pádraig Brady <P <at> draigBrady.com>

Bug is archived. No further changes may be made.

Message #11 received at 59382 <at> debbugs.gnu.org (full text, mbox):

From: Paul Eggert <eggert <at> cs.ucla.edu>
To: Pádraig Brady <P <at> draigBrady.com>,
 Korn Andras <korn-gnu.org <at> elan.rulez.org>, 59382 <at> debbugs.gnu.org
Subject: Re: bug#59382: cp(1) tries to allocate too much memory if filesystem
 blocksizes are unusual
Date: Sat, 19 Nov 2022 19:50:06 -0800
[Message part 1 (text/plain, inline)]
>> The block size for filesystems can also be quite large (currently, up 
>> to 16M).

It seems ZFS tries to "help" apps by reporting misinformation (namely, a
smaller block size than it actually prefers) when the file is small. This
is unfortunate, since it messes up cp and similar programs that need to
juggle multiple block sizes. It also messes up any program that assumes
st_blksize is constant for the life of a file descriptor, an assumption
"cp" does make elsewhere.

GNU cp doesn't need ZFS's "help", as it's already smart enough not to
over-allocate a buffer when the input file is small but its blocksize is
large. Instead, this "help" from ZFS causes GNU cp to over-allocate,
because it naively trusts the blocksize that ZFS reports.
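
(For reference, the value in question is the st_blksize member that
stat()/fstat() report. A minimal sketch of how a program observes it; the
path is hypothetical, and the comment describes the ZFS behavior discussed
above:)

#include <stdio.h>
#include <sys/stat.h>

int
main (void)
{
  struct stat st;
  /* "somefile" is a hypothetical path on a ZFS dataset.  */
  if (stat ("somefile", &st) == 0)
    /* On ZFS this value can change as the file grows, i.e. it is not
       necessarily constant for the life of an open file.  */
    printf ("st_blksize = %ld\n", (long) st.st_blksize);
  return 0;
}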


> The proposed patch attached removes the use of buffer_lcm()
> and just picks the largest st_blksize, which would be 4MiB in your case.
> It also limits the max buffer size to 32MiB in the edge case
> where st_blksize returns a larger value than this.
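
(For illustration, the proposed sizing amounts to something like the
following sketch; the names and stand-in constants are illustrative and
not taken from the patch:)

#include <stddef.h>

/* Stand-in values so this sketch is self-contained; coreutils defines
   IO_BUFSIZE itself, and the quoted patch defines the ceiling as
   128 * IO_BUFSIZE.  */
enum { IO_BUFSIZE = 256 * 1024 };
enum { IO_BUFSIZE_MAX = 128 * IO_BUFSIZE };

/* Pick the larger of the two reported block sizes, capped at the ceiling.  */
static size_t
pick_buffer_size (size_t in_blksize, size_t out_blksize)
{
  size_t n = in_blksize > out_blksize ? in_blksize : out_blksize;
  return n < IO_BUFSIZE_MAX ? n : IO_BUFSIZE_MAX;
}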

I suppose this could break cp if st_blksize is not a power of 2, the file
is not a regular file, and reads must be a multiple of the block size.
POSIX allows such things, though I expect nowadays they'd be limited to
weird devices.

Although we inadvertently removed support for weird devices in 2009 by 
commit 55efc5f3ee485b3e31a91c331f07c89aeccc4e89, and nobody seems to 
care (because people use dd or whatever to deal with weird devices), I 
think it'd be better to limit the fix to regular files. And while we're 
at it we might as well resurrect support for weird devices.
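
(For context: the point of an lcm-based buffer is that a size which is a
multiple of both block sizes lets every read and every write be a whole
number of blocks on its side. A toy sketch of that idea, not buffer_lcm's
actual interface:)

#include <stdint.h>
#include <stddef.h>

/* Greatest common divisor (Euclid); assumes a and b are positive.  */
static size_t
gcd (size_t a, size_t b)
{
  while (b != 0)
    {
      size_t t = a % b;
      a = b;
      b = t;
    }
  return a;
}

/* Smallest buffer size that is a whole number of blocks for both the
   input and the output, so every read and write can be block aligned;
   returns 0 if that size would overflow.  */
static size_t
lcm_buffer_size (size_t in_blksize, size_t out_blksize)
{
  size_t q = in_blksize / gcd (in_blksize, out_blksize);
  if (q > SIZE_MAX / out_blksize)
    return 0;
  return q * out_blksize;
}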


> +#include <assert.h>

No need for this, as static_assert works without <assert.h> in C23, and 
Gnulib's assert-h module supports this.
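
(For illustration, a check like the patch's compiles with no header at all
under C23, and Gnulib's assert-h module arranges the same for older
compilers; the value below is a stand-in, not the patch's:)

/* Compiles as-is under C23, where static_assert is a keyword and the
   message is optional.  */
enum { IO_BUFSIZE_MAX = 16 * 1024 * 1024 };
static_assert (IO_BUFSIZE_MAX % (64 * 1024) == 0);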


> +/* Set a max constraint to avoid excessive mem usage or type overflow.  */
> +enum { IO_BUFSIZE_MAX = 128 * IO_BUFSIZE };
> +static_assert (IO_BUFSIZE_MAX <= MIN (IDX_MAX, SIZE_MAX) / 2 + 1);

I'm leery of putting in a maximum as low as 16 MiB. Although that's OK 
now (it matches OpenZFS's current maximum), cp will surely have to deal 
with bigger block sizes in the future. How about if we instead stick with 
GNU's "no arbitrary limits" policy and work around the ZFS bug?

Something like the attached patch, perhaps?
[0001-cp-work-around-ZFS-misinformation.patch (text/x-patch, attachment)]
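
(Without reproducing the attached patch, one possible shape of such a
workaround, purely as a sketch: treat the reported st_blksize as a lower
bound and round up to the usual I/O buffer size while keeping a multiple
of the reported value. The names and constant below are illustrative and
not necessarily what the patch does.)

#include <stddef.h>
#include <sys/stat.h>

enum { IO_BUFSIZE = 256 * 1024 };   /* stand-in for coreutils' constant */

/* Sketch: treat a small reported st_blksize as a lower bound only.
   Use at least IO_BUFSIZE, but keep the result a multiple of the
   reported block size so aligned I/O still works.  */
static size_t
io_buffer_size (struct stat const *st)
{
  size_t blksize = 0 < st->st_blksize ? (size_t) st->st_blksize : IO_BUFSIZE;
  if (blksize < IO_BUFSIZE)
    blksize += (IO_BUFSIZE - 1) - (IO_BUFSIZE - 1) % blksize;
  return blksize;
}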
