GNU bug report logs -
#9734
[solaris] `dd if=/dev/urandom of=file bs=1024k count=1' gets a file of 133120 bytes
Reported by: "Clark J. Wang" <dearvoid <at> gmail.com>
Date: Wed, 12 Oct 2011 12:34:02 UTC
Severity: normal
Tags: notabug
Done: Eric Blake <eblake <at> redhat.com>
Bug is archived. No further changes may be made.
[Message part 1 (text/plain, inline)]
Your bug report
#9734: [solaris] `dd if=/dev/urandom of=file bs=1024k count=1' gets a file of 133120 bytes
which was filed against the coreutils package, has been closed.
The explanation is attached below, along with your original report.
If you require more details, please reply to 9734 <at> debbugs.gnu.org.
--
9734: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=9734
GNU Bug Tracking System
Contact help-debbugs <at> gnu.org with problems
[Message part 2 (message/rfc822, inline)]
tag 9734 notabug
thanks
On 10/12/2011 02:22 AM, Clark J. Wang wrote:
> I'm not sure if it's a bug but it's not reasonable to me. On Solaris 11
> (SunOS 5.11 snv_174, i86pc):
>
> $ uname -a
> SunOS sollab-242.cn.oracle.com 5.11 snv_174 i86pc i386 i86pc
> $ pkg list gnu-coreutils
> NAME (PUBLISHER)    VERSION                 IFO
> file/gnu-coreutils  8.5-0.174.0.0.0.0.504   i--
> $ /usr/gnu/bin/dd if=/dev/urandom of=file bs=1024k count=1
> 0+1 records in
Notice that this means you read a partial record - read() tried to read
1024k bytes, but the read ended short at only 133120 bytes.
> 0+1 records out
And because you didn't request dd to group multiple short reads before
doing a full write, you got a single (short) record written.
> I'm new to Solaris but I've never seen this problem when I use Linux so it
> really surprises me.
Solaris and Linux kernels differ on when you will get short reads, and
magic files like /dev/urandom are more likely to display the issue than
regular files. That said, Linux also has the "problem" of short reads;
it's especially noticeable when passing the output of dd to a pipe.
You probably wanted to use this GNU extension:
dd if=/dev/urandom of=file bs=1024k count=1 iflag=fullblock
where iflag=fullblock requests that dd accumulate multiple read()s
until it has a full block, so that a short input read no longer
produces a partial output block.
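To make the difference concrete, here is a small demonstration sketch,
assuming GNU dd and GNU stat on Linux; the /tmp paths are illustrative.
Reading from a pipe makes short reads easy to reproduce, because a
single read() can return at most what is currently in the pipe buffer
(typically 64 KiB). The fullblock operand is spelled iflag=fullblock in
current GNU coreutils:

# The writer emits 1 MiB in four 256 KiB chunks; the pipe buffer is
# smaller, so a single read() cannot see the whole 1 MiB block.

# Without iflag=fullblock: dd does one read() per block, so the
# output file is short.
{ for i in 1 2 3 4; do head -c 262144 /dev/zero; done; } |
  dd of=/tmp/plain.bin bs=1M count=1 2>/dev/null

# With iflag=fullblock: dd keeps calling read() until the block is
# full, so the whole 1 MiB arrives.
{ for i in 1 2 3 4; do head -c 262144 /dev/zero; done; } |
  dd of=/tmp/full.bin bs=1M count=1 iflag=fullblock 2>/dev/null

stat -c '%s' /tmp/plain.bin   # less than 1048576 (typically 65536 or less)
stat -c '%s' /tmp/full.bin    # exactly 1048576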
>
> I found this in the man page of /dev/urandom on Solaris: "The limitation per
> read for /dev/random is 1040 bytes. The limit for /dev/urandom is (128 *
> 1040 = 133120)." That seems to be the reason but I think dd should handle
> that and check the return value of the read() system call and make sure
> 1024k bytes have really been read from /dev/urandom.
Only if the iflag=fullblock flag is specified, since it is a violation
of POSIX for dd to issue more than one read() per block without an
explicit flag requesting multiple reads per block.
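For strictly portable scripts there is also a workaround that needs no
GNU extension, sketched here under the assumption that the per-read
limit quoted from the Solaris man page is accurate: keep bs at or below
the device's per-read limit so each read() can return a full block, and
raise count to compensate. The 64 KiB block size is an illustrative
choice, safely below the 133120-byte limit for /dev/urandom:

# 16 reads of 64 KiB instead of one 1 MiB read:
# 16 * 65536 = 1048576 bytes total.
dd if=/dev/urandom of=file bs=64k count=16

Note that even this relies on the device actually returning full 64 KiB
reads, which POSIX does not guarantee; where available, the GNU
fullblock input flag remains the robust fix.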
--
Eric Blake eblake <at> redhat.com +1-801-349-2682
Libvirt virtualization library http://libvirt.org
[Message part 3 (message/rfc822, inline)]
[Message part 4 (text/plain, inline)]
I'm not sure if it's a bug but it's not reasonable to me. On Solaris 11
(SunOS 5.11 snv_174, i86pc):
$ uname -a
SunOS sollab-242.cn.oracle.com 5.11 snv_174 i86pc i386 i86pc
$ pkg list gnu-coreutils
NAME (PUBLISHER)    VERSION                 IFO
file/gnu-coreutils  8.5-0.174.0.0.0.0.504   i--
$ /usr/gnu/bin/dd if=/dev/urandom of=file bs=1024k count=1
0+1 records in
0+1 records out
133120 bytes (133 kB) copied, 0.00290536 s, 45.8 MB/s
$ ls -l file
-rw-r--r-- 1 root root 133120 2011-10-12 16:12 file
$
I'm new to Solaris but I've never seen this problem when I use Linux so it
really surprises me.
I found this in the man page of /dev/urandom on Solaris: "The limitation per
read for /dev/random is 1040 bytes. The limit for /dev/urandom is (128 *
1040 = 133120)." That seems to be the reason but I think dd should handle
that and check the return value of the read() system call and make sure
1024k bytes have really been read from /dev/urandom.
Any idea?
Thanks.
-Clark
GNU bug tracking system
Copyright (C) 1999 Darren O. Benham,
1997,2003 nCipher Corporation Ltd,
1994-97 Ian Jackson.