Hi,

I wanted to help the Debian project and found this gzip bug report:

  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=575884

Despite not being a C programmer, I decided to give it a try. Since this is the first patch I have written for an open source project, I thought I should first ask upstream whether it makes sense.

In gzip.c there is this comment:

 * If the lseek fails, we could use read() to get to the end, but
 * --list is used to get quick results.
 * Use "gunzip < foo.gz | wc -c" to get the uncompressed size if
 * you are not concerned about speed.

Assuming the comment is correct, the patch does just that: it uses read() to get to the end. After applying the patch and running

  $ time cat rnd0.bin.gz | gzip -l -

on a gzipped 3 GB file created from /dev/urandom, the result is

    compressed        uncompressed  ratio uncompressed_name
    3000485948          3000000000  -0.0% stdout

  real   0m0.740s
  user   0m0.013s
  sys    0m1.134s

That seems quite fast to me, but maybe gzip is used with much bigger files and one second is too slow.

The patch is attached to this e-mail. I'd like to know what you think about it.

Thanks