Hi, I have a script whose logged output is very repetitive text (mostly the output of ping and date). To minimize disk usage, I thought of piping it through gzip -9. Then I noticed that, unlike before, the log stayed empty, and I recalled the GNU policy of “reading all the input and only then producing output”, to maximize overall speed at the expense of ever-cheaper memory.

However, I want to run that script all the time, be able to kill it dirtily or simply shut down without losing all of its output (nor am I sure that keeping everything in RAM until shutdown is good practice anyway, although I suppose gzip only keeps the compressed output in memory, discarding the input once it is no longer needed), and to “tail -f” the files it writes.

I guess piping the whole output is the way to achieve optimal compression, since gzipping each line or each command's output separately would not compress nearly as well (the repetition occurs between lines, not within them). Still, is there a way to obtain this maximal compression while having gzip write out its output each time I stop feeding it input (as happens every 30 seconds or so), without saving the uncompressed file and without recompressing the whole file several times?

I mean, it seems sensible to wait until everything is compressed before writing it out rather than writing as soon as possible, but isn't there a way to trigger the output whenever the input received so far has been processed and no more input has arrived for a certain amount of time (say ~30 s)? Am I looking at something like this:
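(Roughly the following, in Python, just to illustrate the idea I have in mind — untested, and the wrapper name, the 30-second timeout and the read size are arbitrary placeholders of mine, not anything gzip itself provides. The point is to keep a single compression stream open, so repetition across lines still compresses well, and to sync-flush it whenever the input goes quiet for a while:)

    #!/usr/bin/env python3
    # Hypothetical sketch: one long-lived zlib stream in a gzip container,
    # sync-flushed to stdout whenever stdin has been silent for ~30 s.
    import sys, zlib, select

    FLUSH_AFTER = 30                      # seconds of silence before forcing output
    out = sys.stdout.buffer
    # 16 + MAX_WBITS -> gzip-compatible container, level 9 like `gzip -9`
    comp = zlib.compressobj(9, zlib.DEFLATED, 16 + zlib.MAX_WBITS)

    try:
        while True:
            ready, _, _ = select.select([sys.stdin], [], [], FLUSH_AFTER)
            if ready:
                chunk = sys.stdin.buffer.read1(65536)
                if not chunk:             # writer closed the pipe
                    break
                out.write(comp.compress(chunk))
            else:
                # no input for FLUSH_AFTER seconds: push pending bytes to disk
                # without ending the stream (so compression context is kept)
                out.write(comp.flush(zlib.Z_SYNC_FLUSH))
                out.flush()
    finally:
        out.write(comp.flush(zlib.Z_FINISH))
        out.flush()

That way I could run something like ./myscript.sh | ./gzip-flush.py > ping.log.gz and, if I understand zlib's sync flush correctly, a dirty kill or shutdown would lose at most the output produced since the last flush, while the file stays readable with zcat. Is this the right direction, or is there an existing tool/option that already does this?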