The adaptive read buffering code delays reads in the hope that, if we wait a little between reads from a process producing a lot of output, we can read full buffer-sized chunks. Sounds good. Doesn't work. The attempted optimization reduces performance in various scenarios and, for me, causes an 8x regression in flows that mix big and small reads.
With adaptive reading, we increase the read delay every time we get a short read and decrease it when we get a full buffer of data or do a write. The problem is 1) that there are legitimate flows involving long sequences of reads without an intervening write, and 2) that reads (especially from PTYs) may *never* report a full buffer, because the kernel caps the maximum read size no matter how big the backlog is. (For example, the Darwin kernel limits PTY (and presumably TTY in general?) reads to 1024 bytes, but the default Emacs read size is 64k, so we never see the signal that's supposed to tell us to reduce the read delay.)
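
To make that concrete, the policy amounts to roughly the following. This is a standalone sketch with made-up names and constants, not the actual process.c code:

#include <sys/types.h>            /* ssize_t */

#define READ_BUFFER_SIZE (64 * 1024)  /* hypothetical default read size */
#define DELAY_STEP_USEC 10000         /* hypothetical delay increment */
#define DELAY_MAX_USEC 100000         /* hypothetical delay cap */

static long read_delay_usec;          /* delay before the next read */

/* Called after each read that returned NREAD bytes into a
   READ_BUFFER_SIZE-byte buffer.  */
static void
adapt_read_delay (ssize_t nread)
{
  if (nread == READ_BUFFER_SIZE)
    {
      /* Full buffer: data is piling up, so back the delay off.  */
      if (read_delay_usec >= DELAY_STEP_USEC)
        read_delay_usec -= DELAY_STEP_USEC;
      else
        read_delay_usec = 0;
    }
  else if (nread > 0)
    {
      /* Short read: wait longer before the next read, hoping the
         process fills the buffer in the meantime.  On a PTY capped at
         1024-byte reads, this is the only branch we ever take.  */
      if (read_delay_usec + DELAY_STEP_USEC <= DELAY_MAX_USEC)
        read_delay_usec += DELAY_STEP_USEC;
    }
}

/* Writing to the process also decreases the delay.  */
static void
note_write_to_process (void)
{
  if (read_delay_usec >= DELAY_STEP_USEC)
    read_delay_usec -= DELAY_STEP_USEC;
  else
    read_delay_usec = 0;
}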
I'd suggest just deleting the feature. It's not worth the complexity and edge cases, IMHO.
If that's not an option, I'd suggest detecting bulk flows by doing a zero-timeout select() whenever we're tempted to increase the delay, and actually increasing it only when that select() times out, i.e., when no further data is already queued.
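
Something along these lines, again a standalone sketch with made-up names rather than a patch:

#include <stdbool.h>
#include <sys/select.h>
#include <sys/time.h>

/* Return true if FD already has data queued, without blocking.  */
static bool
data_already_pending (int fd)
{
  fd_set rfds;
  struct timeval zero = { 0, 0 };

  FD_ZERO (&rfds);
  FD_SET (fd, &rfds);

  /* A zero timeout makes select() return immediately.  */
  return select (fd + 1, &rfds, NULL, NULL, &zero) > 0;
}

/* Compute the next delay after a short read: only increase it when
   nothing more is already waiting on FD.  */
static long
next_read_delay (int fd, long delay_usec, long step_usec, long max_usec)
{
  if (data_already_pending (fd))
    return delay_usec;          /* bulk flow in progress: don't slow down */

  long next = delay_usec + step_usec;
  return next > max_usec ? max_usec : next;
}

The cost is one extra select() per short read, and only on the path where we were about to add latency anyway.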
Just tweaking the maximum read size probably isn't a good idea: it's an implementation detail that can change over time and across the types of FD we read from.