GNU bug report logs - #20154
25.0.50; json-encode-string is too slow for large strings
Reported by: Dmitry Gutov <dgutov <at> yandex.ru>
Date: Fri, 20 Mar 2015 14:27:01 UTC
Severity: normal
Found in version 25.0.50
Done: Dmitry Gutov <dgutov <at> yandex.ru>
Bug is archived. No further changes may be made.
Message #56 received at 20154 <at> debbugs.gnu.org:
On 03/21/2015 09:58 AM, Eli Zaretskii wrote:
> It depends on your requirements. How fast would it need to run to
> satisfy your needs?
In this case, the buffer contents are encoded to JSON at most once per
keypress. So 50ms or below should be fast enough, especially since most
files are smaller than the one I've been testing with.
Of course, I'm sure there are use cases for fast JSON encoding/decoding
of even bigger volumes of data, but they can probably wait until we have
FFI.
> You don't really need regexp replacement functions with all its
> features here, do you? What you need is a way to skip characters that
> are "okay", then replace the character that is "not okay" with its
> encoded form, then repeat.
It doesn't seem like regexp searching is the slow part: save for the GC
pauses, looking for the non-matching regexp in the same string -
(replace-regexp-in-string "x" "z" s1 t t)
- only takes ~3ms.
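For reference, here is one way such a timing might be reproduced (a
hypothetical sketch: `s1' stands in for the large test string, whose
construction the thread does not show):

```elisp
;; Sketch of the timing setup.  `benchmark-run' returns
;; (ELAPSED-SECONDS GC-RUNS GC-ELAPSED), so GC pauses can be
;; separated from the search time itself.
(let ((s1 (make-string 200000 ?0)))   ; large digits-only test string
  (benchmark-run 10
    (replace-regexp-in-string "x" "z" s1 t t)))
```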
And likewise, after changing them to use `concat' instead of `format',
both alternative json-encode-string implementations that I have "encode"
a numbers-only (without newlines) string of the same length in a few
milliseconds. Again, save for the GC pauses, which can add 30-40ms.
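The `format'-vs-`concat' difference can be isolated the same way; this
is a hypothetical micro-benchmark, not the actual alternative
implementations referred to above:

```elisp
;; `format'-based \uNNNN escape vs. a `concat'-based two-character
;; escape, each evaluated 100000 times.
(benchmark-run 100000 (format "\\u%04x" ?\n))    ; format path
(benchmark-run 100000 (concat "\\" (string ?n))) ; concat path
```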
> For starters, how fast
> can you iterate through the string with 'skip-chars-forward', stopping
> at characters that need encoding, without actually encoding them, but
> just consing the output string by appending the parts delimited by
> places where 'skip-chars-forward' stopped? That's the lower bound on
> performance using this method.
70-90ms if we simply skip 0-9, even without nreverse-ing and
concatenating. But the change in runtime after adding an (apply #'concat
(nreverse res)) step doesn't look statistically significant. Here's
the implementation I tried:
(defun foofoo (string)
  (with-temp-buffer
    (insert string)
    (goto-char (point-min))
    (let (res)
      (while (not (eobp))
        (let ((skipped (skip-chars-forward "0-9")))
          (push (buffer-substring (- (point) skipped) (point))
                res))
        ;; Skip past the stopping character (guarded, so we don't
        ;; signal end-of-buffer when the string ends in digits).
        (unless (eobp)
          (forward-char 1)))
      res)))
But that actually goes down to 30ms if we don't accumulate the result.
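For illustration, here is one way the skeleton above might be turned
into an actual encoder, pushing an escape for each stopping character
and concatenating once at the end (a hypothetical sketch, not the
implementation discussed in the thread):

```elisp
(defun my-encode-via-skip (string)
  "Hypothetical encoder built on the `skip-chars-forward' skeleton."
  (with-temp-buffer
    (insert string)
    (goto-char (point-min))
    (let (res)
      (while (not (eobp))
        (let ((skipped (skip-chars-forward "0-9")))
          (push (buffer-substring (- (point) skipped) (point)) res))
        (unless (eobp)
          ;; Escape the character that stopped the skip.
          (push (format "\\u%04x" (char-after)) res)
          (forward-char 1)))
      (apply #'concat (nreverse res)))))

(my-encode-via-skip "12a3")  ; => "12\\u00613"
```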
> I think the latest tendency is the opposite: move to Lisp everything
> that doesn't need to be in C.
Yes, and often that's great, when we're dealing with some piece of UI
infrastructure that gets called at most a few times per command, with
inputs whose size we can anticipate in advance.
> If some specific application needs more
> speed than we can provide, the first thing I'd try is think of a new
> primitive by abstracting your use case enough to be more useful than
> just for JSON.
That's why I suggested doing that with `replace-regexp-in-string' first.
It's a very common feature, and in Python and Ruby it's implemented in C.
Ruby's calling convention is even pretty close (the replacement can be a
string, or it can take a block, which is a kind of function).
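`replace-regexp-in-string' likewise already accepts a function as the
replacement, much like Ruby's block form. A hypothetical JSON string
escaper along those lines (covering only a subset of the characters
the real `json-encode-string' handles):

```elisp
(defun my-json-escape (s)
  "Hypothetical sketch: escape a few JSON-special characters in S."
  (replace-regexp-in-string
   "[\"\\\n\t]"                  ; quote, backslash, newline, tab
   (lambda (ch)                  ; CH is the matched one-char string
     (pcase ch
       ("\"" "\\\"")
       ("\\" "\\\\")
       ("\n" "\\n")
       ("\t" "\\t")))
   s t t))

(my-json-escape "a\"b\nc")  ; => "a\\\"b\\nc"
```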
> Of course, implementing the precise use case in C first is probably a
> prerequisite, since it could turn out that the problem is somewhere
> else, or that even in C you won't get the speed you want.
A fast `replace-regexp-in-string' may not get us all the way to where I
want, but it should get us close. It will still be generally useful, and
it'll save us from having two `json-encode-string' implementations: one
for long strings and one for short ones.
>> Replacing "z" with #'identity (so now we include a function call
>> overhead) increases the averages to 0.15s and 0.10s respectively.
>
> Sounds like the overhead of the Lisp interpreter is a significant
> factor here, no?
Yes and no. Given the 50ms budget, I think we can live with it for now,
as long as it's the only problem.