GNU bug report logs - #20154
25.0.50; json-encode-string is too slow for large strings
Reported by: Dmitry Gutov <dgutov <at> yandex.ru>
Date: Fri, 20 Mar 2015 14:27:01 UTC
Severity: normal
Found in version 25.0.50
Done: Dmitry Gutov <dgutov <at> yandex.ru>
On 03/22/2015 07:31 PM, Eli Zaretskii wrote:
> I understand why you _send_ everything, but not why you need to
> _encode_ everything. Why not encode only the new stuff?
That's the protocol. You're welcome to bring the question up with the
author, but for now, as already described, there has been no need to
complicate it, because Vim compiled with Python support can encode even
a large buffer quickly enough.
>>> Then a series of calls to replace-regexp-in-string, one each for every
>>> one of the "special" characters, should get you close to your goal,
>>> right?
Actually, that wouldn't work anyway: aside from the special characters,
json-encode-string also encodes every non-ASCII character as a \uNNNN
escape. Look at the "Fallback: UCS code point" comment.
> I meant something like
>
>   (replace-regexp-in-string "\n" "\\n" s1 t t)
>   (replace-regexp-in-string "\f" "\\f" s1 t t)
>
> etc. After all, the list of characters to be encoded is not very
> long, is it?
One (replace-regexp-in-string "\n" "\\n" s1 t t) call already takes
~100ms, which is more than the latest proposed json-encode-string
implementation takes.
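If you want to reproduce the comparison, something along these lines
should do, with s1 bound to the full buffer contents (benchmark-run
returns the total elapsed time over the repetitions):

  (require 'json)

  ;; s1 stands for the whole buffer as a string.
  (setq s1 (buffer-substring-no-properties (point-min) (point-max)))

  ;; One naive replacement pass over the string.
  (benchmark-run 10
    (replace-regexp-in-string "\n" "\\n" s1 t t))

  ;; The full JSON string encoding, for comparison.
  (benchmark-run 10
    (json-encode-string s1))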
> But when you've encoded them once, you only need to encode the
> additions, no? If you can do this incrementally, the amount of work
> for each keystroke will be much smaller, I think.
Sure, that's optimizable, with a sufficiently smart server (which ycmd
currently isn't), and at the cost of some buffer state tracking and
diffing logic.
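Purely as a sketch of what the Emacs side of that might look like
(hypothetical names, and ignoring that recorded positions would need
adjusting as further edits come in):

  (require 'json)

  (defvar-local my-pending-changes nil
    "Regions (BEG . END) modified since the last request.")

  (defun my-track-change (beg end _old-len)
    ;; Record each modified region as the edit happens.
    (push (cons beg end) my-pending-changes))

  ;; Buffer-local hook, so only the current buffer is tracked.
  (add-hook 'after-change-functions #'my-track-change nil t)

  (defun my-encode-pending ()
    "Encode only the changed regions instead of the whole buffer."
    (prog1
        (mapcar (lambda (range)
                  (json-encode-string
                   (buffer-substring-no-properties (car range) (cdr range))))
                my-pending-changes)
      (setq my-pending-changes nil)))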