GNU bug report logs -
#48841
fido-mode is slower than ido-mode with similar settings
Reported by: Dmitry Gutov <dgutov <at> yandex.ru>
Date: Sat, 5 Jun 2021 01:40:01 UTC
Severity: normal
Done: João Távora <joaotavora <at> gmail.com>
Bug is archived. No further changes may be made.
Dmitry Gutov <dgutov <at> yandex.ru> writes:
> On 06.06.2021 02:20, João Távora wrote:
>> My bet is that the remaining lag is due to sorting. In a dumb but
>> illustrative example, when given the pattern 'fmcro' flex-enabled
>> ido-mode pops 'flymake--backend-state-p--cmacro' to the top, while fido
>> mode selects the much more reasonable 'defmacro'.
>
> Perhaps not sorting exactly, but the scoring part? Lowering the
> implementation into C might help, we discussed something like that in
> the past.
Perhaps, all could be measured. But I also remember explaining that
scoring is basically free. The flex search algorithm is greedy
already: it doesn't backtrack. Given a pattern 'foo' against
'fabrobazo', it takes 9 steps to see that it matches. I can't see any
way to improve on that, short of something like a totally different
string implementation. The scoring's numeric calculations at each step
are trivial.
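For illustration, here is a Python sketch of that idea (not the actual
Elisp implementation, and the toy score below is my own invention, not
flex's real scoring formula). It shows the greedy, non-backtracking
scan, and why piggy-backing a numeric score onto it costs essentially
nothing per step:

```python
def flex_match(pattern, candidate):
    """Greedy subsequence match with a toy score.

    Scans the candidate left to right exactly once, never backtracking,
    so the work is bounded by len(candidate): 'foo' against 'fabrobazo'
    is seen to match in 9 steps.
    """
    pi = 0       # next pattern character to find
    steps = 0    # candidate characters examined
    score = 0    # toy score: consecutive hits are worth more
    streak = 0
    for ch in candidate:
        steps += 1
        if pi < len(pattern) and ch == pattern[pi]:
            pi += 1
            streak += 1
            score += streak  # trivial numeric work per step
        else:
            streak = 0
        if pi == len(pattern):
            break
    return pi == len(pattern), steps, score

print(flex_match("foo", "fabrobazo"))  # (True, 9, 3)
```

Disregarding the returned score when sorting, as suggested above, is
exactly the experiment that would isolate sorting cost from scoring cost.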
One way to verify this is to do the scoring, but simply disregard it for
sorting purposes.
> And/or pick a different algorithm. E.g. Jaro-Winkler, which apparently
> is used in a lot of "fuzzy matching" implementations out there, it's
> pretty fast.
That may be useful, but for other purposes. If I understand correctly,
Jaro-Winkler is for finding the distance between two arbitrary strings,
where the first is not necessarily a subsequence of the second. I bet
Google uses stuff like that when you accidentally transpose characters.
Flex just gives up, while those other algos still catch the match (and
Google then probably NL-scours your most intimate fears and checks with
your local dictator before showing you typing suggestions).
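For reference, here is a minimal Python sketch of Jaro-Winkler (the
standard match-window formulation with the usual p = 0.1 prefix scale;
unrelated to any Emacs code). Note how it still scores a transposed
pair like "martha"/"marhta" highly, a case where flex would simply
fail to match:

```python
def jaro(s1, s2):
    """Jaro similarity: matches within a sliding window, minus
    half-transpositions, averaged over both lengths."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1
    used1 = [False] * len1
    used2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not used2[j] and s2[j] == c:
                used1[i] = used2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among the matched characters.
    t, k = 0, 0
    for i in range(len1):
        if used1[i]:
            while not used2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2
            + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Boost the Jaro score for a shared prefix (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

print(round(jaro_winkler("martha", "marhta"), 4))  # 0.9611
```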
> I took an example like
>
> (setq s (all-completions "" obarray))
> (setq ss (cl-delete-if-not (lambda (s) (string-match-p "a" s)) s))
>
> then
>
> (benchmark 1 '(completion-all-completions "a" ss nil 1))
>
> prints 0.180s here, whereas a "pure Ruby" implementation of
> Jaro-Winkler takes about 0.060s on the exact same set of strings. But
> perhaps Ruby is just faster than Elisp, I don't have a good
> comparison.
Go ahead and kill the scoring calculation altogether in
completion-pcm--hilit-commonality. I bet it won't make a difference.
In fact, for that experiment, try a simple substring search. I bet
you're just seeing an inferior GC at work, or a string implementation
that's optimized for stuff Ruby's isn't, like propertization. Try
making Ruby strings that mimic Elisp's if you have time to spare...
> (The only J-W implementation in Elisp I have found yet --
> https://github.com/rdiankov/emacs-config/blob/master/.emacs-lisp/auto-complete-1.3.1/fuzzy.el#L70
> -- is slower than the current scoring algo).
There you have it.
João