On 2025-07-06 03:29, Daniel Mendler wrote:

> Then we can maybe convince Eli that the following calling convention is
> better, since the filtering arguments STRING and REGEXP-LIST are
> deprioritized? The STRING and REGEXP-LIST arguments are
> completion-related, while IGNORE-CASE is needed for basic
> case-insensitive prefix computation.
>
> (string-common-prefix LIST &optional IGNORE-CASE STRING REGEXP-LIST)
>
> Alternatively we could go with the simplest form and suggest to use
> `try-completion' if additional filtering is needed.
>
> (string-common-prefix LIST &optional IGNORE-CASE)

I'm happy to proceed with either option.

I figured that a potential argument for keeping the filter arguments is
efficiency, so I've done some benchmarking.

(I've included the PREDICATE filter in my benchmarks for completeness,
but unless others change their minds I'm not looking to include that --
and my results don't support it for performance reasons in any case.)

For a collection of strings, I used the symbol names from the obarray
after running "emacs -Q" (for my build of 30.1-ish), which gave me 18271
strings (this is my "small data set").  I repeated the benchmarks in my
regular instance of Emacs (not -Q), which has been running for a while
and has 91167 strings (my "large data set").

I used string-common-prefix to obtain the common prefix "buffer" after
filtering the collection to only the strings beginning with "buf", using
each of the three filter options.  I've attached a file with the
specific process I used.

Besides using the filter args, I compared two ways of filtering the
COLLECTION in advance.  The "cl-loop ... collect" approach generates
only the desired list, whereas the "mapcar" approach also generates a
ton of nil values which need deleting, so that approach produces a lot
of garbage (it caused 100 GCs on the large data set, which dominated the
time taken).  (I figure this is still something some folks would do in
practice, though.)

I've only shown `benchmark-run-compiled' results.  With uncompiled
`benchmark-run', the mapcar and cl-loop approaches get significantly
slower, but that probably isn't a consideration.  I ran each benchmark
repeatedly and have shown a representative result of the most common
timing.

My results were:

1. STRING filter

   A: small data set:
   - filter arg took 0.06 seconds
   - cl-loop took 0.11 seconds (1.8 times as long)
   - mapcar took 0.72 seconds (12 times as long)

   B: large data set:
   - filter arg took 0.22 seconds
   - cl-loop took 0.44 seconds (2 times as long)
   - mapcar took 2.24 seconds (10.4 times as long)

2. REGEXP-LIST filter

   A: small data set:
   - filter arg took 0.16 seconds
   - cl-loop took 0.14 seconds (0.9 times as long)
   - mapcar took 0.75 seconds (4.6 times as long)

   B: large data set:
   - filter arg took 0.72 seconds
   - cl-loop took 0.65 seconds (0.9 times as long)
   - mapcar took 2.40 seconds (3.3 times as long)

3. PREDICATE filter

   A: small data set:
   - filter arg took 0.17 seconds
   - cl-loop took 0.11 seconds (0.6 times as long)
   - mapcar took 0.79 seconds (4.8 times as long)

   B: large data set:
   - filter arg took 0.76 seconds
   - cl-loop took 0.46 seconds (0.6 times as long)
   - mapcar took 2.49 seconds (3.3 times as long)

So the STRING filter was significantly faster when performed by
`try-completion' directly, but I got faster results for the REGEXP-LIST
and PREDICATE filters when pre-filtering the collection with the cl-loop
... collect approach.  I guess there's some additional overhead for
`try-completion' in those cases.
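For reference, the forms I compared were roughly of the following shape
(a simplified sketch rather than the exact code -- the attached file has
that; the variable name, the repetition count and the argument order for
string-common-prefix are only illustrative here, using the signature
from the quoted proposal):

  (require 'cl-lib)

  ;; The collection: every symbol name in the obarray.
  (defvar my/collection
    (let (names)
      (mapatoms (lambda (sym) (push (symbol-name sym) names)))
      names))

  ;; 1. Pre-filter with cl-loop ... collect: builds only the
  ;;    matching strings.
  (benchmark-run-compiled 100
    (string-common-prefix
     (cl-loop for s in my/collection
              when (string-prefix-p "buf" s)
              collect s)))

  ;; 2. Pre-filter with mapcar + delq: produces a nil for every
  ;;    non-matching string, which then has to be discarded.
  (benchmark-run-compiled 100
    (string-common-prefix
     (delq nil (mapcar (lambda (s)
                         (and (string-prefix-p "buf" s) s))
                       my/collection))))

  ;; 3. Let the function filter internally, via the STRING argument.
  (benchmark-run-compiled 100
    (string-common-prefix my/collection nil "buf"))

The REGEXP-LIST and PREDICATE cases follow the same pattern, with a
regexp match or a predicate call in place of string-prefix-p in the
pre-filtering forms.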
Anyone who was inclined to filter their collections using a mapcar/delq
combination may certainly benefit from having the filter arguments
available.  That aside, the benefit of a REGEXP-LIST filter would seem
to be purely one of convenience rather than performance (unless the
comparative slowness when try-completion handles the regexp list turns
out to stem from some inefficiency which can be addressed).

-Phil