GNU bug report logs -
#18454
Improve performance when -P (PCRE) is used in UTF-8 locales
[Message part 1 (text/plain, inline)]
Your message dated Tue, 23 Nov 2021 19:36:11 -0800
with message-id <964e27cb-0484-0daf-ffab-39382de271f0 <at> cs.ucla.edu>
and subject line Re: bug#18454: Improve performance when -P (PCRE) is used in UTF-8 locales
has caused the debbugs.gnu.org bug report #18454,
regarding Improve performance when -P (PCRE) is used in UTF-8 locales
to be marked as done.
(If you believe you have received this mail in error, please contact
help-debbugs <at> gnu.org.)
--
18454: http://debbugs.gnu.org/cgi/bugreport.cgi?bug=18454
GNU Bug Tracking System
Contact help-debbugs <at> gnu.org with problems
[Message part 2 (message/rfc822, inline)]
With the patch that fixes bug 18266, grep -P works again on binary
files (i.e. files containing invalid UTF-8 sequences), but it is now
significantly slower than the old versions (whose behavior on such
files could be undefined). Timings with the Debian packages on my
personal svn working copy (binary + text files):
  2.18-2:  0.9s with -P, 0.4s without -P
  2.20-3: 11.6s with -P, 0.4s without -P
In this example, that's a 13x slowdown! Though the performance issue
would be better fixed in libpcre3 itself, I suppose that such a fix is
not simple and will not happen any time soon. Several things could be
done in grep instead:
1. Ignore -P when the pattern has the same meaning without -P
(patterns could also be transformed, e.g. "a\d+b" -> "a[0-9]\+b",
at least in the simplest cases).
2. Call PCRE in the C locale when this is equivalent.
3. Transform invalid bytes to null bytes in-place before the PCRE
call (a sketch of this idea in C follows the list). This changes
the current semantics, but:
* the semantics of invalid bytes have never been specified, AFAIK;
* the best *practical* behavior may not be the current one
(I personally prefer being able to match invalid bytes, just
like one can match top-bit-set characters in the C locale, and
treating such invalid bytes as equivalent to null bytes would
not be a problem for most users, IMHO; this could also be made
configurable).
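To make idea 3 concrete, here is a minimal sketch of such an in-place
pass in C. This is my own illustration, not grep code: the function
name sanitize_utf8 and the exact validity rules (rejecting truncated
sequences, overlong forms, surrogates, and code points above U+10FFFF)
are assumptions chosen for the example.

  #include <stddef.h>

  /* Sketch of idea 3 (not grep code): overwrite, in place, every byte
     that does not belong to a valid UTF-8 sequence with '\0', so that
     a UTF-8-mode regex engine never sees invalid input.  */
  static void
  sanitize_utf8 (unsigned char *buf, size_t len)
  {
    size_t i = 0;
    while (i < len)
      {
        unsigned char c = buf[i];
        size_t seqlen;
        unsigned int cp, min;

        if (c < 0x80)                      /* ASCII byte: always valid */
          { i++; continue; }
        else if ((c & 0xE0) == 0xC0)
          { seqlen = 2; cp = c & 0x1F; min = 0x80; }
        else if ((c & 0xF0) == 0xE0)
          { seqlen = 3; cp = c & 0x0F; min = 0x800; }
        else if ((c & 0xF8) == 0xF0)
          { seqlen = 4; cp = c & 0x07; min = 0x10000; }
        else                               /* stray continuation, 0xF8.. */
          { buf[i++] = '\0'; continue; }

        size_t j = 1;
        while (j < seqlen && i + j < len && (buf[i + j] & 0xC0) == 0x80)
          { cp = (cp << 6) | (buf[i + j] & 0x3F); j++; }

        if (j == seqlen && cp >= min && cp <= 0x10FFFF
            && !(0xD800 <= cp && cp <= 0xDFFF))
          i += seqlen;                     /* valid sequence: keep it */
        else
          while (j-- > 0)
            buf[i++] = '\0';               /* invalid: blank its bytes */
      }
  }

Since a null byte can never occur inside a valid multibyte character,
the sanitized buffer is valid UTF-8 throughout, with null bytes
standing in for the damaged bytes.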
--
Vincent Lefèvre <vincent <at> vinc17.net> - Web: <https://www.vinc17.net/>
100% accessible validated (X)HTML - Blog: <https://www.vinc17.net/blog/>
Work: CR INRIA - computer arithmetic / AriC project (LIP, ENS-Lyon)
[Message part 3 (message/rfc822, inline)]
On 9/30/14 12:39, Paul Eggert wrote:
> GNU grep is smart
> enough to start matching at character boundaries without checking the
> validity of the input data. This helps it run faster. However, because
> libpcre requires a validity prepass, grep -P must slow down and do the
> validity check one way or another. Grep does this only when libpcre is
> used, and that's one reason grep -P is slower than plain grep.
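To make the quoted point concrete, here is a small illustration (my
own sketch, not grep's actual code). UTF-8 continuation bytes are
self-identifying (they all match 10xxxxxx), so the character boundary
at or before an arbitrary position can be found by a short backward
scan, with no validity pass over the whole buffer:

  /* Illustration only, not grep source: skip backward over UTF-8
     continuation bytes (10xxxxxx) to reach the enclosing character's
     first byte -- at most 3 steps for valid input.  */
  static const unsigned char *
  utf8_prev_boundary (const unsigned char *start, const unsigned char *p)
  {
    while (p > start && (*p & 0xC0) == 0x80)
      p--;
    return p;
  }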
Now that Grep master on Savannah has been changed to use PCRE2 instead
of PCRE, the 'grep -P' performance problem seems to have been fixed, in
that the following commands now take about the same amount of time:
  grep -P zzzyyyxxx 10840.pdf
  pcre2grep -U zzzyyyxxx 10840.pdf
where the file is from <http://research.nhm.org/pdfs/10840/10840.pdf>.
Formerly, 'grep -P' was about 10x slower on this test.
My guess is that the grep -P performance boost comes from bleeding-edge
grep using PCRE2's PCRE2_MATCH_INVALID_UTF option.
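For reference, here is a minimal sketch of how a PCRE2 caller requests
that mode. The pattern and subject below are placeholders and this is
not grep's actual code; PCRE2_MATCH_INVALID_UTF requires PCRE2 10.34
or later.

  #include <stdio.h>
  #define PCRE2_CODE_UNIT_WIDTH 8
  #include <pcre2.h>

  int
  main (void)
  {
    PCRE2_SPTR pattern = (PCRE2_SPTR) "zzzyyyxxx";
    /* The subject deliberately contains an invalid byte (0xFF).  */
    static const unsigned char subject[] = "abc\xff zzzyyyxxx";
    int errcode;
    PCRE2_SIZE erroffset;

    /* PCRE2_MATCH_INVALID_UTF makes UTF matching safe on invalid
       input without a separate validity prepass.  */
    pcre2_code *re = pcre2_compile (pattern, PCRE2_ZERO_TERMINATED,
                                    PCRE2_UTF | PCRE2_MATCH_INVALID_UTF,
                                    &errcode, &erroffset, NULL);
    if (!re)
      return 1;

    pcre2_match_data *md = pcre2_match_data_create_from_pattern (re, NULL);
    int rc = pcre2_match (re, subject, sizeof subject - 1, 0, 0, md, NULL);
    printf (rc > 0 ? "match\n" : "no match\n");

    pcre2_match_data_free (md);
    pcre2_code_free (re);
    return 0;
  }

Something like "gcc match.c $(pcre2-config --libs8)" should build it
against an installed PCRE2.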
I'm closing this old bug report <https://bugs.gnu.org/18454>. We can
always reopen it if there are still performance issues that I've missed.