Some attempts at comparing "plain `grep`" with `ppGrep.sh`.

On my host, running `grep` for the string "empty" with the `-abo` switches over the files in `/usr/lib` (114'969 files, 10.38 GB, on a RAID 1 array of 3 TB HDDs):

    $ find "/usr/lib" -type f -exec grep -abo "empty" {} +

will render something like:

    /usr/lib/tuxmath/tuxmath:276730:empty
    /usr/lib/tuxmath/tuxmath:276906:empty
    /usr/lib/tuxmath/tuxmath:277239:empty
    /usr/lib/tuxmath/tuxmath:283001:empty

...and 94'624 more lines (5.98 MB in total).

The checksum of the whole output (stored as `refSum`) is:

    declare -- refSum="4726f856313f306de8810d616e23c63c5cf4e6ab"

After many tests: when the cache is empty, this command takes about 11 minutes. Re-run once the data is cached, it takes less than 8 seconds.

Now, using my `ppGrep`, the syntax differs a little:

    $ find "/usr/lib" -type f -print0 | ppGrep -j1 -abo -zT - "empty"

- the `-T` switch was inspired by `tar`;
- `-j1` ensures only one `grep` runs at a time (no parallelization; by default `ppGrep` runs 3 jobs).

Here is my little comparison table (times in minutes' and seconds"; *N*p. means *N* parallel jobs, i.e. `-jN`):

                  No cache          Cached
    grep          10' 58.352902"     7.646374"
    ppGrep 1p.    10' 33.748950"    18.293024"
    ppGrep 2p.     9' 23.600785"    16.992925"
    ppGrep 3p.     8' 21.149429"    14.62201"
    ppGrep 4p.     8'  8.025807"    14.60411"
    ppGrep 5p.     7' 52.778152"    14.888845"
    ppGrep 6p.     7' 12.015095"    12.992566"

The same test on my laptop (28'879 files, 2.53 GB, on a low-cost SSD) produces this table:

                  No cache        Cached
    grep          18.084168"      2.415218"
    ppGrep 1p.    15.904358"      4.642838"
    ppGrep 2p.    11.189050"      3.996576"
    ppGrep 3p.    10.095855"      3.837164"
    ppGrep 4p.     9.450112"      3.869272"
    ppGrep 5p.     9.116996"      3.944487"
    ppGrep 6p.     9.099851"      4.055299"

And yes: when the data is not cached, my command is almost always quicker than plain `grep`! Even with only 1 job, `ppGrep` is slightly quicker than plain `grep`! This seems to be due to `find` (with `-exec ... +`) or `xargs` splitting the argument list at about 128 KB, so they fork many more `grep` processes (roughly 16 times as many) than my script, which splits at `getconf ARG_MAX` → 2 MB (16 × 128 KB).
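To illustrate where that speed-up can come from, here is a minimal, hypothetical sketch of the same idea in plain bash. This is *not* the real `ppGrep.sh`: the script name, the headroom value and the option handling are assumptions made for illustration. It reads NUL-delimited file names from stdin, packs them into batches sized near `getconf ARG_MAX`, and keeps a few `grep` processes running in parallel (it needs bash ≥ 4.3 for `wait -n`):

    #!/usr/bin/env bash
    # pgrep-sketch.sh PATTERN [GREP_OPTIONS...]   -- illustrative only, not ppGrep.sh
    # Reads NUL-delimited file names on stdin, batches them close to ARG_MAX,
    # and keeps up to $maxjobs grep processes running in parallel.

    maxjobs=3                      # like ppGrep's default of 3 jobs
    pattern=$1; shift              # remaining arguments are passed to grep

    # Leave some headroom below ARG_MAX for the environment and grep's own options
    # (8192 is an arbitrary safety margin, not a value taken from ppGrep).
    limit=$(( $(getconf ARG_MAX) - 8192 ))

    batch=() size=0

    run_batch() {
        (( ${#batch[@]} )) || return 0
        # Throttle: wait while $maxjobs greps are already running (bash >= 4.3).
        while (( $(jobs -rp | wc -l) >= maxjobs )); do wait -n; done
        grep "$@" -- "$pattern" "${batch[@]}" &
        batch=() size=0
    }

    while IFS= read -r -d '' file; do
        # +1 accounts for the terminating NUL byte of each argument.
        if (( size + ${#file} + 1 > limit )); then
            run_batch "$@"
        fi
        batch+=("$file")
        (( size += ${#file} + 1 ))
    done
    run_batch "$@"                 # flush the last (partial) batch
    wait                           # let the remaining greps finish

Used like the `ppGrep` call above, something like:

    $ find "/usr/lib" -type f -print0 | ./pgrep-sketch.sh "empty" -abo

The point of the sketch is only the batching: `find -exec ... +` and `xargs` cut the argument list at roughly 128 KB, while a script that packs arguments up to `ARG_MAX` forks far fewer `grep` processes for the same file list.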