In this optimization, the zap PostingsIterator skips the parsing of
freq/norm/locs chunks based on the includeFreq|Norm|Locs flags.
In a bleve-query microbenchmark on a dev MacBook Pro, with an index of
50K en-wiki docs, a medium-frequency term search that does not ask for
term vectors improved from ~750 q/sec before the change to ~1400 q/sec
after it.
This commit adds boolean flag params to the scorch
PostingsList.Iterator() method, so that the caller can specify whether
freq/norm/locs information is needed or not.
Future changes can leverage these params for optimizations.
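The idea can be sketched as follows. This is a minimal, hypothetical
model of the flag-parameterized iterator, not the actual scorch/zap
definitions; the field names and the string-building "work" are purely
illustrative of which decode steps get skipped.

```go
package main

import "fmt"

// postingsIterator captures the caller's flags once, so every Next()
// call can skip parsing the chunks that were not requested.
type postingsIterator struct {
	includeFreqNorm bool // freq and norm are decoded from the same chunk
	includeLocs     bool
}

// newIterator mirrors the shape of PostingsList.Iterator(includeFreq,
// includeNorm, includeLocs bool) described above.
func newIterator(includeFreq, includeNorm, includeLocs bool) *postingsIterator {
	return &postingsIterator{
		// locations imply the freq-norm chunk must be read too
		includeFreqNorm: includeFreq || includeNorm || includeLocs,
		includeLocs:     includeLocs,
	}
}

// next reports which decode work would happen for one posting.
func (i *postingsIterator) next() string {
	work := "docNum"
	if i.includeFreqNorm {
		work += "+freqNorm" // parse the freq/norm chunk
	}
	if i.includeLocs {
		work += "+locs" // parse the locations chunk
	}
	return work
}

func main() {
	fmt.Println(newIterator(false, false, false).next()) // docNum
	fmt.Println(newIterator(true, true, true).next())    // docNum+freqNorm+locs
}
```

A caller that only needs doc numbers (e.g. a term search without term
vectors) passes all-false flags and pays only for docNum iteration.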
The previous code would inefficiently throw away the nextLocs, and
would also throw away the []segment.Location slice if there were no
locations, such as when iterating a 1-hit postings list.
This change tries to reuse the nextLocs/nextSegmentLocs for all cases.
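The reuse pattern can be sketched like this. The type and field names
follow the description above but are otherwise hypothetical; the point
is that the backing slices are grown once and resliced thereafter,
instead of being reallocated per posting.

```go
package main

import "fmt"

// Location is a stand-in for segment.Location.
type Location struct{ pos uint64 }

type iterator struct {
	nextLocs        []Location  // backing storage, reused across calls
	nextSegmentLocs []*Location // pointer slice handed to callers, also reused
}

// readLocs refills the reusable slices for n locations, allocating
// only when the existing capacity is too small.
func (i *iterator) readLocs(n int) []*Location {
	if cap(i.nextLocs) < n {
		i.nextLocs = make([]Location, n)
		i.nextSegmentLocs = make([]*Location, n)
	}
	i.nextLocs = i.nextLocs[:n]
	i.nextSegmentLocs = i.nextSegmentLocs[:n]
	for j := 0; j < n; j++ {
		i.nextLocs[j] = Location{pos: uint64(j)} // decode would go here
		i.nextSegmentLocs[j] = &i.nextLocs[j]
	}
	return i.nextSegmentLocs
}

func main() {
	it := &iterator{}
	a := it.readLocs(3)
	b := it.readLocs(2) // reuses the same backing arrays
	fmt.Println(len(a), len(b), &a[0] == &b[0])
}
```

Because the second call reslices rather than reallocates, both returned
slices share the same backing array.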
The previous commit's optimization that replaced the locsBitmap
mishandled the case when the 1-bit encoding optimization applied in
the postingsIterator.nextBytes() method, generating incorrect
freq-norm bytes.
Also as part of this change, more unused locsBitmap fields were removed.
This is attempt #2 of the optimization that replaces the locsBitmap,
without any changes from the original commit attempt. A commit that
follows this one contains the actual fix.
See also...
- commit 621b58dd83 (the 1st attempt)
- commit 49a4ee60ba (the revert)
-------------
The original commit message body from 621b58 was...
NOTE: this is a zap file format change.
The separate "postings locations" roaring Bitmap that encoded whether
a posting has locations info is now replaced by the least significant
bit in the freq varint encoded in the freq-norm chunkedIntCoder.
encode/decodeFreqHasLocs() are added as helper functions.
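One plausible implementation of these helpers, assuming the
straightforward shift-and-flag layout implied by the description (the
real zap helpers may differ in detail):

```go
package main

import "fmt"

// encodeFreqHasLocs packs the has-locations flag into the least
// significant bit of the freq value, so a single varint in the
// freq-norm chunk carries both pieces of information.
func encodeFreqHasLocs(freq uint64, hasLocs bool) uint64 {
	rv := freq << 1
	if hasLocs {
		rv |= 1
	}
	return rv
}

// decodeFreqHasLocs reverses the packing.
func decodeFreqHasLocs(v uint64) (freq uint64, hasLocs bool) {
	return v >> 1, v&1 != 0
}

func main() {
	v := encodeFreqHasLocs(5, true)
	f, h := decodeFreqHasLocs(v)
	fmt.Println(v, f, h) // 11 5 true
}
```

Since freq values are small, the extra low bit costs at most one byte
in the varint encoding, while the separate roaring Bitmap (and its
lookup per posting) disappears entirely.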
Testing with the cbft application led to cbft process exits...
AsyncError exit()... error reading location field: EOF --
main.initBleveOptions.func1() at init_bleve.go:85
This reverts commit 621b58dd83.
NOTE: this is a zap file format change.
The separate "postings locations" roaring Bitmap that encoded whether
a posting has locations info is now replaced by the least significant
bit in the freq varint encoded in the freq-norm chunkedIntCoder.
encode/decodeFreqHasLocs() are added as helper functions.
This commit avoids creating roaring.Bitmap's (which would have just a
single entry) when a postings list/iterator represents a single
"1-hit" encoding.
NOTE: this is a scorch zap file format change / bump to version 4.
In this optimization, the uint64 val stored in the vellum FST (term
dictionary) now may either be a uint64 postingsOffset (same as before
this change) or a uint64 encoding of the docNum + norm (in the case
where a term appears in just a single doc).
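A hypothetical packing for the 1-hit case, only to illustrate the
idea: the high bit marks the value as a 1-hit encoding (as opposed to
a postingsOffset), the next 32 bits hold the float32 norm bits, and
the low 31 bits hold the docNum. The actual zap version-4 layout may
differ.

```go
package main

import (
	"fmt"
	"math"
)

// oneHitFlag distinguishes a packed docNum+norm from a postingsOffset.
const oneHitFlag = uint64(1) << 63

// encode1Hit packs a docNum and norm into a single uint64 suitable
// for storing directly as the vellum FST value.
func encode1Hit(docNum uint64, norm float32) uint64 {
	return oneHitFlag |
		uint64(math.Float32bits(norm))<<31 |
		(docNum & ((1 << 31) - 1))
}

// decode1Hit reverses the packing; ok is false when the value is a
// plain postingsOffset rather than a 1-hit encoding.
func decode1Hit(v uint64) (docNum uint64, norm float32, ok bool) {
	if v&oneHitFlag == 0 {
		return 0, 0, false
	}
	return v & ((1 << 31) - 1), math.Float32frombits(uint32(v >> 31)), true
}

func main() {
	v := encode1Hit(42, 0.5)
	d, n, ok := decode1Hit(v)
	fmt.Println(d, n, ok) // 42 0.5 true
}
```

For single-doc terms this removes the postings list, freq-norm chunk,
and bitmap entirely: the term dictionary lookup alone yields the hit.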
Do not re-account for certain referenced data in the zap structures.
New estimates:
                                     ESTIMATE   BENCHMEM
TermQuery                               11396      12437
MatchQuery                              12244      12951
DisjunctionQuery (Term queries)         20644      20709
This API (unexported) will estimate the amount of memory needed to execute
a search query over an index before the collector begins data collection.
Sample estimates for certain queries:
{Size: 10, BenchmarkUpsidedownSearchOverhead}
                                                         ESTIMATE   BENCHMEM
TermQuery                                                    4616       4796
MatchQuery                                                   5210       5405
DisjunctionQuery (Match queries)                             7700       8447
DisjunctionQuery (Term queries)                              6514       6591
ConjunctionQuery (Match queries)                             7524       8175
Nested disjunction query (disjunction of disjunctions)      10306      10708
…
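The estimation approach described above can be sketched as a recursive
sum over the searcher tree. The sizer interface, component types, and
the fixed overhead constant here are all hypothetical illustrations,
not the actual bleve API.

```go
package main

import "fmt"

// sizer is a hypothetical interface: each searcher component reports
// its own memory footprint before collection starts.
type sizer interface{ Size() int }

// termSearcher's estimate is just its buffer size in this sketch.
type termSearcher struct{ buf []byte }

func (t *termSearcher) Size() int { return len(t.buf) }

// disjunctionSearcher sums its children plus an assumed fixed
// per-node overhead.
type disjunctionSearcher struct{ children []sizer }

func (d *disjunctionSearcher) Size() int {
	total := 64 // assumed fixed overhead for the disjunction node
	for _, c := range d.children {
		total += c.Size()
	}
	return total
}

func main() {
	q := &disjunctionSearcher{children: []sizer{
		&termSearcher{buf: make([]byte, 100)},
		&termSearcher{buf: make([]byte, 200)},
	}}
	fmt.Println(q.Size()) // 364
}
```

Walking the tree this way is why nested disjunctions produce larger
estimates than flat term queries in the tables above.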
This change adds a zap PostingsIterator.nextBytes() method, which is
similar to Next(), but instead of returning a Posting instance,
nextBytes() returns the encoded freq/norm and location byte slices.
The zap merge code then provides those byte slices directly to the
intCoder's via a new method, intCoder.AddBytes(), thereby avoiding
having to encode many uvarint's.
During zap segment merging, a new zap PostingsIterator was allocated
for every field x segment x term combination.
This change optimizes by reusing a single PostingsIterator instance
per persistMergedRest() invocation.
Also, unused fields were removed from the PostingsIterator.
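The reuse pattern amounts to resetting one iterator inside the merge
loops instead of allocating per term. This sketch is hypothetical; the
real persistMergedRest() loops over fields, segments, and terms, but
the structure below shows the single-instance idea.

```go
package main

import "fmt"

// postingsIterator is a stand-in; the real one carries decode buffers
// that are worth keeping across resets.
type postingsIterator struct {
	term   string
	resets int
}

// reset repoints the iterator at a new term, keeping its buffers.
func (i *postingsIterator) reset(term string) *postingsIterator {
	i.term = term
	i.resets++
	return i
}

func main() {
	var reuse postingsIterator // single instance per merge invocation
	for _, field := range []string{"title", "body"} {
		for _, term := range []string{"a", "b", "c"} {
			it := reuse.reset(field + ":" + term)
			_ = it // iterate postings for this field/term here
		}
	}
	fmt.Println(reuse.resets) // 6
}
```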
The zap DictionaryIterator Next() was incorrectly returning the
postingsList offset as the term count. As part of this fix, a
PostingsList.read() helper method was refactored out.
Also added more merge unit test scenarios, including merging a segment
for a few rounds to see if there are differences before/after merging.
The zap SegmentBase struct is a refactoring of the zap Segment into
the subset of fields that are needed for read-only ops, without any
persistence related info. This allows us to use zap's optimized data
encoding as scorch's in-memory segments.
The zap Segment struct now embeds a zap SegmentBase struct, and layers
on persistence. Both the zap Segment and zap SegmentBase implement
scorch's Segment interface.
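The embedding described above can be sketched as follows. The fields
and the interface here are illustrative; the real structs carry much
more state.

```go
package main

import "fmt"

// SegmentBase holds only what read-only operations need, so it can
// serve as scorch's in-memory segment using zap's encoding.
type SegmentBase struct {
	mem     []byte // zap-encoded segment data, usable directly in memory
	numDocs uint64
}

func (sb *SegmentBase) Count() uint64 { return sb.numDocs }

// Segment embeds SegmentBase and layers on persistence-related state,
// so both types satisfy the same read-only interface.
type Segment struct {
	SegmentBase
	path string // file-backed persistence info lives only here
}

// segmentReader models the read-only part of scorch's Segment interface.
type segmentReader interface{ Count() uint64 }

func main() {
	var inMem segmentReader = &SegmentBase{numDocs: 7}
	var onDisk segmentReader = &Segment{
		SegmentBase: SegmentBase{numDocs: 7},
		path:        "segment.zap",
	}
	fmt.Println(inMem.Count(), onDisk.Count()) // 7 7
}
```

Go's struct embedding promotes SegmentBase's methods onto Segment, so
read paths are written once and shared by both the in-memory and
persisted forms.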
With this change, there are no more memory allocations in the calls to
PostingsIterator.Next() in the bleve-query microbenchmarks. On a
dev MacBook, on an index of 50K wikipedia docs, using a high-frequency
search of "text:date"...
400 qps - upsidedown/moss
565 qps - scorch before
680 qps - scorch after