During zap segment merging, a new zap PostingsIterator was allocated
for every field × segment × term combination.
This change optimizes that by reusing a single PostingsIterator
instance per persistMergedRest() invocation. It also removes unused
fields from the PostingsIterator.
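As a rough illustration of the reuse pattern, here is a minimal,
self-contained sketch with stand-in types (not zap's actual API):

    package main

    import "fmt"

    // postingsIterator stands in for zap's PostingsIterator; only the
    // reuse pattern matters here, not the real decoding logic.
    type postingsIterator struct {
        docNums []uint64
        pos     int
    }

    // reset refills the same instance instead of allocating a fresh
    // iterator per (field x segment x term).
    func (p *postingsIterator) reset(docNums []uint64) {
        p.docNums, p.pos = docNums, 0
    }

    func (p *postingsIterator) Next() (uint64, bool) {
        if p.pos >= len(p.docNums) {
            return 0, false
        }
        n := p.docNums[p.pos]
        p.pos++
        return n, true
    }

    func main() {
        terms := map[string][]uint64{"ale": {2}, "beer": {1, 3}}
        var itr postingsIterator // single instance for the whole merge
        for term, docNums := range terms {
            itr.reset(docNums)
            for docNum, ok := itr.Next(); ok; docNum, ok = itr.Next() {
                fmt.Println(term, docNum)
            }
        }
    }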
When a scorch index has just been opened and is "empty",
RollbackPoints() no longer treats that as an error.
Also, this commit makes the TestIndexRollback unit test a bit more
forgiving of races, as we were sometimes seeing failures in travis-CI
environments (TestIndexRollback was passing fine on my dev macbook).
The theory is that the double-looping in the persisterLoop would
sometimes be racy, leading to 1 or 2 rollback points.
The optimization to byte-copy all the storedDocs for a given segment
during merging kicks in when the fields are the same across all
segments and there are no deletions in that segment. This can happen,
for example, during data loading or in insert-only scenarios.
As part of this commit, the Segment.copyStoredDocs() method was added,
which uses a single Write() call to copy all the stored docs bytes of
a segment to a writer in one shot.
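A minimal sketch of the single-Write idea, assuming hypothetical
offset bookkeeping (the real Segment tracks these offsets
internally):

    import "io"

    // copyStoredDocs copies a segment's contiguous stored-docs region
    // to the merged output with one Write() call; mem, storedOffset
    // and storedLen are illustrative stand-ins for zap's internal
    // bookkeeping.
    func copyStoredDocs(mem []byte, storedOffset, storedLen uint64, w io.Writer) (int, error) {
        return w.Write(mem[storedOffset : storedOffset+storedLen])
    }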
And, getDocStoredMetaAndCompressed() was refactored into a related
helper function, getDocStoredOffsets(), which provides the storedDocs
metadata (offsets & lengths) for a doc.
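A hedged sketch of the offsets lookup, assuming a fixed-width
stored-docs index followed by uvarint meta/data lengths (an
illustrative layout, not a spec of zap's actual file format):

    import "encoding/binary"

    // getDocStoredOffsets-style lookup: find the doc's entry in the
    // stored index, then decode the uvarint meta and data lengths;
    // the returned values let a caller slice out the meta bytes and
    // the compressed doc data.
    func getDocStoredOffsets(mem []byte, storedIndexOffset, docNum uint64) (metaOff, metaLen, dataLen uint64) {
        indexEntry := storedIndexOffset + 8*docNum
        docStoredOff := binary.BigEndian.Uint64(mem[indexEntry : indexEntry+8])
        ml, n := binary.Uvarint(mem[docStoredOff:])
        dl, m := binary.Uvarint(mem[docStoredOff+uint64(n):])
        metaOff = docStoredOff + uint64(n) + uint64(m)
        return metaOff, ml, dl
    }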
COMPATIBILITY NOTE: scorch zap version bumped in this commit.
The version bump is because mergeFields() now computes whether fields
are the same across segments, and it relies on the previous commit
where fieldIDs are assigned in field-name sorted order (albeit with
the _id field always having fieldID 0).
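For reference, a hedged sketch of the assignment rule being relied on
(illustrative, not the actual scorch code):

    import "sort"

    // assignFieldIDs pins _id to fieldID 0 and hands out the
    // remaining IDs in field-name sorted order, which is what lets
    // mergeFields() cheaply detect "fields are the same across
    // segments".
    func assignFieldIDs(fieldNames []string) map[string]uint16 {
        sorted := append([]string(nil), fieldNames...)
        sort.Strings(sorted)
        fieldIDs := map[string]uint16{"_id": 0}
        next := uint16(1)
        for _, name := range sorted {
            if name == "_id" {
                continue
            }
            fieldIDs[name] = next
            next++
        }
        return fieldIDs
    }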
Potential future commits might rely on this "fields are the same
across segments" info for more optimizations; it is a stepping stone
toward easier comparisons of field maps and further merge
optimizations.
In bleve-blast tests on a 2015 macbook (50K wikipedia docs, 8
indexers, batch size 100, ssd), this does not seem to have a
noticeable effect on indexing throughput.
The slow merger was lagging behind the fast persister, leaving the
merger stuck in a send loop notifying the persister while the
persister itself awaits new introductions from the introducer,
totally blocking the merger.
This fix, along with flipping the eligibility of deleted files,
brings the file count down to around 6 to 11 files per shard for both
the travel and beer samples.
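One way to picture the fix is a non-blocking notify send, sketched
here with illustrative names (not scorch's actual identifiers):

    // The merger notifies the persister without blocking, so a
    // persister stuck waiting on the introducer can no longer wedge
    // the merger.
    func notifyPersister(ch chan uint64, epoch uint64) bool {
        select {
        case ch <- epoch:
            return true
        default:
            return false // persister busy; coalesce/retry later
        }
    }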
This change turns zap.MergeToWriter() into a public func, so that it's
now directly callable from outside packages (such as from scorch's
top-level merger or persister). Also, MergeToWriter() now takes
SegmentBases as input instead of Segments, so that it can work on
either in-memory zap segments or file-based zap segments.
This is yet another stepping stone towards in-memory merging of zap
segments.
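A fragment sketching what this enables at a call site, assuming segs
holds scorch's segment interface values and that a file-based
zap.Segment embeds its zap.SegmentBase (an assumption, not confirmed
here):

    // Fragment (not a complete program): collect SegmentBases from a
    // mix of in-memory and file-based zap segments.
    var inputs []*zap.SegmentBase
    for _, seg := range segs { // segs: segments from the index snapshot
        switch s := seg.(type) {
        case *zap.SegmentBase: // in-memory zap segment
            inputs = append(inputs, s)
        case *zap.Segment: // file-based; assumed to embed a SegmentBase
            inputs = append(inputs, &s.SegmentBase)
        }
    }
    // inputs may now be handed to zap.MergeToWriter(), regardless of
    // where each segment lives.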
This change optimizes the scorch/mem DictionaryIterator by reusing a
DictEntry struct across multiple Next() calls. This follows the same
optimization trick and Next() semantics as upsidedown's FieldDict
implementation.
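The pattern, sketched with illustrative types (bleve's real
index.DictEntry has Term and Count fields; the iterator internals
here are stand-ins):

    // dictIterator returns a pointer to an entry it owns and
    // overwrites on every Next() call, avoiding a per-entry
    // allocation.
    type DictEntry struct {
        Term  string
        Count uint64
    }

    type dictIterator struct {
        entries []DictEntry
        pos     int
        entry   DictEntry // reused across Next() calls
    }

    func (d *dictIterator) Next() (*DictEntry, error) {
        if d.pos >= len(d.entries) {
            return nil, nil
        }
        d.entry = d.entries[d.pos] // overwrite in place
        d.pos++
        return &d.entry, nil
    }

The consequence is the same as with upsidedown's FieldDict: the
returned *DictEntry is only valid until the next call to Next(), so
callers must copy anything they want to keep.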
Adjusting the merge task creation loop to account for the newly
merged segments, so that the eventual merge results / number of
segments stays within the calculated budget.
The TestIndexRollback unit test was failing more often than ever
(perhaps raciness?), so this commit tries to remove avenues of
raciness in the test...
- The Scorch.Open() method is refactored into a Scorch.openBolt()
helper method in order to allow unit tests to control which
background goroutines are started.
- TestIndexRollback() doesn't start the merger goroutine, to simulate
a really slow merger that never gets around to merging old segments.
- TestIndexRollback() creates a long-lived reader after the first
batch, so that the first index snapshot isn't removed due to the
long-lived reader's ref-count.
- TestIndexRollback() temporarily bumps NumSnapshotsToKeep to a large
  number so the persister isn't tempted to removeOldData() the
  snapshots that we're trying to roll back to.
The zap DictionaryIterator Next() was incorrectly returning the
postingsList offset as the term count. As part of this fix, a
PostingsList.read() helper method was refactored out.
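The shape of the corrected Next(), as a fragment with surrounding
declarations elided (field names approximate zap's internals and are
not guaranteed exact):

    // Fragment: the term and postings offset come from the FST
    // iterator; the count must come from the postings list decoded at
    // that offset (via the refactored-out PostingsList.read()
    // helper), not from the offset value itself.
    term, postingsOffset := i.itr.Current()
    if err := i.tmp.read(postingsOffset, i.d); err != nil {
        return nil, err
    }
    i.entry.Term = string(term)
    i.entry.Count = i.tmp.Count() // previously (and wrongly) postingsOffset
    return &i.entry, nil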
Also added more merge unit test scenarios, including merging a segment
for a few rounds to see if there are differences before/after merging.
Instead of sorting docNum keys from a hashmap, this change iterates
from docNum 0 to N and uses an array instead of a hashmap. The array
is also reused across outer-loop iterations.
This optimizes for when there's a lot of structural similarity
between docs, where many/most docs have the same fields (e.g., beers,
breweries). If every doc has completely different fields, then this
change might produce worse behavior compared to the previous sparse
hashmap approach.
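A self-contained sketch of the dense-array pattern, with illustrative
names (not scorch/mem's actual ones):

    package main

    import "fmt"

    // Rather than accumulating per-doc values in a map and sorting
    // its keys, iterate docNums 0..N-1 directly over a slice, and
    // reuse that slice across outer-loop iterations.
    func main() {
        const numDocs = 4
        fieldsPerDoc := [][]string{
            {"name", "abv"}, {"name", "abv"}, {"name", "abv"}, {"name"},
        }

        var row []int // reused buffer, one slot per docNum
        for pass := 0; pass < 2; pass++ { // stands in for the outer loop
            if cap(row) < numDocs {
                row = make([]int, numDocs)
            }
            row = row[:numDocs]
            for i := range row {
                row[i] = 0 // cheap reset instead of a fresh hashmap
            }
            for docNum := 0; docNum < numDocs; docNum++ { // already in order
                row[docNum] = len(fieldsPerDoc[docNum])
            }
            fmt.Println(row) // emit in docNum order, no sort needed
        }
    }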