Previously term entries were encoded pairwise (field/term), so
you'd have data like:
F1/T1 F1/T2 F1/T3 F2/T4 F3/T5
As you can see, even though field 1 has 3 terms, we repeat the F1
part in the encoded data. This is a bit wasteful.
In the new format we encode it as a list of terms for each field:
F1/T1,T2,T3 F2/T4 F3/T5
When fields have multiple terms, this saves space. Unit tests
confirm that even when a field has only a single term, there is
no additional waste.
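The space difference can be sketched roughly as follows. The encoding helpers and sizes below are illustrative only; the real on-disk format uses varints and differs in detail:

```go
package main

import "fmt"

// encodePairwise models the old layout: the field id is repeated
// once per term.
func encodePairwise(fieldTerms map[uint16][]string) int {
	size := 0
	for _, terms := range fieldTerms {
		for _, t := range terms {
			size += 2 + len(t) // field id repeated per term
		}
	}
	return size
}

// encodeGrouped models the new layout: the field id is written once,
// followed by the list of terms for that field.
func encodeGrouped(fieldTerms map[uint16][]string) int {
	size := 0
	for _, terms := range fieldTerms {
		size += 2 // field id written once
		for _, t := range terms {
			size += len(t)
		}
	}
	return size
}

func main() {
	ft := map[uint16][]string{1: {"t1", "t2", "t3"}, 2: {"t4"}, 3: {"t5"}}
	// grouped is never larger, and saves more as terms-per-field grows
	fmt.Println(encodePairwise(ft), encodeGrouped(ft))
}
```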
Here are the results of an indexing test case (beer-search):
$ benchcmp indexing-before.txt indexing-after.txt
benchmark old ns/op new ns/op delta
BenchmarkIndexing-4 11275835988 10745514321 -4.70%
benchmark old allocs new allocs delta
BenchmarkIndexing-4 25230685 22480494 -10.90%
benchmark old bytes new bytes delta
BenchmarkIndexing-4 4802816224 4741641856 -1.27%
And here are the results of a MatchAll search building a facet
on the "abv" field:
$ benchcmp facet-before.txt facet-after.txt
benchmark old ns/op new ns/op delta
BenchmarkFacets-4 439762100 228064575 -48.14%
benchmark old allocs new allocs delta
BenchmarkFacets-4 9460208 3723286 -60.64%
benchmark old bytes new bytes delta
BenchmarkFacets-4 260784261 151746483 -41.81%
Although we expect the index to be smaller in many cases, the
beer-search index turned out about the same size here; this may
be due to the underlying storage (boltdb).
Finally, the index version was bumped from 5 to 7, since smolder
also used version 6, which could lead to some confusion.
This change adds methods that provide access to the actual, underlying
mossStore instance in the bleve/index/store/moss KVStore adaptor.
This enables applications to utilize advanced, mossStore-specific
features (such as partial rollback of indexes). See also
https://issues.couchbase.com/browse/MB-17805
In this commit, I adopted a simple incrementBytes() implementation
found elsewhere in bleve, which seemed simpler than using the
big int package.
Edge case note: if the input bytes would overflow in incrementBytes(),
such as with an input of [0xff 0xff 0xff], it returns nil. moss then
treats a nil endKeyExclusive iterator param as a logical
"higher-than-topmost" key, which produces the prefix iteration
behavior that we want for this edge situation.
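A sketch along the lines of the incrementBytes() described above; the exact implementation in bleve may differ slightly:

```go
package main

import "fmt"

// incrementBytes returns the next larger byte string of the same
// length, carrying as needed. If every byte is 0xff there is no
// larger key of that length, so it returns nil; moss treats a nil
// endKeyExclusive as "higher than topmost".
func incrementBytes(in []byte) []byte {
	rv := make([]byte, len(in))
	copy(rv, in)
	for i := len(rv) - 1; i >= 0; i-- {
		rv[i] = rv[i] + 1
		if rv[i] != 0 {
			return rv // no carry needed, done
		}
		// this byte overflowed to 0x00; carry into the next byte
	}
	return nil // all bytes were 0xff
}

func main() {
	fmt.Printf("%x\n", incrementBytes([]byte{0x01, 0xff}))
	fmt.Println(incrementBytes([]byte{0xff, 0xff, 0xff}) == nil)
}
```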
Previously bleve allowed you to create a memory-only index by
simply passing "" as the path argument to the New() method.
This was not clear when reading the code, and led to some
problematic error cases as well.
Now, to create a memory-only index one should use the
NewMemOnly() method. Passing "" as the path argument
to the New() method will now return os.ErrInvalid.
Advanced users calling NewUsing() can create disk-based or
memory-only indexes, but the change here is that passing ""
as the path argument no longer defaults you into getting
a memory-only index. Instead, the KV store is selected
manually, just as it is for the disk-based solutions.
Here is an example use of the NewUsing() method to create
a memory-only index:
NewUsing("", indexMapping, Config.DefaultIndexType,
Config.DefaultMemKVStore, nil)
Config.DefaultMemKVStore is just a new default value
added to the configuration. It currently points to
gtreap.Name (which could have been used directly
instead for more control).
closes #427
On a dev laptop, bleve-query benchmark on wiki dataset using
query-string of "+text:afternoon +text:coffee" previously had
throughput of 1222qps, and with this change hits 1940qps.
This change to upside_down term-field-reader no longer moves the
underlying iterator forward preemptively. Instead, it will invoke
Next() on the underlying iterator only when the caller invokes the
term-field-reader's Next().
There's a special case to handle the situation on the first Next()
invocation after the term-field-reader is created.
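The lazy-advance idea can be sketched with simplified, hypothetical types; these are not bleve's actual iterator or reader types:

```go
package main

import "fmt"

// iterator is a stand-in for the underlying KV iterator.
type iterator struct {
	keys []string
	pos  int
}

func (i *iterator) Current() (string, bool) {
	if i.pos >= len(i.keys) {
		return "", false
	}
	return i.keys[i.pos], true
}

func (i *iterator) Next() { i.pos++ }

// lazyReader only advances the underlying iterator when the caller
// asks for the next item, with a special case for the first call.
type lazyReader struct {
	it    *iterator
	first bool
}

func newLazyReader(it *iterator) *lazyReader {
	return &lazyReader{it: it, first: true}
}

func (r *lazyReader) Next() (string, bool) {
	if r.first {
		r.first = false // first call: return the initial position
	} else {
		r.it.Next() // only now advance the underlying iterator
	}
	return r.it.Current()
}

func main() {
	r := newLazyReader(&iterator{keys: []string{"a", "b"}})
	for k, ok := r.Next(); ok; k, ok = r.Next() {
		fmt.Println(k)
	}
}
```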
This commit modifies the upside_down TermFrequencyRow parseKDoc() to
skip the ByteSeparator (0xFF) scan, as we already know the term's
length in the UpsideDownCouchTermFieldReader.
On my dev box, results from bleve-query test on high frequency terms
went from previous 107qps to 124qps.
The DumpXXX() methods were always documented as internal and
unsupported. However, now they are being removed from the
public top-level API. They are still available on the internal
IndexReader, which can be accessed using the Advanced() method.
The DocCount() and DumpXXX() methods on the internal index
have moved to the internal index reader, since they logically
operate on a snapshot of an index.
the test had incorrectly been updated to compare the internal
document ids, but these are opaque and may not be the expected
ids in some cases. the test should simply check that the results
correspond to the correct external ids
This change depends on the recently introduced mossStore Stats() API
in github.com/couchbase/moss 564bdbc0 commit. So, gvt for moss has
been updated as part of this change.
Most of the change involves propagating the mossStore instance (the
statsFunc callback) so that it's accessible to the KVStore.Stats()
method.
See also: http://review.couchbase.org/#/c/67524/
previously from JSON we would just deserialize strings like
"-abv" or "city" or "_id" or "_score" as simple sorts
on fields, ids or scores respectively
while this is simple and compact, it can be ambiguous (for
example, if you have a field starting with -, or if you already
have a field named "_id"). also, this simple syntax doesn't allow
us to specify more complex options to deal with type/mode/missing
we keep support for the simple string syntax, but now also
recognize a more expressive syntax like:
{
"by": "field",
"field": "abv",
"desc": true,
"type": "string",
"mode": "min",
"missing": "first"
}
type, mode and missing are optional and default to
"auto", "default", and "last" respectively
in a recent commit, we changed the code to reuse
TermFrequencyRow objects instead of constantly allocating new
ones. unfortunately, one of the original methods was not coded
with this reuse in mind, and a lazy initialization caused us to
leak data from previous uses of the same object.
in particular this caused term vector information from previous
hits to still be applied to subsequent hits. eventually this
causes the highlighter to try and highlight invalid regions
of a slice.
fixes #404
the previous attempt was flawed (but masked by the Reset() method).
the new approach is to do this work in the Reset() method itself;
logically this is where it belongs.
but further we acknowledge that IndexInternalID []byte lifetime
lives beyond the TermFieldDoc, so another copy is made into
the DocumentMatch. Although this introduces yet another copy,
the theory being tested is that it allows each of these
structures to reuse memory without additional allocation.
when the term field reader is copying ID values out of the
kv store's iterator, it is already attempting to reuse the
term frequency row data structure. this change allows us
to also attempt to reuse the []byte allocated for previous
copies of the docid. we reset the slice length to zero
then copy the data into the existing slice, avoiding
new allocation and garbage collection in the cases where
there is already enough space
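The reuse pattern looks roughly like this; reuseCopy is an illustrative helper, not a bleve function:

```go
package main

import "fmt"

// reuseCopy copies src into dst's backing array when capacity allows,
// avoiding a fresh allocation; append only reallocates when the
// existing capacity is too small.
func reuseCopy(dst, src []byte) []byte {
	dst = dst[:0] // reset length to zero, keep the backing array
	return append(dst, src...)
}

func main() {
	buf := make([]byte, 0, 16)
	buf = reuseCopy(buf, []byte("doc-1"))
	fmt.Println(string(buf), cap(buf)) // the 16-byte capacity is reused
}
```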
IndexInternalID is now []byte
this is still opaque, and should still work for any future
index implementations, as it is a least common denominator
choice; all implementations must internally represent the
id as []byte at some point for storage to disk
index id's are now opaque (until finally returned to top-level user)
- the TermFieldDoc's returned by TermFieldReader no longer contain doc id
- instead they return an opaque IndexInternalID
- items returned are still in the "natural index order"
- but that is no longer guaranteed to be "doc id order"
- correct behavior requires that they all follow the same order
- but not any particular order
- new API FinalizeDocID which converts index internal ID's to public string ID
- APIs used internally which previously took doc id now take IndexInternalID
- that is DocumentFieldTerms() and DocumentFieldTermsForFields()
- however, APIs that are used externally do not reflect this change
- that is Document()
- DocumentIDReader follows the same changes, but this is less obvious
- behavior clarified, used to iterate doc ids, BUT NOT in doc id order
- method STILL available to iterate doc ids in range
- but again, you won't get them in any meaningful order
- new method to iterate actual doc ids from list of possible ids
- this was introduced to make the DocIDSearcher continue working
searchers now work with the new opaque index internal doc ids
- they return new DocumentMatchInternal (which does not have string ID)
scorerers also work with these opaque index internal doc ids
- they return DocumentMatchInternal (which does not have string ID)
collectors now also perform a final step of converting the final result
- they STILL return traditional DocumentMatch (with string ID)
- but they now also require an IndexReader (so that they can do the conversion)
at the time you create the term field reader, you can specify
that you don't need the term freq, the norm, or the term vectors
in that case, the index implementation can choose to not return
them in its subsequently returned values
this is advisory only, some simple implementations may ignore this
and continue to return the values anyway (as the current impl of
upside_down does today)
this change will allow future index implementations the
opportunity to do less work when it isn't required
The UpsideDownCouchTermFieldReader.Next() only needs the doc ID from
the key, so this change provides a specialized parseKDoc() method for
that optimization.
Additionally, fields in various structs are more 64-bit aligned, in an
attempt to reduce the invocations of runtime.typedmemmove() and
runtime.heapBitsBulkBarrier(), which the go compiler seems to
automatically insert to transparently handle misaligned data.
Previously, the PrefixIterator() for moss was implemented by comparing
the prefix bytes on every Next().
With this optimization, the next larger endKeyExclusive is computed at
the iterator's initialization, which allows us to avoid all those
prefix comparisons.
This optimization changes the index.TermFieldReader.Next() interface
API, adding an optional, pre-allocated *TermFieldDoc parameter, which
can help prevent garbage creation.
Before this change, upside down's reader would alloc a new
TermFrequencyRow on every Next(), which would be immediately
transformed into an index.TermFieldDoc{}. This change reuses a
pre-allocated TermFrequencyRow that's a field in the reader.
From some bleve-query perf profiling, term field vectors appeared to
be alloc'ed, which was unnecessary as term field vectors are disabled
in the bleve-blast/bleve-query tests.
Currently a bleve batch is built by a user goroutine,
then read by a bleve goroutine.
This is still safe when used correctly.
However, Reset() will modify the map, which is now a data race.
The fix is to simply make Batch.Reset() alloc new maps.
This provides a data-access pattern that can be used safely.
Also, this thread argues that creating a new map may be faster
than trying to reuse an existing one:
https://groups.google.com/d/msg/golang-nuts/UvUm3LA1u8g/jGv_FobNpN0J
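A minimal sketch of the idea; the batch type here is a simplified stand-in for bleve's Batch:

```go
package main

import "fmt"

// batch is a simplified stand-in for bleve's Batch.
type batch struct {
	ops map[string][]byte
}

// Reset allocates fresh maps rather than clearing in place, so a
// goroutine still reading the old map never races with the writer.
func (b *batch) Reset() {
	b.ops = make(map[string][]byte)
}

func main() {
	b := &batch{ops: map[string][]byte{"doc1": []byte("x")}}
	old := b.ops // a reader could still hold this snapshot
	b.Reset()
	fmt.Println(len(old), len(b.ops)) // old snapshot is untouched
}
```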
Separate but related, I have opted to remove the "unsafe batch"
checking that we did. This was always limited anyway, and now
users of Go 1.6 are just as likely to get a panic from the
runtime for concurrent map access anyway. So, the price paid
by us (additional mutex) is not worth it.
fixes #360 and #260
several data structures had a pointer at the start of the struct.
on some 32-bit systems, this causes the remaining fields to no
longer be aligned on 64-bit boundaries.
the fix, identified by @pmezard, is to put the counters first in
the struct, which guarantees correct alignment
fixes #359
The moss RegistryCollectionOptions allows applications to register
moss-related callback API functions and other advanced feature usage
at process initialization time.
For example, this could be used for moss's OnError(), OnEvent() and
logging callback options.
This is somewhat unlikely, but if a term is (incredibly) popular, its
uvarint count value representation might go beyond 8 bytes.
Some KVStore implementations (like forestdb) provide a BatchEx cgo
optimization that depends on proper preallocated counting, so this
change provides a proper worst-case estimate based on the max
uvarint size of 10 bytes instead of the previously incorrect 8 bytes.
Performance optimization. Before this change, by using Merge()
instead of AllocMerge(), moss's internal batch buf's would be
wastefully, dramatically grown during append()'s to a mis-sized buf.
With this change, the upside_down batchRows() and firestorm
batchRows() now use the new KVWriter.NewBatchEx() API, which can
improve performance by reducing the number of cgo hops.
In order to spend less time in append(), this change in upside_down
(similar to another recent performance change in firestorm) builds up
an array of arrays as the eventual input to batchRows().
Previously, the code would gather all the backIndexRows before
processing them. This change instead merges the backIndexRows
concurrently on the theory that we might as well make progress on
compute & processing tasks while waiting for the rest of the back
index rows to be fetched from the KVStore.
Start backindex reading concurrently with analysis to try to
utilize more I/O bandwidth.
The analysis time vs indexing time stats tracking is also now "off",
since there's now concurrency between those activities.
One tradeoff is that the lock area in upside_down Batch() is increased
as part of this change.
Taking another optimization from firestorm, upside_down's
storeField()/indexField() funcs now also append() to passed-in arrays
rather than always allocating their own arrays.
Rather than append() all received rows into a flat []IndexRow during
the result gathering loop, this change instead collects the analysis
result rows into a [][]IndexRow, which avoids extra copying.
As part of this, firestorm batchRows() now takes the [][]IndexRow as
its input.
The new analyzeField() helper func is used for both regular fields and
for composite fields.
With this change, all analysis is done up front, for both regular
fields and composite fields.
After analysis, this change counts up all the row capacity needed and
extends the AnalysisResult.Rows in one shot, as opposed to the
previous approach of dynamically growing the array as needed during
append()'s.
Also, in this change, the TermFreqRow for _id is added first, which
seems more correct.
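The count-then-grow idea can be sketched as follows; appendAll and indexRow are illustrative names, not firestorm's actual API:

```go
package main

import "fmt"

type indexRow struct{ key, val []byte }

// appendAll counts the total rows needed, grows dst once, then
// copies, instead of letting append() repeatedly reallocate as the
// slice grows.
func appendAll(dst []indexRow, groups [][]indexRow) []indexRow {
	total := 0
	for _, g := range groups {
		total += len(g)
	}
	if cap(dst)-len(dst) < total {
		grown := make([]indexRow, len(dst), len(dst)+total) // one shot
		copy(grown, dst)
		dst = grown
	}
	for _, g := range groups {
		dst = append(dst, g...) // no further reallocation happens here
	}
	return dst
}

func main() {
	groups := [][]indexRow{make([]indexRow, 3), make([]indexRow, 2)}
	rows := appendAll(nil, groups)
	fmt.Println(len(rows), cap(rows))
}
```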
This uses the "backing array" technique to allocate many TermFreqRow's
at the front of firestorm.indexField(), instead of the previous
one-by-one, as-needed TermFreqRow allocation approach.
Results from a micro-benchmark (null-firestorm, bleve-blast) show
this change producing a ~half MB/sec improvement.
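A minimal sketch of the backing-array technique; termFreqRow here is a simplified stand-in for firestorm's TermFreqRow:

```go
package main

import "fmt"

type termFreqRow struct {
	term string
	freq uint64
}

// newRows allocates all rows in one backing array up front; callers
// take pointers into it instead of allocating row-by-row, trading
// many small allocations for a single larger one.
func newRows(terms []string) []*termFreqRow {
	backing := make([]termFreqRow, len(terms)) // one allocation
	rv := make([]*termFreqRow, len(terms))
	for i, t := range terms {
		backing[i].term = t
		backing[i].freq = 1
		rv[i] = &backing[i] // pointer into the backing array
	}
	return rv
}

func main() {
	rows := newRows([]string{"beer", "ale", "stout"})
	fmt.Println(len(rows), rows[0].term)
}
```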
Previously, the firestorm.Batch() would notify the lookuper goroutine
on a document by document basis. If the lookuper input channel became
full, then that would block the firestorm.Batch() operation.
With this change, lookuper is notified once, with a "batch" that is an
[]*InFlightItem.
This change also reuses that same []*InFlightItem to invoke the
compensator.MutateBatch().
This also has the advantage of only converting the docID's from string
to []byte just once, outside of the lock that's used by the
compensator.
A micro-benchmark of this change with null-firestorm bleve-blast does
not show a large impact, neither degradation nor improvement.
After this change, with null kvstore micro-benchmark...
GOMAXPROCS=8 ./bleve-blast -source=../../tmp/enwiki.txt \
-count=100000 -numAnalyzers=8 -numIndexers=8 \
-config=../../configs/null-firestorm.json -batch=100
the TermFreqRow key and value methods disappear as large boxes from
the cpu profile graphs.
The field cache is expected to be the authority on which field
names are identified by which identifier. This code was
optimized for the most common case in which fields already
exist. However, if we determine the field is missing while holding
the read lock (shared), we incorrectly proceed immediately
to create a new row with the write lock (exclusive). The
problem is that multiple goroutines might have come to
the same conclusion, and they all proceed to add rows. The two
choices were to do the whole operation with the write lock, or
recheck the value again with the write lock. We have chosen
to repeat the check inside the write-lock, as this optimizes
for what we believe to be the most common case, in which most
fields will already exist.
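The re-check under the write lock can be sketched like this; fieldCache is a simplified stand-in for bleve's field cache:

```go
package main

import (
	"fmt"
	"sync"
)

// fieldCache assigns a stable id per field name; the double-check
// under the write lock prevents two goroutines from both adding a
// row for the same field.
type fieldCache struct {
	mu     sync.RWMutex
	fields map[string]uint16
}

func (c *fieldCache) FieldIndex(name string) uint16 {
	c.mu.RLock()
	id, ok := c.fields[name]
	c.mu.RUnlock()
	if ok {
		return id // fast path: most fields already exist
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	// re-check: another goroutine may have added the field while we
	// were waiting for the write lock
	if id, ok := c.fields[name]; ok {
		return id
	}
	id = uint16(len(c.fields))
	c.fields[name] = id
	return id
}

func main() {
	c := &fieldCache{fields: map[string]uint16{}}
	fmt.Println(c.FieldIndex("abv"), c.FieldIndex("abv"))
}
```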
Row content is an implementation detail of the bleve index and may
change in the future. That said, rows also contain information
valuable for assessing the quality of the index or understanding its
performance. So, as
long as we agree that type asserting rows should only be done if you
know what you are doing and are ready to deal with future changes, I see
no reason to hide the row fields from external packages.
Fixes #268
Two issues:
- Seeking before i.start and iterating returned keys before i.start
- Seeking after the store last key did not invalidate the iterator and
could cause infinite loops.
It boils down to:
1. client sends some work and a notification channel to a single worker,
then waits.
2. worker processes the work
3. worker sends the result to the client using the notification channel
I do not see any problem with this, even with unbuffered channels.
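The pattern can be sketched as (job and worker are illustrative names):

```go
package main

import "fmt"

// job carries the work plus the channel on which the client waits
// for the result.
type job struct {
	n      int
	result chan int
}

// worker processes jobs and notifies each client via the channel
// the client supplied.
func worker(jobs chan job) {
	for j := range jobs {
		j.result <- j.n * j.n // process, then notify the waiting client
	}
}

func main() {
	jobs := make(chan job) // unbuffered is fine: the client waits anyway
	go worker(jobs)
	res := make(chan int)
	jobs <- job{n: 7, result: res} // 1. send work + notification channel
	fmt.Println(<-res)             // 3. receive the result
}
```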
Use this option when rebuilding indexes from scratch. In my small case
(~17000 json documents), it reduces indexing from 520s to 250s.
I did not add any test, short of forced indexing termination it only
has performance effects, which are hard to test. And unknown options are
currently ignored.
Issue #240
benchmark old ns/op new ns/op delta
BenchmarkBatch-4 16950972 16377194 -3.38%
benchmark old allocs new allocs delta
BenchmarkBatch-4 136164 136161 -0.00%
benchmark old bytes new bytes delta
BenchmarkBatch-4 7168872 7109691 -0.83%
benchmark old ns/op new ns/op delta
BenchmarkBatch-4 20738739 17047158 -17.80%
benchmark old allocs new allocs delta
BenchmarkBatch-4 136423 136160 -0.19%
benchmark old bytes new bytes delta
BenchmarkBatch-4 20277781 7168772 -64.65%
this lays the foundation for supporting the new firestorm
indexing scheme. i'm merging these changes ahead of
the rest of the firestorm branch so i can continue
to make changes to the analysis pipeline in parallel
also changed over to mschoch fork of goleveldb (temporary)
the change to my fork is pending some read-only issues described
here: https://github.com/syndtr/goleveldb/issues/111
hopefully we can find a path forward, and get that addressed upstream
the logic for reading the docID from the keys
in this row relies on the keys NEVER containing
the byte separator character (0xff). this is OK,
as we require that all keys be valid utf-8.
however, it turns out that when this rule was
violated, we would panic, because we returned
nil, nil and later tried to print the doc id
in boltdb, long readers *MAY* block a writer. in particular if
the write requires additional allocation, it must acquire a lock
already held by the reader. in general this is not a problem
for bleve (though it can affect performance in some cases), but
it is a problem for the reader isolation test. this commit
adds a hack to try and avoid the need for additional allocation
closes #208
in some limited cases we can detect unsafe usage
in these cases, do not trip over ourselves and panic
instead return a strongly typed error upside_down.UnsafeBatchUseDetected
also, introduced Batch.Reset() to allow batch reuse
this is currently still experimental
closes #195
trying to be clever, we reused the memory allocated for the left
operand when doing partial merges
this had been tested to be safe, in general. however, the
implementation was then written such that we always reused
globally defined operands; this meant that we mutated
the operands which were intended to always represent
+1/-1
this then cascades quickly to making increment/decrement
values much larger/smaller than they should be
related to #197
refactor to share code in emulated batch
refactor to share code in emulated merge
refactor index kvstore benchmarks to share more code
refactor index kvstore benchmarks to be more repeatable
This reverts commit cb8c1741289a0f00b30733e0d52d9d81d1199603.
This commit is no longer desired. The KV store API has been changed to
better address this issue.
For more details, see the google group conversation thread at:
https://groups.google.com/forum/#!topic/bleve/aHZ8gmihLiY
- this change keeps the method behavior consistent with the
levigo/leveldb implementation.
- the leveldb store_test.go and goleveldb store_test.go are now
identical.
improvements uncovered some issues with how k/v data was copied
or not. to address this, the kv abstraction layer now lets the impl
specify whether the bytes returned are safe to use after a reader
(or writer, since writers are also readers) is closed
See index/store/KVReader - BytesSafeAfterClose() bool
false is the safe value if you're not sure
it will cause index impls to copy the data
Some kv impls have already created a copy at the C-API barrier,
in which case they can safely return true.
Overall this yields ~25% speedup for searches with leveldb.
It yields ~10% speedup for boltdb.
Returning stored fields is now slower with boltdb, as previously
we were returning unsafe bytes.
this introduces disk format v4
now the summary rows for a term are stored in their own
"dictionary row" format, previously the same information
was stored in special term frequency rows
this now allows us to easily iterate all the terms for a field
in sorted order (useful for many other fuzzy data structures)
at the top-level of bleve you can now browse terms within a field
using the following api on the Index interface:
FieldDict(field string) (index.FieldDict, error)
FieldDictRange(field string, startTerm []byte, endTerm []byte) (index.FieldDict, error)
FieldDictPrefix(field string, termPrefix []byte) (index.FieldDict, error)
fixes #127
this is due to forestdb auto-compaction using the provided
path as just a prefix, so if we're not careful we end
up with many stray files lying around
here, we create a sub-directory first, and just nuke the
whole subdir when we're done