The new analyzeField() helper func is used for both regular fields and
for composite fields.
With this change, all analysis is done up front, for both regular
fields and composite fields.
After analysis, this change counts up all the row capacity needed and
extends AnalysisResult.Rows in one shot, as opposed to the previous
approach of dynamically growing the array as needed during repeated
append() calls.
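
In sketch form (with hypothetical stand-in types, not the actual firestorm ones), the pre-sizing looks like this:

package main

import "fmt"

// IndexRow and fieldAnalysis are hypothetical stand-ins for the firestorm
// analysis types, just to make the sketch self-contained.
type IndexRow interface{ Key() []byte }

type fieldAnalysis struct {
	rowsNeeded int // rows this field will contribute after analysis
}

func main() {
	analyses := []fieldAnalysis{{rowsNeeded: 3}, {rowsNeeded: 5}}
	var rows []IndexRow

	// count the total row capacity needed up front...
	needed := 0
	for _, fa := range analyses {
		needed += fa.rowsNeeded
	}
	// ...and extend the slice in one shot, instead of letting repeated
	// append() calls regrow the backing array piecemeal
	grown := make([]IndexRow, len(rows), len(rows)+needed)
	copy(grown, rows)
	rows = grown

	fmt.Println(len(rows), cap(rows)) // 0 8
}
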
Also, in this change, the TermFreqRow for _id is added first, which
seems more correct.
This uses the "backing array" technique to allocate many TermFreqRow's
at the front of firestorm.indexField(), instead of the previous
one-by-one, as-needed TermFreqRow allocation approach.
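
Roughly, the backing-array allocation looks like the following sketch (the TermFreqRow fields shown are illustrative, not the real row layout):

// hypothetical TermFreqRow shape; the point is the single backing allocation
type TermFreqRow struct {
	field uint16
	term  []byte
	freq  uint64
}

// newTermFreqRows allocates n rows with one make() call and hands out
// pointers into that backing array, instead of allocating each row
// one-by-one as it is needed.
func newTermFreqRows(n int) []*TermFreqRow {
	backing := make([]TermFreqRow, n) // single allocation for all rows
	rows := make([]*TermFreqRow, n)
	for i := range backing {
		rows[i] = &backing[i]
	}
	return rows
}
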
Results from the micro-benchmark (null-firestorm, bleve-blast) show this
change producing a ~0.5 MB/sec improvement.
Previously, the firestorm.Batch() would notify the lookuper goroutine
on a document by document basis. If the lookuper input channel became
full, then that would block the firestorm.Batch() operation.
With this change, lookuper is notified once, with a "batch" that is an
[]*InFlightItem.
This change also reuses that same []*InFlightItem to invoke the
compensator.MutateBatch().
This also has the advantage of only converting the docID's from string
to []byte just once, outside of the lock that's used by the
compensator.
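
A minimal sketch of the hand-off, assuming simplified stand-ins for InFlightItem, the compensator, and the lookuper channel (not the exact firestorm API):

import "sync"

type InFlightItem struct {
	docID []byte // converted from string exactly once, while building the batch
}

type compensator struct {
	mutex sync.Mutex
}

func (c *compensator) MutateBatch(items []*InFlightItem) {
	c.mutex.Lock()
	defer c.mutex.Unlock()
	// ... record the in-flight docIDs; no string-to-[]byte work under the lock
}

func indexBatch(items []*InFlightItem, c *compensator, lookuper chan []*InFlightItem) {
	c.MutateBatch(items) // reuse the same slice for the compensator
	lookuper <- items    // notify the lookuper once per batch, not per document
}
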
A micro-benchmark of this change with null-firestorm bleve-blast shows
no large impact, neither degradation nor improvement.
After this change, with null kvstore micro-benchmark...
GOMAXPROCS=8 ./bleve-blast -source=../../tmp/enwiki.txt \
-count=100000 -numAnalyzers=8 -numIndexers=8 \
-config=../../configs/null-firestorm.json -batch=100
Then the TermFreqRow key and value methods disappear as large boxes from
the CPU profile graphs.
The field cache is expected to be the authority on which field
names are identified by which identifier. This code was
optimized for the most common case in which fields already
exist. However, if we determine the field is missing with
the read lock (shared), we incorrectly immediately proceed
to create a new row with the write lock (exclusive). The
problem is that multiple goroutines might have come to
the same conclusion, and they all proceed to add rows. The two
choices were to do the whole operation with the write lock, or
recheck the value again with the write lock. We have chosen
to repeat the check inside the write-lock, as this optimizes
for what we believe to be the most common case, in which most
fields will already exist.
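
A minimal sketch of the resulting check / re-check pattern, using an RWMutex-guarded cache (names are illustrative; the real field cache has more to it):

import "sync"

type FieldCache struct {
	mutex  sync.RWMutex
	fields map[string]uint16
	lastID uint16
}

func NewFieldCache() *FieldCache {
	return &FieldCache{fields: map[string]uint16{}}
}

func (f *FieldCache) FieldNamed(name string) uint16 {
	f.mutex.RLock()
	if id, ok := f.fields[name]; ok { // common case: field already exists
		f.mutex.RUnlock()
		return id
	}
	f.mutex.RUnlock()

	f.mutex.Lock()
	defer f.mutex.Unlock()
	// re-check under the write lock: another goroutine may have added
	// this field between our RUnlock and Lock
	if id, ok := f.fields[name]; ok {
		return id
	}
	id := f.lastID
	f.lastID++
	f.fields[name] = id
	return id
}
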
Rows content is an implementation detail of the bleve index and may change
in the future. That said, rows also contain information valuable for
assessing the quality of the index or understanding its performance. So, as
long as we agree that type-asserting rows should only be done if you
know what you are doing and are ready to deal with future changes, I see
no reason to hide the row fields from external packages.
Fix #268
Two issues:
- Seeking before i.start and iterating returned keys before i.start
- Seeking after the store's last key did not invalidate the iterator and
could cause infinite loops.
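
A sketch of the two fixes, with a simplified iterator wrapper (hypothetical types, not the actual store code):

import "bytes"

type rawIterator interface {
	Seek(key []byte)
	Valid() bool
}

type Iterator struct {
	start []byte
	iter  rawIterator
	valid bool
}

func (i *Iterator) Seek(key []byte) {
	if bytes.Compare(key, i.start) < 0 {
		key = i.start // never seek (and later return keys) before i.start
	}
	i.iter.Seek(key)
	// seeking past the store's last key must invalidate the iterator,
	// otherwise callers that loop until Valid() never terminate
	i.valid = i.iter.Valid()
}
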
It boils down to:
1. client sends some work and a notification channel to a single worker,
then waits.
2. worker processes the work
3. worker sends the result to the client using the notification channel
I do not see any problem with this, even with unbuffered channels.
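
For illustration, a minimal runnable version of that hand-off with unbuffered channels:

package main

import "fmt"

// work carries the input plus the notification channel the client waits on
type work struct {
	input int
	done  chan int
}

func worker(queue chan work) {
	for w := range queue {
		w.done <- w.input * 2 // 2. process, 3. notify via the client's channel
	}
}

func main() {
	queue := make(chan work) // unbuffered is fine for this hand-off
	go worker(queue)

	done := make(chan int)
	queue <- work{input: 21, done: done} // 1. send work + notification channel
	fmt.Println(<-done)                  // client blocks here until the result arrives
}
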
Use this option when rebuilding indexes from scratch. In my small case
(~17000 json documents), it reduces indexing from 520s to 250s.
I did not add any test; short of forcing indexing termination, it only
has performance effects, which are hard to test. Also, unknown options are
currently ignored.
Issue #240
benchmark old ns/op new ns/op delta
BenchmarkBatch-4 16950972 16377194 -3.38%
benchmark old allocs new allocs delta
BenchmarkBatch-4 136164 136161 -0.00%
benchmark old bytes new bytes delta
BenchmarkBatch-4 7168872 7109691 -0.83%
benchmark old ns/op new ns/op delta
BenchmarkBatch-4 20738739 17047158 -17.80%
benchmark old allocs new allocs delta
BenchmarkBatch-4 136423 136160 -0.19%
benchmark old bytes new bytes delta
BenchmarkBatch-4 20277781 7168772 -64.65%
this lays the foundation for supporting the new firestorm
indexing scheme. i'm merging these changes ahead of
the rest of the firestorm branch so i can continue
to make changes to the analysis pipeline in parallel
also changed over to mschoch fork of goleveldb (temporary)
the change to my fork is pending some read-only issues described
here: https://github.com/syndtr/goleveldb/issues/111
hopefully we can find a path forward, and get that addressed upstream
the logic for reading the docID from the keys
in this row relies on the keys NEVER containing
the byte separator character (0xff); this is OK
as we require that all keys be valid utf-8.
however, it turns out that when this rule was
violated, we would panic, because we returned
nil, nil and later tried to print the doc id
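
a sketch of the safer behavior, returning an explicit error for a malformed key instead of nil, nil (the key layout here is simplified):

import (
	"bytes"
	"fmt"
)

const ByteSeparator byte = 0xff

// docIDFromKey returns an explicit error for a malformed key instead of
// nil, nil, so callers never try to print a nil doc id.
func docIDFromKey(key []byte) ([]byte, error) {
	sep := bytes.IndexByte(key, ByteSeparator)
	if sep < 0 {
		return nil, fmt.Errorf("invalid key, missing 0xff separator: % x", key)
	}
	return key[sep+1:], nil
}
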
in boltdb, long readers *MAY* block a writer. in particular if
the write requires additional allocation, it must acquire a lock
already held by the reader. in general this is not a problem
for bleve (though it can affect performance in some cases), but
it is a problem for the reader isolation test. this commit
adds a hack to try and avoid the need for additional allocation
closes #208
in some limited cases we can detect unsafe usage
in these cases, do not trip over ourselves and panic
instead return a strongly typed error upside_down.UnsafeBatchUseDetected
also, introduced Batch.Reset() to allow batch reuse
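
intended usage looks roughly like this, assuming the public bleve API (NewBatch/Index/Batch/Reset); the batch size of 100 is arbitrary:

import "github.com/blevesearch/bleve"

// indexAll reuses one batch for the whole load, calling Reset() after each
// execution instead of allocating a fresh batch every time.
func indexAll(idx bleve.Index, docs map[string]interface{}) error {
	batch := idx.NewBatch()
	n := 0
	for id, doc := range docs {
		if err := batch.Index(id, doc); err != nil {
			return err
		}
		n++
		if n%100 == 0 {
			if err := idx.Batch(batch); err != nil {
				return err
			}
			batch.Reset() // safe to reuse once the batch has been executed
		}
	}
	return idx.Batch(batch)
}
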
this is currently still experimental
closes #195
trying to be clever, we reused the memory allocated for the left
operand when doing partial merges
this had been tested to be safe, in general. however, the
implementation was written such that we always reused
globally defined operands; this meant that we mutated
the operands which were intended to always represent
+1/-1
this then cascaded quickly into increment/decrement
values much larger/smaller than they should be
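
an illustrative reproduction of the bug and the obvious fix (not the actual merge operator code):

// intended to always represent +1, but mutated in place by the buggy merge
var incrementOperand = []byte{1}

// badPartialMerge shows the bug: reusing the left operand's memory mutates
// the shared +1/-1 operands, so later merges apply ever larger deltas.
func badPartialMerge(left, right []byte) []byte {
	left[0] += right[0]
	return left
}

// fixedPartialMerge operates on a copy, leaving the shared operands intact.
func fixedPartialMerge(left, right []byte) []byte {
	out := make([]byte, len(left))
	copy(out, left)
	out[0] += right[0]
	return out
}
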
related to #197
refactor to share code in emulated batch
refactor to share code in emulated merge
refactor index kvstore benchmarks to share more code
refactor index kvstore benchmarks to be more repeatable
This reverts commit cb8c1741289a0f00b30733e0d52d9d81d1199603.
This commit is no longer desired. The KV store API has been changed to
better address this issue.
For more details, see the google group conversation thread at:
https://groups.google.com/forum/#!topic/bleve/aHZ8gmihLiY
- this change keeps the method behavior consistent with the
levigo/leveldb implementation.
- the leveldb store_test.go and goleveldb store_test.go are now
identical.
improvements uncovered some issues with how k/v data was copied
or not. to address this, the kv abstraction layer now lets an impl
specify whether the bytes returned are safe to use after a reader
(or writer, since writers are also readers) is closed
See index/store/KVReader - BytesSafeAfterClose() bool
false is the safe value if you're not sure;
it will cause index impls to copy the data
Some kv impls have already created a copy at the C-API barrier,
in which case they can safely return true.
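
For example, an index impl can use the flag like this (KVReader trimmed to the one method relevant here):

// KVReader is trimmed to the one method relevant to this change.
type KVReader interface {
	BytesSafeAfterClose() bool
	// Get, Iterator, Close, ... elided
}

// copyIfNeeded is how an index impl can decide whether to copy value bytes
// before the reader is closed.
func copyIfNeeded(r KVReader, val []byte) []byte {
	if r.BytesSafeAfterClose() {
		return val // the impl guarantees these bytes outlive the reader
	}
	safe := make([]byte, len(val))
	copy(safe, val) // unsure/false: copy, which is always the safe choice
	return safe
}
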
Overall this yields ~25% speedup for searches with leveldb.
It yields ~10% speedup for boltdb.
Returning stored fields is now slower with boltdb, as previously
we were returning unsafe bytes.
this introduces disk format v4
now the summary rows for a term are stored in their own
"dictionary row" format, previously the same information
was stored in special term frequency rows
this now allows us to easily iterate all the terms for a field
in sorted order (useful for many other fuzzy data structures)
at the top-level of bleve you can now browse terms within a field
using the following api on the Index interface:
FieldDict(field string) (index.FieldDict, error)
FieldDictRange(field string, startTerm []byte, endTerm []byte) (index.FieldDict, error)
FieldDictPrefix(field string, termPrefix []byte) (index.FieldDict, error)
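
usage looks roughly like this, assuming the FieldDict/DictEntry types in the index package (the DictEntry field names are my recollection, so treat them as an assumption):

import (
	"fmt"

	"github.com/blevesearch/bleve"
)

// dumpFieldDict walks every term in a field's dictionary in sorted order.
func dumpFieldDict(idx bleve.Index, field string) error {
	dict, err := idx.FieldDict(field)
	if err != nil {
		return err
	}
	defer dict.Close()
	for {
		entry, err := dict.Next()
		if err != nil {
			return err
		}
		if entry == nil {
			break // end of the dictionary
		}
		fmt.Printf("%s (count %d)\n", entry.Term, entry.Count)
	}
	return nil
}
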
fixes #127
this is due to forestdb auto-compaction using the provided
path as just a prefix, so if we're not careful we end
up with many stray files lying around
here, we create a sub-directory first, and just nuke the
whole subdir when we're done
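
a minimal sketch of the work-around (helper name and paths are illustrative, not the actual test code):

import (
	"os"
	"path/filepath"
)

// withForestDBDir gives forestdb its own sub-directory, so cleanup is just
// removing that directory, auto-compaction strays included.
func withForestDBDir(parent string, fn func(dbPath string) error) error {
	dir := filepath.Join(parent, "forestdb")
	if err := os.MkdirAll(dir, 0700); err != nil {
		return err
	}
	defer os.RemoveAll(dir) // nuke the whole subdir when we're done
	return fn(filepath.Join(dir, "test.fdb"))
}
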