1. the porter stemmer offers a method that skips lowercasing, but
to use it we must first convert the term to runes ourselves, so we do that
2. now we can invoke the version that skips lowercasing, since we
already lowercase before stemming in a separate filter
because the stemmer modifies the runes in place,
we have no way to know whether anything changed, so we must
always encode back into the term byte slice
added a unit test which catches the problem found
NOTE this uses analysis.BuildTermFromRunes, so the perf gain is
only visible with the other PR also merged
future gains are possible if we update the stemmer to let us
know whether changes were made, allowing us to skip re-encoding to
[]byte when no changes were actually made
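the filter shape described above can be sketched as follows; stemRunesWithoutLowercasing is a hypothetical stand-in for the porter stemmer's skip-lowercasing entry point, not the real API:

```go
package main

import "fmt"

// stemRunesWithoutLowercasing stands in for the porter stemmer's
// skip-lowercasing call; it modifies the runes in place and returns
// the (possibly shorter) slice. hypothetical, for illustration only:
// here a toy rule strips a trailing 's'.
func stemRunesWithoutLowercasing(rs []rune) []rune {
	if len(rs) > 1 && rs[len(rs)-1] == 's' {
		return rs[:len(rs)-1]
	}
	return rs
}

// filterTerm converts the term to runes, stems without lowercasing,
// and always encodes back into a byte slice, since the in-place
// stemmer gives us no signal about whether anything changed.
func filterTerm(term []byte) []byte {
	rs := []rune(string(term))
	rs = stemRunesWithoutLowercasing(rs)
	return []byte(string(rs))
}

func main() {
	fmt.Println(string(filterTerm([]byte("walks")))) // walk
}
```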
avoid allocating an unnecessary intermediate buffer
also introduce a new method letting a caller optimistically
try to encode back into an existing buffer; if it isn't
large enough, it silently allocates a new one and returns it
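a minimal sketch of that optimistic-reuse idea; the function name and signature are illustrative, not the actual analysis.BuildTermFromRunes API:

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// buildTermFromRunes encodes runes back into term bytes, reusing the
// provided buffer when its capacity suffices and silently allocating
// a new one otherwise.
func buildTermFromRunes(buf []byte, rs []rune) []byte {
	// compute exact encoded size first to avoid an intermediate buffer
	need := 0
	for _, r := range rs {
		need += utf8.RuneLen(r)
	}
	if cap(buf) < need {
		buf = make([]byte, need)
	}
	buf = buf[:need]
	n := 0
	for _, r := range rs {
		n += utf8.EncodeRune(buf[n:], r)
	}
	return buf
}

func main() {
	buf := make([]byte, 0, 16)
	out := buildTermFromRunes(buf, []rune("héllo"))
	// the existing buffer was reused, no new allocation
	fmt.Println(string(out), cap(out) == 16)
}
```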
the previous impl always did a full utf8 decode of the token
if we assume most tokens are not possessive, this is unnecessary,
and even when they are, we only need to chop off the last two runes
so now we only decode the last rune of the token, and if it looks like
s/S we proceed to decode the second-to-last rune; only
if that looks like any form of apostrophe do we make any
changes to the token, again by just reslicing the original to chop
off the possessive suffix
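the approach can be sketched like this, assuming a small illustrative set of apostrophe forms (the real filter's set may differ):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// stripPossessive chops a trailing apostrophe-s by decoding only the
// last one or two runes, then reslicing the original token bytes
// rather than re-encoding anything.
func stripPossessive(term []byte) []byte {
	last, lastSize := utf8.DecodeLastRune(term)
	if last != 's' && last != 'S' {
		return term // common case: not possessive, nothing decoded further
	}
	rest := term[:len(term)-lastSize]
	apos, aposSize := utf8.DecodeLastRune(rest)
	switch apos {
	case '\'', '\u2019', '\uFF07': // ascii, right single quote, fullwidth
		return rest[:len(rest)-aposSize]
	}
	return term
}

func main() {
	fmt.Println(string(stripPossessive([]byte("marty's")))) // marty
	fmt.Println(string(stripPossessive([]byte("glass"))))   // glass
}
```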
the token stream resulting from the removal of stop words must
be shorter than or the same length as the original, so we just
reuse it and truncate it at the end.
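the reuse-and-truncate pattern looks like this; the real filter operates on an analysis.TokenStream, but plain strings illustrate the idea:

```go
package main

import "fmt"

// removeStopWords filters tokens in place: the output can never be
// longer than the input, so we reuse the backing array via a
// zero-length reslice and the result is the truncated slice.
func removeStopWords(tokens []string, stop map[string]bool) []string {
	out := tokens[:0] // shares the backing array with tokens
	for _, t := range tokens {
		if !stop[t] {
			out = append(out, t)
		}
	}
	return out
}

func main() {
	stop := map[string]bool{"the": true, "a": true}
	fmt.Println(removeStopWords([]string{"the", "quick", "a", "fox"}, stop))
}
```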
This change depends on the recently introduced mossStore Stats() API
in github.com/couchbase/moss 564bdbc0 commit. So, gvt for moss has
been updated as part of this change.
Most of the change involves propagating the mossStore instance (via the
statsFunc callback) so that it's accessible to the KVStore.Stats()
method.
See also: http://review.couchbase.org/#/c/67524/
this work was started to improve code coverage,
but it also improves performance and adds support for escaping
escaping:
The following quoted string enumerates the characters which
may be escaped.
"+-=&|><!(){}[]^\"~*?:\\/ "
Note that this list includes space.
To escape one of these characters, prefix it with the \
(backslash) character. In all cases, the escaped version
produces the character itself and is not interpreted by the
lexer.
Two simple examples:
my\ name
Will be interpreted as a single argument to a match query
with the value "my name".
"contains a\" character"
Will be interpreted as a single argument to a phrase query
with the value `contains a " character`.
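the escaping rule above can be sketched as a small unescape pass; this is an illustration of the rule, not the actual lexer code:

```go
package main

import "fmt"

// unescape applies the rule described above: a backslash followed by
// any character yields that character literally.
func unescape(s string) string {
	out := make([]rune, 0, len(s))
	escaped := false
	for _, r := range s {
		if escaped {
			out = append(out, r) // escaped char taken verbatim
			escaped = false
			continue
		}
		if r == '\\' {
			escaped = true
			continue
		}
		out = append(out, r)
	}
	return string(out)
}

func main() {
	fmt.Println(unescape(`my\ name`)) // my name
}
```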
Performance:
before$ go test -v -run=xxx -bench=BenchmarkLexer
BenchmarkLexer-4 100000 13991 ns/op
PASS
ok github.com/blevesearch/bleve 1.570s
after$ go test -v -run=xxx -bench=BenchmarkLexer
BenchmarkLexer-4 500000 3387 ns/op
PASS
ok github.com/blevesearch/bleve 1.740s
the collector has optimizations to avoid allocation and reslicing
during the common case of searching for top hits
however, in some cases users request a very large number of
search hits (attempting to get them all), which
caused unnecessary allocation of RAM.
to address this we introduce a new constant, PreAllocSizeSkipCap,
which defaults to 1000. if your size+skip is less than
this constant, you get the optimized behavior. if your
size+skip is greater than this, we cap the preallocations at
this lower value. additional space is acquired as needed
by growing the DocumentMatchPool and reslicing the
collector backing slice
applications can change the value of PreAllocSizeSkipCap to suit
their own needs
fixes #408
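the capping rule amounts to something like this; the helper function name is illustrative, only the PreAllocSizeSkipCap constant comes from the change itself:

```go
package main

import "fmt"

// PreAllocSizeSkipCap caps preallocation when size+skip is very
// large; applications can change it to suit their needs.
var PreAllocSizeSkipCap = 1000

// preAllocSize returns how much backing space to preallocate for a
// search requesting `size` hits after skipping `skip`.
func preAllocSize(size, skip int) int {
	if size+skip > PreAllocSizeSkipCap {
		// cap the preallocation; the collector grows as needed later
		return PreAllocSizeSkipCap + 1
	}
	return size + skip + 1
}

func main() {
	fmt.Println(preAllocSize(10, 0))      // small request: fully preallocated
	fmt.Println(preAllocSize(1000000, 0)) // huge request: capped
}
```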
counter-intuitively, the list impl was faster than the heap
the theory was that the heap did more comparisons and swapping,
so even though it benefited from avoiding interfaces and some cache
locality, it was still slower
the idea was to just use a raw slice kept in order
this avoids the need for an interface, but can take the same comparison
approach as the list
it seems to work out:
go test -run=xxx -bench=. -benchmem -cpuprofile=cpu.out
BenchmarkTop10of100000Scores-4 5000 299959 ns/op 2600 B/op 36 allocs/op
BenchmarkTop100of100000Scores-4 2000 601104 ns/op 20720 B/op 216 allocs/op
BenchmarkTop10of1000000Scores-4 500 3450196 ns/op 2616 B/op 36 allocs/op
BenchmarkTop100of1000000Scores-4 500 3874276 ns/op 20856 B/op 216 allocs/op
PASS
ok github.com/blevesearch/bleve/search/collectors 7.440s
the TopNCollector now can either use a heap or a list
i did not code it to use an interface, because this is a very hot
loop during searching. rather, it lets bleve developers easily
toggle between the two (or other ideas) by changing 2 lines
The list is faster in the benchmark, but causes more allocations.
The list is once again the default (for now).
To switch to the heap implementation, change:
store *collectStoreList
to
store *collectStoreHeap
and
newStoreList(...
to
newStoreHeap(...
the primary change is going back to sort values being []string
and not []interface{}; this avoids allocations when converting
into interface{}
that sounds obvious, so why didn't we just do that first?
because a common (default) sort is by score, which is naturally
a number, not a string (like terms). converting the score into
a string was also expensive, and it is the common case.
so, this solution also makes the change to NOT put the score
into the sort value list. instead you see the dummy value
"_score". this is just a placeholder; the actual sort impl
knows that field of the sort is the score, and will sort
using the actual score.
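the placeholder trick can be sketched with a single-field comparator; the type and function names here are illustrative, not bleve's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// hit pairs a numeric score with string sort values; the "_score"
// placeholder in sortValues never gets compared as a string.
type hit struct {
	score      float64
	sortValues []string
}

// compareHit compares two hits on one sort field: when the field is
// the "_score" placeholder, it uses the numeric score directly,
// otherwise it compares the string sort values.
func compareHit(field int, sortFields []string, a, b hit) int {
	if sortFields[field] == "_score" {
		switch {
		case a.score < b.score:
			return -1
		case a.score > b.score:
			return 1
		}
		return 0
	}
	return strings.Compare(a.sortValues[field], b.sortValues[field])
}

func main() {
	a := hit{score: 0.4, sortValues: []string{"_score"}}
	b := hit{score: 0.9, sortValues: []string{"_score"}}
	fmt.Println(compareHit(0, []string{"_score"}, a, b)) // -1
}
```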
also, several other aspects of the benchmark were cleaned up
so that unnecessary allocations do not pollute the cpu profiles
Here are the updated benchmarks:
$ go test -run=xxx -bench=. -benchmem -cpuprofile=cpu.out
BenchmarkTop10of100000Scores-4 3000 465809 ns/op 2548 B/op 33 allocs/op
BenchmarkTop100of100000Scores-4 2000 626488 ns/op 21484 B/op 213 allocs/op
BenchmarkTop10of1000000Scores-4 300 5107658 ns/op 2560 B/op 33 allocs/op
BenchmarkTop100of1000000Scores-4 300 5275403 ns/op 21624 B/op 213 allocs/op
PASS
ok github.com/blevesearch/bleve/search/collectors 7.188s
Prior to this PR, master reported:
$ go test -run=xxx -bench=. -benchmem
BenchmarkTop10of100000Scores-4 3000 453269 ns/op 360161 B/op 42 allocs/op
BenchmarkTop100of100000Scores-4 2000 519131 ns/op 388275 B/op 219 allocs/op
BenchmarkTop10of1000000Scores-4 200 7459004 ns/op 4628236 B/op 52 allocs/op
BenchmarkTop100of1000000Scores-4 200 8064864 ns/op 4656596 B/op 232 allocs/op
PASS
ok github.com/blevesearch/bleve/search/collectors 7.385s
So, we're pretty close on the smaller datasets, and we scale better on the larger datasets.
We also show fewer allocations and bytes in all cases (some of this is artificial due to test cleanup).
this change means simple sort requirements no longer require
importing the search package (high-level API goal)
also the sort test at the top-level was changed to use this form
previously from JSON we would just deserialize strings like
"-abv" or "city" or "_id" or "_score" as simple sorts
on fields, ids or scores respectively
while this is simple and compact, it can be ambiguous (for
example, if you have a field starting with - or a field
named "_id" already). also, this simple syntax doesn't allow us
to specify more complex options to deal with type/mode/missing
we keep support for the simple string syntax, but now also
recognize a more expressive syntax like:
{
  "by": "field",
  "field": "abv",
  "desc": true,
  "type": "string",
  "mode": "min",
  "missing": "first"
}
type, mode and missing are optional and default to
"auto", "default", and "last" respectively