this change means simple sort requirements no longer require
importing the search package (a high-level API goal). also, the
sort test at the top level was changed to use this form.
previously from JSON we would just deserialize strings like
"-abv" or "city" or "_id" or "_score" as simple sorts
on fields, ids or scores respectively
while this is simple and compact, it can be ambiguous (for
example, if you have a field starting with "-", or a field
already named "_id"). also, this simple syntax doesn't allow us
to specify more complex options to deal with type/mode/missing
we keep support for the simple string syntax, but now also
recognize a more expressive syntax like:
{
  "by": "field",
  "field": "abv",
  "desc": true,
  "type": "string",
  "mode": "min",
  "missing": "first"
}
type, mode and missing are optional and default to
"auto", "default", and "last" respectively
The syntax used is an array of strings. The strings "_id" and
"_score" are special and reserved to mean sorting on the document
id and score respectively. All other strings refer to the literal
field name with that value. If the string is prefixed with "-",
the order of that sort is descending; without it, it defaults to
ascending.
Examples:
"sort":["-abv","-_score"]
This will sort results in decreasing order of the "abv" field.
Results which have the same value of the "abv" field will then
be sorted by their score, also decreasing.
If no value for "sort" is provided in the search request, the
default sorting is the same as before: decreasing score.
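for example, the request side of this looks roughly as follows;
this is a minimal sketch that assumes the top-level SortBy helper
on SearchRequest and an already-open bleve.Index, with the bleve
package imported:

func sortedSearch(idx bleve.Index) (*bleve.SearchResult, error) {
    query := bleve.NewMatchQuery("ipa")
    req := bleve.NewSearchRequest(query)
    // decreasing abv, ties broken by decreasing score, with no
    // import of the search package required
    req.SortBy([]string{"-abv", "-_score"})
    return idx.Search(req)
}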
the motivation for this commit is long and detailed and has been
documented externally here:
https://gist.github.com/mschoch/5cc5c9cf4669a5fe8512cb7770d3c1a2
the core changes are:
1. recognize that collector/searcher need only a fixed number
of DocumentMatch instances, and this number can be determined
from the structure of the query, not the size of the data
2. knowing this, instances can be allocated in bulk, up front,
and they can be reused without locking (since all search
operations take place in a single goroutine)
3. combined with previous commits which enabled reuse of
the IndexInternalID []byte, this allows for no allocation/copy
of these bytes as well (by using the DocumentMatch Reset() method
when returning entries to the pool); a sketch of this pooling
pattern follows below
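the pooling idea, roughly sketched; the names are illustrative
stand-ins, not the exact types used by bleve:

// DocumentMatch abbreviated to the fields that matter here
type DocumentMatch struct {
    IndexInternalID []byte
    Score           float64
}

type DocumentMatchPool struct {
    avail []*DocumentMatch
}

// size is derived from the structure of the query (searchers
// involved, collector capacity), not from the size of the data
func NewDocumentMatchPool(size int) *DocumentMatchPool {
    block := make([]DocumentMatch, size) // one up-front bulk allocation
    avail := make([]*DocumentMatch, size)
    for i := range block {
        avail[i] = &block[i]
    }
    return &DocumentMatchPool{avail: avail}
}

// no locking anywhere: all search operations for a request run in
// a single goroutine
func (p *DocumentMatchPool) Get() *DocumentMatch {
    if len(p.avail) == 0 {
        return &DocumentMatch{} // estimate was too low, fall back to alloc
    }
    rv := p.avail[len(p.avail)-1]
    p.avail = p.avail[:len(p.avail)-1]
    return rv
}

// callers Reset() a match before returning it, so its
// IndexInternalID backing array can be reused by the next hit
func (p *DocumentMatchPool) Put(d *DocumentMatch) {
    p.avail = append(p.avail, d)
}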
in a recent commit, we changed the code to reuse
TermFrequencyRow objects instead of constantly allocating new
ones. unfortunately, one of the original methods was not coded
with this reuse in mind, and a lazy initialization caused us to
leak data from previous uses of the same object.
in particular this caused term vector information from previous
hits to still be applied to subsequent hits. eventually this
causes the highlighter to try and highlight invalid regions
of a slice.
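a condensed sketch of the hazard, with hypothetical names; the
decode helpers below stand in for the real row parsing:

type termFrequencyRow struct {
    freq    uint64
    vectors [][]byte
}

// placeholders standing in for the real decoding logic
func hasVectors(value []byte) bool        { return len(value) > 8 }
func decodeFreq(value []byte) uint64      { return uint64(len(value)) }
func decodeVectors(value []byte) [][]byte { return nil }

// buggy: vectors is only assigned when the new value actually
// carries vectors, so a reused row silently keeps the previous
// hit's vectors when the current one has none
func (r *termFrequencyRow) parseBuggy(value []byte) {
    r.freq = decodeFreq(value)
    if hasVectors(value) {
        r.vectors = decodeVectors(value)
    }
}

// fixed: clear the field unconditionally before repopulating it
func (r *termFrequencyRow) parse(value []byte) {
    r.freq = decodeFreq(value)
    r.vectors = nil
    if hasVectors(value) {
        r.vectors = decodeVectors(value)
    }
}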
fixes #404
the previous attempt was flawed (but masked by the Reset() method);
the new approach is to do this work in the Reset() method itself,
which is logically where it belongs.
but further, we acknowledge that the IndexInternalID []byte lifetime
extends beyond the TermFieldDoc, so another copy is made into
the DocumentMatch. Although this introduces yet another copy,
the theory being tested is that it allows each of these
structures to reuse memory without additional allocation.
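a sketch of what Reset() now does, per the description above
(field set abbreviated, not the full DocumentMatch):

type DocumentMatch struct {
    IndexInternalID []byte
    Score           float64
    // ... other per-hit fields zeroed on reuse ...
}

// Reset zeroes the match but holds on to the IndexInternalID
// backing array, truncated to zero length, so the next hit's id
// can be copied into it without a fresh allocation, e.g.
//     d.IndexInternalID = append(d.IndexInternalID, tfd.ID...)
func (d *DocumentMatch) Reset() *DocumentMatch {
    indexInternalID := d.IndexInternalID[:0]
    *d = DocumentMatch{}
    d.IndexInternalID = indexInternalID
    return d
}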
when the term field reader is copying ID values out of the
kv store's iterator, it is already attempting to reuse the
term frequency row data structure. this change allows us
to also attempt to reuse the []byte allocated for previous
copies of the docid. we reset the slice length to zero,
then copy the data into the existing slice, avoiding
new allocation and garbage collection in the cases where
there is already enough space.
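the idiom in isolation (a sketch, not the reader's literal code):

// truncating to length zero keeps the existing backing array;
// append then only allocates when that array is too small
func reuseCopy(dst, src []byte) []byte {
    return append(dst[:0], src...)
}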
instead of separate DocumentMatch/DocumentMatchInternal
the rules are simple: everything operates on the IndexInternalID
field until the results are returned, at which point ID is set
correctly. the IndexInternalID field is not exported to JSON.
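a sketch of the resulting shape (fields abbreviated; the json tags
are the relevant part):

type IndexInternalID []byte

type DocumentMatch struct {
    IndexInternalID IndexInternalID `json:"-"`     // internal only, never serialized
    ID              string          `json:"id"`    // set just before results are returned
    Score           float64         `json:"score"`
    // ... fragments, locations, stored fields elided ...
}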
IndexInternalID is now []byte
this is still opaque, and should still work for any future
index implementations, as it is a least-common-denominator
choice; all implementations must internally represent the
id as []byte at some point for storage to disk
index ids are now opaque (until finally returned to the top-level user)
- the TermFieldDoc's returned by TermFieldReader no longer contain doc id
- instead they return an opaque IndexInternalID
- items returned are still in the "natural index order"
- but that is no longer guaranteed to be "doc id order"
- correct behavior requires that they all follow the same order
- but not any particular order
- new API FinalizeDocID which converts index internal IDs to the public string ID
- APIs used internally which previously took doc id now take IndexInternalID
- that is DocumentFieldTerms() and DocumentFieldTermsForFields()
- however, APIs that are used externally do not reflect this change
- that is Document()
- DocumentIDReader follows the same changes, but this is less obvious
- behavior clarified, used to iterate doc ids, BUT NOT in doc id order
- method STILL available to iterate doc ids in range
- but again, you won't get them in any meaningful order
- new method to iterate actual doc ids from list of possible ids
- this was introduced to make the DocIDSearcher continue working
searchers now work with the new opaque index internal doc ids
- they return new DocumentMatchInternal (which does not have string ID)
scorers also work with these opaque index internal doc ids
- they return DocumentMatchInternal (which does not have string ID)
collectors now also perform a final step of converting the final result
- they STILL return traditional DocumentMatch (with string ID)
- but they now also require an IndexReader (so that they can do the conversion)
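a condensed sketch of that final conversion step; FinalizeDocID is
the new API named above, while the surrounding names and the exact
signature are assumptions for illustration:

// the only part of the reader this step needs
type IndexReader interface {
    FinalizeDocID(id []byte) (string, error)
    // ... other methods elided ...
}

type DocumentMatchInternal struct {
    ID    []byte
    Score float64
}

type DocumentMatch struct {
    ID    string
    Score float64
}

// performed once per returned hit, after collection is complete
func finalize(reader IndexReader, in *DocumentMatchInternal) (*DocumentMatch, error) {
    publicID, err := reader.FinalizeDocID(in.ID)
    if err != nil {
        return nil, err
    }
    return &DocumentMatch{ID: publicID, Score: in.Score}, nil
}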
at the time you create the term field reader, you can specify
that you don't need the term freq, the norm, or the term vectors
in that case, the index implementation can choose to not return
them in its subsequently returned values
this is advisory only; some simple implementations may ignore this
and continue to return the values anyway (as the current impl of
upside_down does today)
this change will allow future index implementations the
opportunity to do less work when it isn't required
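a sketch of the extended reader construction; the exact parameter
list and grouping are assumptions based on this description:

type TermFieldReader interface {
    // Next, Advance, Count, Close elided
}

type IndexReader interface {
    // includeFreq, includeNorm and includeTermVectors are advisory;
    // an implementation is free to keep returning the values anyway
    TermFieldReader(term []byte, field string,
        includeFreq, includeNorm, includeTermVectors bool) (TermFieldReader, error)
    // ... other methods elided ...
}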
the default configuration, which sets the default kv engine
to boltdb, is now done in a file protected with the !appengine
build tag. this at least lets the analysis-wizard app
run locally in the appengine simulator.
this still has not been tested on the real appengine, and further
changes may be required.
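a sketch of the build-constrained default; the file name is
illustrative and the config field and import path are assumptions
based on this description:

// config_disk.go

// +build !appengine

package bleve

import "github.com/blevesearch/bleve/index/store/boltdb"

func init() {
    // only compiled outside appengine, where boltdb (and disk
    // access) is available
    Config.DefaultKVStore = boltdb.Name
}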
The UpsideDownCouchTermFieldReader.Next() only needs the doc ID from
the key, so this change provides a specialized parseKDoc() method for
that optimization.
Additionally, fields in various structs are more 64-bit aligned, in an
attempt to reduce the invocations of runtime.typedmemmove() and
runtime.heapBitsBulkBarrier(), which the go compiler seems to
automatically insert to transparently handle misaligned data.
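An illustrative example of the kind of reordering involved (not the
actual bleve structs):

// before: the uint64 can end up only 4-byte aligned on 32-bit
// platforms, and small scalars are interleaved with larger fields
type rowBefore struct {
    field uint16
    freq  uint64
    term  []byte
    norm  float32
}

// after: 64-bit and pointer-sized fields first, small scalars last
type rowAfter struct {
    freq  uint64
    term  []byte
    norm  float32
    field uint16
}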
Previously, the PrefixIterator() for moss was implemented by comparing
the prefix bytes on every Next().
With this optimization, the next larger endKeyExclusive is computed at
the iterator's initialization, which allows us to avoid all those
prefix comparisons.
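A sketch of computing such an exclusive end key for a prefix scan
(names are assumptions, not moss's actual code):

// take the prefix, increment its last byte that is not 0xff, and
// truncate everything after it; if every byte is 0xff there is no
// finite upper bound
func prefixEndKeyExclusive(prefix []byte) []byte {
    end := append([]byte(nil), prefix...)
    for i := len(end) - 1; i >= 0; i-- {
        if end[i] < 0xff {
            end[i]++
            return end[:i+1]
        }
    }
    return nil // scan to the end of the keyspace
}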
This optimization changes the index.TermFieldReader.Next() interface
API, adding an optional, pre-allocated *TermFieldDoc parameter, which
can help prevent garbage creation.
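A sketch of the calling pattern this enables, assuming the bleve
index package import; the loop body is illustrative:

func consumeAll(reader index.TermFieldReader) error {
    var reuse index.TermFieldDoc
    // when the pre-allocated parameter is non-nil, it is filled in
    // place and returned
    tfd, err := reader.Next(&reuse)
    for err == nil && tfd != nil {
        // tfd aliases reuse, so copy out anything that must survive
        // the next iteration
        tfd, err = reader.Next(&reuse)
    }
    return err
}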
This optimization changes the search.Search.Next() interface API,
adding an optional, pre-allocated *DocumentMatch parameter.
When it's non-nil, the TermSearcher and TermQueryScorer will use that
pre-allocated *DocumentMatch, instead of allocating a brand new
DocumentMatch instance.
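A condensed sketch of that convention (abbreviated types, not the
scorer's actual code):

type DocumentMatch struct {
    IndexInternalID []byte
    Score           float64
}

// use the caller's pre-allocated match when given, otherwise allocate
func scoreTerm(preAlloced *DocumentMatch, id []byte, score float64) *DocumentMatch {
    rv := preAlloced
    if rv == nil {
        rv = &DocumentMatch{}
    }
    rv.IndexInternalID = append(rv.IndexInternalID[:0], id...)
    rv.Score = score
    return rv
}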
Before this change, upside down's reader would alloc a new
TermFrequencyRow on every Next(), which would be immediately
transformed into an index.TermFieldDoc{}. This change reuses a
pre-allocated TermFrequencyRow that's a field in the reader.
From some bleve-query perf profiling, term field vectors appeared to
be alloc'ed, which was unnecessary as term field vectors are disabled
in the bleve-blast/bleve-query tests.