Previously, unicode.Tokenize() allocated Tokens one at a time, on
an as-needed basis.
This change allocates a "backing array" of Tokens, so that it goes
to the runtime object allocator much less often. It makes a
heuristic guess at the backing array size, using the average token
(segment) length seen so far.
Results from micro-benchmarks (null-firestorm, bleve-blast) suggest
a modest throughput improvement, likely under 0.5 MB/second.
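A minimal sketch of the batch-allocation pattern described above,
with an illustrative Token type and whitespace splitting standing
in for the real unicode segmenter:

    package main

    import "fmt"

    // Token is an illustrative stand-in for the analyzer's token type.
    type Token struct {
        Term  []byte
        Start int
        End   int
    }

    // tokenizeWords splits input on spaces, drawing Tokens from a
    // single backing array sized by a heuristic guess (the average
    // token length seen so far), instead of allocating each Token
    // individually.
    func tokenizeWords(input []byte, avgTokenLen int) []*Token {
        if avgTokenLen < 1 {
            avgTokenLen = 1
        }
        guess := len(input)/avgTokenLen + 1
        backing := make([]Token, guess) // one allocation, many tokens
        rv := make([]*Token, 0, guess)
        used, start := 0, 0
        for i := 0; i <= len(input); i++ {
            if i == len(input) || input[i] == ' ' {
                if i > start {
                    if used == len(backing) {
                        // the guess was low; allocate another batch
                        backing = make([]Token, guess)
                        used = 0
                    }
                    tok := &backing[used]
                    used++
                    tok.Term = input[start:i]
                    tok.Start = start
                    tok.End = i
                    rv = append(rv, tok)
                }
                start = i + 1
            }
        }
        return rv
    }

    func main() {
        for _, t := range tokenizeWords([]byte("hello bleve world"), 5) {
            fmt.Printf("%s [%d:%d]\n", t.Term, t.Start, t.End)
        }
    }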
The goal of the "web" tokenizer is to recognize web things like
- email addresses
- URLs
- twitter @handles and #hashtags
This implementation uses regexp exceptions. There will most
likely be endless debate about the regular expressions; these
were chosen as "good enough for now".
There is also a "web" analyzer. This is just the "standard"
analyzer, but using the "web" tokenizer instead of the "unicode"
one. NOTE: after processing the exceptions, it still falls back
to the standard "unicode" tokenizer.
For many users, it is enough to set the mapping's default
analyzer to "web", as sketched below.
closes #269
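A minimal sketch of that setup, assuming the top-level bleve API
(NewIndexMapping and the mapping's DefaultAnalyzer field; the index
path and document are illustrative):

    package main

    import "github.com/blevesearch/bleve"

    func main() {
        mapping := bleve.NewIndexMapping()
        // send all text fields through the "web" analyzer by default
        mapping.DefaultAnalyzer = "web"

        index, err := bleve.New("example.bleve", mapping)
        if err != nil {
            panic(err)
        }
        defer index.Close()

        // URLs, email addresses, @handles and #hashtags survive as
        // single tokens instead of being split by the unicode rules
        err = index.Index("doc1", map[string]interface{}{
            "body": "ping @blevesearch or mail info@example.com",
        })
        if err != nil {
            panic(err)
        }
    }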
name "exception"
configure with list of regexp string "exceptions"
these exceptions regexps that match sequences you want treated
as a single token. these sequences are NOT sent to the
underlying tokenizer
configure "tokenizer" is the named tokenizer that should be
used for processing all text regions not matching exceptions
An example configuration with simple patterns to match URLs and
email addresses:
    map[string]interface{}{
        "type":      "exception",
        "tokenizer": "unicode",
        "exceptions": []interface{}{
            `[hH][tT][tT][pP][sS]?://(\S)*`,
            `[fF][iI][lL][eE]://(\S)*`,
            `[fF][tT][pP]://(\S)*`,
            `\S+@\S+`,
        },
    }
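A sketch of wiring that configuration in, assuming the mapping's
AddCustomTokenizer / AddCustomAnalyzer methods; the names
"url_email_aware" and "web_like" are illustrative:

    package main

    import "github.com/blevesearch/bleve"

    func main() {
        mapping := bleve.NewIndexMapping()

        // register an exception tokenizer under a custom name
        err := mapping.AddCustomTokenizer("url_email_aware",
            map[string]interface{}{
                "type":      "exception",
                "tokenizer": "unicode",
                "exceptions": []interface{}{
                    `[hH][tT][tT][pP][sS]?://(\S)*`,
                    `\S+@\S+`,
                },
            })
        if err != nil {
            panic(err)
        }

        // build a custom analyzer on top of it
        err = mapping.AddCustomAnalyzer("web_like",
            map[string]interface{}{
                "type":          "custom",
                "tokenizer":     "url_email_aware",
                "token_filters": []interface{}{"to_lower"},
            })
        if err != nil {
            panic(err)
        }
        mapping.DefaultAnalyzer = "web_like"
    }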
many of these defaults were arbitrary, and not having
defaults lets us more easily flag them for configuration
added a shingle filter
introduced a new token type for shingles
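For illustration, a shingle filter emits word n-grams over the
token stream; a minimal sketch using plain strings rather than
bleve's analysis types:

    package main

    import (
        "fmt"
        "strings"
    )

    // shingles emits word n-grams of the given size over a token
    // stream, joined by a separator.
    func shingles(tokens []string, size int, sep string) []string {
        var rv []string
        for i := 0; i+size <= len(tokens); i++ {
            rv = append(rv, strings.Join(tokens[i:i+size], sep))
        }
        return rv
    }

    func main() {
        fmt.Println(shingles([]string{"the", "quick", "brown", "fox"}, 2, " "))
        // output: [the quick quick brown brown fox]
    }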
ultimately this makes it more convenient for us to wire up
different elements of the analysis pipeline, without having to
preload everything into memory before we need it
separately, the index layer now has a mechanism for storing
internal key/value pairs. this is expected to be used to
store the mapping, and possibly other pieces of data, by the
top layer, but it is not exposed to the user at the top.
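The kind of interface this implies, sketched here so the top layer
could persist a serialized mapping under a well-known internal key;
the method names are assumptions, not necessarily the actual bleve
index-layer API:

    package index

    // InternalStorer sketches the internal key/value mechanism
    // described above; internal keys never appear in user-visible
    // search results.
    type InternalStorer interface {
        SetInternal(key, val []byte) error
        GetInternal(key []byte) ([]byte, error)
        DeleteInternal(key []byte) error
    }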
removed analyzers (these are now built as needed through config)
removed html character filter (now built as needed through config)
added missing license header
changed constructor signature of filters that cannot return errors
filter constructors that can return errors now have a Must variant
which panics
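The Must pattern referenced here, sketched with a hypothetical
stop-words filter constructor:

    package main

    import "fmt"

    // StopWordsFilter is an illustrative filter type.
    type StopWordsFilter struct {
        words []string
    }

    // NewStopWordsFilter can fail, e.g. on empty configuration.
    func NewStopWordsFilter(words []string) (*StopWordsFilter, error) {
        if len(words) == 0 {
            return nil, fmt.Errorf("no stop words provided")
        }
        return &StopWordsFilter{words: words}, nil
    }

    // MustNewStopWordsFilter panics instead of returning an error,
    // for package-level initialization where failure is a bug.
    func MustNewStopWordsFilter(words []string) *StopWordsFilter {
        f, err := NewStopWordsFilter(words)
        if err != nil {
            panic(err)
        }
        return f
    }

    var defaultFilter = MustNewStopWordsFilter([]string{"the", "and"})

    func main() {
        fmt.Println(defaultFilter.words)
    }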
changed cld2 tokenizer into a filter (it should only see
lower-cased input)
new top level index api, closes #5
refactored index tests to not rely directly on analyzers
moved query objects to the top level
new top level search api, closes #12
top score collector allows skipping results
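A sketch of skipping results via the top-level search API, assuming
the SearchRequest Size and From fields:

    package main

    import (
        "fmt"

        "github.com/blevesearch/bleve"
    )

    func main() {
        index, err := bleve.Open("example.bleve")
        if err != nil {
            panic(err)
        }
        defer index.Close()

        req := bleve.NewSearchRequest(bleve.NewMatchQuery("bleve"))
        req.Size = 10 // return at most 10 hits
        req.From = 20 // skip the first 20 hits, i.e. page 3
        res, err := index.Search(req)
        if err != nil {
            panic(err)
        }
        fmt.Println(res)
    }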
index mapping supports _all by default, closes #3 and closes #6
index mapping supports disabled sections, closes #7
new http sub package with reusable http.Handlers, closes #22
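A heavily hedged sketch of mounting one of those handlers; the
constructor and registration names here are assumptions about the
http sub package:

    package main

    import (
        "log"
        "net/http"

        "github.com/blevesearch/bleve"
        bleveHttp "github.com/blevesearch/bleve/http"
    )

    func main() {
        index, err := bleve.Open("example.bleve")
        if err != nil {
            log.Fatal(err)
        }

        // assumed registration and constructor names
        bleveHttp.RegisterIndexName("example", index)
        searchHandler := bleveHttp.NewSearchHandler("example")
        http.Handle("/api/search", searchHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }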