
cld2 token filter

A bleve token filter which passes the text of each token to the cld2 library. The library determines the most likely language of the text, and the ISO-639 language code replaces the token term.

In normal usage, you use this with the "single" tokenizer, so there is only one input token. Further, you should precede it with the "to_lower" filter so that the input term consists of all lowercase Unicode characters.
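The flow described above can be sketched in plain Go. This is a simplified, self-contained illustration, not the package's actual API: the `Token`, `TokenStream`, and `cld2Filter` names here only mirror the shapes involved, and `detectLanguage` is a stand-in for the real call into libcld2 (it always returns "en").

```go
package main

import "fmt"

// Token and TokenStream loosely mirror the shape of bleve's analysis
// types (simplified; the real types also carry positions and offsets).
type Token struct {
	Term []byte
}
type TokenStream []*Token

// detectLanguage is a mock stand-in for the cld2 call; the real filter
// asks libcld2 for the most likely language of the text.
func detectLanguage(text []byte) string {
	return "en" // mock: always report English
}

// cld2Filter replaces each token's term with the detected ISO-639
// language code, reusing the input stream rather than allocating a
// new one.
func cld2Filter(input TokenStream) TokenStream {
	for _, token := range input {
		token.Term = []byte(detectLanguage(token.Term))
	}
	return input
}

func main() {
	// With the "single" tokenizer there is only one input token,
	// already lowercased by "to_lower".
	stream := TokenStream{{Term: []byte("the quick brown fox")}}
	out := cld2Filter(stream)
	fmt.Println(string(out[0].Term)) // prints "en"
}
```

Note that the output stream has the same length as the input, which is why the filter can reuse the input stream in place.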

Building

  1. Acquire the source to cld2 in this directory.

    $ svn checkout -r 167 http://cld2.googlecode.com/svn/trunk/ cld2-read-only
    
  2. Build cld2

    As dynamic library

    $ cd cld2-read-only/internal/
    $ ./compile_libs.sh
    $ cp *.so /usr/local/lib
    $ cd ../..
    

    Or static library

    $ ./compile_cld2.sh
    $ cp *.a /usr/local/lib
    
  3. Run the unit tests

    $ go test -v
    === RUN TestCld2Filter
    --- PASS: TestCld2Filter (0.00 seconds)
    PASS
    ok      github.com/couchbaselabs/bleve/analysis/token_filters/cld2      0.033s