Commit Graph

3199 Commits

Author SHA1 Message Date
Matthew Honnibal
7e2cdc0c81 Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-22 12:39:34 +02:00
Matthew Honnibal
70a8c531cd Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-22 05:39:18 -05:00
Matthew Honnibal
2f78413a02 PseudoProjectivity->nonproj 2017-05-22 05:39:03 -05:00
Matthew Honnibal
89ebc5c3cd Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-22 12:38:15 +02:00
Matthew Honnibal
d8bb5bb959 Implement StringStore serialization, and update tests 2017-05-22 12:38:00 +02:00
ines
54f04a9fe0 Update API docs with changes in spacy.gold and spacy.language 2017-05-22 12:29:30 +02:00
ines
b5fb43fdd8 Allow sys.exit status as exits keyword arg in util.prints() 2017-05-22 12:29:15 +02:00
ines
fc3ec733ea Reduce complexity in CLI
Remove now redundant model command and move plac annotations to cli files
2017-05-22 12:28:58 +02:00
Matthew Honnibal
b45b4aa392 PseudoProjectivity --> nonproj 2017-05-22 05:17:44 -05:00
Matthew Honnibal
aae97f00e9 Fix nonproj import 2017-05-22 05:15:06 -05:00
Matthew Honnibal
9262fc4829 Fix syntax error 2017-05-22 05:14:59 -05:00
Matthew Honnibal
93a042253b Make GoldParse attributes writeable 2017-05-22 04:51:08 -05:00
Matthew Honnibal
2a5eb9f61e Make nonproj methods top-level functions, instead of class methods 2017-05-22 04:51:08 -05:00
Matthew Honnibal
c998776c25 Make single array for features, to reduce GPU copies 2017-05-22 04:51:08 -05:00
Matthew Honnibal
bc2294d7f1 Add support for fiddly hyper-parameters to train func 2017-05-22 04:51:08 -05:00
Matthew Honnibal
80e19a2399 Simplify CLI implementation for subcommands. Remove model command. 2017-05-22 04:51:08 -05:00
Matthew Honnibal
33e2222839 Remove unused code in deprojectivize 2017-05-22 04:51:08 -05:00
Matthew Honnibal
4e0988605a Pass through non-projective=True 2017-05-22 04:51:08 -05:00
Matthew Honnibal
025d9bbc37 Fix handling of non-projective deps 2017-05-22 04:51:08 -05:00
Matthew Honnibal
5738d373d5 Add deprojectivize to pipeline 2017-05-22 04:51:08 -05:00
Matthew Honnibal
1b5fa68996 Do pseudo-projective pre-processing for parser 2017-05-22 04:51:08 -05:00
Matthew Honnibal
1d5d9838a2 Fix action collection for parser 2017-05-22 04:51:08 -05:00
Matthew Honnibal
8d1e64be69 Add experimental NeuralLabeller 2017-05-22 04:51:08 -05:00
Matthew Honnibal
9b1b0742fd Fix prediction for tok2vec 2017-05-22 04:51:08 -05:00
Matthew Honnibal
f13d6c7359 Support gold preprocessing and single gold files 2017-05-22 04:51:08 -05:00
Matthew Honnibal
e14533757b Use averaged params for evaluation 2017-05-22 04:51:08 -05:00
Matthew Honnibal
7811d97339 Refactor CLI 2017-05-22 04:51:08 -05:00
Matthew Honnibal
5db89053aa Merge docstrings 2017-05-21 13:46:23 -05:00
Matthew Honnibal
432b3499b3 Fix memory leak 2017-05-21 13:38:46 -05:00
Matthew Honnibal
59fbfb3829 Remove train.py -- functions now in GoldCorpus and Language 2017-05-21 09:08:27 -05:00
Matthew Honnibal
8904814c0e Add missing import 2017-05-21 09:07:56 -05:00
Matthew Honnibal
baf3ef0ddc Remove import of removed train_config script 2017-05-21 09:07:34 -05:00
Matthew Honnibal
4c9202249d Refactor training, to fix memory leak 2017-05-21 09:07:06 -05:00
Matthew Honnibal
4803b3b69e Add GoldCorpus class, to manage data streaming 2017-05-21 09:06:17 -05:00
Matthew Honnibal
180e5afede Fix tokvecs flattening in pipeline 2017-05-21 09:05:34 -05:00
Matthew Honnibal
0731971bfc Add itershuffle utility function. Maybe belongs in thinc 2017-05-21 09:05:05 -05:00
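The itershuffle utility mentioned above buffers a stream and shuffles it approximately without loading everything into memory. A minimal sketch of what such a helper might look like (the signature and behaviour here are assumptions for illustration, not the actual spaCy/thinc implementation):

```python
import random


def itershuffle(iterable, bufsize=1000):
    """Approximately shuffle an iterator using a fixed-size buffer.

    Items are read into a buffer of at most `bufsize` elements; whenever
    the buffer fills, its contents are yielded in random order. This gives
    a streaming, memory-bounded shuffle rather than a perfect one.
    """
    buf = []
    for item in iterable:
        buf.append(item)
        if len(buf) >= bufsize:
            random.shuffle(buf)
            yield from buf
            buf = []
    # Flush whatever remains at the end of the stream.
    random.shuffle(buf)
    yield from buf
```

The trade-off is that items can only move within roughly `bufsize` positions of where they started, which is usually good enough for shuffling training examples.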
ines
2c5cfe8bbf Update docstrings and API docs for StringStore 2017-05-21 14:18:58 +02:00
ines
251346b59f Fix typos and formatting 2017-05-21 14:18:46 +02:00
ines
075f5ff87a Update docstrings and API docs for GoldParse 2017-05-21 13:53:46 +02:00
ines
99b631617d Reformat docstrings 2017-05-21 13:32:15 +02:00
ines
885e82c9b0 Update docstrings and remove deprecated load classmethod 2017-05-21 13:27:52 +02:00
ines
c5a653fa48 Update docstrings and API docs for Tokenizer 2017-05-21 13:18:14 +02:00
ines
f216422ac5 Remove deprecated load classmethod 2017-05-21 13:18:01 +02:00
ines
d82ae9a585 Change "function" to "callable" in docs 2017-05-21 13:17:40 +02:00
ines
3871157d84 Update spacy.util documentation 2017-05-21 01:12:09 +02:00
ines
0c6c65aa3c Improve messaging if model linking fails after download 2017-05-21 00:28:37 +02:00
Matthew Honnibal
3b7c108246 Pass tokvecs through as a list, instead of concatenated. Also fix padding 2017-05-20 13:23:32 -05:00
ines
924e8506de Move Defaults subclass to module scope (necessary for pickling) 2017-05-20 19:02:27 +02:00
Matthew Honnibal
d52b65aec2 Revert "Move to contiguous buffer for token_ids and d_vectors"
This reverts commit 3ff8c35a79.
2017-05-20 11:26:23 -05:00
ines
27de0834b2 Update docstrings and API docs for Lexeme 2017-05-20 15:13:42 +02:00
ines
7ed8a92ed1 Update docstrings and API docs for Token 2017-05-20 15:13:33 +02:00
ines
4ed6a36622 Update docstrings and API docs for Matcher 2017-05-20 14:43:10 +02:00
ines
39f36539f6 Update docstrings and API docs for Matcher 2017-05-20 14:32:34 +02:00
ines
c00ff257be Update docstrings and API docs for Matcher 2017-05-20 14:26:10 +02:00
ines
790435e51c Update docstrings 2017-05-20 14:05:07 +02:00
ines
f0cc642bb9 Update docstrings and API docs for Vocab 2017-05-20 14:00:41 +02:00
Matthew Honnibal
ce9234f593 Update Matcher API 2017-05-20 13:54:53 +02:00
Matthew Honnibal
b272890a8c Try to move parser to simpler PrecomputedAffine class. Currently broken -- maybe the previous change 2017-05-20 06:40:10 -05:00
ines
e39ad78267 Resolve model name properly in cli.info
Use util.resolve_model_path() to also allow package names and paths.
2017-05-20 12:24:40 +02:00
Matthew Honnibal
3ff8c35a79 Move to contiguous buffer for token_ids and d_vectors 2017-05-20 04:17:30 -05:00
Matthew Honnibal
8b04b0af9f Remove freqs from transition_system 2017-05-20 02:20:48 -05:00
Matthew Honnibal
61fe55efba Move EnglishDefaults class out of English 2017-05-20 02:18:19 -05:00
Matthew Honnibal
a1ba20e2b1 Fix over-run on parse_batch 2017-05-19 18:57:30 -05:00
ines
1d4d3d0ecd Add TODO 2017-05-20 01:38:04 +02:00
Matthew Honnibal
7ee1827af0 Disable data caching in parser 2017-05-19 18:17:11 -05:00
Matthew Honnibal
e84de028b5 Remove 'rebatch' op, and remove min-batch cap 2017-05-19 18:16:36 -05:00
Matthew Honnibal
3376d4d6e8 Update the train script, fixing GPU memory leak 2017-05-19 18:15:50 -05:00
Matthew Honnibal
836fe1d880 Update neural net tests 2017-05-19 18:11:29 -05:00
ines
fe5d8819ea Update Matcher docstrings and API docs 2017-05-19 21:47:06 +02:00
Matthew Honnibal
08766240c3 Add incomplete iob converter 2017-05-19 13:27:51 -05:00
Matthew Honnibal
c12ab47a56 Remove state argument in pipeline. Other changes 2017-05-19 13:26:36 -05:00
Matthew Honnibal
66ea9aebe7 Remove the state argument from Language 2017-05-19 13:25:42 -05:00
Matthew Honnibal
09a877886b WIP on iob converter 2017-05-19 13:24:39 -05:00
ines
a804045597 Use is_ancestor instead of deprecated is_ancestor_of 2017-05-19 20:23:40 +02:00
Matthew Honnibal
8d5e6d9f4f Rename no_ner arg to no_entities 2017-05-19 13:23:11 -05:00
ines
e9e62b01b0 Update docstrings and API docs for Token 2017-05-19 18:47:56 +02:00
ines
62ceec4fc6 Update docstrings and API docs for Span 2017-05-19 18:47:46 +02:00
ines
23f9a3ccc8 Update docstrings and API docs for Doc 2017-05-19 18:47:39 +02:00
ines
2c8c9dc0c9 Update docstrings and API docs for Language 2017-05-19 18:47:24 +02:00
ines
0791f0aae6 Update docstrings and API docs for Span class 2017-05-19 00:31:31 +02:00
ines
8455cb1327 Update docstring for Doc.__getitem__ 2017-05-19 00:30:51 +02:00
ines
0fc05e54e4 Document TokenVectorEncoder 2017-05-19 00:00:02 +02:00
ines
b687ad109d Update docstrings and API docs for Doc class 2017-05-18 23:59:44 +02:00
ines
d42bc16868 Update docstrings and API docs for Language class 2017-05-18 23:57:38 +02:00
ines
593361ee3c Update docstrings for Span class 2017-05-18 22:17:41 +02:00
ines
b87066ff10 Update docstrings and API docs for Doc class 2017-05-18 22:17:41 +02:00
Matthew Honnibal
238be0f16a Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-18 08:32:22 -05:00
Matthew Honnibal
c214c0decb Improve env_opt reporting 2017-05-18 08:32:03 -05:00
Matthew Honnibal
bbb59e371c Fix GPU evaluation 2017-05-18 08:31:15 -05:00
Matthew Honnibal
c2c825127a Fix use_params and pipe methods 2017-05-18 08:30:59 -05:00
Matthew Honnibal
ca70b08661 Fix GPU training and evaluation 2017-05-18 08:30:33 -05:00
ines
489d2fb4ba Add is_in_jupyter() helper for displaCy (see #1058) 2017-05-18 14:13:14 +02:00
ines
abf0188b0a Move cupy and CudaStream to compat 2017-05-18 14:12:45 +02:00
ines
33decd85b6 Reorganise and explicitly state what's importable 2017-05-18 14:12:31 +02:00
Matthew Honnibal
a438cef8c5 Fix significant bug in feature calculation -- off by 1 2017-05-18 06:21:32 -05:00
Matthew Honnibal
fc8d3a112c Add util.env_opt support: Can set hyper params through environment variables. 2017-05-18 04:36:53 -05:00
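The env_opt helper lets hyper-parameters be overridden through environment variables without touching the CLI. A hedged sketch of the idea (the naming convention and type-coercion rules here are assumptions, not spaCy's exact implementation):

```python
import os


def env_opt(name, default=None):
    """Read a hyper-parameter override from the environment.

    Looks the option up under an upper-cased name and coerces the value
    to the type of the default, so e.g. a learn-rate override stays a
    float rather than a string.
    """
    key = name.upper()
    if key not in os.environ:
        return default
    value = os.environ[key]
    if isinstance(default, bool):  # check bool before int: bool is an int subclass
        return value.lower() in ("1", "true", "yes")
    if isinstance(default, int):
        return int(value)
    if isinstance(default, float):
        return float(value)
    return value
```

With this pattern, `LEARN_RATE=0.001 spacy train ...` would override the default without any new command-line flags.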
Matthew Honnibal
d2626fdb45 Fix name error in nn parser 2017-05-18 04:31:01 -05:00
Matthew Honnibal
b460533827 Bug fixes to pipeline 2017-05-18 04:29:51 -05:00
Matthew Honnibal
8815507f8e Move SpanishDefaults out of Language class, for pickle 2017-05-18 04:28:51 -05:00
Matthew Honnibal
2713041571 Fix GPU usage in Language 2017-05-18 04:25:19 -05:00
Matthew Honnibal
711ad5edc4 Cache features in doc2feats 2017-05-18 04:22:20 -05:00
Matthew Honnibal
39ea38c4b1 Add option to use gpu to spacy train 2017-05-18 04:21:49 -05:00
Matthew Honnibal
a1d8e420b5 Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-17 08:00:04 -05:00
Matthew Honnibal
edfea3a513 Fix progress bar 2017-05-17 14:59:37 +02:00
Matthew Honnibal
0b7fd67408 Fix style check in displacy 2017-05-17 07:57:24 -05:00
Matthew Honnibal
55dab77de8 Add conversion rule for .conll 2017-05-17 13:13:48 +02:00
Matthew Honnibal
692bd2a186 Bug fix to tagger: wasn't backpropagating to token vectors 2017-05-17 13:13:14 +02:00
Matthew Honnibal
877f83807f Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-17 12:09:29 +02:00
Matthew Honnibal
793430aa7a Get spaCy train command working with neural network
* Integrate models into pipeline
* Add basic serialization (maybe incorrect)
* Fix pickle on vocab
2017-05-17 12:04:50 +02:00
Matthew Honnibal
3bf4a28d8d Use tag in CoNLL converter, not POS 2017-05-17 12:04:33 +02:00
ines
1a05078c79 Add language-specific syntax iterators to en and de 2017-05-17 12:04:03 +02:00
Matthew Honnibal
c9a5d5d24b Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-16 16:22:05 +02:00
Matthew Honnibal
8cf097ca88 Redesign training to integrate NN components
* Obsolete .parser, .entity etc names in favour of .pipeline
* Components no longer create models on initialization
* Models created by loading method (from_disk(), from_bytes() etc), or
    .begin_training()
* Add .predict(), .set_annotations() methods in components
* Pass state through pipeline, to allow components to share information
    more flexibly.
2017-05-16 16:17:30 +02:00
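The component contract described in this commit (models created lazily in begin_training or a loading method, and prediction split from annotation) can be sketched roughly as follows. The class and method bodies are illustrative assumptions, not the actual spaCy code:

```python
class PipelineComponent:
    """Illustrative skeleton of the redesigned component contract."""

    def __init__(self):
        # No model is created at construction time.
        self.model = None

    def begin_training(self, get_examples=None):
        # The model is only created here, or by from_disk()/from_bytes().
        self.model = self.Model()

    def Model(self):
        # Placeholder factory; a real component builds a network here.
        return object()

    def predict(self, docs):
        # Pure prediction: compute scores without touching the docs.
        return [0.0 for _ in docs]

    def set_annotations(self, docs, scores):
        # Separately write the predictions back onto the docs
        # (plain lists stand in for Doc objects in this sketch).
        for doc, score in zip(docs, scores):
            doc.append(score)

    def __call__(self, docs):
        scores = self.predict(docs)
        self.set_annotations(docs, scores)
        return docs
```

Splitting predict() from set_annotations() is what makes batched and asynchronous prediction possible: scores can be computed on one device and applied to the documents later.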
Matthew Honnibal
221b4c1ee8 Fix test for Python 3 2017-05-16 13:06:30 +02:00
Matthew Honnibal
5211645af3 Get data flowing through pipeline. Needs redesign 2017-05-16 11:21:59 +02:00
Matthew Honnibal
1d7c18e58a Merge branch 'develop' of https://github.com/explosion/spaCy into develop 2017-05-15 21:53:47 +02:00
Matthew Honnibal
a9edb3aa1d Improve integration of NN parser, to support unified training API 2017-05-15 21:53:27 +02:00
ines
98354be150 Only get user_data if it exists on doc 2017-05-15 13:39:47 +02:00
ines
c33bdeb564 Use uppercase for entity types 2017-05-15 01:24:57 +02:00
ines
4aaa607b8d Add xmlns:xlink so SVGs are rendered properly as individual files 2017-05-14 19:54:13 +02:00
ines
9dd13cd76a Update docstrings 2017-05-14 19:30:47 +02:00
ines
a04550605a Add Jupyter notebook support (see #1058) 2017-05-14 18:39:01 +02:00
ines
c31792aaec Add displaCy visualisers (see #1058) 2017-05-14 17:50:23 +02:00
ines
b462076d80 Merge load_lang_class and get_lang_class 2017-05-14 01:31:10 +02:00
ines
36bebe7164 Update docstrings 2017-05-14 01:30:29 +02:00
Matthew Honnibal
4b9d69f428 Merge branch 'v2' into develop
* Move v2 parser into nn_parser.pyx
* New TokenVectorEncoder class in pipeline.pyx
* New spacy/_ml.py module

Currently the two parsers live side-by-side, until we figure out how to
organize them.
2017-05-14 01:10:23 +02:00
Matthew Honnibal
5cac951a16 Move new parser to nn_parser.pyx, and restore old parser, to make tests pass. 2017-05-14 00:55:01 +02:00
Matthew Honnibal
f8c02b4341 Remove cupy imports from parser, so it can work on CPU 2017-05-14 00:37:53 +02:00
Matthew Honnibal
613ba79e2e Fiddle with sizings for parser 2017-05-13 17:20:23 -05:00
Matthew Honnibal
e6d71e1778 Small fixes to parser 2017-05-13 17:19:04 -05:00
Matthew Honnibal
188c0f6949 Clean up unused import 2017-05-13 17:18:27 -05:00
Matthew Honnibal
f85c8464f7 Draft support of regression loss in parser 2017-05-13 17:17:27 -05:00
ines
1694c24e52 Add docstrings, error messages and fix consistency 2017-05-13 21:22:49 +02:00
ines
ee7dcf65c9 Fix expand_exc to make sure it returns combined dict 2017-05-13 21:22:25 +02:00
ines
824d09bb74 Move resolve_load_name to deprecated 2017-05-13 21:21:47 +02:00
ines
a4a37a783e Remove import from non-existing module 2017-05-13 16:00:09 +02:00
ines
5858857a78 Update languages list in conftest 2017-05-13 15:37:54 +02:00
ines
9d85cda8e4 Fix models error message and use about.__docs_models__ (see #1051) 2017-05-13 13:05:47 +02:00
ines
6b942763f0 Tidy up imports 2017-05-13 13:04:40 +02:00
ines
8c2a0c026d Fix parse_tree test 2017-05-13 12:32:45 +02:00
ines
6129016e15 Replace deepcopy 2017-05-13 12:32:37 +02:00
ines
df68bf45ce Set defaults for light and flat kwargs 2017-05-13 12:32:23 +02:00
ines
b9dea345e5 Remove old import 2017-05-13 12:32:11 +02:00
ines
293ee359c5 Fix formatting 2017-05-13 12:32:06 +02:00
ines
4eefb288e3 Port over PR #1055 2017-05-13 03:25:32 +02:00
Matthew Honnibal
ee1d35bdb0 Fix merge conflict 2017-05-13 03:20:19 +02:00
Matthew Honnibal
b2540d2379 Merge Kengz's tree_print patch 2017-05-13 03:18:49 +02:00
Matthew Honnibal
827b5af697 Update draft of parser neural network model
Model is good, but code is messy. Currently requires Chainer, which may cause the build to fail on machines without a GPU.

Outline of the model:

We first predict context-sensitive vectors for each word in the input:

(embed_lower | embed_prefix | embed_suffix | embed_shape)
>> Maxout(token_width)
>> convolution ** 4

This convolutional layer is shared between the tagger and the parser. This prevents the parser from needing tag features.
To boost the representation, we make a "super tag" with POS, morphology and dependency label. The tagger predicts this
by adding a softmax layer onto the convolutional layer --- so, we're teaching the convolutional layer to give us a
representation that's one affine transform from this informative lexical information. This is obviously good for the
parser (which backprops to the convolutions too).
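The shared-trunk idea above can be reduced to a toy sketch: one trunk producing context-sensitive vectors, with a softmax "super tag" head stacked on top so that tag supervision also shapes the representation the parser consumes. Everything below (shapes, a single dense layer standing in for the embeddings and the four convolutions) is an illustrative assumption, not the real model:

```python
import numpy as np

rng = np.random.default_rng(0)
width = 8  # token vector width (the commit reports 128 working well)

# Shared trunk: stand-in for (embed | prefix | suffix | shape) >> Maxout >> conv**4.
W_trunk = rng.standard_normal((width, width))


def trunk(word_vectors):
    # Produces the context-sensitive vectors both tagger and parser read.
    return np.maximum(word_vectors @ W_trunk, 0.0)


# Tagger head: softmax over "super tags" (POS + morphology + dep label),
# one affine transform away from the shared representation.
n_tags = 5
W_tag = rng.standard_normal((width, n_tags))


def tag_probs(word_vectors):
    logits = trunk(word_vectors) @ W_tag
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)
```

Because the tag loss backpropagates through `trunk`, the trunk is pushed to encode the lexical information the parser would otherwise need tag features for.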

The parser model makes a state vector by concatenating the vector representations for its context tokens. Current
results suggest few context tokens work well. Maybe this is a bug.

The current context tokens:

* S0, S1, S2: Top three words on the stack
* B0, B1: First two words of the buffer
* S0L1, S0L2: Leftmost and second leftmost children of S0
* S0R1, S0R2: Rightmost and second rightmost children of S0
* S1L1, S1L2, S1R1, S1R2, B0L1, B0L2: Likewise for S1 and B0

This makes the state vector quite long: 13*T, where T is the token vector width (128 is working well). Fortunately,
there's a way to structure the computation to save some expense (and make it more GPU friendly).

The parser typically visits 2*N states for a sentence of length N (although it may visit more, if it back-tracks
with a non-monotonic transition). A naive implementation would require 2*N (B, 13*T) @ (13*T, H) matrix multiplications
for a batch of size B. We can instead perform one (B*N, T) @ (T, 13*H) multiplication, to pre-compute the hidden
weights for each positional feature wrt the words in the batch. (Note that our token vectors come from the CNN
-- so we can't play this trick over the vocabulary. That's how Stanford's NN parser works --- and why its model
is so big.)
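The equivalence claimed here is easy to check numerically. Below is a small NumPy sketch (shapes and names are illustrative) showing that the naive per-state (13*T) @ (13*T, H) multiplication and the precomputed gather-and-sum give the same hidden vector:

```python
import numpy as np

rng = np.random.default_rng(0)

N, T, F, H = 6, 4, 13, 5               # words, token width, feature slots, hidden size
tokens = rng.standard_normal((N, T))   # CNN output: one vector per word
W = rng.standard_normal((F, T, H))     # one weight slice per positional feature


def naive_hidden(feature_ids):
    # Build the 13*T state vector for one state, then multiply.
    state_vec = tokens[feature_ids].reshape(F * T)
    return state_vec @ W.reshape(F * T, H)


# Precompute: one big (N, T) @ (T, F*H) multiplication over the batch's words...
cached = np.einsum("nt,fth->nfh", tokens, W)  # shape (N, F, H)


def precomputed_hidden(feature_ids):
    # ...after which each state's hidden layer is just a gather-and-sum.
    return cached[feature_ids, np.arange(F)].sum(axis=0)


ids = rng.integers(0, N, size=F)
assert np.allclose(naive_hidden(ids), precomputed_hidden(ids))
```

The saving comes from the fact that a sentence has N words but roughly 2*N states, and each word is reused across many feature slots and states, so precomputing per-word contributions amortises the expensive matrix multiplication.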

This pre-computation strategy allows a nice compromise between GPU-friendliness and implementation simplicity.
The CNN and the wide lower layer are computed on the GPU, and then the precomputed hidden weights are moved
to the CPU, before we start the transition-based parsing process. This makes a lot of things much easier.
We don't have to worry about variable-length batch sizes, and we don't have to implement the dynamic oracle
in CUDA to train.

Currently the parser's loss function is multilabel log loss, as the dynamic oracle allows multiple states to
be 0 cost. This is defined as:

(exp(score) / Z) - (exp(score) / gZ)

Where gZ is the sum of the scores assigned to gold classes. I'm very interested in regressing on the cost directly,
but so far this isn't working well.
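Reading the quoted expression as the per-class gradient for the zero-cost ("gold") classes, a small NumPy sketch of that multilabel log loss gradient might look like this (my own illustrative implementation, not spaCy's code):

```python
import numpy as np


def multilabel_logloss_grad(scores, is_gold):
    """Gradient of the multilabel log loss sketched above.

    Z is the partition function over all classes; gZ is the partition
    over the (possibly multiple) zero-cost classes the dynamic oracle
    allows. Gold classes get exp(score)/Z - exp(score)/gZ; the rest
    get just exp(score)/Z.
    """
    exp_scores = np.exp(scores - scores.max())  # shift for numerical stability
    Z = exp_scores.sum()
    gZ = exp_scores[is_gold].sum()
    grad = exp_scores / Z
    grad[is_gold] -= exp_scores[is_gold] / gZ
    return grad
```

With a single gold class this reduces to the familiar softmax-minus-one-hot cross-entropy gradient, and the gradient always sums to zero across classes.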

Machinery is in place for beam-search, which has been working well for the linear model. Beam search should benefit
greatly from the pre-computation trick.
2017-05-12 16:09:15 -05:00
ines
c4857bc7db Remove unused argument 2017-05-12 15:37:54 +02:00
ines
c13b3fa052 Add LEX_ATTRS 2017-05-12 15:37:45 +02:00