From e09f1347fa55fa056346aa21de0024db9235aef5 Mon Sep 17 00:00:00 2001 From: Roshni Biswas Date: Mon, 18 Feb 2019 01:02:28 -0800 Subject: [PATCH 01/10] updates for Bengali language (#3286) * Update morph_rules.py * contributor agreement for roshni-b * created example sentences --- .github/contributors/roshni-b.md | 107 +++++++++++++++++++++++++++++++ spacy/lang/bn/examples.py | 17 +++++ spacy/lang/bn/morph_rules.py | 2 + 3 files changed, 126 insertions(+) create mode 100644 .github/contributors/roshni-b.md create mode 100644 spacy/lang/bn/examples.py diff --git a/.github/contributors/roshni-b.md b/.github/contributors/roshni-b.md new file mode 100644 index 000000000..1b967aefb --- /dev/null +++ b/.github/contributors/roshni-b.md @@ -0,0 +1,107 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. 
The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+    * you hereby assign to us joint ownership, and to the extent that such
+    assignment is or becomes invalid, ineffective or unenforceable, you hereby
+    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+    royalty-free, unrestricted license to exercise all rights under those
+    copyrights. This includes, at our option, the right to sublicense these same
+    rights to third parties through multiple levels of sublicensees or other
+    licensing arrangements;
+
+    * you agree that each of us can do all things in relation to your
+    contribution as if each of us were the sole owners, and if one of us makes
+    a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+    * you agree that you will not assert any moral rights in your contribution
+    against us, our licensees or transferees;
+
+    * you agree that we may register a copyright in your contribution and
+    exercise all ownership rights associated with it; and
+
+    * you agree that neither of us has any duty to consult with, obtain the
+    consent of, pay or render an accounting to the other for any use or
+    distribution of your contribution.
+
+3. 
With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+    * make, have made, use, sell, offer to sell, import, and otherwise transfer
+    your contribution in whole or in part, alone or in combination with or
+    included in any product, work or materials arising out of the project to
+    which your contribution was submitted, and
+
+    * at our option, to sublicense these same rights to third parties through
+    multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+    * Each contribution that you submit is and shall be an original work of
+    authorship and you can legally grant the rights set out in this SCA;
+
+    * to the best of your knowledge, each contribution will not violate any
+    third party's copyrights, trademarks, patents, or other intellectual
+    property rights; and
+
+    * each contribution shall be in compliance with U.S. export control laws and
+    other applicable export and import laws. You agree to notify us if you
+    become aware of any circumstance which would make any of the foregoing
+    representations inaccurate in any respect. We may publicly disclose your
+    participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. 
Please do NOT
+mark both statements:
+
+    * [x] I am signing on behalf of myself as an individual and no other person
+    or entity, including my employer, has or will have rights with respect to my
+    contributions.
+
+    * [ ] I am signing on behalf of my employer or a legal entity and I have the
+    actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field                          | Entry                |
+|------------------------------- | -------------------- |
+| Name                           | Roshni Biswas        |
+| Company name (if applicable)   |                      |
+| Title or role (if applicable)  |                      |
+| Date                           | 02-17-2019           |
+| GitHub username                | roshni-b             |
+| Website (optional)             |                      |
+
diff --git a/spacy/lang/bn/examples.py b/spacy/lang/bn/examples.py
new file mode 100644
index 000000000..9318fe81e
--- /dev/null
+++ b/spacy/lang/bn/examples.py
@@ -0,0 +1,17 @@
+# coding: utf8
+from __future__ import unicode_literals
+
+
+"""
+Example sentences to test spaCy and its language models.
+
+>>> from spacy.lang.bn.examples import sentences
+>>> docs = nlp.pipe(sentences)
+"""
+
+
+sentences = [
+    'তুই খুব ভালো',
+    'আজ আমরা ডাক্তার দেখতে যাবো',
+    'আমি জানি না'
+]
diff --git a/spacy/lang/bn/morph_rules.py b/spacy/lang/bn/morph_rules.py
index 5b0ac2c4b..402593349 100644
--- a/spacy/lang/bn/morph_rules.py
+++ b/spacy/lang/bn/morph_rules.py
@@ -51,6 +51,8 @@ MORPH_RULES = {
             'Case': 'Nom'},
     'তার': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Three', 'PronType': 'Prs', 'Poss': 'Yes',
             'Case': 'Nom'},
+    'তাহার': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Three', 'PronType': 'Prs', 'Poss': 'Yes',
+            'Case': 'Nom'},
     'তোমাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Poss': 'Yes',
             'Case': 'Nom'},
     'আমাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'One', 'PronType': 'Prs', 'Poss': 'Yes',

From c5476bd75b4591d23fbecc5a45022bc268f459e5 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 18 Feb 2019 10:03:35 +0100
Subject: [PATCH 02/10] Update languages.json

---
 website/meta/languages.json | 
2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/website/meta/languages.json b/website/meta/languages.json
index ed3f087df..600500c7a 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -102,7 +102,7 @@
     { "code": "te", "name": "Telugu", "example": "ఇది ఒక వాక్యం.", "has_examples": true },
     { "code": "si", "name": "Sinhala", "example": "මෙය වාක්‍යයකි.", "has_examples": true },
     { "code": "ga", "name": "Irish" },
-    { "code": "bn", "name": "Bengali" },
+    { "code": "bn", "name": "Bengali", "has_examples": true },
     { "code": "hi", "name": "Hindi", "example": "यह एक वाक्य है।", "has_examples": true },
     { "code": "kn", "name": "Kannada" },
     { "code": "ta", "name": "Tamil", "has_examples": true },

From c32290557ff3732134ae210e826adc2f7a1cd285 Mon Sep 17 00:00:00 2001
From: Ines Montani
Date: Mon, 18 Feb 2019 10:59:31 +0100
Subject: [PATCH 03/10] Add xfailing test for #3288

---
 spacy/tests/regression/test_issue3288.py | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)
 create mode 100644 spacy/tests/regression/test_issue3288.py

diff --git a/spacy/tests/regression/test_issue3288.py b/spacy/tests/regression/test_issue3288.py
new file mode 100644
index 000000000..d17dec971
--- /dev/null
+++ b/spacy/tests/regression/test_issue3288.py
@@ -0,0 +1,20 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import pytest
+import numpy
+from spacy import displacy
+
+from ..util import get_doc
+
+
+@pytest.mark.xfail
+def test_issue3288(en_vocab):
+    """Test that retokenization works correctly via displaCy when punctuation
+    is merged onto the preceding token and tensor is resized."""
+    words = ["Hello", "World", "!", "When", "is", "this", "breaking", "?"]
+    heads = [1, 0, -1, 1, 0, 1, -2, -3]
+    deps = ["intj", "ROOT", "punct", "advmod", "ROOT", "det", "nsubj", "punct"]
+    doc = get_doc(en_vocab, words=words, heads=heads, deps=deps)
+    doc.tensor = numpy.zeros(96, dtype="float32")
+    displacy.render(doc)

From 
8fa26ca97e52bf571f1a130e771e5edf293edf9f Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 11:01:54 +0100 Subject: [PATCH 04/10] Fix tensor shape in test for #3288 --- spacy/tests/regression/test_issue3288.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/spacy/tests/regression/test_issue3288.py b/spacy/tests/regression/test_issue3288.py index d17dec971..5379a585a 100644 --- a/spacy/tests/regression/test_issue3288.py +++ b/spacy/tests/regression/test_issue3288.py @@ -16,5 +16,5 @@ def test_issue3288(en_vocab): heads = [1, 0, -1, 1, 0, 1, -2, -3] deps = ["intj", "ROOT", "punct", "advmod", "ROOT", "det", "nsubj", "punct"] doc = get_doc(en_vocab, words=words, heads=heads, deps=deps) - doc.tensor = numpy.zeros(96, dtype="float32") + doc.tensor = numpy.zeros((len(words), 96), dtype="float32") displacy.render(doc) From 660cfe44c5e411d3763cb8bf683a7fdae14dba08 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 13:26:22 +0100 Subject: [PATCH 05/10] Fix formatting --- website/docs/usage/rule-based-matching.md | 5 +---- 1 file changed, 1 insertion(+), 4 deletions(-) diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md index 43ff50b86..c4086d1ec 100644 --- a/website/docs/usage/rule-based-matching.md +++ b/website/docs/usage/rule-based-matching.md @@ -292,7 +292,7 @@ that they are listed as "User name: {username}". The name itself may contain any character, but no whitespace – so you'll know it will be handled as one token. ```python -[{'ORTH': 'User'}, {'ORTH': 'name'}, {'ORTH': ':'}, {}] +[{"ORTH": "User"}, {"ORTH": "name"}, {"ORTH": ":"}, {}] ``` ### Adding on_match rules {#on_match} @@ -301,9 +301,6 @@ To move on to a more realistic example, let's say you're working with a large corpus of blog articles, and you want to match all mentions of "Google I/O" (which spaCy tokenizes as `['Google', 'I', '/', 'O'`]). 
To be safe, you only match on the uppercase versions, in case someone has written it as "Google i/o". -You also add a second pattern with an added `{IS_DIGIT: True}` token – this will -make sure you also match on "Google I/O 2017". If your pattern matches, spaCy -should execute your custom callback function `add_event_ent`. ```python ### {executable="true"} From 38e4422c0db522e23c86728824ae09966d4ad14c Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 13:26:37 +0100 Subject: [PATCH 06/10] Improve matcher example (resolves #3287) --- website/docs/usage/rule-based-matching.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md index c4086d1ec..a73c4386d 100644 --- a/website/docs/usage/rule-based-matching.md +++ b/website/docs/usage/rule-based-matching.md @@ -306,28 +306,29 @@ match on the uppercase versions, in case someone has written it as "Google i/o". ### {executable="true"} import spacy from spacy.matcher import Matcher +from spacy.tokens import Span nlp = spacy.load("en_core_web_sm") matcher = Matcher(nlp.vocab) -# Get the ID of the 'EVENT' entity type. This is required to set an entity. -EVENT = nlp.vocab.strings["EVENT"] - def add_event_ent(matcher, doc, i, matches): # Get the current match and create tuple of entity label, start and end. # Append entity to the doc's entity. (Don't overwrite doc.ents!) 
match_id, start, end = matches[i] - entity = (EVENT, start, end) + entity = Span(doc, start, end, label="EVENT") doc.ents += (entity,) - print(doc[start:end].text, entity) + print(entity.text) -matcher.add("GoogleIO", add_event_ent, - [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}], - [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}, {"IS_DIGIT": True}],) -doc = nlp(u"This is a text about Google I/O 2015.") +pattern = [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}] +matcher.add("GoogleIO", add_event_ent, pattern) +doc = nlp(u"This is a text about Google I/O.") matches = matcher(doc) ``` +A very similar logic has been implemented in the built-in +[`EntityRuler`](/api/entityruler) by the way. It also takes care of handling +overlapping matches, which you would otherwise have to take care of yourself. + > #### Tip: Visualizing matches > > When working with entities, you can use [displaCy](/api/top-level#displacy) to From f30aac324ca87e44e780be6023e62eb28562a629 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 13:36:15 +0100 Subject: [PATCH 07/10] Update test_issue1971.py --- spacy/tests/regression/test_issue1971.py | 1 + 1 file changed, 1 insertion(+) diff --git a/spacy/tests/regression/test_issue1971.py b/spacy/tests/regression/test_issue1971.py index b28d07f60..177fa83b7 100644 --- a/spacy/tests/regression/test_issue1971.py +++ b/spacy/tests/regression/test_issue1971.py @@ -38,6 +38,7 @@ def test_issue_1971_2(en_vocab): @pytest.mark.xfail def test_issue_1971_3(en_vocab): + """Test that pattern matches correctly for multiple extension attributes.""" Token.set_extension("a", default=1) Token.set_extension("b", default=2) doc = Doc(en_vocab, words=["hello", "world"]) From 91f260f2c4c260f51e9a2529da6c028b968e5ce7 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 13:36:20 +0100 Subject: [PATCH 08/10] Add another test for #1971 --- spacy/tests/regression/test_issue1971.py | 17 
+++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/spacy/tests/regression/test_issue1971.py b/spacy/tests/regression/test_issue1971.py index 177fa83b7..ecc7ebda1 100644 --- a/spacy/tests/regression/test_issue1971.py +++ b/spacy/tests/regression/test_issue1971.py @@ -48,3 +48,20 @@ def test_issue_1971_3(en_vocab): matches = sorted((en_vocab.strings[m_id], s, e) for m_id, s, e in matcher(doc)) assert len(matches) == 4 assert matches == sorted([("A", 0, 1), ("A", 1, 2), ("B", 0, 1), ("B", 1, 2)]) + + +# @pytest.mark.xfail +def test_issue_1971_4(en_vocab): + """Test that pattern matches correctly with multiple extension attribute + values on a single token. + """ + Token.set_extension("ext_a", default="str_a") + Token.set_extension("ext_b", default="str_b") + matcher = Matcher(en_vocab) + doc = Doc(en_vocab, words=["this", "is", "text"]) + pattern = [{"_": {"ext_a": "str_a", "ext_b": "str_b"}}] * 3 + matcher.add("TEST", None, pattern) + matches = matcher(doc) + # Interesting: uncommenting this causes a segmentation fault, so there's + # definitely something going on here + # assert len(matches) == 1 From 57ae71ea95c071f94b83dab4a2d9cfa765f5a320 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 14:13:29 +0100 Subject: [PATCH 09/10] Add docs on serializing the pipeline (see #3289) [ci skip] --- website/docs/usage/saving-loading.md | 37 ++++++++++++++++++++++++++++ 1 file changed, 37 insertions(+) diff --git a/website/docs/usage/saving-loading.md b/website/docs/usage/saving-loading.md index b7a8476a8..55755246f 100644 --- a/website/docs/usage/saving-loading.md +++ b/website/docs/usage/saving-loading.md @@ -22,6 +22,43 @@ the changes, see [this table](/usage/v2#incompat) and the notes on +### Serializing the pipeline + +When serializing the pipeline, keep in mind that this will only save out the +**binary data for the individual components** to allow spaCy to restore them – +not the entire objects. 
This is a good thing, because it makes serialization +safe. But it also means that you have to take care of storing the language name +and pipeline component names as well, and restoring them separately before you +can load in the data. + +> #### Saving the model meta +> +> The `nlp.meta` attribute is a JSON-serializable dictionary and contains all +> model meta information, like the language and pipeline, but also author and +> license information. + +```python +### Serialize +bytes_data = nlp.to_bytes() +lang = nlp.meta["lang"] # "en" +pipeline = nlp.meta["pipeline"] # ["tagger", "parser", "ner"] +``` + +```python +### Deserialize +nlp = spacy.blank(lang) +for pipe_name in pipeline: + pipe = nlp.create_pipe(pipe_name) + nlp.add_pipe(pipe) +nlp.from_bytes(bytes_data) +``` + +This is also how spaCy does it under the hood when loading a model: it loads the +model's `meta.json` containing the language and pipeline information, +initializes the language class, creates and adds the pipeline components and +_then_ loads in the binary data. You can read more about this process +[here](/usage/processing-pipelines#pipelines). 
+ ### Using Pickle {#pickle} > #### Example From 3b667787a932efdf2179bb8eb8a1654517e9a3e6 Mon Sep 17 00:00:00 2001 From: Ines Montani Date: Mon, 18 Feb 2019 16:45:04 +0100 Subject: [PATCH 10/10] Add xfailing test for #3289 --- spacy/tests/regression/test_issue3289.py | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) create mode 100644 spacy/tests/regression/test_issue3289.py diff --git a/spacy/tests/regression/test_issue3289.py b/spacy/tests/regression/test_issue3289.py new file mode 100644 index 000000000..92b4ec853 --- /dev/null +++ b/spacy/tests/regression/test_issue3289.py @@ -0,0 +1,17 @@ +# coding: utf-8 +from __future__ import unicode_literals + +import pytest +from spacy.lang.en import English + + +@pytest.mark.xfail +def test_issue3289(): + """Test that Language.to_bytes handles serializing a pipeline component + with an uninitialized model.""" + nlp = English() + nlp.add_pipe(nlp.create_pipe("textcat")) + bytes_data = nlp.to_bytes() + new_nlp = English() + new_nlp.add_pipe(nlp.create_pipe("textcat")) + new_nlp.from_bytes(bytes_data)