diff --git a/.github/contributors/roshni-b.md b/.github/contributors/roshni-b.md
new file mode 100644
index 000000000..1b967aefb
--- /dev/null
+++ b/.github/contributors/roshni-b.md
@@ -0,0 +1,107 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+    * you hereby assign to us joint ownership, and to the extent that such
+      assignment is or becomes invalid, ineffective or unenforceable, you hereby
+      grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+      royalty-free, unrestricted license to exercise all rights under those
+      copyrights. This includes, at our option, the right to sublicense these same
+      rights to third parties through multiple levels of sublicensees or other
+      licensing arrangements;
+
+    * you agree that each of us can do all things in relation to your
+      contribution as if each of us were the sole owners, and if one of us makes
+      a derivative work of your contribution, the one who makes the derivative
+      work (or has it made) will be the sole owner of that derivative work;
+
+    * you agree that you will not assert any moral rights in your contribution
+      against us, our licensees or transferees;
+
+    * you agree that we may register a copyright in your contribution and
+      exercise all ownership rights associated with it; and
+
+    * you agree that neither of us has any duty to consult with, obtain the
+      consent of, pay or render an accounting to the other for any use or
+      distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+    * make, have made, use, sell, offer to sell, import, and otherwise transfer
+      your contribution in whole or in part, alone or in combination with or
+      included in any product, work or materials arising out of the project to
+      which your contribution was submitted, and
+
+    * at our option, to sublicense these same rights to third parties through
+      multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+    * Each contribution that you submit is and shall be an original work of
+      authorship and you can legally grant the rights set out in this SCA;
+
+    * to the best of your knowledge, each contribution will not violate any
+      third party's copyrights, trademarks, patents, or other intellectual
+      property rights; and
+
+    * each contribution shall be in compliance with U.S. export control laws and
+      other applicable export and import laws. You agree to notify us if you
+      become aware of any circumstance which would make any of the foregoing
+      representations inaccurate in any respect. We may publicly disclose your
+      participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+    * [x] I am signing on behalf of myself as an individual and no other person
+      or entity, including my employer, has or will have rights with respect to my
+      contributions.
+
+    * [ ] I am signing on behalf of my employer or a legal entity and I have the
+      actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field                          | Entry                |
+| ------------------------------ | -------------------- |
+| Name                           | Roshni Biswas        |
+| Company name (if applicable)   |                      |
+| Title or role (if applicable)  |                      |
+| Date                           | 02-17-2019           |
+| GitHub username                | roshni-b             |
+| Website (optional)             |                      |
diff --git a/spacy/lang/bn/examples.py b/spacy/lang/bn/examples.py
new file mode 100644
index 000000000..9318fe81e
--- /dev/null
+++ b/spacy/lang/bn/examples.py
@@ -0,0 +1,17 @@
+# coding: utf8
+from __future__ import unicode_literals
+
+
+"""
+Example sentences to test spaCy and its language models.
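+
+>>> import spacy
+>>> nlp = spacy.blank("bn")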
+>>> from spacy.lang.bn.examples import sentences
+>>> docs = nlp.pipe(sentences)
+"""
+
+
+sentences = [
+    'তুই খুব ভালো',
+    'আজ আমরা ডাক্তার দেখতে যাবো',
+    'আমি জানি না'
+]
diff --git a/spacy/lang/bn/morph_rules.py b/spacy/lang/bn/morph_rules.py
index bab6da2ea..21a76c7e6 100644
--- a/spacy/lang/bn/morph_rules.py
+++ b/spacy/lang/bn/morph_rules.py
@@ -194,6 +194,14 @@ MORPH_RULES = {
         "Poss": "Yes",
         "Case": "Nom",
     },
+    "তাহাার": {
+        LEMMA: PRON_LEMMA,
+        "Number": "Sing",
+        "Person": "Three",
+        "PronType": "Prs",
+        "Poss": "Yes",
+        "Case": "Nom",
+    },
     "তোমাদের": {
         LEMMA: PRON_LEMMA,
         "Number": "Plur",
diff --git a/spacy/tests/regression/test_issue1971.py b/spacy/tests/regression/test_issue1971.py
index b28d07f60..ecc7ebda1 100644
--- a/spacy/tests/regression/test_issue1971.py
+++ b/spacy/tests/regression/test_issue1971.py
@@ -38,6 +38,7 @@ def test_issue_1971_2(en_vocab):
 
 @pytest.mark.xfail
 def test_issue_1971_3(en_vocab):
+    """Test that pattern matches correctly for multiple extension attributes."""
     Token.set_extension("a", default=1)
     Token.set_extension("b", default=2)
     doc = Doc(en_vocab, words=["hello", "world"])
@@ -47,3 +48,20 @@ def test_issue_1971_3(en_vocab):
     matches = sorted((en_vocab.strings[m_id], s, e) for m_id, s, e in matcher(doc))
     assert len(matches) == 4
     assert matches == sorted([("A", 0, 1), ("A", 1, 2), ("B", 0, 1), ("B", 1, 2)])
+
+
+# @pytest.mark.xfail
+def test_issue_1971_4(en_vocab):
+    """Test that pattern matches correctly with multiple extension attribute
+    values on a single token.
+    """
+    Token.set_extension("ext_a", default="str_a")
+    Token.set_extension("ext_b", default="str_b")
+    matcher = Matcher(en_vocab)
+    doc = Doc(en_vocab, words=["this", "is", "text"])
+    pattern = [{"_": {"ext_a": "str_a", "ext_b": "str_b"}}] * 3
+    matcher.add("TEST", None, pattern)
+    matches = matcher(doc)
+    # Interesting: uncommenting this causes a segmentation fault, so there's
+    # definitely something going on here
+    # assert len(matches) == 1
diff --git a/spacy/tests/regression/test_issue3288.py b/spacy/tests/regression/test_issue3288.py
new file mode 100644
index 000000000..5379a585a
--- /dev/null
+++ b/spacy/tests/regression/test_issue3288.py
@@ -0,0 +1,20 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import pytest
+import numpy
+from spacy import displacy
+
+from ..util import get_doc
+
+
+@pytest.mark.xfail
+def test_issue3288(en_vocab):
+    """Test that retokenization works correctly via displaCy when punctuation
+    is merged onto the preceding token and tensor is resized."""
+    words = ["Hello", "World", "!", "When", "is", "this", "breaking", "?"]
+    heads = [1, 0, -1, 1, 0, 1, -2, -3]
+    deps = ["intj", "ROOT", "punct", "advmod", "ROOT", "det", "nsubj", "punct"]
+    doc = get_doc(en_vocab, words=words, heads=heads, deps=deps)
+    doc.tensor = numpy.zeros((len(words), 96), dtype="float32")
+    displacy.render(doc)
diff --git a/spacy/tests/regression/test_issue3289.py b/spacy/tests/regression/test_issue3289.py
new file mode 100644
index 000000000..92b4ec853
--- /dev/null
+++ b/spacy/tests/regression/test_issue3289.py
@@ -0,0 +1,17 @@
+# coding: utf-8
+from __future__ import unicode_literals
+
+import pytest
+from spacy.lang.en import English
+
+
+@pytest.mark.xfail
+def test_issue3289():
+    """Test that Language.to_bytes handles serializing a pipeline component
+    with an uninitialized model."""
+    nlp = English()
+    nlp.add_pipe(nlp.create_pipe("textcat"))
+    bytes_data = nlp.to_bytes()
+    new_nlp = English()
+    new_nlp.add_pipe(nlp.create_pipe("textcat"))
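+    # Deserializing should restore the component even though its model was
+    # never initialized (currently fails, hence the xfail marker)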
+    new_nlp.from_bytes(bytes_data)
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index 43ff50b86..a73c4386d 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -292,7 +292,7 @@ that they are listed as "User name: {username}". The name itself may contain
 any character, but no whitespace – so you'll know it will be handled as one
 token.
 
 ```python
-[{'ORTH': 'User'}, {'ORTH': 'name'}, {'ORTH': ':'}, {}]
+[{"ORTH": "User"}, {"ORTH": "name"}, {"ORTH": ":"}, {}]
 ```
 
 ### Adding on_match rules {#on_match}
 
@@ -301,36 +301,34 @@ To move on to a more realistic example, let's say you're working with a large
 corpus of blog articles, and you want to match all mentions of "Google I/O"
 (which spaCy tokenizes as `['Google', 'I', '/', 'O']`). To be safe, you only
 match on the uppercase versions, in case someone has written it as "Google i/o".
-You also add a second pattern with an added `{IS_DIGIT: True}` token – this will
-make sure you also match on "Google I/O 2017". If your pattern matches, spaCy
-should execute your custom callback function `add_event_ent`.
 
 ```python
 ### {executable="true"}
 import spacy
 from spacy.matcher import Matcher
+from spacy.tokens import Span
 
 nlp = spacy.load("en_core_web_sm")
 matcher = Matcher(nlp.vocab)
 
-# Get the ID of the 'EVENT' entity type. This is required to set an entity.
-EVENT = nlp.vocab.strings["EVENT"]
-
 def add_event_ent(matcher, doc, i, matches):
     # Get the current match and create tuple of entity label, start and end.
     # Append entity to the doc's entity. (Don't overwrite doc.ents!)
     match_id, start, end = matches[i]
-    entity = (EVENT, start, end)
+    entity = Span(doc, start, end, label="EVENT")
     doc.ents += (entity,)
-    print(doc[start:end].text, entity)
+    print(entity.text)
 
-matcher.add("GoogleIO", add_event_ent,
-            [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}],
-            [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}, {"IS_DIGIT": True}],)
-doc = nlp(u"This is a text about Google I/O 2015.")
+pattern = [{"ORTH": "Google"}, {"ORTH": "I"}, {"ORTH": "/"}, {"ORTH": "O"}]
+matcher.add("GoogleIO", add_event_ent, pattern)
+doc = nlp(u"This is a text about Google I/O.")
 matches = matcher(doc)
 ```
 
+By the way, very similar logic is implemented in the built-in
+[`EntityRuler`](/api/entityruler), which also takes care of handling
+overlapping matches – something you would otherwise have to do yourself.
+
 > #### Tip: Visualizing matches
 >
 > When working with entities, you can use [displaCy](/api/top-level#displacy) to
diff --git a/website/docs/usage/saving-loading.md b/website/docs/usage/saving-loading.md
index b7a8476a8..55755246f 100644
--- a/website/docs/usage/saving-loading.md
+++ b/website/docs/usage/saving-loading.md
@@ -22,6 +22,43 @@ the changes, see [this table](/usage/v2#incompat) and the notes on
 
 
 
+### Serializing the pipeline
+
+When serializing the pipeline, keep in mind that this will only save out the
+**binary data for the individual components** to allow spaCy to restore them –
+not the entire objects. This is a good thing, because it makes serialization
+safe. But it also means that you have to take care of storing the language name
+and pipeline component names as well, and restoring them separately before you
+can load in the data.
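+
+For example, one simple way to keep that information around – this is only a
+sketch, and the file names are placeholders – is to write out `nlp.meta` as
+JSON alongside the binary data:
+
+```python
+### Sketch: store the meta alongside the data
+import json
+
+with open("model.bin", "wb") as file_:
+    file_.write(nlp.to_bytes())
+with open("meta.json", "w") as file_:
+    # nlp.meta is a JSON-serializable dict that includes "lang" and "pipeline"
+    json.dump(nlp.meta, file_)
+```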
+
+> #### Saving the model meta
+>
+> The `nlp.meta` attribute is a JSON-serializable dictionary and contains all
+> model meta information, like the language and pipeline, but also author and
+> license information.
+
+```python
+### Serialize
+bytes_data = nlp.to_bytes()
+lang = nlp.meta["lang"]  # "en"
+pipeline = nlp.meta["pipeline"]  # ["tagger", "parser", "ner"]
+```
+
+```python
+### Deserialize
+nlp = spacy.blank(lang)
+for pipe_name in pipeline:
+    pipe = nlp.create_pipe(pipe_name)
+    nlp.add_pipe(pipe)
+nlp.from_bytes(bytes_data)
+```
+
+This is also how spaCy does it under the hood when loading a model: it loads the
+model's `meta.json` containing the language and pipeline information,
+initializes the language class, creates and adds the pipeline components and
+_then_ loads in the binary data. You can read more about this process
+[here](/usage/processing-pipelines#pipelines).
+
 ### Using Pickle {#pickle}
 
 > #### Example
diff --git a/website/meta/languages.json b/website/meta/languages.json
index ed3f087df..600500c7a 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -102,7 +102,7 @@
     { "code": "te", "name": "Telugu", "example": "ఇది ఒక వాక్యం.", "has_examples": true },
     { "code": "si", "name": "Sinhala", "example": "මෙය වාක්‍යයකි.", "has_examples": true },
     { "code": "ga", "name": "Irish" },
-    { "code": "bn", "name": "Bengali" },
+    { "code": "bn", "name": "Bengali", "has_examples": true },
     { "code": "hi", "name": "Hindi", "example": "यह एक वाक्य है।", "has_examples": true },
     { "code": "kn", "name": "Kannada" },
     { "code": "ta", "name": "Tamil", "has_examples": true },