diff --git a/.github/contributors/tmetzl.md b/.github/contributors/tmetzl.md
new file mode 100644
index 000000000..e3c8529c8
--- /dev/null
+++ b/.github/contributors/tmetzl.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+  work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” in one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Tim Metzler |
+| Company name (if applicable) | University of Applied Sciences Bonn-Rhein-Sieg |
+| Title or role (if applicable) | |
+| Date | 03/10/2019 |
+| GitHub username | tmetzl |
+| Website (optional) | |
diff --git a/spacy/lang/ar/__init__.py b/spacy/lang/ar/__init__.py
index c6ff071cf..c120703f6 100644
--- a/spacy/lang/ar/__init__.py
+++ b/spacy/lang/ar/__init__.py
@@ -23,6 +23,7 @@ class ArabicDefaults(Language.Defaults):
tokenizer_exceptions = update_exc(BASE_EXCEPTIONS, TOKENIZER_EXCEPTIONS)
stop_words = STOP_WORDS
suffixes = TOKENIZER_SUFFIXES
+ writing_system = {"direction": "rtl", "has_case": False, "has_letters": True}
class Arabic(Language):
diff --git a/spacy/lang/ja/__init__.py b/spacy/lang/ja/__init__.py
index 39a3a3385..daea9b8d6 100644
--- a/spacy/lang/ja/__init__.py
+++ b/spacy/lang/ja/__init__.py
@@ -9,6 +9,7 @@ from .tag_map import TAG_MAP
from ...attrs import LANG
from ...language import Language
from ...tokens import Doc, Token
+from ...compat import copy_reg
from ...util import DummyTokenizer
@@ -107,4 +108,11 @@ class Japanese(Language):
return self.tokenizer(text)
+def pickle_japanese(instance):
+ return Japanese, tuple()
+
+
+copy_reg.pickle(Japanese, pickle_japanese)
+
+
__all__ = ["Japanese"]
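The `copy_reg.pickle` registration above makes `Japanese` picklable by telling pickle to reconstruct the object by calling the class again, rather than serializing its state (the wrapped Janome tokenizer is not picklable by value). A minimal sketch of the same pattern using the stdlib `copyreg` module, with a toy stand-in class rather than the real spaCy `Japanese`:

```python
import copyreg
import pickle


class Japanese:
    """Toy stand-in for spacy.lang.ja.Japanese: holds a tokenizer
    that cannot be pickled by value (a lambda, in this sketch)."""

    def __init__(self):
        self.tokenizer = lambda text: text.split()


def pickle_japanese(instance):
    # Reconstruct by calling Japanese() with no arguments; the
    # instance's own (unpicklable) state is never serialized.
    return Japanese, tuple()


# Register the reducer for the class, as the diff does via compat.copy_reg
copyreg.pickle(Japanese, pickle_japanese)

nlp = Japanese()
restored = pickle.loads(pickle.dumps(nlp))
print(type(restored).__name__)  # -> Japanese
```

Without the registration, `pickle.dumps(nlp)` would fail on the lambda; with it, unpickling simply re-runs `Japanese()`, which rebuilds the tokenizer from scratch.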
diff --git a/spacy/tests/doc/test_doc_api.py b/spacy/tests/doc/test_doc_api.py
index 4069e018a..86c7fbf72 100644
--- a/spacy/tests/doc/test_doc_api.py
+++ b/spacy/tests/doc/test_doc_api.py
@@ -272,3 +272,9 @@ def test_doc_is_nered(en_vocab):
# Test serialization
new_doc = Doc(en_vocab).from_bytes(doc.to_bytes())
assert new_doc.is_nered
+
+
+def test_doc_lang(en_vocab):
+ doc = Doc(en_vocab, words=["Hello", "world"])
+ assert doc.lang_ == "en"
+ assert doc.lang == en_vocab.strings["en"]
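The new `lang`/`lang_` pair follows spaCy's usual attribute convention: the underscore-suffixed attribute returns the string, and the plain attribute returns the corresponding integer ID from the vocabulary's string store. A minimal sketch of that convention with toy stand-ins (not the real spaCy classes, which hash with MurmurHash):

```python
import hashlib


class StringStore:
    """Toy stand-in for spaCy's StringStore: maps strings to stable IDs."""

    def __getitem__(self, string):
        # spaCy uses a MurmurHash-based hash; any stable hash works here
        return int(hashlib.md5(string.encode("utf8")).hexdigest()[:16], 16)


class Vocab:
    def __init__(self, lang):
        self.lang = lang
        self.strings = StringStore()


class Doc:
    def __init__(self, vocab):
        self.vocab = vocab

    @property
    def lang(self):
        """ID of the language of the doc's vocabulary."""
        return self.vocab.strings[self.vocab.lang]

    @property
    def lang_(self):
        """Language of the doc's vocabulary, e.g. 'en'."""
        return self.vocab.lang


doc = Doc(Vocab("en"))
assert doc.lang_ == "en"
assert doc.lang == doc.vocab.strings["en"]
```

Both properties delegate to the vocab, so they stay correct after serialization round-trips: the language travels with the vocabulary, not with the `Doc` itself.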
diff --git a/spacy/tests/doc/test_underscore.py b/spacy/tests/doc/test_underscore.py
index 6d79c56e7..8f47157fa 100644
--- a/spacy/tests/doc/test_underscore.py
+++ b/spacy/tests/doc/test_underscore.py
@@ -106,3 +106,37 @@ def test_underscore_raises_for_invalid(invalid_kwargs):
def test_underscore_accepts_valid(valid_kwargs):
valid_kwargs["force"] = True
Doc.set_extension("test", **valid_kwargs)
+
+
+def test_underscore_mutable_defaults_list(en_vocab):
+ """Test that mutable default arguments are handled correctly (see #2581)."""
+ Doc.set_extension("mutable", default=[])
+ doc1 = Doc(en_vocab, words=["one"])
+ doc2 = Doc(en_vocab, words=["two"])
+ doc1._.mutable.append("foo")
+ assert len(doc1._.mutable) == 1
+ assert doc1._.mutable[0] == "foo"
+ assert len(doc2._.mutable) == 0
+ doc1._.mutable = ["bar", "baz"]
+ doc1._.mutable.append("foo")
+ assert len(doc1._.mutable) == 3
+ assert len(doc2._.mutable) == 0
+
+
+def test_underscore_mutable_defaults_dict(en_vocab):
+ """Test that mutable default arguments are handled correctly (see #2581)."""
+ Token.set_extension("mutable", default={})
+ token1 = Doc(en_vocab, words=["one"])[0]
+ token2 = Doc(en_vocab, words=["two"])[0]
+ token1._.mutable["foo"] = "bar"
+ assert len(token1._.mutable) == 1
+ assert token1._.mutable["foo"] == "bar"
+ assert len(token2._.mutable) == 0
+ token1._.mutable["foo"] = "baz"
+ assert len(token1._.mutable) == 1
+ assert token1._.mutable["foo"] == "baz"
+ token1._.mutable["x"] = []
+ token1._.mutable["x"].append("y")
+ assert len(token1._.mutable) == 2
+ assert token1._.mutable["x"] == ["y"]
+ assert len(token2._.mutable) == 0
diff --git a/spacy/tests/regression/test_issue2001-2500.py b/spacy/tests/regression/test_issue2001-2500.py
index df5d76641..82b3a81a9 100644
--- a/spacy/tests/regression/test_issue2001-2500.py
+++ b/spacy/tests/regression/test_issue2001-2500.py
@@ -7,7 +7,6 @@ from spacy.tokens import Doc
from spacy.displacy import render
from spacy.gold import iob_to_biluo
from spacy.lang.it import Italian
-import numpy
from spacy.lang.en import English
from ..util import add_vecs_to_vocab, get_doc
diff --git a/spacy/tokens/doc.pyx b/spacy/tokens/doc.pyx
index 4d3ed084a..857c7b538 100644
--- a/spacy/tokens/doc.pyx
+++ b/spacy/tokens/doc.pyx
@@ -597,6 +597,16 @@ cdef class Doc:
if start != self.length:
yield Span(self, start, self.length)
+ @property
+ def lang(self):
+ """RETURNS (uint64): ID of the language of the doc's vocabulary."""
+ return self.vocab.strings[self.vocab.lang]
+
+ @property
+ def lang_(self):
+ """RETURNS (unicode): Language of the doc's vocabulary, e.g. 'en'."""
+ return self.vocab.lang
+
cdef int push_back(self, LexemeOrToken lex_or_tok, bint has_space) except -1:
if self.length == 0:
# Flip these to false when we see the first token.
@@ -748,7 +758,7 @@ cdef class Doc:
# Allow strings, e.g. 'lemma' or 'LEMMA'
attrs = [(IDS[id_.upper()] if hasattr(id_, "upper") else id_)
for id_ in attrs]
-
+
if SENT_START in attrs and HEAD in attrs:
raise ValueError(Errors.E032)
cdef int i, col
diff --git a/spacy/tokens/underscore.py b/spacy/tokens/underscore.py
index 4e2057e4a..ef1d78717 100644
--- a/spacy/tokens/underscore.py
+++ b/spacy/tokens/underscore.py
@@ -2,11 +2,13 @@
from __future__ import unicode_literals
import functools
+import copy
from ..errors import Errors
class Underscore(object):
+ mutable_types = (dict, list, set)
doc_extensions = {}
span_extensions = {}
token_extensions = {}
@@ -32,7 +34,15 @@ class Underscore(object):
elif method is not None:
return functools.partial(method, self._obj)
else:
- return self._doc.user_data.get(self._get_key(name), default)
+ key = self._get_key(name)
+ if key in self._doc.user_data:
+ return self._doc.user_data[key]
+ elif isinstance(default, self.mutable_types):
+ # Handle mutable default arguments (see #2581)
+ new_default = copy.copy(default)
+ self.__setattr__(name, new_default)
+ return new_default
+ return default
def __setattr__(self, name, value):
if name not in self._extensions:
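The fix above addresses the classic shared-mutable-default gotcha (#2581): before it, every `Doc`/`Span`/`Token` that had never written to an extension read back the *same* list or dict object, so mutating it through one object leaked into all the others. The change hands each object its own shallow copy on first read. A self-contained sketch of the idea, with a toy registry standing in for `set_extension`:

```python
import copy

# One shared default per extension, as registered via set_extension
EXTENSIONS = {"mutable": {"default": []}}


class Underscore:
    """Toy sketch of the fix in spacy/tokens/underscore.py."""

    mutable_types = (dict, list, set)

    def __init__(self):
        self.user_data = {}

    def __getattr__(self, name):
        if name in self.user_data:
            return self.user_data[name]
        default = EXTENSIONS[name]["default"]
        if isinstance(default, self.mutable_types):
            # Shallow-copy so mutations don't leak into other objects,
            # and store the copy so later reads see the same object
            new_default = copy.copy(default)
            self.user_data[name] = new_default
            return new_default
        return default


a, b = Underscore(), Underscore()
a.mutable.append("foo")
assert a.mutable == ["foo"]
assert b.mutable == []  # b got its own copy; a's mutation did not leak
```

Note that `copy.copy` is shallow: nested mutables inside the default would still be shared, which is why the default should stay a simple empty container.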
diff --git a/website/docs/api/doc.md b/website/docs/api/doc.md
index 953a31c2d..f5a94335f 100644
--- a/website/docs/api/doc.md
+++ b/website/docs/api/doc.md
@@ -654,6 +654,8 @@ The L2 norm of the document's vector representation.
| `tensor` 2 | object | Container for dense vector representations. |
| `cats` 2 | dictionary | Maps either a label to a score for categories applied to whole document, or `(start_char, end_char, label)` to score for categories applied to spans. `start_char` and `end_char` should be character offsets, label can be either a string or an integer ID, and score should be a float. |
| `user_data` | - | A generic storage area, for user custom data. |
+| `lang` 2.1 | int | ID of the language of the document's vocabulary. |
+| `lang_` 2.1 | unicode | Language of the document's vocabulary, e.g. `en`. |
| `is_tagged` | bool | A flag indicating that the document has been part-of-speech tagged. |
| `is_parsed` | bool | A flag indicating that the document has been syntactically parsed. |
| `is_sentenced` | bool | A flag indicating that sentence boundaries have been applied to the document. |
diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md
index ab780485f..264774b7c 100644
--- a/website/docs/usage/processing-pipelines.md
+++ b/website/docs/usage/processing-pipelines.md
@@ -458,9 +458,7 @@ There are three main types of extensions, which can be defined using the
1. **Attribute extensions.** Set a default value for an attribute, which can be
overwritten manually at any time. Attribute extensions work like "normal"
variables and are the quickest way to store arbitrary information on a `Doc`,
- `Span` or `Token`. Attribute defaults behaves just like argument defaults
- [in Python functions](http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments),
- and should not be used for mutable values like dictionaries or lists.
+ `Span` or `Token`.
```python
Doc.set_extension("hello", default=True)
@@ -527,25 +525,6 @@ Once you've registered your custom attribute, you can also use the built-in
especially useful if you want to pass in a string instead of calling
`doc._.my_attr`.
-
-
-When using **mutable values** like dictionaries or lists as the `default`
-argument, keep in mind that they behave just like mutable default arguments
-[in Python functions](http://docs.python-guide.org/en/latest/writing/gotchas/#mutable-default-arguments).
-This can easily cause unintended results, like the same value being set on _all_
-objects instead of only one particular instance. In most cases, it's better to
-use **getters and setters**, and only set the `default` for boolean or string
-values.
-
-```diff
-+ Doc.set_extension('fruits', getter=get_fruits, setter=set_fruits)
-
-- Doc.set_extension('fruits', default={})
-- doc._.fruits['apple'] = u'🍎' # all docs now have {'apple': u'🍎'}
-```
-
-
-
### Example: Pipeline component for GPE entities and country meta data via a REST API {#component-example3}
This example shows the implementation of a pipeline component that fetches