diff --git a/.github/contributors/Arvindcheenu.md b/.github/contributors/Arvindcheenu.md
new file mode 100644
index 000000000..707a9821d
--- /dev/null
+++ b/.github/contributors/Arvindcheenu.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Arvind Srinivasan |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 2020-06-13 |
+| GitHub username | arvindcheenu |
+| Website (optional) | |
diff --git a/.github/contributors/JannisTriesToCode.md b/.github/contributors/JannisTriesToCode.md
new file mode 100644
index 000000000..d834794c5
--- /dev/null
+++ b/.github/contributors/JannisTriesToCode.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | ----------------------------- |
+| Name | Jannis Rauschke |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 22.05.2020 |
+| GitHub username | JannisTriesToCode |
+| Website (optional) | https://twitter.com/JRauschke |
diff --git a/.github/contributors/MartinoMensio.md b/.github/contributors/MartinoMensio.md
index 1cd32d622..27e453699 100644
--- a/.github/contributors/MartinoMensio.md
+++ b/.github/contributors/MartinoMensio.md
@@ -99,8 +99,8 @@ mark both statements:
| Field | Entry |
|------------------------------- | -------------------- |
| Name | Martino Mensio |
-| Company name (if applicable) | Polytechnic University of Turin |
-| Title or role (if applicable) | Student |
+| Company name (if applicable) | The Open University |
+| Title or role (if applicable) | PhD Student |
| Date | 17 November 2017 |
| GitHub username | MartinoMensio |
| Website (optional) | https://martinomensio.github.io/ |
diff --git a/.github/contributors/R1j1t.md b/.github/contributors/R1j1t.md
new file mode 100644
index 000000000..a92f1e092
--- /dev/null
+++ b/.github/contributors/R1j1t.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Rajat |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 24 May 2020 |
+| GitHub username | R1j1t |
+| Website (optional) | |
diff --git a/.github/contributors/hiroshi-matsuda-rit.md b/.github/contributors/hiroshi-matsuda-rit.md
new file mode 100644
index 000000000..bf19125fb
--- /dev/null
+++ b/.github/contributors/hiroshi-matsuda-rit.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Hiroshi Matsuda |
+| Company name (if applicable) | Megagon Labs, Tokyo |
+| Title or role (if applicable) | Research Scientist |
+| Date | June 6, 2020 |
+| GitHub username | hiroshi-matsuda-rit |
+| Website (optional) | |
diff --git a/.github/contributors/jonesmartins.md b/.github/contributors/jonesmartins.md
new file mode 100644
index 000000000..5663f6193
--- /dev/null
+++ b/.github/contributors/jonesmartins.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Jones Martins |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 2020-06-10 |
+| GitHub username | jonesmartins |
+| Website (optional) | |
diff --git a/.github/contributors/leomrocha.md b/.github/contributors/leomrocha.md
new file mode 100644
index 000000000..495654153
--- /dev/null
+++ b/.github/contributors/leomrocha.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Leonardo M. Rocha |
+| Company name (if applicable) | |
+| Title or role (if applicable) | Eng. |
+| Date | 31/05/2020 |
+| GitHub username | leomrocha |
+| Website (optional) | |
diff --git a/.github/contributors/lfiedler.md b/.github/contributors/lfiedler.md
new file mode 100644
index 000000000..61f8ffeb4
--- /dev/null
+++ b/.github/contributors/lfiedler.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Leander Fiedler |
+| Company name (if applicable) | |
+| Title or role (if applicable) | |
+| Date | 06 April 2020 |
+| GitHub username | lfiedler |
+| Website (optional) | |
\ No newline at end of file
diff --git a/.github/contributors/mahnerak.md b/.github/contributors/mahnerak.md
new file mode 100644
index 000000000..cc7739681
--- /dev/null
+++ b/.github/contributors/mahnerak.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Karen Hambardzumyan |
+| Company name (if applicable) | YerevaNN |
+| Title or role (if applicable) | Researcher |
+| Date | 2020-06-19 |
+| GitHub username | mahnerak |
+| Website (optional)            | https://mahnerak.com/ |
diff --git a/.github/contributors/myavrum.md b/.github/contributors/myavrum.md
new file mode 100644
index 000000000..dc8f1bb84
--- /dev/null
+++ b/.github/contributors/myavrum.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI GmbH](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | -------------------- |
+| Name | Marat M. Yavrumyan |
+| Company name (if applicable) | YSU, UD_Armenian Project |
+| Title or role (if applicable) | Dr., Principal Investigator |
+| Date | 2020-06-19 |
+| GitHub username | myavrum |
+| Website (optional) | http://armtreebank.yerevann.com/ |
diff --git a/.github/contributors/theudas.md b/.github/contributors/theudas.md
new file mode 100644
index 000000000..3d8a2bd95
--- /dev/null
+++ b/.github/contributors/theudas.md
@@ -0,0 +1,106 @@
+# spaCy contributor agreement
+
+This spaCy Contributor Agreement (**"SCA"**) is based on the
+[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
+The SCA applies to any contribution that you make to any product or project
+managed by us (the **"project"**), and sets out the intellectual property rights
+you grant to us in the contributed materials. The term **"us"** shall mean
+[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
+**"you"** shall mean the person or entity identified below.
+
+If you agree to be bound by these terms, fill in the information requested
+below and include the filled-in version with your first pull request, under the
+folder [`.github/contributors/`](/.github/contributors/). The name of the file
+should be your GitHub username, with the extension `.md`. For example, the user
+example_user would create the file `.github/contributors/example_user.md`.
+
+Read this agreement carefully before signing. These terms and conditions
+constitute a binding legal agreement.
+
+## Contributor Agreement
+
+1. The term "contribution" or "contributed materials" means any source code,
+object code, patch, tool, sample, graphic, specification, manual,
+documentation, or any other material posted or submitted by you to the project.
+
+2. With respect to any worldwide copyrights, or copyright applications and
+registrations, in your contribution:
+
+ * you hereby assign to us joint ownership, and to the extent that such
+ assignment is or becomes invalid, ineffective or unenforceable, you hereby
+ grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
+ royalty-free, unrestricted license to exercise all rights under those
+ copyrights. This includes, at our option, the right to sublicense these same
+ rights to third parties through multiple levels of sublicensees or other
+ licensing arrangements;
+
+ * you agree that each of us can do all things in relation to your
+ contribution as if each of us were the sole owners, and if one of us makes
+ a derivative work of your contribution, the one who makes the derivative
+    work (or has it made) will be the sole owner of that derivative work;
+
+ * you agree that you will not assert any moral rights in your contribution
+ against us, our licensees or transferees;
+
+ * you agree that we may register a copyright in your contribution and
+ exercise all ownership rights associated with it; and
+
+ * you agree that neither of us has any duty to consult with, obtain the
+ consent of, pay or render an accounting to the other for any use or
+ distribution of your contribution.
+
+3. With respect to any patents you own, or that you can license without payment
+to any third party, you hereby grant to us a perpetual, irrevocable,
+non-exclusive, worldwide, no-charge, royalty-free license to:
+
+ * make, have made, use, sell, offer to sell, import, and otherwise transfer
+ your contribution in whole or in part, alone or in combination with or
+ included in any product, work or materials arising out of the project to
+ which your contribution was submitted, and
+
+ * at our option, to sublicense these same rights to third parties through
+ multiple levels of sublicensees or other licensing arrangements.
+
+4. Except as set out above, you keep all right, title, and interest in your
+contribution. The rights that you grant to us under these terms are effective
+on the date you first submitted a contribution to us, even if your submission
+took place before the date you sign these terms.
+
+5. You covenant, represent, warrant and agree that:
+
+ * Each contribution that you submit is and shall be an original work of
+ authorship and you can legally grant the rights set out in this SCA;
+
+ * to the best of your knowledge, each contribution will not violate any
+ third party's copyrights, trademarks, patents, or other intellectual
+ property rights; and
+
+ * each contribution shall be in compliance with U.S. export control laws and
+ other applicable export and import laws. You agree to notify us if you
+ become aware of any circumstance which would make any of the foregoing
+ representations inaccurate in any respect. We may publicly disclose your
+ participation in the project, including the fact that you have signed the SCA.
+
+6. This SCA is governed by the laws of the State of California and applicable
+U.S. Federal law. Any choice of law rules will not apply.
+
+7. Please place an “x” on one of the applicable statements below. Please do NOT
+mark both statements:
+
+ * [x] I am signing on behalf of myself as an individual and no other person
+ or entity, including my employer, has or will have rights with respect to my
+ contributions.
+
+ * [ ] I am signing on behalf of my employer or a legal entity and I have the
+ actual authority to contractually bind that entity.
+
+## Contributor Details
+
+| Field | Entry |
+|------------------------------- | ------------------------ |
+| Name | Philipp Sodmann |
+| Company name (if applicable) | Empolis |
+| Title or role (if applicable) | |
+| Date | 2017-05-06 |
+| GitHub username | theudas |
+| Website (optional) | |
diff --git a/.github/workflows/issue-manager.yml b/.github/workflows/issue-manager.yml
new file mode 100644
index 000000000..3fb42ed01
--- /dev/null
+++ b/.github/workflows/issue-manager.yml
@@ -0,0 +1,29 @@
+name: Issue Manager
+
+on:
+ schedule:
+ - cron: "0 0 * * *"
+ issue_comment:
+ types:
+ - created
+ - edited
+ issues:
+ types:
+ - labeled
+
+jobs:
+ issue-manager:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: tiangolo/issue-manager@0.2.1
+ with:
+ token: ${{ secrets.GITHUB_TOKEN }}
+ config: >
+ {
+ "resolved": {
+ "delay": "P7D",
+ "message": "This issue has been automatically closed because it was answered and there was no follow-up discussion.",
+ "remove_label_on_comment": true,
+ "remove_label_on_close": true
+ }
+ }
diff --git a/Makefile b/Makefile
index 9916e3cf5..83ea0d634 100644
--- a/Makefile
+++ b/Makefile
@@ -5,8 +5,9 @@ VENV := ./env$(PYVER)
version := $(shell "bin/get-version.sh")
dist/spacy-$(version).pex : wheelhouse/spacy-$(version).stamp
- $(VENV)/bin/pex -f ./wheelhouse --no-index --disable-cache -m spacy -o $@ spacy==$(version) spacy_lookups_data
+ $(VENV)/bin/pex -f ./wheelhouse --no-index --disable-cache -m spacy -o $@ spacy==$(version) spacy-lookups-data jieba pkuseg==0.0.22 sudachipy sudachidict_core
chmod a+rx $@
+ cp $@ dist/spacy.pex
dist/pytest.pex : wheelhouse/pytest-*.whl
$(VENV)/bin/pex -f ./wheelhouse --no-index --disable-cache -m pytest -o $@ pytest pytest-timeout mock
@@ -14,7 +15,7 @@ dist/pytest.pex : wheelhouse/pytest-*.whl
wheelhouse/spacy-$(version).stamp : $(VENV)/bin/pex setup.py spacy/*.py* spacy/*/*.py*
$(VENV)/bin/pip wheel . -w ./wheelhouse
- $(VENV)/bin/pip wheel spacy_lookups_data -w ./wheelhouse
+ $(VENV)/bin/pip wheel spacy-lookups-data jieba pkuseg==0.0.22 sudachipy sudachidict_core -w ./wheelhouse
touch $@
wheelhouse/pytest-%.whl : $(VENV)/bin/pex
diff --git a/README.md b/README.md
index 500431b9f..f711ea1b1 100644
--- a/README.md
+++ b/README.md
@@ -6,12 +6,12 @@ spaCy is a library for advanced Natural Language Processing in Python and
Cython. It's built on the very latest research, and was designed from day one to
be used in real products. spaCy comes with
[pretrained statistical models](https://spacy.io/models) and word vectors, and
-currently supports tokenization for **50+ languages**. It features
+currently supports tokenization for **60+ languages**. It features
state-of-the-art speed, convolutional **neural network models** for tagging,
parsing and **named entity recognition** and easy **deep learning** integration.
It's commercial open-source software, released under the MIT license.
-💫 **Version 2.2 out now!**
+💫 **Version 2.3 out now!**
[Check out the release notes here.](https://github.com/explosion/spaCy/releases)
[![Azure Pipelines]()](https://dev.azure.com/explosion-ai/public/_build?definitionId=8)
@@ -31,7 +31,7 @@ It's commercial open-source software, released under the MIT license.
| --------------- | -------------------------------------------------------------- |
| [spaCy 101] | New to spaCy? Here's everything you need to know! |
| [Usage Guides] | How to use spaCy and its features. |
-| [New in v2.2] | New features, backwards incompatibilities and migration guide. |
+| [New in v2.3] | New features, backwards incompatibilities and migration guide. |
| [API Reference] | The detailed reference for spaCy's API. |
| [Models] | Download statistical language models for spaCy. |
| [Universe] | Libraries, extensions, demos, books and courses. |
@@ -39,7 +39,7 @@ It's commercial open-source software, released under the MIT license.
| [Contribute] | How to contribute to the spaCy project and code base. |
[spacy 101]: https://spacy.io/usage/spacy-101
-[new in v2.2]: https://spacy.io/usage/v2-2
+[new in v2.3]: https://spacy.io/usage/v2-3
[usage guides]: https://spacy.io/usage/
[api reference]: https://spacy.io/api/
[models]: https://spacy.io/models
@@ -119,12 +119,13 @@ of `v2.0.13`).
pip install spacy
```
-To install additional data tables for lemmatization in **spaCy v2.2+** you can
-run `pip install spacy[lookups]` or install
+To install additional data tables for lemmatization and normalization in
+**spaCy v2.2+** you can run `pip install spacy[lookups]` or install
[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data)
separately. The lookups package is needed to create blank models with
-lemmatization data, and to lemmatize in languages that don't yet come with
-pretrained models and aren't powered by third-party libraries.
+lemmatization data for v2.2+ plus normalization data for v2.3+, and to
+lemmatize in languages that don't yet come with pretrained models and aren't
+powered by third-party libraries.
When using pip it is generally recommended to install packages in a virtual
environment to avoid modifying system state:
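
A minimal sketch of what the lookups extra described above provides, assuming both spacy and spacy-lookups-data are installed; the table names in the comment are illustrative, not an exhaustive list:

```python
import spacy

# A blank model only picks up lemmatization/normalization tables if the
# spacy-lookups-data package is installed alongside spaCy.
nlp = spacy.blank("en")
print(nlp.vocab.lookups.tables)  # e.g. ["lexeme_norm", "lemma_lookup", ...] when lookups data is present
```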
diff --git a/examples/training/train_intent_parser.py b/examples/training/train_intent_parser.py
index c3d5a279b..df1356e3c 100644
--- a/examples/training/train_intent_parser.py
+++ b/examples/training/train_intent_parser.py
@@ -2,7 +2,7 @@
# coding: utf-8
"""Using the parser to recognise your own semantics
-spaCy's parser component can be used to trained to predict any type of tree
+spaCy's parser component can be trained to predict any type of tree
structure over your input text. You can also predict trees over whole documents
or chat logs, with connections between the sentence-roots used to annotate
discourse structure. In this example, we'll build a message parser for a common
diff --git a/setup.cfg b/setup.cfg
index 5bda29c68..01b18ef29 100644
--- a/setup.cfg
+++ b/setup.cfg
@@ -60,7 +60,7 @@ install_requires =
[options.extras_require]
lookups =
- spacy_lookups_data>=0.3.1,<0.4.0
+ spacy_lookups_data>=0.3.2,<0.4.0
cuda =
cupy>=5.0.0b4,<9.0.0
cuda80 =
@@ -79,7 +79,8 @@ cuda102 =
cupy-cuda102>=5.0.0b4,<9.0.0
# Language tokenizers with external dependencies
ja =
- fugashi>=0.1.3
+ sudachipy>=0.4.5
+ sudachidict_core>=20200330
ko =
natto-py==0.9.0
th =
diff --git a/spacy/cli/__init__.py b/spacy/cli/__init__.py
index 14623000a..206f8dd3b 100644
--- a/spacy/cli/__init__.py
+++ b/spacy/cli/__init__.py
@@ -15,7 +15,6 @@ from .evaluate import evaluate # noqa: F401
from .convert import convert # noqa: F401
from .init_model import init_model # noqa: F401
from .validate import validate # noqa: F401
-from .project import project_clone, project_get_assets, project_run # noqa: F401
@app.command("link", no_args_is_help=True, deprecated=True, hidden=True)
diff --git a/spacy/errors.py b/spacy/errors.py
index d6fdd1b43..c3c820987 100644
--- a/spacy/errors.py
+++ b/spacy/errors.py
@@ -3,7 +3,7 @@ def add_codes(err_cls):
class ErrorsWithCodes(err_cls):
def __getattribute__(self, code):
- msg = super().__getattribute__(code)
+ msg = super(ErrorsWithCodes, self).__getattribute__(code)
if code.startswith("__"): # python system attributes like __class__
return msg
else:
@@ -111,6 +111,25 @@ class Warnings(object):
"`spacy.gold.biluo_tags_from_offsets(nlp.make_doc(text), entities)`"
" to check the alignment. Misaligned entities ('-') will be "
"ignored during training.")
+ W031 = ("Model '{model}' ({model_version}) requires spaCy {version} and "
+ "is incompatible with the current spaCy version ({current}). This "
+ "may lead to unexpected results or runtime errors. To resolve "
+ "this, download a newer compatible model or retrain your custom "
+ "model with the current spaCy version. For more details and "
+ "available updates, run: python -m spacy validate")
+ W032 = ("Unable to determine model compatibility for model '{model}' "
+ "({model_version}) with the current spaCy version ({current}). "
+ "This may lead to unexpected results or runtime errors. To resolve "
+ "this, download a newer compatible model or retrain your custom "
+ "model with the current spaCy version. For more details and "
+ "available updates, run: python -m spacy validate")
+ W033 = ("Training a new {model} using a model with no lexeme normalization "
+ "table. This may degrade the performance of the model to some "
+ "degree. If this is intentional or the language you're using "
+ "doesn't have a normalization table, please ignore this warning. "
+ "If this is surprising, make sure you have the spacy-lookups-data "
+ "package installed. The languages with lexeme normalization tables "
+ "are currently: da, de, el, en, id, lb, pt, ru, sr, ta, th.")
# TODO: fix numbering after merging develop into master
W094 = ("Model '{model}' ({model_version}) specifies an under-constrained "
@@ -578,6 +597,9 @@ class Errors(object):
E197 = ("Row out of bounds, unable to add row {row} for key {key}.")
E198 = ("Unable to return {n} most similar vectors for the current vectors "
"table, which contains {n_rows} vectors.")
+ E199 = ("Unable to merge 0-length span at doc[{start}:{end}].")
+ E200 = ("Specifying a base model with a pretrained component '{component}' "
+ "can not be combined with adding a pretrained Tok2Vec layer.")
# TODO: fix numbering after merging develop into master
E983 = ("Invalid key for '{dict_name}': {key}. Available keys: "
diff --git a/spacy/gold.pyx b/spacy/gold.pyx
index 19b135193..af98eda8b 100644
--- a/spacy/gold.pyx
+++ b/spacy/gold.pyx
@@ -1052,6 +1052,7 @@ cdef class GoldParse:
representing the external IDs in a knowledge base (KB)
mapped to either 1.0 or 0.0, indicating positive and
negative examples respectively.
+ make_projective (bool): Whether to projectivize the dependency tree.
RETURNS (GoldParse): The newly constructed object.
"""
self.mem = Pool()
diff --git a/spacy/kb.pyx b/spacy/kb.pyx
index 8d8464f3c..3f226596c 100644
--- a/spacy/kb.pyx
+++ b/spacy/kb.pyx
@@ -446,6 +446,8 @@ cdef class Writer:
assert not path.isdir(loc), f"{loc} is directory"
if isinstance(loc, Path):
loc = bytes(loc)
+ if path.exists(loc):
+ assert not path.isdir(loc), "%s is directory." % loc
cdef bytes bytes_loc = loc.encode('utf8') if type(loc) == unicode else loc
self._fp = fopen(bytes_loc, 'wb')
if not self._fp:
@@ -487,10 +489,10 @@ cdef class Writer:
cdef class Reader:
def __init__(self, object loc):
- assert path.exists(loc)
- assert not path.isdir(loc)
if isinstance(loc, Path):
loc = bytes(loc)
+ assert path.exists(loc)
+ assert not path.isdir(loc)
cdef bytes bytes_loc = loc.encode('utf8') if type(loc) == unicode else loc
self._fp = fopen(bytes_loc, 'rb')
if not self._fp:
diff --git a/spacy/lang/el/syntax_iterators.py b/spacy/lang/el/syntax_iterators.py
index ea3af576c..1bb21d24d 100644
--- a/spacy/lang/el/syntax_iterators.py
+++ b/spacy/lang/el/syntax_iterators.py
@@ -20,29 +20,25 @@ def noun_chunks(doclike):
conj = doc.vocab.strings.add("conj")
nmod = doc.vocab.strings.add("nmod")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
flag = False
if word.pos == NOUN:
# check for patterns such as γραμμή παραγωγής
for potential_nmod in word.rights:
if potential_nmod.dep == nmod:
- seen.update(
- j for j in range(word.left_edge.i, potential_nmod.i + 1)
- )
+ prev_end = potential_nmod.i
yield word.left_edge.i, potential_nmod.i + 1, np_label
flag = True
break
if flag is False:
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
elif word.dep == conj:
# covers the case: έχει όμορφα και έξυπνα παιδιά
@@ -51,9 +47,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
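
The hunk above (and the matching ones for the other languages further down) replaces the `seen` index set with a single `prev_end` marker. A standalone sketch of that idea, using plain (start, end) tuples rather than spaCy's token API:

```python
def non_overlapping(chunks):
    """Yield chunks whose start lies past the end of the last accepted chunk.

    chunks: iterable of (start, end) index pairs, assumed ordered by position,
    mirroring the (left_edge, last token) pairs used in the noun_chunks code.
    """
    prev_end = -1
    for start, end in chunks:
        if start <= prev_end:  # would nest inside the previous chunk
            continue
        prev_end = end - 1  # remember the last covered index
        yield start, end


print(list(non_overlapping([(0, 3), (1, 2), (4, 6)])))  # -> [(0, 3), (4, 6)]
```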
diff --git a/spacy/lang/en/syntax_iterators.py b/spacy/lang/en/syntax_iterators.py
index c41120afb..b63db3539 100644
--- a/spacy/lang/en/syntax_iterators.py
+++ b/spacy/lang/en/syntax_iterators.py
@@ -25,17 +25,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings.add(label) for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -43,9 +41,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
diff --git a/spacy/lang/en/tokenizer_exceptions.py b/spacy/lang/en/tokenizer_exceptions.py
index 908ac3940..dc8a5c04d 100644
--- a/spacy/lang/en/tokenizer_exceptions.py
+++ b/spacy/lang/en/tokenizer_exceptions.py
@@ -136,7 +136,19 @@ for pron in ["he", "she", "it"]:
# W-words, relative pronouns, prepositions etc.
-for word in ["who", "what", "when", "where", "why", "how", "there", "that"]:
+for word in [
+ "who",
+ "what",
+ "when",
+ "where",
+ "why",
+ "how",
+ "there",
+ "that",
+ "this",
+ "these",
+ "those",
+]:
for orth in [word, word.title()]:
_exc[orth + "'s"] = [
{ORTH: orth, LEMMA: word, NORM: word},
@@ -396,6 +408,8 @@ _other_exc = {
{ORTH: "Let", LEMMA: "let", NORM: "let"},
{ORTH: "'s", LEMMA: PRON_LEMMA, NORM: "us"},
],
+ "c'mon": [{ORTH: "c'm", NORM: "come", LEMMA: "come"}, {ORTH: "on"}],
+ "C'mon": [{ORTH: "C'm", NORM: "come", LEMMA: "come"}, {ORTH: "on"}],
}
_exc.update(_other_exc)
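
A quick, hedged check of the new exceptions: with a blank English pipeline, "c'mon" should come out as "c'm" + "on"; the surrounding tokenization shown in the comment is only the expected result, not guaranteed across versions:

```python
import spacy

nlp = spacy.blank("en")  # uses the English tokenizer exceptions defined above
print([t.text for t in nlp("C'mon, who's there?")])
# expected to include "C'm" and "on" as separate tokens
```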
diff --git a/spacy/lang/es/examples.py b/spacy/lang/es/examples.py
index a1db41a16..2bcbd8740 100644
--- a/spacy/lang/es/examples.py
+++ b/spacy/lang/es/examples.py
@@ -14,5 +14,9 @@ sentences = [
"El gato come pescado.",
"Veo al hombre con el telescopio.",
"La araña come moscas.",
- "El pingüino incuba en su nido.",
+ "El pingüino incuba en su nido sobre el hielo.",
+ "¿Dónde estais?",
+ "¿Quién es el presidente Francés?",
+ "¿Dónde está encuentra la capital de Argentina?",
+ "¿Cuándo nació José de San Martín?",
]
diff --git a/spacy/lang/es/punctuation.py b/spacy/lang/es/punctuation.py
index f989221c2..e9552371e 100644
--- a/spacy/lang/es/punctuation.py
+++ b/spacy/lang/es/punctuation.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from ..char_classes import LIST_PUNCT, LIST_ELLIPSES, LIST_QUOTES
from ..char_classes import LIST_ICONS, CURRENCY, LIST_UNITS, PUNCT
from ..char_classes import CONCAT_QUOTES, ALPHA_LOWER, ALPHA_UPPER, ALPHA
diff --git a/spacy/lang/es/tokenizer_exceptions.py b/spacy/lang/es/tokenizer_exceptions.py
index 7836f1c43..50f2988b1 100644
--- a/spacy/lang/es/tokenizer_exceptions.py
+++ b/spacy/lang/es/tokenizer_exceptions.py
@@ -7,8 +7,12 @@ _exc = {
for exc_data in [
+ {ORTH: "n°", LEMMA: "número"},
+ {ORTH: "°C", LEMMA: "grados Celcius"},
{ORTH: "aprox.", LEMMA: "aproximadamente"},
{ORTH: "dna.", LEMMA: "docena"},
+ {ORTH: "dpto.", LEMMA: "departamento"},
+ {ORTH: "ej.", LEMMA: "ejemplo"},
{ORTH: "esq.", LEMMA: "esquina"},
{ORTH: "pág.", LEMMA: "página"},
{ORTH: "p.ej.", LEMMA: "por ejemplo"},
@@ -16,6 +20,7 @@ for exc_data in [
{ORTH: "Vd.", LEMMA: PRON_LEMMA, NORM: "usted"},
{ORTH: "Uds.", LEMMA: PRON_LEMMA, NORM: "ustedes"},
{ORTH: "Vds.", LEMMA: PRON_LEMMA, NORM: "ustedes"},
+ {ORTH: "vol.", NORM: "volúmen"},
]:
_exc[exc_data[ORTH]] = [exc_data]
@@ -35,10 +40,14 @@ for h in range(1, 12 + 1):
for orth in [
"a.C.",
"a.J.C.",
+ "d.C.",
+ "d.J.C.",
"apdo.",
"Av.",
"Avda.",
"Cía.",
+ "Dr.",
+ "Dra.",
"EE.UU.",
"etc.",
"fig.",
@@ -54,9 +63,9 @@ for orth in [
"Prof.",
"Profa.",
"q.e.p.d.",
- "S.A.",
+ "Q.E.P.D." "S.A.",
"S.L.",
- "s.s.s.",
+ "S.R.L." "s.s.s.",
"Sr.",
"Sra.",
"Srta.",
diff --git a/spacy/lang/fa/syntax_iterators.py b/spacy/lang/fa/syntax_iterators.py
index c41120afb..b63db3539 100644
--- a/spacy/lang/fa/syntax_iterators.py
+++ b/spacy/lang/fa/syntax_iterators.py
@@ -25,17 +25,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings.add(label) for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -43,9 +41,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.i + 1))
+ prev_end = word.i
yield word.left_edge.i, word.i + 1, np_label
diff --git a/spacy/lang/fr/_tokenizer_exceptions_list.py b/spacy/lang/fr/_tokenizer_exceptions_list.py
index 7f908dac8..50f439501 100644
--- a/spacy/lang/fr/_tokenizer_exceptions_list.py
+++ b/spacy/lang/fr/_tokenizer_exceptions_list.py
@@ -531,7 +531,6 @@ FR_BASE_EXCEPTIONS = [
"Beaumont-Hamel",
"Beaumont-Louestault",
"Beaumont-Monteux",
- "Beaumont-Pied-de-Buf",
"Beaumont-Pied-de-Bœuf",
"Beaumont-Sardolles",
"Beaumont-Village",
@@ -948,7 +947,7 @@ FR_BASE_EXCEPTIONS = [
"Buxières-sous-les-Côtes",
"Buzy-Darmont",
"Byhleguhre-Byhlen",
- "Burs-en-Othe",
+ "Bœurs-en-Othe",
"Bâle-Campagne",
"Bâle-Ville",
"Béard-Géovreissiat",
@@ -1586,11 +1585,11 @@ FR_BASE_EXCEPTIONS = [
"Cruci-Falgardiens",
"Cruquius-Oost",
"Cruviers-Lascours",
- "Crèvecur-en-Auge",
- "Crèvecur-en-Brie",
- "Crèvecur-le-Grand",
- "Crèvecur-le-Petit",
- "Crèvecur-sur-l'Escaut",
+ "Crèvecœur-en-Auge",
+ "Crèvecœur-en-Brie",
+ "Crèvecœur-le-Grand",
+ "Crèvecœur-le-Petit",
+ "Crèvecœur-sur-l'Escaut",
"Crécy-Couvé",
"Créon-d'Armagnac",
"Cubjac-Auvézère-Val-d'Ans",
@@ -1616,7 +1615,7 @@ FR_BASE_EXCEPTIONS = [
"Cuxac-Cabardès",
"Cuxac-d'Aude",
"Cuyk-Sainte-Agathe",
- "Cuvres-et-Valsery",
+ "Cœuvres-et-Valsery",
"Céaux-d'Allègre",
"Céleste-Empire",
"Cénac-et-Saint-Julien",
@@ -1679,7 +1678,7 @@ FR_BASE_EXCEPTIONS = [
"Devrai-Gondragnières",
"Dhuys et Morin-en-Brie",
"Diane-Capelle",
- "Dieffenbach-lès-Wrth",
+ "Dieffenbach-lès-Wœrth",
"Diekhusen-Fahrstedt",
"Diennes-Aubigny",
"Diensdorf-Radlow",
@@ -1752,7 +1751,7 @@ FR_BASE_EXCEPTIONS = [
"Durdat-Larequille",
"Durfort-Lacapelette",
"Durfort-et-Saint-Martin-de-Sossenac",
- "Duil-sur-le-Mignon",
+ "Dœuil-sur-le-Mignon",
"Dão-Lafões",
"Débats-Rivière-d'Orpra",
"Décines-Charpieu",
@@ -2687,8 +2686,8 @@ FR_BASE_EXCEPTIONS = [
"Kuhlen-Wendorf",
"KwaZulu-Natal",
"Kyzyl-Arvat",
- "Kur-la-Grande",
- "Kur-la-Petite",
+ "Kœur-la-Grande",
+ "Kœur-la-Petite",
"Kölln-Reisiek",
"Königsbach-Stein",
"Königshain-Wiederau",
@@ -4024,7 +4023,7 @@ FR_BASE_EXCEPTIONS = [
"Marcilly-d'Azergues",
"Marcillé-Raoul",
"Marcillé-Robert",
- "Marcq-en-Barul",
+ "Marcq-en-Barœul",
"Marcy-l'Etoile",
"Marcy-l'Étoile",
"Mareil-Marly",
@@ -4258,7 +4257,7 @@ FR_BASE_EXCEPTIONS = [
"Monlezun-d'Armagnac",
"Monléon-Magnoac",
"Monnetier-Mornex",
- "Mons-en-Barul",
+ "Mons-en-Barœul",
"Monsempron-Libos",
"Monsteroux-Milieu",
"Montacher-Villegardin",
@@ -4348,7 +4347,7 @@ FR_BASE_EXCEPTIONS = [
"Mornay-Berry",
"Mortain-Bocage",
"Morteaux-Couliboeuf",
- "Morteaux-Coulibuf",
+ "Morteaux-Coulibœuf",
"Morteaux-Coulibœuf",
"Mortes-Frontières",
"Mory-Montcrux",
@@ -4391,7 +4390,7 @@ FR_BASE_EXCEPTIONS = [
"Muncq-Nieurlet",
"Murtin-Bogny",
"Murtin-et-le-Châtelet",
- "Murs-Verdey",
+ "Mœurs-Verdey",
"Ménestérol-Montignac",
"Ménil'muche",
"Ménil-Annelles",
@@ -4612,7 +4611,7 @@ FR_BASE_EXCEPTIONS = [
"Neuves-Maisons",
"Neuvic-Entier",
"Neuvicq-Montguyon",
- "Neuville-lès-Luilly",
+ "Neuville-lès-Lœuilly",
"Neuvy-Bouin",
"Neuvy-Deux-Clochers",
"Neuvy-Grandchamp",
@@ -4773,8 +4772,8 @@ FR_BASE_EXCEPTIONS = [
"Nuncq-Hautecôte",
"Nurieux-Volognat",
"Nuthe-Urstromtal",
- "Nux-les-Mines",
- "Nux-lès-Auxi",
+ "Nœux-les-Mines",
+ "Nœux-lès-Auxi",
"Nâves-Parmelan",
"Nézignan-l'Evêque",
"Nézignan-l'Évêque",
@@ -5343,7 +5342,7 @@ FR_BASE_EXCEPTIONS = [
"Quincy-Voisins",
"Quincy-sous-le-Mont",
"Quint-Fonsegrives",
- "Quux-Haut-Maînil",
+ "Quœux-Haut-Maînil",
"Quœux-Haut-Maînil",
"Qwa-Qwa",
"R.-V.",
@@ -5631,12 +5630,12 @@ FR_BASE_EXCEPTIONS = [
"Saint Aulaye-Puymangou",
"Saint Geniez d'Olt et d'Aubrac",
"Saint Martin de l'If",
- "Saint-Denux",
- "Saint-Jean-de-Buf",
- "Saint-Martin-le-Nud",
- "Saint-Michel-Tubuf",
+ "Saint-Denœux",
+ "Saint-Jean-de-Bœuf",
+ "Saint-Martin-le-Nœud",
+ "Saint-Michel-Tubœuf",
"Saint-Paul - Flaugnac",
- "Saint-Pierre-de-Buf",
+ "Saint-Pierre-de-Bœuf",
"Saint-Thegonnec Loc-Eguiner",
"Sainte-Alvère-Saint-Laurent Les Bâtons",
"Salignac-Eyvignes",
@@ -6208,7 +6207,7 @@ FR_BASE_EXCEPTIONS = [
"Tite-Live",
"Titisee-Neustadt",
"Tobel-Tägerschen",
- "Togny-aux-Bufs",
+ "Togny-aux-Bœufs",
"Tongre-Notre-Dame",
"Tonnay-Boutonne",
"Tonnay-Charente",
@@ -6336,7 +6335,7 @@ FR_BASE_EXCEPTIONS = [
"Vals-près-le-Puy",
"Valverde-Enrique",
"Valzin-en-Petite-Montagne",
- "Vanduvre-lès-Nancy",
+ "Vandœuvre-lès-Nancy",
"Varces-Allières-et-Risset",
"Varenne-l'Arconce",
"Varenne-sur-le-Doubs",
@@ -6457,9 +6456,9 @@ FR_BASE_EXCEPTIONS = [
"Villenave-d'Ornon",
"Villequier-Aumont",
"Villerouge-Termenès",
- "Villers-aux-Nuds",
+ "Villers-aux-Nœuds",
"Villez-sur-le-Neubourg",
- "Villiers-en-Désuvre",
+ "Villiers-en-Désœuvre",
"Villieu-Loyes-Mollon",
"Villingen-Schwenningen",
"Villié-Morgon",
@@ -6467,7 +6466,7 @@ FR_BASE_EXCEPTIONS = [
"Vilosnes-Haraumont",
"Vilters-Wangs",
"Vincent-Froideville",
- "Vincy-Manuvre",
+ "Vincy-Manœuvre",
"Vincy-Manœuvre",
"Vincy-Reuil-et-Magny",
"Vindrac-Alayrac",
@@ -6511,8 +6510,8 @@ FR_BASE_EXCEPTIONS = [
"Vrigne-Meusiens",
"Vrijhoeve-Capelle",
"Vuisternens-devant-Romont",
- "Vlfling-lès-Bouzonville",
- "Vuil-et-Giget",
+ "Vœlfling-lès-Bouzonville",
+ "Vœuil-et-Giget",
"Vélez-Blanco",
"Vélez-Málaga",
"Vélez-Rubio",
@@ -6615,7 +6614,7 @@ FR_BASE_EXCEPTIONS = [
"Wust-Fischbeck",
"Wutha-Farnroda",
"Wy-dit-Joli-Village",
- "Wlfling-lès-Sarreguemines",
+ "Wœlfling-lès-Sarreguemines",
"Wünnewil-Flamatt",
"X-SAMPA",
"X-arbre",
diff --git a/spacy/lang/fr/syntax_iterators.py b/spacy/lang/fr/syntax_iterators.py
index c09b0e840..ca711593f 100644
--- a/spacy/lang/fr/syntax_iterators.py
+++ b/spacy/lang/fr/syntax_iterators.py
@@ -24,17 +24,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings[label] for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -42,9 +40,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
diff --git a/spacy/lang/fr/tokenizer_exceptions.py b/spacy/lang/fr/tokenizer_exceptions.py
index 7bf4922d8..6806ea4fe 100644
--- a/spacy/lang/fr/tokenizer_exceptions.py
+++ b/spacy/lang/fr/tokenizer_exceptions.py
@@ -1,7 +1,6 @@
import re
from .punctuation import ELISION, HYPHENS
-from ..tokenizer_exceptions import URL_PATTERN
from ..char_classes import ALPHA_LOWER, ALPHA
from ...symbols import ORTH, LEMMA
@@ -452,9 +451,6 @@ _regular_exp += [
for hc in _hyphen_combination
]
-# URLs
-_regular_exp.append(URL_PATTERN)
-
TOKENIZER_EXCEPTIONS = _exc
TOKEN_MATCH = re.compile(
diff --git a/spacy/lang/gu/__init__.py b/spacy/lang/gu/__init__.py
index 1f080c7c2..bc8fc260c 100644
--- a/spacy/lang/gu/__init__.py
+++ b/spacy/lang/gu/__init__.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from .stop_words import STOP_WORDS
from ...language import Language
diff --git a/spacy/lang/gu/examples.py b/spacy/lang/gu/examples.py
index 202a8d022..1cf75fd32 100644
--- a/spacy/lang/gu/examples.py
+++ b/spacy/lang/gu/examples.py
@@ -1,7 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-
"""
Example sentences to test spaCy and its language models.
diff --git a/spacy/lang/gu/stop_words.py b/spacy/lang/gu/stop_words.py
index 85d33763d..2c859681b 100644
--- a/spacy/lang/gu/stop_words.py
+++ b/spacy/lang/gu/stop_words.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
STOP_WORDS = set(
"""
એમ
diff --git a/spacy/lang/hu/punctuation.py b/spacy/lang/hu/punctuation.py
index 1fea6d510..597f01b65 100644
--- a/spacy/lang/hu/punctuation.py
+++ b/spacy/lang/hu/punctuation.py
@@ -7,7 +7,6 @@ _concat_icons = CONCAT_ICONS.replace("\u00B0", "")
_currency = r"\$¢£€¥฿"
_quotes = CONCAT_QUOTES.replace("'", "")
-_units = UNITS.replace("%", "")
_prefixes = (
LIST_PUNCT
@@ -18,7 +17,8 @@ _prefixes = (
)
_suffixes = (
- LIST_PUNCT
+ [r"\+"]
+ + LIST_PUNCT
+ LIST_ELLIPSES
+ LIST_QUOTES
+ [_concat_icons]
@@ -26,7 +26,7 @@ _suffixes = (
r"(?<=[0-9])\+",
r"(?<=°[FfCcKk])\.",
r"(?<=[0-9])(?:[{c}])".format(c=_currency),
- r"(?<=[0-9])(?:{u})".format(u=_units),
+ r"(?<=[0-9])(?:{u})".format(u=UNITS),
r"(?<=[{al}{e}{q}(?:{c})])\.".format(
al=ALPHA_LOWER, e=r"%²\-\+", q=CONCAT_QUOTES, c=_currency
),
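
A small sketch of the Hungarian suffix change, assuming a blank Hungarian pipeline: restoring the full UNITS set and adding a bare `+` suffix means symbols like `%` should now be split off after digits (the result is only printed here, since the exact tokenization may differ by version):

```python
import spacy

nlp = spacy.blank("hu")
# With UNITS no longer stripped of "%", a percent sign after a number should
# become its own token.
print([t.text for t in nlp("Az ár 25% plusz áfa.")])
```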
diff --git a/spacy/lang/hu/tokenizer_exceptions.py b/spacy/lang/hu/tokenizer_exceptions.py
index cc5eede17..ffb4d1472 100644
--- a/spacy/lang/hu/tokenizer_exceptions.py
+++ b/spacy/lang/hu/tokenizer_exceptions.py
@@ -1,7 +1,6 @@
import re
from ..punctuation import ALPHA_LOWER, CURRENCY
-from ..tokenizer_exceptions import URL_PATTERN
from ...symbols import ORTH
@@ -646,4 +645,4 @@ _nums = r"(({ne})|({t})|({on})|({c}))({s})?".format(
TOKENIZER_EXCEPTIONS = _exc
-TOKEN_MATCH = re.compile(r"^({u})|({n})$".format(u=URL_PATTERN, n=_nums)).match
+TOKEN_MATCH = re.compile(r"^{n}$".format(n=_nums)).match
diff --git a/spacy/lang/hy/__init__.py b/spacy/lang/hy/__init__.py
index 6aaa965bb..8928e52ae 100644
--- a/spacy/lang/hy/__init__.py
+++ b/spacy/lang/hy/__init__.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from .tag_map import TAG_MAP
diff --git a/spacy/lang/hy/examples.py b/spacy/lang/hy/examples.py
index 323f77b1c..69e354688 100644
--- a/spacy/lang/hy/examples.py
+++ b/spacy/lang/hy/examples.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
"""
Example sentences to test spaCy and its language models.
>>> from spacy.lang.hy.examples import sentences
diff --git a/spacy/lang/hy/lex_attrs.py b/spacy/lang/hy/lex_attrs.py
index 910625fb8..f84472d60 100644
--- a/spacy/lang/hy/lex_attrs.py
+++ b/spacy/lang/hy/lex_attrs.py
@@ -1,12 +1,9 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from ...attrs import LIKE_NUM
_num_words = [
- "զրօ",
- "մէկ",
+ "զրո",
+ "մեկ",
"երկու",
"երեք",
"չորս",
@@ -28,10 +25,10 @@ _num_words = [
"քսան" "երեսուն",
"քառասուն",
"հիսուն",
- "վաթցսուն",
+ "վաթսուն",
"յոթանասուն",
"ութսուն",
- "ինիսուն",
+ "իննսուն",
"հարյուր",
"հազար",
"միլիոն",
diff --git a/spacy/lang/hy/stop_words.py b/spacy/lang/hy/stop_words.py
index d75aad6e2..46d0f6b51 100644
--- a/spacy/lang/hy/stop_words.py
+++ b/spacy/lang/hy/stop_words.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
STOP_WORDS = set(
"""
նա
diff --git a/spacy/lang/hy/tag_map.py b/spacy/lang/hy/tag_map.py
index 722270110..09be1fd8d 100644
--- a/spacy/lang/hy/tag_map.py
+++ b/spacy/lang/hy/tag_map.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from ...symbols import POS, ADJ, NUM, DET, ADV, ADP, X, VERB, NOUN
from ...symbols import PROPN, PART, INTJ, PRON, SCONJ, AUX, CCONJ
diff --git a/spacy/lang/id/syntax_iterators.py b/spacy/lang/id/syntax_iterators.py
index c09b0e840..ca711593f 100644
--- a/spacy/lang/id/syntax_iterators.py
+++ b/spacy/lang/id/syntax_iterators.py
@@ -24,17 +24,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings[label] for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -42,9 +40,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
diff --git a/spacy/lang/ja/__init__.py b/spacy/lang/ja/__init__.py
index d1ce651d7..05ced3f7a 100644
--- a/spacy/lang/ja/__init__.py
+++ b/spacy/lang/ja/__init__.py
@@ -1,111 +1,266 @@
-import re
-from collections import namedtuple
+import srsly
+from collections import namedtuple, OrderedDict
from .stop_words import STOP_WORDS
+from .syntax_iterators import SYNTAX_ITERATORS
from .tag_map import TAG_MAP
+from .tag_orth_map import TAG_ORTH_MAP
+from .tag_bigram_map import TAG_BIGRAM_MAP
from ...attrs import LANG
-from ...language import Language
-from ...tokens import Doc
from ...compat import copy_reg
+from ...errors import Errors
+from ...language import Language
+from ...symbols import POS
+from ...tokens import Doc
from ...util import DummyTokenizer
+from ... import util
+
+
+# Hold the attributes we need with convenient names
+DetailedToken = namedtuple("DetailedToken", ["surface", "pos", "lemma"])
# Handling for multiple spaces in a row is somewhat awkward, this simplifies
# the flow by creating a dummy with the same interface.
-DummyNode = namedtuple("DummyNode", ["surface", "pos", "feature"])
-DummyNodeFeatures = namedtuple("DummyNodeFeatures", ["lemma"])
-DummySpace = DummyNode(" ", " ", DummyNodeFeatures(" "))
+DummyNode = namedtuple("DummyNode", ["surface", "pos", "lemma"])
+DummySpace = DummyNode(" ", " ", " ")
-def try_fugashi_import():
- """Fugashi is required for Japanese support, so check for it.
- It it's not available blow up and explain how to fix it."""
+def try_sudachi_import(split_mode="A"):
+ """SudachiPy is required for Japanese support, so check for it.
+ If it's not available, blow up and explain how to fix it.
+ split_mode should be one of "A", "B" or "C", or None (which defaults to "A")."""
try:
- import fugashi
+ from sudachipy import dictionary, tokenizer
- return fugashi
+ split_mode = {
+ None: tokenizer.Tokenizer.SplitMode.A,
+ "A": tokenizer.Tokenizer.SplitMode.A,
+ "B": tokenizer.Tokenizer.SplitMode.B,
+ "C": tokenizer.Tokenizer.SplitMode.C,
+ }[split_mode]
+ tok = dictionary.Dictionary().create(mode=split_mode)
+ return tok
except ImportError:
raise ImportError(
- "Japanese support requires Fugashi: " "https://github.com/polm/fugashi"
+ "Japanese support requires SudachiPy and SudachiDict-core "
+ "(https://github.com/WorksApplications/SudachiPy). "
+ "Install with `pip install sudachipy sudachidict_core` or "
+ "install spaCy with `pip install spacy[ja]`."
)
-def resolve_pos(token):
+def resolve_pos(orth, pos, next_pos):
"""If necessary, add a field to the POS tag for UD mapping.
Under Universal Dependencies, sometimes the same Unidic POS tag can
be mapped differently depending on the literal token or its context
- in the sentence. This function adds information to the POS tag to
- resolve ambiguous mappings.
+ in the sentence. This function returns a tuple of the resolved POS for the
+ token and, where the context determines it, the POS for the next token.
"""
- # this is only used for consecutive ascii spaces
- if token.surface == " ":
- return "空白"
+ # Some tokens have their UD tag decided based on the POS of the following
+ # token.
- # TODO: This is a first take. The rules here are crude approximations.
- # For many of these, full dependencies are needed to properly resolve
- # PoS mappings.
- if token.pos == "連体詞,*,*,*":
- if re.match(r"[こそあど此其彼]の", token.surface):
- return token.pos + ",DET"
- if re.match(r"[こそあど此其彼]", token.surface):
- return token.pos + ",PRON"
- return token.pos + ",ADJ"
- return token.pos
+ # orth based rules
+ if pos[0] in TAG_ORTH_MAP:
+ orth_map = TAG_ORTH_MAP[pos[0]]
+ if orth in orth_map:
+ return orth_map[orth], None
+
+ # tag bi-gram mapping
+ if next_pos:
+ tag_bigram = pos[0], next_pos[0]
+ if tag_bigram in TAG_BIGRAM_MAP:
+ bipos = TAG_BIGRAM_MAP[tag_bigram]
+ if bipos[0] is None:
+ return TAG_MAP[pos[0]][POS], bipos[1]
+ else:
+ return bipos
+
+ return TAG_MAP[pos[0]][POS], None
-def get_words_and_spaces(tokenizer, text):
- """Get the individual tokens that make up the sentence and handle white space.
+# Use a mapping of paired punctuation to avoid splitting quoted sentences.
+pairpunct = {"「": "」", "『": "』", "【": "】"}
- Japanese doesn't usually use white space, and MeCab's handling of it for
- multiple spaces in a row is somewhat awkward.
+
+def separate_sentences(doc):
+ """Given a doc, mark tokens that start sentences based on Unidic tags.
"""
- tokens = tokenizer.parseToNodeList(text)
+ stack = [] # save paired punctuation
+ for i, token in enumerate(doc[:-2]):
+ # Set all tokens after the first to false by default. This is necessary
+ # for the doc code to be aware we've done sentencization, see
+ # `is_sentenced`.
+ token.sent_start = i == 0
+ if token.tag_:
+ if token.tag_ == "補助記号-括弧開":
+ ts = str(token)
+ if ts in pairpunct:
+ stack.append(pairpunct[ts])
+ elif stack and ts == stack[-1]:
+ stack.pop()
+
+ if token.tag_ == "補助記号-句点":
+ next_token = doc[i + 1]
+ if next_token.tag_ != token.tag_ and not stack:
+ next_token.sent_start = True
+
+
+def get_dtokens(tokenizer, text):
+ tokens = tokenizer.tokenize(text)
words = []
- spaces = []
- for token in tokens:
- # If there's more than one space, spaces after the first become tokens
- for ii in range(len(token.white_space) - 1):
- words.append(DummySpace)
- spaces.append(False)
+ for ti, token in enumerate(tokens):
+ tag = "-".join([xx for xx in token.part_of_speech()[:4] if xx != "*"])
+ inf = "-".join([xx for xx in token.part_of_speech()[4:] if xx != "*"])
+ dtoken = DetailedToken(token.surface(), (tag, inf), token.dictionary_form())
+ if ti > 0 and words[-1].pos[0] == "空白" and tag == "空白":
+ # don't add multiple space tokens in a row
+ continue
+ words.append(dtoken)
- words.append(token)
- spaces.append(bool(token.white_space))
- return words, spaces
+ # remove empty tokens. These can be produced with characters like … that
+ # Sudachi normalizes internally.
+ words = [ww for ww in words if len(ww.surface) > 0]
+ return words
+
+
+def get_words_lemmas_tags_spaces(dtokens, text, gap_tag=("空白", "")):
+ words = [x.surface for x in dtokens]
+ if "".join("".join(words).split()) != "".join(text.split()):
+ raise ValueError(Errors.E194.format(text=text, words=words))
+ text_words = []
+ text_lemmas = []
+ text_tags = []
+ text_spaces = []
+ text_pos = 0
+ # handle empty and whitespace-only texts
+ if len(words) == 0:
+ return text_words, text_lemmas, text_tags, text_spaces
+ elif len([word for word in words if not word.isspace()]) == 0:
+ assert text.isspace()
+ text_words = [text]
+ text_lemmas = [text]
+ text_tags = [gap_tag]
+ text_spaces = [False]
+ return text_words, text_lemmas, text_tags, text_spaces
+ # normalize words to remove all whitespace tokens
+ norm_words, norm_dtokens = zip(
+ *[
+ (word, dtokens)
+ for word, dtokens in zip(words, dtokens)
+ if not word.isspace()
+ ]
+ )
+ # align words with text
+ for word, dtoken in zip(norm_words, norm_dtokens):
+ try:
+ word_start = text[text_pos:].index(word)
+ except ValueError:
+ raise ValueError(Errors.E194.format(text=text, words=words))
+ if word_start > 0:
+ w = text[text_pos : text_pos + word_start]
+ text_words.append(w)
+ text_lemmas.append(w)
+ text_tags.append(gap_tag)
+ text_spaces.append(False)
+ text_pos += word_start
+ text_words.append(word)
+ text_lemmas.append(dtoken.lemma)
+ text_tags.append(dtoken.pos)
+ text_spaces.append(False)
+ text_pos += len(word)
+ if text_pos < len(text) and text[text_pos] == " ":
+ text_spaces[-1] = True
+ text_pos += 1
+ if text_pos < len(text):
+ w = text[text_pos:]
+ text_words.append(w)
+ text_lemmas.append(w)
+ text_tags.append(gap_tag)
+ text_spaces.append(False)
+ return text_words, text_lemmas, text_tags, text_spaces
class JapaneseTokenizer(DummyTokenizer):
- def __init__(self, cls, nlp=None):
+ def __init__(self, cls, nlp=None, config={}):
self.vocab = nlp.vocab if nlp is not None else cls.create_vocab(nlp)
- self.tokenizer = try_fugashi_import().Tagger()
- self.tokenizer.parseToNodeList("") # see #2901
+ self.split_mode = config.get("split_mode", None)
+ self.tokenizer = try_sudachi_import(self.split_mode)
def __call__(self, text):
- dtokens, spaces = get_words_and_spaces(self.tokenizer, text)
- words = [x.surface for x in dtokens]
+ dtokens = get_dtokens(self.tokenizer, text)
+
+ words, lemmas, unidic_tags, spaces = get_words_lemmas_tags_spaces(dtokens, text)
doc = Doc(self.vocab, words=words, spaces=spaces)
- unidic_tags = []
- for token, dtoken in zip(doc, dtokens):
- unidic_tags.append(dtoken.pos)
- token.tag_ = resolve_pos(dtoken)
+ next_pos = None
+ for idx, (token, lemma, unidic_tag) in enumerate(zip(doc, lemmas, unidic_tags)):
+ token.tag_ = unidic_tag[0]
+ if next_pos:
+ token.pos = next_pos
+ next_pos = None
+ else:
+ token.pos, next_pos = resolve_pos(
+ token.orth_,
+ unidic_tag,
+ unidic_tags[idx + 1] if idx + 1 < len(unidic_tags) else None,
+ )
# if there's no lemma info (it's an unk) just use the surface
- token.lemma_ = dtoken.feature.lemma or dtoken.surface
+ token.lemma_ = lemma
doc.user_data["unidic_tags"] = unidic_tags
+
return doc
+ def _get_config(self):
+ config = OrderedDict((("split_mode", self.split_mode),))
+ return config
+
+ def _set_config(self, config={}):
+ self.split_mode = config.get("split_mode", None)
+
+ def to_bytes(self, **kwargs):
+ serializers = OrderedDict(
+ (("cfg", lambda: srsly.json_dumps(self._get_config())),)
+ )
+ return util.to_bytes(serializers, [])
+
+ def from_bytes(self, data, **kwargs):
+ deserializers = OrderedDict(
+ (("cfg", lambda b: self._set_config(srsly.json_loads(b))),)
+ )
+ util.from_bytes(data, deserializers, [])
+ self.tokenizer = try_sudachi_import(self.split_mode)
+ return self
+
+ def to_disk(self, path, **kwargs):
+ path = util.ensure_path(path)
+ serializers = OrderedDict(
+ (("cfg", lambda p: srsly.write_json(p, self._get_config())),)
+ )
+ return util.to_disk(path, serializers, [])
+
+ def from_disk(self, path, **kwargs):
+ path = util.ensure_path(path)
+ serializers = OrderedDict(
+ (("cfg", lambda p: self._set_config(srsly.read_json(p))),)
+ )
+ util.from_disk(path, serializers, [])
+ self.tokenizer = try_sudachi_import(self.split_mode)
+
class JapaneseDefaults(Language.Defaults):
lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
lex_attr_getters[LANG] = lambda _text: "ja"
stop_words = STOP_WORDS
tag_map = TAG_MAP
+ syntax_iterators = SYNTAX_ITERATORS
writing_system = {"direction": "ltr", "has_case": False, "has_letters": False}
@classmethod
- def create_tokenizer(cls, nlp=None):
- return JapaneseTokenizer(cls, nlp)
+ def create_tokenizer(cls, nlp=None, config={}):
+ return JapaneseTokenizer(cls, nlp, config)
class Japanese(Language):
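
A hedged usage sketch for the new SudachiPy-based tokenizer, assuming sudachipy and sudachidict_core are installed; the default split mode is "A", and the `split_mode` setting travels with the tokenizer through to_bytes()/from_bytes() and to_disk()/from_disk():

```python
from spacy.lang.ja import Japanese

nlp = Japanese()  # split_mode defaults to None, which maps to SplitMode.A
doc = nlp("日本語の文章を解析します。")
# The tokenizer sets tag_, pos and lemma_ directly from the Sudachi output.
print([(t.text, t.tag_, t.pos_, t.lemma_) for t in doc])
```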
diff --git a/spacy/lang/ja/bunsetu.py b/spacy/lang/ja/bunsetu.py
new file mode 100644
index 000000000..e8c802246
--- /dev/null
+++ b/spacy/lang/ja/bunsetu.py
@@ -0,0 +1,176 @@
+POS_PHRASE_MAP = {
+ "NOUN": "NP",
+ "NUM": "NP",
+ "PRON": "NP",
+ "PROPN": "NP",
+ "VERB": "VP",
+ "ADJ": "ADJP",
+ "ADV": "ADVP",
+ "CCONJ": "CCONJP",
+}
+
+
+# return value: [(bunsetu_tokens, phrase_type={'NP', 'VP', 'ADJP', 'ADVP'}, phrase_tokens)]
+def yield_bunsetu(doc, debug=False):
+ bunsetu = []
+ bunsetu_may_end = False
+ phrase_type = None
+ phrase = None
+ prev = None
+ prev_tag = None
+ prev_dep = None
+ prev_head = None
+ for t in doc:
+ pos = t.pos_
+ pos_type = POS_PHRASE_MAP.get(pos, None)
+ tag = t.tag_
+ dep = t.dep_
+ head = t.head.i
+ if debug:
+ print(
+ t.i,
+ t.orth_,
+ pos,
+ pos_type,
+ dep,
+ head,
+ bunsetu_may_end,
+ phrase_type,
+ phrase,
+ bunsetu,
+ )
+
+ # DET is always an individual bunsetu
+ if pos == "DET":
+ if bunsetu:
+ yield bunsetu, phrase_type, phrase
+ yield [t], None, None
+ bunsetu = []
+ bunsetu_may_end = False
+ phrase_type = None
+ phrase = None
+
+ # PRON or Open PUNCT always splits bunsetu
+ elif tag == "補助記号-括弧開":
+ if bunsetu:
+ yield bunsetu, phrase_type, phrase
+ bunsetu = [t]
+ bunsetu_may_end = True
+ phrase_type = None
+ phrase = None
+
+ # bunsetu head not appeared
+ elif phrase_type is None:
+ if bunsetu and prev_tag == "補助記号-読点":
+ yield bunsetu, phrase_type, phrase
+ bunsetu = []
+ bunsetu_may_end = False
+ phrase_type = None
+ phrase = None
+ bunsetu.append(t)
+ if pos_type: # begin phrase
+ phrase = [t]
+ phrase_type = pos_type
+ if pos_type in {"ADVP", "CCONJP"}:
+ bunsetu_may_end = True
+
+ # entering new bunsetu
+ elif pos_type and (
+ pos_type != phrase_type # different phrase type arises
+ or bunsetu_may_end # same phrase type but bunsetu already ended
+ ):
+ # exceptional case: NOUN to VERB
+ if (
+ phrase_type == "NP"
+ and pos_type == "VP"
+ and prev_dep == "compound"
+ and prev_head == t.i
+ ):
+ bunsetu.append(t)
+ phrase_type = "VP"
+ phrase.append(t)
+ # exceptional case: VERB to NOUN
+ elif (
+ phrase_type == "VP"
+ and pos_type == "NP"
+ and (
+ prev_dep == "compound"
+ and prev_head == t.i
+ or dep == "compound"
+ and prev == head
+ or prev_dep == "nmod"
+ and prev_head == t.i
+ )
+ ):
+ bunsetu.append(t)
+ phrase_type = "NP"
+ phrase.append(t)
+ else:
+ yield bunsetu, phrase_type, phrase
+ bunsetu = [t]
+ bunsetu_may_end = False
+ phrase_type = pos_type
+ phrase = [t]
+
+ # NOUN bunsetu
+ elif phrase_type == "NP":
+ bunsetu.append(t)
+ if not bunsetu_may_end and (
+ (
+ (pos_type == "NP" or pos == "SYM")
+ and (prev_head == t.i or prev_head == head)
+ and prev_dep in {"compound", "nummod"}
+ )
+ or (
+ pos == "PART"
+ and (prev == head or prev_head == head)
+ and dep == "mark"
+ )
+ ):
+ phrase.append(t)
+ else:
+ bunsetu_may_end = True
+
+ # VERB bunsetu
+ elif phrase_type == "VP":
+ bunsetu.append(t)
+ if (
+ not bunsetu_may_end
+ and pos == "VERB"
+ and prev_head == t.i
+ and prev_dep == "compound"
+ ):
+ phrase.append(t)
+ else:
+ bunsetu_may_end = True
+
+ # ADJ bunsetu
+ elif phrase_type == "ADJP" and tag != "連体詞":
+ bunsetu.append(t)
+ if not bunsetu_may_end and (
+ (
+ pos == "NOUN"
+ and (prev_head == t.i or prev_head == head)
+ and prev_dep in {"amod", "compound"}
+ )
+ or (
+ pos == "PART"
+ and (prev == head or prev_head == head)
+ and dep == "mark"
+ )
+ ):
+ phrase.append(t)
+ else:
+ bunsetu_may_end = True
+
+ # other bunsetu
+ else:
+ bunsetu.append(t)
+
+ prev = t.i
+ prev_tag = t.tag_
+ prev_dep = t.dep_
+ prev_head = head
+
+ if bunsetu:
+ yield bunsetu, phrase_type, phrase
diff --git a/spacy/lang/ja/syntax_iterators.py b/spacy/lang/ja/syntax_iterators.py
new file mode 100644
index 000000000..bb0554cf9
--- /dev/null
+++ b/spacy/lang/ja/syntax_iterators.py
@@ -0,0 +1,54 @@
+from ...symbols import NOUN, PROPN, PRON, VERB
+
+# XXX this can probably be pruned a bit
+labels = [
+ "nsubj",
+ "nmod",
+ "dobj",
+ "nsubjpass",
+ "pcomp",
+ "pobj",
+ "obj",
+ "obl",
+ "dative",
+ "appos",
+ "attr",
+ "ROOT",
+]
+
+
+def noun_chunks(obj):
+ """
+ Detect base noun phrases from a dependency parse. Works on both Doc and Span.
+ """
+
+ doc = obj.doc # Ensure works on both Doc and Span.
+ np_deps = [doc.vocab.strings.add(label) for label in labels]
+ doc.vocab.strings.add("conj")
+ np_label = doc.vocab.strings.add("NP")
+ seen = set()
+ for i, word in enumerate(obj):
+ if word.pos not in (NOUN, PROPN, PRON):
+ continue
+ # Prevent nested chunks from being produced
+ if word.i in seen:
+ continue
+ if word.dep in np_deps:
+ unseen = [w.i for w in word.subtree if w.i not in seen]
+ if not unseen:
+ continue
+
+ # this takes care of particles etc.
+ seen.update(j.i for j in word.subtree)
+ # This avoids duplicating embedded clauses
+ seen.update(range(word.i + 1))
+
+ # if the head of this is a verb, mark that and rights seen
+ # Don't do the subtree as that can hide other phrases
+ if word.head.pos == VERB:
+ seen.add(word.head.i)
+ seen.update(w.i for w in word.head.rights)
+ yield unseen[0], word.i + 1, np_label
+
+
+SYNTAX_ITERATORS = {"noun_chunks": noun_chunks}
diff --git a/spacy/lang/ja/tag_bigram_map.py b/spacy/lang/ja/tag_bigram_map.py
new file mode 100644
index 000000000..9d15fc520
--- /dev/null
+++ b/spacy/lang/ja/tag_bigram_map.py
@@ -0,0 +1,28 @@
+from ...symbols import ADJ, AUX, NOUN, PART, VERB
+
+# mapping from tag bi-gram to pos of previous token
+TAG_BIGRAM_MAP = {
+ # This covers only a small part of AUX.
+ ("形容詞-非自立可能", "助詞-終助詞"): (AUX, None),
+ ("名詞-普通名詞-形状詞可能", "助動詞"): (ADJ, None),
+ # ("副詞", "名詞-普通名詞-形状詞可能"): (None, ADJ),
+ # This covers acl, advcl, obl and root, but has side effect for compound.
+ ("名詞-普通名詞-サ変可能", "動詞-非自立可能"): (VERB, AUX),
+ # This covers almost all of the deps
+ ("名詞-普通名詞-サ変形状詞可能", "動詞-非自立可能"): (VERB, AUX),
+ ("名詞-普通名詞-副詞可能", "動詞-非自立可能"): (None, VERB),
+ ("副詞", "動詞-非自立可能"): (None, VERB),
+ ("形容詞-一般", "動詞-非自立可能"): (None, VERB),
+ ("形容詞-非自立可能", "動詞-非自立可能"): (None, VERB),
+ ("接頭辞", "動詞-非自立可能"): (None, VERB),
+ ("助詞-係助詞", "動詞-非自立可能"): (None, VERB),
+ ("助詞-副助詞", "動詞-非自立可能"): (None, VERB),
+ ("助詞-格助詞", "動詞-非自立可能"): (None, VERB),
+ ("補助記号-読点", "動詞-非自立可能"): (None, VERB),
+ ("形容詞-一般", "接尾辞-名詞的-一般"): (None, PART),
+ ("助詞-格助詞", "形状詞-助動詞語幹"): (None, NOUN),
+ ("連体詞", "形状詞-助動詞語幹"): (None, NOUN),
+ ("動詞-一般", "助詞-副助詞"): (None, PART),
+ ("動詞-非自立可能", "助詞-副助詞"): (None, PART),
+ ("助動詞", "助詞-副助詞"): (None, PART),
+}
diff --git a/spacy/lang/ja/tag_map.py b/spacy/lang/ja/tag_map.py
index d922cd22b..c6de3831a 100644
--- a/spacy/lang/ja/tag_map.py
+++ b/spacy/lang/ja/tag_map.py
@@ -1,79 +1,68 @@
-from ...symbols import POS, PUNCT, INTJ, X, ADJ, AUX, ADP, PART, SCONJ, NOUN
-from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET, SPACE
+from ...symbols import POS, PUNCT, INTJ, ADJ, AUX, ADP, PART, SCONJ, NOUN
+from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET, SPACE, CCONJ
TAG_MAP = {
# Explanation of Unidic tags:
# https://www.gavo.t.u-tokyo.ac.jp/~mine/japanese/nlp+slp/UNIDIC_manual.pdf
- # Universal Dependencies Mapping:
+ # Universal Dependencies Mapping (some of the entries below have been updated for UD v2.6):
# http://universaldependencies.org/ja/overview/morphology.html
# http://universaldependencies.org/ja/pos/all.html
- "記号,一般,*,*": {
- POS: PUNCT
- }, # this includes characters used to represent sounds like ドレミ
- "記号,文字,*,*": {
- POS: PUNCT
- }, # this is for Greek and Latin characters used as sumbols, as in math
- "感動詞,フィラー,*,*": {POS: INTJ},
- "感動詞,一般,*,*": {POS: INTJ},
- # this is specifically for unicode full-width space
- "空白,*,*,*": {POS: X},
- # This is used when sequential half-width spaces are present
+ "記号-一般": {POS: NOUN}, # this includes characters used to represent sounds like ドレミ
+ "記号-文字": {
+ POS: NOUN
+ }, # this is for Greek and Latin characters that carry some meaning or are used as symbols, as in math
+ "感動詞-フィラー": {POS: INTJ},
+ "感動詞-一般": {POS: INTJ},
"空白": {POS: SPACE},
- "形状詞,一般,*,*": {POS: ADJ},
- "形状詞,タリ,*,*": {POS: ADJ},
- "形状詞,助動詞語幹,*,*": {POS: ADJ},
- "形容詞,一般,*,*": {POS: ADJ},
- "形容詞,非自立可能,*,*": {POS: AUX}, # XXX ADJ if alone, AUX otherwise
- "助詞,格助詞,*,*": {POS: ADP},
- "助詞,係助詞,*,*": {POS: ADP},
- "助詞,終助詞,*,*": {POS: PART},
- "助詞,準体助詞,*,*": {POS: SCONJ}, # の as in 走るのが速い
- "助詞,接続助詞,*,*": {POS: SCONJ}, # verb ending て
- "助詞,副助詞,*,*": {POS: PART}, # ばかり, つつ after a verb
- "助動詞,*,*,*": {POS: AUX},
- "接続詞,*,*,*": {POS: SCONJ}, # XXX: might need refinement
- "接頭辞,*,*,*": {POS: NOUN},
- "接尾辞,形状詞的,*,*": {POS: ADJ}, # がち, チック
- "接尾辞,形容詞的,*,*": {POS: ADJ}, # -らしい
- "接尾辞,動詞的,*,*": {POS: NOUN}, # -じみ
- "接尾辞,名詞的,サ変可能,*": {POS: NOUN}, # XXX see 名詞,普通名詞,サ変可能,*
- "接尾辞,名詞的,一般,*": {POS: NOUN},
- "接尾辞,名詞的,助数詞,*": {POS: NOUN},
- "接尾辞,名詞的,副詞可能,*": {POS: NOUN}, # -後, -過ぎ
- "代名詞,*,*,*": {POS: PRON},
- "動詞,一般,*,*": {POS: VERB},
- "動詞,非自立可能,*,*": {POS: VERB}, # XXX VERB if alone, AUX otherwise
- "動詞,非自立可能,*,*,AUX": {POS: AUX},
- "動詞,非自立可能,*,*,VERB": {POS: VERB},
- "副詞,*,*,*": {POS: ADV},
- "補助記号,AA,一般,*": {POS: SYM}, # text art
- "補助記号,AA,顔文字,*": {POS: SYM}, # kaomoji
- "補助記号,一般,*,*": {POS: SYM},
- "補助記号,括弧開,*,*": {POS: PUNCT}, # open bracket
- "補助記号,括弧閉,*,*": {POS: PUNCT}, # close bracket
- "補助記号,句点,*,*": {POS: PUNCT}, # period or other EOS marker
- "補助記号,読点,*,*": {POS: PUNCT}, # comma
- "名詞,固有名詞,一般,*": {POS: PROPN}, # general proper noun
- "名詞,固有名詞,人名,一般": {POS: PROPN}, # person's name
- "名詞,固有名詞,人名,姓": {POS: PROPN}, # surname
- "名詞,固有名詞,人名,名": {POS: PROPN}, # first name
- "名詞,固有名詞,地名,一般": {POS: PROPN}, # place name
- "名詞,固有名詞,地名,国": {POS: PROPN}, # country name
- "名詞,助動詞語幹,*,*": {POS: AUX},
- "名詞,数詞,*,*": {POS: NUM}, # includes Chinese numerals
- "名詞,普通名詞,サ変可能,*": {POS: NOUN}, # XXX: sometimes VERB in UDv2; suru-verb noun
- "名詞,普通名詞,サ変可能,*,NOUN": {POS: NOUN},
- "名詞,普通名詞,サ変可能,*,VERB": {POS: VERB},
- "名詞,普通名詞,サ変形状詞可能,*": {POS: NOUN}, # ex: 下手
- "名詞,普通名詞,一般,*": {POS: NOUN},
- "名詞,普通名詞,形状詞可能,*": {POS: NOUN}, # XXX: sometimes ADJ in UDv2
- "名詞,普通名詞,形状詞可能,*,NOUN": {POS: NOUN},
- "名詞,普通名詞,形状詞可能,*,ADJ": {POS: ADJ},
- "名詞,普通名詞,助数詞可能,*": {POS: NOUN}, # counter / unit
- "名詞,普通名詞,副詞可能,*": {POS: NOUN},
- "連体詞,*,*,*": {POS: ADJ}, # XXX this has exceptions based on literal token
- "連体詞,*,*,*,ADJ": {POS: ADJ},
- "連体詞,*,*,*,PRON": {POS: PRON},
- "連体詞,*,*,*,DET": {POS: DET},
+ "形状詞-一般": {POS: ADJ},
+ "形状詞-タリ": {POS: ADJ},
+ "形状詞-助動詞語幹": {POS: AUX},
+ "形容詞-一般": {POS: ADJ},
+ "形容詞-非自立可能": {POS: ADJ}, # XXX ADJ if alone, AUX otherwise
+ "助詞-格助詞": {POS: ADP},
+ "助詞-係助詞": {POS: ADP},
+ "助詞-終助詞": {POS: PART},
+ "助詞-準体助詞": {POS: SCONJ}, # の as in 走るのが速い
+ "助詞-接続助詞": {POS: SCONJ}, # verb ending て0
+ "助詞-副助詞": {POS: ADP}, # ばかり, つつ after a verb
+ "助動詞": {POS: AUX},
+ "接続詞": {POS: CCONJ}, # XXX: might need refinement
+ "接頭辞": {POS: NOUN},
+ "接尾辞-形状詞的": {POS: PART}, # がち, チック
+ "接尾辞-形容詞的": {POS: AUX}, # -らしい
+ "接尾辞-動詞的": {POS: PART}, # -じみ
+ "接尾辞-名詞的-サ変可能": {POS: NOUN}, # XXX see 名詞,普通名詞,サ変可能,*
+ "接尾辞-名詞的-一般": {POS: NOUN},
+ "接尾辞-名詞的-助数詞": {POS: NOUN},
+ "接尾辞-名詞的-副詞可能": {POS: NOUN}, # -後, -過ぎ
+ "代名詞": {POS: PRON},
+ "動詞-一般": {POS: VERB},
+ "動詞-非自立可能": {POS: AUX}, # XXX VERB if alone, AUX otherwise
+ "副詞": {POS: ADV},
+ "補助記号-AA-一般": {POS: SYM}, # text art
+ "補助記号-AA-顔文字": {POS: PUNCT}, # kaomoji
+ "補助記号-一般": {POS: SYM},
+ "補助記号-括弧開": {POS: PUNCT}, # open bracket
+ "補助記号-括弧閉": {POS: PUNCT}, # close bracket
+ "補助記号-句点": {POS: PUNCT}, # period or other EOS marker
+ "補助記号-読点": {POS: PUNCT}, # comma
+ "名詞-固有名詞-一般": {POS: PROPN}, # general proper noun
+ "名詞-固有名詞-人名-一般": {POS: PROPN}, # person's name
+ "名詞-固有名詞-人名-姓": {POS: PROPN}, # surname
+ "名詞-固有名詞-人名-名": {POS: PROPN}, # first name
+ "名詞-固有名詞-地名-一般": {POS: PROPN}, # place name
+ "名詞-固有名詞-地名-国": {POS: PROPN}, # country name
+ "名詞-助動詞語幹": {POS: AUX},
+ "名詞-数詞": {POS: NUM}, # includes Chinese numerals
+ "名詞-普通名詞-サ変可能": {POS: NOUN}, # XXX: sometimes VERB in UDv2; suru-verb noun
+ "名詞-普通名詞-サ変形状詞可能": {POS: NOUN},
+ "名詞-普通名詞-一般": {POS: NOUN},
+ "名詞-普通名詞-形状詞可能": {POS: NOUN}, # XXX: sometimes ADJ in UDv2
+ "名詞-普通名詞-助数詞可能": {POS: NOUN}, # counter / unit
+ "名詞-普通名詞-副詞可能": {POS: NOUN},
+ "連体詞": {POS: DET}, # XXX this has exceptions based on literal token
+ # GSD tags. These aren't in Unidic, but we need them for the GSD data.
+ "外国語": {POS: PROPN}, # Foreign words
+ "絵文字・記号等": {POS: SYM}, # emoji / kaomoji ^^;
}
diff --git a/spacy/lang/ja/tag_orth_map.py b/spacy/lang/ja/tag_orth_map.py
new file mode 100644
index 000000000..9d32cdea7
--- /dev/null
+++ b/spacy/lang/ja/tag_orth_map.py
@@ -0,0 +1,22 @@
+from ...symbols import DET, PART, PRON, SPACE, X
+
+# mapping from tag to orth-specific POS overrides
+TAG_ORTH_MAP = {
+ "空白": {" ": SPACE, " ": X},
+ "助詞-副助詞": {"たり": PART},
+ "連体詞": {
+ "あの": DET,
+ "かの": DET,
+ "この": DET,
+ "その": DET,
+ "どの": DET,
+ "彼の": DET,
+ "此の": DET,
+ "其の": DET,
+ "ある": PRON,
+ "こんな": PRON,
+ "そんな": PRON,
+ "どんな": PRON,
+ "あらゆる": PRON,
+ },
+}
diff --git a/spacy/lang/kn/examples.py b/spacy/lang/kn/examples.py
index d82630432..3e055752e 100644
--- a/spacy/lang/kn/examples.py
+++ b/spacy/lang/kn/examples.py
@@ -1,7 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-
"""
Example sentences to test spaCy and its language models.
diff --git a/spacy/lang/ml/__init__.py b/spacy/lang/ml/__init__.py
index d052ded1b..e92a7617f 100644
--- a/spacy/lang/ml/__init__.py
+++ b/spacy/lang/ml/__init__.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from .stop_words import STOP_WORDS
from ...language import Language
diff --git a/spacy/lang/ml/examples.py b/spacy/lang/ml/examples.py
index a2a0ed10e..9794eab29 100644
--- a/spacy/lang/ml/examples.py
+++ b/spacy/lang/ml/examples.py
@@ -1,7 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-
"""
Example sentences to test spaCy and its language models.
diff --git a/spacy/lang/ml/lex_attrs.py b/spacy/lang/ml/lex_attrs.py
index 468ad88f8..9ac19b6a7 100644
--- a/spacy/lang/ml/lex_attrs.py
+++ b/spacy/lang/ml/lex_attrs.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from ...attrs import LIKE_NUM
diff --git a/spacy/lang/ml/stop_words.py b/spacy/lang/ml/stop_words.py
index 8bd6a7e02..441e93586 100644
--- a/spacy/lang/ml/stop_words.py
+++ b/spacy/lang/ml/stop_words.py
@@ -1,7 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
-
STOP_WORDS = set(
"""
അത്
diff --git a/spacy/lang/nb/syntax_iterators.py b/spacy/lang/nb/syntax_iterators.py
index c09b0e840..ca711593f 100644
--- a/spacy/lang/nb/syntax_iterators.py
+++ b/spacy/lang/nb/syntax_iterators.py
@@ -24,17 +24,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings[label] for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -42,9 +40,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
diff --git a/spacy/lang/pl/lemmatizer.py b/spacy/lang/pl/lemmatizer.py
index cd555b9c2..b80a1a143 100644
--- a/spacy/lang/pl/lemmatizer.py
+++ b/spacy/lang/pl/lemmatizer.py
@@ -1,103 +1,75 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
from ...lemmatizer import Lemmatizer
from ...parts_of_speech import NAMES
class PolishLemmatizer(Lemmatizer):
- # This lemmatizer implements lookup lemmatization based on
- # the Morfeusz dictionary (morfeusz.sgjp.pl/en) by Institute of Computer Science PAS
- # It utilizes some prefix based improvements for
- # verb and adjectives lemmatization, as well as case-sensitive
- # lemmatization for nouns
- def __init__(self, lookups, *args, **kwargs):
- # this lemmatizer is lookup based, so it does not require an index, exceptionlist, or rules
- super().__init__(lookups)
- self.lemma_lookups = {}
- for tag in [
- "ADJ",
- "ADP",
- "ADV",
- "AUX",
- "NOUN",
- "NUM",
- "PART",
- "PRON",
- "VERB",
- "X",
- ]:
- self.lemma_lookups[tag] = self.lookups.get_table(
- "lemma_lookup_" + tag.lower(), {}
- )
- self.lemma_lookups["DET"] = self.lemma_lookups["X"]
- self.lemma_lookups["PROPN"] = self.lemma_lookups["NOUN"]
-
+ # This lemmatizer implements lookup lemmatization based on the Morfeusz
+ # dictionary (morfeusz.sgjp.pl/en) by Institute of Computer Science PAS.
+ # It utilizes some prefix based improvements for verb and adjectives
+ # lemmatization, as well as case-sensitive lemmatization for nouns.
def __call__(self, string, univ_pos, morphology=None):
if isinstance(univ_pos, int):
univ_pos = NAMES.get(univ_pos, "X")
univ_pos = univ_pos.upper()
+ lookup_pos = univ_pos.lower()
+ if univ_pos == "PROPN":
+ lookup_pos = "noun"
+ lookup_table = self.lookups.get_table("lemma_lookup_" + lookup_pos, {})
+
if univ_pos == "NOUN":
- return self.lemmatize_noun(string, morphology)
+ return self.lemmatize_noun(string, morphology, lookup_table)
if univ_pos != "PROPN":
string = string.lower()
if univ_pos == "ADJ":
- return self.lemmatize_adj(string, morphology)
+ return self.lemmatize_adj(string, morphology, lookup_table)
elif univ_pos == "VERB":
- return self.lemmatize_verb(string, morphology)
+ return self.lemmatize_verb(string, morphology, lookup_table)
- lemma_dict = self.lemma_lookups.get(univ_pos, {})
- return [lemma_dict.get(string, string.lower())]
+ return [lookup_table.get(string, string.lower())]
- def lemmatize_adj(self, string, morphology):
+ def lemmatize_adj(self, string, morphology, lookup_table):
# this method utilizes different procedures for adjectives
# with 'nie' and 'naj' prefixes
- lemma_dict = self.lemma_lookups["ADJ"]
-
if string[:3] == "nie":
search_string = string[3:]
if search_string[:3] == "naj":
naj_search_string = search_string[3:]
- if naj_search_string in lemma_dict:
- return [lemma_dict[naj_search_string]]
- if search_string in lemma_dict:
- return [lemma_dict[search_string]]
+ if naj_search_string in lookup_table:
+ return [lookup_table[naj_search_string]]
+ if search_string in lookup_table:
+ return [lookup_table[search_string]]
if string[:3] == "naj":
naj_search_string = string[3:]
- if naj_search_string in lemma_dict:
- return [lemma_dict[naj_search_string]]
+ if naj_search_string in lookup_table:
+ return [lookup_table[naj_search_string]]
- return [lemma_dict.get(string, string)]
+ return [lookup_table.get(string, string)]
- def lemmatize_verb(self, string, morphology):
+ def lemmatize_verb(self, string, morphology, lookup_table):
# this method utilizes a different procedure for verbs
# with 'nie' prefix
- lemma_dict = self.lemma_lookups["VERB"]
-
if string[:3] == "nie":
search_string = string[3:]
- if search_string in lemma_dict:
- return [lemma_dict[search_string]]
+ if search_string in lookup_table:
+ return [lookup_table[search_string]]
- return [lemma_dict.get(string, string)]
+ return [lookup_table.get(string, string)]
- def lemmatize_noun(self, string, morphology):
+ def lemmatize_noun(self, string, morphology, lookup_table):
# this method is case-sensitive, in order to work
# for incorrectly tagged proper names
- lemma_dict = self.lemma_lookups["NOUN"]
-
if string != string.lower():
- if string.lower() in lemma_dict:
- return [lemma_dict[string.lower()]]
- elif string in lemma_dict:
- return [lemma_dict[string]]
+ if string.lower() in lookup_table:
+ return [lookup_table[string.lower()]]
+ elif string in lookup_table:
+ return [lookup_table[string]]
return [string.lower()]
- return [lemma_dict.get(string, string)]
+ return [lookup_table.get(string, string)]
def lookup(self, string, orth=None):
return string.lower()
diff --git a/spacy/lang/sv/lex_attrs.py b/spacy/lang/sv/lex_attrs.py
index 24d06a97a..f8ada9e2e 100644
--- a/spacy/lang/sv/lex_attrs.py
+++ b/spacy/lang/sv/lex_attrs.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
from ...attrs import LIKE_NUM
diff --git a/spacy/lang/sv/syntax_iterators.py b/spacy/lang/sv/syntax_iterators.py
index ec92c08d3..c3de21921 100644
--- a/spacy/lang/sv/syntax_iterators.py
+++ b/spacy/lang/sv/syntax_iterators.py
@@ -25,17 +25,15 @@ def noun_chunks(doclike):
np_deps = [doc.vocab.strings[label] for label in labels]
conj = doc.vocab.strings.add("conj")
np_label = doc.vocab.strings.add("NP")
- seen = set()
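+ # prev_end tracks the right edge of the last yielded chunk, so nested or overlapping chunks are skipped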
+ prev_end = -1
for i, word in enumerate(doclike):
if word.pos not in (NOUN, PROPN, PRON):
continue
# Prevent nested chunks from being produced
- if word.i in seen:
+ if word.left_edge.i <= prev_end:
continue
if word.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
elif word.dep == conj:
head = word.head
@@ -43,9 +41,7 @@ def noun_chunks(doclike):
head = head.head
# If the head is an NP, and we're coordinated to it, we're an NP
if head.dep in np_deps:
- if any(w.i in seen for w in word.subtree):
- continue
- seen.update(j for j in range(word.left_edge.i, word.right_edge.i + 1))
+ prev_end = word.right_edge.i
yield word.left_edge.i, word.right_edge.i + 1, np_label
diff --git a/spacy/lang/ta/examples.py b/spacy/lang/ta/examples.py
index a53227220..c3c47e66e 100644
--- a/spacy/lang/ta/examples.py
+++ b/spacy/lang/ta/examples.py
@@ -14,4 +14,9 @@ sentences = [
"இந்த ஃபோனுடன் சுமார் ரூ.2,990 மதிப்புள்ள போட் ராக்கர்ஸ் நிறுவனத்தின் ஸ்போர்ட் புளூடூத் ஹெட்போன்ஸ் இலவசமாக வழங்கப்படவுள்ளது.",
"மட்டக்களப்பில் பல இடங்களில் வீட்டுத் திட்டங்களுக்கு இன்று அடிக்கல் நாட்டல்",
"ஐ போன்க்கு முகத்தை வைத்து அன்லாக் செய்யும் முறை மற்றும் விரலால் தொட்டு அன்லாக் செய்யும் முறையை வாட்ஸ் ஆப் நிறுவனம் இதற்கு முன் கண்டுபிடித்தது",
+ "இது ஒரு வாக்கியம்.",
+ "ஆப்பிள் நிறுவனம் யு.கே. தொடக்க நிறுவனத்தை ஒரு லட்சம் கோடிக்கு வாங்கப் பார்க்கிறது",
+ "தன்னாட்சி கார்கள் காப்பீட்டு பொறுப்பை உற்பத்தியாளரிடம் மாற்றுகின்றன",
+ "நடைபாதை விநியோக ரோபோக்களை தடை செய்வதை சான் பிரான்சிஸ்கோ கருதுகிறது",
+ "லண்டன் ஐக்கிய இராச்சியத்தில் ஒரு பெரிய நகரம்.",
]
diff --git a/spacy/lang/tokenizer_exceptions.py b/spacy/lang/tokenizer_exceptions.py
index 3bb299d6d..f732a9097 100644
--- a/spacy/lang/tokenizer_exceptions.py
+++ b/spacy/lang/tokenizer_exceptions.py
@@ -55,7 +55,8 @@ URL_PATTERN = (
# fmt: on
).strip()
-TOKEN_MATCH = re.compile("(?u)" + URL_PATTERN).match
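+# The URL pattern is now applied via URL_MATCH, which is checked after prefixes and suffixes are stripped, so TOKEN_MATCH is left unset here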
+TOKEN_MATCH = None
+URL_MATCH = re.compile("(?u)" + URL_PATTERN).match
BASE_EXCEPTIONS = {}
diff --git a/spacy/lang/zh/tag_map.py b/spacy/lang/zh/tag_map.py
index 1ff0827be..b0eeb391d 100644
--- a/spacy/lang/zh/tag_map.py
+++ b/spacy/lang/zh/tag_map.py
@@ -1,5 +1,5 @@
from ...symbols import POS, PUNCT, ADJ, SCONJ, CCONJ, NUM, DET, ADV, ADP, X
-from ...symbols import NOUN, PART, INTJ, PRON, VERB, SPACE
+from ...symbols import NOUN, PART, INTJ, PRON, VERB, SPACE, PROPN
# The Chinese part-of-speech tagger uses the OntoNotes 5 version of the Penn
# Treebank tag set. We also map the tags to the simpler Universal Dependencies
@@ -25,7 +25,7 @@ TAG_MAP = {
"URL": {POS: X},
"INF": {POS: X},
"NN": {POS: NOUN},
- "NR": {POS: NOUN},
+ "NR": {POS: PROPN},
"NT": {POS: NOUN},
"VA": {POS: VERB},
"VC": {POS: VERB},
diff --git a/spacy/language.py b/spacy/language.py
index 97bdd698c..94da63a1a 100644
--- a/spacy/language.py
+++ b/spacy/language.py
@@ -25,7 +25,7 @@ from .util import link_vectors_to_models, create_default_optimizer, registry
from .attrs import IS_STOP, LANG, NORM
from .lang.punctuation import TOKENIZER_PREFIXES, TOKENIZER_SUFFIXES
from .lang.punctuation import TOKENIZER_INFIXES
-from .lang.tokenizer_exceptions import TOKEN_MATCH
+from .lang.tokenizer_exceptions import TOKEN_MATCH, URL_MATCH
from .lang.norm_exceptions import BASE_NORMS
from .lang.tag_map import TAG_MAP
from .tokens import Doc
@@ -86,6 +86,7 @@ class BaseDefaults(object):
def create_tokenizer(cls, nlp=None):
rules = cls.tokenizer_exceptions
token_match = cls.token_match
+ url_match = cls.url_match
prefix_search = (
util.compile_prefix_regex(cls.prefixes).search if cls.prefixes else None
)
@@ -103,10 +104,12 @@ class BaseDefaults(object):
suffix_search=suffix_search,
infix_finditer=infix_finditer,
token_match=token_match,
+ url_match=url_match,
)
pipe_names = ["tagger", "parser", "ner"]
token_match = TOKEN_MATCH
+ url_match = URL_MATCH
prefixes = tuple(TOKENIZER_PREFIXES)
suffixes = tuple(TOKENIZER_SUFFIXES)
infixes = tuple(TOKENIZER_INFIXES)
@@ -951,9 +954,7 @@ class Language(object):
serializers["tokenizer"] = lambda p: self.tokenizer.to_disk(
p, exclude=["vocab"]
)
- serializers["meta.json"] = lambda p: p.open("w").write(
- srsly.json_dumps(self.meta)
- )
+ serializers["meta.json"] = lambda p: srsly.write_json(p, self.meta)
serializers["config.cfg"] = lambda p: self.config.to_disk(p)
for name, proc in self.pipeline:
if not hasattr(proc, "name"):
@@ -977,17 +978,30 @@ class Language(object):
DOCS: https://spacy.io/api/language#from_disk
"""
+
+ def deserialize_meta(path):
+ if path.exists():
+ data = srsly.read_json(path)
+ self.meta.update(data)
+ # self.meta always overrides meta["vectors"] with the metadata
+ # from self.vocab.vectors, so set the name directly
+ self.vocab.vectors.name = data.get("vectors", {}).get("name")
+
+ def deserialize_vocab(path):
+ if path.exists():
+ self.vocab.from_disk(path)
+ _fix_pretrained_vectors_name(self)
+
if disable is not None:
warnings.warn(Warnings.W014, DeprecationWarning)
exclude = disable
path = util.ensure_path(path)
+
deserializers = {}
if Path(path / "config.cfg").exists():
deserializers["config.cfg"] = lambda p: self.config.from_disk(p)
- deserializers["meta.json"] = lambda p: self.meta.update(srsly.read_json(p))
- deserializers["vocab"] = lambda p: self.vocab.from_disk(
- p
- ) and _fix_pretrained_vectors_name(self)
+ deserializers["meta.json"] = deserialize_meta
+ deserializers["vocab"] = deserialize_vocab
deserializers["tokenizer"] = lambda p: self.tokenizer.from_disk(
p, exclude=["vocab"]
)
@@ -1041,15 +1055,25 @@ class Language(object):
DOCS: https://spacy.io/api/language#from_bytes
"""
+
+ def deserialize_meta(b):
+ data = srsly.json_loads(b)
+ self.meta.update(data)
+ # self.meta always overrides meta["vectors"] with the metadata
+ # from self.vocab.vectors, so set the name directly
+ self.vocab.vectors.name = data.get("vectors", {}).get("name")
+
+ def deserialize_vocab(b):
+ self.vocab.from_bytes(b)
+ _fix_pretrained_vectors_name(self)
+
if disable is not None:
warnings.warn(Warnings.W014, DeprecationWarning)
exclude = disable
deserializers = {}
deserializers["config.cfg"] = lambda b: self.config.from_bytes(b)
- deserializers["meta.json"] = lambda b: self.meta.update(srsly.json_loads(b))
- deserializers["vocab"] = lambda b: self.vocab.from_bytes(
- b
- ) and _fix_pretrained_vectors_name(self)
+ deserializers["meta.json"] = deserialize_meta
+ deserializers["vocab"] = deserialize_vocab
deserializers["tokenizer"] = lambda b: self.tokenizer.from_bytes(
b, exclude=["vocab"]
)
@@ -1132,7 +1156,7 @@ class component(object):
def _fix_pretrained_vectors_name(nlp):
# TODO: Replace this once we handle vectors consistently as static
# data
- if "vectors" in nlp.meta and nlp.meta["vectors"].get("name"):
+ if "vectors" in nlp.meta and "name" in nlp.meta["vectors"]:
nlp.vocab.vectors.name = nlp.meta["vectors"]["name"]
elif not nlp.vocab.vectors.size:
nlp.vocab.vectors.name = None
@@ -1142,7 +1166,7 @@ def _fix_pretrained_vectors_name(nlp):
else:
raise ValueError(Errors.E092)
if nlp.vocab.vectors.size != 0:
- link_vectors_to_models(nlp.vocab, skip_rank=True)
+ link_vectors_to_models(nlp.vocab)
for name, proc in nlp.pipeline:
if not hasattr(proc, "cfg"):
continue
diff --git a/spacy/lexeme.pyx b/spacy/lexeme.pyx
index fc3b30a6d..581a75f32 100644
--- a/spacy/lexeme.pyx
+++ b/spacy/lexeme.pyx
@@ -9,7 +9,6 @@ import numpy
from thinc.api import get_array_module
import warnings
-from libc.stdint cimport UINT64_MAX
from .typedefs cimport attr_t, flags_t
from .attrs cimport IS_ALPHA, IS_ASCII, IS_DIGIT, IS_LOWER, IS_PUNCT, IS_SPACE
from .attrs cimport IS_TITLE, IS_UPPER, LIKE_URL, LIKE_NUM, LIKE_EMAIL, IS_STOP
@@ -20,7 +19,7 @@ from .attrs import intify_attrs
from .errors import Errors, Warnings
-OOV_RANK = UINT64_MAX
+OOV_RANK = 0xffffffffffffffff # UINT64_MAX
memset(&EMPTY_LEXEME, 0, sizeof(LexemeC))
EMPTY_LEXEME.id = OOV_RANK
diff --git a/spacy/matcher/phrasematcher.pyx b/spacy/matcher/phrasematcher.pyx
index aa4534296..1bb06c0a3 100644
--- a/spacy/matcher/phrasematcher.pyx
+++ b/spacy/matcher/phrasematcher.pyx
@@ -328,7 +328,7 @@ def unpickle_matcher(vocab, docs, callbacks, attr):
matcher = PhraseMatcher(vocab, attr=attr)
for key, specs in docs.items():
callback = callbacks.get(key, None)
- matcher.add(key, callback, *specs)
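+ # use the new add() signature: patterns as a list, callback passed via on_match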
+ matcher.add(key, specs, on_match=callback)
return matcher
diff --git a/spacy/morphology.pyx b/spacy/morphology.pyx
index 3e369fb3e..e9d640f81 100644
--- a/spacy/morphology.pyx
+++ b/spacy/morphology.pyx
@@ -66,7 +66,10 @@ cdef class Morphology:
self.tags = PreshMap()
# Add special space symbol. We prefix with underscore, to make sure it
# always sorts to the end.
- space_attrs = tag_map.get('SP', {POS: SPACE})
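+ # prefer an existing '_SP' entry so custom space attrs in the tag map are kept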
+ if '_SP' in tag_map:
+ space_attrs = tag_map.get('_SP')
+ else:
+ space_attrs = tag_map.get('SP', {POS: SPACE})
if '_SP' not in tag_map:
self.strings.add('_SP')
tag_map = dict(tag_map)
diff --git a/spacy/pipeline/pipes.pyx b/spacy/pipeline/pipes.pyx
index 75628ce3c..4e04b96b5 100644
--- a/spacy/pipeline/pipes.pyx
+++ b/spacy/pipeline/pipes.pyx
@@ -216,7 +216,7 @@ class Pipe(object):
serialize = {}
serialize["cfg"] = lambda p: srsly.write_json(p, self.cfg)
serialize["vocab"] = lambda p: self.vocab.to_disk(p)
- serialize["model"] = lambda p: p.open("wb").write(self.model.to_bytes())
+ serialize["model"] = lambda p: self.model.to_disk(p)
exclude = util.get_serialization_exclude(serialize, exclude, kwargs)
util.to_disk(path, serialize, exclude)
@@ -384,6 +384,8 @@ class Tagger(Pipe):
lemma_tables = ["lemma_rules", "lemma_index", "lemma_exc", "lemma_lookup"]
if not any(table in self.vocab.lookups for table in lemma_tables):
warnings.warn(Warnings.W022)
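+ # also warn if the vocab has no lexeme_norm table (W033)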
+ if len(self.vocab.lookups.get_table("lexeme_norm", {})) == 0:
+ warnings.warn(Warnings.W033.format(model="part-of-speech tagger"))
orig_tag_map = dict(self.vocab.morphology.tag_map)
new_tag_map = {}
for example in get_examples():
@@ -395,6 +397,8 @@ class Tagger(Pipe):
cdef Vocab vocab = self.vocab
if new_tag_map:
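+ # keep the internal '_SP' space tag when rebuilding the tag map from the training data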
+ if "_SP" in orig_tag_map:
+ new_tag_map["_SP"] = orig_tag_map["_SP"]
vocab.morphology = Morphology(vocab.strings, new_tag_map,
vocab.morphology.lemmatizer,
exc=vocab.morphology.exc)
@@ -485,7 +489,7 @@ class Tagger(Pipe):
serialize = {
"vocab": lambda p: self.vocab.to_disk(p),
"tag_map": lambda p: srsly.write_msgpack(p, tag_map),
- "model": lambda p: p.open("wb").write(self.model.to_bytes()),
+ "model": lambda p: self.model.to_disk(p),
"cfg": lambda p: srsly.write_json(p, self.cfg),
}
exclude = util.get_serialization_exclude(serialize, exclude, kwargs)
@@ -1129,6 +1133,8 @@ class EntityLinker(Pipe):
raise ValueError(Errors.E990.format(type=type(self.kb)))
self.cfg = dict(cfg)
self.distance = CosineDistance(normalize=False)
+ # how many neighbouring sentences to take into account
+ self.n_sents = cfg.get("n_sents", 0)
def require_kb(self):
# Raise an error if the knowledge base is not initialized.
@@ -1161,6 +1167,9 @@ class EntityLinker(Pipe):
for doc, gold in zip(docs, golds):
ents_by_offset = dict()
+
+ sentences = [s for s in doc.sents]
+
for ent in doc.ents:
ents_by_offset[(ent.start_char, ent.end_char)] = ent
@@ -1173,20 +1182,39 @@ class EntityLinker(Pipe):
# the gold annotations should link to proper entities - if this fails, the dataset is likely corrupt
if not (start, end) in ents_by_offset:
raise RuntimeError(Errors.E188)
+
ent = ents_by_offset[(start, end)]
for kb_id, value in kb_dict.items():
# Currently only training on the positive instances - we assume there is at least 1 per doc/gold
if value:
try:
- sentence_docs.append(ent.sent.as_doc())
+ # find the sentence in the list of sentences.
+ sent_index = sentences.index(ent.sent)
+
except AttributeError:
# Catch the exception when ent.sent is None and provide a user-friendly warning
raise RuntimeError(Errors.E030)
+
+ # get n previous sentences, if there are any
+ start_sentence = max(0, sent_index - self.n_sents)
+
+ # get up to n following sentences, or as many as are available
+ end_sentence = min(len(sentences) - 1, sent_index + self.n_sents)
+
+ # get token positions
+ start_token = sentences[start_sentence].start
+ end_token = sentences[end_sentence].end
+
+ # append that span as a Doc to the training contexts
+ sent_doc = doc[start_token:end_token].as_doc()
+ sentence_docs.append(sent_doc)
+
set_dropout_rate(self.model, drop)
sentence_encodings, bp_context = self.model.begin_update(sentence_docs)
loss, d_scores = self.get_similarity_loss(scores=sentence_encodings, golds=golds)
bp_context(d_scores)
+
if sgd is not None:
self.model.finish_update(sgd)
@@ -1268,68 +1296,78 @@ class EntityLinker(Pipe):
docs = [docs]
for i, doc in enumerate(docs):
+ sentences = [s for s in doc.sents]
+
if len(doc) > 0:
# Looping through each sentence and each entity
# This may go wrong if there are entities across sentences - which shouldn't happen normally.
- for sent in doc.sents:
- sent_doc = sent.as_doc()
- # currently, the context is the same for each entity in a sentence (should be refined)
- sentence_encoding = self.model.predict([sent_doc])[0]
- xp = get_array_module(sentence_encoding)
- sentence_encoding_t = sentence_encoding.T
- sentence_norm = xp.linalg.norm(sentence_encoding_t)
+ for sent_index, sent in enumerate(sentences):
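+ # only build a context encoding for sentences that contain entities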
+ if sent.ents:
+ # get the n_sents neighbouring sentences, clipped to the length of the document
+ start_sentence = max(0, sent_index - self.n_sents)
+ end_sentence = min(len(sentences) - 1, sent_index + self.n_sents)
- for ent in sent_doc.ents:
- entity_count += 1
+ start_token = sentences[start_sentence].start
+ end_token = sentences[end_sentence].end
- to_discard = self.cfg.get("labels_discard", [])
- if to_discard and ent.label_ in to_discard:
- # ignoring this entity - setting to NIL
- final_kb_ids.append(self.NIL)
- final_tensors.append(sentence_encoding)
+ sent_doc = doc[start_token:end_token].as_doc()
+ # currently, the context is the same for each entity in a sentence (should be refined)
+ sentence_encoding = self.model.predict([sent_doc])[0]
+ xp = get_array_module(sentence_encoding)
+ sentence_encoding_t = sentence_encoding.T
+ sentence_norm = xp.linalg.norm(sentence_encoding_t)
- else:
- candidates = self.kb.get_candidates(ent.text)
- if not candidates:
- # no prediction possible for this entity - setting to NIL
+ for ent in sent.ents:
+ entity_count += 1
+
+ to_discard = self.cfg.get("labels_discard", [])
+ if to_discard and ent.label_ in to_discard:
+ # ignoring this entity - setting to NIL
final_kb_ids.append(self.NIL)
final_tensors.append(sentence_encoding)
- elif len(candidates) == 1:
- # shortcut for efficiency reasons: take the 1 candidate
-
- # TODO: thresholding
- final_kb_ids.append(candidates[0].entity_)
- final_tensors.append(sentence_encoding)
-
else:
- random.shuffle(candidates)
+ candidates = self.kb.get_candidates(ent.text)
+ if not candidates:
+ # no prediction possible for this entity - setting to NIL
+ final_kb_ids.append(self.NIL)
+ final_tensors.append(sentence_encoding)
- # this will set all prior probabilities to 0 if they should be excluded from the model
- prior_probs = xp.asarray([c.prior_prob for c in candidates])
- if not self.cfg.get("incl_prior", True):
- prior_probs = xp.asarray([0.0 for c in candidates])
- scores = prior_probs
+ elif len(candidates) == 1:
+ # shortcut for efficiency reasons: take the 1 candidate
- # add in similarity from the context
- if self.cfg.get("incl_context", True):
- entity_encodings = xp.asarray([c.entity_vector for c in candidates])
- entity_norm = xp.linalg.norm(entity_encodings, axis=1)
+ # TODO: thresholding
+ final_kb_ids.append(candidates[0].entity_)
+ final_tensors.append(sentence_encoding)
- if len(entity_encodings) != len(prior_probs):
- raise RuntimeError(Errors.E147.format(method="predict", msg="vectors not of equal length"))
+ else:
+ random.shuffle(candidates)
- # cosine similarity
- sims = xp.dot(entity_encodings, sentence_encoding_t) / (sentence_norm * entity_norm)
- if sims.shape != prior_probs.shape:
- raise ValueError(Errors.E161)
- scores = prior_probs + sims - (prior_probs*sims)
+ # this will set all prior probabilities to 0 if they should be excluded from the model
+ prior_probs = xp.asarray([c.prior_prob for c in candidates])
+ if not self.cfg.get("incl_prior", True):
+ prior_probs = xp.asarray([0.0 for c in candidates])
+ scores = prior_probs
- # TODO: thresholding
- best_index = scores.argmax().item()
- best_candidate = candidates[best_index]
- final_kb_ids.append(best_candidate.entity_)
- final_tensors.append(sentence_encoding)
+ # add in similarity from the context
+ if self.cfg.get("incl_context", True):
+ entity_encodings = xp.asarray([c.entity_vector for c in candidates])
+ entity_norm = xp.linalg.norm(entity_encodings, axis=1)
+
+ if len(entity_encodings) != len(prior_probs):
+ raise RuntimeError(Errors.E147.format(method="predict", msg="vectors not of equal length"))
+
+ # cosine similarity
+ sims = xp.dot(entity_encodings, sentence_encoding_t) / (sentence_norm * entity_norm)
+ if sims.shape != prior_probs.shape:
+ raise ValueError(Errors.E161)
+ scores = prior_probs + sims - (prior_probs*sims)
+
+ # TODO: thresholding
+ best_index = scores.argmax().item()
+ best_candidate = candidates[best_index]
+ final_kb_ids.append(best_candidate.entity_)
+ final_tensors.append(sentence_encoding)
if not (len(final_tensors) == len(final_kb_ids) == entity_count):
raise RuntimeError(Errors.E147.format(method="predict", msg="result variables not of equal length"))
@@ -1355,7 +1393,7 @@ class EntityLinker(Pipe):
serialize["cfg"] = lambda p: srsly.write_json(p, self.cfg)
serialize["vocab"] = lambda p: self.vocab.to_disk(p)
serialize["kb"] = lambda p: self.kb.dump(p)
- serialize["model"] = lambda p: p.open("wb").write(self.model.to_bytes())
+ serialize["model"] = lambda p: self.model.to_disk(p)
exclude = util.get_serialization_exclude(serialize, exclude, kwargs)
util.to_disk(path, serialize, exclude)
diff --git a/spacy/syntax/nn_parser.pyx b/spacy/syntax/nn_parser.pyx
index 7bd9562e2..1dcb92016 100644
--- a/spacy/syntax/nn_parser.pyx
+++ b/spacy/syntax/nn_parser.pyx
@@ -1,6 +1,7 @@
# cython: infer_types=True, cdivision=True, boundscheck=False
cimport cython.parallel
cimport numpy as np
+from itertools import islice
from cpython.ref cimport PyObject, Py_XDECREF
from cpython.exc cimport PyErr_CheckSignals, PyErr_SetFromErrno
from libc.math cimport exp
@@ -607,6 +608,8 @@ cdef class Parser:
def begin_training(self, get_examples, pipeline=None, sgd=None, **kwargs):
self.cfg.update(kwargs)
+ if len(self.vocab.lookups.get_table("lexeme_norm", {})) == 0:
+ warnings.warn(Warnings.W033.format(model="parser or NER"))
if not hasattr(get_examples, '__call__'):
gold_tuples = get_examples
get_examples = lambda: gold_tuples
diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py
index d75db26b6..22ff2ce26 100644
--- a/spacy/tests/conftest.py
+++ b/spacy/tests/conftest.py
@@ -137,7 +137,7 @@ def it_tokenizer():
@pytest.fixture(scope="session")
def ja_tokenizer():
- pytest.importorskip("fugashi")
+ pytest.importorskip("sudachipy")
return get_lang_class("ja").Defaults.create_tokenizer()
diff --git a/spacy/tests/doc/test_retokenize_merge.py b/spacy/tests/doc/test_retokenize_merge.py
index 5e564d1f2..e941b48ed 100644
--- a/spacy/tests/doc/test_retokenize_merge.py
+++ b/spacy/tests/doc/test_retokenize_merge.py
@@ -429,3 +429,10 @@ def test_retokenize_skip_duplicates(en_vocab):
retokenizer.merge(doc[0:2])
assert len(doc) == 2
assert doc[0].text == "hello world"
+
+
+def test_retokenize_disallow_zero_length(en_vocab):
+ doc = Doc(en_vocab, words=["hello", "world", "!"])
+ with pytest.raises(ValueError):
+ with doc.retokenize() as retokenizer:
+ retokenizer.merge(doc[1:1])
diff --git a/spacy/tests/lang/de/test_noun_chunks.py b/spacy/tests/lang/de/test_noun_chunks.py
index 8d76ddd79..ff9f8d5e5 100644
--- a/spacy/tests/lang/de/test_noun_chunks.py
+++ b/spacy/tests/lang/de/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/el/test_noun_chunks.py b/spacy/tests/lang/el/test_noun_chunks.py
index 4f24865d0..38e72b0b2 100644
--- a/spacy/tests/lang/el/test_noun_chunks.py
+++ b/spacy/tests/lang/el/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/en/test_exceptions.py b/spacy/tests/lang/en/test_exceptions.py
index ce0dac50b..f72dfbf25 100644
--- a/spacy/tests/lang/en/test_exceptions.py
+++ b/spacy/tests/lang/en/test_exceptions.py
@@ -43,7 +43,7 @@ def test_en_tokenizer_doesnt_split_apos_exc(en_tokenizer, text):
assert tokens[0].text == text
-@pytest.mark.parametrize("text", ["we'll", "You'll", "there'll"])
+@pytest.mark.parametrize("text", ["we'll", "You'll", "there'll", "this'll", "those'll"])
def test_en_tokenizer_handles_ll_contraction(en_tokenizer, text):
tokens = en_tokenizer(text)
assert len(tokens) == 2
diff --git a/spacy/tests/lang/es/test_noun_chunks.py b/spacy/tests/lang/es/test_noun_chunks.py
index 66bbd8c3a..a7ec4e562 100644
--- a/spacy/tests/lang/es/test_noun_chunks.py
+++ b/spacy/tests/lang/es/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/fa/test_noun_chunks.py b/spacy/tests/lang/fa/test_noun_chunks.py
index a98aae061..767e91f6b 100644
--- a/spacy/tests/lang/fa/test_noun_chunks.py
+++ b/spacy/tests/lang/fa/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/fr/test_noun_chunks.py b/spacy/tests/lang/fr/test_noun_chunks.py
index ea93a5a35..5fd6897f7 100644
--- a/spacy/tests/lang/fr/test_noun_chunks.py
+++ b/spacy/tests/lang/fr/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/gu/test_text.py b/spacy/tests/lang/gu/test_text.py
index aa8d442a2..2d251166f 100644
--- a/spacy/tests/lang/gu/test_text.py
+++ b/spacy/tests/lang/gu/test_text.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/hy/test_text.py b/spacy/tests/lang/hy/test_text.py
index cbdb77e4e..ac0f1e128 100644
--- a/spacy/tests/lang/hy/test_text.py
+++ b/spacy/tests/lang/hy/test_text.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
import pytest
from spacy.lang.hy.lex_attrs import like_num
diff --git a/spacy/tests/lang/hy/test_tokenizer.py b/spacy/tests/lang/hy/test_tokenizer.py
index 3eeb8b54e..e9efb224a 100644
--- a/spacy/tests/lang/hy/test_tokenizer.py
+++ b/spacy/tests/lang/hy/test_tokenizer.py
@@ -1,6 +1,3 @@
-# coding: utf8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/id/test_noun_chunks.py b/spacy/tests/lang/id/test_noun_chunks.py
index add76f9b9..445643933 100644
--- a/spacy/tests/lang/id/test_noun_chunks.py
+++ b/spacy/tests/lang/id/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/ja/test_lemmatization.py b/spacy/tests/lang/ja/test_lemmatization.py
index 4cb3110b3..6041611e6 100644
--- a/spacy/tests/lang/ja/test_lemmatization.py
+++ b/spacy/tests/lang/ja/test_lemmatization.py
@@ -3,7 +3,7 @@ import pytest
@pytest.mark.parametrize(
"word,lemma",
- [("新しく", "新しい"), ("赤く", "赤い"), ("すごく", "凄い"), ("いただきました", "頂く"), ("なった", "成る")],
+ [("新しく", "新しい"), ("赤く", "赤い"), ("すごく", "すごい"), ("いただきました", "いただく"), ("なった", "なる")],
)
def test_ja_lemmatizer_assigns(ja_tokenizer, word, lemma):
test_lemma = ja_tokenizer(word)[0].lemma_
diff --git a/spacy/tests/lang/ja/test_serialize.py b/spacy/tests/lang/ja/test_serialize.py
new file mode 100644
index 000000000..4d4174b03
--- /dev/null
+++ b/spacy/tests/lang/ja/test_serialize.py
@@ -0,0 +1,33 @@
+from spacy.lang.ja import Japanese
+from ...util import make_tempdir
+
+
+def test_ja_tokenizer_serialize(ja_tokenizer):
+ tokenizer_bytes = ja_tokenizer.to_bytes()
+ nlp = Japanese()
+ nlp.tokenizer.from_bytes(tokenizer_bytes)
+ assert tokenizer_bytes == nlp.tokenizer.to_bytes()
+ assert nlp.tokenizer.split_mode is None
+
+ with make_tempdir() as d:
+ file_path = d / "tokenizer"
+ ja_tokenizer.to_disk(file_path)
+ nlp = Japanese()
+ nlp.tokenizer.from_disk(file_path)
+ assert tokenizer_bytes == nlp.tokenizer.to_bytes()
+ assert nlp.tokenizer.split_mode is None
+
+ # split mode is (de)serialized correctly
+ nlp = Japanese(meta={"tokenizer": {"config": {"split_mode": "B"}}})
+ nlp_r = Japanese()
+ nlp_bytes = nlp.to_bytes()
+ nlp_r.from_bytes(nlp_bytes)
+ assert nlp_bytes == nlp_r.to_bytes()
+ assert nlp_r.tokenizer.split_mode == "B"
+
+ with make_tempdir() as d:
+ nlp.to_disk(d)
+ nlp_r = Japanese()
+ nlp_r.from_disk(d)
+ assert nlp_bytes == nlp_r.to_bytes()
+ assert nlp_r.tokenizer.split_mode == "B"
diff --git a/spacy/tests/lang/ja/test_tokenizer.py b/spacy/tests/lang/ja/test_tokenizer.py
index 481f346bb..f76a9067a 100644
--- a/spacy/tests/lang/ja/test_tokenizer.py
+++ b/spacy/tests/lang/ja/test_tokenizer.py
@@ -1,5 +1,7 @@
import pytest
+from ...tokenizer.test_naughty_strings import NAUGHTY_STRINGS
+from spacy.lang.ja import Japanese
# fmt: off
TOKENIZER_TESTS = [
@@ -11,20 +13,25 @@ TOKENIZER_TESTS = [
]
TAG_TESTS = [
- ("日本語だよ", ['名詞,固有名詞,地名,国', '名詞,普通名詞,一般,*', '助動詞,*,*,*', '助詞,終助詞,*,*']),
- ("東京タワーの近くに住んでいます。", ['名詞,固有名詞,地名,一般', '名詞,普通名詞,一般,*', '助詞,格助詞,*,*', '名詞,普通名詞,副詞可能,*', '助詞,格助詞,*,*', '動詞,一般,*,*', '助詞,接続助詞,*,*', '動詞,非自立可能,*,*', '助動詞,*,*,*', '補助記号,句点,*,*']),
- ("吾輩は猫である。", ['代名詞,*,*,*', '助詞,係助詞,*,*', '名詞,普通名詞,一般,*', '助動詞,*,*,*', '動詞,非自立可能,*,*', '補助記号,句点,*,*']),
- ("月に代わって、お仕置きよ!", ['名詞,普通名詞,助数詞可能,*', '助詞,格助詞,*,*', '動詞,一般,*,*', '助詞,接続助詞,*,*', '補助記号,読点,*,*', '接頭辞,*,*,*', '名詞,普通名詞,一般,*', '助詞,終助詞,*,*', '補助記号,句点,*,*']),
- ("すもももももももものうち", ['名詞,普通名詞,一般,*', '助詞,係助詞,*,*', '名詞,普通名詞,一般,*', '助詞,係助詞,*,*', '名詞,普通名詞,一般,*', '助詞,格助詞,*,*', '名詞,普通名詞,副詞可能,*'])
+ ("日本語だよ", ['名詞-固有名詞-地名-国', '名詞-普通名詞-一般', '助動詞', '助詞-終助詞']),
+ ("東京タワーの近くに住んでいます。", ['名詞-固有名詞-地名-一般', '名詞-普通名詞-一般', '助詞-格助詞', '名詞-普通名詞-副詞可能', '助詞-格助詞', '動詞-一般', '助詞-接続助詞', '動詞-非自立可能', '助動詞', '補助記号-句点']),
+ ("吾輩は猫である。", ['代名詞', '助詞-係助詞', '名詞-普通名詞-一般', '助動詞', '動詞-非自立可能', '補助記号-句点']),
+ ("月に代わって、お仕置きよ!", ['名詞-普通名詞-助数詞可能', '助詞-格助詞', '動詞-一般', '助詞-接続助詞', '補助記号-読点', '接頭辞', '名詞-普通名詞-一般', '助詞-終助詞', '補助記号-句点']),
+ ("すもももももももものうち", ['名詞-普通名詞-一般', '助詞-係助詞', '名詞-普通名詞-一般', '助詞-係助詞', '名詞-普通名詞-一般', '助詞-格助詞', '名詞-普通名詞-副詞可能'])
]
POS_TESTS = [
- ('日本語だよ', ['PROPN', 'NOUN', 'AUX', 'PART']),
+ ('日本語だよ', ['fish', 'NOUN', 'AUX', 'PART']),
('東京タワーの近くに住んでいます。', ['PROPN', 'NOUN', 'ADP', 'NOUN', 'ADP', 'VERB', 'SCONJ', 'VERB', 'AUX', 'PUNCT']),
('吾輩は猫である。', ['PRON', 'ADP', 'NOUN', 'AUX', 'VERB', 'PUNCT']),
('月に代わって、お仕置きよ!', ['NOUN', 'ADP', 'VERB', 'SCONJ', 'PUNCT', 'NOUN', 'NOUN', 'PART', 'PUNCT']),
('すもももももももものうち', ['NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN', 'ADP', 'NOUN'])
]
+
+SENTENCE_TESTS = [
+ ("あれ。これ。", ["あれ。", "これ。"]),
+ ("「伝染るんです。」という漫画があります。", ["「伝染るんです。」という漫画があります。"]),
+]
# fmt: on
@@ -40,14 +47,56 @@ def test_ja_tokenizer_tags(ja_tokenizer, text, expected_tags):
assert tags == expected_tags
+# XXX This isn't working? Always passes
@pytest.mark.parametrize("text,expected_pos", POS_TESTS)
def test_ja_tokenizer_pos(ja_tokenizer, text, expected_pos):
pos = [token.pos_ for token in ja_tokenizer(text)]
assert pos == expected_pos
-def test_extra_spaces(ja_tokenizer):
+@pytest.mark.skip(reason="sentence segmentation in tokenizer is buggy")
+@pytest.mark.parametrize("text,expected_sents", SENTENCE_TESTS)
+def test_ja_tokenizer_sents(ja_tokenizer, text, expected_sents):
+ sents = [str(sent) for sent in ja_tokenizer(text).sents]
+ assert sents == expected_sents
+
+
+def test_ja_tokenizer_extra_spaces(ja_tokenizer):
# note: three spaces after "I"
tokens = ja_tokenizer("I like cheese.")
- assert tokens[1].orth_ == " "
- assert tokens[2].orth_ == " "
+ assert tokens[1].orth_ == " "
+
+
+@pytest.mark.parametrize("text", NAUGHTY_STRINGS)
+def test_ja_tokenizer_naughty_strings(ja_tokenizer, text):
+ tokens = ja_tokenizer(text)
+ assert tokens.text_with_ws == text
+
+
+@pytest.mark.parametrize(
+ "text,len_a,len_b,len_c",
+ [
+ ("選挙管理委員会", 4, 3, 1),
+ ("客室乗務員", 3, 2, 1),
+ ("労働者協同組合", 4, 3, 1),
+ ("機能性食品", 3, 2, 1),
+ ],
+)
+def test_ja_tokenizer_split_modes(ja_tokenizer, text, len_a, len_b, len_c):
+ nlp_a = Japanese(meta={"tokenizer": {"config": {"split_mode": "A"}}})
+ nlp_b = Japanese(meta={"tokenizer": {"config": {"split_mode": "B"}}})
+ nlp_c = Japanese(meta={"tokenizer": {"config": {"split_mode": "C"}}})
+
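+ # the default tokenizer should match split mode A, which produces the shortest units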
+ assert len(ja_tokenizer(text)) == len_a
+ assert len(nlp_a(text)) == len_a
+ assert len(nlp_b(text)) == len_b
+ assert len(nlp_c(text)) == len_c
+
+
+def test_ja_tokenizer_emptyish_texts(ja_tokenizer):
+ doc = ja_tokenizer("")
+ assert len(doc) == 0
+ doc = ja_tokenizer(" ")
+ assert len(doc) == 1
+ doc = ja_tokenizer("\n\n\n \t\t \n\n\n")
+ assert len(doc) == 1
diff --git a/spacy/tests/lang/ml/test_text.py b/spacy/tests/lang/ml/test_text.py
index 2883cf5bb..aced78461 100644
--- a/spacy/tests/lang/ml/test_text.py
+++ b/spacy/tests/lang/ml/test_text.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/nb/test_noun_chunks.py b/spacy/tests/lang/nb/test_noun_chunks.py
index 653491a64..c6a00354b 100644
--- a/spacy/tests/lang/nb/test_noun_chunks.py
+++ b/spacy/tests/lang/nb/test_noun_chunks.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
diff --git a/spacy/tests/lang/sv/test_lex_attrs.py b/spacy/tests/lang/sv/test_lex_attrs.py
index abe6b0f7b..656c4706b 100644
--- a/spacy/tests/lang/sv/test_lex_attrs.py
+++ b/spacy/tests/lang/sv/test_lex_attrs.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
from spacy.lang.sv.lex_attrs import like_num
diff --git a/spacy/tests/lang/zh/test_serialize.py b/spacy/tests/lang/zh/test_serialize.py
index 56f092ed8..d84920c3e 100644
--- a/spacy/tests/lang/zh/test_serialize.py
+++ b/spacy/tests/lang/zh/test_serialize.py
@@ -1,6 +1,3 @@
-# coding: utf-8
-from __future__ import unicode_literals
-
import pytest
from spacy.lang.zh import Chinese
from ...util import make_tempdir
diff --git a/spacy/tests/matcher/test_phrase_matcher.py b/spacy/tests/matcher/test_phrase_matcher.py
index 23cd80d1d..2a3c7d693 100644
--- a/spacy/tests/matcher/test_phrase_matcher.py
+++ b/spacy/tests/matcher/test_phrase_matcher.py
@@ -1,4 +1,5 @@
import pytest
+import srsly
from mock import Mock
from spacy.matcher import PhraseMatcher
from spacy.tokens import Doc
@@ -263,3 +264,26 @@ def test_phrase_matcher_basic_check(en_vocab):
pattern = Doc(en_vocab, words=["hello", "world"])
with pytest.raises(ValueError):
matcher.add("TEST", pattern)
+
+
+def test_phrase_matcher_pickle(en_vocab):
+ matcher = PhraseMatcher(en_vocab)
+ mock = Mock()
+ matcher.add("TEST", [Doc(en_vocab, words=["test"])])
+ matcher.add("TEST2", [Doc(en_vocab, words=["test2"])], on_match=mock)
+ doc = Doc(en_vocab, words=["these", "are", "tests", ":", "test", "test2"])
+ assert len(matcher) == 2
+
+ b = srsly.pickle_dumps(matcher)
+ matcher_unpickled = srsly.pickle_loads(b)
+
+ # call after pickling to avoid recursion error related to mock
+ matches = matcher(doc)
+ matches_unpickled = matcher_unpickled(doc)
+
+ assert len(matcher) == len(matcher_unpickled)
+ assert matches == matches_unpickled
+
+ # clunky way to vaguely check that callback is unpickled
+ (vocab, docs, callbacks, attr) = matcher_unpickled.__reduce__()[1]
+ assert isinstance(callbacks.get("TEST2"), Mock)
diff --git a/spacy/tests/package/test_requirements.py b/spacy/tests/package/test_requirements.py
index 0dc0f9d6c..6cc8fa6a8 100644
--- a/spacy/tests/package/test_requirements.py
+++ b/spacy/tests/package/test_requirements.py
@@ -10,7 +10,13 @@ def test_build_dependencies():
"mock",
"flake8",
]
- libs_ignore_setup = ["fugashi", "natto-py", "pythainlp"]
+ libs_ignore_setup = [
+ "fugashi",
+ "natto-py",
+ "pythainlp",
+ "sudachipy",
+ "sudachidict_core",
+ ]
# check requirements.txt
req_dict = {}
diff --git a/spacy/tests/parser/test_ner.py b/spacy/tests/parser/test_ner.py
index e82de03bf..db4efcd95 100644
--- a/spacy/tests/parser/test_ner.py
+++ b/spacy/tests/parser/test_ner.py
@@ -1,6 +1,9 @@
import pytest
from spacy import util
from spacy.lang.en import English
+
+from spacy.language import Language
+from spacy.lookups import Lookups
from spacy.pipeline.defaults import default_ner
from spacy.pipeline import EntityRecognizer, EntityRuler
from spacy.vocab import Vocab
@@ -349,6 +352,21 @@ def test_overfitting_IO():
assert ents2[0].label_ == "LOC"
+def test_ner_warns_no_lookups():
+ nlp = Language()
+ nlp.vocab.lookups = Lookups()
+ assert not len(nlp.vocab.lookups)
+ ner = nlp.create_pipe("ner")
+ nlp.add_pipe(ner)
+ with pytest.warns(UserWarning):
+ nlp.begin_training()
+ nlp.vocab.lookups.add_table("lexeme_norm")
+ nlp.vocab.lookups.get_table("lexeme_norm")["a"] = "A"
+ with pytest.warns(None) as record:
+ nlp.begin_training()
+ assert not record.list
+
+
class BlockerComponent1(object):
name = "my_blocker"
diff --git a/spacy/tests/regression/test_issue5230.py b/spacy/tests/regression/test_issue5230.py
new file mode 100644
index 000000000..9ffa3862c
--- /dev/null
+++ b/spacy/tests/regression/test_issue5230.py
@@ -0,0 +1,141 @@
+import warnings
+from unittest import TestCase
+import pytest
+import srsly
+from numpy import zeros
+from spacy.kb import KnowledgeBase, Writer
+from spacy.vectors import Vectors
+from spacy.language import Language
+from spacy.pipeline import Pipe
+
+
+from ..util import make_tempdir
+
+
+def nlp():
+ return Language()
+
+
+def vectors():
+ data = zeros((3, 1), dtype="f")
+ keys = ["cat", "dog", "rat"]
+ return Vectors(data=data, keys=keys)
+
+
+def custom_pipe():
+ # create a dummy pipe that only partially implements the interface -- we only want to test to_disk
+ class SerializableDummy(object):
+ def __init__(self, **cfg):
+ if cfg:
+ self.cfg = cfg
+ else:
+ self.cfg = None
+ super(SerializableDummy, self).__init__()
+
+ def to_bytes(self, exclude=tuple(), disable=None, **kwargs):
+ return srsly.msgpack_dumps({"dummy": srsly.json_dumps(None)})
+
+ def from_bytes(self, bytes_data, exclude):
+ return self
+
+ def to_disk(self, path, exclude=tuple(), **kwargs):
+ pass
+
+ def from_disk(self, path, exclude=tuple(), **kwargs):
+ return self
+
+ class MyPipe(Pipe):
+ def __init__(self, vocab, model=True, **cfg):
+ if cfg:
+ self.cfg = cfg
+ else:
+ self.cfg = None
+ self.model = SerializableDummy()
+ self.vocab = SerializableDummy()
+
+ return MyPipe(None)
+
+
+def tagger():
+ nlp = Language()
+ nlp.add_pipe(nlp.create_pipe("tagger"))
+ tagger = nlp.get_pipe("tagger")
+ # the model needs to be added for two reasons:
+ # 1. serialization fails without a model,
+ # 2. the line under test is the one that serializes the model
+ tagger.begin_training(pipeline=nlp.pipeline)
+ return tagger
+
+
+def entity_linker():
+ nlp = Language()
+ kb = KnowledgeBase(nlp.vocab, entity_vector_length=1)
+ kb.add_entity("test", 0.0, zeros((1, 1), dtype="f"))
+ nlp.add_pipe(nlp.create_pipe("entity_linker", {"kb": kb}))
+ entity_linker = nlp.get_pipe("entity_linker")
+ # the model needs to be added for two reasons:
+ # 1. serialization fails without a model,
+ # 2. the line under test is the one that serializes the model
+ entity_linker.begin_training(pipeline=nlp.pipeline)
+ return entity_linker
+
+
+objects_to_test = (
+ [nlp(), vectors(), custom_pipe(), tagger(), entity_linker()],
+ ["nlp", "vectors", "custom_pipe", "tagger", "entity_linker"],
+)
+
+
+def write_obj_and_catch_warnings(obj):
+ with make_tempdir() as d:
+ with warnings.catch_warnings(record=True) as warnings_list:
+ warnings.filterwarnings("always", category=ResourceWarning)
+ obj.to_disk(d)
+ # in Python 3.5, deprecation warnings do not seem to be filtered by filterwarnings
+ return list(filter(lambda x: isinstance(x, ResourceWarning), warnings_list))
+
+
+@pytest.mark.parametrize("obj", objects_to_test[0], ids=objects_to_test[1])
+def test_to_disk_resource_warning(obj):
+ warnings_list = write_obj_and_catch_warnings(obj)
+ assert len(warnings_list) == 0
+
+
+def test_writer_with_path_py35():
+ writer = None
+ with make_tempdir() as d:
+ path = d / "test"
+ try:
+ writer = Writer(path)
+ except Exception as e:
+ pytest.fail(str(e))
+ finally:
+ if writer:
+ writer.close()
+
+
+def test_save_and_load_knowledge_base():
+ nlp = Language()
+ kb = KnowledgeBase(nlp.vocab, entity_vector_length=1)
+ with make_tempdir() as d:
+ path = d / "kb"
+ try:
+ kb.dump(path)
+ except Exception as e:
+ pytest.fail(str(e))
+
+ try:
+ kb_loaded = KnowledgeBase(nlp.vocab, entity_vector_length=1)
+ kb_loaded.load_bulk(path)
+ except Exception as e:
+ pytest.fail(str(e))
+
+
+class TestToDiskResourceWarningUnittest(TestCase):
+ def test_resource_warning(self):
+ scenarios = zip(*objects_to_test)
+
+ for scenario in scenarios:
+ with self.subTest(msg=scenario[1]):
+ warnings_list = write_obj_and_catch_warnings(scenario[0])
+ self.assertEqual(len(warnings_list), 0)
diff --git a/spacy/tests/regression/test_issue5458.py b/spacy/tests/regression/test_issue5458.py
new file mode 100644
index 000000000..a7a2959df
--- /dev/null
+++ b/spacy/tests/regression/test_issue5458.py
@@ -0,0 +1,23 @@
+from spacy.lang.en import English
+from spacy.lang.en.syntax_iterators import noun_chunks
+from spacy.tests.util import get_doc
+from spacy.vocab import Vocab
+
+
+def test_issue5458():
+ # Test that the noun chunker does not generate overlapping spans
+ # fmt: off
+ words = ["In", "an", "era", "where", "markets", "have", "brought", "prosperity", "and", "empowerment", "."]
+ vocab = Vocab(strings=words)
+ dependencies = ["ROOT", "det", "pobj", "advmod", "nsubj", "aux", "relcl", "dobj", "cc", "conj", "punct"]
+ pos_tags = ["ADP", "DET", "NOUN", "ADV", "NOUN", "AUX", "VERB", "NOUN", "CCONJ", "NOUN", "PUNCT"]
+ heads = [0, 1, -2, 6, 2, 1, -4, -1, -1, -2, -10]
+ # fmt: on
+
+ en_doc = get_doc(vocab, words, pos_tags, heads, dependencies)
+ en_doc.noun_chunks_iterator = noun_chunks
+
+ # if there are overlapping spans, this will fail with an E102 error "Can't merge non-disjoint spans"
+ nlp = English()
+ merge_nps = nlp.create_pipe("merge_noun_chunks")
+ merge_nps(en_doc)
diff --git a/spacy/tests/test_lemmatizer.py b/spacy/tests/test_lemmatizer.py
index 1779ff933..050206539 100644
--- a/spacy/tests/test_lemmatizer.py
+++ b/spacy/tests/test_lemmatizer.py
@@ -30,7 +30,7 @@ def test_lemmatizer_reflects_lookups_changes():
assert Doc(new_nlp.vocab, words=["hello"])[0].lemma_ == "world"
-def test_tagger_warns_no_lemma_lookups():
+def test_tagger_warns_no_lookups():
nlp = Language()
nlp.vocab.lookups = Lookups()
assert not len(nlp.vocab.lookups)
@@ -41,6 +41,8 @@ def test_tagger_warns_no_lemma_lookups():
with pytest.warns(UserWarning):
nlp.begin_training()
nlp.vocab.lookups.add_table("lemma_lookup")
+ nlp.vocab.lookups.add_table("lexeme_norm")
+ nlp.vocab.lookups.get_table("lexeme_norm")["a"] = "A"
with pytest.warns(None) as record:
nlp.begin_training()
assert not record.list
diff --git a/spacy/tests/test_misc.py b/spacy/tests/test_misc.py
index 4e6c0e652..f6724f632 100644
--- a/spacy/tests/test_misc.py
+++ b/spacy/tests/test_misc.py
@@ -93,6 +93,17 @@ def test_ascii_filenames():
assert all(ord(c) < 128 for c in path.name), path.name
+def test_load_model_blank_shortcut():
+ """Test that using a model name like "blank:en" works as a shortcut for
+ spacy.blank("en").
+ """
+ nlp = util.load_model("blank:en")
+ assert nlp.lang == "en"
+ assert nlp.pipeline == []
+ with pytest.raises(ImportError):
+ util.load_model("blank:fjsfijsdof")
+
+
@pytest.mark.parametrize(
"version,constraint,compatible",
[
diff --git a/spacy/tests/tokenizer/test_urls.py b/spacy/tests/tokenizer/test_urls.py
index 87211ab95..57e970f87 100644
--- a/spacy/tests/tokenizer/test_urls.py
+++ b/spacy/tests/tokenizer/test_urls.py
@@ -121,12 +121,12 @@ SUFFIXES = ['"', ":", ">"]
@pytest.mark.parametrize("url", URLS_SHOULD_MATCH)
def test_should_match(en_tokenizer, url):
- assert en_tokenizer.token_match(url) is not None
+ assert en_tokenizer.url_match(url) is not None
@pytest.mark.parametrize("url", URLS_SHOULD_NOT_MATCH)
def test_should_not_match(en_tokenizer, url):
- assert en_tokenizer.token_match(url) is None
+ assert en_tokenizer.url_match(url) is None
@pytest.mark.parametrize("url", URLS_BASIC)
diff --git a/spacy/tokenizer.pxd b/spacy/tokenizer.pxd
index e82833701..20508ead7 100644
--- a/spacy/tokenizer.pxd
+++ b/spacy/tokenizer.pxd
@@ -17,6 +17,7 @@ cdef class Tokenizer:
cpdef readonly Vocab vocab
cdef object _token_match
+ cdef object _url_match
cdef object _prefix_search
cdef object _suffix_search
cdef object _infix_finditer
diff --git a/spacy/tokenizer.pyx b/spacy/tokenizer.pyx
index 538bf60e9..b40113460 100644
--- a/spacy/tokenizer.pyx
+++ b/spacy/tokenizer.pyx
@@ -31,7 +31,8 @@ cdef class Tokenizer:
DOCS: https://spacy.io/api/tokenizer
"""
def __init__(self, Vocab vocab, rules=None, prefix_search=None,
- suffix_search=None, infix_finditer=None, token_match=None):
+ suffix_search=None, infix_finditer=None, token_match=None,
+ url_match=None):
"""Create a `Tokenizer`, to create `Doc` objects given unicode text.
vocab (Vocab): A storage container for lexical types.
@@ -44,6 +45,8 @@ cdef class Tokenizer:
`re.compile(string).finditer` to find infixes.
token_match (callable): A boolean function matching strings to be
recognised as tokens.
+ url_match (callable): A boolean function matching strings to be
+ recognised as tokens after considering prefixes and suffixes.
RETURNS (Tokenizer): The newly constructed object.
EXAMPLE:
@@ -56,6 +59,7 @@ cdef class Tokenizer:
self._cache = PreshMap()
self._specials = PreshMap()
self.token_match = token_match
+ self.url_match = url_match
self.prefix_search = prefix_search
self.suffix_search = suffix_search
self.infix_finditer = infix_finditer
@@ -76,6 +80,14 @@ cdef class Tokenizer:
if self._property_init_count <= self._property_init_max:
self._property_init_count += 1
+ property url_match:
+ def __get__(self):
+ return self._url_match
+
+ def __set__(self, url_match):
+ self._url_match = url_match
+ self._flush_cache()
+
property prefix_search:
def __get__(self):
return self._prefix_search
@@ -120,11 +132,12 @@ cdef class Tokenizer:
def __reduce__(self):
args = (self.vocab,
- self._rules,
+ self.rules,
self.prefix_search,
self.suffix_search,
self.infix_finditer,
- self.token_match)
+ self.token_match,
+ self.url_match)
return (self.__class__, args, None, None)
cpdef Doc tokens_from_list(self, list strings):
@@ -461,7 +474,9 @@ cdef class Tokenizer:
cache_hit = self._try_cache(hash_string(string), tokens)
if specials_hit or cache_hit:
pass
- elif self.token_match and self.token_match(string):
+ elif (self.token_match and self.token_match(string)) or \
+ (self.url_match and \
+ self.url_match(string)):
# We're always saying 'no' to spaces here -- the caller will
# fix up the outermost one, with reference to the original.
# See Issue #859
@@ -638,6 +653,11 @@ cdef class Tokenizer:
suffix_search = self.suffix_search
infix_finditer = self.infix_finditer
token_match = self.token_match
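+ # "a^" never matches anything, so a missing matcher behaves as one that never matches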
+ if token_match is None:
+ token_match = re.compile("a^").match
+ url_match = self.url_match
+ if url_match is None:
+ url_match = re.compile("a^").match
special_cases = {}
for orth, special_tokens in self.rules.items():
special_cases[orth] = [intify_attrs(special_token, strings_map=self.vocab.strings, _do_deprecated=True) for special_token in special_tokens]
@@ -646,6 +666,10 @@ cdef class Tokenizer:
suffixes = []
while substring:
while prefix_search(substring) or suffix_search(substring):
+ if token_match(substring):
+ tokens.append(("TOKEN_MATCH", substring))
+ substring = ''
+ break
if substring in special_cases:
tokens.extend(("SPECIAL-" + str(i + 1), self.vocab.strings[e[ORTH]]) for i, e in enumerate(special_cases[substring]))
substring = ''
@@ -666,12 +690,15 @@ cdef class Tokenizer:
break
suffixes.append(("SUFFIX", substring[split:]))
substring = substring[:split]
- if substring in special_cases:
- tokens.extend(("SPECIAL-" + str(i + 1), self.vocab.strings[e[ORTH]]) for i, e in enumerate(special_cases[substring]))
- substring = ''
- elif token_match(substring):
+ if token_match(substring):
tokens.append(("TOKEN_MATCH", substring))
substring = ''
+ elif url_match(substring):
+ tokens.append(("URL_MATCH", substring))
+ substring = ''
+ elif substring in special_cases:
+ tokens.extend(("SPECIAL-" + str(i + 1), self.vocab.strings[e[ORTH]]) for i, e in enumerate(special_cases[substring]))
+ substring = ''
elif list(infix_finditer(substring)):
infixes = infix_finditer(substring)
offset = 0
@@ -733,6 +760,7 @@ cdef class Tokenizer:
"suffix_search": lambda: _get_regex_pattern(self.suffix_search),
"infix_finditer": lambda: _get_regex_pattern(self.infix_finditer),
"token_match": lambda: _get_regex_pattern(self.token_match),
+ "url_match": lambda: _get_regex_pattern(self.url_match),
"exceptions": lambda: dict(sorted(self._rules.items()))
}
exclude = util.get_serialization_exclude(serializers, exclude, kwargs)
@@ -754,6 +782,7 @@ cdef class Tokenizer:
"suffix_search": lambda b: data.setdefault("suffix_search", b),
"infix_finditer": lambda b: data.setdefault("infix_finditer", b),
"token_match": lambda b: data.setdefault("token_match", b),
+ "url_match": lambda b: data.setdefault("url_match", b),
"exceptions": lambda b: data.setdefault("rules", b)
}
exclude = util.get_serialization_exclude(deserializers, exclude, kwargs)
@@ -766,6 +795,8 @@ cdef class Tokenizer:
self.infix_finditer = re.compile(data["infix_finditer"]).finditer
if "token_match" in data and isinstance(data["token_match"], str):
self.token_match = re.compile(data["token_match"]).match
+ if "url_match" in data and isinstance(data["url_match"], str):
+ self.url_match = re.compile(data["url_match"]).match
if "rules" in data and isinstance(data["rules"], dict):
# make sure to hard reset the cache to remove data from the default exceptions
self._rules = {}
diff --git a/spacy/tokens/_retokenize.pyx b/spacy/tokens/_retokenize.pyx
index dd0b2b820..3943767a0 100644
--- a/spacy/tokens/_retokenize.pyx
+++ b/spacy/tokens/_retokenize.pyx
@@ -50,6 +50,8 @@ cdef class Retokenizer:
"""
if (span.start, span.end) in self._spans_to_merge:
return
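+ # reject empty spans, which cannot be merged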
+ if span.end - span.start <= 0:
+ raise ValueError(Errors.E199.format(start=span.start, end=span.end))
for token in span:
if token.i in self.tokens_to_merge:
raise ValueError(Errors.E102.format(token=repr(token)))
diff --git a/spacy/tokens/morphanalysis.pyx b/spacy/tokens/morphanalysis.pyx
index ed987f4e4..77e499968 100644
--- a/spacy/tokens/morphanalysis.pyx
+++ b/spacy/tokens/morphanalysis.pyx
@@ -45,12 +45,6 @@ cdef class MorphAnalysis:
"""The number of features in the analysis."""
return self.c.length
- def __str__(self):
- return self.to_json()
-
- def __repr__(self):
- return self.to_json()
-
def __hash__(self):
return self.key
@@ -79,3 +73,10 @@ cdef class MorphAnalysis:
"""Produce a dict representation.
"""
return self.vocab.morphology.feats_to_dict(self.to_json())
+
+ def __str__(self):
+ return self.to_json()
+
+ def __repr__(self):
+ return self.to_json()
+
diff --git a/spacy/util.py b/spacy/util.py
index feb863261..d3b1012b7 100644
--- a/spacy/util.py
+++ b/spacy/util.py
@@ -141,6 +141,8 @@ def load_model(name, **overrides):
RETURNS (Language): `Language` class with the loaded model.
"""
if isinstance(name, str): # name or string path
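+ # e.g. "blank:en" is equivalent to spacy.blank("en")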
+ if name.startswith("blank:"): # shortcut for blank model
+ return get_lang_class(name.replace("blank:", ""))()
if is_package(name): # installed as package
return load_model_from_package(name, **overrides)
if Path(name).exists(): # path to model data directory
diff --git a/spacy/vectors.pyx b/spacy/vectors.pyx
index 4537d612d..81f3affca 100644
--- a/spacy/vectors.pyx
+++ b/spacy/vectors.pyx
@@ -376,8 +376,16 @@ cdef class Vectors:
save_array = lambda arr, file_: xp.save(file_, arr, allow_pickle=False)
else:
save_array = lambda arr, file_: xp.save(file_, arr)
+
+ def save_vectors(path):
+ # numpy.save is documented to close the file object after use, but
+ # ResourceWarnings are still raised here, so wrap the file in a context
+ # manager rather than relying on that behaviour.
+ with path.open("wb") as _file:
+ save_array(self.data, _file)
+
serializers = {
- "vectors": lambda p: save_array(self.data, p.open("wb")),
+ "vectors": lambda p: save_vectors(p),
"key2row": lambda p: srsly.write_msgpack(p, self.key2row)
}
return util.to_disk(path, serializers, [])
@@ -410,10 +418,11 @@ cdef class Vectors:
self.data = ops.xp.load(str(path))
serializers = {
- "key2row": load_key2row,
- "keys": load_keys,
"vectors": load_vectors,
+ "keys": load_keys,
+ "key2row": load_key2row,
}
+
util.from_disk(path, serializers, [])
self._sync_unset()
return self
diff --git a/spacy/vocab.pyx b/spacy/vocab.pyx
index 19896f07b..d70f62dd3 100644
--- a/spacy/vocab.pyx
+++ b/spacy/vocab.pyx
@@ -43,7 +43,8 @@ cdef class Vocab:
vice versa.
lookups (Lookups): Container for large lookup tables and dictionaries.
lookups_extra (Lookups): Container for optional lookup tables and dictionaries.
- name (unicode): Optional name to identify the vectors table.
+ oov_prob (float): Default OOV probability.
+ vectors_name (unicode): Optional name to identify the vectors table.
RETURNS (Vocab): The newly constructed object.
"""
lex_attr_getters = lex_attr_getters if lex_attr_getters is not None else {}
diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md
index aacfb414c..0591d19a1 100644
--- a/website/docs/api/cli.md
+++ b/website/docs/api/cli.md
@@ -455,7 +455,7 @@ improvement.
```bash
$ python -m spacy pretrain [texts_loc] [vectors_model] [output_dir]
-[--width] [--depth] [--cnn-window] [--cnn-pieces] [--use-chars] [--sa-depth]
+[--width] [--conv-depth] [--cnn-window] [--cnn-pieces] [--use-chars] [--sa-depth]
[--embed-rows] [--loss_func] [--dropout] [--batch-size] [--max-length]
[--min-length] [--seed] [--n-iter] [--use-vectors] [--n-save-every]
[--init-tok2vec] [--epoch-start]
@@ -467,7 +467,7 @@ $ python -m spacy pretrain [texts_loc] [vectors_model] [output_dir]
| `vectors_model` | positional | Name or path to spaCy model with vectors to learn from. |
| `output_dir` | positional | Directory to write models to on each epoch. |
| `--width`, `-cw` | option | Width of CNN layers. |
-| `--depth`, `-cd` | option | Depth of CNN layers. |
+| `--conv-depth`, `-cd` | option | Depth of CNN layers. |
| `--cnn-window`, `-cW` 2.2.2 | option | Window size for CNN layers. |
| `--cnn-pieces`, `-cP` 2.2.2 | option | Maxout size for CNN layers. `1` for [Mish](https://github.com/digantamisra98/Mish). |
| `--use-chars`, `-chr` 2.2.2 | flag | Whether to use character-based embedding. |
@@ -541,16 +541,17 @@ $ python -m spacy init-model [lang] [output_dir] [--jsonl-loc] [--vectors-loc]
[--prune-vectors]
```
-| Argument | Type | Description |
-| ----------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `lang` | positional | Model language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes), e.g. `en`. |
-| `output_dir` | positional | Model output directory. Will be created if it doesn't exist. |
-| `--jsonl-loc`, `-j` | option | Optional location of JSONL-formatted [vocabulary file](/api/annotation#vocab-jsonl) with lexical attributes. |
-| `--vectors-loc`, `-v` | option | Optional location of vectors. Should be a file where the first row contains the dimensions of the vectors, followed by a space-separated Word2Vec table. File can be provided in `.txt` format or as a zipped text file in `.zip` or `.tar.gz` format. |
-| `--truncate-vectors`, `-t` | option | Number of vectors to truncate to when reading in vectors file. Defaults to `0` for no truncation. |
-| `--prune-vectors`, `-V` | option | Number of vectors to prune the vocabulary to. Defaults to `-1` for no pruning. |
-| `--vectors-name`, `-vn` | option | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. |
-| **CREATES** | model | A spaCy model containing the vocab and vectors. |
+| Argument | Type | Description |
+| ----------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `lang` | positional | Model language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes), e.g. `en`. |
+| `output_dir` | positional | Model output directory. Will be created if it doesn't exist. |
+| `--jsonl-loc`, `-j` | option | Optional location of JSONL-formatted [vocabulary file](/api/annotation#vocab-jsonl) with lexical attributes. |
+| `--vectors-loc`, `-v` | option | Optional location of vectors. Should be a file where the first row contains the dimensions of the vectors, followed by a space-separated Word2Vec table. File can be provided in `.txt` format or as a zipped text file in `.zip` or `.tar.gz` format. |
+| `--truncate-vectors`, `-t` 2.3 | option | Number of vectors to truncate to when reading in vectors file. Defaults to `0` for no truncation. |
+| `--prune-vectors`, `-V` | option | Number of vectors to prune the vocabulary to. Defaults to `-1` for no pruning. |
+| `--vectors-name`, `-vn` | option | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. |
+| `--omit-extra-lookups`, `-OEL` 2.3 | flag | Do not include any of the extra lookups tables (`cluster`/`prob`/`sentiment`) from `spacy-lookups-data` in the model. |
+| **CREATES** | model | A spaCy model containing the vocab and vectors. |
## Evaluate {#evaluate new="2"}
diff --git a/website/docs/api/cython-structs.md b/website/docs/api/cython-structs.md
index 935bce25d..8ee1f1b9a 100644
--- a/website/docs/api/cython-structs.md
+++ b/website/docs/api/cython-structs.md
@@ -171,9 +171,6 @@ struct.
| `shape` | `attr_t` | Transform of the lexeme's string, to show orthographic features. |
| `prefix` | `attr_t` | Length-N substring from the start of the lexeme. Defaults to `N=1`. |
| `suffix` | `attr_t` | Length-N substring from the end of the lexeme. Defaults to `N=3`. |
-| `cluster` | `attr_t` | Brown cluster ID. |
-| `prob` | `float` | Smoothed log probability estimate of the lexeme's word type (context-independent entry in the vocabulary). |
-| `sentiment` | `float` | A scalar value indicating positivity or negativity. |
### Lexeme.get_struct_attr {#lexeme_get_struct_attr tag="staticmethod, nogil" source="spacy/lexeme.pxd"}
diff --git a/website/docs/api/goldparse.md b/website/docs/api/goldparse.md
index 379913ba2..23937e702 100644
--- a/website/docs/api/goldparse.md
+++ b/website/docs/api/goldparse.md
@@ -12,17 +12,18 @@ expects true examples of a label to have the value `1.0`, and negative examples
of a label to have the value `0.0`. Labels not in the dictionary are treated as
missing – the gradient for those labels will be zero.
-| Name | Type | Description |
-| ----------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `doc` | `Doc` | The document the annotations refer to. |
-| `words` | iterable | A sequence of unicode word strings. |
-| `tags` | iterable | A sequence of strings, representing tag annotations. |
-| `heads` | iterable | A sequence of integers, representing syntactic head offsets. |
-| `deps` | iterable | A sequence of strings, representing the syntactic relation types. |
-| `entities` | iterable | A sequence of named entity annotations, either as BILUO tag strings, or as `(start_char, end_char, label)` tuples, representing the entity positions. If BILUO tag strings, you can specify missing values by setting the tag to None. |
-| `cats` | dict | Labels for text classification. Each key in the dictionary is a string label for the category and each value is `1.0` (positive) or `0.0` (negative). |
-| `links` | dict | Labels for entity linking. A dict with `(start_char, end_char)` keys, and the values being dicts with `kb_id:value` entries, representing external KB IDs mapped to either `1.0` (positive) or `0.0` (negative). |
-| **RETURNS** | `GoldParse` | The newly constructed object. |
+| Name | Type | Description |
+| ----------------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `doc` | `Doc` | The document the annotations refer to. |
+| `words` | iterable | A sequence of unicode word strings. |
+| `tags` | iterable | A sequence of strings, representing tag annotations. |
+| `heads` | iterable | A sequence of integers, representing syntactic head offsets. |
+| `deps` | iterable | A sequence of strings, representing the syntactic relation types. |
+| `entities` | iterable | A sequence of named entity annotations, either as BILUO tag strings, or as `(start_char, end_char, label)` tuples, representing the entity positions. If BILUO tag strings, you can specify missing values by setting the tag to None. |
+| `cats` | dict | Labels for text classification. Each key in the dictionary is a string label for the category and each value is `1.0` (positive) or `0.0` (negative). |
+| `links` | dict | Labels for entity linking. A dict with `(start_char, end_char)` keys, and the values being dicts with `kb_id:value` entries, representing external KB IDs mapped to either `1.0` (positive) or `0.0` (negative). |
+| `make_projective` | bool | Whether to projectivize the dependency tree. Defaults to `False`. |
+| **RETURNS** | `GoldParse` | The newly constructed object. |
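+
+For illustration, here's a minimal sketch of constructing a `GoldParse` with
+`make_projective` enabled (the words, heads and labels below are made up for
+the example):
+
+```python
+import spacy
+from spacy.gold import GoldParse
+
+nlp = spacy.blank("en")
+doc = nlp.make_doc("She ate the pizza")
+gold = GoldParse(
+    doc,
+    words=["She", "ate", "the", "pizza"],
+    heads=[1, 1, 3, 1],                     # head token indices (illustrative values)
+    deps=["nsubj", "ROOT", "det", "dobj"],
+    make_projective=True,                   # projectivize the dependency tree
+)
+print(gold.heads, gold.labels)
+```
+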
## GoldParse.\_\_len\_\_ {#len tag="method"}
@@ -42,17 +43,17 @@ Whether the provided syntactic annotations form a projective dependency tree.
## Attributes {#attributes}
-| Name | Type | Description |
-| ------------------------------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `words` | list | The words. |
-| `tags` | list | The part-of-speech tag annotations. |
-| `heads` | list | The syntactic head annotations. |
-| `labels` | list | The syntactic relation-type annotations. |
-| `ner` | list | The named entity annotations as BILUO tags. |
-| `cand_to_gold` | list | The alignment from candidate tokenization to gold tokenization. |
-| `gold_to_cand` | list | The alignment from gold tokenization to candidate tokenization. |
-| `cats` 2 | dict | Keys in the dictionary are string category labels with values `1.0` or `0.0`. |
-| `links` 2.2 | dict | Keys in the dictionary are `(start_char, end_char)` triples, and the values are dictionaries with `kb_id:value` entries. |
+| Name | Type | Description |
+| ------------------------------------ | ---- | ------------------------------------------------------------------------------------------------------------------------ |
+| `words` | list | The words. |
+| `tags` | list | The part-of-speech tag annotations. |
+| `heads` | list | The syntactic head annotations. |
+| `labels` | list | The syntactic relation-type annotations. |
+| `ner` | list | The named entity annotations as BILUO tags. |
+| `cand_to_gold` | list | The alignment from candidate tokenization to gold tokenization. |
+| `gold_to_cand` | list | The alignment from gold tokenization to candidate tokenization. |
+| `cats` 2 | dict | Keys in the dictionary are string category labels with values `1.0` or `0.0`. |
+| `links` 2.2 | dict | Keys in the dictionary are `(start_char, end_char)` triples, and the values are dictionaries with `kb_id:value` entries. |
## Utilities {#util}
diff --git a/website/docs/api/lexeme.md b/website/docs/api/lexeme.md
index 39148e476..b39664a55 100644
--- a/website/docs/api/lexeme.md
+++ b/website/docs/api/lexeme.md
@@ -156,7 +156,7 @@ The L2 norm of the lexeme's vector representation.
| `like_url` | bool | Does the lexeme resemble a URL? |
| `like_num` | bool | Does the lexeme represent a number? e.g. "10.9", "10", "ten", etc. |
| `like_email` | bool | Does the lexeme resemble an email address? |
-| `is_oov` | bool | Is the lexeme out-of-vocabulary? |
+| `is_oov` | bool | Does the lexeme have a word vector? |
| `is_stop` | bool | Is the lexeme part of a "stop list"? |
| `lang` | int | Language of the parent vocabulary. |
| `lang_` | str | Language of the parent vocabulary. |
diff --git a/website/docs/api/matcher.md b/website/docs/api/matcher.md
index 8a872558c..5244244b1 100644
--- a/website/docs/api/matcher.md
+++ b/website/docs/api/matcher.md
@@ -40,7 +40,8 @@ string where an integer is expected) or unexpected property names.
## Matcher.\_\_call\_\_ {#call tag="method"}
-Find all token sequences matching the supplied patterns on the `Doc`.
+Find all token sequences matching the supplied patterns on the `Doc`. As of
+spaCy v2.3, the `Matcher` can also be called on `Span` objects.
> #### Example
>
@@ -54,10 +55,10 @@ Find all token sequences matching the supplied patterns on the `Doc`.
> matches = matcher(doc)
> ```
-| Name | Type | Description |
-| ----------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| `doc` | `Doc` | The document to match over. |
-| **RETURNS** | list | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end`]. The `match_id` is the ID of the added match pattern. |
+| Name | Type | Description |
+| ----------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `doclike` | `Doc`/`Span` | The document to match over or a `Span` (as of v2.3). |
+| **RETURNS** | list         | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end]`. The `match_id` is the ID of the added match pattern. |
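+
+For illustration, here's a minimal sketch of calling the matcher on a `Span`
+(the pattern and text are made up for the example):
+
+```python
+import spacy
+from spacy.matcher import Matcher
+
+nlp = spacy.blank("en")
+matcher = Matcher(nlp.vocab)
+matcher.add("HELLO_WORLD", None, [{"LOWER": "hello"}, {"LOWER": "world"}])
+
+doc = nlp("Some text before. Hello world! Some text after.")
+span = doc[4:7]          # a Span covering "Hello world !"
+matches = matcher(span)  # as of v2.3, the matcher also accepts a Span
+for match_id, start, end in matches:
+    print(nlp.vocab.strings[match_id], start, end)
+```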
diff --git a/website/docs/api/sentencizer.md b/website/docs/api/sentencizer.md
index 03e843fcc..14482c353 100644
--- a/website/docs/api/sentencizer.md
+++ b/website/docs/api/sentencizer.md
@@ -42,7 +42,7 @@ Initialize the sentencizer.
| Name | Type | Description |
| ------------- | ------------- | ------------------------------------------------------------------------------------------------------ |
-| `punct_chars` | list | Optional custom list of punctuation characters that mark sentence ends. Defaults to `[".", "!", "?"].` |
+| `punct_chars` | list | Optional custom list of punctuation characters that mark sentence ends. Defaults to `['!', '.', '?', '։', '؟', '۔', '܀', '܁', '܂', '߹', '।', '॥', '၊', '။', '።', '፧', '፨', '᙮', '᜵', '᜶', '᠃', '᠉', '᥄', '᥅', '᪨', '᪩', '᪪', '᪫', '᭚', '᭛', '᭞', '᭟', '᰻', '᰼', '᱾', '᱿', '‼', '‽', '⁇', '⁈', '⁉', '⸮', '⸼', '꓿', '꘎', '꘏', '꛳', '꛷', '꡶', '꡷', '꣎', '꣏', '꤯', '꧈', '꧉', '꩝', '꩞', '꩟', '꫰', '꫱', '꯫', '﹒', '﹖', '﹗', '!', '.', '?', '𐩖', '𐩗', '𑁇', '𑁈', '𑂾', '𑂿', '𑃀', '𑃁', '𑅁', '𑅂', '𑅃', '𑇅', '𑇆', '𑇍', '𑇞', '𑇟', '𑈸', '𑈹', '𑈻', '𑈼', '𑊩', '𑑋', '𑑌', '𑗂', '𑗃', '𑗉', '𑗊', '𑗋', '𑗌', '𑗍', '𑗎', '𑗏', '𑗐', '𑗑', '𑗒', '𑗓', '𑗔', '𑗕', '𑗖', '𑗗', '𑙁', '𑙂', '𑜼', '𑜽', '𑜾', '𑩂', '𑩃', '𑪛', '𑪜', '𑱁', '𑱂', '𖩮', '𖩯', '𖫵', '𖬷', '𖬸', '𖭄', '𛲟', '𝪈', '。', '。']`. |
| **RETURNS** | `Sentencizer` | The newly constructed object. |
## Sentencizer.\_\_call\_\_ {#call tag="method"}
diff --git a/website/docs/api/token.md b/website/docs/api/token.md
index 69dac23d6..9f8594c96 100644
--- a/website/docs/api/token.md
+++ b/website/docs/api/token.md
@@ -351,25 +351,9 @@ property to `0` for the first word of the document.
- assert doc[4].sent_start == 1
+ assert doc[4].is_sent_start == True
```
+
-## Token.is_sent_end {#is_sent_end tag="property" new="2"}
-
-A boolean value indicating whether the token ends a sentence. `None` if
-unknown. Defaults to `True` for the last token in the `Doc`.
-
-> #### Example
->
-> ```python
-> doc = nlp("Give it back! He pleaded.")
-> assert doc[3].is_sent_end
-> assert not doc[4].is_sent_end
-> ```
-
-| Name | Type | Description |
-| ----------- | ---- | ------------------------------------ |
-| **RETURNS** | bool | Whether the token ends a sentence. |
-
## Token.has_vector {#has_vector tag="property" model="vectors"}
A boolean value indicating whether a word vector is associated with the token.
@@ -424,71 +408,71 @@ The L2 norm of the token's vector representation.
## Attributes {#attributes}
-| Name | Type | Description |
-| -------------------------------------------- | ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `doc` | `Doc` | The parent document. |
-| `sent` 2.0.12 | `Span` | The sentence span that this token is a part of. |
-| `text` | unicode | Verbatim text content. |
-| `text_with_ws` | unicode | Text content, with trailing space character if present. |
-| `whitespace_` | unicode | Trailing space character if present. |
-| `orth` | int | ID of the verbatim text content. |
-| `orth_` | unicode | Verbatim text content (identical to `Token.text`). Exists mostly for consistency with the other attributes. |
-| `vocab` | `Vocab` | The vocab object of the parent `Doc`. |
-| `tensor` 2.1.7 | `ndarray` | The tokens's slice of the parent `Doc`'s tensor. |
-| `head` | `Token` | The syntactic parent, or "governor", of this token. |
-| `left_edge` | `Token` | The leftmost token of this token's syntactic descendants. |
-| `right_edge` | `Token` | The rightmost token of this token's syntactic descendants. |
-| `i` | int | The index of the token within the parent document. |
-| `ent_type` | int | Named entity type. |
-| `ent_type_` | unicode | Named entity type. |
-| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
-| `ent_iob_` | unicode | IOB code of named entity tag. "B" means the token begins an entity, "I" means it is inside an entity, "O" means it is outside an entity, and "" means no entity tag is set. |
-| `ent_kb_id` 2.2 | int | Knowledge base ID that refers to the named entity this token is a part of, if any. |
-| `ent_kb_id_` 2.2 | unicode | Knowledge base ID that refers to the named entity this token is a part of, if any. |
-| `ent_id` | int | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
-| `ent_id_` | unicode | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
-| `lemma` | int | Base form of the token, with no inflectional suffixes. |
-| `lemma_` | unicode | Base form of the token, with no inflectional suffixes. |
-| `norm` | int | The token's norm, i.e. a normalized form of the token text. Usually set in the language's [tokenizer exceptions](/usage/adding-languages#tokenizer-exceptions) or [norm exceptions](/usage/adding-languages#norm-exceptions). |
-| `norm_` | unicode | The token's norm, i.e. a normalized form of the token text. Usually set in the language's [tokenizer exceptions](/usage/adding-languages#tokenizer-exceptions) or [norm exceptions](/usage/adding-languages#norm-exceptions). |
-| `lower` | int | Lowercase form of the token. |
-| `lower_` | unicode | Lowercase form of the token text. Equivalent to `Token.text.lower()`. |
+| Name | Type | Description |
+| -------------------------------------------- | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `doc` | `Doc` | The parent document. |
+| `sent` 2.0.12 | `Span` | The sentence span that this token is a part of. |
+| `text` | unicode | Verbatim text content. |
+| `text_with_ws` | unicode | Text content, with trailing space character if present. |
+| `whitespace_` | unicode | Trailing space character if present. |
+| `orth` | int | ID of the verbatim text content. |
+| `orth_` | unicode | Verbatim text content (identical to `Token.text`). Exists mostly for consistency with the other attributes. |
+| `vocab` | `Vocab` | The vocab object of the parent `Doc`. |
+| `tensor` 2.1.7 | `ndarray` | The token's slice of the parent `Doc`'s tensor. |
+| `head` | `Token` | The syntactic parent, or "governor", of this token. |
+| `left_edge` | `Token` | The leftmost token of this token's syntactic descendants. |
+| `right_edge` | `Token` | The rightmost token of this token's syntactic descendants. |
+| `i` | int | The index of the token within the parent document. |
+| `ent_type` | int | Named entity type. |
+| `ent_type_` | unicode | Named entity type. |
+| `ent_iob` | int | IOB code of named entity tag. `3` means the token begins an entity, `2` means it is outside an entity, `1` means it is inside an entity, and `0` means no entity tag is set. |
+| `ent_iob_` | unicode | IOB code of named entity tag. "B" means the token begins an entity, "I" means it is inside an entity, "O" means it is outside an entity, and "" means no entity tag is set. |
+| `ent_kb_id` 2.2 | int | Knowledge base ID that refers to the named entity this token is a part of, if any. |
+| `ent_kb_id_` 2.2 | unicode | Knowledge base ID that refers to the named entity this token is a part of, if any. |
+| `ent_id` | int | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
+| `ent_id_` | unicode | ID of the entity the token is an instance of, if any. Currently not used, but potentially for coreference resolution. |
+| `lemma` | int | Base form of the token, with no inflectional suffixes. |
+| `lemma_` | unicode | Base form of the token, with no inflectional suffixes. |
+| `norm` | int | The token's norm, i.e. a normalized form of the token text. Usually set in the language's [tokenizer exceptions](/usage/adding-languages#tokenizer-exceptions) or [norm exceptions](/usage/adding-languages#norm-exceptions). |
+| `norm_` | unicode | The token's norm, i.e. a normalized form of the token text. Usually set in the language's [tokenizer exceptions](/usage/adding-languages#tokenizer-exceptions) or [norm exceptions](/usage/adding-languages#norm-exceptions). |
+| `lower` | int | Lowercase form of the token. |
+| `lower_` | unicode | Lowercase form of the token text. Equivalent to `Token.text.lower()`. |
| `shape` | int | Transform of the token's string, to show orthographic features. Alphabetic characters are replaced by `x` or `X`, numeric characters are replaced by `d`, and sequences of the same character are truncated after length 4. For example, `"Xxxx"` or `"dd"`. |
| `shape_` | unicode | Transform of the token's string, to show orthographic features. Alphabetic characters are replaced by `x` or `X`, numeric characters are replaced by `d`, and sequences of the same character are truncated after length 4. For example, `"Xxxx"` or `"dd"`. |
-| `prefix` | int | Hash value of a length-N substring from the start of the token. Defaults to `N=1`. |
-| `prefix_` | unicode | A length-N substring from the start of the token. Defaults to `N=1`. |
-| `suffix` | int | Hash value of a length-N substring from the end of the token. Defaults to `N=3`. |
-| `suffix_` | unicode | Length-N substring from the end of the token. Defaults to `N=3`. |
-| `is_alpha` | bool | Does the token consist of alphabetic characters? Equivalent to `token.text.isalpha()`. |
-| `is_ascii` | bool | Does the token consist of ASCII characters? Equivalent to `all(ord(c) < 128 for c in token.text)`. |
-| `is_digit` | bool | Does the token consist of digits? Equivalent to `token.text.isdigit()`. |
-| `is_lower` | bool | Is the token in lowercase? Equivalent to `token.text.islower()`. |
-| `is_upper` | bool | Is the token in uppercase? Equivalent to `token.text.isupper()`. |
-| `is_title` | bool | Is the token in titlecase? Equivalent to `token.text.istitle()`. |
-| `is_punct` | bool | Is the token punctuation? |
-| `is_left_punct` | bool | Is the token a left punctuation mark, e.g. `'('` ? |
-| `is_right_punct` | bool | Is the token a right punctuation mark, e.g. `')'` ? |
-| `is_space` | bool | Does the token consist of whitespace characters? Equivalent to `token.text.isspace()`. |
-| `is_bracket` | bool | Is the token a bracket? |
-| `is_quote` | bool | Is the token a quotation mark? |
-| `is_currency` 2.0.8 | bool | Is the token a currency symbol? |
-| `like_url` | bool | Does the token resemble a URL? |
-| `like_num` | bool | Does the token represent a number? e.g. "10.9", "10", "ten", etc. |
-| `like_email` | bool | Does the token resemble an email address? |
-| `is_oov` | bool | Is the token out-of-vocabulary? |
-| `is_stop` | bool | Is the token part of a "stop list"? |
-| `pos` | int | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
-| `pos_` | unicode | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
-| `tag` | int | Fine-grained part-of-speech. |
-| `tag_` | unicode | Fine-grained part-of-speech. |
-| `dep` | int | Syntactic dependency relation. |
-| `dep_` | unicode | Syntactic dependency relation. |
-| `lang` | int | Language of the parent document's vocabulary. |
-| `lang_` | unicode | Language of the parent document's vocabulary. |
-| `prob` | float | Smoothed log probability estimate of token's word type (context-independent entry in the vocabulary). |
-| `idx` | int | The character offset of the token within the parent document. |
-| `sentiment` | float | A scalar value indicating the positivity or negativity of the token. |
-| `lex_id` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
-| `rank` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
-| `cluster` | int | Brown cluster ID. |
-| `_` | `Underscore` | User space for adding custom [attribute extensions](/usage/processing-pipelines#custom-components-attributes). |
+| `prefix` | int | Hash value of a length-N substring from the start of the token. Defaults to `N=1`. |
+| `prefix_` | unicode | A length-N substring from the start of the token. Defaults to `N=1`. |
+| `suffix` | int | Hash value of a length-N substring from the end of the token. Defaults to `N=3`. |
+| `suffix_` | unicode | Length-N substring from the end of the token. Defaults to `N=3`. |
+| `is_alpha` | bool | Does the token consist of alphabetic characters? Equivalent to `token.text.isalpha()`. |
+| `is_ascii` | bool | Does the token consist of ASCII characters? Equivalent to `all(ord(c) < 128 for c in token.text)`. |
+| `is_digit` | bool | Does the token consist of digits? Equivalent to `token.text.isdigit()`. |
+| `is_lower` | bool | Is the token in lowercase? Equivalent to `token.text.islower()`. |
+| `is_upper` | bool | Is the token in uppercase? Equivalent to `token.text.isupper()`. |
+| `is_title` | bool | Is the token in titlecase? Equivalent to `token.text.istitle()`. |
+| `is_punct` | bool | Is the token punctuation? |
+| `is_left_punct` | bool | Is the token a left punctuation mark, e.g. `'('` ? |
+| `is_right_punct` | bool | Is the token a right punctuation mark, e.g. `')'` ? |
+| `is_space` | bool | Does the token consist of whitespace characters? Equivalent to `token.text.isspace()`. |
+| `is_bracket` | bool | Is the token a bracket? |
+| `is_quote` | bool | Is the token a quotation mark? |
+| `is_currency` 2.0.8 | bool | Is the token a currency symbol? |
+| `like_url` | bool | Does the token resemble a URL? |
+| `like_num` | bool | Does the token represent a number? e.g. "10.9", "10", "ten", etc. |
+| `like_email` | bool | Does the token resemble an email address? |
+| `is_oov` | bool | Does the token have a word vector? |
+| `is_stop` | bool | Is the token part of a "stop list"? |
+| `pos` | int | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
+| `pos_` | unicode | Coarse-grained part-of-speech from the [Universal POS tag set](https://universaldependencies.org/docs/u/pos/). |
+| `tag` | int | Fine-grained part-of-speech. |
+| `tag_` | unicode | Fine-grained part-of-speech. |
+| `dep` | int | Syntactic dependency relation. |
+| `dep_` | unicode | Syntactic dependency relation. |
+| `lang` | int | Language of the parent document's vocabulary. |
+| `lang_` | unicode | Language of the parent document's vocabulary. |
+| `prob` | float | Smoothed log probability estimate of token's word type (context-independent entry in the vocabulary). |
+| `idx` | int | The character offset of the token within the parent document. |
+| `sentiment` | float | A scalar value indicating the positivity or negativity of the token. |
+| `lex_id` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
+| `rank` | int | Sequential ID of the token's lexical type, used to index into tables, e.g. for word vectors. |
+| `cluster` | int | Brown cluster ID. |
+| `_` | `Underscore` | User space for adding custom [attribute extensions](/usage/processing-pipelines#custom-components-attributes). |
diff --git a/website/docs/api/tokenizer.md b/website/docs/api/tokenizer.md
index c71f849ad..3281275e1 100644
--- a/website/docs/api/tokenizer.md
+++ b/website/docs/api/tokenizer.md
@@ -34,15 +34,16 @@ the
> tokenizer = nlp.Defaults.create_tokenizer(nlp)
> ```
-| Name | Type | Description |
-| ---------------- | ----------- | ------------------------------------------------------------------------------------- |
-| `vocab` | `Vocab` | A storage container for lexical types. |
-| `rules` | dict | Exceptions and special-cases for the tokenizer. |
-| `prefix_search` | callable | A function matching the signature of `re.compile(string).search` to match prefixes. |
-| `suffix_search` | callable | A function matching the signature of `re.compile(string).search` to match suffixes. |
-| `infix_finditer` | callable | A function matching the signature of `re.compile(string).finditer` to find infixes. |
-| `token_match` | callable | A function matching the signature of `re.compile(string).match to find token matches. |
-| **RETURNS** | `Tokenizer` | The newly constructed object. |
+| Name | Type | Description |
+| ---------------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------ |
+| `vocab` | `Vocab` | A storage container for lexical types. |
+| `rules` | dict | Exceptions and special-cases for the tokenizer. |
+| `prefix_search` | callable | A function matching the signature of `re.compile(string).search` to match prefixes. |
+| `suffix_search` | callable | A function matching the signature of `re.compile(string).search` to match suffixes. |
+| `infix_finditer` | callable | A function matching the signature of `re.compile(string).finditer` to find infixes. |
+| `token_match` | callable | A function matching the signature of `re.compile(string).match` to find token matches. |
+| `url_match` | callable | A function matching the signature of `re.compile(string).match` to find token matches after considering prefixes and suffixes. |
+| **RETURNS** | `Tokenizer` | The newly constructed object. |
## Tokenizer.\_\_call\_\_ {#call tag="method"}
diff --git a/website/docs/api/vocab.md b/website/docs/api/vocab.md
index b851f6882..2bca6c5b1 100644
--- a/website/docs/api/vocab.md
+++ b/website/docs/api/vocab.md
@@ -21,14 +21,17 @@ Create the vocabulary.
> vocab = Vocab(strings=["hello", "world"])
> ```
-| Name | Type | Description |
-| ------------------------------------------- | -------------------- | ------------------------------------------------------------------------------------------------------------------ |
-| `lex_attr_getters` | dict | A dictionary mapping attribute IDs to functions to compute them. Defaults to `None`. |
-| `tag_map` | dict | A dictionary mapping fine-grained tags to coarse-grained parts-of-speech, and optionally morphological attributes. |
-| `lemmatizer` | object | A lemmatizer. Defaults to `None`. |
-| `strings` | `StringStore` / list | A [`StringStore`](/api/stringstore) that maps strings to hash values, and vice versa, or a list of strings. |
-| `vectors_name` 2.2 | str | A name to identify the vectors table. |
-| **RETURNS** | `Vocab` | The newly constructed object. |
+| Name | Type | Description |
+| -------------------------------------------- | -------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `lex_attr_getters` | dict | A dictionary mapping attribute IDs to functions to compute them. Defaults to `None`. |
+| `tag_map` | dict | A dictionary mapping fine-grained tags to coarse-grained parts-of-speech, and optionally morphological attributes. |
+| `lemmatizer` | object | A lemmatizer. Defaults to `None`. |
+| `strings` | `StringStore` / list | A [`StringStore`](/api/stringstore) that maps strings to hash values, and vice versa, or a list of strings. |
+| `lookups` | `Lookups` | A [`Lookups`](/api/lookups) that stores the `lemma_\*`, `lexeme_norm` and other large lookup tables. Defaults to `None`. |
+| `lookups_extra` 2.3 | `Lookups` | A [`Lookups`](/api/lookups) that stores the optional `lexeme_cluster`/`lexeme_prob`/`lexeme_sentiment`/`lexeme_settings` lookup tables. Defaults to `None`. |
+| `oov_prob` | float | The default OOV probability. Defaults to `-20.0`. |
+| `vectors_name` 2.2 | unicode | A name to identify the vectors table. |
+| **RETURNS** | `Vocab` | The newly constructed object. |
## Vocab.\_\_len\_\_ {#len tag="method"}
diff --git a/website/docs/usage/101/_pos-deps.md b/website/docs/usage/101/_pos-deps.md
index 1a438e424..1e8960edf 100644
--- a/website/docs/usage/101/_pos-deps.md
+++ b/website/docs/usage/101/_pos-deps.md
@@ -36,7 +36,7 @@ for token in doc:
| Text | Lemma | POS | Tag | Dep | Shape | alpha | stop |
| ------- | ------- | ------- | ----- | ---------- | ------- | ------- | ------- |
| Apple | apple | `PROPN` | `NNP` | `nsubj` | `Xxxxx` | `True` | `False` |
-| is | be | `VERB` | `VBZ` | `aux` | `xx` | `True` | `True` |
+| is | be | `AUX` | `VBZ` | `aux` | `xx` | `True` | `True` |
| looking | look | `VERB` | `VBG` | `ROOT` | `xxxx` | `True` | `False` |
| at | at | `ADP` | `IN` | `prep` | `xx` | `True` | `True` |
| buying | buy | `VERB` | `VBG` | `pcomp` | `xxxx` | `True` | `False` |
diff --git a/website/docs/usage/adding-languages.md b/website/docs/usage/adding-languages.md
index 70411ec0b..29a9a1c27 100644
--- a/website/docs/usage/adding-languages.md
+++ b/website/docs/usage/adding-languages.md
@@ -288,7 +288,7 @@ common spelling. This has no effect on any other token attributes, or
tokenization in general, but it ensures that **equivalent tokens receive similar
representations**. This can improve the model's predictions on words that
weren't common in the training data, but are equivalent to other words – for
-example, "realize" and "realize", or "thx" and "thanks".
+example, "realise" and "realize", or "thx" and "thanks".
Similarly, spaCy also includes
[global base norms](https://github.com/explosion/spaCy/tree/master/spacy/lang/norm_exceptions.py)
@@ -297,9 +297,35 @@ though `$` and `€` are very different, spaCy normalizes them both to `$`. This
way, they'll always be seen as similar, no matter how common they were in the
training data.
-Norm exceptions can be provided as a simple dictionary. For more examples, see
-the English
-[`norm_exceptions.py`](https://github.com/explosion/spaCy/tree/master/spacy/lang/en/norm_exceptions.py).
+As of spaCy v2.3, language-specific norm exceptions are provided as a
+JSON dictionary in the package
+[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) rather
+than in the main library. For a full example, see
+[`en_lexeme_norm.json`](https://github.com/explosion/spacy-lookups-data/blob/master/spacy_lookups_data/data/en_lexeme_norm.json).
+
+```json
+### Example
+{
+ "cos": "because",
+ "fav": "favorite",
+ "accessorise": "accessorize",
+ "accessorised": "accessorized"
+}
+```
+
+If you're adding tables for a new language, be sure to add the tables to
+[`spacy_lookups_data/__init__.py`](https://github.com/explosion/spacy-lookups-data/blob/master/spacy_lookups_data/__init__.py)
+and register the entry point under `spacy_lookups` in
+[`setup.cfg`](https://github.com/explosion/spacy-lookups-data/blob/master/setup.cfg).
+
+Alternatively, you can initialize your language [`Vocab`](/api/vocab) with a
+[`Lookups`](/api/lookups) object that includes the table `lexeme_norm`.
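+
+For illustration, a minimal sketch of that alternative (the table entries are
+just examples):
+
+```python
+from spacy.lookups import Lookups
+from spacy.vocab import Vocab
+
+lookups = Lookups()
+lookups.add_table("lexeme_norm", {"cos": "because", "fav": "favorite"})
+vocab = Vocab(lookups=lookups)
+print(vocab.lookups.get_table("lexeme_norm")["cos"])  # "because"
+```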
+
+
+
+Previously in spaCy v2.0-v2.2, norm exceptions were provided as a simple python
+dictionary. For more examples, see the English
+[`norm_exceptions.py`](https://github.com/explosion/spaCy/tree/v2.2.x/spacy/lang/en/norm_exceptions.py).
```python
### Example
@@ -327,6 +353,8 @@ norm exceptions overwrite any of the global exceptions, they should be added
first. Also note that the tokenizer exceptions will always have priority over
the attribute getters.
+
+
### Lexical attributes {#lex-attrs new="2"}
spaCy provides a range of [`Token` attributes](/api/token#attributes) that
@@ -634,7 +662,7 @@ One thing to keep in mind is that spaCy expects to train its models from **whole
documents**, not just single sentences. If your corpus only contains single
sentences, spaCy's models will never learn to expect multi-sentence documents,
leading to low performance on real text. To mitigate this problem, you can use
-the `-N` argument to the `spacy convert` command, to merge some of the sentences
+the `-n` argument to the `spacy convert` command, to merge some of the sentences
into longer pseudo-documents.
### Training the tagger and parser {#train-tagger-parser}
diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md
index 4b3c61b9d..2c927555f 100644
--- a/website/docs/usage/linguistic-features.md
+++ b/website/docs/usage/linguistic-features.md
@@ -471,7 +471,7 @@ doc = nlp.make_doc("London is a big city in the United Kingdom.")
print("Before", doc.ents) # []
header = [ENT_IOB, ENT_TYPE]
-attr_array = numpy.zeros((len(doc), len(header)))
+attr_array = numpy.zeros((len(doc), len(header)), dtype="uint64")
attr_array[0, 0] = 3 # B
attr_array[0, 1] = doc.vocab.strings["GPE"]
doc.from_array(header, attr_array)
@@ -732,12 +732,16 @@ rather than performance:
```python
def tokenizer_pseudo_code(self, special_cases, prefix_search, suffix_search,
- infix_finditer, token_match):
+ infix_finditer, token_match, url_match):
tokens = []
for substring in text.split():
suffixes = []
while substring:
while prefix_search(substring) or suffix_search(substring):
+ if token_match(substring):
+ tokens.append(substring)
+ substring = ''
+ break
if substring in special_cases:
tokens.extend(special_cases[substring])
substring = ''
@@ -752,12 +756,15 @@ def tokenizer_pseudo_code(self, special_cases, prefix_search, suffix_search,
split = suffix_search(substring).start()
suffixes.append(substring[split:])
substring = substring[:split]
- if substring in special_cases:
- tokens.extend(special_cases[substring])
- substring = ''
- elif token_match(substring):
+ if token_match(substring):
tokens.append(substring)
substring = ''
+ elif url_match(substring):
+ tokens.append(substring)
+ substring = ''
+ elif substring in special_cases:
+ tokens.extend(special_cases[substring])
+ substring = ''
elif list(infix_finditer(substring)):
infixes = infix_finditer(substring)
offset = 0
@@ -778,17 +785,19 @@ def tokenizer_pseudo_code(self, special_cases, prefix_search, suffix_search,
The algorithm can be summarized as follows:
1. Iterate over whitespace-separated substrings.
-2. Check whether we have an explicitly defined rule for this substring. If we
- do, use it.
-3. Otherwise, try to consume one prefix. If we consumed a prefix, go back to #2,
- so that special cases always get priority.
-4. If we didn't consume a prefix, try to consume a suffix and then go back to
+2. Look for a token match. If there is a match, stop processing and keep this
+ token.
+3. Check whether we have an explicitly defined special case for this substring.
+ If we do, use it.
+4. Otherwise, try to consume one prefix. If we consumed a prefix, go back to
+ #2, so that the token match and special cases always get priority.
+5. If we didn't consume a prefix, try to consume a suffix and then go back to
#2.
-5. If we can't consume a prefix or a suffix, look for a special case.
-6. Next, look for a token match.
-7. Look for "infixes" — stuff like hyphens etc. and split the substring into
+6. If we can't consume a prefix or a suffix, look for a URL match.
+7. If there's no URL match, then look for a special case.
+8. Look for "infixes" — stuff like hyphens etc. and split the substring into
tokens on all infixes.
-8. Once we can't consume any more of the string, handle it as a single token.
+9. Once we can't consume any more of the string, handle it as a single token.
#### Debugging the tokenizer {#tokenizer-debug new="2.2.3"}
@@ -820,7 +829,7 @@ for t in tok_exp:
### Customizing spaCy's Tokenizer class {#native-tokenizers}
Let's imagine you wanted to create a tokenizer for a new language or specific
-domain. There are five things you would need to define:
+domain. There are six things you may need to define:
1. A dictionary of **special cases**. This handles things like contractions,
units of measurement, emoticons, certain abbreviations, etc.
@@ -831,9 +840,22 @@ domain. There are five things you would need to define:
4. A function `infix_finditer`, to handle non-whitespace separators, such as
hyphens etc.
5. An optional boolean function `token_match` matching strings that should never
- be split, overriding the infix rules. Useful for things like URLs or numbers.
- Note that prefixes and suffixes will be split off before `token_match` is
- applied.
+ be split, overriding the infix rules. Useful for things like numbers.
+6. An optional boolean function `url_match`, which is similar to `token_match`
+ except that prefixes and suffixes are removed before applying the match.
+
+
+
+In spaCy v2.2.2-v2.2.4, the `token_match` was equivalent to the `url_match`
+above and there was no match pattern applied before prefixes and suffixes were
+analyzed. As of spaCy v2.3.0, the `token_match` has been reverted to its
+behavior in v2.2.1 and earlier with precedence over prefixes and suffixes.
+
+The `url_match` is introduced in v2.3.0 to handle cases like URLs where the
+tokenizer should remove prefixes and suffixes (e.g., a comma at the end of a
+URL) before applying the match.
+
+
You shouldn't usually need to create a `Tokenizer` subclass. Standard usage is
to use `re.compile()` to build a regular expression object, and pass its
@@ -856,7 +878,7 @@ def custom_tokenizer(nlp):
prefix_search=prefix_re.search,
suffix_search=suffix_re.search,
infix_finditer=infix_re.finditer,
- token_match=simple_url_re.match)
+ url_match=simple_url_re.match)
nlp = spacy.load("en_core_web_sm")
nlp.tokenizer = custom_tokenizer(nlp)
@@ -1121,9 +1143,9 @@ from spacy.gold import align
other_tokens = ["i", "listened", "to", "obama", "'", "s", "podcasts", "."]
spacy_tokens = ["i", "listened", "to", "obama", "'s", "podcasts", "."]
cost, a2b, b2a, a2b_multi, b2a_multi = align(other_tokens, spacy_tokens)
-print("Misaligned tokens:", cost) # 2
+print("Edit distance:", cost) # 3
print("One-to-one mappings a -> b", a2b) # array([0, 1, 2, 3, -1, -1, 5, 6])
-print("One-to-one mappings b -> a", b2a) # array([0, 1, 2, 3, 5, 6, 7])
+print("One-to-one mappings b -> a", b2a) # array([0, 1, 2, 3, -1, 6, 7])
print("Many-to-one mappings a -> b", a2b_multi) # {4: 4, 5: 4}
print("Many-to-one mappings b-> a", b2a_multi) # {}
```
@@ -1131,7 +1153,7 @@ print("Many-to-one mappings b-> a", b2a_multi) # {}
Here are some insights from the alignment information generated in the example
above:
-- Two tokens are misaligned.
+- The edit distance (cost) is `3`: two deletions and one insertion.
- The one-to-one mappings for the first four tokens are identical, which means
they map to each other. This makes sense because they're also identical in the
input: `"i"`, `"listened"`, `"to"` and `"obama"`.
diff --git a/website/docs/usage/models.md b/website/docs/usage/models.md
index 5fd92f8f3..b11e6347a 100644
--- a/website/docs/usage/models.md
+++ b/website/docs/usage/models.md
@@ -85,6 +85,143 @@ To load your model with the neutral, multi-language class, simply set
`meta.json`. You can also import the class directly, or call
[`util.get_lang_class()`](/api/top-level#util.get_lang_class) for lazy-loading.
+### Chinese language support {#chinese new=2.3}
+
+The Chinese language class supports three word segmentation options:
+
+> ```python
+> from spacy.lang.zh import Chinese
+>
+> # Disable jieba to use character segmentation
+> Chinese.Defaults.use_jieba = False
+> nlp = Chinese()
+>
+> # Disable jieba through tokenizer config options
+> cfg = {"use_jieba": False}
+> nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+>
+> # Load with "default" model provided by pkuseg
+> cfg = {"pkuseg_model": "default", "require_pkuseg": True}
+> nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+> ```
+
+1. **Jieba:** `Chinese` uses [Jieba](https://github.com/fxsjy/jieba) for word
+ segmentation by default. It's enabled when you create a new `Chinese`
+ language class or call `spacy.blank("zh")`.
+2. **Character segmentation:** Character segmentation is supported by disabling
+ `jieba` and setting `Chinese.Defaults.use_jieba = False` _before_
+ initializing the language class. As of spaCy v2.3.0, the `meta` tokenizer
+ config options can be used to configure `use_jieba`.
+3. **PKUSeg**: In spaCy v2.3.0, support for
+   [PKUSeg](https://github.com/lancopku/PKUSeg-python) has been added to provide
+ better segmentation for Chinese OntoNotes and the new
+ [Chinese models](/models/zh).
+
+
+
+Note that [`pkuseg`](https://github.com/lancopku/pkuseg-python) doesn't yet ship
+with pre-compiled wheels for Python 3.8. If you're running Python 3.8, you can
+install it from our fork and compile it locally:
+
+```bash
+$ pip install https://github.com/honnibal/pkuseg-python/archive/master.zip
+```
+
+
+
+
+
+The `meta` argument of the `Chinese` language class supports the following
+tokenizer config settings:
+
+| Name | Type | Description |
+| ------------------ | ------- | ---------------------------------------------------------------------------------------------------- |
+| `pkuseg_model` | unicode | **Required:** Name of a model provided by `pkuseg` or the path to a local model directory. |
+| `pkuseg_user_dict` | unicode | Optional path to a file with one word per line which overrides the default `pkuseg` user dictionary. |
+| `require_pkuseg` | bool | Overrides all `jieba` settings (optional but strongly recommended). |
+
+```python
+### Examples
+# Load "default" model
+cfg = {"pkuseg_model": "default", "require_pkuseg": True}
+nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+
+# Load local model
+cfg = {"pkuseg_model": "/path/to/pkuseg_model", "require_pkuseg": True}
+nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+
+# Override the user directory
+cfg = {"pkuseg_model": "default", "require_pkuseg": True, "pkuseg_user_dict": "/path"}
+nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+```
+
+You can also modify the user dictionary on-the-fly:
+
+```python
+# Append words to user dict
+nlp.tokenizer.pkuseg_update_user_dict(["中国", "ABC"])
+
+# Remove all words from user dict and replace with new words
+nlp.tokenizer.pkuseg_update_user_dict(["中国"], reset=True)
+
+# Remove all words from user dict
+nlp.tokenizer.pkuseg_update_user_dict([], reset=True)
+```
+
+
+
+
+
+The [Chinese models](/models/zh) provided by spaCy include a custom `pkuseg`
+model trained only on
+[Chinese OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19), since the
+models provided by `pkuseg` include data restricted to research use. For
+research use, `pkuseg` provides models for several different domains
+(`"default"`, `"news"` `"web"`, `"medicine"`, `"tourism"`) and for other uses,
+`pkuseg` provides a simple
+[training API](https://github.com/lancopku/pkuseg-python/blob/master/readme/readme_english.md#usage):
+
+```python
+import pkuseg
+from spacy.lang.zh import Chinese
+
+# Train pkuseg model
+pkuseg.train("train.utf8", "test.utf8", "/path/to/pkuseg_model")
+# Load pkuseg model in spaCy Chinese tokenizer
+nlp = Chinese(meta={"tokenizer": {"config": {"pkuseg_model": "/path/to/pkuseg_model", "require_pkuseg": True}}})
+```
+
+
+
+### Japanese language support {#japanese new=2.3}
+
+> ```python
+> from spacy.lang.ja import Japanese
+>
+> # Load SudachiPy with split mode A (default)
+> nlp = Japanese()
+>
+> # Load SudachiPy with split mode B
+> cfg = {"split_mode": "B"}
+> nlp = Japanese(meta={"tokenizer": {"config": cfg}})
+> ```
+
+The Japanese language class uses
+[SudachiPy](https://github.com/WorksApplications/SudachiPy) for word
+segmentation and part-of-speech tagging. The default Japanese language class and
+the provided Japanese models use SudachiPy split mode `A`.
+
+The `meta` argument of the `Japanese` language class can be used to configure
+the split mode to `A`, `B` or `C`.
+
+
+
+If you run into errors related to `sudachipy`, which is currently under active
+development, we suggest downgrading to `sudachipy==0.4.5`, which is the version
+used for training the current [Japanese models](/models/ja).
+
+
+
## Installing and using models {#download}
> #### Downloading models in spaCy < v1.7
diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md
index a84399312..9c06ba0e0 100644
--- a/website/docs/usage/rule-based-matching.md
+++ b/website/docs/usage/rule-based-matching.md
@@ -1164,17 +1164,17 @@ what you need for your application.
> available corpus.
For example, the corpus spaCy's [English models](/models/en) were trained on
-defines a `PERSON` entity as just the **person name**, without titles like "Mr"
-or "Dr". This makes sense, because it makes it easier to resolve the entity type
-back to a knowledge base. But what if your application needs the full names,
-_including_ the titles?
+defines a `PERSON` entity as just the **person name**, without titles like "Mr."
+or "Dr.". This makes sense, because it makes it easier to resolve the entity
+type back to a knowledge base. But what if your application needs the full
+names, _including_ the titles?
```python
### {executable="true"}
import spacy
nlp = spacy.load("en_core_web_sm")
-doc = nlp("Dr Alex Smith chaired first board meeting of Acme Corp Inc.")
+doc = nlp("Dr. Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
@@ -1239,7 +1239,7 @@ def expand_person_entities(doc):
# Add the component after the named entity recognizer
nlp.add_pipe(expand_person_entities, after='ner')
-doc = nlp("Dr Alex Smith chaired first board meeting of Acme Corp Inc.")
+doc = nlp("Dr. Alex Smith chaired first board meeting of Acme Corp Inc.")
print([(ent.text, ent.label_) for ent in doc.ents])
```
diff --git a/website/docs/usage/v2-3.md b/website/docs/usage/v2-3.md
new file mode 100644
index 000000000..d59b50a6e
--- /dev/null
+++ b/website/docs/usage/v2-3.md
@@ -0,0 +1,220 @@
+---
+title: What's New in v2.3
+teaser: New features, backwards incompatibilities and migration guide
+menu:
+ - ['New Features', 'features']
+ - ['Backwards Incompatibilities', 'incompat']
+ - ['Migrating from v2.2', 'migrating']
+---
+
+## New Features {#features hidden="true"}
+
+spaCy v2.3 features new pretrained models for five languages, word vectors for
+all language models, and decreased model size and loading times for models with
+vectors. We've added pretrained models for **Chinese, Danish, Japanese, Polish
+and Romanian** and updated the training data and vectors for most languages.
+Model packages with vectors are about **2×** smaller on disk and load
+**2-4×** faster. For the full changelog, see the
+[release notes on GitHub](https://github.com/explosion/spaCy/releases/tag/v2.3.0).
+For more details and a behind-the-scenes look at the new release,
+[see our blog post](https://explosion.ai/blog/spacy-v2-3).
+
+### Expanded model families with vectors {#models}
+
+> #### Example
+>
+> ```bash
+> python -m spacy download da_core_news_sm
+> python -m spacy download ja_core_news_sm
+> python -m spacy download pl_core_news_sm
+> python -m spacy download ro_core_news_sm
+> python -m spacy download zh_core_web_sm
+> ```
+
+With new model families for Chinese, Danish, Japanese, Polish and Romanian plus
+`md` and `lg` models with word vectors for all languages, this release provides
+a total of 46 model packages. For models trained using
+[Universal Dependencies](https://universaldependencies.org) corpora, the
+training data has been updated to UD v2.5 (v2.6 for Japanese, v2.3 for Polish)
+and Dutch has been extended to include both UD Dutch Alpino and LassySmall.
+
+
+
+**Models:** [Models directory](/models) **Benchmarks:**
+[Release notes](https://github.com/explosion/spaCy/releases/tag/v2.3.0)
+
+
+
+### Chinese {#chinese}
+
+> #### Example
+>
+> ```python
+> from spacy.lang.zh import Chinese
+>
+> # Load with "default" model provided by pkuseg
+> cfg = {"pkuseg_model": "default", "require_pkuseg": True}
+> nlp = Chinese(meta={"tokenizer": {"config": cfg}})
+>
+> # Append words to user dict
+> nlp.tokenizer.pkuseg_update_user_dict(["中国", "ABC"])
+> ```
+
+This release adds support for
+[`pkuseg`](https://github.com/lancopku/pkuseg-python) for word segmentation, and
+the new Chinese models ship with a custom pkuseg model trained on OntoNotes. The
+Chinese tokenizer can be initialized with both `pkuseg` and custom models and
+the `pkuseg` user dictionary is easy to customize. Note that
+[`pkuseg`](https://github.com/lancopku/pkuseg-python) doesn't yet ship with
+pre-compiled wheels for Python 3.8. See the
+[usage documentation](/usage/models#chinese) for details on how to install it on
+Python 3.8.
+
+
+
+**Models:** [Chinese models](/models/zh) **Usage:**
+[Chinese tokenizer usage](/usage/models#chinese)
+
+
+
+### Japanese {#japanese}
+
+The updated Japanese language class switches to
+[`SudachiPy`](https://github.com/WorksApplications/SudachiPy) for word
+segmentation and part-of-speech tagging. Using `SudachiPy` greatly simplifies
+installing spaCy for Japanese, which is now possible with a single command:
+`pip install spacy[ja]`.
+
+
+
+**Models:** [Japanese models](/models/ja) **Usage:**
+[Japanese tokenizer usage](/usage/models#japanese)
+
+
+
+### Small CLI updates
+
+- [`spacy debug-data`](/api/cli#debug-data) provides the coverage of the vectors
+ in a base model with `spacy debug-data lang train dev -b base_model`
+- [`spacy evaluate`](/api/cli#evaluate) supports `blank:lg` (e.g.
+ `spacy evaluate blank:en dev.json`) to evaluate the tokenization accuracy
+ without loading a model
+- [`spacy train`](/api/cli#train) on GPU restricts the CPU timing evaluation to
+ the first iteration
+
+## Backwards incompatibilities {#incompat}
+
+
+
+If you've been training **your own models**, you'll need to **retrain** them
+with the new version. Also don't forget to upgrade all models to the latest
+versions. Models for earlier v2 releases (v2.0, v2.1, v2.2) aren't compatible
+with models for v2.3. To check if all of your models are up to date, you can run
+the [`spacy validate`](/api/cli#validate) command.
+
+
+
+> #### Install with lookups data
+>
+> ```bash
+> $ pip install spacy[lookups]
+> ```
+>
+> You can also install
+> [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data)
+> directly.
+
+- If you're training new models, you'll want to install the package
+ [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data), which
+ now includes both the lemmatization tables (as in v2.2) and the normalization
+ tables (new in v2.3). If you're using pretrained models, **nothing changes**,
+ because the relevant tables are included in the model packages.
+- Due to the updated Universal Dependencies training data, the fine-grained
+ part-of-speech tags will change for many provided language models. The
+ coarse-grained part-of-speech tagset remains the same, but the mapping from
+ particular fine-grained to coarse-grained tags may show minor differences.
+- For French, Italian, Portuguese and Spanish, the fine-grained part-of-speech
+ tagsets contain new merged tags related to contracted forms, such as `ADP_DET`
+ for French `"au"`, which maps to UPOS `ADP` based on the head `"à"`. This
+ increases the accuracy of the models by improving the alignment between
+ spaCy's tokenization and Universal Dependencies multi-word tokens used for
+ contractions.
+
+### Migrating from spaCy 2.2 {#migrating}
+
+#### Tokenizer settings
+
+In spaCy v2.2.2-v2.2.4, there was a change to the precedence of `token_match`
+that gave prefixes and suffixes priority over `token_match`, which caused
+problems for many custom tokenizer configurations. This has been reverted in
+v2.3 so that `token_match` has priority over prefixes and suffixes as in v2.2.1
+and earlier versions.
+
+A new tokenizer setting `url_match` has been introduced in v2.3.0 to handle
+cases like URLs where the tokenizer should remove prefixes and suffixes (e.g., a
+comma at the end of a URL) before applying the match. See the full
+[tokenizer documentation](/usage/linguistic-features#tokenization) and try out
+[`nlp.tokenizer.explain()`](/usage/linguistic-features#tokenizer-debug) when
+debugging your tokenizer configuration.
+
+#### Warnings configuration
+
+spaCy's custom warnings have been replaced with native Python
+[`warnings`](https://docs.python.org/3/library/warnings.html). Instead of
+setting `SPACY_WARNING_IGNORE`, use the
+[`warnings` filters](https://docs.python.org/3/library/warnings.html#the-warnings-filter)
+to manage warnings.
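+
+For example, a minimal sketch using the standard library (assuming you want to
+silence one specific warning whose message starts with its code, e.g. `[W008]`):
+
+```python
+import warnings
+
+# Ignore warnings whose message starts with the code "[W008]"
+warnings.filterwarnings("ignore", message=r"\[W008\]")
+```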
+
+#### Normalization tables
+
+The normalization tables have moved from the language data in
+[`spacy/lang`](https://github.com/explosion/spaCy/tree/master/spacy/lang) to the
+package [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data).
+If you're adding data for a new language, the normalization table should be
+added to `spacy-lookups-data`. See
+[adding norm exceptions](/usage/adding-languages#norm-exceptions).
+
+#### Probability and cluster features
+
+> #### Load and save extra prob lookups table
+>
+> ```python
+> from spacy.lang.en import English
+> nlp = English()
+> doc = nlp("the")
+> print(doc[0].prob) # lazily loads extra prob table
+> nlp.to_disk("/path/to/model") # includes prob table
+> ```
+
+The `Token.prob` and `Token.cluster` features, which are no longer used by the
+core pipeline components as of spaCy v2, are no longer provided in the
+pretrained models to reduce the model size. To keep these features available for
+users relying on them, the `prob` and `cluster` features for the most frequent
+1M tokens have been moved to
+[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) as
+`extra` features for the relevant languages (English, German, Greek and
+Spanish).
+
+The extra tables are loaded lazily, so if you have `spacy-lookups-data`
+installed and your code accesses `Token.prob`, the full table is loaded into the
+model vocab, which will take a few seconds on initial loading. When you save
+this model after loading the `prob` table, the full `prob` table will be saved
+as part of the model vocab.
+
+If you'd like to include custom `cluster`, `prob`, or `sentiment` tables as part
+of a new model, add the data to
+[`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) under
+the entry point `lg_extra`, e.g. `en_extra` for English. Alternatively, you can
+initialize your [`Vocab`](/api/vocab) with the `lookups_extra` argument with a
+[`Lookups`](/api/lookups) object that includes the tables `lexeme_cluster`,
+`lexeme_prob`, `lexeme_sentiment` or `lexeme_settings`. `lexeme_settings` is
+currently only used to provide a custom `oov_prob`. See examples in the
+[`data` directory](https://github.com/explosion/spacy-lookups-data/tree/master/spacy_lookups_data/data)
+in `spacy-lookups-data`.
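+
+For illustration, a minimal sketch of the `lookups_extra` alternative (the
+table entries below are placeholder values):
+
+```python
+from spacy.lookups import Lookups
+from spacy.vocab import Vocab
+
+lookups_extra = Lookups()
+lookups_extra.add_table("lexeme_cluster", {"the": 2})           # placeholder cluster ID
+lookups_extra.add_table("lexeme_prob", {"the": -3.5})           # placeholder log probability
+lookups_extra.add_table("lexeme_sentiment", {"great": 1.0})     # placeholder sentiment score
+vocab = Vocab(lookups_extra=lookups_extra)
+```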
+
+#### Initializing new models without extra lookups tables
+
+When you initialize a new model with [`spacy init-model`](/api/cli#init-model),
+the `prob` table from `spacy-lookups-data` may be loaded as part of the
+initialization. If you'd like to omit this extra data as in spaCy's provided
+v2.3 models, use the new flag `--omit-extra-lookups`.
diff --git a/website/meta/languages.json b/website/meta/languages.json
index 41c1bce7f..facfc3541 100644
--- a/website/meta/languages.json
+++ b/website/meta/languages.json
@@ -1,5 +1,35 @@
{
"languages": [
+ {
+ "code": "zh",
+ "name": "Chinese",
+ "models": ["zh_core_web_sm", "zh_core_web_md", "zh_core_web_lg"],
+ "dependencies": [
+ {
+ "name": "Jieba",
+ "url": "https://github.com/fxsjy/jieba"
+ },
+ {
+ "name": "PKUSeg",
+ "url": "https://github.com/lancopku/PKUSeg-python"
+ }
+ ],
+ "has_examples": true
+ },
+ {
+ "code": "da",
+ "name": "Danish",
+ "example": "Dette er en sætning.",
+ "has_examples": true,
+ "models": ["da_core_news_sm", "da_core_news_md", "da_core_news_lg"]
+ },
+ {
+ "code": "nl",
+ "name": "Dutch",
+ "models": ["nl_core_news_sm", "nl_core_news_md", "nl_core_news_lg"],
+ "example": "Dit is een zin.",
+ "has_examples": true
+ },
{
"code": "en",
"name": "English",
@@ -14,68 +44,91 @@
"example": "This is a sentence.",
"has_examples": true
},
+ {
+ "code": "fr",
+ "name": "French",
+ "models": ["fr_core_news_sm", "fr_core_news_md", "fr_core_news_lg"],
+ "example": "C'est une phrase.",
+ "has_examples": true
+ },
{
"code": "de",
"name": "German",
- "models": ["de_core_news_sm", "de_core_news_md"],
+ "models": ["de_core_news_sm", "de_core_news_md", "de_core_news_lg"],
"starters": ["de_trf_bertbasecased_lg"],
"example": "Dies ist ein Satz.",
"has_examples": true
},
{
- "code": "fr",
- "name": "French",
- "models": ["fr_core_news_sm", "fr_core_news_md"],
- "example": "C'est une phrase.",
- "has_examples": true
- },
- {
- "code": "es",
- "name": "Spanish",
- "models": ["es_core_news_sm", "es_core_news_md"],
- "example": "Esto es una frase.",
- "has_examples": true
- },
- {
- "code": "pt",
- "name": "Portuguese",
- "models": ["pt_core_news_sm"],
- "example": "Esta é uma frase.",
+ "code": "el",
+ "name": "Greek",
+ "models": ["el_core_news_sm", "el_core_news_md", "el_core_news_lg"],
+ "example": "Αυτή είναι μια πρόταση.",
"has_examples": true
},
{
"code": "it",
"name": "Italian",
- "models": ["it_core_news_sm"],
+ "models": ["it_core_news_sm", "it_core_news_md", "it_core_news_lg"],
"example": "Questa è una frase.",
"has_examples": true
},
{
- "code": "nl",
- "name": "Dutch",
- "models": ["nl_core_news_sm"],
- "example": "Dit is een zin.",
+ "code": "ja",
+ "name": "Japanese",
+ "models": ["ja_core_news_sm", "ja_core_news_md", "ja_core_news_lg"],
+ "dependencies": [
+ {
+ "name": "SudachiPy",
+ "url": "https://github.com/WorksApplications/SudachiPy"
+ }
+ ],
"has_examples": true
},
{
- "code": "el",
- "name": "Greek",
- "models": ["el_core_news_sm", "el_core_news_md"],
- "example": "Αυτή είναι μια πρόταση.",
- "has_examples": true
+ "code": "lt",
+ "name": "Lithuanian",
+ "has_examples": true,
+ "models": ["lt_core_news_sm", "lt_core_news_md", "lt_core_news_lg"]
},
- { "code": "sv", "name": "Swedish", "has_examples": true },
- { "code": "fi", "name": "Finnish", "has_examples": true },
{
"code": "nb",
"name": "Norwegian Bokmål",
"example": "Dette er en setning.",
"has_examples": true,
- "models": ["nb_core_news_sm"]
+ "models": ["nb_core_news_sm", "nb_core_news_md", "nb_core_news_lg"]
},
- { "code": "da", "name": "Danish", "example": "Dette er en sætning.", "has_examples": true },
+ {
+ "code": "pl",
+ "name": "Polish",
+ "example": "To jest zdanie.",
+ "has_examples": true,
+ "models": ["pl_core_news_sm", "pl_core_news_md", "pl_core_news_lg"]
+ },
+ {
+ "code": "pt",
+ "name": "Portuguese",
+ "models": ["pt_core_news_sm", "pt_core_news_md", "pt_core_news_lg"],
+ "example": "Esta é uma frase.",
+ "has_examples": true
+ },
+ {
+ "code": "ro",
+ "name": "Romanian",
+ "example": "Aceasta este o propoziție.",
+ "has_examples": true,
+ "models": ["ro_core_news_sm", "ro_core_news_md", "ro_core_news_lg"]
+ },
+ {
+ "code": "es",
+ "name": "Spanish",
+ "models": ["es_core_news_sm", "es_core_news_md", "es_core_news_lg"],
+ "example": "Esto es una frase.",
+ "has_examples": true
+ },
+ { "code": "sv", "name": "Swedish", "has_examples": true },
+ { "code": "fi", "name": "Finnish", "has_examples": true },
{ "code": "hu", "name": "Hungarian", "example": "Ez egy mondat.", "has_examples": true },
- { "code": "pl", "name": "Polish", "example": "To jest zdanie.", "has_examples": true },
{
"code": "ru",
"name": "Russian",
@@ -88,12 +141,6 @@
"has_examples": true,
"dependencies": [{ "name": "pymorphy2", "url": "https://github.com/kmike/pymorphy2" }]
},
- {
- "code": "ro",
- "name": "Romanian",
- "example": "Aceasta este o propoziție.",
- "has_examples": true
- },
{ "code": "hr", "name": "Croatian", "has_examples": true },
{ "code": "eu", "name": "Basque", "has_examples": true },
{ "code": "yo", "name": "Yoruba", "has_examples": true },
@@ -123,7 +170,6 @@
{ "code": "bg", "name": "Bulgarian", "example": "Това е изречение", "has_examples": true },
{ "code": "cs", "name": "Czech" },
{ "code": "is", "name": "Icelandic" },
- { "code": "lt", "name": "Lithuanian", "has_examples": true, "models": ["lt_core_news_sm"] },
{ "code": "lv", "name": "Latvian" },
{ "code": "sr", "name": "Serbian" },
{ "code": "sk", "name": "Slovak" },
@@ -145,12 +191,6 @@
"example": "นี่คือประโยค",
"has_examples": true
},
- {
- "code": "zh",
- "name": "Chinese",
- "dependencies": [{ "name": "Jieba", "url": "https://github.com/fxsjy/jieba" }],
- "has_examples": true
- },
{
"code": "ja",
"name": "Japanese",
@@ -187,6 +227,21 @@
"example": "Sta chì a l'é unna fraxe.",
"has_examples": true
},
+ {
+ "code": "hy",
+ "name": "Armenian",
+ "has_examples": true
+ },
+ {
+ "code": "gu",
+ "name": "Gujarati",
+ "has_examples": true
+ },
+ {
+ "code": "ml",
+ "name": "Malayalam",
+ "has_examples": true
+ },
{
"code": "xx",
"name": "Multi-language",
diff --git a/website/meta/sidebars.json b/website/meta/sidebars.json
index 3fafc52b0..d7129875f 100644
--- a/website/meta/sidebars.json
+++ b/website/meta/sidebars.json
@@ -9,6 +9,7 @@
{ "text": "Models & Languages", "url": "/usage/models" },
{ "text": "Facts & Figures", "url": "/usage/facts-figures" },
{ "text": "spaCy 101", "url": "/usage/spacy-101" },
+ { "text": "New in v2.3", "url": "/usage/v2-3" },
{ "text": "New in v2.2", "url": "/usage/v2-2" },
{ "text": "New in v2.1", "url": "/usage/v2-1" },
{ "text": "New in v2.0", "url": "/usage/v2" }
diff --git a/website/meta/site.json b/website/meta/site.json
index 29d71048e..8b8424f82 100644
--- a/website/meta/site.json
+++ b/website/meta/site.json
@@ -23,9 +23,9 @@
"apiKey": "371e26ed49d29a27bd36273dfdaf89af",
"indexName": "spacy"
},
- "binderUrl": "ines/spacy-io-binder",
+ "binderUrl": "explosion/spacy-io-binder",
"binderBranch": "live",
- "binderVersion": "2.2.0",
+ "binderVersion": "2.3.0",
"sections": [
{ "id": "usage", "title": "Usage Documentation", "theme": "blue" },
{ "id": "models", "title": "Models Documentation", "theme": "blue" },
diff --git a/website/meta/universe.json b/website/meta/universe.json
index 7bd954c2f..31959fac3 100644
--- a/website/meta/universe.json
+++ b/website/meta/universe.json
@@ -1,5 +1,29 @@
{
"resources": [
+ {
+ "id": "spacy-universal-sentence-encoder",
+      "title": "spaCy - Universal Sentence Encoder",
+      "slogan": "Make use of Google's Universal Sentence Encoder directly within spaCy",
+      "description": "This library lets you use Universal Sentence Encoder embeddings of Docs, Spans and Tokens directly from TensorFlow Hub.",
+ "github": "MartinoMensio/spacy-universal-sentence-encoder-tfhub",
+ "code_example": [
+ "import spacy_universal_sentence_encoder",
+        "# load one of the models: ['en_use_md', 'en_use_lg', 'xx_use_md', 'xx_use_lg']",
+ "nlp = spacy_universal_sentence_encoder.load_model('en_use_lg')",
+ "# get two documents",
+ "doc_1 = nlp('Hi there, how are you?')",
+ "doc_2 = nlp('Hello there, how are you doing today?')",
+ "# use the similarity method that is based on the vectors, on Doc, Span or Token",
+ "print(doc_1.similarity(doc_2[0:7]))"
+ ],
+ "category": ["models", "pipeline"],
+ "author": "Martino Mensio",
+ "author_links": {
+ "twitter": "MartinoMensio",
+ "github": "MartinoMensio",
+ "website": "https://martinomensio.github.io"
+ }
+ },
{
"id": "whatlies",
"title": "whatlies",
@@ -115,11 +139,11 @@
"print(text)"
],
"category": ["scientific", "biomedical"],
- "author": "Travis Hoppe",
+ "author": "Travis Hoppe",
"author_links": {
"github": "thoppe",
- "twitter":"metasemantic",
- "website" : "http://thoppe.github.io/"
+ "twitter": "metasemantic",
+ "website": "http://thoppe.github.io/"
}
},
{
@@ -1133,7 +1157,7 @@
"type": "education",
"id": "spacy-course",
"title": "Advanced NLP with spaCy",
- "slogan": "spaCy, 2019",
+ "slogan": "A free online course",
"description": "In this free interactive course, you'll learn how to use spaCy to build advanced natural language understanding systems, using both rule-based and machine learning approaches.",
"url": "https://course.spacy.io",
"image": "https://i.imgur.com/JC00pHW.jpg",
@@ -1186,10 +1210,38 @@
"youtube": "6zm9NC9uRkk",
"category": ["videos"]
},
+ {
+ "type": "education",
+ "id": "video-spacy-course",
+ "title": "Advanced NLP with spaCy · A free online course",
+ "description": "spaCy is a modern Python library for industrial-strength Natural Language Processing. In this free and interactive online course, you'll learn how to use spaCy to build advanced natural language understanding systems, using both rule-based and machine learning approaches.",
+ "url": "https://course.spacy.io/en",
+ "author": "Ines Montani",
+ "author_links": {
+ "twitter": "_inesmontani",
+ "github": "ines"
+ },
+ "youtube": "THduWAnG97k",
+ "category": ["videos"]
+ },
+ {
+ "type": "education",
+ "id": "video-spacy-course-de",
+ "title": "Modernes NLP mit spaCy · Ein Gratis-Onlinekurs",
+ "description": "spaCy ist eine moderne Python-Bibliothek für industriestarkes Natural Language Processing. In diesem kostenlosen und interaktiven Onlinekurs lernst du, mithilfe von spaCy fortgeschrittene Systeme für die Analyse natürlicher Sprache zu entwickeln und dabei sowohl regelbasierte Verfahren, als auch moderne Machine-Learning-Technologie einzusetzen.",
+ "url": "https://course.spacy.io/de",
+ "author": "Ines Montani",
+ "author_links": {
+ "twitter": "_inesmontani",
+ "github": "ines"
+ },
+ "youtube": "K1elwpgDdls",
+ "category": ["videos"]
+ },
{
"type": "education",
"id": "video-intro-to-nlp-episode-1",
- "title": "Intro to NLP with spaCy",
+ "title": "Intro to NLP with spaCy (1)",
"slogan": "Episode 1: Data exploration",
"description": "In this new video series, data science instructor Vincent Warmerdam gets started with spaCy, an open-source library for Natural Language Processing in Python. His mission: building a system to automatically detect programming languages in large volumes of text. Follow his process from the first idea to a prototype all the way to data collection and training a statistical named entity recogntion model from scratch.",
"author": "Vincent Warmerdam",
@@ -1203,7 +1255,7 @@
{
"type": "education",
"id": "video-intro-to-nlp-episode-2",
- "title": "Intro to NLP with spaCy",
+ "title": "Intro to NLP with spaCy (2)",
"slogan": "Episode 2: Rule-based Matching",
"description": "In this new video series, data science instructor Vincent Warmerdam gets started with spaCy, an open-source library for Natural Language Processing in Python. His mission: building a system to automatically detect programming languages in large volumes of text. Follow his process from the first idea to a prototype all the way to data collection and training a statistical named entity recogntion model from scratch.",
"author": "Vincent Warmerdam",
@@ -1214,6 +1266,34 @@
"youtube": "KL4-Mpgbahw",
"category": ["videos"]
},
+ {
+ "type": "education",
+ "id": "video-intro-to-nlp-episode-3",
+ "title": "Intro to NLP with spaCy (3)",
+      "slogan": "Episode 3: Evaluation",
+      "description": "In this new video series, data science instructor Vincent Warmerdam gets started with spaCy, an open-source library for Natural Language Processing in Python. His mission: building a system to automatically detect programming languages in large volumes of text. Follow his process from the first idea to a prototype all the way to data collection and training a statistical named entity recognition model from scratch.",
+ "author": "Vincent Warmerdam",
+ "author_links": {
+ "twitter": "fishnets88",
+ "github": "koaning"
+ },
+ "youtube": "4V0JDdohxAk",
+ "category": ["videos"]
+ },
+ {
+ "type": "education",
+ "id": "video-intro-to-nlp-episode-4",
+ "title": "Intro to NLP with spaCy (4)",
+ "slogan": "Episode 4: Named Entity Recognition",
+      "description": "In this new video series, data science instructor Vincent Warmerdam gets started with spaCy, an open-source library for Natural Language Processing in Python. His mission: building a system to automatically detect programming languages in large volumes of text. Follow his process from the first idea to a prototype all the way to data collection and training a statistical named entity recognition model from scratch.",
+ "author": "Vincent Warmerdam",
+ "author_links": {
+ "twitter": "fishnets88",
+ "github": "koaning"
+ },
+ "youtube": "IqOJU1-_Fi0",
+ "category": ["videos"]
+ },
{
"type": "education",
"id": "video-spacy-irl-entity-linking",
@@ -1287,6 +1367,22 @@
},
"category": ["podcasts"]
},
+ {
+ "type": "education",
+ "id": "podcast-init2",
+ "title": "Podcast.__init__ #256: An Open Source Toolchain For NLP From Explosion AI",
+ "slogan": "March 2020",
+      "description": "The state of the art in natural language processing is a constantly moving target. With the rise of deep learning, previously cutting edge techniques have given way to robust language models. Through it all, the team at Explosion AI have built a strong presence with the trifecta of spaCy, Thinc, and Prodigy to support fast and flexible data labeling to feed deep learning models and performant and scalable text processing. In this episode, founder and open source author Matthew Honnibal shares his experience growing a business around cutting edge open source libraries for the machine learning development process.",
+ "iframe": "https://cdn.podlove.org/web-player/share.html?episode=https%3A%2F%2Fwww.pythonpodcast.com%2F%3Fpodlove_player4%3D614",
+ "iframe_height": 200,
+ "thumb": "https://i.imgur.com/rpo6BuY.png",
+ "url": "https://www.pythonpodcast.com/explosion-ai-natural-language-processing-episode-256/",
+ "author": "Tobias Macey",
+ "author_links": {
+ "website": "https://www.podcastinit.com"
+ },
+ "category": ["podcasts"]
+ },
{
"type": "education",
"id": "talk-python-podcast",
@@ -1349,6 +1445,18 @@
},
"category": ["podcasts"]
},
+ {
+ "type": "education",
+ "id": "video-entity-linking",
+      "title": "Training a custom entity linking model with spaCy",
+ "author": "Sofie Van Landeghem",
+ "author_links": {
+ "twitter": "OxyKodit",
+ "github": "svlandeg"
+ },
+ "youtube": "8u57WSXVpmw",
+ "category": ["videos"]
+ },
{
"id": "adam_qas",
"title": "ADAM: Question Answering System",
@@ -2128,7 +2236,7 @@
"",
"nlp = spacy.load('en_core_web_sm')",
"nlp.add_pipe(LanguageDetector())",
- "doc = nlp('Life is like a box of chocolates. You never know what you're gonna get.')",
+ "doc = nlp('Life is like a box of chocolates. You never know what you are gonna get.')",
"",
"assert doc._.language == 'en'",
"assert doc._.language_score >= 0.8"
@@ -2183,22 +2291,22 @@
"pip": "pyate",
"code_example": [
"import spacy",
- "from pyate.term_extraction_pipeline import TermExtractionPipeline",
- "",
- "nlp = spacy.load('en_core_web_sm')",
- "nlp.add_pipe(TermExtractionPipeline())",
- "# source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/",
- "string = 'Central to the development of cancer are genetic changes that endow these “cancer cells” with many of the hallmarks of cancer, such as self-sufficient growth and resistance to anti-growth and pro-death signals. However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer. This review will discuss the reflexive relationship between cancer and inflammation with particular focus on how considering the role of inflammation in physiologic processes such as the maintenance of tissue homeostasis and repair may provide a logical framework for understanding the connection between the inflammatory response and cancer.'",
- "",
- "doc = nlp(string)",
- "print(doc._.combo_basic.sort_values(ascending=False).head(5))",
- "\"\"\"\"\"\"",
- "dysfunctional tumor 1.443147",
- "tumor suppressors 1.443147",
- "genetic changes 1.386294",
- "cancer cells 1.386294",
- "dysfunctional tumor suppressors 1.298612",
- "\"\"\"\"\"\""
+ "from pyate.term_extraction_pipeline import TermExtractionPipeline",
+ "",
+ "nlp = spacy.load('en_core_web_sm')",
+ "nlp.add_pipe(TermExtractionPipeline())",
+ "# source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1994795/",
+ "string = 'Central to the development of cancer are genetic changes that endow these “cancer cells” with many of the hallmarks of cancer, such as self-sufficient growth and resistance to anti-growth and pro-death signals. However, while the genetic changes that occur within cancer cells themselves, such as activated oncogenes or dysfunctional tumor suppressors, are responsible for many aspects of cancer development, they are not sufficient. Tumor promotion and progression are dependent on ancillary processes provided by cells of the tumor environment but that are not necessarily cancerous themselves. Inflammation has long been associated with the development of cancer. This review will discuss the reflexive relationship between cancer and inflammation with particular focus on how considering the role of inflammation in physiologic processes such as the maintenance of tissue homeostasis and repair may provide a logical framework for understanding the connection between the inflammatory response and cancer.'",
+ "",
+ "doc = nlp(string)",
+ "print(doc._.combo_basic.sort_values(ascending=False).head(5))",
+ "\"\"\"\"\"\"",
+ "dysfunctional tumor 1.443147",
+ "tumor suppressors 1.443147",
+ "genetic changes 1.386294",
+ "cancer cells 1.386294",
+ "dysfunctional tumor suppressors 1.298612",
+ "\"\"\"\"\"\""
],
"code_language": "python",
"url": "https://github.com/kevinlu1248/pyate",
@@ -2210,6 +2318,36 @@
},
"category": ["pipeline", "research"],
"tags": ["term_extraction"]
+ },
+ {
+ "id": "contextualSpellCheck",
+ "title": "Contextual Spell Check",
+ "slogan": "Contextual spell correction using BERT (bidirectional representations)",
+      "description": "This package currently focuses on Out of Vocabulary (OOV) word or non-word error (NWE) correction using a BERT model. The idea of using BERT is to use the context when correcting NWE. In the coming days, the plan is to focus on real-word errors (RWE) and to optimise the package by implementing it in Cython.",
+ "github": "R1j1t/contextualSpellCheck",
+ "pip": "contextualSpellCheck",
+ "code_example": [
+ "import spacy",
+ "import contextualSpellCheck",
+ "",
+ "nlp = spacy.load('en')",
+ "contextualSpellCheck.add_to_pipe(nlp)",
+ "doc = nlp('Income was $9.4 milion compared to the prior year of $2.7 milion.')",
+ "",
+ "print(doc._.performed_spellCheck) #Should be True",
+ "print(doc._.outcome_spellCheck) #Income was $9.4 million compared to the prior year of $2.7 million."
+ ],
+ "code_language": "python",
+ "url": "https://github.com/R1j1t/contextualSpellCheck",
+ "thumb": "https://user-images.githubusercontent.com/22280243/82760949-98e68480-9e14-11ea-952e-4738620fd9e3.png",
+ "image": "https://user-images.githubusercontent.com/22280243/82138959-2852cd00-9842-11ea-918a-49b2a7873ef6.png",
+ "author": "Rajat Goel",
+ "author_links": {
+ "github": "r1j1t",
+ "website": "https://github.com/R1j1t"
+ },
+ "category": ["pipeline", "conversational", "research"],
+      "tags": ["spell check", "correction", "preprocessing", "translation"]
}
],
diff --git a/website/src/styles/landing.module.sass b/website/src/styles/landing.module.sass
index e36e36c0a..c29c0fffb 100644
--- a/website/src/styles/landing.module.sass
+++ b/website/src/styles/landing.module.sass
@@ -86,6 +86,7 @@
.banner-content-small
display: block
+ margin-bottom: 0 !important
.banner-title
display: block
diff --git a/website/src/templates/models.js b/website/src/templates/models.js
index 845fec65d..3c5e9d2a4 100644
--- a/website/src/templates/models.js
+++ b/website/src/templates/models.js
@@ -1,4 +1,4 @@
-import React, { useEffect, useState, useMemo } from 'react'
+import React, { useEffect, useState, useMemo, Fragment } from 'react'
import { StaticQuery, graphql } from 'gatsby'
import { window } from 'browser-monads'
@@ -83,15 +83,24 @@ function formatVectors(data) {
function formatAccuracy(data) {
if (!data) return null
- const labels = { tags_acc: 'POS', ents_f: 'NER F', ents_p: 'NER P', ents_r: 'NER R' }
+ const labels = {
+ las: 'LAS',
+ uas: 'UAS',
+ tags_acc: 'TAG',
+ ents_f: 'NER F',
+ ents_p: 'NER P',
+ ents_r: 'NER R',
+ }
const isSyntax = key => ['tags_acc', 'las', 'uas'].includes(key)
const isNer = key => key.startsWith('ents_')
- return Object.keys(data).map(key => ({
- label: labels[key] || key.toUpperCase(),
- value: data[key].toFixed(2),
- help: MODEL_META[key],
- type: isNer(key) ? 'ner' : isSyntax(key) ? 'syntax' : null,
- }))
+ return Object.keys(data)
+ .filter(key => labels[key])
+ .map(key => ({
+ label: labels[key],
+ value: data[key].toFixed(2),
+ help: MODEL_META[key],
+ type: isNer(key) ? 'ner' : isSyntax(key) ? 'syntax' : null,
+ }))
}
function formatModelMeta(data) {
@@ -115,11 +124,11 @@ function formatModelMeta(data) {
function formatSources(data = []) {
const sources = data.map(s => (isString(s) ? { name: s } : s))
return sources.map(({ name, url, author }, i) => (
-        <>
+        <Fragment key={i}>
             {i > 0 && <br />}
             {name && url ? <Link to={url}>{name}</Link> : name}
             {author && ` (${author})`}
-        </>
+        </Fragment>
))
}
@@ -308,12 +317,12 @@ const Model = ({ name, langId, langName, baseUrl, repo, compatibility, hasExampl
diff --git a/website/src/templates/universe.js b/website/src/templates/universe.js
index e49e81b01..4a4e13bec 100644
--- a/website/src/templates/universe.js
+++ b/website/src/templates/universe.js
@@ -14,7 +14,7 @@ import Sidebar from '../components/sidebar'
import Section from '../components/section'
import Main from '../components/main'
import Footer from '../components/footer'
-import { H3, Label, InlineList } from '../components/typography'
+import { H3, H5, Label, InlineList } from '../components/typography'
import { YouTube, SoundCloud, Iframe } from '../components/embed'
import { github, markdownToReact } from '../components/util'
@@ -86,7 +86,10 @@ const UniverseContent = ({ content = [], categories, pageContext, location, mdxC
)
return cover ? (
@@ -95,6 +98,13 @@ const UniverseContent = ({ content = [], categories, pageContext, location, mdxC
+ ) : data.id === 'videos' ? (
+
+
+ {header}
+
{title}
+
+
) : (
{
{counts.modelLangs} languages
- pretrained word vectors
+ Pretrained word vectors
State-of-the-art speed
@@ -148,13 +148,35 @@ const Landing = ({ data }) => {
+
+
+
+
+
+
+ In this free and interactive online course you’ll learn how to
+ use spaCy to build advanced natural language understanding systems, using both
+ rule-based and machine learning approaches. It includes{' '}
+ 55 exercises featuring videos, slide decks, multiple-choice
+ questions and interactive coding practice in the browser.
+
+
Prodigy is an annotation tool so efficient that data scientists
@@ -165,25 +187,6 @@ const Landing = ({ data }) => {
update your model in real-time and chain models together to build more complex
systems.
-
-
- We were pleased to invite the spaCy community and other folks working on Natural
- Language Processing to Berlin this summer for a small and intimate event{' '}
- July 6, 2019. We booked a beautiful venue, hand-picked an
- awesome lineup of speakers and scheduled plenty of social time to get to know
- each other and exchange ideas. The YouTube playlist includes 12 talks about NLP
- research, development and applications, with keynotes by Sebastian Ruder
- (DeepMind) and Yoav Goldberg (Allen AI).
-
diff --git a/website/src/widgets/languages.js b/website/src/widgets/languages.js
index 55645f951..bb26e57cd 100644
--- a/website/src/widgets/languages.js
+++ b/website/src/widgets/languages.js
@@ -38,10 +38,10 @@ const Languages = () => (
const langs = site.siteMetadata.languages
const withModels = langs
.filter(({ models }) => models && !!models.length)
- .sort((a, b) => a.code.localeCompare(b.code))
+ .sort((a, b) => a.name.localeCompare(b.name))
const withoutModels = langs
.filter(({ models }) => !models || !models.length)
- .sort((a, b) => a.code.localeCompare(b.code))
+ .sort((a, b) => a.name.localeCompare(b.name))
const withDeps = langs.filter(({ dependencies }) => dependencies && dependencies.length)
return (
<>