diff --git a/.github/contributors/bittlingmayer.md b/.github/contributors/bittlingmayer.md new file mode 100644 index 000000000..69ec98a00 --- /dev/null +++ b/.github/contributors/bittlingmayer.md @@ -0,0 +1,107 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. 
Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Adam Bittlingmayer | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 12 Aug 2020 | +| GitHub username | bittlingmayer | +| Website (optional) | | + diff --git a/.github/contributors/graue70.md b/.github/contributors/graue70.md new file mode 100644 index 000000000..7f9aa037b --- /dev/null +++ b/.github/contributors/graue70.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. 
With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. 
+ +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Thomas | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 2020-08-11 | +| GitHub username | graue70 | +| Website (optional) | | diff --git a/.github/contributors/holubvl3.md b/.github/contributors/holubvl3.md new file mode 100644 index 000000000..f2047b103 --- /dev/null +++ b/.github/contributors/holubvl3.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. 
With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Vladimir Holubec | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 30.07.2020 | +| GitHub username | holubvl3 | +| Website (optional) | | diff --git a/.github/contributors/idoshr.md b/.github/contributors/idoshr.md new file mode 100644 index 000000000..26e901530 --- /dev/null +++ b/.github/contributors/idoshr.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. 
For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. 
This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Ido Shraga | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 20-09-2020 | +| GitHub username | idoshr | +| Website (optional) | | diff --git a/.github/contributors/jgutix.md b/.github/contributors/jgutix.md new file mode 100644 index 000000000..4bda9486b --- /dev/null +++ b/.github/contributors/jgutix.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. 
This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. 
+ +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Juan Gutiérrez | +| Company name (if applicable) | Ojtli | +| Title or role (if applicable) | | +| Date | 2020-08-28 | +| GitHub username | jgutix | +| Website (optional) | ojtli.app | diff --git a/.github/contributors/leyendecker.md b/.github/contributors/leyendecker.md new file mode 100644 index 000000000..74e6cdd80 --- /dev/null +++ b/.github/contributors/leyendecker.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. 
With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | ---------------------------- | +| Name | Gustavo Zadrozny Leyendecker | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | July 29, 2020 | +| GitHub username | leyendecker | +| Website (optional) | | diff --git a/.github/contributors/lizhe2004.md b/.github/contributors/lizhe2004.md new file mode 100644 index 000000000..6011506d6 --- /dev/null +++ b/.github/contributors/lizhe2004.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). 
The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. 
+ +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | ------------------------ | +| Name | Zhe li | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 2020-07-24 | +| GitHub username | lizhe2004 | +| Website (optional) | http://www.huahuaxia.net| diff --git a/.github/contributors/snsten.md b/.github/contributors/snsten.md new file mode 100644 index 000000000..0d7c28835 --- /dev/null +++ b/.github/contributors/snsten.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. 
This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. 
+ +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Shashank Shekhar | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 2020-08-23 | +| GitHub username | snsten | +| Website (optional) | | diff --git a/.github/contributors/solarmist.md b/.github/contributors/solarmist.md new file mode 100644 index 000000000..6bfb21696 --- /dev/null +++ b/.github/contributors/solarmist.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. 
With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. + +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | ------------------------- | +| Name | Joshua Olson | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 2020-07-22 | +| GitHub username | solarmist | +| Website (optional) | http://blog.solarmist.net | diff --git a/.github/contributors/tilusnet.md b/.github/contributors/tilusnet.md new file mode 100644 index 000000000..1618bac2e --- /dev/null +++ b/.github/contributors/tilusnet.md @@ -0,0 +1,106 @@ +# spaCy contributor agreement + +This spaCy Contributor Agreement (**"SCA"**) is based on the +[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). +The SCA applies to any contribution that you make to any product or project +managed by us (the **"project"**), and sets out the intellectual property rights +you grant to us in the contributed materials. The term **"us"** shall mean +[ExplosionAI GmbH](https://explosion.ai/legal). The term +**"you"** shall mean the person or entity identified below. + +If you agree to be bound by these terms, fill in the information requested +below and include the filled-in version with your first pull request, under the +folder [`.github/contributors/`](/.github/contributors/). 
The name of the file +should be your GitHub username, with the extension `.md`. For example, the user +example_user would create the file `.github/contributors/example_user.md`. + +Read this agreement carefully before signing. These terms and conditions +constitute a binding legal agreement. + +## Contributor Agreement + +1. The term "contribution" or "contributed materials" means any source code, +object code, patch, tool, sample, graphic, specification, manual, +documentation, or any other material posted or submitted by you to the project. + +2. With respect to any worldwide copyrights, or copyright applications and +registrations, in your contribution: + + * you hereby assign to us joint ownership, and to the extent that such + assignment is or becomes invalid, ineffective or unenforceable, you hereby + grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, + royalty-free, unrestricted license to exercise all rights under those + copyrights. This includes, at our option, the right to sublicense these same + rights to third parties through multiple levels of sublicensees or other + licensing arrangements; + + * you agree that each of us can do all things in relation to your + contribution as if each of us were the sole owners, and if one of us makes + a derivative work of your contribution, the one who makes the derivative + work (or has it made will be the sole owner of that derivative work; + + * you agree that you will not assert any moral rights in your contribution + against us, our licensees or transferees; + + * you agree that we may register a copyright in your contribution and + exercise all ownership rights associated with it; and + + * you agree that neither of us has any duty to consult with, obtain the + consent of, pay or render an accounting to the other for any use or + distribution of your contribution. + +3. With respect to any patents you own, or that you can license without payment +to any third party, you hereby grant to us a perpetual, irrevocable, +non-exclusive, worldwide, no-charge, royalty-free license to: + + * make, have made, use, sell, offer to sell, import, and otherwise transfer + your contribution in whole or in part, alone or in combination with or + included in any product, work or materials arising out of the project to + which your contribution was submitted, and + + * at our option, to sublicense these same rights to third parties through + multiple levels of sublicensees or other licensing arrangements. + +4. Except as set out above, you keep all right, title, and interest in your +contribution. The rights that you grant to us under these terms are effective +on the date you first submitted a contribution to us, even if your submission +took place before the date you sign these terms. + +5. You covenant, represent, warrant and agree that: + + * Each contribution that you submit is and shall be an original work of + authorship and you can legally grant the rights set out in this SCA; + + * to the best of your knowledge, each contribution will not violate any + third party's copyrights, trademarks, patents, or other intellectual + property rights; and + + * each contribution shall be in compliance with U.S. export control laws and + other applicable export and import laws. You agree to notify us if you + become aware of any circumstance which would make any of the foregoing + representations inaccurate in any respect. We may publicly disclose your + participation in the project, including the fact that you have signed the SCA. 
+ +6. This SCA is governed by the laws of the State of California and applicable +U.S. Federal law. Any choice of law rules will not apply. + +7. Please place an “x” on one of the applicable statement below. Please do NOT +mark both statements: + + * [x] I am signing on behalf of myself as an individual and no other person + or entity, including my employer, has or will have rights with respect to my + contributions. + + * [ ] I am signing on behalf of my employer or a legal entity and I have the + actual authority to contractually bind that entity. + +## Contributor Details + +| Field | Entry | +|------------------------------- | -------------------- | +| Name | Attila Szász | +| Company name (if applicable) | | +| Title or role (if applicable) | | +| Date | 12 Aug 2020 | +| GitHub username | tilusnet | +| Website (optional) | | diff --git a/extra/experiments/onto-joint/defaults.cfg b/extra/experiments/onto-joint/defaults.cfg index 3ab3ddaba..7954b57b5 100644 --- a/extra/experiments/onto-joint/defaults.cfg +++ b/extra/experiments/onto-joint/defaults.cfg @@ -36,7 +36,7 @@ max_length = 0 limit = 0 [training.batcher] -@batchers = "batch_by_words.v1" +@batchers = "spacy.batch_by_words.v1" discard_oversize = false tolerance = 0.2 diff --git a/extra/experiments/ptb-joint-pos-dep/defaults.cfg b/extra/experiments/ptb-joint-pos-dep/defaults.cfg index fc471ac43..8f9c5666e 100644 --- a/extra/experiments/ptb-joint-pos-dep/defaults.cfg +++ b/extra/experiments/ptb-joint-pos-dep/defaults.cfg @@ -35,7 +35,7 @@ max_length = 0 limit = 0 [training.batcher] -@batchers = "batch_by_words.v1" +@batchers = "spacy.batch_by_words.v1" discard_oversize = false tolerance = 0.2 diff --git a/licenses/3rd_party_licenses.txt b/licenses/3rd_party_licenses.txt new file mode 100644 index 000000000..0aeef5507 --- /dev/null +++ b/licenses/3rd_party_licenses.txt @@ -0,0 +1,38 @@ +Third Party Licenses for spaCy +============================== + +NumPy +----- + +* Files: setup.py + +Copyright (c) 2005-2020, NumPy Developers. +All rights reserved. + +Redistribution and use in source and binary forms, with or without +modification, are permitted provided that the following conditions are +met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + + * Redistributions in binary form must reproduce the above + copyright notice, this list of conditions and the following + disclaimer in the documentation and/or other materials provided + with the distribution. + + * Neither the name of the NumPy Developers nor the names of any + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + +THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS +"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT +LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR +A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT +OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, +SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT +LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, +DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY +THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT +(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE +OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
diff --git a/netlify.toml b/netlify.toml index 2f3e350e6..3c17b876c 100644 --- a/netlify.toml +++ b/netlify.toml @@ -24,7 +24,7 @@ redirects = [ {from = "/docs/usage/customizing-tokenizer", to = "/usage/linguistic-features#tokenization", force = true}, {from = "/docs/usage/language-processing-pipeline", to = "/usage/processing-pipelines", force = true}, {from = "/docs/usage/customizing-pipeline", to = "/usage/processing-pipelines", force = true}, - {from = "/docs/usage/training-ner", to = "/usage/training#ner", force = true}, + {from = "/docs/usage/training-ner", to = "/usage/training", force = true}, {from = "/docs/usage/tutorials", to = "/usage/examples", force = true}, {from = "/docs/usage/data-model", to = "/api", force = true}, {from = "/docs/usage/cli", to = "/api/cli", force = true}, diff --git a/spacy/about.py b/spacy/about.py index 3fe720dbc..7d0e85a17 100644 --- a/spacy/about.py +++ b/spacy/about.py @@ -1,6 +1,6 @@ # fmt: off __title__ = "spacy-nightly" -__version__ = "3.0.0a13" +__version__ = "3.0.0a14" __release__ = True __download_url__ = "https://github.com/explosion/spacy-models/releases/download" __compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json" diff --git a/spacy/cli/__init__.py b/spacy/cli/__init__.py index b47c1c16b..92cb76971 100644 --- a/spacy/cli/__init__.py +++ b/spacy/cli/__init__.py @@ -29,9 +29,9 @@ from .project.document import project_document # noqa: F401 @app.command("link", no_args_is_help=True, deprecated=True, hidden=True) def link(*args, **kwargs): - """As of spaCy v3.0, model symlinks are deprecated. You can load models - using their full names or from a directory path.""" + """As of spaCy v3.0, symlinks like "en" are deprecated. You can load trained + pipeline packages using their full names or from a directory path.""" msg.warn( - "As of spaCy v3.0, model symlinks are deprecated. You can load models " - "using their full names or from a directory path." + "As of spaCy v3.0, model symlinks are deprecated. You can load trained " + "pipeline packages using their full names or from a directory path." ) diff --git a/spacy/cli/_util.py b/spacy/cli/_util.py index cfa126cc4..0ecb5ad8f 100644 --- a/spacy/cli/_util.py +++ b/spacy/cli/_util.py @@ -25,7 +25,7 @@ COMMAND = "python -m spacy" NAME = "spacy" HELP = """spaCy Command-line Interface -DOCS: https://spacy.io/api/cli +DOCS: https://nightly.spacy.io/api/cli """ PROJECT_HELP = f"""Command-line interface for spaCy projects and templates. You'd typically start by cloning a project template to a local directory and @@ -36,7 +36,7 @@ DEBUG_HELP = """Suite of helpful commands for debugging and profiling. Includes commands to check and validate your config files, training and evaluation data, and custom model implementations. """ -INIT_HELP = """Commands for initializing configs and models.""" +INIT_HELP = """Commands for initializing configs and pipeline packages.""" # Wrappers for Typer's annotations. Initially created to set defaults and to # keep the names short, but not needed at the moment. 
diff --git a/spacy/cli/convert.py b/spacy/cli/convert.py index f73c2f2c0..ade5a3ad4 100644 --- a/spacy/cli/convert.py +++ b/spacy/cli/convert.py @@ -44,7 +44,7 @@ def convert_cli( file_type: FileTypes = Opt("spacy", "--file-type", "-t", help="Type of data to produce"), n_sents: int = Opt(1, "--n-sents", "-n", help="Number of sentences per doc (0 to disable)"), seg_sents: bool = Opt(False, "--seg-sents", "-s", help="Segment sentences (for -c ner)"), - model: Optional[str] = Opt(None, "--model", "-b", help="Model for sentence segmentation (for -s)"), + model: Optional[str] = Opt(None, "--model", "--base", "-b", help="Trained spaCy pipeline for sentence segmentation to use as base (for --seg-sents)"), morphology: bool = Opt(False, "--morphology", "-m", help="Enable appending morphology to tags"), merge_subtokens: bool = Opt(False, "--merge-subtokens", "-T", help="Merge CoNLL-U subtokens"), converter: str = Opt("auto", "--converter", "-c", help=f"Converter: {tuple(CONVERTERS.keys())}"), @@ -61,6 +61,8 @@ def convert_cli( If no output_dir is specified and the output format is JSON, the data is written to stdout, so you can pipe them forward to a JSON file: $ spacy convert some_file.conllu --file-type json > some_file.json + + DOCS: https://nightly.spacy.io/api/cli#convert """ if isinstance(file_type, FileTypes): # We get an instance of the FileTypes from the CLI so we need its string value @@ -261,6 +263,6 @@ def _get_converter(msg, converter, input_path): msg.warn( "Can't automatically detect NER format. " "Conversion may not succeed. " - "See https://spacy.io/api/cli#convert" + "See https://nightly.spacy.io/api/cli#convert" ) return converter diff --git a/spacy/cli/debug_config.py b/spacy/cli/debug_config.py index 2944cd364..7930d0674 100644 --- a/spacy/cli/debug_config.py +++ b/spacy/cli/debug_config.py @@ -31,6 +31,8 @@ def debug_config_cli( Similar as with the 'train' command, you can override settings from the config as command line options. For instance, --training.batch_size 128 overrides the value of "batch_size" in the block "[training]". + + DOCS: https://nightly.spacy.io/api/cli#debug-config """ overrides = parse_config_overrides(ctx.args) import_code(code_path) diff --git a/spacy/cli/debug_data.py b/spacy/cli/debug_data.py index 2f48a29cd..75a81e6f5 100644 --- a/spacy/cli/debug_data.py +++ b/spacy/cli/debug_data.py @@ -18,7 +18,7 @@ from .. import util NEW_LABEL_THRESHOLD = 50 # Minimum number of expected occurrences of dependency labels DEP_LABEL_THRESHOLD = 20 -# Minimum number of expected examples to train a blank model +# Minimum number of expected examples to train a new pipeline BLANK_MODEL_MIN_THRESHOLD = 100 BLANK_MODEL_THRESHOLD = 2000 @@ -47,6 +47,8 @@ def debug_data_cli( Analyze, debug and validate your training and development data. Outputs useful stats, and can help you find problems like invalid entity annotations, cyclic dependencies, low data labels and more. 
+ + DOCS: https://nightly.spacy.io/api/cli#debug-data """ if ctx.command.name == "debug-data": msg.warn( @@ -148,7 +150,7 @@ def debug_data( msg.text(f"Language: {config['nlp']['lang']}") msg.text(f"Training pipeline: {', '.join(pipeline)}") if resume_components: - msg.text(f"Components from other models: {', '.join(resume_components)}") + msg.text(f"Components from other pipelines: {', '.join(resume_components)}") if frozen_components: msg.text(f"Frozen components: {', '.join(frozen_components)}") msg.text(f"{len(train_dataset)} training docs") @@ -164,9 +166,7 @@ def debug_data( # TODO: make this feedback more fine-grained and report on updated # components vs. blank components if not resume_components and len(train_dataset) < BLANK_MODEL_THRESHOLD: - text = ( - f"Low number of examples to train from a blank model ({len(train_dataset)})" - ) + text = f"Low number of examples to train a new pipeline ({len(train_dataset)})" if len(train_dataset) < BLANK_MODEL_MIN_THRESHOLD: msg.fail(text) else: @@ -214,7 +214,7 @@ def debug_data( show=verbose, ) else: - msg.info("No word vectors present in the model") + msg.info("No word vectors present in the package") if "ner" in factory_names: # Get all unique NER labels present in the data diff --git a/spacy/cli/debug_model.py b/spacy/cli/debug_model.py index ed8d54655..5bd4e008f 100644 --- a/spacy/cli/debug_model.py +++ b/spacy/cli/debug_model.py @@ -30,6 +30,8 @@ def debug_model_cli( """ Analyze a Thinc model implementation. Includes checks for internal structure and activations during training. + + DOCS: https://nightly.spacy.io/api/cli#debug-model """ if use_gpu >= 0: msg.info("Using GPU") diff --git a/spacy/cli/download.py b/spacy/cli/download.py index e55e6e40e..036aeab17 100644 --- a/spacy/cli/download.py +++ b/spacy/cli/download.py @@ -17,16 +17,19 @@ from ..errors import OLD_MODEL_SHORTCUTS def download_cli( # fmt: off ctx: typer.Context, - model: str = Arg(..., help="Name of model to download"), + model: str = Arg(..., help="Name of pipeline package to download"), direct: bool = Opt(False, "--direct", "-d", "-D", help="Force direct download of name + version"), # fmt: on ): """ - Download compatible model from default download path using pip. If --direct - flag is set, the command expects the full model name with version. - For direct downloads, the compatibility check will be skipped. All + Download compatible trained pipeline from the default download path using + pip. If --direct flag is set, the command expects the full package name with + version. For direct downloads, the compatibility check will be skipped. All additional arguments provided to this command will be passed to `pip install` - on model installation. + on package installation. + + DOCS: https://nightly.spacy.io/api/cli#download + AVAILABLE PACKAGES: https://spacy.io/models """ download(model, direct, *ctx.args) @@ -34,11 +37,11 @@ def download_cli( def download(model: str, direct: bool = False, *pip_args) -> None: if not is_package("spacy") and "--no-deps" not in pip_args: msg.warn( - "Skipping model package dependencies and setting `--no-deps`. " + "Skipping pipeline package dependencies and setting `--no-deps`. " "You don't seem to have the spaCy package itself installed " "(maybe because you've built from source?), so installing the " - "model dependencies would cause spaCy to be downloaded, which " - "probably isn't what you want. If the model package has other " + "package dependencies would cause spaCy to be downloaded, which " + "probably isn't what you want. 
If the pipeline package has other " "dependencies, you'll have to install them manually." ) pip_args = pip_args + ("--no-deps",) @@ -53,7 +56,7 @@ def download(model: str, direct: bool = False, *pip_args) -> None: if model in OLD_MODEL_SHORTCUTS: msg.warn( f"As of spaCy v3.0, shortcuts like '{model}' are deprecated. Please" - f"use the full model name '{OLD_MODEL_SHORTCUTS[model]}' instead." + f"use the full pipeline package name '{OLD_MODEL_SHORTCUTS[model]}' instead." ) model_name = OLD_MODEL_SHORTCUTS[model] compatibility = get_compatibility() @@ -61,7 +64,7 @@ def download(model: str, direct: bool = False, *pip_args) -> None: download_model(dl_tpl.format(m=model_name, v=version), pip_args) msg.good( "Download and installation successful", - f"You can now load the model via spacy.load('{model_name}')", + f"You can now load the package via spacy.load('{model_name}')", ) @@ -71,16 +74,16 @@ def get_compatibility() -> dict: if r.status_code != 200: msg.fail( f"Server error ({r.status_code})", - f"Couldn't fetch compatibility table. Please find a model for your spaCy " + f"Couldn't fetch compatibility table. Please find a package for your spaCy " f"installation (v{about.__version__}), and download it manually. " f"For more details, see the documentation: " - f"https://spacy.io/usage/models", + f"https://nightly.spacy.io/usage/models", exits=1, ) comp_table = r.json() comp = comp_table["spacy"] if version not in comp: - msg.fail(f"No compatible models found for v{version} of spaCy", exits=1) + msg.fail(f"No compatible packages found for v{version} of spaCy", exits=1) return comp[version] @@ -88,7 +91,7 @@ def get_version(model: str, comp: dict) -> str: model = get_base_version(model) if model not in comp: msg.fail( - f"No compatible model found for '{model}' (spaCy v{about.__version__})", + f"No compatible package found for '{model}' (spaCy v{about.__version__})", exits=1, ) return comp[model][0] diff --git a/spacy/cli/evaluate.py b/spacy/cli/evaluate.py index 3847c74f3..c5cbab09a 100644 --- a/spacy/cli/evaluate.py +++ b/spacy/cli/evaluate.py @@ -26,13 +26,16 @@ def evaluate_cli( # fmt: on ): """ - Evaluate a model. Expects a loadable spaCy model and evaluation data in the - binary .spacy format. The --gold-preproc option sets up the evaluation - examples with gold-standard sentences and tokens for the predictions. Gold - preprocessing helps the annotations align to the tokenization, and may - result in sequences of more consistent length. However, it may reduce - runtime accuracy due to train/test skew. To render a sample of dependency - parses in a HTML file, set as output directory as the displacy_path argument. + Evaluate a trained pipeline. Expects a loadable spaCy pipeline and evaluation + data in the binary .spacy format. The --gold-preproc option sets up the + evaluation examples with gold-standard sentences and tokens for the + predictions. Gold preprocessing helps the annotations align to the + tokenization, and may result in sequences of more consistent length. However, + it may reduce runtime accuracy due to train/test skew. To render a sample of + dependency parses in a HTML file, set as output directory as the + displacy_path argument. + + DOCS: https://nightly.spacy.io/api/cli#evaluate """ evaluate( model, diff --git a/spacy/cli/info.py b/spacy/cli/info.py index ca082b939..2b87163c2 100644 --- a/spacy/cli/info.py +++ b/spacy/cli/info.py @@ -12,15 +12,17 @@ from .. 
import about @app.command("info") def info_cli( # fmt: off - model: Optional[str] = Arg(None, help="Optional model name"), + model: Optional[str] = Arg(None, help="Optional loadable spaCy pipeline"), markdown: bool = Opt(False, "--markdown", "-md", help="Generate Markdown for GitHub issues"), silent: bool = Opt(False, "--silent", "-s", "-S", help="Don't print anything (just return)"), # fmt: on ): """ - Print info about spaCy installation. If a model is speficied as an argument, - print model information. Flag --markdown prints details in Markdown for easy + Print info about spaCy installation. If a pipeline is speficied as an argument, + print its meta information. Flag --markdown prints details in Markdown for easy copy-pasting to GitHub issues. + + DOCS: https://nightly.spacy.io/api/cli#info """ info(model, markdown=markdown, silent=silent) @@ -30,14 +32,16 @@ def info( ) -> Union[str, dict]: msg = Printer(no_print=silent, pretty=not silent) if model: - title = f"Info about model '{model}'" + title = f"Info about pipeline '{model}'" data = info_model(model, silent=silent) else: title = "Info about spaCy" data = info_spacy() raw_data = {k.lower().replace(" ", "_"): v for k, v in data.items()} - if "Models" in data and isinstance(data["Models"], dict): - data["Models"] = ", ".join(f"{n} ({v})" for n, v in data["Models"].items()) + if "Pipelines" in data and isinstance(data["Pipelines"], dict): + data["Pipelines"] = ", ".join( + f"{n} ({v})" for n, v in data["Pipelines"].items() + ) markdown_data = get_markdown(data, title=title) if markdown: if not silent: @@ -63,7 +67,7 @@ def info_spacy() -> Dict[str, any]: "Location": str(Path(__file__).parent.parent), "Platform": platform.platform(), "Python version": platform.python_version(), - "Models": all_models, + "Pipelines": all_models, } @@ -81,7 +85,7 @@ def info_model(model: str, *, silent: bool = True) -> Dict[str, Any]: model_path = model meta_path = model_path / "meta.json" if not meta_path.is_file(): - msg.fail("Can't find model meta.json", meta_path, exits=1) + msg.fail("Can't find pipeline meta.json", meta_path, exits=1) meta = srsly.read_json(meta_path) if model_path.resolve() != model_path: meta["source"] = str(model_path.resolve()) diff --git a/spacy/cli/init_config.py b/spacy/cli/init_config.py index 1e1e55e06..584ca7f64 100644 --- a/spacy/cli/init_config.py +++ b/spacy/cli/init_config.py @@ -27,7 +27,7 @@ def init_config_cli( # fmt: off output_file: Path = Arg(..., help="File to save config.cfg to or - for stdout (will only output config and no additional logging info)", allow_dash=True), lang: Optional[str] = Opt("en", "--lang", "-l", help="Two-letter code of the language to use"), - pipeline: Optional[str] = Opt("tagger,parser,ner", "--pipeline", "-p", help="Comma-separated names of trainable pipeline components to include in the model (without 'tok2vec' or 'transformer')"), + pipeline: Optional[str] = Opt("tagger,parser,ner", "--pipeline", "-p", help="Comma-separated names of trainable pipeline components to include (without 'tok2vec' or 'transformer')"), optimize: Optimizations = Opt(Optimizations.efficiency.value, "--optimize", "-o", help="Whether to optimize for efficiency (faster inference, smaller model, lower memory consumption) or higher accuracy (potentially larger and slower model). This will impact the choice of architecture, pretrained weights and related hyperparameters."), cpu: bool = Opt(False, "--cpu", "-C", help="Whether the model needs to run on CPU. 
This will impact the choice of architecture, pretrained weights and related hyperparameters."), # fmt: on @@ -37,6 +37,8 @@ def init_config_cli( specified via the CLI arguments, this command generates a config with the optimal settings for you use case. This includes the choice of architecture, pretrained weights and related hyperparameters. + + DOCS: https://nightly.spacy.io/api/cli#init-config """ if isinstance(optimize, Optimizations): # instance of enum from the CLI optimize = optimize.value @@ -59,6 +61,8 @@ def init_fill_config_cli( functions for their default values and update the base config. This command can be used with a config generated via the training quickstart widget: https://nightly.spacy.io/usage/training#quickstart + + DOCS: https://nightly.spacy.io/api/cli#init-fill-config """ fill_config(output_file, base_path, pretraining=pretraining, diff=diff) @@ -168,7 +172,7 @@ def save_config( output_file.parent.mkdir(parents=True) config.to_disk(output_file, interpolate=False) msg.good("Saved config", output_file) - msg.text("You can now add your data and train your model:") + msg.text("You can now add your data and train your pipeline:") variables = ["--paths.train ./train.spacy", "--paths.dev ./dev.spacy"] if not no_print: print(f"{COMMAND} train {output_file.parts[-1]} {' '.join(variables)}") diff --git a/spacy/cli/init_model.py b/spacy/cli/init_model.py index 4fdd2bbbc..5f06fd895 100644 --- a/spacy/cli/init_model.py +++ b/spacy/cli/init_model.py @@ -28,7 +28,7 @@ except ImportError: DEFAULT_OOV_PROB = -20 -@init_cli.command("model") +@init_cli.command("vocab") @app.command( "init-model", context_settings={"allow_extra_args": True, "ignore_unknown_options": True}, @@ -37,8 +37,8 @@ DEFAULT_OOV_PROB = -20 def init_model_cli( # fmt: off ctx: typer.Context, # This is only used to read additional arguments - lang: str = Arg(..., help="Model language"), - output_dir: Path = Arg(..., help="Model output directory"), + lang: str = Arg(..., help="Pipeline language"), + output_dir: Path = Arg(..., help="Pipeline output directory"), freqs_loc: Optional[Path] = Arg(None, help="Location of words frequencies file", exists=True), clusters_loc: Optional[Path] = Opt(None, "--clusters-loc", "-c", help="Optional location of brown clusters data", exists=True), jsonl_loc: Optional[Path] = Opt(None, "--jsonl-loc", "-j", help="Location of JSONL-formatted attributes file", exists=True), @@ -46,19 +46,22 @@ def init_model_cli( prune_vectors: int = Opt(-1, "--prune-vectors", "-V", help="Optional number of vectors to prune to"), truncate_vectors: int = Opt(0, "--truncate-vectors", "-t", help="Optional number of vectors to truncate to when reading in vectors file"), vectors_name: Optional[str] = Opt(None, "--vectors-name", "-vn", help="Optional name for the word vectors, e.g. en_core_web_lg.vectors"), - model_name: Optional[str] = Opt(None, "--model-name", "-mn", help="Optional name for the model meta"), - base_model: Optional[str] = Opt(None, "--base-model", "-b", help="Base model (for languages with custom tokenizers)") + model_name: Optional[str] = Opt(None, "--meta-name", "-mn", help="Optional name of the package for the pipeline meta"), + base_model: Optional[str] = Opt(None, "--base", "-b", help="Name of or path to base pipeline to start with (mostly relevant for pipelines with custom tokenizers)") # fmt: on ): """ - Create a new model from raw data. If vectors are provided in Word2Vec format, - they can be either a .txt or zipped as a .zip or .tar.gz. 
+ Create a new blank pipeline directory with vocab and vectors from raw data. + If vectors are provided in Word2Vec format, they can be either a .txt or + zipped as a .zip or .tar.gz. + + DOCS: https://nightly.spacy.io/api/cli#init-vocab """ if ctx.command.name == "init-model": msg.warn( - "The init-model command is now available via the 'init model' " - "subcommand (without the hyphen). You can run python -m spacy init " - "--help for an overview of the other available initialization commands." + "The init-model command is now called 'init vocab'. You can run " + "'python -m spacy init --help' for an overview of the other " + "available initialization commands." ) init_model( lang, @@ -115,10 +118,10 @@ def init_model( msg.fail("Can't find words frequencies file", freqs_loc, exits=1) lex_attrs = read_attrs_from_deprecated(msg, freqs_loc, clusters_loc) - with msg.loading("Creating model..."): + with msg.loading("Creating blank pipeline..."): nlp = create_model(lang, lex_attrs, name=model_name, base_model=base_model) - msg.good("Successfully created model") + msg.good("Successfully created blank pipeline") if vectors_loc is not None: add_vectors( msg, nlp, vectors_loc, truncate_vectors, prune_vectors, vectors_name @@ -242,7 +245,8 @@ def add_vectors( if vectors_data is not None: nlp.vocab.vectors = Vectors(data=vectors_data, keys=vector_keys) if name is None: - nlp.vocab.vectors.name = f"{nlp.meta['lang']}_model.vectors" + # TODO: Is this correct? Does this matter? + nlp.vocab.vectors.name = f"{nlp.meta['lang']}_{nlp.meta['name']}.vectors" else: nlp.vocab.vectors.name = name nlp.meta["vectors"]["name"] = nlp.vocab.vectors.name diff --git a/spacy/cli/package.py b/spacy/cli/package.py index 4e5038951..c457b3e17 100644 --- a/spacy/cli/package.py +++ b/spacy/cli/package.py @@ -14,23 +14,25 @@ from .. import about @app.command("package") def package_cli( # fmt: off - input_dir: Path = Arg(..., help="Directory with model data", exists=True, file_okay=False), + input_dir: Path = Arg(..., help="Directory with pipeline data", exists=True, file_okay=False), output_dir: Path = Arg(..., help="Output parent directory", exists=True, file_okay=False), meta_path: Optional[Path] = Opt(None, "--meta-path", "--meta", "-m", help="Path to meta.json", exists=True, dir_okay=False), create_meta: bool = Opt(False, "--create-meta", "-c", "-C", help="Create meta.json, even if one exists"), version: Optional[str] = Opt(None, "--version", "-v", help="Package version to override meta"), no_sdist: bool = Opt(False, "--no-sdist", "-NS", help="Don't build .tar.gz sdist, can be set if you want to run this step manually"), - force: bool = Opt(False, "--force", "-f", "-F", help="Force overwriting existing model in output directory"), + force: bool = Opt(False, "--force", "-f", "-F", help="Force overwriting existing data in output directory"), # fmt: on ): """ - Generate an installable Python package for a model. Includes model data, + Generate an installable Python package for a pipeline. Includes binary data, meta and required installation files. A new directory will be created in the - specified output directory, and model data will be copied over. If + specified output directory, and the data will be copied over. If --create-meta is set and a meta.json already exists in the output directory, the existing values will be used as the defaults in the command-line prompt. After packaging, "python setup.py sdist" is run in the package directory, which will create a .tar.gz archive that can be installed via "pip install". 
+ + DOCS: https://nightly.spacy.io/api/cli#package """ package( input_dir, @@ -59,14 +61,14 @@ def package( output_path = util.ensure_path(output_dir) meta_path = util.ensure_path(meta_path) if not input_path or not input_path.exists(): - msg.fail("Can't locate model data", input_path, exits=1) + msg.fail("Can't locate pipeline data", input_path, exits=1) if not output_path or not output_path.exists(): msg.fail("Output directory not found", output_path, exits=1) if meta_path and not meta_path.exists(): - msg.fail("Can't find model meta.json", meta_path, exits=1) + msg.fail("Can't find pipeline meta.json", meta_path, exits=1) meta_path = meta_path or input_dir / "meta.json" if not meta_path.exists() or not meta_path.is_file(): - msg.fail("Can't load model meta.json", meta_path, exits=1) + msg.fail("Can't load pipeline meta.json", meta_path, exits=1) meta = srsly.read_json(meta_path) meta = get_meta(input_dir, meta) if version is not None: @@ -77,7 +79,7 @@ def package( meta = generate_meta(meta, msg) errors = validate(ModelMetaSchema, meta) if errors: - msg.fail("Invalid model meta.json") + msg.fail("Invalid pipeline meta.json") print("\n".join(errors)) sys.exit(1) model_name = meta["lang"] + "_" + meta["name"] @@ -118,7 +120,7 @@ def get_meta( ) -> Dict[str, Any]: meta = { "lang": "en", - "name": "model", + "name": "pipeline", "version": "0.0.0", "description": "", "author": "", @@ -143,10 +145,10 @@ def get_meta( def generate_meta(existing_meta: Dict[str, Any], msg: Printer) -> Dict[str, Any]: meta = existing_meta or {} settings = [ - ("lang", "Model language", meta.get("lang", "en")), - ("name", "Model name", meta.get("name", "model")), - ("version", "Model version", meta.get("version", "0.0.0")), - ("description", "Model description", meta.get("description", None)), + ("lang", "Pipeline language", meta.get("lang", "en")), + ("name", "Pipeline name", meta.get("name", "pipeline")), + ("version", "Package version", meta.get("version", "0.0.0")), + ("description", "Package description", meta.get("description", None)), ("author", "Author", meta.get("author", None)), ("email", "Author email", meta.get("email", None)), ("url", "Author website", meta.get("url", None)), @@ -154,8 +156,8 @@ def generate_meta(existing_meta: Dict[str, Any], msg: Printer) -> Dict[str, Any] ] msg.divider("Generating meta.json") msg.text( - "Enter the package settings for your model. The following information " - "will be read from your model data: pipeline, vectors." + "Enter the package settings for your pipeline. The following information " + "will be read from your pipeline data: pipeline, vectors." 
) for setting, desc, default in settings: response = get_raw_input(desc, default) diff --git a/spacy/cli/pretrain.py b/spacy/cli/pretrain.py index 5f20773e1..828e5f08e 100644 --- a/spacy/cli/pretrain.py +++ b/spacy/cli/pretrain.py @@ -31,7 +31,7 @@ def pretrain_cli( # fmt: off ctx: typer.Context, # This is only used to read additional arguments texts_loc: Path = Arg(..., help="Path to JSONL file with raw texts to learn from, with text provided as the key 'text' or tokens as the key 'tokens'", exists=True), - output_dir: Path = Arg(..., help="Directory to write models to on each epoch"), + output_dir: Path = Arg(..., help="Directory to write weights to on each epoch"), config_path: Path = Arg(..., help="Path to config file", exists=True, dir_okay=False), code_path: Optional[Path] = Opt(None, "--code-path", "-c", help="Path to Python file with additional code (registered functions) to be imported"), resume_path: Optional[Path] = Opt(None, "--resume-path", "-r", help="Path to pretrained weights from which to resume pretraining"), @@ -57,6 +57,8 @@ def pretrain_cli( To load the weights back in during 'spacy train', you need to ensure all settings are the same between pretraining and training. Ideally, this is done by using the same config file for both commands. + + DOCS: https://nightly.spacy.io/api/cli#pretrain """ overrides = parse_config_overrides(ctx.args) import_code(code_path) @@ -376,10 +378,9 @@ def verify_cli_args(texts_loc, output_dir, config_path, resume_path, epoch_resum if output_dir.exists() and [p for p in output_dir.iterdir()]: if resume_path: msg.warn( - "Output directory is not empty. ", - "If you're resuming a run from a previous model in this directory, " - "the old models for the consecutive epochs will be overwritten " - "with the new ones.", + "Output directory is not empty.", + "If you're resuming a run in this directory, the old weights " + "for the consecutive epochs will be overwritten with the new ones.", ) else: msg.warn( diff --git a/spacy/cli/profile.py b/spacy/cli/profile.py index 14d8435fe..43226730d 100644 --- a/spacy/cli/profile.py +++ b/spacy/cli/profile.py @@ -19,7 +19,7 @@ from ..util import load_model def profile_cli( # fmt: off ctx: typer.Context, # This is only used to read current calling context - model: str = Arg(..., help="Model to load"), + model: str = Arg(..., help="Trained pipeline to load"), inputs: Optional[Path] = Arg(None, help="Location of input file. '-' for stdin.", exists=True, allow_dash=True), n_texts: int = Opt(10000, "--n-texts", "-n", help="Maximum number of texts to use if available"), # fmt: on @@ -29,6 +29,8 @@ def profile_cli( Input should be formatted as one JSON object per line with a key "text". It can either be provided as a JSONL file, or be read from sys.sytdin. If no input file is specified, the IMDB dataset is loaded via Thinc. 
+ + DOCS: https://nightly.spacy.io/api/cli#debug-profile """ if ctx.parent.command.name == NAME: # called as top-level command msg.warn( @@ -60,9 +62,9 @@ def profile(model: str, inputs: Optional[Path] = None, n_texts: int = 10000) -> inputs, _ = zip(*imdb_train) msg.info(f"Loaded IMDB dataset and using {n_inputs} examples") inputs = inputs[:n_inputs] - with msg.loading(f"Loading model '{model}'..."): + with msg.loading(f"Loading pipeline '{model}'..."): nlp = load_model(model) - msg.good(f"Loaded model '{model}'") + msg.good(f"Loaded pipeline '{model}'") texts = list(itertools.islice(inputs, n_texts)) cProfile.runctx("parse_texts(nlp, texts)", globals(), locals(), "Profile.prof") s = pstats.Stats("Profile.prof") diff --git a/spacy/cli/project/assets.py b/spacy/cli/project/assets.py index e33a82acc..2b623675d 100644 --- a/spacy/cli/project/assets.py +++ b/spacy/cli/project/assets.py @@ -20,6 +20,8 @@ def project_assets_cli( defined in the "assets" section of the project.yml. If a checksum is provided in the project.yml, the file is only downloaded if no local file with the same checksum exists. + + DOCS: https://nightly.spacy.io/api/cli#project-assets """ project_assets(project_dir) diff --git a/spacy/cli/project/clone.py b/spacy/cli/project/clone.py index 751c389bc..a419feb0f 100644 --- a/spacy/cli/project/clone.py +++ b/spacy/cli/project/clone.py @@ -22,6 +22,8 @@ def project_clone_cli( only download the files from the given subdirectory. The GitHub repo defaults to the official spaCy template repo, but can be customized (including using a private repo). + + DOCS: https://nightly.spacy.io/api/cli#project-clone """ if dest is None: dest = Path.cwd() / name diff --git a/spacy/cli/project/document.py b/spacy/cli/project/document.py index ab345ecd8..d0265029a 100644 --- a/spacy/cli/project/document.py +++ b/spacy/cli/project/document.py @@ -43,6 +43,8 @@ def project_document_cli( hidden markers are added so you can add custom content before or after the auto-generated section and only the auto-generated docs will be replaced when you re-run the command. + + DOCS: https://nightly.spacy.io/api/cli#project-document """ project_document(project_dir, output_file, no_emoji=no_emoji) diff --git a/spacy/cli/project/dvc.py b/spacy/cli/project/dvc.py index de0480bad..541253234 100644 --- a/spacy/cli/project/dvc.py +++ b/spacy/cli/project/dvc.py @@ -31,7 +31,10 @@ def project_update_dvc_cli( """Auto-generate Data Version Control (DVC) config. A DVC project can only define one pipeline, so you need to specify one workflow defined in the project.yml. If no workflow is specified, the first defined - workflow is used. The DVC config will only be updated if the project.yml changed. + workflow is used. The DVC config will only be updated if the project.yml + changed. + + DOCS: https://nightly.spacy.io/api/cli#project-dvc """ project_update_dvc(project_dir, workflow, verbose=verbose, force=force) diff --git a/spacy/cli/project/pull.py b/spacy/cli/project/pull.py index 6c0f32171..edcd410bd 100644 --- a/spacy/cli/project/pull.py +++ b/spacy/cli/project/pull.py @@ -17,7 +17,9 @@ def project_pull_cli( """Retrieve available precomputed outputs from a remote storage. You can alias remotes in your project.yml by mapping them to storage paths. A storage can be anything that the smart-open library can upload to, e.g. - gcs, aws, ssh, local directories etc + AWS, Google Cloud Storage, SSH, local directories etc. 
+ + DOCS: https://nightly.spacy.io/api/cli#project-pull """ for url, output_path in project_pull(project_dir, remote): if url is not None: @@ -38,5 +40,6 @@ def project_pull(project_dir: Path, remote: str, *, verbose: bool = False): url = storage.pull(output_path, command_hash=cmd_hash) yield url, output_path - if cmd.get("outptus") and all(loc.exists() for loc in cmd["outputs"]): + out_locs = [project_dir / out for out in cmd.get("outputs", [])] + if all(loc.exists() for loc in out_locs): update_lockfile(project_dir, cmd) diff --git a/spacy/cli/project/push.py b/spacy/cli/project/push.py index e09ee6e1a..26495412d 100644 --- a/spacy/cli/project/push.py +++ b/spacy/cli/project/push.py @@ -13,9 +13,12 @@ def project_push_cli( project_dir: Path = Arg(Path.cwd(), help="Location of project directory. Defaults to current working directory.", exists=True, file_okay=False), # fmt: on ): - """Persist outputs to a remote storage. You can alias remotes in your project.yml - by mapping them to storage paths. A storage can be anything that the smart-open - library can upload to, e.g. gcs, aws, ssh, local directories etc + """Persist outputs to a remote storage. You can alias remotes in your + project.yml by mapping them to storage paths. A storage can be anything that + the smart-open library can upload to, e.g. AWS, Google Cloud Storage, SSH, + local directories etc. + + DOCS: https://nightly.spacy.io/api/cli#project-push """ for output_path, url in project_push(project_dir, remote): if url is None: @@ -42,10 +45,19 @@ def project_push(project_dir: Path, remote: str): ) for output_path in cmd.get("outputs", []): output_loc = project_dir / output_path - if output_loc.exists(): + if output_loc.exists() and _is_not_empty_dir(output_loc): url = storage.push( output_path, command_hash=cmd_hash, content_hash=get_content_hash(output_loc), ) yield output_path, url + + +def _is_not_empty_dir(loc: Path): + if not loc.is_dir(): + return True + elif any(_is_not_empty_dir(child) for child in loc.iterdir()): + return True + else: + return False diff --git a/spacy/cli/project/run.py b/spacy/cli/project/run.py index bacd7f04b..eb7b8cc5b 100644 --- a/spacy/cli/project/run.py +++ b/spacy/cli/project/run.py @@ -24,6 +24,8 @@ def project_run_cli( name is specified, all commands in the workflow are run, in order. If commands define dependencies and/or outputs, they will only be re-run if state has changed. 
+ + DOCS: https://nightly.spacy.io/api/cli#project-run """ if show_help or not subcommand: print_run_help(project_dir, subcommand) diff --git a/spacy/cli/templates/quickstart_training.jinja b/spacy/cli/templates/quickstart_training.jinja index fa9bb6d76..199aae217 100644 --- a/spacy/cli/templates/quickstart_training.jinja +++ b/spacy/cli/templates/quickstart_training.jinja @@ -29,7 +29,7 @@ name = "{{ transformer["name"] }}" tokenizer_config = {"use_fast": true} [components.transformer.model.get_spans] -@span_getters = "strided_spans.v1" +@span_getters = "spacy-transformers.strided_spans.v1" window = 128 stride = 96 @@ -186,11 +186,14 @@ accumulate_gradient = {{ transformer["size_factor"] }} [training.optimizer] @optimizers = "Adam.v1" + +{% if use_transformer -%} [training.optimizer.learn_rate] @schedules = "warmup_linear.v1" warmup_steps = 250 total_steps = 20000 initial_rate = 5e-5 +{% endif %} [training.train_corpus] @readers = "spacy.Corpus.v1" @@ -204,13 +207,13 @@ max_length = 0 {% if use_transformer %} [training.batcher] -@batchers = "batch_by_padded.v1" +@batchers = "spacy.batch_by_padded.v1" discard_oversize = true size = 2000 buffer = 256 {%- else %} [training.batcher] -@batchers = "batch_by_words.v1" +@batchers = "spacy.batch_by_words.v1" discard_oversize = false tolerance = 0.2 diff --git a/spacy/cli/train.py b/spacy/cli/train.py index 4ce02286a..6be47fa39 100644 --- a/spacy/cli/train.py +++ b/spacy/cli/train.py @@ -26,7 +26,7 @@ def train_cli( # fmt: off ctx: typer.Context, # This is only used to read additional arguments config_path: Path = Arg(..., help="Path to config file", exists=True), - output_path: Optional[Path] = Opt(None, "--output", "--output-path", "-o", help="Output directory to store model in"), + output_path: Optional[Path] = Opt(None, "--output", "--output-path", "-o", help="Output directory to store trained pipeline in"), code_path: Optional[Path] = Opt(None, "--code-path", "-c", help="Path to Python file with additional code (registered functions) to be imported"), verbose: bool = Opt(False, "--verbose", "-V", "-VV", help="Display more information for debugging purposes"), use_gpu: int = Opt(-1, "--gpu-id", "-g", help="GPU ID or -1 for CPU"), @@ -34,7 +34,7 @@ def train_cli( # fmt: on ): """ - Train or update a spaCy model. Requires data in spaCy's binary format. To + Train or update a spaCy pipeline. Requires data in spaCy's binary format. To convert data from other formats, use the `spacy convert` command. The config file includes all settings and hyperparameters used during traing. To override settings in the config, e.g. settings that point to local @@ -44,6 +44,8 @@ def train_cli( lets you pass in a Python file that's imported before training. It can be used to register custom functions and architectures that can then be referenced in the config. + + DOCS: https://nightly.spacy.io/api/cli#train """ util.logger.setLevel(logging.DEBUG if verbose else logging.ERROR) verify_cli_args(config_path, output_path) @@ -113,12 +115,12 @@ def train( # Load morph rules nlp.vocab.morphology.load_morph_exceptions(morph_rules) - # Load a pretrained tok2vec model - cf. CLI command 'pretrain' + # Load pretrained tok2vec weights - cf. 
CLI command 'pretrain' if weights_data is not None: tok2vec_path = config["pretraining"].get("tok2vec_model", None) if tok2vec_path is None: msg.fail( - f"To use a pretrained tok2vec model, the config needs to specify which " + f"To pretrained tok2vec weights, the config needs to specify which " f"tok2vec layer to load in the setting [pretraining.tok2vec_model].", exits=1, ) @@ -183,7 +185,7 @@ def train( nlp.to_disk(final_model_path) else: nlp.to_disk(final_model_path) - msg.good(f"Saved model to output directory {final_model_path}") + msg.good(f"Saved pipeline to output directory {final_model_path}") def create_train_batches(iterator, batcher, max_epochs: int): diff --git a/spacy/cli/validate.py b/spacy/cli/validate.py index e6ba284df..9a75ed6f3 100644 --- a/spacy/cli/validate.py +++ b/spacy/cli/validate.py @@ -13,9 +13,11 @@ from ..util import get_package_path, get_model_meta, is_compatible_version @app.command("validate") def validate_cli(): """ - Validate the currently installed models and spaCy version. Checks if the - installed models are compatible and shows upgrade instructions if available. - Should be run after `pip install -U spacy`. + Validate the currently installed pipeline packages and spaCy version. Checks + if the installed packages are compatible and shows upgrade instructions if + available. Should be run after `pip install -U spacy`. + + DOCS: https://nightly.spacy.io/api/cli#validate """ validate() @@ -25,13 +27,13 @@ def validate() -> None: spacy_version = get_base_version(about.__version__) current_compat = compat.get(spacy_version, {}) if not current_compat: - msg.warn(f"No compatible models found for v{spacy_version} of spaCy") + msg.warn(f"No compatible packages found for v{spacy_version} of spaCy") incompat_models = {d["name"] for _, d in model_pkgs.items() if not d["compat"]} na_models = [m for m in incompat_models if m not in current_compat] update_models = [m for m in incompat_models if m in current_compat] spacy_dir = Path(__file__).parent.parent - msg.divider(f"Installed models (spaCy v{about.__version__})") + msg.divider(f"Installed pipeline packages (spaCy v{about.__version__})") msg.info(f"spaCy installation: {spacy_dir}") if model_pkgs: @@ -47,15 +49,15 @@ def validate() -> None: rows.append((data["name"], data["spacy"], version, comp)) msg.table(rows, header=header) else: - msg.text("No models found in your current environment.", exits=0) + msg.text("No pipeline packages found in your current environment.", exits=0) if update_models: msg.divider("Install updates") - msg.text("Use the following commands to update the model packages:") + msg.text("Use the following commands to update the packages:") cmd = "python -m spacy download {}" print("\n".join([cmd.format(pkg) for pkg in update_models]) + "\n") if na_models: msg.info( - f"The following models are custom spaCy models or not " + f"The following packages are custom spaCy pipelines or not " f"available for spaCy v{about.__version__}:", ", ".join(na_models), ) diff --git a/spacy/default_config.cfg b/spacy/default_config.cfg index d76ef630d..9507f0f0a 100644 --- a/spacy/default_config.cfg +++ b/spacy/default_config.cfg @@ -69,7 +69,7 @@ max_length = 2000 limit = 0 [training.batcher] -@batchers = "batch_by_words.v1" +@batchers = "spacy.batch_by_words.v1" discard_oversize = false tolerance = 0.2 diff --git a/spacy/displacy/__init__.py b/spacy/displacy/__init__.py index 2df2bd61c..0e80c3b5f 100644 --- a/spacy/displacy/__init__.py +++ b/spacy/displacy/__init__.py @@ -1,8 +1,8 @@ """ spaCy's built in 
visualization suite for dependencies and named entities. -DOCS: https://spacy.io/api/top-level#displacy -USAGE: https://spacy.io/usage/visualizers +DOCS: https://nightly.spacy.io/api/top-level#displacy +USAGE: https://nightly.spacy.io/usage/visualizers """ from typing import Union, Iterable, Optional, Dict, Any, Callable import warnings @@ -37,8 +37,8 @@ def render( manual (bool): Don't parse `Doc` and instead expect a dict/list of dicts. RETURNS (str): Rendered HTML markup. - DOCS: https://spacy.io/api/top-level#displacy.render - USAGE: https://spacy.io/usage/visualizers + DOCS: https://nightly.spacy.io/api/top-level#displacy.render + USAGE: https://nightly.spacy.io/usage/visualizers """ factories = { "dep": (DependencyRenderer, parse_deps), @@ -88,8 +88,8 @@ def serve( port (int): Port to serve visualisation. host (str): Host to serve visualisation. - DOCS: https://spacy.io/api/top-level#displacy.serve - USAGE: https://spacy.io/usage/visualizers + DOCS: https://nightly.spacy.io/api/top-level#displacy.serve + USAGE: https://nightly.spacy.io/usage/visualizers """ from wsgiref import simple_server diff --git a/spacy/displacy/render.py b/spacy/displacy/render.py index 07550f9aa..ba56beca3 100644 --- a/spacy/displacy/render.py +++ b/spacy/displacy/render.py @@ -249,6 +249,12 @@ class EntityRenderer: colors = dict(DEFAULT_LABEL_COLORS) user_colors = registry.displacy_colors.get_all() for user_color in user_colors.values(): + if callable(user_color): + # Since this comes from the function registry, we want to make + # sure we support functions that *return* a dict of colors + user_color = user_color() + if not isinstance(user_color, dict): + raise ValueError(Errors.E925.format(obj=type(user_color))) colors.update(user_color) colors.update(options.get("colors", {})) self.default_color = DEFAULT_ENTITY_COLOR @@ -323,7 +329,11 @@ class EntityRenderer: else: markup += entity offset = end - markup += escape_html(text[offset:]) + fragments = text[offset:].split("\n") + for i, fragment in enumerate(fragments): + markup += escape_html(fragment) + if len(fragments) > 1 and i != len(fragments) - 1: + markup += "
" markup = TPL_ENTS.format(content=markup, dir=self.direction) if title: markup = TPL_TITLE.format(title=title) + markup diff --git a/spacy/errors.py b/spacy/errors.py index be71de820..bad3e83e4 100644 --- a/spacy/errors.py +++ b/spacy/errors.py @@ -22,7 +22,7 @@ class Warnings: "generate a dependency visualization for it. Make sure the Doc " "was processed with a model that supports dependency parsing, and " "not just a language class like `English()`. For more info, see " - "the docs:\nhttps://spacy.io/usage/models") + "the docs:\nhttps://nightly.spacy.io/usage/models") W006 = ("No entities to visualize found in Doc object. If this is " "surprising to you, make sure the Doc was processed using a model " "that supports named entity recognition, and check the `doc.ents` " @@ -76,6 +76,10 @@ class Warnings: "If this is surprising, make sure you have the spacy-lookups-data " "package installed. The languages with lexeme normalization tables " "are currently: {langs}") + W034 = ("Please install the package spacy-lookups-data in order to include " + "the default lexeme normalization table for the language '{lang}'.") + W035 = ('Discarding subpattern "{pattern}" due to an unrecognized ' + "attribute or operator.") # TODO: fix numbering after merging develop into master W090 = ("Could not locate any binary .spacy files in path '{path}'.") @@ -147,7 +151,7 @@ class Errors: E010 = ("Word vectors set to length 0. This may be because you don't have " "a model installed or loaded, or because your model doesn't " "include word vectors. For more info, see the docs:\n" - "https://spacy.io/usage/models") + "https://nightly.spacy.io/usage/models") E011 = ("Unknown operator: '{op}'. Options: {opts}") E012 = ("Cannot add pattern for zero tokens to matcher.\nKey: {key}") E014 = ("Unknown tag ID: {tag}") @@ -181,7 +185,7 @@ class Errors: "list of (unicode, bool) tuples. Got bytes instance: {value}") E029 = ("noun_chunks requires the dependency parse, which requires a " "statistical model to be installed and loaded. For more info, see " - "the documentation:\nhttps://spacy.io/usage/models") + "the documentation:\nhttps://nightly.spacy.io/usage/models") E030 = ("Sentence boundaries unset. You can add the 'sentencizer' " "component to the pipeline with: " "nlp.add_pipe('sentencizer'). " @@ -284,17 +288,17 @@ class Errors: "Span objects, or dicts if set to manual=True.") E097 = ("Invalid pattern: expected token pattern (list of dicts) or " "phrase pattern (string) but got:\n{pattern}") - E098 = ("Invalid pattern specified: expected both SPEC and PATTERN.") - E099 = ("First node of pattern should be a root node. The root should " - "only contain NODE_NAME.") - E100 = ("Nodes apart from the root should contain NODE_NAME, NBOR_NAME and " - "NBOR_RELOP.") - E101 = ("NODE_NAME should be a new node and NBOR_NAME should already have " + E098 = ("Invalid pattern: expected both RIGHT_ID and RIGHT_ATTRS.") + E099 = ("Invalid pattern: the first node of pattern should be an anchor " + "node. The node should only contain RIGHT_ID and RIGHT_ATTRS.") + E100 = ("Nodes other than the anchor node should all contain LEFT_ID, " + "REL_OP and RIGHT_ID.") + E101 = ("RIGHT_ID should be a new node and LEFT_ID should already have " "have been declared in previous edges.") E102 = ("Can't merge non-disjoint spans. '{token}' is already part of " "tokens to merge. 
If you want to find the longest non-overlapping " "spans, you can use the util.filter_spans helper:\n" - "https://spacy.io/api/top-level#util.filter_spans") + "https://nightly.spacy.io/api/top-level#util.filter_spans") E103 = ("Trying to set conflicting doc.ents: '{span1}' and '{span2}'. A " "token can only be part of one entity, so make sure the entities " "you're setting don't overlap.") @@ -364,10 +368,10 @@ class Errors: E137 = ("Expected 'dict' type, but got '{type}' from '{line}'. Make sure " "to provide a valid JSON object as input with either the `text` " "or `tokens` key. For more info, see the docs:\n" - "https://spacy.io/api/cli#pretrain-jsonl") + "https://nightly.spacy.io/api/cli#pretrain-jsonl") E138 = ("Invalid JSONL format for raw text '{text}'. Make sure the input " "includes either the `text` or `tokens` key. For more info, see " - "the docs:\nhttps://spacy.io/api/cli#pretrain-jsonl") + "the docs:\nhttps://nightly.spacy.io/api/cli#pretrain-jsonl") E139 = ("Knowledge Base for component '{name}' is empty. Use the methods " "kb.add_entity and kb.add_alias to add entries.") E140 = ("The list of entities, prior probabilities and entity vectors " @@ -474,8 +478,13 @@ class Errors: E198 = ("Unable to return {n} most similar vectors for the current vectors " "table, which contains {n_rows} vectors.") E199 = ("Unable to merge 0-length span at doc[{start}:{end}].") + E200 = ("Specifying a base model with a pretrained component '{component}' " + "can not be combined with adding a pretrained Tok2Vec layer.") + E201 = ("Span index out of range.") # TODO: fix numbering after merging develop into master + E925 = ("Invalid color values for displaCy visualizer: expected dictionary " + "mapping label names to colors but got: {obj}") E926 = ("It looks like you're trying to modify nlp.{attr} directly. This " "doesn't work because it's an immutable computed property. If you " "need to modify the pipeline, use the built-in methods like " @@ -652,6 +661,9 @@ class Errors: "'{chunk}'. Tokenizer exceptions are only allowed to specify " "`ORTH` and `NORM`.") E1006 = ("Unable to initialize {name} model with 0 labels.") + E1007 = ("Unsupported DependencyMatcher operator '{op}'.") + E1008 = ("Invalid pattern: each pattern should be a list of dicts. 
Check " + "that you are providing a list of patterns as `List[List[dict]]`.") @add_codes diff --git a/spacy/gold/batchers.py b/spacy/gold/batchers.py index ec1f35815..c54242eae 100644 --- a/spacy/gold/batchers.py +++ b/spacy/gold/batchers.py @@ -11,7 +11,7 @@ ItemT = TypeVar("ItemT") BatcherT = Callable[[Iterable[ItemT]], Iterable[List[ItemT]]] -@registry.batchers("batch_by_padded.v1") +@registry.batchers("spacy.batch_by_padded.v1") def configure_minibatch_by_padded_size( *, size: Sizing, @@ -46,7 +46,7 @@ def configure_minibatch_by_padded_size( ) -@registry.batchers("batch_by_words.v1") +@registry.batchers("spacy.batch_by_words.v1") def configure_minibatch_by_words( *, size: Sizing, @@ -70,7 +70,7 @@ def configure_minibatch_by_words( ) -@registry.batchers("batch_by_sequence.v1") +@registry.batchers("spacy.batch_by_sequence.v1") def configure_minibatch( size: Sizing, get_length: Optional[Callable[[ItemT], int]] = None ) -> BatcherT: diff --git a/spacy/gold/converters/conll_ner2docs.py b/spacy/gold/converters/conll_ner2docs.py index 0b348142a..c04a77f07 100644 --- a/spacy/gold/converters/conll_ner2docs.py +++ b/spacy/gold/converters/conll_ner2docs.py @@ -106,7 +106,7 @@ def conll_ner2docs( raise ValueError( "The token-per-line NER file is not formatted correctly. " "Try checking whitespace and delimiters. See " - "https://spacy.io/api/cli#convert" + "https://nightly.spacy.io/api/cli#convert" ) length = len(cols[0]) words.extend(cols[0]) diff --git a/spacy/gold/converters/iob2docs.py b/spacy/gold/converters/iob2docs.py index c7e243397..eebf1266b 100644 --- a/spacy/gold/converters/iob2docs.py +++ b/spacy/gold/converters/iob2docs.py @@ -44,7 +44,7 @@ def read_iob(raw_sents, vocab, n_sents): sent_tags = ["-"] * len(sent_words) else: raise ValueError( - "The sentence-per-line IOB/IOB2 file is not formatted correctly. Try checking whitespace and delimiters. See https://spacy.io/api/cli#convert" + "The sentence-per-line IOB/IOB2 file is not formatted correctly. Try checking whitespace and delimiters. See https://nightly.spacy.io/api/cli#convert" ) words.extend(sent_words) tags.extend(sent_tags) diff --git a/spacy/gold/corpus.py b/spacy/gold/corpus.py index 1046da1e6..545f01eaa 100644 --- a/spacy/gold/corpus.py +++ b/spacy/gold/corpus.py @@ -38,7 +38,7 @@ class Corpus: limit (int): Limit corpus to a subset of examples, e.g. for debugging. Defaults to 0, which indicates no limit. - DOCS: https://spacy.io/api/corpus + DOCS: https://nightly.spacy.io/api/corpus """ def __init__( @@ -83,7 +83,7 @@ class Corpus: nlp (Language): The current nlp object. YIELDS (Example): The examples. - DOCS: https://spacy.io/api/corpus#call + DOCS: https://nightly.spacy.io/api/corpus#call """ ref_docs = self.read_docbin(nlp.vocab, self.walk_corpus(self.path)) if self.gold_preproc: diff --git a/spacy/kb.pyx b/spacy/kb.pyx index 3b8017a0c..b24ed3a20 100644 --- a/spacy/kb.pyx +++ b/spacy/kb.pyx @@ -21,7 +21,7 @@ cdef class Candidate: algorithm which will disambiguate the various candidates to the correct one. Each candidate (alias, entity) pair is assigned to a certain prior probability. - DOCS: https://spacy.io/api/kb/#candidate_init + DOCS: https://nightly.spacy.io/api/kb/#candidate_init """ def __init__(self, KnowledgeBase kb, entity_hash, entity_freq, entity_vector, alias_hash, prior_prob): @@ -79,7 +79,7 @@ cdef class KnowledgeBase: """A `KnowledgeBase` instance stores unique identifiers for entities and their textual aliases, to support entity linking of named entities to real-world concepts. 
- DOCS: https://spacy.io/api/kb + DOCS: https://nightly.spacy.io/api/kb """ def __init__(self, Vocab vocab, entity_vector_length): diff --git a/spacy/lang/cs/__init__.py b/spacy/lang/cs/__init__.py index a4b546b13..0c35e2288 100644 --- a/spacy/lang/cs/__init__.py +++ b/spacy/lang/cs/__init__.py @@ -1,9 +1,11 @@ from .stop_words import STOP_WORDS +from .lex_attrs import LEX_ATTRS from ...language import Language class CzechDefaults(Language.Defaults): stop_words = STOP_WORDS + lex_attr_getters = LEX_ATTRS class Czech(Language): diff --git a/spacy/lang/cs/examples.py b/spacy/lang/cs/examples.py new file mode 100644 index 000000000..a30b5ac14 --- /dev/null +++ b/spacy/lang/cs/examples.py @@ -0,0 +1,38 @@ +""" +Example sentences to test spaCy and its language models. +>>> from spacy.lang.cs.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "Máma mele maso.", + "Příliš žluťoučký kůň úpěl ďábelské ódy.", + "ArcGIS je geografický informační systém určený pro práci s prostorovými daty.", + "Může data vytvářet a spravovat, ale především je dokáže analyzovat, najít v nich nové vztahy a vše přehledně vizualizovat.", + "Dnes je krásné počasí.", + "Nestihl autobus, protože pozdě vstal z postele.", + "Než budeš jíst, jdi si umýt ruce.", + "Dnes je neděle.", + "Škola začíná v 8:00.", + "Poslední autobus jede v jedenáct hodin večer.", + "V roce 2020 se téměř zastavila světová ekonomika.", + "Praha je hlavní město České republiky.", + "Kdy půjdeš ven?", + "Kam pojedete na dovolenou?", + "Kolik stojí iPhone 12?", + "Průměrná mzda je 30000 Kč.", + "1. ledna 1993 byla založena Česká republika.", + "Co se stalo 21.8.1968?", + "Moje telefonní číslo je 712 345 678.", + "Můj pes má blechy.", + "Když bude přes noc více než 20°, tak nás čeká tropická noc.", + "Kolik bylo letos tropických nocí?", + "Jak to mám udělat?", + "Bydlíme ve čtvrtém patře.", + "Vysílají 30. 
sezonu seriálu Simpsonovi.", + "Adresa ČVUT je Thákurova 7, 166 29, Praha 6.", + "Jaké PSČ má Praha 1?", + "PSČ Prahy 1 je 110 00.", + "Za 20 minut jede vlak.", +] diff --git a/spacy/lang/cs/lex_attrs.py b/spacy/lang/cs/lex_attrs.py new file mode 100644 index 000000000..530d1d5eb --- /dev/null +++ b/spacy/lang/cs/lex_attrs.py @@ -0,0 +1,61 @@ +from ...attrs import LIKE_NUM + +_num_words = [ + "nula", + "jedna", + "dva", + "tři", + "čtyři", + "pět", + "šest", + "sedm", + "osm", + "devět", + "deset", + "jedenáct", + "dvanáct", + "třináct", + "čtrnáct", + "patnáct", + "šestnáct", + "sedmnáct", + "osmnáct", + "devatenáct", + "dvacet", + "třicet", + "čtyřicet", + "padesát", + "šedesát", + "sedmdesát", + "osmdesát", + "devadesát", + "sto", + "tisíc", + "milion", + "miliarda", + "bilion", + "biliarda", + "trilion", + "triliarda", + "kvadrilion", + "kvadriliarda", + "kvintilion", +] + + +def like_num(text): + if text.startswith(("+", "-", "±", "~")): + text = text[1:] + text = text.replace(",", "").replace(".", "") + if text.isdigit(): + return True + if text.count("/") == 1: + num, denom = text.split("/") + if num.isdigit() and denom.isdigit(): + return True + if text.lower() in _num_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git a/spacy/lang/cs/stop_words.py b/spacy/lang/cs/stop_words.py index 70aab030b..f61f424f6 100644 --- a/spacy/lang/cs/stop_words.py +++ b/spacy/lang/cs/stop_words.py @@ -1,14 +1,23 @@ # Source: https://github.com/Alir3z4/stop-words +# Source: https://github.com/stopwords-iso/stopwords-cs/blob/master/stopwords-cs.txt STOP_WORDS = set( """ -ačkoli +a +aby ahoj +ačkoli ale +alespoň anebo +ani +aniž ano +atd. +atp. asi aspoň +až během bez beze @@ -21,12 +30,14 @@ budeš budete budou budu +by byl byla byli bylo byly bys +být čau chce chceme @@ -35,14 +46,21 @@ chcete chci chtějí chtít -chut' +chuť chuti co +což +cz +či +článek +článku +články čtrnáct čtyři dál dále daleko +další děkovat děkujeme děkuji @@ -50,6 +68,7 @@ den deset devatenáct devět +dnes do dobrý docela @@ -57,9 +76,15 @@ dva dvacet dvanáct dvě +email +ho hodně +i já jak +jakmile +jako +jakož jde je jeden @@ -69,25 +94,39 @@ jedno jednou jedou jeho +jehož +jej její jejich +jejichž +jehož +jelikož jemu jen jenom +jenž +jež ještě jestli jestliže +ještě +ji jí jich jím +jim jimi jinak -jsem +jiné +již jsi jsme +jsem jsou jste +k kam +každý kde kdo kdy @@ -96,10 +135,13 @@ ke kolik kromě která +kterak +kterou které kteří který kvůli +ku má mají málo @@ -110,8 +152,10 @@ máte mé mě mezi +mi mí mít +mne mně mnou moc @@ -134,6 +178,7 @@ nás náš naše naši +načež ne ně nebo @@ -141,6 +186,7 @@ nebyl nebyla nebyli nebyly +nechť něco nedělá nedělají @@ -150,6 +196,7 @@ neděláš neděláte nějak nejsi +nejsou někde někdo nemají @@ -157,15 +204,22 @@ nemáme nemáte neměl němu +němuž není nestačí +ně nevadí +nové +nový +noví než nic nich +ní ním nimi nula +o od ode on @@ -179,22 +233,37 @@ pak patnáct pět po +pod +pokud pořád +pouze potom pozdě +pravé před +přede přes -přese +přece pro proč prosím prostě +proto proti +první +právě protože +při +přičemž rovně +s se sedm sedmnáct +si +sice +skoro +sic šest šestnáct skoro @@ -203,41 +272,69 @@ smí snad spolu sta +svůj +své +svá +svých +svým +svými +svůj sté sto +strana ta tady tak takhle taky +také +takže tam -tamhle -tamhleto +támhle +támhleto tamto tě tebe tebou -ted' +teď tedy ten +tento +této ti +tím +tímto tisíc tisíce to tobě tohle +tohoto +tom +tomto +tomu +tomuto toto třeba tři třináct trošku +trochu +tu +tuto tvá tvé tvoje tvůj ty +tyto 
+těm +těma +těmi +u určitě už +v vám vámi vás @@ -247,13 +344,19 @@ vaši ve večer vedle +více vlastně +však +všechen všechno všichni vůbec vy vždy +z +zda za +zde zač zatímco ze diff --git a/spacy/lang/cs/test_text.py b/spacy/lang/cs/test_text.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/lang/en/lex_attrs.py b/spacy/lang/en/lex_attrs.py index 975e6b392..fcc7c6bf2 100644 --- a/spacy/lang/en/lex_attrs.py +++ b/spacy/lang/en/lex_attrs.py @@ -8,6 +8,14 @@ _num_words = [ "fifty", "sixty", "seventy", "eighty", "ninety", "hundred", "thousand", "million", "billion", "trillion", "quadrillion", "gajillion", "bazillion" ] +_ordinal_words = [ + "first", "second", "third", "fourth", "fifth", "sixth", "seventh", "eighth", + "ninth", "tenth", "eleventh", "twelfth", "thirteenth", "fourteenth", + "fifteenth", "sixteenth", "seventeenth", "eighteenth", "nineteenth", + "twentieth", "thirtieth", "fortieth", "fiftieth", "sixtieth", "seventieth", + "eightieth", "ninetieth", "hundredth", "thousandth", "millionth", "billionth", + "trillionth", "quadrillionth", "gajillionth", "bazillionth" +] # fmt: on @@ -21,8 +29,15 @@ def like_num(text: str) -> bool: num, denom = text.split("/") if num.isdigit() and denom.isdigit(): return True - if text.lower() in _num_words: + text_lower = text.lower() + if text_lower in _num_words: return True + # Check ordinal number + if text_lower in _ordinal_words: + return True + if text_lower.endswith("th"): + if text_lower[:-2].isdigit(): + return True return False diff --git a/spacy/lang/es/syntax_iterators.py b/spacy/lang/es/syntax_iterators.py index c33412693..427f1f203 100644 --- a/spacy/lang/es/syntax_iterators.py +++ b/spacy/lang/es/syntax_iterators.py @@ -19,8 +19,7 @@ def noun_chunks(doclike: Union[Doc, Span]) -> Iterator[Span]: np_left_deps = [doc.vocab.strings.add(label) for label in left_labels] np_right_deps = [doc.vocab.strings.add(label) for label in right_labels] stop_deps = [doc.vocab.strings.add(label) for label in stop_labels] - token = doc[0] - while token and token.i < len(doclike): + for token in doclike: if token.pos in [PROPN, NOUN, PRON]: left, right = noun_bounds( doc, token, np_left_deps, np_right_deps, stop_deps diff --git a/spacy/lang/he/__init__.py b/spacy/lang/he/__init__.py index 70bd9cf45..e0adc3293 100644 --- a/spacy/lang/he/__init__.py +++ b/spacy/lang/he/__init__.py @@ -1,9 +1,11 @@ from .stop_words import STOP_WORDS +from .lex_attrs import LEX_ATTRS from ...language import Language class HebrewDefaults(Language.Defaults): stop_words = STOP_WORDS + lex_attr_getters = LEX_ATTRS writing_system = {"direction": "rtl", "has_case": False, "has_letters": True} diff --git a/spacy/lang/he/lex_attrs.py b/spacy/lang/he/lex_attrs.py new file mode 100644 index 000000000..2953e7592 --- /dev/null +++ b/spacy/lang/he/lex_attrs.py @@ -0,0 +1,95 @@ +from ...attrs import LIKE_NUM + +_num_words = [ + "אפס", + "אחד", + "אחת", + "שתיים", + "שתים", + "שניים", + "שנים", + "שלוש", + "שלושה", + "ארבע", + "ארבעה", + "חמש", + "חמישה", + "שש", + "שישה", + "שבע", + "שבעה", + "שמונה", + "תשע", + "תשעה", + "עשר", + "עשרה", + "אחד עשר", + "אחת עשרה", + "שנים עשר", + "שתים עשרה", + "שלושה עשר", + "שלוש עשרה", + "ארבעה עשר", + "ארבע עשרה", + "חמישה עשר", + "חמש עשרה", + "ששה עשר", + "שש עשרה", + "שבעה עשר", + "שבע עשרה", + "שמונה עשר", + "שמונה עשרה", + "תשעה עשר", + "תשע עשרה", + "עשרים", + "שלושים", + "ארבעים", + "חמישים", + "שישים", + "שבעים", + "שמונים", + "תשעים", + "מאה", + "אלף", + "מליון", + "מליארד", + "טריליון", +] + + +_ordinal_words = [ + "ראשון", + 
"שני", + "שלישי", + "רביעי", + "חמישי", + "שישי", + "שביעי", + "שמיני", + "תשיעי", + "עשירי", +] + + +def like_num(text): + if text.startswith(("+", "-", "±", "~")): + text = text[1:] + text = text.replace(",", "").replace(".", "") + if text.isdigit(): + return True + + if text.count("/") == 1: + num, denom = text.split("/") + if num.isdigit() and denom.isdigit(): + return True + + if text in _num_words: + return True + + # CHeck ordinal number + if text in _ordinal_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git a/spacy/lang/he/stop_words.py b/spacy/lang/he/stop_words.py index 2745460a7..23bb5176d 100644 --- a/spacy/lang/he/stop_words.py +++ b/spacy/lang/he/stop_words.py @@ -39,7 +39,6 @@ STOP_WORDS = set( בין עם עד -נגר על אל מול @@ -58,7 +57,7 @@ STOP_WORDS = set( עליך עלינו עליכם -לעיכן +עליכן עליהם עליהן כל @@ -67,8 +66,8 @@ STOP_WORDS = set( כך ככה כזה +כזאת זה -זות אותי אותה אותם @@ -91,7 +90,7 @@ STOP_WORDS = set( איתכן יהיה תהיה -היתי +הייתי היתה היה להיות @@ -101,8 +100,6 @@ STOP_WORDS = set( עצמם עצמן עצמנו -עצמהם -עצמהן מי מה איפה @@ -153,6 +150,7 @@ STOP_WORDS = set( לאו אי כלל +בעד נגד אם עם @@ -196,7 +194,6 @@ STOP_WORDS = set( אשר ואילו למרות -אס כמו כפי אז @@ -204,8 +201,8 @@ STOP_WORDS = set( כן לכן לפיכך -מאד עז +מאוד מעט מעטים במידה diff --git a/spacy/lang/hi/examples.py b/spacy/lang/hi/examples.py index ecb0b328c..1443b4908 100644 --- a/spacy/lang/hi/examples.py +++ b/spacy/lang/hi/examples.py @@ -15,4 +15,6 @@ sentences = [ "फ्रांस के राष्ट्रपति कौन हैं?", "संयुक्त राज्यों की राजधानी क्या है?", "बराक ओबामा का जन्म कब हुआ था?", + "जवाहरलाल नेहरू भारत के पहले प्रधानमंत्री हैं।", + "राजेंद्र प्रसाद, भारत के पहले राष्ट्रपति, दो कार्यकाल के लिए कार्यालय रखने वाले एकमात्र व्यक्ति हैं।", ] diff --git a/spacy/lang/ja/__init__.py b/spacy/lang/ja/__init__.py index 051415455..117514c09 100644 --- a/spacy/lang/ja/__init__.py +++ b/spacy/lang/ja/__init__.py @@ -254,7 +254,7 @@ def get_dtokens_and_spaces(dtokens, text, gap_tag="空白"): return text_dtokens, text_spaces # align words and dtokens by referring text, and insert gap tokens for the space char spans - for word, dtoken in zip(words, dtokens): + for i, (word, dtoken) in enumerate(zip(words, dtokens)): # skip all space tokens if word.isspace(): continue @@ -275,7 +275,7 @@ def get_dtokens_and_spaces(dtokens, text, gap_tag="空白"): text_spaces.append(False) text_pos += len(word) # poll a space char after the word - if text_pos < len(text) and text[text_pos] == " ": + if i + 1 < len(dtokens) and dtokens[i + 1].surface == " ": text_spaces[-1] = True text_pos += 1 diff --git a/spacy/lang/lex_attrs.py b/spacy/lang/lex_attrs.py index 088a05ef4..12016c273 100644 --- a/spacy/lang/lex_attrs.py +++ b/spacy/lang/lex_attrs.py @@ -8,7 +8,7 @@ from .. 
import attrs _like_email = re.compile(r"([a-zA-Z0-9_.+-]+@[a-zA-Z0-9-]+\.[a-zA-Z0-9-.]+)").match _tlds = set( "com|org|edu|gov|net|mil|aero|asia|biz|cat|coop|info|int|jobs|mobi|museum|" - "name|pro|tel|travel|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|" + "name|pro|tel|travel|xyz|icu|xxx|ac|ad|ae|af|ag|ai|al|am|an|ao|aq|ar|as|at|au|aw|" "ax|az|ba|bb|bd|be|bf|bg|bh|bi|bj|bm|bn|bo|br|bs|bt|bv|bw|by|bz|ca|cc|cd|" "cf|cg|ch|ci|ck|cl|cm|cn|co|cr|cs|cu|cv|cx|cy|cz|dd|de|dj|dk|dm|do|dz|ec|" "ee|eg|eh|er|es|et|eu|fi|fj|fk|fm|fo|fr|ga|gb|gd|ge|gf|gg|gh|gi|gl|gm|gn|" diff --git a/spacy/lang/ne/stop_words.py b/spacy/lang/ne/stop_words.py index f008697d0..8470297b9 100644 --- a/spacy/lang/ne/stop_words.py +++ b/spacy/lang/ne/stop_words.py @@ -1,7 +1,3 @@ -# coding: utf8 -from __future__ import unicode_literals - - # Source: https://github.com/sanjaalcorps/NepaliStopWords/blob/master/NepaliStopWords.txt STOP_WORDS = set( diff --git a/spacy/lang/sa/__init__.py b/spacy/lang/sa/__init__.py new file mode 100644 index 000000000..345137817 --- /dev/null +++ b/spacy/lang/sa/__init__.py @@ -0,0 +1,16 @@ +from .stop_words import STOP_WORDS +from .lex_attrs import LEX_ATTRS +from ...language import Language + + +class SanskritDefaults(Language.Defaults): + lex_attr_getters = LEX_ATTRS + stop_words = STOP_WORDS + + +class Sanskrit(Language): + lang = "sa" + Defaults = SanskritDefaults + + +__all__ = ["Sanskrit"] diff --git a/spacy/lang/sa/examples.py b/spacy/lang/sa/examples.py new file mode 100644 index 000000000..60243c04c --- /dev/null +++ b/spacy/lang/sa/examples.py @@ -0,0 +1,15 @@ +""" +Example sentences to test spaCy and its language models. + +>>> from spacy.lang.sa.examples import sentences +>>> docs = nlp.pipe(sentences) +""" + + +sentences = [ + "अभ्यावहति कल्याणं विविधं वाक् सुभाषिता ।", + "मनसि व्याकुले चक्षुः पश्यन्नपि न पश्यति ।", + "यस्य बुद्धिर्बलं तस्य निर्बुद्धेस्तु कुतो बलम्?", + "परो अपि हितवान् बन्धुः बन्धुः अपि अहितः परः ।", + "अहितः देहजः व्याधिः हितम् आरण्यं औषधम् ॥", +] diff --git a/spacy/lang/sa/lex_attrs.py b/spacy/lang/sa/lex_attrs.py new file mode 100644 index 000000000..f2b51650b --- /dev/null +++ b/spacy/lang/sa/lex_attrs.py @@ -0,0 +1,127 @@ +from ...attrs import LIKE_NUM + +# reference 1: https://en.wikibooks.org/wiki/Sanskrit/Numbers + +_num_words = [ + "एकः", + "द्वौ", + "त्रयः", + "चत्वारः", + "पञ्च", + "षट्", + "सप्त", + "अष्ट", + "नव", + "दश", + "एकादश", + "द्वादश", + "त्रयोदश", + "चतुर्दश", + "पञ्चदश", + "षोडश", + "सप्तदश", + "अष्टादश", + "एकान्नविंशति", + "विंशति", + "एकाविंशति", + "द्वाविंशति", + "त्रयोविंशति", + "चतुर्विंशति", + "पञ्चविंशति", + "षड्विंशति", + "सप्तविंशति", + "अष्टाविंशति", + "एकान्नत्रिंशत्", + "त्रिंशत्", + "एकत्रिंशत्", + "द्वात्रिंशत्", + "त्रयत्रिंशत्", + "चतुस्त्रिंशत्", + "पञ्चत्रिंशत्", + "षट्त्रिंशत्", + "सप्तत्रिंशत्", + "अष्टात्रिंशत्", + "एकोनचत्वारिंशत्", + "चत्वारिंशत्", + "एकचत्वारिंशत्", + "द्वाचत्वारिंशत्", + "त्रयश्चत्वारिंशत्", + "चतुश्चत्वारिंशत्", + "पञ्चचत्वारिंशत्", + "षट्चत्वारिंशत्", + "सप्तचत्वारिंशत्", + "अष्टाचत्वारिंशत्", + "एकोनपञ्चाशत्", + "पञ्चाशत्", + "एकपञ्चाशत्", + "द्विपञ्चाशत्", + "त्रिपञ्चाशत्", + "चतुःपञ्चाशत्", + "पञ्चपञ्चाशत्", + "षट्पञ्चाशत्", + "सप्तपञ्चाशत्", + "अष्टपञ्चाशत्", + "एकोनषष्ठिः", + "षष्ठिः", + "एकषष्ठिः", + "द्विषष्ठिः", + "त्रिषष्ठिः", + "चतुःषष्ठिः", + "पञ्चषष्ठिः", + "षट्षष्ठिः", + "सप्तषष्ठिः", + "अष्टषष्ठिः", + "एकोनसप्ततिः", + "सप्ततिः", + "एकसप्ततिः", + "द्विसप्ततिः", + "त्रिसप्ततिः", + "चतुःसप्ततिः", + "पञ्चसप्ततिः", + "षट्सप्ततिः", + "सप्तसप्ततिः", + "अष्टसप्ततिः", + "एकोनाशीतिः", 
+ "अशीतिः", + "एकाशीतिः", + "द्वशीतिः", + "त्र्यशीतिः", + "चतुरशीतिः", + "पञ्चाशीतिः", + "षडशीतिः", + "सप्ताशीतिः", + "अष्टाशीतिः", + "एकोननवतिः", + "नवतिः", + "एकनवतिः", + "द्विनवतिः", + "त्रिनवतिः", + "चतुर्नवतिः", + "पञ्चनवतिः", + "षण्णवतिः", + "सप्तनवतिः", + "अष्टनवतिः", + "एकोनशतम्", + "शतम्", +] + + +def like_num(text): + """ + Check if text resembles a number + """ + if text.startswith(("+", "-", "±", "~")): + text = text[1:] + text = text.replace(",", "").replace(".", "") + if text.isdigit(): + return True + if text.count("/") == 1: + num, denom = text.split("/") + if num.isdigit() and denom.isdigit(): + return True + if text in _num_words: + return True + return False + + +LEX_ATTRS = {LIKE_NUM: like_num} diff --git a/spacy/lang/sa/stop_words.py b/spacy/lang/sa/stop_words.py new file mode 100644 index 000000000..30302a14d --- /dev/null +++ b/spacy/lang/sa/stop_words.py @@ -0,0 +1,515 @@ +# Source: https://gist.github.com/Akhilesh28/fe8b8e180f64b72e64751bc31cb6d323 + +STOP_WORDS = set( + """ +अहम् +आवाम् +वयम् +माम् मा +आवाम् +अस्मान् नः +मया +आवाभ्याम् +अस्माभिस् +मह्यम् मे +आवाभ्याम् नौ +अस्मभ्यम् नः +मत् +आवाभ्याम् +अस्मत् +मम मे +आवयोः +अस्माकम् नः +मयि +आवयोः +अस्मासु +त्वम् +युवाम् +यूयम् +त्वाम् त्वा +युवाम् वाम् +युष्मान् वः +त्वया +युवाभ्याम् +युष्माभिः +तुभ्यम् ते +युवाभ्याम् वाम् +युष्मभ्यम् वः +त्वत् +युवाभ्याम् +युष्मत् +तव ते +युवयोः वाम् +युष्माकम् वः +त्वयि +युवयोः +युष्मासु +सः +तौ +ते +तम् +तौ +तान् +तेन +ताभ्याम् +तैः +तस्मै +ताभ्याम् +तेभ्यः +तस्मात् +ताभ्याम् +तेभ्यः +तस्य +तयोः +तेषाम् +तस्मिन् +तयोः +तेषु +सा +ते +ताः +ताम् +ते +ताः +तया +ताभ्याम् +ताभिः +तस्यै +ताभ्याम् +ताभ्यः +तस्याः +ताभ्याम् +ताभ्यः +तस्य +तयोः +तासाम् +तस्याम् +तयोः +तासु +तत् +ते +तानि +तत् +ते +तानि +तया +ताभ्याम् +ताभिः +तस्यै +ताभ्याम् +ताभ्यः +तस्याः +ताभ्याम् +ताभ्यः +तस्य +तयोः +तासाम् +तस्याम् +तयोः +तासु +अयम् +इमौ +इमे +इमम् +इमौ +इमान् +अनेन +आभ्याम् +एभिः +अस्मै +आभ्याम् +एभ्यः +अस्मात् +आभ्याम् +एभ्यः +अस्य +अनयोः +एषाम् +अस्मिन् +अनयोः +एषु +इयम् +इमे +इमाः +इमाम् +इमे +इमाः +अनया +आभ्याम् +आभिः +अस्यै +आभ्याम् +आभ्यः +अस्याः +आभ्याम् +आभ्यः +अस्याः +अनयोः +आसाम् +अस्याम् +अनयोः +आसु +इदम् +इमे +इमानि +इदम् +इमे +इमानि +अनेन +आभ्याम् +एभिः +अस्मै +आभ्याम् +एभ्यः +अस्मात् +आभ्याम् +एभ्यः +अस्य +अनयोः +एषाम् +अस्मिन् +अनयोः +एषु +एषः +एतौ +एते +एतम् एनम् +एतौ एनौ +एतान् एनान् +एतेन +एताभ्याम् +एतैः +एतस्मै +एताभ्याम् +एतेभ्यः +एतस्मात् +एताभ्याम् +एतेभ्यः +एतस्य +एतस्मिन् +एतेषाम् +एतस्मिन् +एतस्मिन् +एतेषु +एषा +एते +एताः +एताम् एनाम् +एते एने +एताः एनाः +एतया एनया +एताभ्याम् +एताभिः +एतस्यै +एताभ्याम् +एताभ्यः +एतस्याः +एताभ्याम् +एताभ्यः +एतस्याः +एतयोः एनयोः +एतासाम् +एतस्याम् +एतयोः एनयोः +एतासु +एतत् एतद् +एते +एतानि +एतत् एतद् एनत् एनद् +एते एने +एतानि एनानि +एतेन एनेन +एताभ्याम् +एतैः +एतस्मै +एताभ्याम् +एतेभ्यः +एतस्मात् +एताभ्याम् +एतेभ्यः +एतस्य +एतयोः एनयोः +एतेषाम् +एतस्मिन् +एतयोः एनयोः +एतेषु +असौ +अमू +अमी +अमूम् +अमू +अमून् +अमुना +अमूभ्याम् +अमीभिः +अमुष्मै +अमूभ्याम् +अमीभ्यः +अमुष्मात् +अमूभ्याम् +अमीभ्यः +अमुष्य +अमुयोः +अमीषाम् +अमुष्मिन् +अमुयोः +अमीषु +असौ +अमू +अमूः +अमूम् +अमू +अमूः +अमुया +अमूभ्याम् +अमूभिः +अमुष्यै +अमूभ्याम् +अमूभ्यः +अमुष्याः +अमूभ्याम् +अमूभ्यः +अमुष्याः +अमुयोः +अमूषाम् +अमुष्याम् +अमुयोः +अमूषु +अमु +अमुनी +अमूनि +अमु +अमुनी +अमूनि +अमुना +अमूभ्याम् +अमीभिः +अमुष्मै +अमूभ्याम् +अमीभ्यः +अमुष्मात् +अमूभ्याम् +अमीभ्यः +अमुष्य +अमुयोः +अमीषाम् +अमुष्मिन् +अमुयोः +अमीषु +कः +कौ +के +कम् +कौ +कान् +केन +काभ्याम् +कैः +कस्मै +काभ्याम् +केभ्य +कस्मात् +काभ्याम् +केभ्य +कस्य +कयोः +केषाम् +कस्मिन् +कयोः +केषु +का +के +काः +काम् +के 
+काः +कया +काभ्याम् +काभिः +कस्यै +काभ्याम् +काभ्यः +कस्याः +काभ्याम् +काभ्यः +कस्याः +कयोः +कासाम् +कस्याम् +कयोः +कासु +किम् +के +कानि +किम् +के +कानि +केन +काभ्याम् +कैः +कस्मै +काभ्याम् +केभ्य +कस्मात् +काभ्याम् +केभ्य +कस्य +कयोः +केषाम् +कस्मिन् +कयोः +केषु +भवान् +भवन्तौ +भवन्तः +भवन्तम् +भवन्तौ +भवतः +भवता +भवद्भ्याम् +भवद्भिः +भवते +भवद्भ्याम् +भवद्भ्यः +भवतः +भवद्भ्याम् +भवद्भ्यः +भवतः +भवतोः +भवताम् +भवति +भवतोः +भवत्सु +भवती +भवत्यौ +भवत्यः +भवतीम् +भवत्यौ +भवतीः +भवत्या +भवतीभ्याम् +भवतीभिः +भवत्यै +भवतीभ्याम् +भवतीभिः +भवत्याः +भवतीभ्याम् +भवतीभिः +भवत्याः +भवत्योः +भवतीनाम् +भवत्याम् +भवत्योः +भवतीषु +भवत् +भवती +भवन्ति +भवत् +भवती +भवन्ति +भवता +भवद्भ्याम् +भवद्भिः +भवते +भवद्भ्याम् +भवद्भ्यः +भवतः +भवद्भ्याम् +भवद्भ्यः +भवतः +भवतोः +भवताम् +भवति +भवतोः +भवत्सु +अये +अरे +अरेरे +अविधा +असाधुना +अस्तोभ +अहह +अहावस् +आम् +आर्यहलम् +आह +आहो +इस् +उम् +उवे +काम् +कुम् +चमत् +टसत् +दृन् +धिक् +पाट् +फत् +फाट् +फुडुत् +बत +बाल् +वट् +व्यवस्तोभति व्यवस्तुभ् +षाट् +स्तोभ +हुम्मा +हूम् +अति +अधि +अनु +अप +अपि +अभि +अव +आ +उद् +उप +नि +निर् +परा +परि +प्र +प्रति +वि +सम् +अथवा उत +अन्यथा +इव +च +चेत् यदि +तु परन्तु +यतः करणेन हि यतस् यदर्थम् यदर्थे यर्हि यथा यत्कारणम् येन ही हिन +यथा यतस् +यद्यपि +यात् अवधेस् यावति +येन प्रकारेण +स्थाने +अह +एव +एवम् +कच्चित् +कु +कुवित् +कूपत् +च +चण् +चेत् +तत्र +नकिम् +नह +नुनम् +नेत् +भूयस् +मकिम् +मकिर् +यत्र +युगपत् +वा +शश्वत् +सूपत् +ह +हन्त +हि +""".split() +) diff --git a/spacy/lang/tokenizer_exceptions.py b/spacy/lang/tokenizer_exceptions.py index 2532ae104..960302513 100644 --- a/spacy/lang/tokenizer_exceptions.py +++ b/spacy/lang/tokenizer_exceptions.py @@ -34,13 +34,13 @@ URL_PATTERN = ( r"|" # host & domain names # mods: match is case-sensitive, so include [A-Z] - "(?:" # noqa: E131 - "(?:" - "[A-Za-z0-9\u00a1-\uffff]" - "[A-Za-z0-9\u00a1-\uffff_-]{0,62}" - ")?" - "[A-Za-z0-9\u00a1-\uffff]\." - ")+" + r"(?:" # noqa: E131 + r"(?:" + r"[A-Za-z0-9\u00a1-\uffff]" + r"[A-Za-z0-9\u00a1-\uffff_-]{0,62}" + r")?" + r"[A-Za-z0-9\u00a1-\uffff]\." + r")+" # TLD identifier # mods: use ALPHA_LOWER instead of a wider range so that this doesn't match # strings like "lower.Upper", which can be split on "." by infixes in some @@ -128,6 +128,8 @@ emoticons = set( :-] [: [-: +[= +=] :o) (o: :} @@ -159,6 +161,8 @@ emoticons = set( =| :| :-| +]= +=[ :1 :P :-P diff --git a/spacy/language.py b/spacy/language.py index 8e7c39b90..cd84e30a4 100644 --- a/spacy/language.py +++ b/spacy/language.py @@ -3,7 +3,6 @@ from typing import Tuple, Iterator from dataclasses import dataclass import random import itertools -import weakref import functools from contextlib import contextmanager from copy import deepcopy @@ -95,7 +94,7 @@ class Language: object and processing pipeline. lang (str): Two-letter language ID, i.e. ISO code. - DOCS: https://spacy.io/api/language + DOCS: https://nightly.spacy.io/api/language """ Defaults = BaseDefaults @@ -130,7 +129,7 @@ class Language: create_tokenizer (Callable): Function that takes the nlp object and returns a tokenizer. - DOCS: https://spacy.io/api/language#init + DOCS: https://nightly.spacy.io/api/language#init """ # We're only calling this to import all factories provided via entry # points. The factory decorator applied to these functions takes care @@ -185,14 +184,14 @@ class Language: RETURNS (Dict[str, Any]): The meta. 
- DOCS: https://spacy.io/api/language#meta + DOCS: https://nightly.spacy.io/api/language#meta """ spacy_version = util.get_model_version_range(about.__version__) if self.vocab.lang: self._meta.setdefault("lang", self.vocab.lang) else: self._meta.setdefault("lang", self.lang) - self._meta.setdefault("name", "model") + self._meta.setdefault("name", "pipeline") self._meta.setdefault("version", "0.0.0") self._meta.setdefault("spacy_version", spacy_version) self._meta.setdefault("description", "") @@ -211,6 +210,7 @@ class Language: # TODO: Adding this back to prevent breaking people's code etc., but # we should consider removing it self._meta["pipeline"] = list(self.pipe_names) + self._meta["components"] = list(self.component_names) self._meta["disabled"] = list(self.disabled) return self._meta @@ -225,7 +225,7 @@ class Language: RETURNS (thinc.api.Config): The config. - DOCS: https://spacy.io/api/language#config + DOCS: https://nightly.spacy.io/api/language#config """ self._config.setdefault("nlp", {}) self._config.setdefault("training", {}) @@ -433,7 +433,7 @@ class Language: will be combined and normalized for the whole pipeline. func (Optional[Callable]): Factory function if not used as a decorator. - DOCS: https://spacy.io/api/language#factory + DOCS: https://nightly.spacy.io/api/language#factory """ if not isinstance(name, str): raise ValueError(Errors.E963.format(decorator="factory")) @@ -513,7 +513,7 @@ class Language: Used for pipeline analysis. func (Optional[Callable]): Factory function if not used as a decorator. - DOCS: https://spacy.io/api/language#component + DOCS: https://nightly.spacy.io/api/language#component """ if name is not None and not isinstance(name, str): raise ValueError(Errors.E963.format(decorator="component")) @@ -579,7 +579,7 @@ class Language: name (str): Name of pipeline component to get. RETURNS (callable): The pipeline component. - DOCS: https://spacy.io/api/language#get_pipe + DOCS: https://nightly.spacy.io/api/language#get_pipe """ for pipe_name, component in self._components: if pipe_name == name: @@ -608,7 +608,7 @@ class Language: arguments and types expected by the factory. RETURNS (Callable[[Doc], Doc]): The pipeline component. - DOCS: https://spacy.io/api/language#create_pipe + DOCS: https://nightly.spacy.io/api/language#create_pipe """ name = name if name is not None else factory_name if not isinstance(config, dict): @@ -722,7 +722,7 @@ class Language: arguments and types expected by the factory. RETURNS (Callable[[Doc], Doc]): The pipeline component. - DOCS: https://spacy.io/api/language#add_pipe + DOCS: https://nightly.spacy.io/api/language#add_pipe """ if not isinstance(factory_name, str): bad_val = repr(factory_name) @@ -820,7 +820,7 @@ class Language: name (str): Name of the component. RETURNS (bool): Whether a component of the name exists in the pipeline. - DOCS: https://spacy.io/api/language#has_pipe + DOCS: https://nightly.spacy.io/api/language#has_pipe """ return name in self.pipe_names @@ -841,7 +841,7 @@ class Language: validate (bool): Whether to validate the component config against the arguments and types expected by the factory. - DOCS: https://spacy.io/api/language#replace_pipe + DOCS: https://nightly.spacy.io/api/language#replace_pipe """ if name not in self.pipe_names: raise ValueError(Errors.E001.format(name=name, opts=self.pipe_names)) @@ -870,7 +870,7 @@ class Language: old_name (str): Name of the component to rename. new_name (str): New name of the component. 
- DOCS: https://spacy.io/api/language#rename_pipe + DOCS: https://nightly.spacy.io/api/language#rename_pipe """ if old_name not in self.component_names: raise ValueError( @@ -891,7 +891,7 @@ class Language: name (str): Name of the component to remove. RETURNS (tuple): A `(name, component)` tuple of the removed component. - DOCS: https://spacy.io/api/language#remove_pipe + DOCS: https://nightly.spacy.io/api/language#remove_pipe """ if name not in self.component_names: raise ValueError(Errors.E001.format(name=name, opts=self.component_names)) @@ -944,7 +944,7 @@ class Language: keyword arguments for specific components. RETURNS (Doc): A container for accessing the annotations. - DOCS: https://spacy.io/api/language#call + DOCS: https://nightly.spacy.io/api/language#call """ if len(text) > self.max_length: raise ValueError( @@ -993,7 +993,7 @@ class Language: disable (str or iterable): The name(s) of the pipes to disable enable (str or iterable): The name(s) of the pipes to enable - all others will be disabled - DOCS: https://spacy.io/api/language#select_pipes + DOCS: https://nightly.spacy.io/api/language#select_pipes """ if enable is None and disable is None: raise ValueError(Errors.E991) @@ -1044,7 +1044,7 @@ class Language: exclude (Iterable[str]): Names of components that shouldn't be updated. RETURNS (Dict[str, float]): The updated losses dictionary - DOCS: https://spacy.io/api/language#update + DOCS: https://nightly.spacy.io/api/language#update """ if _ is not None: raise ValueError(Errors.E989) @@ -1106,7 +1106,7 @@ class Language: >>> raw_batch = [Example.from_dict(nlp.make_doc(text), {}) for text in next(raw_text_batches)] >>> nlp.rehearse(raw_batch) - DOCS: https://spacy.io/api/language#rehearse + DOCS: https://nightly.spacy.io/api/language#rehearse """ if len(examples) == 0: return @@ -1153,7 +1153,7 @@ class Language: create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/language#begin_training + DOCS: https://nightly.spacy.io/api/language#begin_training """ # TODO: throw warning when get_gold_tuples is provided instead of get_examples if get_examples is None: @@ -1200,7 +1200,7 @@ class Language: sgd (Optional[Optimizer]): An optimizer. RETURNS (Optimizer): The optimizer. - DOCS: https://spacy.io/api/language#resume_training + DOCS: https://nightly.spacy.io/api/language#resume_training """ if device >= 0: # TODO: do we need this here? require_gpu(device) @@ -1236,7 +1236,7 @@ class Language: for the scorer. RETURNS (Scorer): The scorer containing the evaluation results. - DOCS: https://spacy.io/api/language#evaluate + DOCS: https://nightly.spacy.io/api/language#evaluate """ validate_examples(examples, "Language.evaluate") if component_cfg is None: @@ -1275,7 +1275,7 @@ class Language: return results @contextmanager - def use_params(self, params: dict): + def use_params(self, params: Optional[dict]): """Replace weights of models in the pipeline with those provided in the params dictionary. Can be used as a contextmanager, in which case, models go back to their original weights after the block. 
@@ -1286,26 +1286,29 @@ class Language: >>> with nlp.use_params(optimizer.averages): >>> nlp.to_disk("/tmp/checkpoint") - DOCS: https://spacy.io/api/language#use_params + DOCS: https://nightly.spacy.io/api/language#use_params """ - contexts = [ - pipe.use_params(params) - for name, pipe in self.pipeline - if hasattr(pipe, "use_params") and hasattr(pipe, "model") - ] - # TODO: Having trouble with contextlib - # Workaround: these aren't actually context managers atm. - for context in contexts: - try: - next(context) - except StopIteration: - pass - yield - for context in contexts: - try: - next(context) - except StopIteration: - pass + if not params: + yield + else: + contexts = [ + pipe.use_params(params) + for name, pipe in self.pipeline + if hasattr(pipe, "use_params") and hasattr(pipe, "model") + ] + # TODO: Having trouble with contextlib + # Workaround: these aren't actually context managers atm. + for context in contexts: + try: + next(context) + except StopIteration: + pass + yield + for context in contexts: + try: + next(context) + except StopIteration: + pass def pipe( self, @@ -1330,7 +1333,7 @@ class Language: n_process (int): Number of processors to process texts. If -1, set `multiprocessing.cpu_count()`. YIELDS (Doc): Documents in the order of the original text. - DOCS: https://spacy.io/api/language#pipe + DOCS: https://nightly.spacy.io/api/language#pipe """ if n_process == -1: n_process = mp.cpu_count() @@ -1374,8 +1377,6 @@ class Language: docs = (self.make_doc(text) for text in texts) for pipe in pipes: docs = pipe(docs) - - nr_seen = 0 for doc in docs: yield doc @@ -1466,7 +1467,7 @@ class Language: the types expected by the factory. RETURNS (Language): The initialized Language class. - DOCS: https://spacy.io/api/language#from_config + DOCS: https://nightly.spacy.io/api/language#from_config """ if auto_fill: config = Config( @@ -1579,7 +1580,7 @@ class Language: it doesn't exist. exclude (list): Names of components or serialization fields to exclude. - DOCS: https://spacy.io/api/language#to_disk + DOCS: https://nightly.spacy.io/api/language#to_disk """ path = util.ensure_path(path) serializers = {} @@ -1608,7 +1609,7 @@ class Language: exclude (list): Names of components or serialization fields to exclude. RETURNS (Language): The modified `Language` object. - DOCS: https://spacy.io/api/language#from_disk + DOCS: https://nightly.spacy.io/api/language#from_disk """ def deserialize_meta(path: Path) -> None: @@ -1656,7 +1657,7 @@ class Language: exclude (list): Names of components or serialization fields to exclude. RETURNS (bytes): The serialized form of the `Language` object. - DOCS: https://spacy.io/api/language#to_bytes + DOCS: https://nightly.spacy.io/api/language#to_bytes """ serializers = {} serializers["vocab"] = lambda: self.vocab.to_bytes() @@ -1680,7 +1681,7 @@ class Language: exclude (list): Names of components or serialization fields to exclude. RETURNS (Language): The `Language` object. - DOCS: https://spacy.io/api/language#from_bytes + DOCS: https://nightly.spacy.io/api/language#from_bytes """ def deserialize_meta(b): diff --git a/spacy/lexeme.pyx b/spacy/lexeme.pyx index 25461b4b7..17ce574ce 100644 --- a/spacy/lexeme.pyx +++ b/spacy/lexeme.pyx @@ -30,7 +30,7 @@ cdef class Lexeme: tag, dependency parse, or lemma (lemmatization depends on the part-of-speech tag). - DOCS: https://spacy.io/api/lexeme + DOCS: https://nightly.spacy.io/api/lexeme """ def __init__(self, Vocab vocab, attr_t orth): """Create a Lexeme object. 
diff --git a/spacy/lookups.py b/spacy/lookups.py index d79a5b950..fb5e3d748 100644 --- a/spacy/lookups.py +++ b/spacy/lookups.py @@ -57,7 +57,7 @@ class Table(OrderedDict): data (dict): The dictionary. name (str): Optional table name for reference. - DOCS: https://spacy.io/api/lookups#table.from_dict + DOCS: https://nightly.spacy.io/api/lookups#table.from_dict """ self = cls(name=name) self.update(data) @@ -69,7 +69,7 @@ class Table(OrderedDict): name (str): Optional table name for reference. data (dict): Initial data, used to hint Bloom Filter. - DOCS: https://spacy.io/api/lookups#table.init + DOCS: https://nightly.spacy.io/api/lookups#table.init """ OrderedDict.__init__(self) self.name = name @@ -135,7 +135,7 @@ class Table(OrderedDict): RETURNS (bytes): The serialized table. - DOCS: https://spacy.io/api/lookups#table.to_bytes + DOCS: https://nightly.spacy.io/api/lookups#table.to_bytes """ data = { "name": self.name, @@ -150,7 +150,7 @@ class Table(OrderedDict): bytes_data (bytes): The data to load. RETURNS (Table): The loaded table. - DOCS: https://spacy.io/api/lookups#table.from_bytes + DOCS: https://nightly.spacy.io/api/lookups#table.from_bytes """ loaded = srsly.msgpack_loads(bytes_data) data = loaded.get("dict", {}) @@ -172,7 +172,7 @@ class Lookups: def __init__(self) -> None: """Initialize the Lookups object. - DOCS: https://spacy.io/api/lookups#init + DOCS: https://nightly.spacy.io/api/lookups#init """ self._tables = {} @@ -201,7 +201,7 @@ class Lookups: data (dict): Optional data to add to the table. RETURNS (Table): The newly added table. - DOCS: https://spacy.io/api/lookups#add_table + DOCS: https://nightly.spacy.io/api/lookups#add_table """ if name in self.tables: raise ValueError(Errors.E158.format(name=name)) @@ -215,7 +215,7 @@ class Lookups: name (str): Name of the table to set. table (Table): The Table to set. - DOCS: https://spacy.io/api/lookups#set_table + DOCS: https://nightly.spacy.io/api/lookups#set_table """ self._tables[name] = table @@ -227,7 +227,7 @@ class Lookups: default (Any): Optional default value to return if table doesn't exist. RETURNS (Table): The table. - DOCS: https://spacy.io/api/lookups#get_table + DOCS: https://nightly.spacy.io/api/lookups#get_table """ if name not in self._tables: if default == UNSET: @@ -241,7 +241,7 @@ class Lookups: name (str): Name of the table to remove. RETURNS (Table): The removed table. - DOCS: https://spacy.io/api/lookups#remove_table + DOCS: https://nightly.spacy.io/api/lookups#remove_table """ if name not in self._tables: raise KeyError(Errors.E159.format(name=name, tables=self.tables)) @@ -253,7 +253,7 @@ class Lookups: name (str): Name of the table. RETURNS (bool): Whether a table of that name exists. - DOCS: https://spacy.io/api/lookups#has_table + DOCS: https://nightly.spacy.io/api/lookups#has_table """ return name in self._tables @@ -262,7 +262,7 @@ class Lookups: RETURNS (bytes): The serialized Lookups. - DOCS: https://spacy.io/api/lookups#to_bytes + DOCS: https://nightly.spacy.io/api/lookups#to_bytes """ return srsly.msgpack_dumps(self._tables) @@ -272,7 +272,7 @@ class Lookups: bytes_data (bytes): The data to load. RETURNS (Lookups): The loaded Lookups. - DOCS: https://spacy.io/api/lookups#from_bytes + DOCS: https://nightly.spacy.io/api/lookups#from_bytes """ self._tables = {} for key, value in srsly.msgpack_loads(bytes_data).items(): @@ -287,7 +287,7 @@ class Lookups: path (str / Path): The file path. 
- DOCS: https://spacy.io/api/lookups#to_disk + DOCS: https://nightly.spacy.io/api/lookups#to_disk """ if len(self._tables): path = ensure_path(path) @@ -306,7 +306,7 @@ class Lookups: path (str / Path): The directory path. RETURNS (Lookups): The loaded lookups. - DOCS: https://spacy.io/api/lookups#from_disk + DOCS: https://nightly.spacy.io/api/lookups#from_disk """ path = ensure_path(path) filepath = path / filename diff --git a/spacy/matcher/dependencymatcher.pyx b/spacy/matcher/dependencymatcher.pyx index e0a54e6f1..067b2167c 100644 --- a/spacy/matcher/dependencymatcher.pyx +++ b/spacy/matcher/dependencymatcher.pyx @@ -1,16 +1,16 @@ # cython: infer_types=True, profile=True -from cymem.cymem cimport Pool -from preshed.maps cimport PreshMap -from libcpp cimport bool +from typing import List import numpy +from cymem.cymem cimport Pool + from .matcher cimport Matcher from ..vocab cimport Vocab from ..tokens.doc cimport Doc -from .matcher import unpickle_matcher from ..errors import Errors +from ..tokens import Span DELIMITER = "||" @@ -22,36 +22,52 @@ cdef class DependencyMatcher: """Match dependency parse tree based on pattern rules.""" cdef Pool mem cdef readonly Vocab vocab - cdef readonly Matcher token_matcher + cdef readonly Matcher matcher cdef public object _patterns + cdef public object _raw_patterns cdef public object _keys_to_token cdef public object _root - cdef public object _entities cdef public object _callbacks cdef public object _nodes cdef public object _tree + cdef public object _ops - def __init__(self, vocab): + def __init__(self, vocab, *, validate=False): """Create the DependencyMatcher. vocab (Vocab): The vocabulary object, which must be shared with the documents the matcher will operate on. + validate (bool): Whether patterns should be validated, passed to + Matcher as `validate` """ size = 20 - # TODO: make matcher work with validation - self.token_matcher = Matcher(vocab, validate=False) + self.matcher = Matcher(vocab, validate=validate) self._keys_to_token = {} self._patterns = {} + self._raw_patterns = {} self._root = {} self._nodes = {} self._tree = {} - self._entities = {} self._callbacks = {} self.vocab = vocab self.mem = Pool() + self._ops = { + "<": self.dep, + ">": self.gov, + "<<": self.dep_chain, + ">>": self.gov_chain, + ".": self.imm_precede, + ".*": self.precede, + ";": self.imm_follow, + ";*": self.follow, + "$+": self.imm_right_sib, + "$-": self.imm_left_sib, + "$++": self.right_sib, + "$--": self.left_sib, + } def __reduce__(self): - data = (self.vocab, self._patterns,self._tree, self._callbacks) + data = (self.vocab, self._raw_patterns, self._callbacks) return (unpickle_matcher, data, None, None) def __len__(self): @@ -74,54 +90,61 @@ cdef class DependencyMatcher: idx = 0 visited_nodes = {} for relation in pattern: - if "PATTERN" not in relation or "SPEC" not in relation: + if not isinstance(relation, dict): + raise ValueError(Errors.E1008) + if "RIGHT_ATTRS" not in relation and "RIGHT_ID" not in relation: raise ValueError(Errors.E098.format(key=key)) if idx == 0: if not( - "NODE_NAME" in relation["SPEC"] - and "NBOR_RELOP" not in relation["SPEC"] - and "NBOR_NAME" not in relation["SPEC"] + "RIGHT_ID" in relation + and "REL_OP" not in relation + and "LEFT_ID" not in relation ): raise ValueError(Errors.E099.format(key=key)) - visited_nodes[relation["SPEC"]["NODE_NAME"]] = True + visited_nodes[relation["RIGHT_ID"]] = True else: if not( - "NODE_NAME" in relation["SPEC"] - and "NBOR_RELOP" in relation["SPEC"] - and "NBOR_NAME" in relation["SPEC"] + 
"RIGHT_ID" in relation + and "RIGHT_ATTRS" in relation + and "REL_OP" in relation + and "LEFT_ID" in relation ): raise ValueError(Errors.E100.format(key=key)) if ( - relation["SPEC"]["NODE_NAME"] in visited_nodes - or relation["SPEC"]["NBOR_NAME"] not in visited_nodes + relation["RIGHT_ID"] in visited_nodes + or relation["LEFT_ID"] not in visited_nodes ): raise ValueError(Errors.E101.format(key=key)) - visited_nodes[relation["SPEC"]["NODE_NAME"]] = True - visited_nodes[relation["SPEC"]["NBOR_NAME"]] = True + if relation["REL_OP"] not in self._ops: + raise ValueError(Errors.E1007.format(op=relation["REL_OP"])) + visited_nodes[relation["RIGHT_ID"]] = True + visited_nodes[relation["LEFT_ID"]] = True idx = idx + 1 - def add(self, key, patterns, *_patterns, on_match=None): + def add(self, key, patterns, *, on_match=None): """Add a new matcher rule to the matcher. key (str): The match ID. patterns (list): The patterns to add for the given key. on_match (callable): Optional callback executed on match. """ - if patterns is None or hasattr(patterns, "__call__"): # old API - on_match = patterns - patterns = _patterns + if on_match is not None and not hasattr(on_match, "__call__"): + raise ValueError(Errors.E171.format(arg_type=type(on_match))) + if patterns is None or not isinstance(patterns, List): # old API + raise ValueError(Errors.E948.format(arg_type=type(patterns))) for pattern in patterns: if len(pattern) == 0: raise ValueError(Errors.E012.format(key=key)) - self.validate_input(pattern,key) + self.validate_input(pattern, key) key = self._normalize_key(key) + self._raw_patterns.setdefault(key, []) + self._raw_patterns[key].extend(patterns) _patterns = [] for pattern in patterns: token_patterns = [] for i in range(len(pattern)): - token_pattern = [pattern[i]["PATTERN"]] + token_pattern = [pattern[i]["RIGHT_ATTRS"]] token_patterns.append(token_pattern) - # self.patterns.append(token_patterns) _patterns.append(token_patterns) self._patterns.setdefault(key, []) self._callbacks[key] = on_match @@ -135,7 +158,7 @@ cdef class DependencyMatcher: # TODO: Better ways to hash edges in pattern? for j in range(len(_patterns[i])): k = self._normalize_key(unicode(key) + DELIMITER + unicode(i) + DELIMITER + unicode(j)) - self.token_matcher.add(k, [_patterns[i][j]]) + self.matcher.add(k, [_patterns[i][j]]) _keys_to_token[k] = j _keys_to_token_list.append(_keys_to_token) self._keys_to_token.setdefault(key, []) @@ -144,14 +167,14 @@ cdef class DependencyMatcher: for pattern in patterns: nodes = {} for i in range(len(pattern)): - nodes[pattern[i]["SPEC"]["NODE_NAME"]] = i + nodes[pattern[i]["RIGHT_ID"]] = i _nodes_list.append(nodes) self._nodes.setdefault(key, []) self._nodes[key].extend(_nodes_list) # Create an object tree to traverse later on. This data structure # enables easy tree pattern match. Doc-Token based tree cannot be # reused since it is memory-heavy and tightly coupled with the Doc. 
- self.retrieve_tree(patterns, _nodes_list,key) + self.retrieve_tree(patterns, _nodes_list, key) def retrieve_tree(self, patterns, _nodes_list, key): _heads_list = [] @@ -161,13 +184,13 @@ cdef class DependencyMatcher: root = -1 for j in range(len(patterns[i])): token_pattern = patterns[i][j] - if ("NBOR_RELOP" not in token_pattern["SPEC"]): + if ("REL_OP" not in token_pattern): heads[j] = ('root', j) root = j else: heads[j] = ( - token_pattern["SPEC"]["NBOR_RELOP"], - _nodes_list[i][token_pattern["SPEC"]["NBOR_NAME"]] + token_pattern["REL_OP"], + _nodes_list[i][token_pattern["LEFT_ID"]] ) _heads_list.append(heads) _root_list.append(root) @@ -202,11 +225,21 @@ cdef class DependencyMatcher: RETURNS (tuple): The rule, as an (on_match, patterns) tuple. """ key = self._normalize_key(key) - if key not in self._patterns: + if key not in self._raw_patterns: return default - return (self._callbacks[key], self._patterns[key]) + return (self._callbacks[key], self._raw_patterns[key]) - def __call__(self, Doc doc): + def remove(self, key): + key = self._normalize_key(key) + if not key in self._patterns: + raise ValueError(Errors.E175.format(key=key)) + self._patterns.pop(key) + self._raw_patterns.pop(key) + self._nodes.pop(key) + self._tree.pop(key) + self._root.pop(key) + + def __call__(self, object doclike): """Find all token sequences matching the supplied pattern. doclike (Doc or Span): The document to match over. @@ -214,8 +247,14 @@ cdef class DependencyMatcher: describing the matches. A match tuple describes a span `doc[start:end]`. The `label_id` and `key` are both integers. """ + if isinstance(doclike, Doc): + doc = doclike + elif isinstance(doclike, Span): + doc = doclike.as_doc() + else: + raise ValueError(Errors.E195.format(good="Doc or Span", got=type(doclike).__name__)) matched_key_trees = [] - matches = self.token_matcher(doc) + matches = self.matcher(doc) for key in list(self._patterns.keys()): _patterns_list = self._patterns[key] _keys_to_token_list = self._keys_to_token[key] @@ -244,26 +283,26 @@ cdef class DependencyMatcher: length = len(_nodes) matched_trees = [] - self.recurse(_tree,id_to_position,_node_operator_map,0,[],matched_trees) - matched_key_trees.append((key,matched_trees)) - - for i, (ent_id, nodes) in enumerate(matched_key_trees): - on_match = self._callbacks.get(ent_id) + self.recurse(_tree, id_to_position, _node_operator_map, 0, [], matched_trees) + for matched_tree in matched_trees: + matched_key_trees.append((key, matched_tree)) + for i, (match_id, nodes) in enumerate(matched_key_trees): + on_match = self._callbacks.get(match_id) if on_match is not None: on_match(self, doc, i, matched_key_trees) return matched_key_trees - def recurse(self,tree,id_to_position,_node_operator_map,int patternLength,visited_nodes,matched_trees): - cdef bool isValid; - if(patternLength == len(id_to_position.keys())): + def recurse(self, tree, id_to_position, _node_operator_map, int patternLength, visited_nodes, matched_trees): + cdef bint isValid; + if patternLength == len(id_to_position.keys()): isValid = True for node in range(patternLength): - if(node in tree): + if node in tree: for idx, (relop,nbor) in enumerate(tree[node]): computed_nbors = numpy.asarray(_node_operator_map[visited_nodes[node]][relop]) isNbor = False for computed_nbor in computed_nbors: - if(computed_nbor.i == visited_nodes[nbor]): + if computed_nbor.i == visited_nodes[nbor]: isNbor = True isValid = isValid & isNbor if(isValid): @@ -271,14 +310,14 @@ cdef class DependencyMatcher: return allPatternNodes = 
numpy.asarray(id_to_position[patternLength]) for patternNode in allPatternNodes: - self.recurse(tree,id_to_position,_node_operator_map,patternLength+1,visited_nodes+[patternNode],matched_trees) + self.recurse(tree, id_to_position, _node_operator_map, patternLength+1, visited_nodes+[patternNode], matched_trees) # Given a node and an edge operator, to return the list of nodes # from the doc that belong to node+operator. This is used to store # all the results beforehand to prevent unnecessary computation while # pattern matching # _node_operator_map[node][operator] = [...] - def get_node_operator_map(self,doc,tree,id_to_position,nodes,root): + def get_node_operator_map(self, doc, tree, id_to_position, nodes, root): _node_operator_map = {} all_node_indices = nodes.values() all_operators = [] @@ -295,24 +334,14 @@ cdef class DependencyMatcher: _node_operator_map[node] = {} for operator in all_operators: _node_operator_map[node][operator] = [] - # Used to invoke methods for each operator - switcher = { - "<": self.dep, - ">": self.gov, - "<<": self.dep_chain, - ">>": self.gov_chain, - ".": self.imm_precede, - "$+": self.imm_right_sib, - "$-": self.imm_left_sib, - "$++": self.right_sib, - "$--": self.left_sib - } for operator in all_operators: for node in all_nodes: - _node_operator_map[node][operator] = switcher.get(operator)(doc,node) + _node_operator_map[node][operator] = self._ops.get(operator)(doc, node) return _node_operator_map def dep(self, doc, node): + if doc[node].head == doc[node]: + return [] return [doc[node].head] def gov(self,doc,node): @@ -322,36 +351,51 @@ cdef class DependencyMatcher: return list(doc[node].ancestors) def gov_chain(self, doc, node): - return list(doc[node].subtree) + return [t for t in doc[node].subtree if t != doc[node]] def imm_precede(self, doc, node): - if node > 0: + sent = self._get_sent(doc[node]) + if node < len(doc) - 1 and doc[node + 1] in sent: + return [doc[node + 1]] + return [] + + def precede(self, doc, node): + sent = self._get_sent(doc[node]) + return [doc[i] for i in range(node + 1, sent.end)] + + def imm_follow(self, doc, node): + sent = self._get_sent(doc[node]) + if node > 0 and doc[node - 1] in sent: return [doc[node - 1]] return [] + def follow(self, doc, node): + sent = self._get_sent(doc[node]) + return [doc[i] for i in range(sent.start, node)] + def imm_right_sib(self, doc, node): for child in list(doc[node].head.children): - if child.i == node - 1: + if child.i == node + 1: return [doc[child.i]] return [] def imm_left_sib(self, doc, node): for child in list(doc[node].head.children): - if child.i == node + 1: + if child.i == node - 1: return [doc[child.i]] return [] def right_sib(self, doc, node): candidate_children = [] for child in list(doc[node].head.children): - if child.i < node: + if child.i > node: candidate_children.append(doc[child.i]) return candidate_children def left_sib(self, doc, node): candidate_children = [] for child in list(doc[node].head.children): - if child.i > node: + if child.i < node: candidate_children.append(doc[child.i]) return candidate_children @@ -360,3 +404,15 @@ cdef class DependencyMatcher: return self.vocab.strings.add(key) else: return key + + def _get_sent(self, token): + root = (list(token.ancestors) or [token])[-1] + return token.doc[root.left_edge.i:root.right_edge.i + 1] + + +def unpickle_matcher(vocab, patterns, callbacks): + matcher = DependencyMatcher(vocab) + for key, pattern in patterns.items(): + callback = callbacks.get(key, None) + matcher.add(key, pattern, on_match=callback) + return 
matcher diff --git a/spacy/matcher/matcher.pyx b/spacy/matcher/matcher.pyx index d3a8fa539..079cac788 100644 --- a/spacy/matcher/matcher.pyx +++ b/spacy/matcher/matcher.pyx @@ -31,8 +31,8 @@ DEF PADDING = 5 cdef class Matcher: """Match sequences of tokens, based on pattern rules. - DOCS: https://spacy.io/api/matcher - USAGE: https://spacy.io/usage/rule-based-matching + DOCS: https://nightly.spacy.io/api/matcher + USAGE: https://nightly.spacy.io/usage/rule-based-matching """ def __init__(self, vocab, validate=True): @@ -829,9 +829,11 @@ def _get_extra_predicates(spec, extra_predicates): attr = "ORTH" attr = IDS.get(attr.upper()) if isinstance(value, dict): + processed = False + value_with_upper_keys = {k.upper(): v for k, v in value.items()} for type_, cls in predicate_types.items(): - if type_ in value: - predicate = cls(len(extra_predicates), attr, value[type_], type_) + if type_ in value_with_upper_keys: + predicate = cls(len(extra_predicates), attr, value_with_upper_keys[type_], type_) # Don't create a redundant predicates. # This helps with efficiency, as we're caching the results. if predicate.key in seen_predicates: @@ -840,6 +842,9 @@ def _get_extra_predicates(spec, extra_predicates): extra_predicates.append(predicate) output.append(predicate.i) seen_predicates[predicate.key] = predicate.i + processed = True + if not processed: + warnings.warn(Warnings.W035.format(pattern=value)) return output diff --git a/spacy/matcher/phrasematcher.pyx b/spacy/matcher/phrasematcher.pyx index ba0f515b5..fae513367 100644 --- a/spacy/matcher/phrasematcher.pyx +++ b/spacy/matcher/phrasematcher.pyx @@ -19,8 +19,8 @@ cdef class PhraseMatcher: sequences based on lists of token descriptions, the `PhraseMatcher` accepts match patterns in the form of `Doc` objects. - DOCS: https://spacy.io/api/phrasematcher - USAGE: https://spacy.io/usage/rule-based-matching#phrasematcher + DOCS: https://nightly.spacy.io/api/phrasematcher + USAGE: https://nightly.spacy.io/usage/rule-based-matching#phrasematcher Adapted from FlashText: https://github.com/vi3k6i5/flashtext MIT License (see `LICENSE`) @@ -34,7 +34,7 @@ cdef class PhraseMatcher: attr (int / str): Token attribute to match on. validate (bool): Perform additional validation when patterns are added. - DOCS: https://spacy.io/api/phrasematcher#init + DOCS: https://nightly.spacy.io/api/phrasematcher#init """ self.vocab = vocab self._callbacks = {} @@ -61,7 +61,7 @@ cdef class PhraseMatcher: RETURNS (int): The number of rules. - DOCS: https://spacy.io/api/phrasematcher#len + DOCS: https://nightly.spacy.io/api/phrasematcher#len """ return len(self._callbacks) @@ -71,7 +71,7 @@ cdef class PhraseMatcher: key (str): The match ID. RETURNS (bool): Whether the matcher contains rules for this match ID. - DOCS: https://spacy.io/api/phrasematcher#contains + DOCS: https://nightly.spacy.io/api/phrasematcher#contains """ return key in self._callbacks @@ -85,7 +85,7 @@ cdef class PhraseMatcher: key (str): The match ID. - DOCS: https://spacy.io/api/phrasematcher#remove + DOCS: https://nightly.spacy.io/api/phrasematcher#remove """ if key not in self._docs: raise KeyError(key) @@ -164,7 +164,7 @@ cdef class PhraseMatcher: as variable arguments. Will be ignored if a list of patterns is provided as the second argument. - DOCS: https://spacy.io/api/phrasematcher#add + DOCS: https://nightly.spacy.io/api/phrasematcher#add """ if docs is None or hasattr(docs, "__call__"): # old API on_match = docs @@ -228,7 +228,7 @@ cdef class PhraseMatcher: `doc[start:end]`. The `match_id` is an integer. 
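The DependencyMatcher hunks above replace the old SPEC / NBOR_RELOP / NBOR_NAME pattern keys with RIGHT_ID, RIGHT_ATTRS, REL_OP and LEFT_ID, make on_match keyword-only, and add a validate flag that is passed through to the token Matcher. A minimal usage sketch of the new schema, based only on the signatures shown in this patch; the en_core_web_sm pipeline and the example sentence are illustrative assumptions, not part of the diff.

    import spacy
    from spacy.matcher import DependencyMatcher

    nlp = spacy.load("en_core_web_sm")  # assumption: any pipeline with a dependency parser
    matcher = DependencyMatcher(nlp.vocab, validate=True)
    pattern = [
        # anchor node: RIGHT_ID + RIGHT_ATTRS only, no REL_OP / LEFT_ID
        {"RIGHT_ID": "verb", "RIGHT_ATTRS": {"POS": "VERB"}},
        # related node: LEFT_ID must refer to an already declared node
        {
            "LEFT_ID": "verb",
            "REL_OP": ">",
            "RIGHT_ID": "subject",
            "RIGHT_ATTRS": {"DEP": "nsubj"},
        },
    ]
    matcher.add("VERB_SUBJECT", [pattern], on_match=None)
    doc = nlp("The matcher finds subjects.")
    # each match is (match_id, [one token index per pattern node])
    for match_id, token_ids in matcher(doc):
        print(nlp.vocab.strings[match_id], [doc[i].text for i in token_ids])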
If as_spans is set to True, a list of Span objects is returned. - DOCS: https://spacy.io/api/phrasematcher#call + DOCS: https://nightly.spacy.io/api/phrasematcher#call """ matches = [] if doc is None or len(doc) == 0: diff --git a/spacy/ml/models/entity_linker.py b/spacy/ml/models/entity_linker.py index 6792f3e59..d945e5fba 100644 --- a/spacy/ml/models/entity_linker.py +++ b/spacy/ml/models/entity_linker.py @@ -24,7 +24,7 @@ def build_nel_encoder(tok2vec: Model, nO: Optional[int] = None) -> Model: return model -@registry.assets.register("spacy.KBFromFile.v1") +@registry.misc.register("spacy.KBFromFile.v1") def load_kb(kb_path: str) -> Callable[[Vocab], KnowledgeBase]: def kb_from_file(vocab): kb = KnowledgeBase(vocab, entity_vector_length=1) @@ -34,7 +34,7 @@ def load_kb(kb_path: str) -> Callable[[Vocab], KnowledgeBase]: return kb_from_file -@registry.assets.register("spacy.EmptyKB.v1") +@registry.misc.register("spacy.EmptyKB.v1") def empty_kb(entity_vector_length: int) -> Callable[[Vocab], KnowledgeBase]: def empty_kb_factory(vocab): return KnowledgeBase(vocab=vocab, entity_vector_length=entity_vector_length) @@ -42,6 +42,6 @@ def empty_kb(entity_vector_length: int) -> Callable[[Vocab], KnowledgeBase]: return empty_kb_factory -@registry.assets.register("spacy.CandidateGenerator.v1") +@registry.misc.register("spacy.CandidateGenerator.v1") def create_candidates() -> Callable[[KnowledgeBase, "Span"], Iterable[Candidate]]: return get_candidates diff --git a/spacy/pipeline/attributeruler.py b/spacy/pipeline/attributeruler.py index 85a425e29..406112681 100644 --- a/spacy/pipeline/attributeruler.py +++ b/spacy/pipeline/attributeruler.py @@ -38,7 +38,7 @@ class AttributeRuler(Pipe): """Set token-level attributes for tokens matched by Matcher patterns. Additionally supports importing patterns from tag maps and morph rules. - DOCS: https://spacy.io/api/attributeruler + DOCS: https://nightly.spacy.io/api/attributeruler """ def __init__( @@ -59,7 +59,7 @@ class AttributeRuler(Pipe): RETURNS (AttributeRuler): The AttributeRuler component. - DOCS: https://spacy.io/api/attributeruler#init + DOCS: https://nightly.spacy.io/api/attributeruler#init """ self.name = name self.vocab = vocab @@ -77,7 +77,7 @@ class AttributeRuler(Pipe): doc (Doc): The document to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/attributeruler#call + DOCS: https://nightly.spacy.io/api/attributeruler#call """ matches = sorted(self.matcher(doc)) @@ -121,7 +121,7 @@ class AttributeRuler(Pipe): tag_map (dict): The tag map that maps fine-grained tags to coarse-grained tags and morphological features. - DOCS: https://spacy.io/api/attributeruler#load_from_morph_rules + DOCS: https://nightly.spacy.io/api/attributeruler#load_from_morph_rules """ for tag, attrs in tag_map.items(): pattern = [{"TAG": tag}] @@ -139,7 +139,7 @@ class AttributeRuler(Pipe): fine-grained tags to coarse-grained tags, lemmas and morphological features. - DOCS: https://spacy.io/api/attributeruler#load_from_morph_rules + DOCS: https://nightly.spacy.io/api/attributeruler#load_from_morph_rules """ for tag in morph_rules: for word in morph_rules[tag]: @@ -163,7 +163,7 @@ class AttributeRuler(Pipe): index (int): The index of the token in the matched span to modify. May be negative to index from the end of the span. Defaults to 0. 
- DOCS: https://spacy.io/api/attributeruler#add + DOCS: https://nightly.spacy.io/api/attributeruler#add """ self.matcher.add(len(self.attrs), patterns) self._attrs_unnormed.append(attrs) @@ -178,7 +178,7 @@ class AttributeRuler(Pipe): as the arguments to AttributeRuler.add (patterns/attrs/index) to add as patterns. - DOCS: https://spacy.io/api/attributeruler#add_patterns + DOCS: https://nightly.spacy.io/api/attributeruler#add_patterns """ for p in pattern_dicts: self.add(**p) @@ -203,7 +203,7 @@ class AttributeRuler(Pipe): Scorer.score_token_attr for the attributes "tag", "pos", "morph" and "lemma" for the target token attributes. - DOCS: https://spacy.io/api/tagger#score + DOCS: https://nightly.spacy.io/api/tagger#score """ validate_examples(examples, "AttributeRuler.score") results = {} @@ -227,7 +227,7 @@ class AttributeRuler(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/attributeruler#to_bytes + DOCS: https://nightly.spacy.io/api/attributeruler#to_bytes """ serialize = {} serialize["vocab"] = self.vocab.to_bytes @@ -243,7 +243,7 @@ class AttributeRuler(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. returns (AttributeRuler): The loaded object. - DOCS: https://spacy.io/api/attributeruler#from_bytes + DOCS: https://nightly.spacy.io/api/attributeruler#from_bytes """ def load_patterns(b): @@ -264,7 +264,7 @@ class AttributeRuler(Pipe): path (Union[Path, str]): A path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/attributeruler#to_disk + DOCS: https://nightly.spacy.io/api/attributeruler#to_disk """ serialize = { "vocab": lambda p: self.vocab.to_disk(p), @@ -279,7 +279,7 @@ class AttributeRuler(Pipe): path (Union[Path, str]): A path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/attributeruler#from_disk + DOCS: https://nightly.spacy.io/api/attributeruler#from_disk """ def load_patterns(p): diff --git a/spacy/pipeline/dep_parser.pyx b/spacy/pipeline/dep_parser.pyx index 76f58df58..eee4ed535 100644 --- a/spacy/pipeline/dep_parser.pyx +++ b/spacy/pipeline/dep_parser.pyx @@ -105,7 +105,7 @@ def make_parser( cdef class DependencyParser(Parser): """Pipeline component for dependency parsing. - DOCS: https://spacy.io/api/dependencyparser + DOCS: https://nightly.spacy.io/api/dependencyparser """ TransitionSystem = ArcEager @@ -146,7 +146,7 @@ cdef class DependencyParser(Parser): RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_spans and Scorer.score_deps. 
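The AttributeRuler hunks above document add_patterns as accepting dicts with the same keys as AttributeRuler.add (patterns / attrs / index). A small sketch of that input format; the "attribute_ruler" factory name, the blank English pipeline and the example values are assumptions for illustration.

    import spacy

    nlp = spacy.blank("en")  # assumption: any Language object works here
    ruler = nlp.add_pipe("attribute_ruler")  # assumption: factory registered under this name
    ruler.add_patterns(
        [
            {
                "patterns": [[{"ORTH": "spacy"}]],   # one token-Matcher pattern per inner list
                "attrs": {"LEMMA": "spaCy", "POS": "PROPN"},
                "index": 0,                          # which token in the matched span to modify
            }
        ]
    )
    doc = nlp("i love spacy")
    print(doc[2].lemma_, doc[2].pos_)  # spaCy PROPN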
- DOCS: https://spacy.io/api/dependencyparser#score + DOCS: https://nightly.spacy.io/api/dependencyparser#score """ validate_examples(examples, "DependencyParser.score") def dep_getter(token, attr): @@ -156,7 +156,7 @@ cdef class DependencyParser(Parser): results = {} results.update(Scorer.score_spans(examples, "sents", **kwargs)) kwargs.setdefault("getter", dep_getter) - kwargs.setdefault("ignore_label", ("p", "punct")) + kwargs.setdefault("ignore_labels", ("p", "punct")) results.update(Scorer.score_deps(examples, "dep", **kwargs)) del results["sents_per_type"] return results diff --git a/spacy/pipeline/entity_linker.py b/spacy/pipeline/entity_linker.py index c45cdce75..d4f1e6b56 100644 --- a/spacy/pipeline/entity_linker.py +++ b/spacy/pipeline/entity_linker.py @@ -39,12 +39,12 @@ DEFAULT_NEL_MODEL = Config().from_str(default_model_config)["model"] requires=["doc.ents", "doc.sents", "token.ent_iob", "token.ent_type"], assigns=["token.ent_kb_id"], default_config={ - "kb_loader": {"@assets": "spacy.EmptyKB.v1", "entity_vector_length": 64}, + "kb_loader": {"@misc": "spacy.EmptyKB.v1", "entity_vector_length": 64}, "model": DEFAULT_NEL_MODEL, "labels_discard": [], "incl_prior": True, "incl_context": True, - "get_candidates": {"@assets": "spacy.CandidateGenerator.v1"}, + "get_candidates": {"@misc": "spacy.CandidateGenerator.v1"}, }, ) def make_entity_linker( @@ -83,7 +83,7 @@ def make_entity_linker( class EntityLinker(Pipe): """Pipeline component for named entity linking. - DOCS: https://spacy.io/api/entitylinker + DOCS: https://nightly.spacy.io/api/entitylinker """ NIL = "NIL" # string used to refer to a non-existing link @@ -111,7 +111,7 @@ class EntityLinker(Pipe): incl_prior (bool): Whether or not to include prior probabilities from the KB in the model. incl_context (bool): Whether or not to include the local context in the model. - DOCS: https://spacy.io/api/entitylinker#init + DOCS: https://nightly.spacy.io/api/entitylinker#init """ self.vocab = vocab self.model = model @@ -151,7 +151,7 @@ class EntityLinker(Pipe): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/entitylinker#begin_training + DOCS: https://nightly.spacy.io/api/entitylinker#begin_training """ self.require_kb() nO = self.kb.entity_vector_length @@ -182,7 +182,7 @@ class EntityLinker(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/entitylinker#update + DOCS: https://nightly.spacy.io/api/entitylinker#update """ self.require_kb() if losses is None: @@ -264,7 +264,7 @@ class EntityLinker(Pipe): doc (Doc): The document to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/entitylinker#call + DOCS: https://nightly.spacy.io/api/entitylinker#call """ kb_ids = self.predict([doc]) self.set_annotations([doc], kb_ids) @@ -279,7 +279,7 @@ class EntityLinker(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/entitylinker#pipe + DOCS: https://nightly.spacy.io/api/entitylinker#pipe """ for docs in util.minibatch(stream, size=batch_size): kb_ids = self.predict(docs) @@ -294,7 +294,7 @@ class EntityLinker(Pipe): docs (Iterable[Doc]): The documents to predict. RETURNS (List[int]): The models prediction for each document. 
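In the entity linker factory above, the default_config now refers to the registered helpers through "@misc" instead of "@assets", matching the registry.misc.register decorators earlier in this patch. A sketch of passing such a config when adding the component, using only values that appear in the default_config; the "entity_linker" factory name and the blank English pipeline are assumptions for illustration.

    import spacy

    nlp = spacy.blank("en")  # assumption: any Language object
    nlp.add_pipe(
        "entity_linker",  # assumption: factory registered under this name
        config={
            # "@misc" replaces the old "@assets" reference for these registered functions
            "kb_loader": {"@misc": "spacy.EmptyKB.v1", "entity_vector_length": 64},
            "get_candidates": {"@misc": "spacy.CandidateGenerator.v1"},
        },
    )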
- DOCS: https://spacy.io/api/entitylinker#predict + DOCS: https://nightly.spacy.io/api/entitylinker#predict """ self.require_kb() entity_count = 0 @@ -391,7 +391,7 @@ class EntityLinker(Pipe): docs (Iterable[Doc]): The documents to modify. kb_ids (List[str]): The IDs to set, produced by EntityLinker.predict. - DOCS: https://spacy.io/api/entitylinker#set_annotations + DOCS: https://nightly.spacy.io/api/entitylinker#set_annotations """ count_ents = len([ent for doc in docs for ent in doc.ents]) if count_ents != len(kb_ids): @@ -412,7 +412,7 @@ class EntityLinker(Pipe): path (str / Path): Path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/entitylinker#to_disk + DOCS: https://nightly.spacy.io/api/entitylinker#to_disk """ serialize = {} serialize["cfg"] = lambda p: srsly.write_json(p, self.cfg) @@ -430,7 +430,7 @@ class EntityLinker(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (EntityLinker): The modified EntityLinker object. - DOCS: https://spacy.io/api/entitylinker#from_disk + DOCS: https://nightly.spacy.io/api/entitylinker#from_disk """ def load_model(p): diff --git a/spacy/pipeline/entityruler.py b/spacy/pipeline/entityruler.py index 5137dfec2..4f4ff230e 100644 --- a/spacy/pipeline/entityruler.py +++ b/spacy/pipeline/entityruler.py @@ -53,8 +53,8 @@ class EntityRuler: purely rule-based entity recognition system. After initialization, the component is typically added to the pipeline using `nlp.add_pipe`. - DOCS: https://spacy.io/api/entityruler - USAGE: https://spacy.io/usage/rule-based-matching#entityruler + DOCS: https://nightly.spacy.io/api/entityruler + USAGE: https://nightly.spacy.io/usage/rule-based-matching#entityruler """ def __init__( @@ -88,7 +88,7 @@ class EntityRuler: added by the model, overwrite them by matches if necessary. ent_id_sep (str): Separator used internally for entity IDs. - DOCS: https://spacy.io/api/entityruler#init + DOCS: https://nightly.spacy.io/api/entityruler#init """ self.nlp = nlp self.name = name @@ -127,13 +127,13 @@ class EntityRuler: doc (Doc): The Doc object in the pipeline. RETURNS (Doc): The Doc with added entities, if available. - DOCS: https://spacy.io/api/entityruler#call + DOCS: https://nightly.spacy.io/api/entityruler#call """ matches = list(self.matcher(doc)) + list(self.phrase_matcher(doc)) matches = set( [(m_id, start, end) for m_id, start, end in matches if start != end] ) - get_sort_key = lambda m: (m[2] - m[1], m[1]) + get_sort_key = lambda m: (m[2] - m[1], -m[1]) matches = sorted(matches, key=get_sort_key, reverse=True) entities = list(doc.ents) new_entities = [] @@ -165,7 +165,7 @@ class EntityRuler: RETURNS (set): The string labels. - DOCS: https://spacy.io/api/entityruler#labels + DOCS: https://nightly.spacy.io/api/entityruler#labels """ keys = set(self.token_patterns.keys()) keys.update(self.phrase_patterns.keys()) @@ -185,7 +185,7 @@ class EntityRuler: RETURNS (set): The string entity ids. - DOCS: https://spacy.io/api/entityruler#ent_ids + DOCS: https://nightly.spacy.io/api/entityruler#ent_ids """ keys = set(self.token_patterns.keys()) keys.update(self.phrase_patterns.keys()) @@ -203,7 +203,7 @@ class EntityRuler: RETURNS (list): The original patterns, one dictionary per pattern. 
- DOCS: https://spacy.io/api/entityruler#patterns + DOCS: https://nightly.spacy.io/api/entityruler#patterns """ all_patterns = [] for label, patterns in self.token_patterns.items(): @@ -230,7 +230,7 @@ class EntityRuler: patterns (list): The patterns to add. - DOCS: https://spacy.io/api/entityruler#add_patterns + DOCS: https://nightly.spacy.io/api/entityruler#add_patterns """ # disable the nlp components after this one in case they hadn't been initialized / deserialised yet @@ -324,7 +324,7 @@ class EntityRuler: patterns_bytes (bytes): The bytestring to load. RETURNS (EntityRuler): The loaded entity ruler. - DOCS: https://spacy.io/api/entityruler#from_bytes + DOCS: https://nightly.spacy.io/api/entityruler#from_bytes """ cfg = srsly.msgpack_loads(patterns_bytes) self.clear() @@ -346,7 +346,7 @@ class EntityRuler: RETURNS (bytes): The serialized patterns. - DOCS: https://spacy.io/api/entityruler#to_bytes + DOCS: https://nightly.spacy.io/api/entityruler#to_bytes """ serial = { "overwrite": self.overwrite, @@ -365,7 +365,7 @@ class EntityRuler: path (str / Path): The JSONL file to load. RETURNS (EntityRuler): The loaded entity ruler. - DOCS: https://spacy.io/api/entityruler#from_disk + DOCS: https://nightly.spacy.io/api/entityruler#from_disk """ path = ensure_path(path) self.clear() @@ -401,7 +401,7 @@ class EntityRuler: path (str / Path): The JSONL file to save. - DOCS: https://spacy.io/api/entityruler#to_disk + DOCS: https://nightly.spacy.io/api/entityruler#to_disk """ path = ensure_path(path) cfg = { diff --git a/spacy/pipeline/functions.py b/spacy/pipeline/functions.py index 501884873..7e68ea369 100644 --- a/spacy/pipeline/functions.py +++ b/spacy/pipeline/functions.py @@ -15,7 +15,7 @@ def merge_noun_chunks(doc: Doc) -> Doc: doc (Doc): The Doc object. RETURNS (Doc): The Doc object with merged noun chunks. - DOCS: https://spacy.io/api/pipeline-functions#merge_noun_chunks + DOCS: https://nightly.spacy.io/api/pipeline-functions#merge_noun_chunks """ if not doc.is_parsed: return doc @@ -37,7 +37,7 @@ def merge_entities(doc: Doc): doc (Doc): The Doc object. RETURNS (Doc): The Doc object with merged entities. - DOCS: https://spacy.io/api/pipeline-functions#merge_entities + DOCS: https://nightly.spacy.io/api/pipeline-functions#merge_entities """ with doc.retokenize() as retokenizer: for ent in doc.ents: @@ -54,7 +54,7 @@ def merge_subtokens(doc: Doc, label: str = "subtok") -> Doc: label (str): The subtoken dependency label. RETURNS (Doc): The Doc object with merged subtokens. - DOCS: https://spacy.io/api/pipeline-functions#merge_subtokens + DOCS: https://nightly.spacy.io/api/pipeline-functions#merge_subtokens """ # TODO: make stateful component with "label" config merger = Matcher(doc.vocab) diff --git a/spacy/pipeline/lemmatizer.py b/spacy/pipeline/lemmatizer.py index 6cea65fec..3f3e387b7 100644 --- a/spacy/pipeline/lemmatizer.py +++ b/spacy/pipeline/lemmatizer.py @@ -43,7 +43,7 @@ class Lemmatizer(Pipe): The Lemmatizer supports simple part-of-speech-sensitive suffix rules and lookup tables. - DOCS: https://spacy.io/api/lemmatizer + DOCS: https://nightly.spacy.io/api/lemmatizer """ @classmethod @@ -54,7 +54,7 @@ class Lemmatizer(Pipe): mode (str): The lemmatizer mode. RETURNS (dict): The lookups configuration settings for this mode. - DOCS: https://spacy.io/api/lemmatizer#get_lookups_config + DOCS: https://nightly.spacy.io/api/lemmatizer#get_lookups_config """ if mode == "lookup": return { @@ -80,7 +80,7 @@ class Lemmatizer(Pipe): lookups should be loaded. RETURNS (Lookups): The Lookups object. 
- DOCS: https://spacy.io/api/lemmatizer#get_lookups_config + DOCS: https://nightly.spacy.io/api/lemmatizer#get_lookups_config """ config = cls.get_lookups_config(mode) required_tables = config.get("required_tables", []) @@ -123,7 +123,7 @@ class Lemmatizer(Pipe): overwrite (bool): Whether to overwrite existing lemmas. Defaults to `False`. - DOCS: https://spacy.io/api/lemmatizer#init + DOCS: https://nightly.spacy.io/api/lemmatizer#init """ self.vocab = vocab self.model = model @@ -152,7 +152,7 @@ class Lemmatizer(Pipe): doc (Doc): The Doc to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/lemmatizer#call + DOCS: https://nightly.spacy.io/api/lemmatizer#call """ for token in doc: if self.overwrite or token.lemma == 0: @@ -168,7 +168,7 @@ class Lemmatizer(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/lemmatizer#pipe + DOCS: https://nightly.spacy.io/api/lemmatizer#pipe """ for doc in stream: doc = self(doc) @@ -180,7 +180,7 @@ class Lemmatizer(Pipe): token (Token): The token to lemmatize. RETURNS (list): The available lemmas for the string. - DOCS: https://spacy.io/api/lemmatizer#lookup_lemmatize + DOCS: https://nightly.spacy.io/api/lemmatizer#lookup_lemmatize """ lookup_table = self.lookups.get_table("lemma_lookup", {}) result = lookup_table.get(token.text, token.text) @@ -194,7 +194,7 @@ class Lemmatizer(Pipe): token (Token): The token to lemmatize. RETURNS (list): The available lemmas for the string. - DOCS: https://spacy.io/api/lemmatizer#rule_lemmatize + DOCS: https://nightly.spacy.io/api/lemmatizer#rule_lemmatize """ cache_key = (token.orth, token.pos, token.morph) if cache_key in self.cache: @@ -260,7 +260,7 @@ class Lemmatizer(Pipe): token (Token): The token. RETURNS (bool): Whether the token is a base form. - DOCS: https://spacy.io/api/lemmatizer#is_base_form + DOCS: https://nightly.spacy.io/api/lemmatizer#is_base_form """ return False @@ -270,7 +270,7 @@ class Lemmatizer(Pipe): examples (Iterable[Example]): The examples to score. RETURNS (Dict[str, Any]): The scores. - DOCS: https://spacy.io/api/lemmatizer#score + DOCS: https://nightly.spacy.io/api/lemmatizer#score """ validate_examples(examples, "Lemmatizer.score") return Scorer.score_token_attr(examples, "lemma", **kwargs) @@ -282,7 +282,7 @@ class Lemmatizer(Pipe): it doesn't exist. exclude (list): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/vocab#to_disk + DOCS: https://nightly.spacy.io/api/vocab#to_disk """ serialize = {} serialize["vocab"] = lambda p: self.vocab.to_disk(p) @@ -297,7 +297,7 @@ class Lemmatizer(Pipe): exclude (list): String names of serialization fields to exclude. RETURNS (Vocab): The modified `Vocab` object. - DOCS: https://spacy.io/api/vocab#to_disk + DOCS: https://nightly.spacy.io/api/vocab#to_disk """ deserialize = {} deserialize["vocab"] = lambda p: self.vocab.from_disk(p) @@ -310,7 +310,7 @@ class Lemmatizer(Pipe): exclude (list): String names of serialization fields to exclude. RETURNS (bytes): The serialized form of the `Vocab` object. - DOCS: https://spacy.io/api/vocab#to_bytes + DOCS: https://nightly.spacy.io/api/vocab#to_bytes """ serialize = {} serialize["vocab"] = self.vocab.to_bytes @@ -324,7 +324,7 @@ class Lemmatizer(Pipe): exclude (list): String names of serialization fields to exclude. RETURNS (Vocab): The `Vocab` object. 
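# --- Editor's sketch, not part of the diff: the lookup table consulted by
# lookup_lemmatize above. The token text is looked up in the "lemma_lookup"
# table and falls back to the text itself, mirroring
# lookup_table.get(token.text, token.text). The table contents are invented.
from spacy.lookups import Lookups

lookups = Lookups()
lookups.add_table("lemma_lookup", {"went": "go", "mice": "mouse"})
table = lookups.get_table("lemma_lookup")
print(table.get("went", "went"))    # "go"
print(table.get("spaCy", "spaCy"))  # no entry, falls back to the original text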
- DOCS: https://spacy.io/api/vocab#from_bytes + DOCS: https://nightly.spacy.io/api/vocab#from_bytes """ deserialize = {} deserialize["vocab"] = lambda b: self.vocab.from_bytes(b) diff --git a/spacy/pipeline/morphologizer.pyx b/spacy/pipeline/morphologizer.pyx index 329a05f90..bcb555b90 100644 --- a/spacy/pipeline/morphologizer.pyx +++ b/spacy/pipeline/morphologizer.pyx @@ -79,7 +79,7 @@ class Morphologizer(Tagger): labels_morph (dict): Mapping of morph + POS tags to morph labels. labels_pos (dict): Mapping of morph + POS tags to POS tags. - DOCS: https://spacy.io/api/morphologizer#init + DOCS: https://nightly.spacy.io/api/morphologizer#init """ self.vocab = vocab self.model = model @@ -106,7 +106,7 @@ class Morphologizer(Tagger): label (str): The label to add. RETURNS (int): 0 if label is already present, otherwise 1. - DOCS: https://spacy.io/api/morphologizer#add_label + DOCS: https://nightly.spacy.io/api/morphologizer#add_label """ if not isinstance(label, str): raise ValueError(Errors.E187) @@ -139,7 +139,7 @@ class Morphologizer(Tagger): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/morphologizer#begin_training + DOCS: https://nightly.spacy.io/api/morphologizer#begin_training """ if not hasattr(get_examples, "__call__"): err = Errors.E930.format(name="Morphologizer", obj=type(get_examples)) @@ -169,7 +169,7 @@ class Morphologizer(Tagger): docs (Iterable[Doc]): The documents to modify. batch_tag_ids: The IDs to set, produced by Morphologizer.predict. - DOCS: https://spacy.io/api/morphologizer#set_annotations + DOCS: https://nightly.spacy.io/api/morphologizer#set_annotations """ if isinstance(docs, Doc): docs = [docs] @@ -194,7 +194,7 @@ class Morphologizer(Tagger): scores: Scores representing the model's predictions. RETUTNRS (Tuple[float, float]): The loss and the gradient. - DOCS: https://spacy.io/api/morphologizer#get_loss + DOCS: https://nightly.spacy.io/api/morphologizer#get_loss """ validate_examples(examples, "Morphologizer.get_loss") loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False) @@ -231,7 +231,7 @@ class Morphologizer(Tagger): Scorer.score_token_attr for the attributes "pos" and "morph" and Scorer.score_token_attr_per_feat for the attribute "morph". - DOCS: https://spacy.io/api/morphologizer#score + DOCS: https://nightly.spacy.io/api/morphologizer#score """ validate_examples(examples, "Morphologizer.score") results = {} @@ -247,7 +247,7 @@ class Morphologizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/morphologizer#to_bytes + DOCS: https://nightly.spacy.io/api/morphologizer#to_bytes """ serialize = {} serialize["model"] = self.model.to_bytes @@ -262,7 +262,7 @@ class Morphologizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Morphologizer): The loaded Morphologizer. - DOCS: https://spacy.io/api/morphologizer#from_bytes + DOCS: https://nightly.spacy.io/api/morphologizer#from_bytes """ def load_model(b): try: @@ -284,7 +284,7 @@ class Morphologizer(Tagger): path (str / Path): Path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. 
- DOCS: https://spacy.io/api/morphologizer#to_disk + DOCS: https://nightly.spacy.io/api/morphologizer#to_disk """ serialize = { "vocab": lambda p: self.vocab.to_disk(p), @@ -300,7 +300,7 @@ class Morphologizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Morphologizer): The modified Morphologizer object. - DOCS: https://spacy.io/api/morphologizer#from_disk + DOCS: https://nightly.spacy.io/api/morphologizer#from_disk """ def load_model(p): with p.open("rb") as file_: diff --git a/spacy/pipeline/ner.pyx b/spacy/pipeline/ner.pyx index 631b5ae72..d9f33ccb4 100644 --- a/spacy/pipeline/ner.pyx +++ b/spacy/pipeline/ner.pyx @@ -88,7 +88,7 @@ def make_ner( cdef class EntityRecognizer(Parser): """Pipeline component for named entity recognition. - DOCS: https://spacy.io/api/entityrecognizer + DOCS: https://nightly.spacy.io/api/entityrecognizer """ TransitionSystem = BiluoPushDown @@ -119,7 +119,7 @@ cdef class EntityRecognizer(Parser): examples (Iterable[Example]): The examples to score. RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_spans. - DOCS: https://spacy.io/api/entityrecognizer#score + DOCS: https://nightly.spacy.io/api/entityrecognizer#score """ validate_examples(examples, "EntityRecognizer.score") return Scorer.score_spans(examples, "ents", **kwargs) diff --git a/spacy/pipeline/pipe.pyx b/spacy/pipeline/pipe.pyx index a3f379a97..2518ebad3 100644 --- a/spacy/pipeline/pipe.pyx +++ b/spacy/pipeline/pipe.pyx @@ -15,7 +15,7 @@ cdef class Pipe: from it and it defines the interface that components should follow to function as trainable components in a spaCy pipeline. - DOCS: https://spacy.io/api/pipe + DOCS: https://nightly.spacy.io/api/pipe """ def __init__(self, vocab, model, name, **cfg): """Initialize a pipeline component. @@ -25,7 +25,7 @@ cdef class Pipe: name (str): The component instance name. **cfg: Additonal settings and config parameters. - DOCS: https://spacy.io/api/pipe#init + DOCS: https://nightly.spacy.io/api/pipe#init """ self.vocab = vocab self.model = model @@ -40,7 +40,7 @@ cdef class Pipe: docs (Doc): The Doc to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/pipe#call + DOCS: https://nightly.spacy.io/api/pipe#call """ scores = self.predict([doc]) self.set_annotations([doc], scores) @@ -55,7 +55,7 @@ cdef class Pipe: batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/pipe#pipe + DOCS: https://nightly.spacy.io/api/pipe#pipe """ for docs in util.minibatch(stream, size=batch_size): scores = self.predict(docs) @@ -69,7 +69,7 @@ cdef class Pipe: docs (Iterable[Doc]): The documents to predict. RETURNS: Vector representations for each token in the documents. - DOCS: https://spacy.io/api/pipe#predict + DOCS: https://nightly.spacy.io/api/pipe#predict """ raise NotImplementedError(Errors.E931.format(method="predict", name=self.name)) @@ -79,7 +79,7 @@ cdef class Pipe: docs (Iterable[Doc]): The documents to modify. scores: The scores to assign. - DOCS: https://spacy.io/api/pipe#set_annotations + DOCS: https://nightly.spacy.io/api/pipe#set_annotations """ raise NotImplementedError(Errors.E931.format(method="set_annotations", name=self.name)) @@ -96,7 +96,7 @@ cdef class Pipe: Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. 
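# --- Editor's sketch, not part of the diff: the __call__ vs. pipe() contract
# described in the Pipe docstrings above, seen from the user side. Calling the
# pipeline on a single text goes through each component's __call__, while
# nlp.pipe streams documents through each component's pipe() in minibatches.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer")
doc = nlp("One text. Processed on its own.")
docs = list(nlp.pipe(["First text.", "Second text.", "Third text."], batch_size=2))
print(len(list(doc.sents)), len(docs))  # 2 sentences, 3 streamed docs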
- DOCS: https://spacy.io/api/pipe#update + DOCS: https://nightly.spacy.io/api/pipe#update """ if losses is None: losses = {} @@ -132,7 +132,7 @@ cdef class Pipe: Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/pipe#rehearse + DOCS: https://nightly.spacy.io/api/pipe#rehearse """ pass @@ -144,7 +144,7 @@ cdef class Pipe: scores: Scores representing the model's predictions. RETUTNRS (Tuple[float, float]): The loss and the gradient. - DOCS: https://spacy.io/api/pipe#get_loss + DOCS: https://nightly.spacy.io/api/pipe#get_loss """ raise NotImplementedError(Errors.E931.format(method="get_loss", name=self.name)) @@ -156,7 +156,7 @@ cdef class Pipe: label (str): The label to add. RETURNS (int): 0 if label is already present, otherwise 1. - DOCS: https://spacy.io/api/pipe#add_label + DOCS: https://nightly.spacy.io/api/pipe#add_label """ raise NotImplementedError(Errors.E931.format(method="add_label", name=self.name)) @@ -165,7 +165,7 @@ cdef class Pipe: RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/pipe#create_optimizer + DOCS: https://nightly.spacy.io/api/pipe#create_optimizer """ return util.create_default_optimizer() @@ -181,7 +181,7 @@ cdef class Pipe: create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/pipe#begin_training + DOCS: https://nightly.spacy.io/api/pipe#begin_training """ self.model.initialize() if sgd is None: @@ -200,7 +200,7 @@ cdef class Pipe: params (dict): The parameter values to use in the model. - DOCS: https://spacy.io/api/pipe#use_params + DOCS: https://nightly.spacy.io/api/pipe#use_params """ with self.model.use_params(params): yield @@ -211,7 +211,7 @@ cdef class Pipe: examples (Iterable[Example]): The examples to score. RETURNS (Dict[str, Any]): The scores. - DOCS: https://spacy.io/api/pipe#score + DOCS: https://nightly.spacy.io/api/pipe#score """ return {} @@ -221,7 +221,7 @@ cdef class Pipe: exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/pipe#to_bytes + DOCS: https://nightly.spacy.io/api/pipe#to_bytes """ serialize = {} serialize["cfg"] = lambda: srsly.json_dumps(self.cfg) @@ -236,7 +236,7 @@ cdef class Pipe: exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Pipe): The loaded object. - DOCS: https://spacy.io/api/pipe#from_bytes + DOCS: https://nightly.spacy.io/api/pipe#from_bytes """ def load_model(b): @@ -259,7 +259,7 @@ cdef class Pipe: path (str / Path): Path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/pipe#to_disk + DOCS: https://nightly.spacy.io/api/pipe#to_disk """ serialize = {} serialize["cfg"] = lambda p: srsly.write_json(p, self.cfg) @@ -274,7 +274,7 @@ cdef class Pipe: exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Pipe): The loaded object. - DOCS: https://spacy.io/api/pipe#from_disk + DOCS: https://nightly.spacy.io/api/pipe#from_disk """ def load_model(p): diff --git a/spacy/pipeline/sentencizer.pyx b/spacy/pipeline/sentencizer.pyx index 46d599497..aaf08d594 100644 --- a/spacy/pipeline/sentencizer.pyx +++ b/spacy/pipeline/sentencizer.pyx @@ -29,7 +29,7 @@ def make_sentencizer( class Sentencizer(Pipe): """Segment the Doc into sentences using a rule-based strategy. 
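# --- Editor's sketch, not part of the diff: the to_bytes/from_bytes round trip
# described by the Pipe serialization docstrings above, using the sentencizer
# as a stand-in component because it has no model weights to initialize.
# Per Sentencizer.to_bytes below, the payload is just the punct_chars set.
import spacy

nlp = spacy.blank("en")
senter = nlp.add_pipe("sentencizer")
data = senter.to_bytes()

nlp2 = spacy.blank("en")
senter2 = nlp2.add_pipe("sentencizer")
senter2.from_bytes(data)
print(senter2.punct_chars == senter.punct_chars)  # True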
- DOCS: https://spacy.io/api/sentencizer + DOCS: https://nightly.spacy.io/api/sentencizer """ default_punct_chars = ['!', '.', '?', '։', '؟', '۔', '܀', '܁', '܂', '߹', @@ -51,7 +51,7 @@ class Sentencizer(Pipe): serialized with the nlp object. RETURNS (Sentencizer): The sentencizer component. - DOCS: https://spacy.io/api/sentencizer#init + DOCS: https://nightly.spacy.io/api/sentencizer#init """ self.name = name if punct_chars: @@ -68,7 +68,7 @@ class Sentencizer(Pipe): doc (Doc): The document to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/sentencizer#call + DOCS: https://nightly.spacy.io/api/sentencizer#call """ start = 0 seen_period = False @@ -94,7 +94,7 @@ class Sentencizer(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/sentencizer#pipe + DOCS: https://nightly.spacy.io/api/sentencizer#pipe """ for docs in util.minibatch(stream, size=batch_size): predictions = self.predict(docs) @@ -157,7 +157,7 @@ class Sentencizer(Pipe): examples (Iterable[Example]): The examples to score. RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_spans. - DOCS: https://spacy.io/api/sentencizer#score + DOCS: https://nightly.spacy.io/api/sentencizer#score """ validate_examples(examples, "Sentencizer.score") results = Scorer.score_spans(examples, "sents", **kwargs) @@ -169,7 +169,7 @@ class Sentencizer(Pipe): RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/sentencizer#to_bytes + DOCS: https://nightly.spacy.io/api/sentencizer#to_bytes """ return srsly.msgpack_dumps({"punct_chars": list(self.punct_chars)}) @@ -179,7 +179,7 @@ class Sentencizer(Pipe): bytes_data (bytes): The data to load. returns (Sentencizer): The loaded object. - DOCS: https://spacy.io/api/sentencizer#from_bytes + DOCS: https://nightly.spacy.io/api/sentencizer#from_bytes """ cfg = srsly.msgpack_loads(bytes_data) self.punct_chars = set(cfg.get("punct_chars", self.default_punct_chars)) @@ -188,7 +188,7 @@ class Sentencizer(Pipe): def to_disk(self, path, *, exclude=tuple()): """Serialize the sentencizer to disk. - DOCS: https://spacy.io/api/sentencizer#to_disk + DOCS: https://nightly.spacy.io/api/sentencizer#to_disk """ path = util.ensure_path(path) path = path.with_suffix(".json") @@ -198,7 +198,7 @@ class Sentencizer(Pipe): def from_disk(self, path, *, exclude=tuple()): """Load the sentencizer from disk. - DOCS: https://spacy.io/api/sentencizer#from_disk + DOCS: https://nightly.spacy.io/api/sentencizer#from_disk """ path = util.ensure_path(path) path = path.with_suffix(".json") diff --git a/spacy/pipeline/senter.pyx b/spacy/pipeline/senter.pyx index e82225d27..b78be44f8 100644 --- a/spacy/pipeline/senter.pyx +++ b/spacy/pipeline/senter.pyx @@ -44,7 +44,7 @@ def make_senter(nlp: Language, name: str, model: Model): class SentenceRecognizer(Tagger): """Pipeline component for sentence segmentation. - DOCS: https://spacy.io/api/sentencerecognizer + DOCS: https://nightly.spacy.io/api/sentencerecognizer """ def __init__(self, vocab, model, name="senter"): """Initialize a sentence recognizer. @@ -54,7 +54,7 @@ class SentenceRecognizer(Tagger): name (str): The component instance name, used to add entries to the losses during training. - DOCS: https://spacy.io/api/sentencerecognizer#init + DOCS: https://nightly.spacy.io/api/sentencerecognizer#init """ self.vocab = vocab self.model = model @@ -76,7 +76,7 @@ class SentenceRecognizer(Tagger): docs (Iterable[Doc]): The documents to modify. 
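# --- Editor's sketch, not part of the diff: the rule-based splitting described
# by the Sentencizer docstrings above. Passing punct_chars through the factory
# config is assumed to be supported, matching the __init__ argument shown above.
import spacy

nlp = spacy.blank("en")
nlp.add_pipe("sentencizer", config={"punct_chars": ["!", ".", "?", "…"]})
doc = nlp("This is a sentence… This is another one!")
print([sent.text for sent in doc.sents])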
batch_tag_ids: The IDs to set, produced by SentenceRecognizer.predict. - DOCS: https://spacy.io/api/sentencerecognizer#set_annotations + DOCS: https://nightly.spacy.io/api/sentencerecognizer#set_annotations """ if isinstance(docs, Doc): docs = [docs] @@ -101,7 +101,7 @@ class SentenceRecognizer(Tagger): scores: Scores representing the model's predictions. RETUTNRS (Tuple[float, float]): The loss and the gradient. - DOCS: https://spacy.io/api/sentencerecognizer#get_loss + DOCS: https://nightly.spacy.io/api/sentencerecognizer#get_loss """ validate_examples(examples, "SentenceRecognizer.get_loss") labels = self.labels @@ -135,7 +135,7 @@ class SentenceRecognizer(Tagger): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/sentencerecognizer#begin_training + DOCS: https://nightly.spacy.io/api/sentencerecognizer#begin_training """ self.set_output(len(self.labels)) self.model.initialize() @@ -151,7 +151,7 @@ class SentenceRecognizer(Tagger): examples (Iterable[Example]): The examples to score. RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_spans. - DOCS: https://spacy.io/api/sentencerecognizer#score + DOCS: https://nightly.spacy.io/api/sentencerecognizer#score """ validate_examples(examples, "SentenceRecognizer.score") results = Scorer.score_spans(examples, "sents", **kwargs) @@ -164,7 +164,7 @@ class SentenceRecognizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/sentencerecognizer#to_bytes + DOCS: https://nightly.spacy.io/api/sentencerecognizer#to_bytes """ serialize = {} serialize["model"] = self.model.to_bytes @@ -179,7 +179,7 @@ class SentenceRecognizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Tagger): The loaded SentenceRecognizer. - DOCS: https://spacy.io/api/sentencerecognizer#from_bytes + DOCS: https://nightly.spacy.io/api/sentencerecognizer#from_bytes """ def load_model(b): try: @@ -201,7 +201,7 @@ class SentenceRecognizer(Tagger): path (str / Path): Path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/sentencerecognizer#to_disk + DOCS: https://nightly.spacy.io/api/sentencerecognizer#to_disk """ serialize = { "vocab": lambda p: self.vocab.to_disk(p), @@ -217,7 +217,7 @@ class SentenceRecognizer(Tagger): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Tagger): The modified SentenceRecognizer object. - DOCS: https://spacy.io/api/sentencerecognizer#from_disk + DOCS: https://nightly.spacy.io/api/sentencerecognizer#from_disk """ def load_model(p): with p.open("rb") as file_: diff --git a/spacy/pipeline/simple_ner.py b/spacy/pipeline/simple_ner.py index 5f3addbd7..c55edb067 100644 --- a/spacy/pipeline/simple_ner.py +++ b/spacy/pipeline/simple_ner.py @@ -78,7 +78,7 @@ class SimpleNER(Pipe): def add_label(self, label: str) -> None: """Add a new label to the pipe. label (str): The label to add. 
- DOCS: https://spacy.io/api/simplener#add_label + DOCS: https://nightly.spacy.io/api/simplener#add_label """ if not isinstance(label, str): raise ValueError(Errors.E187) diff --git a/spacy/pipeline/tagger.pyx b/spacy/pipeline/tagger.pyx index f831caefe..2b760c878 100644 --- a/spacy/pipeline/tagger.pyx +++ b/spacy/pipeline/tagger.pyx @@ -58,7 +58,7 @@ def make_tagger(nlp: Language, name: str, model: Model): class Tagger(Pipe): """Pipeline component for part-of-speech tagging. - DOCS: https://spacy.io/api/tagger + DOCS: https://nightly.spacy.io/api/tagger """ def __init__(self, vocab, model, name="tagger", *, labels=None): """Initialize a part-of-speech tagger. @@ -69,7 +69,7 @@ class Tagger(Pipe): losses during training. labels (List): The set of labels. Defaults to None. - DOCS: https://spacy.io/api/tagger#init + DOCS: https://nightly.spacy.io/api/tagger#init """ self.vocab = vocab self.model = model @@ -86,7 +86,7 @@ class Tagger(Pipe): RETURNS (Tuple[str]): The labels. - DOCS: https://spacy.io/api/tagger#labels + DOCS: https://nightly.spacy.io/api/tagger#labels """ return tuple(self.cfg["labels"]) @@ -96,7 +96,7 @@ class Tagger(Pipe): doc (Doc): The document to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/tagger#call + DOCS: https://nightly.spacy.io/api/tagger#call """ tags = self.predict([doc]) self.set_annotations([doc], tags) @@ -111,7 +111,7 @@ class Tagger(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/tagger#pipe + DOCS: https://nightly.spacy.io/api/tagger#pipe """ for docs in util.minibatch(stream, size=batch_size): tag_ids = self.predict(docs) @@ -124,7 +124,7 @@ class Tagger(Pipe): docs (Iterable[Doc]): The documents to predict. RETURNS: The models prediction for each document. - DOCS: https://spacy.io/api/tagger#predict + DOCS: https://nightly.spacy.io/api/tagger#predict """ if not any(len(doc) for doc in docs): # Handle cases where there are no tokens in any docs. @@ -153,7 +153,7 @@ class Tagger(Pipe): docs (Iterable[Doc]): The documents to modify. batch_tag_ids: The IDs to set, produced by Tagger.predict. - DOCS: https://spacy.io/api/tagger#set_annotations + DOCS: https://nightly.spacy.io/api/tagger#set_annotations """ if isinstance(docs, Doc): docs = [docs] @@ -182,7 +182,7 @@ class Tagger(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/tagger#update + DOCS: https://nightly.spacy.io/api/tagger#update """ if losses is None: losses = {} @@ -220,7 +220,7 @@ class Tagger(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/tagger#rehearse + DOCS: https://nightly.spacy.io/api/tagger#rehearse """ validate_examples(examples, "Tagger.rehearse") docs = [eg.predicted for eg in examples] @@ -247,7 +247,7 @@ class Tagger(Pipe): scores: Scores representing the model's predictions. RETUTNRS (Tuple[float, float]): The loss and the gradient. - DOCS: https://spacy.io/api/tagger#get_loss + DOCS: https://nightly.spacy.io/api/tagger#get_loss """ validate_examples(examples, "Tagger.get_loss") loss_func = SequenceCategoricalCrossentropy(names=self.labels, normalize=False) @@ -269,7 +269,7 @@ class Tagger(Pipe): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. 
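# --- Editor's sketch, not part of the diff: how the update/get_loss methods in
# the Tagger hunks above are driven in a simple training loop. Losses accumulate
# in a dict keyed by component name, as the docstrings describe. This is written
# against the begin_training API of this era (later releases renamed it to
# initialize); the Example import path matches v3 releases.
import random
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
tagger = nlp.add_pipe("tagger")
examples = [
    Example.from_dict(nlp.make_doc("I like green eggs"),
                      {"tags": ["N", "V", "J", "N"]}),
]
optimizer = nlp.begin_training(lambda: examples)
for i in range(5):
    random.shuffle(examples)
    losses = {}
    nlp.update(examples, sgd=optimizer, losses=losses)
    print(losses["tagger"])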
- DOCS: https://spacy.io/api/tagger#begin_training + DOCS: https://nightly.spacy.io/api/tagger#begin_training """ if not hasattr(get_examples, "__call__"): err = Errors.E930.format(name="Tagger", obj=type(get_examples)) @@ -307,7 +307,7 @@ class Tagger(Pipe): label (str): The label to add. RETURNS (int): 0 if label is already present, otherwise 1. - DOCS: https://spacy.io/api/tagger#add_label + DOCS: https://nightly.spacy.io/api/tagger#add_label """ if not isinstance(label, str): raise ValueError(Errors.E187) @@ -324,7 +324,7 @@ class Tagger(Pipe): RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_token_attr for the attributes "tag". - DOCS: https://spacy.io/api/tagger#score + DOCS: https://nightly.spacy.io/api/tagger#score """ validate_examples(examples, "Tagger.score") return Scorer.score_token_attr(examples, "tag", **kwargs) @@ -335,7 +335,7 @@ class Tagger(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (bytes): The serialized object. - DOCS: https://spacy.io/api/tagger#to_bytes + DOCS: https://nightly.spacy.io/api/tagger#to_bytes """ serialize = {} serialize["model"] = self.model.to_bytes @@ -350,7 +350,7 @@ class Tagger(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Tagger): The loaded Tagger. - DOCS: https://spacy.io/api/tagger#from_bytes + DOCS: https://nightly.spacy.io/api/tagger#from_bytes """ def load_model(b): try: @@ -372,7 +372,7 @@ class Tagger(Pipe): path (str / Path): Path to a directory. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/tagger#to_disk + DOCS: https://nightly.spacy.io/api/tagger#to_disk """ serialize = { "vocab": lambda p: self.vocab.to_disk(p), @@ -388,7 +388,7 @@ class Tagger(Pipe): exclude (Iterable[str]): String names of serialization fields to exclude. RETURNS (Tagger): The modified Tagger object. - DOCS: https://spacy.io/api/tagger#from_disk + DOCS: https://nightly.spacy.io/api/tagger#from_disk """ def load_model(p): with p.open("rb") as file_: diff --git a/spacy/pipeline/textcat.py b/spacy/pipeline/textcat.py index ce4f286e5..d6efb4348 100644 --- a/spacy/pipeline/textcat.py +++ b/spacy/pipeline/textcat.py @@ -92,7 +92,7 @@ def make_textcat( class TextCategorizer(Pipe): """Pipeline component for text classification. - DOCS: https://spacy.io/api/textcategorizer + DOCS: https://nightly.spacy.io/api/textcategorizer """ def __init__( @@ -111,7 +111,7 @@ class TextCategorizer(Pipe): losses during training. labels (Iterable[str]): The labels to use. - DOCS: https://spacy.io/api/textcategorizer#init + DOCS: https://nightly.spacy.io/api/textcategorizer#init """ self.vocab = vocab self.model = model @@ -124,7 +124,7 @@ class TextCategorizer(Pipe): def labels(self) -> Tuple[str]: """RETURNS (Tuple[str]): The labels currently added to the component. - DOCS: https://spacy.io/api/textcategorizer#labels + DOCS: https://nightly.spacy.io/api/textcategorizer#labels """ return tuple(self.cfg.setdefault("labels", [])) @@ -146,7 +146,7 @@ class TextCategorizer(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. - DOCS: https://spacy.io/api/textcategorizer#pipe + DOCS: https://nightly.spacy.io/api/textcategorizer#pipe """ for docs in util.minibatch(stream, size=batch_size): scores = self.predict(docs) @@ -159,7 +159,7 @@ class TextCategorizer(Pipe): docs (Iterable[Doc]): The documents to predict. RETURNS: The models prediction for each document. 
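# --- Editor's sketch, not part of the diff: the add_label contract documented
# in the Tagger hunks above -- 1 is returned for a newly added label, 0 if it
# was already present, and the labels property exposes the current set as a
# tuple. "NN" and "VBZ" are just sample tag names.
import spacy

nlp = spacy.blank("en")
tagger = nlp.add_pipe("tagger")
print(tagger.add_label("NN"))   # 1: newly added
print(tagger.add_label("VBZ"))  # 1: newly added
print(tagger.add_label("NN"))   # 0: already present
print(tagger.labels)            # ("NN", "VBZ")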
- DOCS: https://spacy.io/api/textcategorizer#predict + DOCS: https://nightly.spacy.io/api/textcategorizer#predict """ tensors = [doc.tensor for doc in docs] if not any(len(doc) for doc in docs): @@ -177,7 +177,7 @@ class TextCategorizer(Pipe): docs (Iterable[Doc]): The documents to modify. scores: The scores to set, produced by TextCategorizer.predict. - DOCS: https://spacy.io/api/textcategorizer#set_annotations + DOCS: https://nightly.spacy.io/api/textcategorizer#set_annotations """ for i, doc in enumerate(docs): for j, label in enumerate(self.labels): @@ -204,7 +204,7 @@ class TextCategorizer(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/textcategorizer#update + DOCS: https://nightly.spacy.io/api/textcategorizer#update """ if losses is None: losses = {} @@ -245,7 +245,7 @@ class TextCategorizer(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/textcategorizer#rehearse + DOCS: https://nightly.spacy.io/api/textcategorizer#rehearse """ if losses is not None: losses.setdefault(self.name, 0.0) @@ -289,7 +289,7 @@ class TextCategorizer(Pipe): scores: Scores representing the model's predictions. RETUTNRS (Tuple[float, float]): The loss and the gradient. - DOCS: https://spacy.io/api/textcategorizer#get_loss + DOCS: https://nightly.spacy.io/api/textcategorizer#get_loss """ validate_examples(examples, "TextCategorizer.get_loss") truths, not_missing = self._examples_to_truth(examples) @@ -305,7 +305,7 @@ class TextCategorizer(Pipe): label (str): The label to add. RETURNS (int): 0 if label is already present, otherwise 1. - DOCS: https://spacy.io/api/textcategorizer#add_label + DOCS: https://nightly.spacy.io/api/textcategorizer#add_label """ if not isinstance(label, str): raise ValueError(Errors.E187) @@ -343,7 +343,7 @@ class TextCategorizer(Pipe): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/textcategorizer#begin_training + DOCS: https://nightly.spacy.io/api/textcategorizer#begin_training """ if not hasattr(get_examples, "__call__"): err = Errors.E930.format(name="TextCategorizer", obj=type(get_examples)) @@ -378,7 +378,7 @@ class TextCategorizer(Pipe): positive_label (str): Optional positive label. RETURNS (Dict[str, Any]): The scores, produced by Scorer.score_cats. - DOCS: https://spacy.io/api/textcategorizer#score + DOCS: https://nightly.spacy.io/api/textcategorizer#score """ validate_examples(examples, "TextCategorizer.score") return Scorer.score_cats( diff --git a/spacy/pipeline/tok2vec.py b/spacy/pipeline/tok2vec.py index 7e61ccc02..5657d687d 100644 --- a/spacy/pipeline/tok2vec.py +++ b/spacy/pipeline/tok2vec.py @@ -56,7 +56,7 @@ class Tok2Vec(Pipe): a list of Doc objects as input, and output a list of 2d float arrays. name (str): The component instance name. - DOCS: https://spacy.io/api/tok2vec#init + DOCS: https://nightly.spacy.io/api/tok2vec#init """ self.vocab = vocab self.model = model @@ -91,7 +91,7 @@ class Tok2Vec(Pipe): docs (Doc): The Doc to process. RETURNS (Doc): The processed Doc. - DOCS: https://spacy.io/api/tok2vec#call + DOCS: https://nightly.spacy.io/api/tok2vec#call """ tokvecses = self.predict([doc]) self.set_annotations([doc], tokvecses) @@ -106,7 +106,7 @@ class Tok2Vec(Pipe): batch_size (int): The number of documents to buffer. YIELDS (Doc): Processed documents in order. 
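# --- Editor's sketch, not part of the diff: the data the TextCategorizer's
# update/get_loss methods above consume and what predict/set_annotations
# produce. Gold categories live under "cats" in the example annotations, and
# predictions end up in doc.cats as one score per label. The labels are
# invented; the Example import path matches v3 releases.
import spacy
from spacy.training import Example

nlp = spacy.blank("en")
textcat = nlp.add_pipe("textcat")
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
example = Example.from_dict(
    nlp.make_doc("I really liked it"),
    {"cats": {"POSITIVE": 1.0, "NEGATIVE": 0.0}},
)
print(example.reference.cats)  # {"POSITIVE": 1.0, "NEGATIVE": 0.0}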
- DOCS: https://spacy.io/api/tok2vec#pipe + DOCS: https://nightly.spacy.io/api/tok2vec#pipe """ for docs in minibatch(stream, batch_size): docs = list(docs) @@ -121,7 +121,7 @@ class Tok2Vec(Pipe): docs (Iterable[Doc]): The documents to predict. RETURNS: Vector representations for each token in the documents. - DOCS: https://spacy.io/api/tok2vec#predict + DOCS: https://nightly.spacy.io/api/tok2vec#predict """ tokvecs = self.model.predict(docs) batch_id = Tok2VecListener.get_batch_id(docs) @@ -135,7 +135,7 @@ class Tok2Vec(Pipe): docs (Iterable[Doc]): The documents to modify. tokvecses: The tensors to set, produced by Tok2Vec.predict. - DOCS: https://spacy.io/api/tok2vec#set_annotations + DOCS: https://nightly.spacy.io/api/tok2vec#set_annotations """ for doc, tokvecs in zip(docs, tokvecses): assert tokvecs.shape[0] == len(doc) @@ -162,7 +162,7 @@ class Tok2Vec(Pipe): Updated using the component name as the key. RETURNS (Dict[str, float]): The updated losses dictionary. - DOCS: https://spacy.io/api/tok2vec#update + DOCS: https://nightly.spacy.io/api/tok2vec#update """ if losses is None: losses = {} @@ -220,7 +220,7 @@ class Tok2Vec(Pipe): create_optimizer if it doesn't exist. RETURNS (thinc.api.Optimizer): The optimizer. - DOCS: https://spacy.io/api/tok2vec#begin_training + DOCS: https://nightly.spacy.io/api/tok2vec#begin_training """ docs = [Doc(self.vocab, words=["hello"])] self.model.initialize(X=docs) diff --git a/spacy/schemas.py b/spacy/schemas.py index be8db6a99..59af53301 100644 --- a/spacy/schemas.py +++ b/spacy/schemas.py @@ -57,12 +57,13 @@ def validate_token_pattern(obj: list) -> List[str]: class TokenPatternString(BaseModel): - REGEX: Optional[StrictStr] - IN: Optional[List[StrictStr]] - NOT_IN: Optional[List[StrictStr]] + REGEX: Optional[StrictStr] = Field(None, alias="regex") + IN: Optional[List[StrictStr]] = Field(None, alias="in") + NOT_IN: Optional[List[StrictStr]] = Field(None, alias="not_in") class Config: extra = "forbid" + allow_population_by_field_name = True # allow alias and field name @validator("*", pre=True, each_item=True, allow_reuse=True) def raise_for_none(cls, v): @@ -72,9 +73,9 @@ class TokenPatternString(BaseModel): class TokenPatternNumber(BaseModel): - REGEX: Optional[StrictStr] = None - IN: Optional[List[StrictInt]] = None - NOT_IN: Optional[List[StrictInt]] = None + REGEX: Optional[StrictStr] = Field(None, alias="regex") + IN: Optional[List[StrictInt]] = Field(None, alias="in") + NOT_IN: Optional[List[StrictInt]] = Field(None, alias="not_in") EQ: Union[StrictInt, StrictFloat] = Field(None, alias="==") NEQ: Union[StrictInt, StrictFloat] = Field(None, alias="!=") GEQ: Union[StrictInt, StrictFloat] = Field(None, alias=">=") @@ -84,6 +85,7 @@ class TokenPatternNumber(BaseModel): class Config: extra = "forbid" + allow_population_by_field_name = True # allow alias and field name @validator("*", pre=True, each_item=True, allow_reuse=True) def raise_for_none(cls, v): diff --git a/spacy/scorer.py b/spacy/scorer.py index 9bbc64cac..9b1831a91 100644 --- a/spacy/scorer.py +++ b/spacy/scorer.py @@ -85,7 +85,7 @@ class Scorer: ) -> None: """Initialize the Scorer. - DOCS: https://spacy.io/api/scorer#init + DOCS: https://nightly.spacy.io/api/scorer#init """ self.nlp = nlp self.cfg = cfg @@ -101,7 +101,7 @@ class Scorer: examples (Iterable[Example]): The predicted annotations + correct annotations. RETURNS (Dict): A dictionary of scores. 
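# --- Editor's sketch, not part of the diff: a minimal pydantic model mirroring
# the schemas.py change above. Declaring the uppercase field with a lowercase
# alias and enabling allow_population_by_field_name lets token patterns
# validate whether they spell the attribute as {"REGEX": ...} or {"regex": ...}.
from typing import List, Optional
from pydantic import BaseModel, Field, StrictStr

class TokenPatternStringSketch(BaseModel):
    REGEX: Optional[StrictStr] = Field(None, alias="regex")
    IN: Optional[List[StrictStr]] = Field(None, alias="in")

    class Config:
        extra = "forbid"
        allow_population_by_field_name = True  # accept alias and field name

print(TokenPatternStringSketch(**{"REGEX": "^spa"}).REGEX)  # via field name
print(TokenPatternStringSketch(**{"regex": "^spa"}).REGEX)  # via alias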
- DOCS: https://spacy.io/api/scorer#score + DOCS: https://nightly.spacy.io/api/scorer#score """ scores = {} if hasattr(self.nlp.tokenizer, "score"): @@ -121,7 +121,7 @@ class Scorer: RETURNS (Dict[str, float]): A dictionary containing the scores token_acc/p/r/f. - DOCS: https://spacy.io/api/scorer#score_tokenization + DOCS: https://nightly.spacy.io/api/scorer#score_tokenization """ acc_score = PRFScore() prf_score = PRFScore() @@ -169,7 +169,7 @@ class Scorer: RETURNS (Dict[str, float]): A dictionary containing the accuracy score under the key attr_acc. - DOCS: https://spacy.io/api/scorer#score_token_attr + DOCS: https://nightly.spacy.io/api/scorer#score_token_attr """ tag_score = PRFScore() for example in examples: @@ -263,7 +263,7 @@ class Scorer: RETURNS (Dict[str, Any]): A dictionary containing the PRF scores under the keys attr_p/r/f and the per-type PRF scores under attr_per_type. - DOCS: https://spacy.io/api/scorer#score_spans + DOCS: https://nightly.spacy.io/api/scorer#score_spans """ score = PRFScore() score_per_type = dict() @@ -350,7 +350,7 @@ class Scorer: attr_f_per_type, attr_auc_per_type - DOCS: https://spacy.io/api/scorer#score_cats + DOCS: https://nightly.spacy.io/api/scorer#score_cats """ if threshold is None: threshold = 0.5 if multi_label else 0.0 @@ -467,7 +467,7 @@ class Scorer: RETURNS (Dict[str, Any]): A dictionary containing the scores: attr_uas, attr_las, and attr_las_per_type. - DOCS: https://spacy.io/api/scorer#score_deps + DOCS: https://nightly.spacy.io/api/scorer#score_deps """ unlabelled = PRFScore() labelled = PRFScore() diff --git a/spacy/strings.pyx b/spacy/strings.pyx index 6a1d68221..cd442729c 100644 --- a/spacy/strings.pyx +++ b/spacy/strings.pyx @@ -91,7 +91,7 @@ cdef Utf8Str* _allocate(Pool mem, const unsigned char* chars, uint32_t length) e cdef class StringStore: """Look up strings by 64-bit hashes. - DOCS: https://spacy.io/api/stringstore + DOCS: https://nightly.spacy.io/api/stringstore """ def __init__(self, strings=None, freeze=False): """Create the StringStore. diff --git a/spacy/tests/conftest.py b/spacy/tests/conftest.py index 1c0595672..e17199a08 100644 --- a/spacy/tests/conftest.py +++ b/spacy/tests/conftest.py @@ -44,6 +44,11 @@ def ca_tokenizer(): return get_lang_class("ca")().tokenizer +@pytest.fixture(scope="session") +def cs_tokenizer(): + return get_lang_class("cs")().tokenizer + + @pytest.fixture(scope="session") def da_tokenizer(): return get_lang_class("da")().tokenizer @@ -204,6 +209,11 @@ def ru_lemmatizer(): return get_lang_class("ru")().add_pipe("lemmatizer") +@pytest.fixture(scope="session") +def sa_tokenizer(): + return get_lang_class("sa")().tokenizer + + @pytest.fixture(scope="session") def sr_tokenizer(): return get_lang_class("sr")().tokenizer diff --git a/spacy/tests/doc/test_doc_api.py b/spacy/tests/doc/test_doc_api.py index 954181df5..b37a31e43 100644 --- a/spacy/tests/doc/test_doc_api.py +++ b/spacy/tests/doc/test_doc_api.py @@ -317,7 +317,8 @@ def test_doc_from_array_morph(en_vocab): def test_doc_api_from_docs(en_tokenizer, de_tokenizer): - en_texts = ["Merging the docs is fun.", "They don't think alike."] + en_texts = ["Merging the docs is fun.", "", "They don't think alike."] + en_texts_without_empty = [t for t in en_texts if len(t)] de_text = "Wie war die Frage?" 
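# --- Editor's sketch, not part of the diff: the two Doc behaviours pinned down
# by the updated tests nearby. Doc.from_docs skips empty docs and, with the
# default ensure_whitespace=True, joins texts with a space where needed;
# Doc.char_span's alignment_mode controls what happens when character offsets
# don't line up with token boundaries.
import spacy
from spacy.tokens import Doc

nlp = spacy.blank("en")
docs = [nlp("Merging the docs is fun."), nlp(""), nlp("They don't think alike.")]
merged = Doc.from_docs(docs)
print(merged.text)  # the empty doc is dropped, a space joins the other two

doc = nlp("I like New York in Autumn.")
print(doc.char_span(7, 15))                             # "New York" (aligned)
print(doc.char_span(8, 15))                             # None under "strict"
print(doc.char_span(8, 15, alignment_mode="expand"))    # "New York"
print(doc.char_span(8, 15, alignment_mode="contract"))  # "York"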
en_docs = [en_tokenizer(text) for text in en_texts] docs_idx = en_texts[0].index("docs") @@ -338,14 +339,14 @@ def test_doc_api_from_docs(en_tokenizer, de_tokenizer): Doc.from_docs(en_docs + [de_doc]) m_doc = Doc.from_docs(en_docs) - assert len(en_docs) == len(list(m_doc.sents)) + assert len(en_texts_without_empty) == len(list(m_doc.sents)) assert len(str(m_doc)) > len(en_texts[0]) + len(en_texts[1]) - assert str(m_doc) == " ".join(en_texts) + assert str(m_doc) == " ".join(en_texts_without_empty) p_token = m_doc[len(en_docs[0]) - 1] assert p_token.text == "." and bool(p_token.whitespace_) en_docs_tokens = [t for doc in en_docs for t in doc] assert len(m_doc) == len(en_docs_tokens) - think_idx = len(en_texts[0]) + 1 + en_texts[1].index("think") + think_idx = len(en_texts[0]) + 1 + en_texts[2].index("think") assert m_doc[9].idx == think_idx with pytest.raises(AttributeError): # not callable, because it was not set via set_extension @@ -353,14 +354,14 @@ def test_doc_api_from_docs(en_tokenizer, de_tokenizer): assert len(m_doc.user_data) == len(en_docs[0].user_data) # but it's there m_doc = Doc.from_docs(en_docs, ensure_whitespace=False) - assert len(en_docs) == len(list(m_doc.sents)) - assert len(str(m_doc)) == len(en_texts[0]) + len(en_texts[1]) + assert len(en_texts_without_empty) == len(list(m_doc.sents)) + assert len(str(m_doc)) == sum(len(t) for t in en_texts) assert str(m_doc) == "".join(en_texts) p_token = m_doc[len(en_docs[0]) - 1] assert p_token.text == "." and not bool(p_token.whitespace_) en_docs_tokens = [t for doc in en_docs for t in doc] assert len(m_doc) == len(en_docs_tokens) - think_idx = len(en_texts[0]) + 0 + en_texts[1].index("think") + think_idx = len(en_texts[0]) + 0 + en_texts[2].index("think") assert m_doc[9].idx == think_idx m_doc = Doc.from_docs(en_docs, attrs=["lemma", "length", "pos"]) @@ -369,12 +370,12 @@ def test_doc_api_from_docs(en_tokenizer, de_tokenizer): assert list(m_doc.sents) assert len(str(m_doc)) > len(en_texts[0]) + len(en_texts[1]) # space delimiter considered, although spacy attribute was missing - assert str(m_doc) == " ".join(en_texts) + assert str(m_doc) == " ".join(en_texts_without_empty) p_token = m_doc[len(en_docs[0]) - 1] assert p_token.text == "." 
and bool(p_token.whitespace_) en_docs_tokens = [t for doc in en_docs for t in doc] assert len(m_doc) == len(en_docs_tokens) - think_idx = len(en_texts[0]) + 1 + en_texts[1].index("think") + think_idx = len(en_texts[0]) + 1 + en_texts[2].index("think") assert m_doc[9].idx == think_idx diff --git a/spacy/tests/doc/test_span.py b/spacy/tests/doc/test_span.py index 79e8f31c0..1e9623484 100644 --- a/spacy/tests/doc/test_span.py +++ b/spacy/tests/doc/test_span.py @@ -162,11 +162,36 @@ def test_spans_are_hashable(en_tokenizer): def test_spans_by_character(doc): span1 = doc[1:-2] + + # default and specified alignment mode "strict" span2 = doc.char_span(span1.start_char, span1.end_char, label="GPE") assert span1.start_char == span2.start_char assert span1.end_char == span2.end_char assert span2.label_ == "GPE" + span2 = doc.char_span( + span1.start_char, span1.end_char, label="GPE", alignment_mode="strict" + ) + assert span1.start_char == span2.start_char + assert span1.end_char == span2.end_char + assert span2.label_ == "GPE" + + # alignment mode "contract" + span2 = doc.char_span( + span1.start_char - 3, span1.end_char, label="GPE", alignment_mode="contract" + ) + assert span1.start_char == span2.start_char + assert span1.end_char == span2.end_char + assert span2.label_ == "GPE" + + # alignment mode "expand" + span2 = doc.char_span( + span1.start_char + 1, span1.end_char, label="GPE", alignment_mode="expand" + ) + assert span1.start_char == span2.start_char + assert span1.end_char == span2.end_char + assert span2.label_ == "GPE" + def test_span_to_array(doc): span = doc[1:-2] diff --git a/spacy/tests/lang/cs/__init__.py b/spacy/tests/lang/cs/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/cs/test_text.py b/spacy/tests/lang/cs/test_text.py new file mode 100644 index 000000000..b834111b9 --- /dev/null +++ b/spacy/tests/lang/cs/test_text.py @@ -0,0 +1,23 @@ +import pytest + + +@pytest.mark.parametrize( + "text,match", + [ + ("10", True), + ("1", True), + ("10.000", True), + ("1000", True), + ("999,0", True), + ("devatenáct", True), + ("osmdesát", True), + ("kvadrilion", True), + ("Pes", False), + (",", False), + ("1/2", True), + ], +) +def test_lex_attrs_like_number(cs_tokenizer, text, match): + tokens = cs_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match diff --git a/spacy/tests/lang/en/test_text.py b/spacy/tests/lang/en/test_text.py index 4d4d0a643..733e814f7 100644 --- a/spacy/tests/lang/en/test_text.py +++ b/spacy/tests/lang/en/test_text.py @@ -56,6 +56,11 @@ def test_lex_attrs_like_number(en_tokenizer, text, match): assert tokens[0].like_num == match +@pytest.mark.parametrize("word", ["third", "Millionth", "100th", "Hundredth"]) +def test_en_lex_attrs_like_number_for_ordinal(word): + assert like_num(word) + + @pytest.mark.parametrize("word", ["eleven"]) def test_en_lex_attrs_capitals(word): assert like_num(word) diff --git a/spacy/tests/lang/he/test_tokenizer.py b/spacy/tests/lang/he/test_tokenizer.py index 3131014a3..3716f7e3b 100644 --- a/spacy/tests/lang/he/test_tokenizer.py +++ b/spacy/tests/lang/he/test_tokenizer.py @@ -1,4 +1,5 @@ import pytest +from spacy.lang.he.lex_attrs import like_num @pytest.mark.parametrize( @@ -39,3 +40,30 @@ def test_he_tokenizer_handles_abbreviation(he_tokenizer, text, expected_tokens): def test_he_tokenizer_handles_punct(he_tokenizer, text, expected_tokens): tokens = he_tokenizer(text) assert expected_tokens == [token.text for token in tokens] + + +@pytest.mark.parametrize( + "text,match", 
+ [ + ("10", True), + ("1", True), + ("10,000", True), + ("10,00", True), + ("999.0", True), + ("אחד", True), + ("שתיים", True), + ("מליון", True), + ("כלב", False), + (",", False), + ("1/2", True), + ], +) +def test_lex_attrs_like_number(he_tokenizer, text, match): + tokens = he_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match + + +@pytest.mark.parametrize("word", ["שלישי", "מליון", "עשירי", "מאה", "עשר", "אחד עשר"]) +def test_he_lex_attrs_like_number_for_ordinal(word): + assert like_num(word) diff --git a/spacy/tests/lang/ne/test_text.py b/spacy/tests/lang/ne/test_text.py index 794f8fbdc..7dd971132 100644 --- a/spacy/tests/lang/ne/test_text.py +++ b/spacy/tests/lang/ne/test_text.py @@ -1,6 +1,3 @@ -# coding: utf-8 -from __future__ import unicode_literals - import pytest diff --git a/spacy/tests/lang/sa/__init__.py b/spacy/tests/lang/sa/__init__.py new file mode 100644 index 000000000..e69de29bb diff --git a/spacy/tests/lang/sa/test_text.py b/spacy/tests/lang/sa/test_text.py new file mode 100644 index 000000000..41257a4d8 --- /dev/null +++ b/spacy/tests/lang/sa/test_text.py @@ -0,0 +1,42 @@ +import pytest + + +def test_sa_tokenizer_handles_long_text(sa_tokenizer): + text = """नानाविधानि दिव्यानि नानावर्णाकृतीनि च।।""" + tokens = sa_tokenizer(text) + assert len(tokens) == 6 + + +@pytest.mark.parametrize( + "text,length", + [ + ("श्री भगवानुवाच पश्य मे पार्थ रूपाणि शतशोऽथ सहस्रशः।", 9,), + ("गुणान् सर्वान् स्वभावो मूर्ध्नि वर्तते ।", 6), + ], +) +def test_sa_tokenizer_handles_cnts(sa_tokenizer, text, length): + tokens = sa_tokenizer(text) + assert len(tokens) == length + + +@pytest.mark.parametrize( + "text,match", + [ + ("10", True), + ("1", True), + ("10.000", True), + ("1000", True), + ("999,0", True), + ("एकः ", True), + ("दश", True), + ("पञ्चदश", True), + ("चत्वारिंशत् ", True), + ("कूपे", False), + (",", False), + ("1/2", True), + ], +) +def test_lex_attrs_like_number(sa_tokenizer, text, match): + tokens = sa_tokenizer(text) + assert len(tokens) == 1 + assert tokens[0].like_num == match diff --git a/spacy/tests/lang/test_lemmatizers.py b/spacy/tests/lang/test_lemmatizers.py index 8c235c86e..14c59659a 100644 --- a/spacy/tests/lang/test_lemmatizers.py +++ b/spacy/tests/lang/test_lemmatizers.py @@ -14,7 +14,7 @@ LANGUAGES = ["el", "en", "fr", "nl"] @pytest.mark.parametrize("lang", LANGUAGES) def test_lemmatizer_initialize(lang, capfd): - @registry.assets("lemmatizer_init_lookups") + @registry.misc("lemmatizer_init_lookups") def lemmatizer_init_lookups(): lookups = Lookups() lookups.add_table("lemma_lookup", {"cope": "cope"}) @@ -25,9 +25,7 @@ def test_lemmatizer_initialize(lang, capfd): """Test that languages can be initialized.""" nlp = get_lang_class(lang)() - nlp.add_pipe( - "lemmatizer", config={"lookups": {"@assets": "lemmatizer_init_lookups"}} - ) + nlp.add_pipe("lemmatizer", config={"lookups": {"@misc": "lemmatizer_init_lookups"}}) # Check for stray print statements (see #3342) doc = nlp("test") # noqa: F841 captured = capfd.readouterr() diff --git a/spacy/tests/matcher/test_dependency_matcher.py b/spacy/tests/matcher/test_dependency_matcher.py new file mode 100644 index 000000000..72005cc82 --- /dev/null +++ b/spacy/tests/matcher/test_dependency_matcher.py @@ -0,0 +1,334 @@ +import pytest +import pickle +import re +import copy +from mock import Mock +from spacy.matcher import DependencyMatcher +from ..util import get_doc + + +@pytest.fixture +def doc(en_vocab): + text = "The quick brown fox jumped over the lazy fox" + heads = [3, 2, 1, 1, 0, -1, 2, 
1, -3] + deps = ["det", "amod", "amod", "nsubj", "ROOT", "prep", "pobj", "det", "amod"] + doc = get_doc(en_vocab, text.split(), heads=heads, deps=deps) + return doc + + +@pytest.fixture +def patterns(en_vocab): + def is_brown_yellow(text): + return bool(re.compile(r"brown|yellow").match(text)) + + IS_BROWN_YELLOW = en_vocab.add_flag(is_brown_yellow) + + pattern1 = [ + {"RIGHT_ID": "fox", "RIGHT_ATTRS": {"ORTH": "fox"}}, + { + "LEFT_ID": "fox", + "REL_OP": ">", + "RIGHT_ID": "q", + "RIGHT_ATTRS": {"ORTH": "quick", "DEP": "amod"}, + }, + { + "LEFT_ID": "fox", + "REL_OP": ">", + "RIGHT_ID": "r", + "RIGHT_ATTRS": {IS_BROWN_YELLOW: True}, + }, + ] + + pattern2 = [ + {"RIGHT_ID": "jumped", "RIGHT_ATTRS": {"ORTH": "jumped"}}, + { + "LEFT_ID": "jumped", + "REL_OP": ">", + "RIGHT_ID": "fox1", + "RIGHT_ATTRS": {"ORTH": "fox"}, + }, + { + "LEFT_ID": "jumped", + "REL_OP": ".", + "RIGHT_ID": "over", + "RIGHT_ATTRS": {"ORTH": "over"}, + }, + ] + + pattern3 = [ + {"RIGHT_ID": "jumped", "RIGHT_ATTRS": {"ORTH": "jumped"}}, + { + "LEFT_ID": "jumped", + "REL_OP": ">", + "RIGHT_ID": "fox", + "RIGHT_ATTRS": {"ORTH": "fox"}, + }, + { + "LEFT_ID": "fox", + "REL_OP": ">>", + "RIGHT_ID": "r", + "RIGHT_ATTRS": {"ORTH": "brown"}, + }, + ] + + pattern4 = [ + {"RIGHT_ID": "jumped", "RIGHT_ATTRS": {"ORTH": "jumped"}}, + { + "LEFT_ID": "jumped", + "REL_OP": ">", + "RIGHT_ID": "fox", + "RIGHT_ATTRS": {"ORTH": "fox"}, + } + ] + + pattern5 = [ + {"RIGHT_ID": "jumped", "RIGHT_ATTRS": {"ORTH": "jumped"}}, + { + "LEFT_ID": "jumped", + "REL_OP": ">>", + "RIGHT_ID": "fox", + "RIGHT_ATTRS": {"ORTH": "fox"}, + }, + ] + + return [pattern1, pattern2, pattern3, pattern4, pattern5] + + +@pytest.fixture +def dependency_matcher(en_vocab, patterns, doc): + matcher = DependencyMatcher(en_vocab) + mock = Mock() + for i in range(1, len(patterns) + 1): + if i == 1: + matcher.add("pattern1", [patterns[0]], on_match=mock) + else: + matcher.add("pattern" + str(i), [patterns[i - 1]]) + + return matcher + + +def test_dependency_matcher(dependency_matcher, doc, patterns): + assert len(dependency_matcher) == 5 + assert "pattern3" in dependency_matcher + assert dependency_matcher.get("pattern3") == (None, [patterns[2]]) + matches = dependency_matcher(doc) + assert len(matches) == 6 + assert matches[0][1] == [3, 1, 2] + assert matches[1][1] == [4, 3, 5] + assert matches[2][1] == [4, 3, 2] + assert matches[3][1] == [4, 3] + assert matches[4][1] == [4, 3] + assert matches[5][1] == [4, 8] + + span = doc[0:6] + matches = dependency_matcher(span) + assert len(matches) == 5 + assert matches[0][1] == [3, 1, 2] + assert matches[1][1] == [4, 3, 5] + assert matches[2][1] == [4, 3, 2] + assert matches[3][1] == [4, 3] + assert matches[4][1] == [4, 3] + + +def test_dependency_matcher_pickle(en_vocab, patterns, doc): + matcher = DependencyMatcher(en_vocab) + for i in range(1, len(patterns) + 1): + matcher.add("pattern" + str(i), [patterns[i - 1]]) + + matches = matcher(doc) + assert matches[0][1] == [3, 1, 2] + assert matches[1][1] == [4, 3, 5] + assert matches[2][1] == [4, 3, 2] + assert matches[3][1] == [4, 3] + assert matches[4][1] == [4, 3] + assert matches[5][1] == [4, 8] + + b = pickle.dumps(matcher) + matcher_r = pickle.loads(b) + + assert len(matcher) == len(matcher_r) + matches = matcher_r(doc) + assert matches[0][1] == [3, 1, 2] + assert matches[1][1] == [4, 3, 5] + assert matches[2][1] == [4, 3, 2] + assert matches[3][1] == [4, 3] + assert matches[4][1] == [4, 3] + assert matches[5][1] == [4, 8] + + +def 
test_dependency_matcher_pattern_validation(en_vocab): + pattern = [ + {"RIGHT_ID": "fox", "RIGHT_ATTRS": {"ORTH": "fox"}}, + { + "LEFT_ID": "fox", + "REL_OP": ">", + "RIGHT_ID": "q", + "RIGHT_ATTRS": {"ORTH": "quick", "DEP": "amod"}, + }, + { + "LEFT_ID": "fox", + "REL_OP": ">", + "RIGHT_ID": "r", + "RIGHT_ATTRS": {"ORTH": "brown"}, + }, + ] + + matcher = DependencyMatcher(en_vocab) + # original pattern is valid + matcher.add("FOUNDED", [pattern]) + # individual pattern not wrapped in a list + with pytest.raises(ValueError): + matcher.add("FOUNDED", pattern) + # no anchor node + with pytest.raises(ValueError): + matcher.add("FOUNDED", [pattern[1:]]) + # required keys missing + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + del pattern2[0]["RIGHT_ID"] + matcher.add("FOUNDED", [pattern2]) + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + del pattern2[1]["RIGHT_ID"] + matcher.add("FOUNDED", [pattern2]) + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + del pattern2[1]["RIGHT_ATTRS"] + matcher.add("FOUNDED", [pattern2]) + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + del pattern2[1]["LEFT_ID"] + matcher.add("FOUNDED", [pattern2]) + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + del pattern2[1]["REL_OP"] + matcher.add("FOUNDED", [pattern2]) + # invalid operator + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + pattern2[1]["REL_OP"] = "!!!" + matcher.add("FOUNDED", [pattern2]) + # duplicate node name + with pytest.raises(ValueError): + pattern2 = copy.deepcopy(pattern) + pattern2[1]["RIGHT_ID"] = "fox" + matcher.add("FOUNDED", [pattern2]) + + +def test_dependency_matcher_callback(en_vocab, doc): + pattern = [ + {"RIGHT_ID": "quick", "RIGHT_ATTRS": {"ORTH": "quick"}}, + ] + + matcher = DependencyMatcher(en_vocab) + mock = Mock() + matcher.add("pattern", [pattern], on_match=mock) + matches = matcher(doc) + mock.assert_called_once_with(matcher, doc, 0, matches) + + # check that matches with and without callback are the same (#4590) + matcher2 = DependencyMatcher(en_vocab) + matcher2.add("pattern", [pattern]) + matches2 = matcher2(doc) + assert matches == matches2 + + +@pytest.mark.parametrize( + "op,num_matches", [(".", 8), (".*", 20), (";", 8), (";*", 20),] +) +def test_dependency_matcher_precedence_ops(en_vocab, op, num_matches): + # two sentences to test that all matches are within the same sentence + doc = get_doc( + en_vocab, + words=["a", "b", "c", "d", "e"] * 2, + heads=[0, -1, -2, -3, -4] * 2, + deps=["dep"] * 10, + ) + match_count = 0 + for text in ["a", "b", "c", "d", "e"]: + pattern = [ + {"RIGHT_ID": "1", "RIGHT_ATTRS": {"ORTH": text}}, + {"LEFT_ID": "1", "REL_OP": op, "RIGHT_ID": "2", "RIGHT_ATTRS": {},}, + ] + matcher = DependencyMatcher(en_vocab) + matcher.add("A", [pattern]) + matches = matcher(doc) + match_count += len(matches) + for match in matches: + match_id, token_ids = match + # token_ids[0] op token_ids[1] + if op == ".": + assert token_ids[0] == token_ids[1] - 1 + elif op == ";": + assert token_ids[0] == token_ids[1] + 1 + elif op == ".*": + assert token_ids[0] < token_ids[1] + elif op == ";*": + assert token_ids[0] > token_ids[1] + # all tokens are within the same sentence + assert doc[token_ids[0]].sent == doc[token_ids[1]].sent + assert match_count == num_matches + + +@pytest.mark.parametrize( + "left,right,op,num_matches", + [ + ("fox", "jumped", "<", 1), + ("the", "lazy", "<", 0), + ("jumped", "jumped", "<", 0), + ("fox", "jumped", ">", 
0), + ("fox", "lazy", ">", 1), + ("lazy", "lazy", ">", 0), + ("fox", "jumped", "<<", 2), + ("jumped", "fox", "<<", 0), + ("the", "fox", "<<", 2), + ("fox", "jumped", ">>", 0), + ("over", "the", ">>", 1), + ("fox", "the", ">>", 2), + ("fox", "jumped", ".", 1), + ("lazy", "fox", ".", 1), + ("the", "fox", ".", 0), + ("the", "the", ".", 0), + ("fox", "jumped", ";", 0), + ("lazy", "fox", ";", 0), + ("the", "fox", ";", 0), + ("the", "the", ";", 0), + ("quick", "fox", ".*", 2), + ("the", "fox", ".*", 3), + ("the", "the", ".*", 1), + ("fox", "jumped", ";*", 1), + ("quick", "fox", ";*", 0), + ("the", "fox", ";*", 1), + ("the", "the", ";*", 1), + ("quick", "brown", "$+", 1), + ("brown", "quick", "$+", 0), + ("brown", "brown", "$+", 0), + ("quick", "brown", "$-", 0), + ("brown", "quick", "$-", 1), + ("brown", "brown", "$-", 0), + ("the", "brown", "$++", 1), + ("brown", "the", "$++", 0), + ("brown", "brown", "$++", 0), + ("the", "brown", "$--", 0), + ("brown", "the", "$--", 1), + ("brown", "brown", "$--", 0), + ], +) +def test_dependency_matcher_ops(en_vocab, doc, left, right, op, num_matches): + right_id = right + if left == right: + right_id = right + "2" + pattern = [ + {"RIGHT_ID": left, "RIGHT_ATTRS": {"LOWER": left}}, + { + "LEFT_ID": left, + "REL_OP": op, + "RIGHT_ID": right_id, + "RIGHT_ATTRS": {"LOWER": right}, + }, + ] + + matcher = DependencyMatcher(en_vocab) + matcher.add("pattern", [pattern]) + matches = matcher(doc) + assert len(matches) == num_matches diff --git a/spacy/tests/matcher/test_matcher_api.py b/spacy/tests/matcher/test_matcher_api.py index 8310c4466..e0f335a19 100644 --- a/spacy/tests/matcher/test_matcher_api.py +++ b/spacy/tests/matcher/test_matcher_api.py @@ -1,7 +1,6 @@ import pytest -import re from mock import Mock -from spacy.matcher import Matcher, DependencyMatcher +from spacy.matcher import Matcher from spacy.tokens import Doc, Token, Span from ..doc.test_underscore import clean_underscore # noqa: F401 @@ -292,84 +291,6 @@ def test_matcher_extension_set_membership(en_vocab): assert len(matches) == 0 -@pytest.fixture -def text(): - return "The quick brown fox jumped over the lazy fox" - - -@pytest.fixture -def heads(): - return [3, 2, 1, 1, 0, -1, 2, 1, -3] - - -@pytest.fixture -def deps(): - return ["det", "amod", "amod", "nsubj", "prep", "pobj", "det", "amod"] - - -@pytest.fixture -def dependency_matcher(en_vocab): - def is_brown_yellow(text): - return bool(re.compile(r"brown|yellow|over").match(text)) - - IS_BROWN_YELLOW = en_vocab.add_flag(is_brown_yellow) - - pattern1 = [ - {"SPEC": {"NODE_NAME": "fox"}, "PATTERN": {"ORTH": "fox"}}, - { - "SPEC": {"NODE_NAME": "q", "NBOR_RELOP": ">", "NBOR_NAME": "fox"}, - "PATTERN": {"ORTH": "quick", "DEP": "amod"}, - }, - { - "SPEC": {"NODE_NAME": "r", "NBOR_RELOP": ">", "NBOR_NAME": "fox"}, - "PATTERN": {IS_BROWN_YELLOW: True}, - }, - ] - - pattern2 = [ - {"SPEC": {"NODE_NAME": "jumped"}, "PATTERN": {"ORTH": "jumped"}}, - { - "SPEC": {"NODE_NAME": "fox", "NBOR_RELOP": ">", "NBOR_NAME": "jumped"}, - "PATTERN": {"ORTH": "fox"}, - }, - { - "SPEC": {"NODE_NAME": "quick", "NBOR_RELOP": ".", "NBOR_NAME": "jumped"}, - "PATTERN": {"ORTH": "fox"}, - }, - ] - - pattern3 = [ - {"SPEC": {"NODE_NAME": "jumped"}, "PATTERN": {"ORTH": "jumped"}}, - { - "SPEC": {"NODE_NAME": "fox", "NBOR_RELOP": ">", "NBOR_NAME": "jumped"}, - "PATTERN": {"ORTH": "fox"}, - }, - { - "SPEC": {"NODE_NAME": "r", "NBOR_RELOP": ">>", "NBOR_NAME": "fox"}, - "PATTERN": {"ORTH": "brown"}, - }, - ] - - matcher = DependencyMatcher(en_vocab) - matcher.add("pattern1", 
[pattern1]) - matcher.add("pattern2", [pattern2]) - matcher.add("pattern3", [pattern3]) - - return matcher - - -def test_dependency_matcher_compile(dependency_matcher): - assert len(dependency_matcher) == 3 - - -# def test_dependency_matcher(dependency_matcher, text, heads, deps): -# doc = get_doc(dependency_matcher.vocab, text.split(), heads=heads, deps=deps) -# matches = dependency_matcher(doc) -# assert matches[0][1] == [[3, 1, 2]] -# assert matches[1][1] == [[4, 3, 3]] -# assert matches[2][1] == [[4, 3, 2]] - - def test_matcher_basic_check(en_vocab): matcher = Matcher(en_vocab) # Potential mistake: pass in pattern instead of list of patterns diff --git a/spacy/tests/matcher/test_pattern_validation.py b/spacy/tests/matcher/test_pattern_validation.py index 5dea3dde2..4d21aea81 100644 --- a/spacy/tests/matcher/test_pattern_validation.py +++ b/spacy/tests/matcher/test_pattern_validation.py @@ -59,3 +59,12 @@ def test_minimal_pattern_validation(en_vocab, pattern, n_errors, n_min_errors): matcher.add("TEST", [pattern]) elif n_errors == 0: matcher.add("TEST", [pattern]) + + +def test_pattern_errors(en_vocab): + matcher = Matcher(en_vocab) + # normalize "regex" to upper like "text" + matcher.add("TEST1", [[{"text": {"regex": "regex"}}]]) + # error if subpattern attribute isn't recognized and processed + with pytest.raises(MatchPatternError): + matcher.add("TEST2", [[{"TEXT": {"XX": "xx"}}]]) diff --git a/spacy/tests/pipeline/test_attributeruler.py b/spacy/tests/pipeline/test_attributeruler.py index 96361a693..c12a2b650 100644 --- a/spacy/tests/pipeline/test_attributeruler.py +++ b/spacy/tests/pipeline/test_attributeruler.py @@ -31,7 +31,7 @@ def pattern_dicts(): ] -@registry.assets("attribute_ruler_patterns") +@registry.misc("attribute_ruler_patterns") def attribute_ruler_patterns(): return [ { @@ -86,7 +86,7 @@ def test_attributeruler_init_patterns(nlp, pattern_dicts): # initialize with patterns from asset nlp.add_pipe( "attribute_ruler", - config={"pattern_dicts": {"@assets": "attribute_ruler_patterns"}}, + config={"pattern_dicts": {"@misc": "attribute_ruler_patterns"}}, ) doc = nlp("This is a test.") assert doc[2].lemma_ == "the" diff --git a/spacy/tests/pipeline/test_entity_linker.py b/spacy/tests/pipeline/test_entity_linker.py index 4385d2bf9..4eaa71272 100644 --- a/spacy/tests/pipeline/test_entity_linker.py +++ b/spacy/tests/pipeline/test_entity_linker.py @@ -137,7 +137,7 @@ def test_kb_undefined(nlp): def test_kb_empty(nlp): """Test that the EL can't train with an empty KB""" - config = {"kb_loader": {"@assets": "spacy.EmptyKB.v1", "entity_vector_length": 342}} + config = {"kb_loader": {"@misc": "spacy.EmptyKB.v1", "entity_vector_length": 342}} entity_linker = nlp.add_pipe("entity_linker", config=config) assert len(entity_linker.kb) == 0 with pytest.raises(ValueError): @@ -183,7 +183,7 @@ def test_el_pipe_configuration(nlp): ruler = nlp.add_pipe("entity_ruler") ruler.add_patterns([pattern]) - @registry.assets.register("myAdamKB.v1") + @registry.misc.register("myAdamKB.v1") def mykb() -> Callable[["Vocab"], KnowledgeBase]: def create_kb(vocab): kb = KnowledgeBase(vocab, entity_vector_length=1) @@ -199,7 +199,7 @@ def test_el_pipe_configuration(nlp): # run an EL pipe without a trained context encoder, to check the candidate generation step only nlp.add_pipe( "entity_linker", - config={"kb_loader": {"@assets": "myAdamKB.v1"}, "incl_context": False}, + config={"kb_loader": {"@misc": "myAdamKB.v1"}, "incl_context": False}, ) # With the default get_candidates function, matching is 
case-sensitive text = "Douglas and douglas are not the same." @@ -211,7 +211,7 @@ def test_el_pipe_configuration(nlp): def get_lowercased_candidates(kb, span): return kb.get_alias_candidates(span.text.lower()) - @registry.assets.register("spacy.LowercaseCandidateGenerator.v1") + @registry.misc.register("spacy.LowercaseCandidateGenerator.v1") def create_candidates() -> Callable[[KnowledgeBase, "Span"], Iterable[Candidate]]: return get_lowercased_candidates @@ -220,9 +220,9 @@ def test_el_pipe_configuration(nlp): "entity_linker", "entity_linker", config={ - "kb_loader": {"@assets": "myAdamKB.v1"}, + "kb_loader": {"@misc": "myAdamKB.v1"}, "incl_context": False, - "get_candidates": {"@assets": "spacy.LowercaseCandidateGenerator.v1"}, + "get_candidates": {"@misc": "spacy.LowercaseCandidateGenerator.v1"}, }, ) doc = nlp(text) @@ -282,7 +282,7 @@ def test_append_invalid_alias(nlp): def test_preserving_links_asdoc(nlp): """Test that Span.as_doc preserves the existing entity links""" - @registry.assets.register("myLocationsKB.v1") + @registry.misc.register("myLocationsKB.v1") def dummy_kb() -> Callable[["Vocab"], KnowledgeBase]: def create_kb(vocab): mykb = KnowledgeBase(vocab, entity_vector_length=1) @@ -304,7 +304,7 @@ def test_preserving_links_asdoc(nlp): ] ruler = nlp.add_pipe("entity_ruler") ruler.add_patterns(patterns) - el_config = {"kb_loader": {"@assets": "myLocationsKB.v1"}, "incl_prior": False} + el_config = {"kb_loader": {"@misc": "myLocationsKB.v1"}, "incl_prior": False} el_pipe = nlp.add_pipe("entity_linker", config=el_config, last=True) el_pipe.begin_training(lambda: []) el_pipe.incl_context = False @@ -387,7 +387,7 @@ def test_overfitting_IO(): doc = nlp(text) train_examples.append(Example.from_dict(doc, annotation)) - @registry.assets.register("myOverfittingKB.v1") + @registry.misc.register("myOverfittingKB.v1") def dummy_kb() -> Callable[["Vocab"], KnowledgeBase]: def create_kb(vocab): # create artificial KB - assign same prior weight to the two russ cochran's @@ -408,7 +408,7 @@ def test_overfitting_IO(): # Create the Entity Linker component and add it to the pipeline nlp.add_pipe( "entity_linker", - config={"kb_loader": {"@assets": "myOverfittingKB.v1"}}, + config={"kb_loader": {"@misc": "myOverfittingKB.v1"}}, last=True, ) diff --git a/spacy/tests/pipeline/test_entity_ruler.py b/spacy/tests/pipeline/test_entity_ruler.py index e4e1631b1..d70d0326e 100644 --- a/spacy/tests/pipeline/test_entity_ruler.py +++ b/spacy/tests/pipeline/test_entity_ruler.py @@ -150,3 +150,15 @@ def test_entity_ruler_properties(nlp, patterns): ruler = EntityRuler(nlp, patterns=patterns, overwrite_ents=True) assert sorted(ruler.labels) == sorted(["HELLO", "BYE", "COMPLEX", "TECH_ORG"]) assert sorted(ruler.ent_ids) == ["a1", "a2"] + + +def test_entity_ruler_overlapping_spans(nlp): + ruler = EntityRuler(nlp) + patterns = [ + {"label": "FOOBAR", "pattern": "foo bar"}, + {"label": "BARBAZ", "pattern": "bar baz"}, + ] + ruler.add_patterns(patterns) + doc = ruler(nlp.make_doc("foo bar baz")) + assert len(doc.ents) == 1 + assert doc.ents[0].label_ == "FOOBAR" diff --git a/spacy/tests/pipeline/test_lemmatizer.py b/spacy/tests/pipeline/test_lemmatizer.py index 8a70fdeeb..05e15bc16 100644 --- a/spacy/tests/pipeline/test_lemmatizer.py +++ b/spacy/tests/pipeline/test_lemmatizer.py @@ -13,7 +13,7 @@ def nlp(): @pytest.fixture def lemmatizer(nlp): - @registry.assets("cope_lookups") + @registry.misc("cope_lookups") def cope_lookups(): lookups = Lookups() lookups.add_table("lemma_lookup", {"cope": "cope"}) @@ -23,13 
+23,13 @@ def lemmatizer(nlp): return lookups lemmatizer = nlp.add_pipe( - "lemmatizer", config={"mode": "rule", "lookups": {"@assets": "cope_lookups"}} + "lemmatizer", config={"mode": "rule", "lookups": {"@misc": "cope_lookups"}} ) return lemmatizer def test_lemmatizer_init(nlp): - @registry.assets("cope_lookups") + @registry.misc("cope_lookups") def cope_lookups(): lookups = Lookups() lookups.add_table("lemma_lookup", {"cope": "cope"}) @@ -39,7 +39,7 @@ def test_lemmatizer_init(nlp): return lookups lemmatizer = nlp.add_pipe( - "lemmatizer", config={"mode": "lookup", "lookups": {"@assets": "cope_lookups"}} + "lemmatizer", config={"mode": "lookup", "lookups": {"@misc": "cope_lookups"}} ) assert isinstance(lemmatizer.lookups, Lookups) assert lemmatizer.mode == "lookup" @@ -51,14 +51,14 @@ def test_lemmatizer_init(nlp): nlp.remove_pipe("lemmatizer") - @registry.assets("empty_lookups") + @registry.misc("empty_lookups") def empty_lookups(): return Lookups() with pytest.raises(ValueError): nlp.add_pipe( "lemmatizer", - config={"mode": "lookup", "lookups": {"@assets": "empty_lookups"}}, + config={"mode": "lookup", "lookups": {"@misc": "empty_lookups"}}, ) @@ -79,7 +79,7 @@ def test_lemmatizer_config(nlp, lemmatizer): def test_lemmatizer_serialize(nlp, lemmatizer): - @registry.assets("cope_lookups") + @registry.misc("cope_lookups") def cope_lookups(): lookups = Lookups() lookups.add_table("lemma_lookup", {"cope": "cope"}) @@ -90,7 +90,7 @@ def test_lemmatizer_serialize(nlp, lemmatizer): nlp2 = English() lemmatizer2 = nlp2.add_pipe( - "lemmatizer", config={"mode": "rule", "lookups": {"@assets": "cope_lookups"}} + "lemmatizer", config={"mode": "rule", "lookups": {"@misc": "cope_lookups"}} ) lemmatizer2.from_bytes(lemmatizer.to_bytes()) assert lemmatizer.to_bytes() == lemmatizer2.to_bytes() diff --git a/spacy/tests/pipeline/test_tagger.py b/spacy/tests/pipeline/test_tagger.py index a1aa7e1e1..540301eac 100644 --- a/spacy/tests/pipeline/test_tagger.py +++ b/spacy/tests/pipeline/test_tagger.py @@ -71,6 +71,6 @@ def test_overfitting_IO(): def test_tagger_requires_labels(): nlp = English() - tagger = nlp.add_pipe("tagger") + nlp.add_pipe("tagger") with pytest.raises(ValueError): - optimizer = nlp.begin_training() + nlp.begin_training() diff --git a/spacy/tests/regression/test_issue4501-5000.py b/spacy/tests/regression/test_issue4501-5000.py index 39533f70a..d83a2c718 100644 --- a/spacy/tests/regression/test_issue4501-5000.py +++ b/spacy/tests/regression/test_issue4501-5000.py @@ -38,32 +38,6 @@ def test_gold_misaligned(en_tokenizer, text, words): Example.from_dict(doc, {"words": words}) -def test_issue4590(en_vocab): - """Test that matches param in on_match method are the same as matches run with no on_match method""" - pattern = [ - {"SPEC": {"NODE_NAME": "jumped"}, "PATTERN": {"ORTH": "jumped"}}, - { - "SPEC": {"NODE_NAME": "fox", "NBOR_RELOP": ">", "NBOR_NAME": "jumped"}, - "PATTERN": {"ORTH": "fox"}, - }, - { - "SPEC": {"NODE_NAME": "quick", "NBOR_RELOP": ".", "NBOR_NAME": "jumped"}, - "PATTERN": {"ORTH": "fox"}, - }, - ] - - on_match = Mock() - matcher = DependencyMatcher(en_vocab) - matcher.add("pattern", on_match, pattern) - text = "The quick brown fox jumped over the lazy fox" - heads = [3, 2, 1, 1, 0, -1, 2, 1, -3] - deps = ["det", "amod", "amod", "nsubj", "ROOT", "prep", "det", "amod", "pobj"] - doc = get_doc(en_vocab, text.split(), heads=heads, deps=deps) - matches = matcher(doc) - on_match_args = on_match.call_args - assert on_match_args[0][3] == matches - - def 
test_issue4651_with_phrase_matcher_attr(): """Test that the EntityRuler PhraseMatcher is deserialized correctly using the method from_disk when the EntityRuler argument phrase_matcher_attr is diff --git a/spacy/tests/regression/test_issue5230.py b/spacy/tests/regression/test_issue5230.py index 78ae04bbb..af643aadc 100644 --- a/spacy/tests/regression/test_issue5230.py +++ b/spacy/tests/regression/test_issue5230.py @@ -71,7 +71,7 @@ def tagger(): def entity_linker(): nlp = Language() - @registry.assets.register("TestIssue5230KB.v1") + @registry.misc.register("TestIssue5230KB.v1") def dummy_kb() -> Callable[["Vocab"], KnowledgeBase]: def create_kb(vocab): kb = KnowledgeBase(vocab, entity_vector_length=1) @@ -80,7 +80,7 @@ def entity_linker(): return create_kb - config = {"kb_loader": {"@assets": "TestIssue5230KB.v1"}} + config = {"kb_loader": {"@misc": "TestIssue5230KB.v1"}} entity_linker = nlp.add_pipe("entity_linker", config=config) # need to add model for two reasons: # 1. no model leads to error in serialization, diff --git a/spacy/tests/regression/test_issue5838.py b/spacy/tests/regression/test_issue5838.py new file mode 100644 index 000000000..4e4d98beb --- /dev/null +++ b/spacy/tests/regression/test_issue5838.py @@ -0,0 +1,23 @@ +from spacy.lang.en import English +from spacy.tokens import Span +from spacy import displacy + + +SAMPLE_TEXT = """First line +Second line, with ent +Third line +Fourth line +""" + + +def test_issue5838(): + # displaCy's EntityRenderer was not adding a line + # break after the last entity (issue #5838) + + nlp = English() + doc = nlp(SAMPLE_TEXT) + doc.ents = [Span(doc, 7, 8, label="test")] + + html = displacy.render(doc, style="ent") + found = html.count("</br>
") + assert found == 4 diff --git a/spacy/tests/regression/test_issue5918.py b/spacy/tests/regression/test_issue5918.py new file mode 100644 index 000000000..66280f012 --- /dev/null +++ b/spacy/tests/regression/test_issue5918.py @@ -0,0 +1,27 @@ +from spacy.lang.en import English +from spacy.pipeline import merge_entities + + +def test_issue5918(): + # Test edge case when merging entities. + nlp = English() + ruler = nlp.add_pipe("entity_ruler") + patterns = [ + {"label": "ORG", "pattern": "Digicon Inc"}, + {"label": "ORG", "pattern": "Rotan Mosle Inc's"}, + {"label": "ORG", "pattern": "Rotan Mosle Technology Partners Ltd"}, + ] + ruler.add_patterns(patterns) + + text = """ + Digicon Inc said it has completed the previously-announced disposition + of its computer systems division to an investment group led by + Rotan Mosle Inc's Rotan Mosle Technology Partners Ltd affiliate. + """ + doc = nlp(text) + assert len(doc.ents) == 3 + # make it so that the third span's head is within the entity (ent_iob=I) + # bug #5918 would wrongly transfer that I to the full entity, resulting in 2 instead of 3 final ents. + doc[29].head = doc[33] + doc = merge_entities(doc) + assert len(doc.ents) == 3 diff --git a/spacy/tests/serialize/test_serialize_config.py b/spacy/tests/serialize/test_serialize_config.py index fde92b0af..0ab212fda 100644 --- a/spacy/tests/serialize/test_serialize_config.py +++ b/spacy/tests/serialize/test_serialize_config.py @@ -28,7 +28,7 @@ path = ${paths.train} path = ${paths.dev} [training.batcher] -@batchers = "batch_by_words.v1" +@batchers = "spacy.batch_by_words.v1" size = 666 [nlp] diff --git a/spacy/tests/serialize/test_serialize_kb.py b/spacy/tests/serialize/test_serialize_kb.py index 3cf5485d7..63736418b 100644 --- a/spacy/tests/serialize/test_serialize_kb.py +++ b/spacy/tests/serialize/test_serialize_kb.py @@ -85,7 +85,7 @@ def test_serialize_subclassed_kb(): super().__init__(vocab, entity_vector_length) self.custom_field = custom_field - @registry.assets.register("spacy.CustomKB.v1") + @registry.misc.register("spacy.CustomKB.v1") def custom_kb( entity_vector_length: int, custom_field: int ) -> Callable[["Vocab"], KnowledgeBase]: @@ -101,7 +101,7 @@ def test_serialize_subclassed_kb(): nlp = English() config = { "kb_loader": { - "@assets": "spacy.CustomKB.v1", + "@misc": "spacy.CustomKB.v1", "entity_vector_length": 342, "custom_field": 666, } diff --git a/spacy/tests/test_tok2vec.py b/spacy/tests/test_tok2vec.py index 1068b662d..9f0f4b74a 100644 --- a/spacy/tests/test_tok2vec.py +++ b/spacy/tests/test_tok2vec.py @@ -135,6 +135,7 @@ TRAIN_DATA = [ ("Eat blue ham", {"tags": ["V", "J", "N"]}), ] + def test_tok2vec_listener(): orig_config = Config().from_str(cfg_string) nlp, config = util.load_model_from_config(orig_config, auto_fill=True, validate=True) diff --git a/spacy/tests/tokenizer/test_naughty_strings.py b/spacy/tests/tokenizer/test_naughty_strings.py index e93d5654f..b22dabb9d 100644 --- a/spacy/tests/tokenizer/test_naughty_strings.py +++ b/spacy/tests/tokenizer/test_naughty_strings.py @@ -29,6 +29,7 @@ NAUGHTY_STRINGS = [ r"₀₁₂", r"⁰⁴⁵₀₁₂", r"ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็ ด้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็็้้้้้้้้็็็็็้้้้้็็็็", + r" ̄ ̄", # Two-Byte Characters r"田中さんにあげて下さい", r"パーティーへ行かないか", diff --git a/spacy/tests/tokenizer/test_whitespace.py b/spacy/tests/tokenizer/test_whitespace.py 
index c7b9d7c6d..d68bb9e4e 100644 --- a/spacy/tests/tokenizer/test_whitespace.py +++ b/spacy/tests/tokenizer/test_whitespace.py @@ -15,7 +15,7 @@ def test_tokenizer_splits_double_space(tokenizer, text): @pytest.mark.parametrize("text", ["lorem ipsum "]) -def test_tokenizer_handles_double_trainling_ws(tokenizer, text): +def test_tokenizer_handles_double_trailing_ws(tokenizer, text): tokens = tokenizer(text) assert repr(tokens.text_with_ws) == repr(text) diff --git a/spacy/tokenizer.pyx b/spacy/tokenizer.pyx index 759de90d3..5e7222d40 100644 --- a/spacy/tokenizer.pyx +++ b/spacy/tokenizer.pyx @@ -31,7 +31,7 @@ cdef class Tokenizer: """Segment text, and create Doc objects with the discovered segment boundaries. - DOCS: https://spacy.io/api/tokenizer + DOCS: https://nightly.spacy.io/api/tokenizer """ def __init__(self, Vocab vocab, rules=None, prefix_search=None, suffix_search=None, infix_finditer=None, token_match=None, @@ -54,7 +54,7 @@ cdef class Tokenizer: EXAMPLE: >>> tokenizer = Tokenizer(nlp.vocab) - DOCS: https://spacy.io/api/tokenizer#init + DOCS: https://nightly.spacy.io/api/tokenizer#init """ self.mem = Pool() self._cache = PreshMap() @@ -147,7 +147,7 @@ cdef class Tokenizer: string (str): The string to tokenize. RETURNS (Doc): A container for linguistic annotations. - DOCS: https://spacy.io/api/tokenizer#call + DOCS: https://nightly.spacy.io/api/tokenizer#call """ doc = self._tokenize_affixes(string, True) self._apply_special_cases(doc) @@ -209,7 +209,7 @@ cdef class Tokenizer: Defaults to 1000. YIELDS (Doc): A sequence of Doc objects, in order. - DOCS: https://spacy.io/api/tokenizer#pipe + DOCS: https://nightly.spacy.io/api/tokenizer#pipe """ for text in texts: yield self(text) @@ -529,7 +529,7 @@ cdef class Tokenizer: and `.end()` methods, denoting the placement of internal segment separators, e.g. hyphens. - DOCS: https://spacy.io/api/tokenizer#find_infix + DOCS: https://nightly.spacy.io/api/tokenizer#find_infix """ if self.infix_finditer is None: return 0 @@ -542,7 +542,7 @@ cdef class Tokenizer: string (str): The string to segment. RETURNS (int): The length of the prefix if present, otherwise `None`. - DOCS: https://spacy.io/api/tokenizer#find_prefix + DOCS: https://nightly.spacy.io/api/tokenizer#find_prefix """ if self.prefix_search is None: return 0 @@ -556,7 +556,7 @@ cdef class Tokenizer: string (str): The string to segment. Returns (int): The length of the suffix if present, otherwise `None`. - DOCS: https://spacy.io/api/tokenizer#find_suffix + DOCS: https://nightly.spacy.io/api/tokenizer#find_suffix """ if self.suffix_search is None: return 0 @@ -596,7 +596,7 @@ cdef class Tokenizer: a token and its attributes. The `ORTH` fields of the attributes must exactly match the string when they are concatenated. - DOCS: https://spacy.io/api/tokenizer#add_special_case + DOCS: https://nightly.spacy.io/api/tokenizer#add_special_case """ self._validate_special_case(string, substrings) substrings = list(substrings) @@ -635,7 +635,7 @@ cdef class Tokenizer: string (str): The string to tokenize. RETURNS (list): A list of (pattern_string, token_string) tuples - DOCS: https://spacy.io/api/tokenizer#explain + DOCS: https://nightly.spacy.io/api/tokenizer#explain """ prefix_search = self.prefix_search suffix_search = self.suffix_search @@ -716,7 +716,7 @@ cdef class Tokenizer: it doesn't exist. exclude (list): String names of serialization fields to exclude. 
- DOCS: https://spacy.io/api/tokenizer#to_disk + DOCS: https://nightly.spacy.io/api/tokenizer#to_disk """ path = util.ensure_path(path) with path.open("wb") as file_: @@ -730,7 +730,7 @@ cdef class Tokenizer: exclude (list): String names of serialization fields to exclude. RETURNS (Tokenizer): The modified `Tokenizer` object. - DOCS: https://spacy.io/api/tokenizer#from_disk + DOCS: https://nightly.spacy.io/api/tokenizer#from_disk """ path = util.ensure_path(path) with path.open("rb") as file_: @@ -744,7 +744,7 @@ cdef class Tokenizer: exclude (list): String names of serialization fields to exclude. RETURNS (bytes): The serialized form of the `Tokenizer` object. - DOCS: https://spacy.io/api/tokenizer#to_bytes + DOCS: https://nightly.spacy.io/api/tokenizer#to_bytes """ serializers = { "vocab": lambda: self.vocab.to_bytes(), @@ -764,7 +764,7 @@ cdef class Tokenizer: exclude (list): String names of serialization fields to exclude. RETURNS (Tokenizer): The `Tokenizer` object. - DOCS: https://spacy.io/api/tokenizer#from_bytes + DOCS: https://nightly.spacy.io/api/tokenizer#from_bytes """ data = {} deserializers = { diff --git a/spacy/tokens/_retokenize.pyx b/spacy/tokens/_retokenize.pyx index 8d57b791f..9323bb579 100644 --- a/spacy/tokens/_retokenize.pyx +++ b/spacy/tokens/_retokenize.pyx @@ -24,8 +24,8 @@ from ..strings import get_string_id cdef class Retokenizer: """Helper class for doc.retokenize() context manager. - DOCS: https://spacy.io/api/doc#retokenize - USAGE: https://spacy.io/usage/linguistic-features#retokenization + DOCS: https://nightly.spacy.io/api/doc#retokenize + USAGE: https://nightly.spacy.io/usage/linguistic-features#retokenization """ cdef Doc doc cdef list merges @@ -47,7 +47,7 @@ cdef class Retokenizer: span (Span): The span to merge. attrs (dict): Attributes to set on the merged token. - DOCS: https://spacy.io/api/doc#retokenizer.merge + DOCS: https://nightly.spacy.io/api/doc#retokenizer.merge """ if (span.start, span.end) in self._spans_to_merge: return @@ -73,7 +73,7 @@ cdef class Retokenizer: attrs (dict): Attributes to set on all split tokens. Attribute names mapped to list of per-token attribute values. - DOCS: https://spacy.io/api/doc#retokenizer.split + DOCS: https://nightly.spacy.io/api/doc#retokenizer.split """ if ''.join(orths) != token.text: raise ValueError(Errors.E117.format(new=''.join(orths), old=token.text)) @@ -169,6 +169,8 @@ def _merge(Doc doc, merges): spans.append(span) # House the new merged token where it starts token = &doc.c[start] + start_ent_iob = doc.c[start].ent_iob + start_ent_type = doc.c[start].ent_type # Initially set attributes to attributes of span root token.tag = doc.c[span.root.i].tag token.pos = doc.c[span.root.i].pos @@ -181,8 +183,8 @@ def _merge(Doc doc, merges): merged_iob = 3 # If start token is I-ENT and previous token is of the same # type, then I-ENT (could check I-ENT from start to span root) - if doc.c[start].ent_iob == 1 and start > 0 \ - and doc.c[start].ent_type == token.ent_type \ + if start_ent_iob == 1 and start > 0 \ + and start_ent_type == token.ent_type \ and doc.c[start - 1].ent_type == token.ent_type: merged_iob = 1 token.ent_iob = merged_iob diff --git a/spacy/tokens/_serialize.py b/spacy/tokens/_serialize.py index a257c7919..cd8c81939 100644 --- a/spacy/tokens/_serialize.py +++ b/spacy/tokens/_serialize.py @@ -61,7 +61,7 @@ class DocBin: store_user_data (bool): Whether to include the `Doc.user_data`. docs (Iterable[Doc]): Docs to add. 
- DOCS: https://spacy.io/api/docbin#init + DOCS: https://nightly.spacy.io/api/docbin#init """ attrs = sorted([intify_attr(attr) for attr in attrs]) self.version = "0.1" @@ -86,7 +86,7 @@ class DocBin: doc (Doc): The Doc object to add. - DOCS: https://spacy.io/api/docbin#add + DOCS: https://nightly.spacy.io/api/docbin#add """ array = doc.to_array(self.attrs) if len(array.shape) == 1: @@ -115,7 +115,7 @@ class DocBin: vocab (Vocab): The shared vocab. YIELDS (Doc): The Doc objects. - DOCS: https://spacy.io/api/docbin#get_docs + DOCS: https://nightly.spacy.io/api/docbin#get_docs """ for string in self.strings: vocab[string] @@ -141,7 +141,7 @@ class DocBin: other (DocBin): The DocBin to merge into the current bin. - DOCS: https://spacy.io/api/docbin#merge + DOCS: https://nightly.spacy.io/api/docbin#merge """ if self.attrs != other.attrs: raise ValueError(Errors.E166.format(current=self.attrs, other=other.attrs)) @@ -158,7 +158,7 @@ class DocBin: RETURNS (bytes): The serialized DocBin. - DOCS: https://spacy.io/api/docbin#to_bytes + DOCS: https://nightly.spacy.io/api/docbin#to_bytes """ for tokens in self.tokens: assert len(tokens.shape) == 2, tokens.shape # this should never happen @@ -185,7 +185,7 @@ class DocBin: bytes_data (bytes): The data to load from. RETURNS (DocBin): The loaded DocBin. - DOCS: https://spacy.io/api/docbin#from_bytes + DOCS: https://nightly.spacy.io/api/docbin#from_bytes """ msg = srsly.msgpack_loads(zlib.decompress(bytes_data)) self.attrs = msg["attrs"] @@ -211,7 +211,7 @@ class DocBin: path (str / Path): The file path. - DOCS: https://spacy.io/api/docbin#to_disk + DOCS: https://nightly.spacy.io/api/docbin#to_disk """ path = ensure_path(path) with path.open("wb") as file_: @@ -223,7 +223,7 @@ class DocBin: path (str / Path): The file path. RETURNS (DocBin): The loaded DocBin. - DOCS: https://spacy.io/api/docbin#to_disk + DOCS: https://nightly.spacy.io/api/docbin#to_disk """ path = ensure_path(path) with path.open("rb") as file_: diff --git a/spacy/tokens/doc.pyx b/spacy/tokens/doc.pyx index cd080bf35..3f8c735fb 100644 --- a/spacy/tokens/doc.pyx +++ b/spacy/tokens/doc.pyx @@ -104,7 +104,7 @@ cdef class Doc: >>> from spacy.tokens import Doc >>> doc = Doc(nlp.vocab, words=["hello", "world", "!"], spaces=[True, False, False]) - DOCS: https://spacy.io/api/doc + DOCS: https://nightly.spacy.io/api/doc """ @classmethod @@ -118,8 +118,8 @@ cdef class Doc: method (callable): Optional method for method extension. force (bool): Force overwriting existing attribute. - DOCS: https://spacy.io/api/doc#set_extension - USAGE: https://spacy.io/usage/processing-pipelines#custom-components-attributes + DOCS: https://nightly.spacy.io/api/doc#set_extension + USAGE: https://nightly.spacy.io/usage/processing-pipelines#custom-components-attributes """ if cls.has_extension(name) and not kwargs.get("force", False): raise ValueError(Errors.E090.format(name=name, obj="Doc")) @@ -132,7 +132,7 @@ cdef class Doc: name (str): Name of the extension. RETURNS (tuple): A `(default, method, getter, setter)` tuple. - DOCS: https://spacy.io/api/doc#get_extension + DOCS: https://nightly.spacy.io/api/doc#get_extension """ return Underscore.doc_extensions.get(name) @@ -143,7 +143,7 @@ cdef class Doc: name (str): Name of the extension. RETURNS (bool): Whether the extension has been registered. 
- DOCS: https://spacy.io/api/doc#has_extension + DOCS: https://nightly.spacy.io/api/doc#has_extension """ return name in Underscore.doc_extensions @@ -155,7 +155,7 @@ cdef class Doc: RETURNS (tuple): A `(default, method, getter, setter)` tuple of the removed extension. - DOCS: https://spacy.io/api/doc#remove_extension + DOCS: https://nightly.spacy.io/api/doc#remove_extension """ if not cls.has_extension(name): raise ValueError(Errors.E046.format(name=name)) @@ -173,7 +173,7 @@ cdef class Doc: it is not. If `None`, defaults to `[True]*len(words)` user_data (dict or None): Optional extra data to attach to the Doc. - DOCS: https://spacy.io/api/doc#init + DOCS: https://nightly.spacy.io/api/doc#init """ self.vocab = vocab size = max(20, (len(words) if words is not None else 0)) @@ -288,7 +288,7 @@ cdef class Doc: You can use negative indices and open-ended ranges, which have their normal Python semantics. - DOCS: https://spacy.io/api/doc#getitem + DOCS: https://nightly.spacy.io/api/doc#getitem """ if isinstance(i, slice): start, stop = normalize_slice(len(self), i.start, i.stop, i.step) @@ -305,7 +305,7 @@ cdef class Doc: than-Python speeds are required, you can instead access the annotations as a numpy array, or access the underlying C data directly from Cython. - DOCS: https://spacy.io/api/doc#iter + DOCS: https://nightly.spacy.io/api/doc#iter """ cdef int i for i in range(self.length): @@ -316,7 +316,7 @@ cdef class Doc: RETURNS (int): The number of tokens in the document. - DOCS: https://spacy.io/api/doc#len + DOCS: https://nightly.spacy.io/api/doc#len """ return self.length @@ -336,31 +336,56 @@ cdef class Doc: def doc(self): return self - def char_span(self, int start_idx, int end_idx, label=0, kb_id=0, vector=None): - """Create a `Span` object from the slice `doc.text[start : end]`. + def char_span(self, int start_idx, int end_idx, label=0, kb_id=0, vector=None, alignment_mode="strict"): + """Create a `Span` object from the slice + `doc.text[start_idx : end_idx]`. Returns None if no valid `Span` can be + created. doc (Doc): The parent document. - start (int): The index of the first character of the span. - end (int): The index of the first character after the span. + start_idx (int): The index of the first character of the span. + end_idx (int): The index of the first character after the span. label (uint64 or string): A label to attach to the Span, e.g. for named entities. - kb_id (uint64 or string): An ID from a KB to capture the meaning of a named entity. + kb_id (uint64 or string): An ID from a KB to capture the meaning of a + named entity. vector (ndarray[ndim=1, dtype='float32']): A meaning representation of the span. + alignment_mode (str): How character indices are aligned to token + boundaries. Options: "strict" (character indices must be aligned + with token boundaries), "contract" (span of all tokens completely + within the character span), "expand" (span of all tokens at least + partially covered by the character span). Defaults to "strict". RETURNS (Span): The newly constructed object. 
- DOCS: https://spacy.io/api/doc#char_span + DOCS: https://nightly.spacy.io/api/doc#char_span """ if not isinstance(label, int): label = self.vocab.strings.add(label) if not isinstance(kb_id, int): kb_id = self.vocab.strings.add(kb_id) - cdef int start = token_by_start(self.c, self.length, start_idx) - if start == -1: + if alignment_mode not in ("strict", "contract", "expand"): + alignment_mode = "strict" + cdef int start = token_by_char(self.c, self.length, start_idx) + if start < 0 or (alignment_mode == "strict" and start_idx != self[start].idx): return None - cdef int end = token_by_end(self.c, self.length, end_idx) - if end == -1: + # end_idx is exclusive, so find the token at one char before + cdef int end = token_by_char(self.c, self.length, end_idx - 1) + if end < 0 or (alignment_mode == "strict" and end_idx != self[end].idx + len(self[end])): return None + # Adjust start and end by alignment_mode + if alignment_mode == "contract": + if self[start].idx < start_idx: + start += 1 + if end_idx < self[end].idx + len(self[end]): + end -= 1 + # if no tokens are completely within the span, return None + if end < start: + return None + elif alignment_mode == "expand": + # Don't consider the trailing whitespace to be part of the previous + # token + if start_idx == self[start].idx + len(self[start]): + start += 1 # Currently we have the token index, we want the range-end index end += 1 cdef Span span = Span(self, start, end, label=label, kb_id=kb_id, vector=vector) @@ -374,7 +399,7 @@ cdef class Doc: `Span`, `Token` and `Lexeme` objects. RETURNS (float): A scalar similarity score. Higher is more similar. - DOCS: https://spacy.io/api/doc#similarity + DOCS: https://nightly.spacy.io/api/doc#similarity """ if "similarity" in self.user_hooks: return self.user_hooks["similarity"](self, other) @@ -407,7 +432,7 @@ cdef class Doc: RETURNS (bool): Whether a word vector is associated with the object. - DOCS: https://spacy.io/api/doc#has_vector + DOCS: https://nightly.spacy.io/api/doc#has_vector """ if "has_vector" in self.user_hooks: return self.user_hooks["has_vector"](self) @@ -425,7 +450,7 @@ cdef class Doc: RETURNS (numpy.ndarray[ndim=1, dtype='float32']): A 1D numpy array representing the document's semantics. - DOCS: https://spacy.io/api/doc#vector + DOCS: https://nightly.spacy.io/api/doc#vector """ def __get__(self): if "vector" in self.user_hooks: @@ -453,7 +478,7 @@ cdef class Doc: RETURNS (float): The L2 norm of the vector representation. - DOCS: https://spacy.io/api/doc#vector_norm + DOCS: https://nightly.spacy.io/api/doc#vector_norm """ def __get__(self): if "vector_norm" in self.user_hooks: @@ -493,7 +518,7 @@ cdef class Doc: RETURNS (tuple): Entities in the document, one `Span` per entity. - DOCS: https://spacy.io/api/doc#ents + DOCS: https://nightly.spacy.io/api/doc#ents """ def __get__(self): cdef int i @@ -584,7 +609,7 @@ cdef class Doc: YIELDS (Span): Noun chunks in the document. - DOCS: https://spacy.io/api/doc#noun_chunks + DOCS: https://nightly.spacy.io/api/doc#noun_chunks """ # Accumulate the result before beginning to iterate over it. This @@ -609,7 +634,7 @@ cdef class Doc: YIELDS (Span): Sentences in the document. - DOCS: https://spacy.io/api/doc#sents + DOCS: https://nightly.spacy.io/api/doc#sents """ if not self.is_sentenced: raise ValueError(Errors.E030) @@ -722,7 +747,7 @@ cdef class Doc: attr_id (int): The attribute ID to key the counts. RETURNS (dict): A dictionary mapping attributes to integer counts. 
- DOCS: https://spacy.io/api/doc#count_by + DOCS: https://nightly.spacy.io/api/doc#count_by """ cdef int i cdef attr_t attr @@ -777,7 +802,7 @@ cdef class Doc: array (numpy.ndarray[ndim=2, dtype='int32']): The attribute values. RETURNS (Doc): Itself. - DOCS: https://spacy.io/api/doc#from_array + DOCS: https://nightly.spacy.io/api/doc#from_array """ # Handle scalar/list inputs of strings/ints for py_attr_ids # See also #3064 @@ -872,7 +897,7 @@ cdef class Doc: attrs (list): Optional list of attribute ID ints or attribute name strings. RETURNS (Doc): A doc that contains the concatenated docs, or None if no docs were given. - DOCS: https://spacy.io/api/doc#from_docs + DOCS: https://nightly.spacy.io/api/doc#from_docs """ if not docs: return None @@ -920,7 +945,9 @@ cdef class Doc: warnings.warn(Warnings.W101.format(name=name)) else: warnings.warn(Warnings.W102.format(key=key, value=value)) - char_offset += len(doc.text) if not ensure_whitespace or doc[-1].is_space else len(doc.text) + 1 + char_offset += len(doc.text) + if ensure_whitespace and not (len(doc) > 0 and doc[-1].is_space): + char_offset += 1 arrays = [doc.to_array(attrs) for doc in docs] @@ -932,7 +959,7 @@ cdef class Doc: token_offset = -1 for doc in docs[:-1]: token_offset += len(doc) - if not doc[-1].is_space: + if not (len(doc) > 0 and doc[-1].is_space): concat_spaces[token_offset] = True concat_array = numpy.concatenate(arrays) @@ -951,7 +978,7 @@ cdef class Doc: RETURNS (np.array[ndim=2, dtype=numpy.int32]): LCA matrix with shape (n, n), where n = len(self). - DOCS: https://spacy.io/api/doc#get_lca_matrix + DOCS: https://nightly.spacy.io/api/doc#get_lca_matrix """ return numpy.asarray(_get_lca_matrix(self, 0, len(self))) @@ -985,7 +1012,7 @@ cdef class Doc: it doesn't exist. Paths may be either strings or Path-like objects. exclude (Iterable[str]): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/doc#to_disk + DOCS: https://nightly.spacy.io/api/doc#to_disk """ path = util.ensure_path(path) with path.open("wb") as file_: @@ -1000,7 +1027,7 @@ cdef class Doc: exclude (list): String names of serialization fields to exclude. RETURNS (Doc): The modified `Doc` object. - DOCS: https://spacy.io/api/doc#from_disk + DOCS: https://nightly.spacy.io/api/doc#from_disk """ path = util.ensure_path(path) with path.open("rb") as file_: @@ -1014,7 +1041,7 @@ cdef class Doc: RETURNS (bytes): A losslessly serialized copy of the `Doc`, including all annotations. - DOCS: https://spacy.io/api/doc#to_bytes + DOCS: https://nightly.spacy.io/api/doc#to_bytes """ return srsly.msgpack_dumps(self.to_dict(exclude=exclude)) @@ -1025,7 +1052,7 @@ cdef class Doc: exclude (list): String names of serialization fields to exclude. RETURNS (Doc): Itself. - DOCS: https://spacy.io/api/doc#from_bytes + DOCS: https://nightly.spacy.io/api/doc#from_bytes """ return self.from_dict(srsly.msgpack_loads(bytes_data), exclude=exclude) @@ -1036,7 +1063,7 @@ cdef class Doc: RETURNS (bytes): A losslessly serialized copy of the `Doc`, including all annotations. - DOCS: https://spacy.io/api/doc#to_bytes + DOCS: https://nightly.spacy.io/api/doc#to_bytes """ array_head = [LENGTH, SPACY, LEMMA, ENT_IOB, ENT_TYPE, ENT_ID, NORM, ENT_KB_ID] if self.is_tagged: @@ -1084,7 +1111,7 @@ cdef class Doc: exclude (list): String names of serialization fields to exclude. RETURNS (Doc): Itself. 
- DOCS: https://spacy.io/api/doc#from_dict + DOCS: https://nightly.spacy.io/api/doc#from_dict """ if self.length != 0: raise ValueError(Errors.E033.format(length=self.length)) @@ -1164,8 +1191,8 @@ cdef class Doc: retokenization are invalidated, although they may accidentally continue to work. - DOCS: https://spacy.io/api/doc#retokenize - USAGE: https://spacy.io/usage/linguistic-features#retokenization + DOCS: https://nightly.spacy.io/api/doc#retokenize + USAGE: https://nightly.spacy.io/usage/linguistic-features#retokenization """ return Retokenizer(self) @@ -1200,7 +1227,7 @@ cdef class Doc: be added to an "_" key in the data, e.g. "_": {"foo": "bar"}. RETURNS (dict): The data in spaCy's JSON format. - DOCS: https://spacy.io/api/doc#to_json + DOCS: https://nightly.spacy.io/api/doc#to_json """ data = {"text": self.text} if self.is_nered: @@ -1266,23 +1293,35 @@ cdef class Doc: cdef int token_by_start(const TokenC* tokens, int length, int start_char) except -2: - cdef int i - for i in range(length): - if tokens[i].idx == start_char: - return i + cdef int i = token_by_char(tokens, length, start_char) + if i >= 0 and tokens[i].idx == start_char: + return i else: return -1 cdef int token_by_end(const TokenC* tokens, int length, int end_char) except -2: - cdef int i - for i in range(length): - if tokens[i].idx + tokens[i].lex.length == end_char: - return i + # end_char is exclusive, so find the token at one char before + cdef int i = token_by_char(tokens, length, end_char - 1) + if i >= 0 and tokens[i].idx + tokens[i].lex.length == end_char: + return i else: return -1 +cdef int token_by_char(const TokenC* tokens, int length, int char_idx) except -2: + cdef int start = 0, mid, end = length - 1 + while start <= end: + mid = (start + end) / 2 + if char_idx < tokens[mid].idx: + end = mid - 1 + elif char_idx >= tokens[mid].idx + tokens[mid].lex.length + tokens[mid].spacy: + start = mid + 1 + else: + return mid + return -1 + + cdef int set_children_from_heads(TokenC* tokens, int length) except -1: cdef TokenC* head cdef TokenC* child diff --git a/spacy/tokens/span.pyx b/spacy/tokens/span.pyx index 15e6518d6..f06f3307d 100644 --- a/spacy/tokens/span.pyx +++ b/spacy/tokens/span.pyx @@ -27,7 +27,7 @@ from .underscore import Underscore, get_ext_args cdef class Span: """A slice from a Doc object. - DOCS: https://spacy.io/api/span + DOCS: https://nightly.spacy.io/api/span """ @classmethod def set_extension(cls, name, **kwargs): @@ -40,8 +40,8 @@ cdef class Span: method (callable): Optional method for method extension. force (bool): Force overwriting existing attribute. - DOCS: https://spacy.io/api/span#set_extension - USAGE: https://spacy.io/usage/processing-pipelines#custom-components-attributes + DOCS: https://nightly.spacy.io/api/span#set_extension + USAGE: https://nightly.spacy.io/usage/processing-pipelines#custom-components-attributes """ if cls.has_extension(name) and not kwargs.get("force", False): raise ValueError(Errors.E090.format(name=name, obj="Span")) @@ -54,7 +54,7 @@ cdef class Span: name (str): Name of the extension. RETURNS (tuple): A `(default, method, getter, setter)` tuple. - DOCS: https://spacy.io/api/span#get_extension + DOCS: https://nightly.spacy.io/api/span#get_extension """ return Underscore.span_extensions.get(name) @@ -65,7 +65,7 @@ cdef class Span: name (str): Name of the extension. RETURNS (bool): Whether the extension has been registered. 
- DOCS: https://spacy.io/api/span#has_extension + DOCS: https://nightly.spacy.io/api/span#has_extension """ return name in Underscore.span_extensions @@ -77,7 +77,7 @@ cdef class Span: RETURNS (tuple): A `(default, method, getter, setter)` tuple of the removed extension. - DOCS: https://spacy.io/api/span#remove_extension + DOCS: https://nightly.spacy.io/api/span#remove_extension """ if not cls.has_extension(name): raise ValueError(Errors.E046.format(name=name)) @@ -95,7 +95,7 @@ cdef class Span: vector (ndarray[ndim=1, dtype='float32']): A meaning representation of the span. - DOCS: https://spacy.io/api/span#init + DOCS: https://nightly.spacy.io/api/span#init """ if not (0 <= start <= end <= len(doc)): raise IndexError(Errors.E035.format(start=start, end=end, length=len(doc))) @@ -151,7 +151,7 @@ cdef class Span: RETURNS (int): The number of tokens in the span. - DOCS: https://spacy.io/api/span#len + DOCS: https://nightly.spacy.io/api/span#len """ self._recalculate_indices() if self.end < self.start: @@ -168,7 +168,7 @@ cdef class Span: the span to get. RETURNS (Token or Span): The token at `span[i]`. - DOCS: https://spacy.io/api/span#getitem + DOCS: https://nightly.spacy.io/api/span#getitem """ self._recalculate_indices() if isinstance(i, slice): @@ -189,7 +189,7 @@ cdef class Span: YIELDS (Token): A `Token` object. - DOCS: https://spacy.io/api/span#iter + DOCS: https://nightly.spacy.io/api/span#iter """ self._recalculate_indices() for i in range(self.start, self.end): @@ -210,7 +210,7 @@ cdef class Span: copy_user_data (bool): Whether or not to copy the original doc's user data. RETURNS (Doc): The `Doc` copy of the span. - DOCS: https://spacy.io/api/span#as_doc + DOCS: https://nightly.spacy.io/api/span#as_doc """ # TODO: make copy_user_data a keyword-only argument (Python 3 only) words = [t.text for t in self] @@ -292,7 +292,7 @@ cdef class Span: RETURNS (np.array[ndim=2, dtype=numpy.int32]): LCA matrix with shape (n, n), where n = len(self). - DOCS: https://spacy.io/api/span#get_lca_matrix + DOCS: https://nightly.spacy.io/api/span#get_lca_matrix """ return numpy.asarray(_get_lca_matrix(self.doc, self.start, self.end)) @@ -304,7 +304,7 @@ cdef class Span: `Span`, `Token` and `Lexeme` objects. RETURNS (float): A scalar similarity score. Higher is more similar. - DOCS: https://spacy.io/api/span#similarity + DOCS: https://nightly.spacy.io/api/span#similarity """ if "similarity" in self.doc.user_span_hooks: return self.doc.user_span_hooks["similarity"](self, other) @@ -400,7 +400,7 @@ cdef class Span: RETURNS (tuple): Entities in the span, one `Span` per entity. - DOCS: https://spacy.io/api/span#ents + DOCS: https://nightly.spacy.io/api/span#ents """ ents = [] for ent in self.doc.ents: @@ -415,7 +415,7 @@ cdef class Span: RETURNS (bool): Whether a word vector is associated with the object. - DOCS: https://spacy.io/api/span#has_vector + DOCS: https://nightly.spacy.io/api/span#has_vector """ if "has_vector" in self.doc.user_span_hooks: return self.doc.user_span_hooks["has_vector"](self) @@ -434,7 +434,7 @@ cdef class Span: RETURNS (numpy.ndarray[ndim=1, dtype='float32']): A 1D numpy array representing the span's semantics. - DOCS: https://spacy.io/api/span#vector + DOCS: https://nightly.spacy.io/api/span#vector """ if "vector" in self.doc.user_span_hooks: return self.doc.user_span_hooks["vector"](self) @@ -448,7 +448,7 @@ cdef class Span: RETURNS (float): The L2 norm of the vector representation. 
- DOCS: https://spacy.io/api/span#vector_norm + DOCS: https://nightly.spacy.io/api/span#vector_norm """ if "vector_norm" in self.doc.user_span_hooks: return self.doc.user_span_hooks["vector"](self) @@ -508,7 +508,7 @@ cdef class Span: YIELDS (Span): Base noun-phrase `Span` objects. - DOCS: https://spacy.io/api/span#noun_chunks + DOCS: https://nightly.spacy.io/api/span#noun_chunks """ if not self.doc.is_parsed: raise ValueError(Errors.E029) @@ -533,7 +533,7 @@ cdef class Span: RETURNS (Token): The root token. - DOCS: https://spacy.io/api/span#root + DOCS: https://nightly.spacy.io/api/span#root """ self._recalculate_indices() if "root" in self.doc.user_span_hooks: @@ -590,7 +590,7 @@ cdef class Span: RETURNS (tuple): A tuple of Token objects. - DOCS: https://spacy.io/api/span#lefts + DOCS: https://nightly.spacy.io/api/span#lefts """ return self.root.conjuncts @@ -601,7 +601,7 @@ cdef class Span: YIELDS (Token):A left-child of a token of the span. - DOCS: https://spacy.io/api/span#lefts + DOCS: https://nightly.spacy.io/api/span#lefts """ for token in reversed(self): # Reverse, so we get tokens in order for left in token.lefts: @@ -615,7 +615,7 @@ cdef class Span: YIELDS (Token): A right-child of a token of the span. - DOCS: https://spacy.io/api/span#rights + DOCS: https://nightly.spacy.io/api/span#rights """ for token in self: for right in token.rights: @@ -630,7 +630,7 @@ cdef class Span: RETURNS (int): The number of leftward immediate children of the span, in the syntactic dependency parse. - DOCS: https://spacy.io/api/span#n_lefts + DOCS: https://nightly.spacy.io/api/span#n_lefts """ return len(list(self.lefts)) @@ -642,7 +642,7 @@ cdef class Span: RETURNS (int): The number of rightward immediate children of the span, in the syntactic dependency parse. - DOCS: https://spacy.io/api/span#n_rights + DOCS: https://nightly.spacy.io/api/span#n_rights """ return len(list(self.rights)) @@ -652,7 +652,7 @@ cdef class Span: YIELDS (Token): A token within the span, or a descendant from it. - DOCS: https://spacy.io/api/span#subtree + DOCS: https://nightly.spacy.io/api/span#subtree """ for word in self.lefts: yield from word.subtree diff --git a/spacy/tokens/token.pyx b/spacy/tokens/token.pyx index 8afde60ee..50f1c5da3 100644 --- a/spacy/tokens/token.pyx +++ b/spacy/tokens/token.pyx @@ -30,7 +30,7 @@ cdef class Token: """An individual token – i.e. a word, punctuation symbol, whitespace, etc. - DOCS: https://spacy.io/api/token + DOCS: https://nightly.spacy.io/api/token """ @classmethod def set_extension(cls, name, **kwargs): @@ -43,8 +43,8 @@ cdef class Token: method (callable): Optional method for method extension. force (bool): Force overwriting existing attribute. - DOCS: https://spacy.io/api/token#set_extension - USAGE: https://spacy.io/usage/processing-pipelines#custom-components-attributes + DOCS: https://nightly.spacy.io/api/token#set_extension + USAGE: https://nightly.spacy.io/usage/processing-pipelines#custom-components-attributes """ if cls.has_extension(name) and not kwargs.get("force", False): raise ValueError(Errors.E090.format(name=name, obj="Token")) @@ -57,7 +57,7 @@ cdef class Token: name (str): Name of the extension. RETURNS (tuple): A `(default, method, getter, setter)` tuple. - DOCS: https://spacy.io/api/token#get_extension + DOCS: https://nightly.spacy.io/api/token#get_extension """ return Underscore.token_extensions.get(name) @@ -68,7 +68,7 @@ cdef class Token: name (str): Name of the extension. RETURNS (bool): Whether the extension has been registered. 
- DOCS: https://spacy.io/api/token#has_extension + DOCS: https://nightly.spacy.io/api/token#has_extension """ return name in Underscore.token_extensions @@ -80,7 +80,7 @@ cdef class Token: RETURNS (tuple): A `(default, method, getter, setter)` tuple of the removed extension. - DOCS: https://spacy.io/api/token#remove_extension + DOCS: https://nightly.spacy.io/api/token#remove_extension """ if not cls.has_extension(name): raise ValueError(Errors.E046.format(name=name)) @@ -93,7 +93,7 @@ cdef class Token: doc (Doc): The parent document. offset (int): The index of the token within the document. - DOCS: https://spacy.io/api/token#init + DOCS: https://nightly.spacy.io/api/token#init """ self.vocab = vocab self.doc = doc @@ -108,7 +108,7 @@ cdef class Token: RETURNS (int): The number of unicode characters in the token. - DOCS: https://spacy.io/api/token#len + DOCS: https://nightly.spacy.io/api/token#len """ return self.c.lex.length @@ -171,7 +171,7 @@ cdef class Token: flag_id (int): The ID of the flag attribute. RETURNS (bool): Whether the flag is set. - DOCS: https://spacy.io/api/token#check_flag + DOCS: https://nightly.spacy.io/api/token#check_flag """ return Lexeme.c_check_flag(self.c.lex, flag_id) @@ -181,7 +181,7 @@ cdef class Token: i (int): The relative position of the token to get. Defaults to 1. RETURNS (Token): The token at position `self.doc[self.i+i]`. - DOCS: https://spacy.io/api/token#nbor + DOCS: https://nightly.spacy.io/api/token#nbor """ if self.i+i < 0 or (self.i+i >= len(self.doc)): raise IndexError(Errors.E042.format(i=self.i, j=i, length=len(self.doc))) @@ -195,7 +195,7 @@ cdef class Token: `Span`, `Token` and `Lexeme` objects. RETURNS (float): A scalar similarity score. Higher is more similar. - DOCS: https://spacy.io/api/token#similarity + DOCS: https://nightly.spacy.io/api/token#similarity """ if "similarity" in self.doc.user_token_hooks: return self.doc.user_token_hooks["similarity"](self, other) @@ -373,7 +373,7 @@ cdef class Token: RETURNS (bool): Whether a word vector is associated with the object. - DOCS: https://spacy.io/api/token#has_vector + DOCS: https://nightly.spacy.io/api/token#has_vector """ if "has_vector" in self.doc.user_token_hooks: return self.doc.user_token_hooks["has_vector"](self) @@ -388,7 +388,7 @@ cdef class Token: RETURNS (numpy.ndarray[ndim=1, dtype='float32']): A 1D numpy array representing the token's semantics. - DOCS: https://spacy.io/api/token#vector + DOCS: https://nightly.spacy.io/api/token#vector """ if "vector" in self.doc.user_token_hooks: return self.doc.user_token_hooks["vector"](self) @@ -403,7 +403,7 @@ cdef class Token: RETURNS (float): The L2 norm of the vector representation. - DOCS: https://spacy.io/api/token#vector_norm + DOCS: https://nightly.spacy.io/api/token#vector_norm """ if "vector_norm" in self.doc.user_token_hooks: return self.doc.user_token_hooks["vector_norm"](self) @@ -426,7 +426,7 @@ cdef class Token: RETURNS (int): The number of leftward immediate children of the word, in the syntactic dependency parse. - DOCS: https://spacy.io/api/token#n_lefts + DOCS: https://nightly.spacy.io/api/token#n_lefts """ return self.c.l_kids @@ -438,7 +438,7 @@ cdef class Token: RETURNS (int): The number of rightward immediate children of the word, in the syntactic dependency parse. - DOCS: https://spacy.io/api/token#n_rights + DOCS: https://nightly.spacy.io/api/token#n_rights """ return self.c.r_kids @@ -470,7 +470,7 @@ cdef class Token: RETURNS (bool / None): Whether the token starts a sentence. None if unknown. 
- DOCS: https://spacy.io/api/token#is_sent_start + DOCS: https://nightly.spacy.io/api/token#is_sent_start """ def __get__(self): if self.c.sent_start == 0: @@ -499,7 +499,7 @@ cdef class Token: RETURNS (bool / None): Whether the token ends a sentence. None if unknown. - DOCS: https://spacy.io/api/token#is_sent_end + DOCS: https://nightly.spacy.io/api/token#is_sent_end """ def __get__(self): if self.i + 1 == len(self.doc): @@ -521,7 +521,7 @@ cdef class Token: YIELDS (Token): A left-child of the token. - DOCS: https://spacy.io/api/token#lefts + DOCS: https://nightly.spacy.io/api/token#lefts """ cdef int nr_iter = 0 cdef const TokenC* ptr = self.c - (self.i - self.c.l_edge) @@ -541,7 +541,7 @@ cdef class Token: YIELDS (Token): A right-child of the token. - DOCS: https://spacy.io/api/token#rights + DOCS: https://nightly.spacy.io/api/token#rights """ cdef const TokenC* ptr = self.c + (self.c.r_edge - self.i) tokens = [] @@ -563,7 +563,7 @@ cdef class Token: YIELDS (Token): A child token such that `child.head==self`. - DOCS: https://spacy.io/api/token#children + DOCS: https://nightly.spacy.io/api/token#children """ yield from self.lefts yield from self.rights @@ -576,7 +576,7 @@ cdef class Token: YIELDS (Token): A descendent token such that `self.is_ancestor(descendent) or token == self`. - DOCS: https://spacy.io/api/token#subtree + DOCS: https://nightly.spacy.io/api/token#subtree """ for word in self.lefts: yield from word.subtree @@ -607,7 +607,7 @@ cdef class Token: YIELDS (Token): A sequence of ancestor tokens such that `ancestor.is_ancestor(self)`. - DOCS: https://spacy.io/api/token#ancestors + DOCS: https://nightly.spacy.io/api/token#ancestors """ cdef const TokenC* head_ptr = self.c # Guard against infinite loop, no token can have @@ -625,7 +625,7 @@ cdef class Token: descendant (Token): Another token. RETURNS (bool): Whether this token is the ancestor of the descendant. - DOCS: https://spacy.io/api/token#is_ancestor + DOCS: https://nightly.spacy.io/api/token#is_ancestor """ if self.doc is not descendant.doc: return False @@ -729,7 +729,7 @@ cdef class Token: RETURNS (tuple): The coordinated tokens. - DOCS: https://spacy.io/api/token#conjuncts + DOCS: https://nightly.spacy.io/api/token#conjuncts """ cdef Token word, child if "conjuncts" in self.doc.user_token_hooks: diff --git a/spacy/util.py b/spacy/util.py index 0eb76c3d1..fa4815df8 100644 --- a/spacy/util.py +++ b/spacy/util.py @@ -76,7 +76,7 @@ class registry(thinc.registry): lemmatizers = catalogue.create("spacy", "lemmatizers", entry_points=True) lookups = catalogue.create("spacy", "lookups", entry_points=True) displacy_colors = catalogue.create("spacy", "displacy_colors", entry_points=True) - assets = catalogue.create("spacy", "assets", entry_points=True) + misc = catalogue.create("spacy", "misc", entry_points=True) # Callback functions used to manipulate nlp object etc. callbacks = catalogue.create("spacy", "callbacks") batchers = catalogue.create("spacy", "batchers", entry_points=True) diff --git a/spacy/vectors.pyx b/spacy/vectors.pyx index bcea87e67..ae2508c87 100644 --- a/spacy/vectors.pyx +++ b/spacy/vectors.pyx @@ -44,7 +44,7 @@ cdef class Vectors: the table need to be assigned - so len(list(vectors.keys())) may be greater or smaller than vectors.shape[0]. - DOCS: https://spacy.io/api/vectors + DOCS: https://nightly.spacy.io/api/vectors """ cdef public object name cdef public object data @@ -59,7 +59,7 @@ cdef class Vectors: keys (iterable): A sequence of keys, aligned with the data. 
name (str): A name to identify the vectors table. - DOCS: https://spacy.io/api/vectors#init + DOCS: https://nightly.spacy.io/api/vectors#init """ self.name = name if data is None: @@ -83,7 +83,7 @@ cdef class Vectors: RETURNS (tuple): A `(rows, dims)` pair. - DOCS: https://spacy.io/api/vectors#shape + DOCS: https://nightly.spacy.io/api/vectors#shape """ return self.data.shape @@ -93,7 +93,7 @@ cdef class Vectors: RETURNS (int): The vector size. - DOCS: https://spacy.io/api/vectors#size + DOCS: https://nightly.spacy.io/api/vectors#size """ return self.data.shape[0] * self.data.shape[1] @@ -103,7 +103,7 @@ cdef class Vectors: RETURNS (bool): `True` if no slots are available for new keys. - DOCS: https://spacy.io/api/vectors#is_full + DOCS: https://nightly.spacy.io/api/vectors#is_full """ return self._unset.size() == 0 @@ -114,7 +114,7 @@ cdef class Vectors: RETURNS (int): The number of keys in the table. - DOCS: https://spacy.io/api/vectors#n_keys + DOCS: https://nightly.spacy.io/api/vectors#n_keys """ return len(self.key2row) @@ -127,7 +127,7 @@ cdef class Vectors: key (int): The key to get the vector for. RETURNS (ndarray): The vector for the key. - DOCS: https://spacy.io/api/vectors#getitem + DOCS: https://nightly.spacy.io/api/vectors#getitem """ i = self.key2row[key] if i is None: @@ -141,7 +141,7 @@ cdef class Vectors: key (int): The key to set the vector for. vector (ndarray): The vector to set. - DOCS: https://spacy.io/api/vectors#setitem + DOCS: https://nightly.spacy.io/api/vectors#setitem """ i = self.key2row[key] self.data[i] = vector @@ -153,7 +153,7 @@ cdef class Vectors: YIELDS (int): A key in the table. - DOCS: https://spacy.io/api/vectors#iter + DOCS: https://nightly.spacy.io/api/vectors#iter """ yield from self.key2row @@ -162,7 +162,7 @@ cdef class Vectors: RETURNS (int): The number of vectors in the data. - DOCS: https://spacy.io/api/vectors#len + DOCS: https://nightly.spacy.io/api/vectors#len """ return self.data.shape[0] @@ -172,7 +172,7 @@ cdef class Vectors: key (int): The key to check. RETURNS (bool): Whether the key has a vector entry. - DOCS: https://spacy.io/api/vectors#contains + DOCS: https://nightly.spacy.io/api/vectors#contains """ return key in self.key2row @@ -189,7 +189,7 @@ cdef class Vectors: inplace (bool): Reallocate the memory. RETURNS (list): The removed items as a list of `(key, row)` tuples. - DOCS: https://spacy.io/api/vectors#resize + DOCS: https://nightly.spacy.io/api/vectors#resize """ xp = get_array_module(self.data) if inplace: @@ -224,7 +224,7 @@ cdef class Vectors: YIELDS (ndarray): A vector in the table. - DOCS: https://spacy.io/api/vectors#values + DOCS: https://nightly.spacy.io/api/vectors#values """ for row, vector in enumerate(range(self.data.shape[0])): if not self._unset.count(row): @@ -235,7 +235,7 @@ cdef class Vectors: YIELDS (tuple): A key/vector pair. - DOCS: https://spacy.io/api/vectors#items + DOCS: https://nightly.spacy.io/api/vectors#items """ for key, row in self.key2row.items(): yield key, self.data[row] @@ -281,7 +281,7 @@ cdef class Vectors: row (int / None): The row number of a vector to map the key to. RETURNS (int): The row the vector was added to. - DOCS: https://spacy.io/api/vectors#add + DOCS: https://nightly.spacy.io/api/vectors#add """ # use int for all keys and rows in key2row for more efficient access # and serialization @@ -368,7 +368,7 @@ cdef class Vectors: path (str / Path): A path to a directory, which will be created if it doesn't exists. 
- DOCS: https://spacy.io/api/vectors#to_disk + DOCS: https://nightly.spacy.io/api/vectors#to_disk """ xp = get_array_module(self.data) if xp is numpy: @@ -396,7 +396,7 @@ cdef class Vectors: path (str / Path): Directory path, string or Path-like object. RETURNS (Vectors): The modified object. - DOCS: https://spacy.io/api/vectors#from_disk + DOCS: https://nightly.spacy.io/api/vectors#from_disk """ def load_key2row(path): if path.exists(): @@ -432,7 +432,7 @@ cdef class Vectors: exclude (list): String names of serialization fields to exclude. RETURNS (bytes): The serialized form of the `Vectors` object. - DOCS: https://spacy.io/api/vectors#to_bytes + DOCS: https://nightly.spacy.io/api/vectors#to_bytes """ def serialize_weights(): if hasattr(self.data, "to_bytes"): @@ -453,7 +453,7 @@ cdef class Vectors: exclude (list): String names of serialization fields to exclude. RETURNS (Vectors): The `Vectors` object. - DOCS: https://spacy.io/api/vectors#from_bytes + DOCS: https://nightly.spacy.io/api/vectors#from_bytes """ def deserialize_weights(b): if hasattr(self.data, "from_bytes"): diff --git a/spacy/vocab.pyx b/spacy/vocab.pyx index 9e14f37d2..ef0847e54 100644 --- a/spacy/vocab.pyx +++ b/spacy/vocab.pyx @@ -54,7 +54,7 @@ cdef class Vocab: instance also provides access to the `StringStore`, and owns underlying C-data that is shared between `Doc` objects. - DOCS: https://spacy.io/api/vocab + DOCS: https://nightly.spacy.io/api/vocab """ def __init__(self, lex_attr_getters=None, strings=tuple(), lookups=None, oov_prob=-20., vectors_name=None, writing_system={}, @@ -117,7 +117,7 @@ cdef class Vocab: available bit will be chosen. RETURNS (int): The integer ID by which the flag value can be checked. - DOCS: https://spacy.io/api/vocab#add_flag + DOCS: https://nightly.spacy.io/api/vocab#add_flag """ if flag_id == -1: for bit in range(1, 64): @@ -201,7 +201,7 @@ cdef class Vocab: string (unicode): The ID string. RETURNS (bool) Whether the string has an entry in the vocabulary. - DOCS: https://spacy.io/api/vocab#contains + DOCS: https://nightly.spacy.io/api/vocab#contains """ cdef hash_t int_key if isinstance(key, bytes): @@ -218,7 +218,7 @@ cdef class Vocab: YIELDS (Lexeme): An entry in the vocabulary. - DOCS: https://spacy.io/api/vocab#iter + DOCS: https://nightly.spacy.io/api/vocab#iter """ cdef attr_t key cdef size_t addr @@ -241,7 +241,7 @@ cdef class Vocab: >>> apple = nlp.vocab.strings["apple"] >>> assert nlp.vocab[apple] == nlp.vocab[u"apple"] - DOCS: https://spacy.io/api/vocab#getitem + DOCS: https://nightly.spacy.io/api/vocab#getitem """ cdef attr_t orth if isinstance(id_or_string, unicode): @@ -309,7 +309,7 @@ cdef class Vocab: word was mapped to, and `score` the similarity score between the two words. - DOCS: https://spacy.io/api/vocab#prune_vectors + DOCS: https://nightly.spacy.io/api/vocab#prune_vectors """ xp = get_array_module(self.vectors.data) # Make prob negative so it sorts by rank ascending @@ -349,7 +349,7 @@ cdef class Vocab: and shape determined by the `vocab.vectors` instance. Usually, a numpy ndarray of shape (300,) and dtype float32. - DOCS: https://spacy.io/api/vocab#get_vector + DOCS: https://nightly.spacy.io/api/vocab#get_vector """ if isinstance(orth, str): orth = self.strings.add(orth) @@ -396,7 +396,7 @@ cdef class Vocab: orth (int / unicode): The word. vector (numpy.ndarray[ndim=1, dtype='float32']): The vector to set. 
- DOCS: https://spacy.io/api/vocab#set_vector + DOCS: https://nightly.spacy.io/api/vocab#set_vector """ if isinstance(orth, str): orth = self.strings.add(orth) @@ -418,7 +418,7 @@ cdef class Vocab: orth (int / unicode): The word. RETURNS (bool): Whether the word has a vector. - DOCS: https://spacy.io/api/vocab#has_vector + DOCS: https://nightly.spacy.io/api/vocab#has_vector """ if isinstance(orth, str): orth = self.strings.add(orth) @@ -431,7 +431,7 @@ cdef class Vocab: it doesn't exist. exclude (list): String names of serialization fields to exclude. - DOCS: https://spacy.io/api/vocab#to_disk + DOCS: https://nightly.spacy.io/api/vocab#to_disk """ path = util.ensure_path(path) if not path.exists(): @@ -452,7 +452,7 @@ cdef class Vocab: exclude (list): String names of serialization fields to exclude. RETURNS (Vocab): The modified `Vocab` object. - DOCS: https://spacy.io/api/vocab#to_disk + DOCS: https://nightly.spacy.io/api/vocab#to_disk """ path = util.ensure_path(path) getters = ["strings", "vectors"] @@ -477,7 +477,7 @@ cdef class Vocab: exclude (list): String names of serialization fields to exclude. RETURNS (bytes): The serialized form of the `Vocab` object. - DOCS: https://spacy.io/api/vocab#to_bytes + DOCS: https://nightly.spacy.io/api/vocab#to_bytes """ def deserialize_vectors(): if self.vectors is None: @@ -499,7 +499,7 @@ cdef class Vocab: exclude (list): String names of serialization fields to exclude. RETURNS (Vocab): The `Vocab` object. - DOCS: https://spacy.io/api/vocab#from_bytes + DOCS: https://nightly.spacy.io/api/vocab#from_bytes """ def serialize_vectors(b): if self.vectors is None: diff --git a/website/docs/api/architectures.md b/website/docs/api/architectures.md index 93e50bfb3..ee844d961 100644 --- a/website/docs/api/architectures.md +++ b/website/docs/api/architectures.md @@ -320,7 +320,7 @@ for details and system requirements. > tokenizer_config = {"use_fast": true} > > [model.get_spans] -> @span_getters = "strided_spans.v1" +> @span_getters = "spacy-transformers.strided_spans.v1" > window = 128 > stride = 96 > ``` @@ -673,11 +673,11 @@ into the "real world". This requires 3 main components: > subword_features = true > > [kb_loader] -> @assets = "spacy.EmptyKB.v1" +> @misc = "spacy.EmptyKB.v1" > entity_vector_length = 64 > > [get_candidates] -> @assets = "spacy.CandidateGenerator.v1" +> @misc = "spacy.CandidateGenerator.v1" > ``` The `EntityLinker` model architecture is a Thinc `Model` with a diff --git a/website/docs/api/cli.md b/website/docs/api/cli.md index 9070855fa..7852d0482 100644 --- a/website/docs/api/cli.md +++ b/website/docs/api/cli.md @@ -1,6 +1,6 @@ --- title: Command Line Interface -teaser: Download, train and package models, and debug spaCy +teaser: Download, train and package pipelines, and debug spaCy source: spacy/cli menu: - ['download', 'download'] @@ -17,45 +17,47 @@ menu: --- spaCy's CLI provides a range of helpful commands for downloading and training -models, converting data and debugging your config, data and installation. For a -list of available commands, you can type `python -m spacy --help`. You can also -add the `--help` flag to any command or subcommand to see the description, +pipelines, converting data and debugging your config, data and installation. For +a list of available commands, you can type `python -m spacy --help`. You can +also add the `--help` flag to any command or subcommand to see the description, available arguments and usage. ## download {#download tag="command"} -Download [models](/usage/models) for spaCy. 
The downloader finds the -best-matching compatible version and uses `pip install` to download the model as -a package. Direct downloads don't perform any compatibility checks and require -the model name to be specified with its version (e.g. `en_core_web_sm-2.2.0`). +Download [trained pipelines](/usage/models) for spaCy. The downloader finds the +best-matching compatible version and uses `pip install` to download the Python +package. Direct downloads don't perform any compatibility checks and require the +pipeline name to be specified with its version (e.g. `en_core_web_sm-2.2.0`). > #### Downloading best practices > > The `download` command is mostly intended as a convenient, interactive wrapper > – it performs compatibility checks and prints detailed messages in case things > go wrong. It's **not recommended** to use this command as part of an automated -> process. If you know which model your project needs, you should consider a -> [direct download via pip](/usage/models#download-pip), or uploading the model -> to a local PyPi installation and fetching it straight from there. This will -> also allow you to add it as a versioned package dependency to your project. +> process. If you know which package your project needs, you should consider a +> [direct download via pip](/usage/models#download-pip), or uploading the +> package to a local PyPi installation and fetching it straight from there. This +> will also allow you to add it as a versioned package dependency to your +> project. ```cli $ python -m spacy download [model] [--direct] [pip_args] ``` -| Name | Description | -| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `model` | Model name, e.g. [`en_core_web_sm`](/models/en#en_core_web_sm). ~~str (positional)~~ | -| `--direct`, `-d` | Force direct download of exact model version. ~~bool (flag)~~ | -| `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | -| pip args 2.1 | Additional installation options to be passed to `pip install` when installing the model package. For example, `--user` to install to the user home directory or `--no-deps` to not install model dependencies. ~~Any (option/flag)~~ | -| **CREATES** | The installed model package in your `site-packages` directory. | +| Name | Description | +| ------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `model` | Pipeline package name, e.g. [`en_core_web_sm`](/models/en#en_core_web_sm). ~~str (positional)~~ | +| `--direct`, `-d` | Force direct download of exact package version. ~~bool (flag)~~ | +| `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | +| pip args 2.1 | Additional installation options to be passed to `pip install` when installing the pipeline package. For example, `--user` to install to the user home directory or `--no-deps` to not install package dependencies. ~~Any (option/flag)~~ | +| **CREATES** | The installed pipeline package in your `site-packages` directory. 
| ## info {#info tag="command"} -Print information about your spaCy installation, models and local setup, and -generate [Markdown](https://en.wikipedia.org/wiki/Markdown)-formatted markup to -copy-paste into [GitHub issues](https://github.com/explosion/spaCy/issues). +Print information about your spaCy installation, trained pipelines and local +setup, and generate [Markdown](https://en.wikipedia.org/wiki/Markdown)-formatted +markup to copy-paste into +[GitHub issues](https://github.com/explosion/spaCy/issues). ```cli $ python -m spacy info [--markdown] [--silent] @@ -65,41 +67,41 @@ $ python -m spacy info [--markdown] [--silent] $ python -m spacy info [model] [--markdown] [--silent] ``` -| Name | Description | -| ------------------------------------------------ | ------------------------------------------------------------------------------ | -| `model` | A model, i.e. package name or path (optional). ~~Optional[str] \(positional)~~ | -| `--markdown`, `-md` | Print information as Markdown. ~~bool (flag)~~ | -| `--silent`, `-s` 2.0.12 | Don't print anything, just return the values. ~~bool (flag)~~ | -| `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | -| **PRINTS** | Information about your spaCy installation. | +| Name | Description | +| ------------------------------------------------ | ----------------------------------------------------------------------------------------- | +| `model` | A trained pipeline, i.e. package name or path (optional). ~~Optional[str] \(positional)~~ | +| `--markdown`, `-md` | Print information as Markdown. ~~bool (flag)~~ | +| `--silent`, `-s` 2.0.12 | Don't print anything, just return the values. ~~bool (flag)~~ | +| `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | +| **PRINTS** | Information about your spaCy installation. | ## validate {#validate new="2" tag="command"} -Find all models installed in the current environment and check whether they are -compatible with the currently installed version of spaCy. Should be run after -upgrading spaCy via `pip install -U spacy` to ensure that all installed models -are can be used with the new version. It will show a list of models and their -installed versions. If any model is out of date, the latest compatible versions -and command for updating are shown. +Find all trained pipeline packages installed in the current environment and +check whether they are compatible with the currently installed version of spaCy. +Should be run after upgrading spaCy via `pip install -U spacy` to ensure that +all installed packages are can be used with the new version. It will show a list +of packages and their installed versions. If any package is out of date, the +latest compatible versions and command for updating are shown. > #### Automated validation > > You can also use the `validate` command as part of your build process or test -> suite, to ensure all models are up to date before proceeding. If incompatible -> models are found, it will return `1`. +> suite, to ensure all packages are up to date before proceeding. If +> incompatible packages are found, it will return `1`. ```cli $ python -m spacy validate ``` -| Name | Description | -| ---------- | --------------------------------------------------------- | -| **PRINTS** | Details about the compatibility of your installed models. | +| Name | Description | +| ---------- | -------------------------------------------------------------------- | +| **PRINTS** | Details about the compatibility of your installed pipeline packages. 
| ## init {#init new="3"} The `spacy init` CLI includes helpful commands for initializing training config -files and model directories. +files and pipeline directories. ### init config {#init-config new="3" tag="command"} @@ -125,7 +127,7 @@ $ python -m spacy init config [output_file] [--lang] [--pipeline] [--optimize] [ | ------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `output_file` | Path to output `.cfg` file or `-` to write the config to stdout (so you can pipe it forward to a file). Note that if you're writing to stdout, no additional logging info is printed. ~~Path (positional)~~ | | `--lang`, `-l` | Optional code of the [language](/usage/models#languages) to use. Defaults to `"en"`. ~~str (option)~~ | -| `--pipeline`, `-p` | Comma-separated list of trainable [pipeline components](/usage/processing-pipelines#built-in) to include in the model. Defaults to `"tagger,parser,ner"`. ~~str (option)~~ | +| `--pipeline`, `-p` | Comma-separated list of trainable [pipeline components](/usage/processing-pipelines#built-in) to include. Defaults to `"tagger,parser,ner"`. ~~str (option)~~ | | `--optimize`, `-o` | `"efficiency"` or `"accuracy"`. Whether to optimize for efficiency (faster inference, smaller model, lower memory consumption) or higher accuracy (potentially larger and slower model). This will impact the choice of architecture, pretrained weights and related hyperparameters. Defaults to `"efficiency"`. ~~str (option)~~ | | `--cpu`, `-C` | Whether the model needs to run on CPU. This will impact the choice of architecture, pretrained weights and related hyperparameters. ~~bool (flag)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | @@ -165,36 +167,38 @@ $ python -m spacy init fill-config [base_path] [output_file] [--diff] | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | | **CREATES** | Complete and auto-filled config file for training. | -### init model {#init-model new="2" tag="command"} +### init vocab {#init-vocab new="3" tag="command"} -Create a new model directory from raw data, like word frequencies, Brown -clusters and word vectors. Note that in order to populate the model's vocab, you +Create a blank pipeline directory from raw data, like word frequencies, Brown +clusters and word vectors. Note that in order to populate the vocabulary, you need to pass in a JSONL-formatted [vocabulary file](/api/data-formats#vocab-jsonl) as `--jsonl-loc` with optional `id` values that correspond to the vectors table. Just loading in vectors will not automatically populate the vocab. - + -The `init-model` command is now available as a subcommand of `spacy init`. +This command was previously called `init-model`. 
```cli -$ python -m spacy init model [lang] [output_dir] [--jsonl-loc] [--vectors-loc] [--prune-vectors] +$ python -m spacy init vocab [lang] [output_dir] [--jsonl-loc] [--vectors-loc] [--prune-vectors] [--vectors-name] [--meta-name] [--base] ``` | Name | Description | | ------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `lang` | Model language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes), e.g. `en`. ~~str (positional)~~ | -| `output_dir` | Model output directory. Will be created if it doesn't exist. ~~Path (positional)~~ | +| `lang` | Pipeline language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes), e.g. `en`. ~~str (positional)~~ | +| `output_dir` | Pipeline output directory. Will be created if it doesn't exist. ~~Path (positional)~~ | | `--jsonl-loc`, `-j` | Optional location of JSONL-formatted [vocabulary file](/api/data-formats#vocab-jsonl) with lexical attributes. ~~Optional[Path] \(option)~~ | | `--vectors-loc`, `-v` | Optional location of vectors. Should be a file where the first row contains the dimensions of the vectors, followed by a space-separated Word2Vec table. File can be provided in `.txt` format or as a zipped text file in `.zip` or `.tar.gz` format. ~~Optional[Path] \(option)~~ | | `--truncate-vectors`, `-t` 2.3 | Number of vectors to truncate to when reading in vectors file. Defaults to `0` for no truncation. ~~int (option)~~ | | `--prune-vectors`, `-V` | Number of vectors to prune the vocabulary to. Defaults to `-1` for no pruning. ~~int (option)~~ | -| `--vectors-name`, `-vn` | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. ~~str (option)~~ | +| `--vectors-name`, `-vn` | Name to assign to the word vectors in the `meta.json`, e.g. `en_core_web_md.vectors`. ~~Optional[str] \(option)~~ | +| `--meta-name`, `-mn` | Optional name of the package for the pipeline meta. ~~Optional[str] \(option)~~ | +| `--base`, `-b` | Optional name of or path to base pipeline to start with (mostly relevant for pipelines with custom tokenizers). ~~Optional[str] \(option)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | -| **CREATES** | A spaCy model containing the vocab and vectors. | +| **CREATES** | A spaCy pipeline directory containing the vocab and vectors. | ## convert {#convert tag="command"} @@ -205,7 +209,7 @@ management functions. The converter can be specified on the command line, or chosen based on the file extension of the input file. ```cli -$ python -m spacy convert [input_file] [output_dir] [--converter] [--file-type] [--n-sents] [--seg-sents] [--model] [--morphology] [--merge-subtokens] [--ner-map] [--lang] +$ python -m spacy convert [input_file] [output_dir] [--converter] [--file-type] [--n-sents] [--seg-sents] [--base] [--morphology] [--merge-subtokens] [--ner-map] [--lang] ``` | Name | Description | @@ -216,7 +220,7 @@ $ python -m spacy convert [input_file] [output_dir] [--converter] [--file-type] | `--file-type`, `-t` 2.1 | Type of file to create. Either `spacy` (default) for binary [`DocBin`](/api/docbin) data or `json` for v2.x JSON format. ~~str (option)~~ | | `--n-sents`, `-n` | Number of sentences per document. 
~~int (option)~~ | | `--seg-sents`, `-s` 2.2 | Segment sentences (for `--converter ner`). ~~bool (flag)~~ | -| `--model`, `-b` 2.2 | Model for parser-based sentence segmentation (for `--seg-sents`). ~~Optional[str](option)~~ | +| `--base`, `-b` | Trained spaCy pipeline for sentence segmentation to use as base (for `--seg-sents`). ~~Optional[str](option)~~ | | `--morphology`, `-m` | Enable appending morphology to tags. ~~bool (flag)~~ | | `--ner-map`, `-nm` | NER tag mapping (as JSON-encoded dict of entity types). ~~Optional[Path](option)~~ | | `--lang`, `-l` 2.1 | Language code (if tokenizer required). ~~Optional[str] \(option)~~ | @@ -267,7 +271,7 @@ training -> dropout field required training -> optimizer field required training -> optimize extra fields not permitted -{'vectors': 'en_vectors_web_lg', 'seed': 0, 'accumulate_gradient': 1, 'init_tok2vec': None, 'raw_text': None, 'patience': 1600, 'max_epochs': 0, 'max_steps': 20000, 'eval_frequency': 200, 'frozen_components': [], 'optimize': None, 'batcher': {'@batchers': 'batch_by_words.v1', 'discard_oversize': False, 'tolerance': 0.2, 'get_length': None, 'size': {'@schedules': 'compounding.v1', 'start': 100, 'stop': 1000, 'compound': 1.001, 't': 0.0}}, 'dev_corpus': {'@readers': 'spacy.Corpus.v1', 'path': '', 'max_length': 0, 'gold_preproc': False, 'limit': 0}, 'score_weights': {'tag_acc': 0.5, 'dep_uas': 0.25, 'dep_las': 0.25, 'sents_f': 0.0}, 'train_corpus': {'@readers': 'spacy.Corpus.v1', 'path': '', 'max_length': 0, 'gold_preproc': False, 'limit': 0}} +{'vectors': 'en_vectors_web_lg', 'seed': 0, 'accumulate_gradient': 1, 'init_tok2vec': None, 'raw_text': None, 'patience': 1600, 'max_epochs': 0, 'max_steps': 20000, 'eval_frequency': 200, 'frozen_components': [], 'optimize': None, 'batcher': {'@batchers': 'spacy.batch_by_words.v1', 'discard_oversize': False, 'tolerance': 0.2, 'get_length': None, 'size': {'@schedules': 'compounding.v1', 'start': 100, 'stop': 1000, 'compound': 1.001, 't': 0.0}}, 'dev_corpus': {'@readers': 'spacy.Corpus.v1', 'path': '', 'max_length': 0, 'gold_preproc': False, 'limit': 0}, 'score_weights': {'tag_acc': 0.5, 'dep_uas': 0.25, 'dep_las': 0.25, 'sents_f': 0.0}, 'train_corpus': {'@readers': 'spacy.Corpus.v1', 'path': '', 'max_length': 0, 'gold_preproc': False, 'limit': 0}} If your config contains missing values, you can run the 'init fill-config' command to fill in all the defaults, if possible: @@ -357,7 +361,7 @@ Module spacy.gold.loggers File /path/to/spacy/gold/loggers.py (line 8) ℹ [training.batcher] Registry @batchers -Name batch_by_words.v1 +Name spacy.batch_by_words.v1 Module spacy.gold.batchers File /path/to/spacy/gold/batchers.py (line 49) ℹ [training.batcher.size] @@ -594,11 +598,11 @@ $ python -m spacy debug profile [model] [inputs] [--n-texts] | Name | Description | | ----------------- | ---------------------------------------------------------------------------------- | -| `model` | A loadable spaCy model. ~~str (positional)~~ | +| `model` | A loadable spaCy pipeline (package name or path). ~~str (positional)~~ | | `inputs` | Optional path to input file, or `-` for standard input. ~~Path (positional)~~ | | `--n-texts`, `-n` | Maximum number of texts to use if available. Defaults to `10000`. ~~int (option)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | -| **PRINTS** | Profiling information for the model. | +| **PRINTS** | Profiling information for the pipeline. 
| ### debug model {#debug-model new="3" tag="command"} @@ -724,10 +728,10 @@ $ python -m spacy debug model ./config.cfg tagger -l "5,15" -DIM -PAR -P0 -P1 -P ## train {#train tag="command"} -Train a model. Expects data in spaCy's +Train a pipeline. Expects data in spaCy's [binary format](/api/data-formats#training) and a [config file](/api/data-formats#config) with all settings and hyperparameters. -Will save out the best model from all epochs, as well as the final model. The +Will save out the best model from all epochs, as well as the final pipeline. The `--code` argument can be used to provide a Python file that's imported before the training process starts. This lets you register [custom functions](/usage/training#custom-functions) and architectures and refer @@ -753,12 +757,12 @@ $ python -m spacy train [config_path] [--output] [--code] [--verbose] [overrides | Name | Description | | ----------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `config_path` | Path to [training config](/api/data-formats#config) file containing all settings and hyperparameters. ~~Path (positional)~~ | -| `--output`, `-o` | Directory to store model in. Will be created if it doesn't exist. ~~Optional[Path] \(positional)~~ | +| `--output`, `-o` | Directory to store trained pipeline in. Will be created if it doesn't exist. ~~Optional[Path] \(positional)~~ | | `--code`, `-c` | Path to Python file with additional code to be imported. Allows [registering custom functions](/usage/training#custom-functions) for new architectures. ~~Optional[Path] \(option)~~ | | `--verbose`, `-V` | Show more detailed messages during training. ~~bool (flag)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | | overrides | Config parameters to override. Should be options starting with `--` that correspond to the config section and value to override, e.g. `--paths.train ./train.spacy`. ~~Any (option/flag)~~ | -| **CREATES** | The final model and the best model. | +| **CREATES** | The final trained pipeline and the best trained pipeline. | ## pretrain {#pretrain new="2.1" tag="command,experimental"} @@ -769,7 +773,7 @@ a component like a CNN, BiLSTM, etc to predict vectors which match the pretrained ones. The weights are saved to a directory after each epoch. You can then include a **path to one of these pretrained weights files** in your [training config](/usage/training#config) as the `init_tok2vec` setting when you -train your model. This technique may be especially helpful if you have little +train your pipeline. This technique may be especially helpful if you have little labelled data. See the usage docs on [pretraining](/usage/training#pretraining) for more info. @@ -792,7 +796,7 @@ $ python -m spacy pretrain [texts_loc] [output_dir] [config_path] [--code] [--re | Name | Description | | ----------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `texts_loc` | Path to JSONL file with raw texts to learn from, with text provided as the key `"text"` or tokens as the key `"tokens"`. [See here](/api/data-formats#pretrain) for details. ~~Path (positional)~~ | -| `output_dir` | Directory to write models to on each epoch. 
~~Path (positional)~~ | +| `output_dir` | Directory to save binary weights to on each epoch. ~~Path (positional)~~ | | `config_path` | Path to [training config](/api/data-formats#config) file containing all settings and hyperparameters. ~~Path (positional)~~ | | `--code`, `-c` | Path to Python file with additional code to be imported. Allows [registering custom functions](/usage/training#custom-functions) for new architectures. ~~Optional[Path] \(option)~~ | | `--resume-path`, `-r` | Path to pretrained weights from which to resume pretraining. ~~Optional[Path] \(option)~~ | @@ -803,7 +807,8 @@ $ python -m spacy pretrain [texts_loc] [output_dir] [config_path] [--code] [--re ## evaluate {#evaluate new="2" tag="command"} -Evaluate a model. Expects a loadable spaCy model and evaluation data in the +Evaluate a trained pipeline. Expects a loadable spaCy pipeline (package name or +path) and evaluation data in the [binary `.spacy` format](/api/data-formats#binary-training). The `--gold-preproc` option sets up the evaluation examples with gold-standard sentences and tokens for the predictions. Gold preprocessing helps the @@ -819,7 +824,7 @@ $ python -m spacy evaluate [model] [data_path] [--output] [--gold-preproc] [--gp | Name | Description | | ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `model` | Model to evaluate. Can be a package or a path to a model data directory. ~~str (positional)~~ | +| `model` | Pipeline to evaluate. Can be a package or a path to a data directory. ~~str (positional)~~ | | `data_path` | Location of evaluation data in spaCy's [binary format](/api/data-formats#training). ~~Path (positional)~~ | | `--output`, `-o` | Output JSON file for metrics. If not set, no metrics will be exported. ~~Optional[Path] \(option)~~ | | `--gold-preproc`, `-G` | Use gold preprocessing. ~~bool (flag)~~ | @@ -831,13 +836,12 @@ $ python -m spacy evaluate [model] [data_path] [--output] [--gold-preproc] [--gp ## package {#package tag="command"} -Generate an installable -[model Python package](/usage/training#models-generating) from an existing model -data directory. All data files are copied over. If the path to a -[`meta.json`](/api/data-formats#meta) is supplied, or a `meta.json` is found in -the input directory, this file is used. Otherwise, the data can be entered -directly from the command line. spaCy will then create a `.tar.gz` archive file -that you can distribute and install with `pip install`. +Generate an installable [Python package](/usage/training#models-generating) from +an existing pipeline data directory. All data files are copied over. If the path +to a [`meta.json`](/api/data-formats#meta) is supplied, or a `meta.json` is +found in the input directory, this file is used. Otherwise, the data can be +entered directly from the command line. spaCy will then create a `.tar.gz` +archive file that you can distribute and install with `pip install`. 
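Once installed, the package can be loaded like any other pipeline. A minimal sketch, assuming the hypothetical `en_pipeline` package from the example below has been installed:

```python
import spacy

# Load the pipeline package previously installed via `pip install`
nlp = spacy.load("en_pipeline")
doc = nlp("This is a sentence.")
print([token.text for token in doc])
```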
@@ -855,13 +859,13 @@ $ python -m spacy package [input_dir] [output_dir] [--meta-path] [--create-meta] > > ```cli > $ python -m spacy package /input /output -> $ cd /output/en_model-0.0.0 -> $ pip install dist/en_model-0.0.0.tar.gz +> $ cd /output/en_pipeline-0.0.0 +> $ pip install dist/en_pipeline-0.0.0.tar.gz > ``` | Name | Description | | ------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `input_dir` | Path to directory containing model data. ~~Path (positional)~~ | +| `input_dir` | Path to directory containing pipeline data. ~~Path (positional)~~ | | `output_dir` | Directory to create package folder in. ~~Path (positional)~~ | | `--meta-path`, `-m` 2 | Path to [`meta.json`](/api/data-formats#meta) file (optional). ~~Optional[Path] \(option)~~ | | `--create-meta`, `-C` 2 | Create a `meta.json` file on the command line, even if one already exists in the directory. If an existing file is found, its entries will be shown as the defaults in the command line prompt. ~~bool (flag)~~ | @@ -869,13 +873,13 @@ $ python -m spacy package [input_dir] [output_dir] [--meta-path] [--create-meta] | `--version`, `-v` 3 | Package version to override in meta. Useful when training new versions, as it doesn't require editing the meta template. ~~Optional[str] \(option)~~ | | `--force`, `-f` | Force overwriting of existing folder in output directory. ~~bool (flag)~~ | | `--help`, `-h` | Show help message and available arguments. ~~bool (flag)~~ | -| **CREATES** | A Python package containing the spaCy model. | +| **CREATES** | A Python package containing the spaCy pipeline. | ## project {#project new="3"} The `spacy project` CLI includes subcommands for working with [spaCy projects](/usage/projects), end-to-end workflows for building and -deploying custom spaCy models. +deploying custom spaCy pipelines. ### project clone {#project-clone tag="command"} @@ -1015,9 +1019,9 @@ Download all files or directories listed as `outputs` for commands, unless they are not already present locally. When searching for files in the remote, `pull` won't just look at the output path, but will also consider the **command string** and the **hashes of the dependencies**. For instance, let's say you've -previously pushed a model checkpoint to the remote, but now you've changed some +previously pushed a checkpoint to the remote, but now you've changed some hyper-parameters. Because you've changed the inputs to the command, if you run -`pull`, you won't retrieve the stale result. If you train your model and push +`pull`, you won't retrieve the stale result. If you train your pipeline and push the outputs to the remote, the outputs will be saved alongside the prior outputs, so if you change the config back, you'll be able to fetch back the result. diff --git a/website/docs/api/data-formats.md b/website/docs/api/data-formats.md index 8ef8041ee..3fd2818f4 100644 --- a/website/docs/api/data-formats.md +++ b/website/docs/api/data-formats.md @@ -6,18 +6,18 @@ menu: - ['Training Data', 'training'] - ['Pretraining Data', 'pretraining'] - ['Vocabulary', 'vocab-jsonl'] - - ['Model Meta', 'meta'] + - ['Pipeline Meta', 'meta'] --- This section documents input and output formats of data used by spaCy, including the [training config](/usage/training#config), training data and lexical vocabulary data. 
For an overview of label schemes used by the models, see the -[models directory](/models). Each model documents the label schemes used in its -components, depending on the data it was trained on. +[models directory](/models). Each trained pipeline documents the label schemes +used in its components, depending on the data it was trained on. ## Training config {#config new="3"} -Config files define the training process and model pipeline and can be passed to +Config files define the training process and pipeline and can be passed to [`spacy train`](/api/cli#train). They use [Thinc's configuration system](https://thinc.ai/docs/usage-config) under the hood. For details on how to use training configs, see the @@ -74,16 +74,16 @@ your config and check that it's valid, you can run the Defines the `nlp` object, its tokenizer and [processing pipeline](/usage/processing-pipelines) component names. -| Name | Description | -| ------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `lang` | Model language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Defaults to `null`. ~~str~~ | -| `pipeline` | Names of pipeline components in order. Should correspond to sections in the `[components]` block, e.g. `[components.ner]`. See docs on [defining components](/usage/training#config-components). Defaults to `[]`. ~~List[str]~~ | -| `disabled` | Names of pipeline components that are loaded but disabled by default and not run as part of the pipeline. Should correspond to components listed in `pipeline`. After a model is loaded, disabled components can be enabled using [`Language.enable_pipe`](/api/language#enable_pipe). ~~List[str]~~ | -| `load_vocab_data` | Whether to load additional lexeme and vocab data from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) if available. Defaults to `true`. ~~bool~~ | -| `before_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `Language` subclass before it's initialized. Defaults to `null`. ~~Optional[Callable[[Type[Language]], Type[Language]]]~~ | -| `after_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `nlp` object right after it's initialized. Defaults to `null`. ~~Optional[Callable[[Language], Language]]~~ | -| `after_pipeline_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `nlp` object after the pipeline components have been added. Defaults to `null`. ~~Optional[Callable[[Language], Language]]~~ | -| `tokenizer` | The tokenizer to use. Defaults to [`Tokenizer`](/api/tokenizer). ~~Callable[[str], Doc]~~ | +| Name | Description | +| ------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `lang` | Pipeline language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Defaults to `null`. ~~str~~ | +| `pipeline` | Names of pipeline components in order. Should correspond to sections in the `[components]` block, e.g. `[components.ner]`. 
See docs on [defining components](/usage/training#config-components). Defaults to `[]`. ~~List[str]~~ | +| `disabled` | Names of pipeline components that are loaded but disabled by default and not run as part of the pipeline. Should correspond to components listed in `pipeline`. After a pipeline is loaded, disabled components can be enabled using [`Language.enable_pipe`](/api/language#enable_pipe). ~~List[str]~~ | +| `load_vocab_data` | Whether to load additional lexeme and vocab data from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) if available. Defaults to `true`. ~~bool~~ | +| `before_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `Language` subclass before it's initialized. Defaults to `null`. ~~Optional[Callable[[Type[Language]], Type[Language]]]~~ | +| `after_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `nlp` object right after it's initialized. Defaults to `null`. ~~Optional[Callable[[Language], Language]]~~ | +| `after_pipeline_creation` | Optional [callback](/usage/training#custom-code-nlp-callbacks) to modify `nlp` object after the pipeline components have been added. Defaults to `null`. ~~Optional[Callable[[Language], Language]]~~ | +| `tokenizer` | The tokenizer to use. Defaults to [`Tokenizer`](/api/tokenizer). ~~Callable[[str], Doc]~~ | ### components {#config-components tag="section"} @@ -105,8 +105,8 @@ This section includes definitions of the [pipeline components](/usage/processing-pipelines) and their models, if available. Components in this section can be referenced in the `pipeline` of the `[nlp]` block. Component blocks need to specify either a `factory` (named -function to use to create component) or a `source` (name of path of pretrained -model to copy components from). See the docs on +function to use to create component) or a `source` (name of path of trained +pipeline to copy components from). See the docs on [defining pipeline components](/usage/training#config-components) for details. ### paths, system {#config-variables tag="variables"} @@ -145,7 +145,7 @@ process that are used when you run [`spacy train`](/api/cli#train). | `score_weights` | Score names shown in metrics mapped to their weight towards the final weighted score. See [here](/usage/training#metrics) for details. Defaults to `{}`. ~~Dict[str, float]~~ | | `seed` | The random seed. Defaults to variable `${system.seed}`. ~~int~~ | | `train_corpus` | Callable that takes the current `nlp` object and yields [`Example`](/api/example) objects. Defaults to [`Corpus`](/api/corpus). ~~Callable[[Language], Iterator[Example]]~~ | -| `vectors` | Model name or path to model containing pretrained word vectors to use, e.g. created with [`init model`](/api/cli#init-model). Defaults to `null`. ~~Optional[str]~~ | +| `vectors` | Name or path of pipeline containing pretrained word vectors to use, e.g. created with [`init vocab`](/api/cli#init-vocab). Defaults to `null`. ~~Optional[str]~~ | ### pretraining {#config-pretraining tag="section,optional"} @@ -184,7 +184,7 @@ run [`spacy pretrain`](/api/cli#pretrain). The main data format used in spaCy v3.0 is a **binary format** created by serializing a [`DocBin`](/api/docbin), which represents a collection of `Doc` -objects. This means that you can train spaCy models using the same format it +objects. This means that you can train spaCy pipelines using the same format it outputs: annotated `Doc` objects. 
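As a rough sketch (the example texts and output path here are only placeholders), a `.spacy` file can be produced by packing `Doc` objects into a [`DocBin`](/api/docbin) and saving it to disk:

```python
import spacy
from spacy.tokens import DocBin

nlp = spacy.blank("en")
doc_bin = DocBin()
for text in ["I like cats.", "Berlin is a city."]:
    # Real training data would carry gold annotations on each Doc
    doc_bin.add(nlp(text))
doc_bin.to_disk("./train.spacy")
```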
The binary format is extremely **efficient in storage**, especially when packing multiple documents together. @@ -286,8 +286,8 @@ a dictionary of gold-standard annotations. [internal training API](/usage/training#api) and they're expected when you call [`nlp.update`](/api/language#update). However, for most use cases, you **shouldn't** have to write your own training scripts. It's recommended to train -your models via the [`spacy train`](/api/cli#train) command with a config file -to keep track of your settings and hyperparameters and your own +your pipelines via the [`spacy train`](/api/cli#train) command with a config +file to keep track of your settings and hyperparameters and your own [registered functions](/usage/training/#custom-code) to customize the setup. @@ -406,15 +406,15 @@ in line-by-line, while still making it easy to represent newlines in the data. ## Lexical data for vocabulary {#vocab-jsonl new="2"} -To populate a model's vocabulary, you can use the -[`spacy init model`](/api/cli#init-model) command and load in a +To populate a pipeline's vocabulary, you can use the +[`spacy init vocab`](/api/cli#init-vocab) command and load in a [newline-delimited JSON](http://jsonlines.org/) (JSONL) file containing one lexical entry per line via the `--jsonl-loc` option. The first line defines the language and vocabulary settings. All other lines are expected to be JSON objects describing an individual lexeme. The lexical attributes will be then set as attributes on spaCy's [`Lexeme`](/api/lexeme#attributes) object. The `vocab` -command outputs a ready-to-use spaCy model with a `Vocab` containing the lexical -data. +command outputs a ready-to-use spaCy pipeline with a `Vocab` containing the +lexical data. ```python ### First line @@ -459,11 +459,11 @@ Here's an example of the 20 most frequent lexemes in the English training data: https://github.com/explosion/spaCy/tree/master/examples/training/vocab-data.jsonl ``` -## Model meta {#meta} +## Pipeline meta {#meta} -The model meta is available as the file `meta.json` and exported automatically -when you save an `nlp` object to disk. Its contents are available as -[`nlp.meta`](/api/language#meta). +The pipeline meta is available as the file `meta.json` and exported +automatically when you save an `nlp` object to disk. Its contents are available +as [`nlp.meta`](/api/language#meta). @@ -473,8 +473,8 @@ creating a Python package with [`spacy package`](/api/cli#package). How to set up the `nlp` object is now defined in the [`config.cfg`](/api/data-formats#config), which includes detailed information about the pipeline components and their model architectures, and all other -settings and hyperparameters used to train the model. It's the **single source -of truth** used for loading a model. +settings and hyperparameters used to train the pipeline. It's the **single +source of truth** used for loading a pipeline. @@ -482,12 +482,12 @@ of truth** used for loading a model. > > ```json > { -> "name": "example_model", +> "name": "example_pipeline", > "lang": "en", > "version": "1.0.0", > "spacy_version": ">=3.0.0,<3.1.0", > "parent_package": "spacy", -> "description": "Example model for spaCy", +> "description": "Example pipeline for spaCy", > "author": "You", > "email": "you@example.com", > "url": "https://example.com", @@ -510,23 +510,23 @@ of truth** used for loading a model. 
> } > ``` -| Name | Description | -| ---------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `lang` | Model language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Defaults to `"en"`. ~~str~~ | -| `name` | Model name, e.g. `"core_web_sm"`. The final model package name will be `{lang}_{name}`. Defaults to `"model"`. ~~str~~ | -| `version` | Model version. Will be used to version a Python package created with [`spacy package`](/api/cli#package). Defaults to `"0.0.0"`. ~~str~~ | -| `spacy_version` | spaCy version range the model is compatible with. Defaults to the spaCy version used to create the model, up to next minor version, which is the default compatibility for the available [pretrained models](/models). For instance, a model trained with v3.0.0 will have the version range `">=3.0.0,<3.1.0"`. ~~str~~ | -| `parent_package` | Name of the spaCy package. Typically `"spacy"` or `"spacy_nightly"`. Defaults to `"spacy"`. ~~str~~ | -| `description` | Model description. Also used for Python package. Defaults to `""`. ~~str~~ | -| `author` | Model author name. Also used for Python package. Defaults to `""`. ~~str~~ | -| `email` | Model author email. Also used for Python package. Defaults to `""`. ~~str~~ | -| `url` | Model author URL. Also used for Python package. Defaults to `""`. ~~str~~ | -| `license` | Model license. Also used for Python package. Defaults to `""`. ~~str~~ | -| `sources` | Data sources used to train the model. Typically a list of dicts with the keys `"name"`, `"url"`, `"author"` and `"license"`. [See here](https://github.com/explosion/spacy-models/tree/master/meta) for examples. Defaults to `None`. ~~Optional[List[Dict[str, str]]]~~ | -| `vectors` | Information about the word vectors included with the model. Typically a dict with the keys `"width"`, `"vectors"` (number of vectors), `"keys"` and `"name"`. ~~Dict[str, Any]~~ | -| `pipeline` | Names of pipeline component names in the model, in order. Corresponds to [`nlp.pipe_names`](/api/language#pipe_names). Only exists for reference and is not used to create the components. This information is defined in the [`config.cfg`](/api/data-formats#config). Defaults to `[]`. ~~List[str]~~ | -| `labels` | Label schemes of the trained pipeline components, keyed by component name. Corresponds to [`nlp.pipe_labels`](/api/language#pipe_labels). [See here](https://github.com/explosion/spacy-models/tree/master/meta) for examples. Defaults to `{}`. ~~Dict[str, Dict[str, List[str]]]~~ | -| `accuracy` | Training accuracy, added automatically by [`spacy train`](/api/cli#train). Dictionary of [score names](/usage/training#metrics) mapped to scores. Defaults to `{}`. ~~Dict[str, Union[float, Dict[str, float]]]~~ | -| `speed` | Model speed, added automatically by [`spacy train`](/api/cli#train). Typically a dictionary with the keys `"cpu"`, `"gpu"` and `"nwords"` (words per second). Defaults to `{}`. ~~Dict[str, Optional[Union[float, str]]]~~ | -| `spacy_git_version` 3 | Git commit of [`spacy`](https://github.com/explosion/spaCy) used to create model. ~~str~~ | -| other | Any other custom meta information you want to add. The data is preserved in [`nlp.meta`](/api/language#meta). 
~~Any~~ | +| Name | Description | +| ---------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `lang` | Pipeline language [ISO code](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes). Defaults to `"en"`. ~~str~~ | +| `name` | Pipeline name, e.g. `"core_web_sm"`. The final package name will be `{lang}_{name}`. Defaults to `"pipeline"`. ~~str~~ | +| `version` | Pipeline version. Will be used to version a Python package created with [`spacy package`](/api/cli#package). Defaults to `"0.0.0"`. ~~str~~ | +| `spacy_version` | spaCy version range the package is compatible with. Defaults to the spaCy version used to create the pipeline, up to next minor version, which is the default compatibility for the available [trained pipelines](/models). For instance, a pipeline trained with v3.0.0 will have the version range `">=3.0.0,<3.1.0"`. ~~str~~ | +| `parent_package` | Name of the spaCy package. Typically `"spacy"` or `"spacy_nightly"`. Defaults to `"spacy"`. ~~str~~ | +| `description` | Pipeline description. Also used for Python package. Defaults to `""`. ~~str~~ | +| `author` | Pipeline author name. Also used for Python package. Defaults to `""`. ~~str~~ | +| `email` | Pipeline author email. Also used for Python package. Defaults to `""`. ~~str~~ | +| `url` | Pipeline author URL. Also used for Python package. Defaults to `""`. ~~str~~ | +| `license` | Pipeline license. Also used for Python package. Defaults to `""`. ~~str~~ | +| `sources` | Data sources used to train the pipeline. Typically a list of dicts with the keys `"name"`, `"url"`, `"author"` and `"license"`. [See here](https://github.com/explosion/spacy-models/tree/master/meta) for examples. Defaults to `None`. ~~Optional[List[Dict[str, str]]]~~ | +| `vectors` | Information about the word vectors included with the pipeline. Typically a dict with the keys `"width"`, `"vectors"` (number of vectors), `"keys"` and `"name"`. ~~Dict[str, Any]~~ | +| `pipeline` | Names of pipeline component names, in order. Corresponds to [`nlp.pipe_names`](/api/language#pipe_names). Only exists for reference and is not used to create the components. This information is defined in the [`config.cfg`](/api/data-formats#config). Defaults to `[]`. ~~List[str]~~ | +| `labels` | Label schemes of the trained pipeline components, keyed by component name. Corresponds to [`nlp.pipe_labels`](/api/language#pipe_labels). [See here](https://github.com/explosion/spacy-models/tree/master/meta) for examples. Defaults to `{}`. ~~Dict[str, Dict[str, List[str]]]~~ | +| `accuracy` | Training accuracy, added automatically by [`spacy train`](/api/cli#train). Dictionary of [score names](/usage/training#metrics) mapped to scores. Defaults to `{}`. ~~Dict[str, Union[float, Dict[str, float]]]~~ | +| `speed` | Inference speed, added automatically by [`spacy train`](/api/cli#train). Typically a dictionary with the keys `"cpu"`, `"gpu"` and `"nwords"` (words per second). Defaults to `{}`. ~~Dict[str, Optional[Union[float, str]]]~~ | +| `spacy_git_version` 3 | Git commit of [`spacy`](https://github.com/explosion/spaCy) used to create pipeline. ~~str~~ | +| other | Any other custom meta information you want to add. The data is preserved in [`nlp.meta`](/api/language#meta). 
~~Any~~ | diff --git a/website/docs/api/dependencymatcher.md b/website/docs/api/dependencymatcher.md index 2fb903100..c90a715d9 100644 --- a/website/docs/api/dependencymatcher.md +++ b/website/docs/api/dependencymatcher.md @@ -1,65 +1,91 @@ --- title: DependencyMatcher -teaser: Match sequences of tokens, based on the dependency parse +teaser: Match subtrees within a dependency parse tag: class +new: 3 source: spacy/matcher/dependencymatcher.pyx --- The `DependencyMatcher` follows the same API as the [`Matcher`](/api/matcher) and [`PhraseMatcher`](/api/phrasematcher) and lets you match on dependency trees -using the -[Semgrex syntax](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html). +using +[Semgrex operators](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html). It requires a pretrained [`DependencyParser`](/api/parser) or other component -that sets the `Token.dep` attribute. +that sets the `Token.dep` and `Token.head` attributes. See the +[usage guide](/usage/rule-based-matching#dependencymatcher) for examples. ## Pattern format {#patterns} -> ```json +> ```python > ### Example +> # pattern: "[subject] ... initially founded" > [ +> # anchor token: founded > { -> "SPEC": {"NODE_NAME": "founded"}, -> "PATTERN": {"ORTH": "founded"} +> "RIGHT_ID": "founded", +> "RIGHT_ATTRS": {"ORTH": "founded"} > }, +> # founded -> subject > { -> "SPEC": { -> "NODE_NAME": "founder", -> "NBOR_RELOP": ">", -> "NBOR_NAME": "founded" -> }, -> "PATTERN": {"DEP": "nsubj"} +> "LEFT_ID": "founded", +> "REL_OP": ">", +> "RIGHT_ID": "subject", +> "RIGHT_ATTRS": {"DEP": "nsubj"} > }, +> # "founded" follows "initially" > { -> "SPEC": { -> "NODE_NAME": "object", -> "NBOR_RELOP": ">", -> "NBOR_NAME": "founded" -> }, -> "PATTERN": {"DEP": "dobj"} +> "LEFT_ID": "founded", +> "REL_OP": ";", +> "RIGHT_ID": "initially", +> "RIGHT_ATTRS": {"ORTH": "initially"} > } > ] > ``` A pattern added to the `DependencyMatcher` consists of a list of dictionaries, -with each dictionary describing a node to match. Each pattern should have the -following top-level keys: +with each dictionary describing a token to match. Except for the first +dictionary, which defines an anchor token using only `RIGHT_ID` and +`RIGHT_ATTRS`, each pattern should have the following keys: -| Name | Description | -| --------- | ---------------------------------------------------------------------------------------------------------------------------------------------- | -| `PATTERN` | The token attributes to match in the same format as patterns provided to the regular token-based [`Matcher`](/api/matcher). ~~Dict[str, Any]~~ | -| `SPEC` | The relationships of the nodes in the subtree that should be matched. ~~Dict[str, str]~~ | +| Name | Description | +| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `LEFT_ID` | The name of the left-hand node in the relation, which has been defined in an earlier node. ~~str~~ | +| `REL_OP` | An operator that describes how the two nodes are related. ~~str~~ | +| `RIGHT_ID` | A unique name for the right-hand node in the relation. ~~str~~ | +| `RIGHT_ATTRS` | The token attributes to match for the right-hand node in the same format as patterns provided to the regular token-based [`Matcher`](/api/matcher). 
~~Dict[str, Any]~~ | -The `SPEC` includes the following fields: + -| Name | Description | -| ------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | -| `NODE_NAME` | A unique name for this node to refer to it in other specs. ~~str~~ | -| `NBOR_RELOP` | A [Semgrex](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html) operator that describes how the two nodes are related. ~~str~~ | -| `NBOR_NAME` | The unique name of the node that this node is connected to. ~~str~~ | +For examples of how to construct dependency matcher patterns for different types +of relations, see the usage guide on +[dependency matching](/usage/rule-based-matching#dependencymatcher). + + + +### Operators + +The following operators are supported by the `DependencyMatcher`, most of which +come directly from +[Semgrex](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html): + +| Symbol | Description | +| --------- | -------------------------------------------------------------------------------------------------------------------- | +| `A < B` | `A` is the immediate dependent of `B`. | +| `A > B` | `A` is the immediate head of `B`. | +| `A << B` | `A` is the dependent in a chain to `B` following dep → head paths. | +| `A >> B` | `A` is the head in a chain to `B` following head → dep paths. | +| `A . B` | `A` immediately precedes `B`, i.e. `A.i == B.i - 1`, and both are within the same dependency tree. | +| `A .* B` | `A` precedes `B`, i.e. `A.i < B.i`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A ; B` | `A` immediately follows `B`, i.e. `A.i == B.i + 1`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A ;* B` | `A` follows `B`, i.e. `A.i > B.i`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A $+ B` | `B` is a right immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i - 1`. | +| `A $- B` | `B` is a left immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i + 1`. | +| `A $++ B` | `B` is a right sibling of `A`, i.e. `A` and `B` have the same parent and `A.i < B.i`. | +| `A $-- B` | `B` is a left sibling of `A`, i.e. `A` and `B` have the same parent and `A.i > B.i`. | ## DependencyMatcher.\_\_init\_\_ {#init tag="method"} -Create a rule-based `DependencyMatcher`. +Create a `DependencyMatcher`. > #### Example > @@ -68,13 +94,15 @@ Create a rule-based `DependencyMatcher`. > matcher = DependencyMatcher(nlp.vocab) > ``` -| Name | Description | -| ------- | ----------------------------------------------------------------------------------------------------- | -| `vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. ~~Vocab~~ | +| Name | Description | +| -------------- | ----------------------------------------------------------------------------------------------------- | +| `vocab` | The vocabulary object, which must be shared with the documents the matcher will operate on. ~~Vocab~~ | +| _keyword-only_ | | +| `validate` | Validate all patterns added to this matcher. ~~bool~~ | ## DependencyMatcher.\_\call\_\_ {#call tag="method"} -Find all token sequences matching the supplied patterns on the `Doc` or `Span`. +Find all tokens matching the supplied patterns on the `Doc` or `Span`. 
> #### Example > @@ -82,36 +110,32 @@ Find all token sequences matching the supplied patterns on the `Doc` or `Span`. > from spacy.matcher import DependencyMatcher > > matcher = DependencyMatcher(nlp.vocab) -> pattern = [ -> {"SPEC": {"NODE_NAME": "founded"}, "PATTERN": {"ORTH": "founded"}}, -> {"SPEC": {"NODE_NAME": "founder", "NBOR_RELOP": ">", "NBOR_NAME": "founded"}, "PATTERN": {"DEP": "nsubj"}}, -> ] -> matcher.add("Founder", [pattern]) +> pattern = [{"RIGHT_ID": "founded_id", +> "RIGHT_ATTRS": {"ORTH": "founded"}}] +> matcher.add("FOUNDED", [pattern]) > doc = nlp("Bill Gates founded Microsoft.") > matches = matcher(doc) > ``` -| Name | Description | -| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `doclike` | The `Doc` or `Span` to match over. ~~Union[Doc, Span]~~ | -| **RETURNS** | A list of `(match_id, start, end)` tuples, describing the matches. A match tuple describes a span `doc[start:end`]. The `match_id` is the ID of the added match pattern. ~~List[Tuple[int, int, int]]~~ | +| Name | Description | +| ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `doclike` | The `Doc` or `Span` to match over. ~~Union[Doc, Span]~~ | +| **RETURNS** | A list of `(match_id, token_ids)` tuples, describing the matches. The `match_id` is the ID of the match pattern and `token_ids` is a list of token indices matched by the pattern, where the position of each token in the list corresponds to the position of the node specification in the pattern. ~~List[Tuple[int, List[int]]]~~ | ## DependencyMatcher.\_\_len\_\_ {#len tag="method"} -Get the number of rules (edges) added to the dependency matcher. Note that this -only returns the number of rules (identical with the number of IDs), not the -number of individual patterns. +Get the number of rules added to the dependency matcher. Note that this only +returns the number of rules (identical with the number of IDs), not the number +of individual patterns. > #### Example > > ```python > matcher = DependencyMatcher(nlp.vocab) > assert len(matcher) == 0 -> pattern = [ -> {"SPEC": {"NODE_NAME": "founded"}, "PATTERN": {"ORTH": "founded"}}, -> {"SPEC": {"NODE_NAME": "START_ENTITY", "NBOR_RELOP": ">", "NBOR_NAME": "founded"}, "PATTERN": {"DEP": "nsubj"}}, -> ] -> matcher.add("Rule", [pattern]) +> pattern = [{"RIGHT_ID": "founded_id", +> "RIGHT_ATTRS": {"ORTH": "founded"}}] +> matcher.add("FOUNDED", [pattern]) > assert len(matcher) == 1 > ``` @@ -126,10 +150,10 @@ Check whether the matcher contains rules for a match ID. > #### Example > > ```python -> matcher = Matcher(nlp.vocab) -> assert "Rule" not in matcher -> matcher.add("Rule", [pattern]) -> assert "Rule" in matcher +> matcher = DependencyMatcher(nlp.vocab) +> assert "FOUNDED" not in matcher +> matcher.add("FOUNDED", [pattern]) +> assert "FOUNDED" in matcher > ``` | Name | Description | @@ -152,33 +176,15 @@ will be overwritten. 
> print('Matched!', matches) > > matcher = DependencyMatcher(nlp.vocab) -> matcher.add("TEST_PATTERNS", patterns) +> matcher.add("FOUNDED", patterns, on_match=on_match) > ``` -| Name | Description | -| -------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `match_id` | An ID for the thing you're matching. ~~str~~ | -| `patterns` | list | Match pattern. A pattern consists of a list of dicts, where each dict describes a `"PATTERN"` and `"SPEC"`. ~~List[List[Dict[str, dict]]]~~ | -| _keyword-only_ | | | -| `on_match` | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. ~~Optional[Callable[[Matcher, Doc, int, List[tuple], Any]]~~ | - -## DependencyMatcher.remove {#remove tag="method"} - -Remove a rule from the matcher. A `KeyError` is raised if the match ID does not -exist. - -> #### Example -> -> ```python -> matcher.add("Rule", [pattern]]) -> assert "Rule" in matcher -> matcher.remove("Rule") -> assert "Rule" not in matcher -> ``` - -| Name | Description | -| ----- | --------------------------------- | -| `key` | The ID of the match rule. ~~str~~ | +| Name | Description | +| -------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `match_id` | An ID for the patterns. ~~str~~ | +| `patterns` | A list of match patterns. A pattern consists of a list of dicts, where each dict describes a token in the tree. ~~List[List[Dict[str, Union[str, Dict]]]]~~ | +| _keyword-only_ | | | +| `on_match` | Callback function to act on matches. Takes the arguments `matcher`, `doc`, `i` and `matches`. ~~Optional[Callable[[DependencyMatcher, Doc, int, List[Tuple], Any]]~~ | ## DependencyMatcher.get {#get tag="method"} @@ -188,11 +194,29 @@ Retrieve the pattern stored for a key. Returns the rule as an > #### Example > > ```python -> matcher.add("Rule", [pattern], on_match=on_match) -> on_match, patterns = matcher.get("Rule") +> matcher.add("FOUNDED", patterns, on_match=on_match) +> on_match, patterns = matcher.get("FOUNDED") > ``` -| Name | Description | -| ----------- | --------------------------------------------------------------------------------------------- | -| `key` | The ID of the match rule. ~~str~~ | -| **RETURNS** | The rule, as an `(on_match, patterns)` tuple. ~~Tuple[Optional[Callable], List[List[dict]]]~~ | +| Name | Description | +| ----------- | ----------------------------------------------------------------------------------------------------------- | +| `key` | The ID of the match rule. ~~str~~ | +| **RETURNS** | The rule, as an `(on_match, patterns)` tuple. ~~Tuple[Optional[Callable], List[List[Union[Dict, Tuple]]]]~~ | + +## DependencyMatcher.remove {#remove tag="method"} + +Remove a rule from the dependency matcher. A `KeyError` is raised if the match +ID does not exist. + +> #### Example +> +> ```python +> matcher.add("FOUNDED", patterns) +> assert "FOUNDED" in matcher +> matcher.remove("FOUNDED") +> assert "FOUNDED" not in matcher +> ``` + +| Name | Description | +| ----- | --------------------------------- | +| `key` | The ID of the match rule. ~~str~~ | diff --git a/website/docs/api/doc.md b/website/docs/api/doc.md index 3c4825f0d..88dc62c2a 100644 --- a/website/docs/api/doc.md +++ b/website/docs/api/doc.md @@ -186,8 +186,9 @@ Remove a previously registered extension. 
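The updated `Doc.char_span` entry below adds a `mode` argument that controls how character offsets snap to token boundaries. As a hedged illustration of the difference between the default `"strict"` and `"outside"` modes described there, using only a blank English tokenizer and no trained pipeline:

```python
import spacy

nlp = spacy.blank("en")
doc = nlp("I like New York")

# doc.text[8:14] == "w Yor" does not line up with token boundaries
assert doc.char_span(8, 14) is None          # default mode="strict": no snapping
span = doc.char_span(8, 14, mode="outside")  # include partially covered tokens
print(span.text)                             # "New York"
```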
## Doc.char_span {#char_span tag="method" new="2"} -Create a `Span` object from the slice `doc.text[start:end]`. Returns `None` if -the character indices don't map to a valid span. +Create a `Span` object from the slice `doc.text[start_idx:end_idx]`. Returns +`None` if the character indices don't map to a valid span using the default mode +`"strict". > #### Example > @@ -197,14 +198,15 @@ the character indices don't map to a valid span. > assert span.text == "New York" > ``` -| Name | Description | -| ------------------------------------ | ----------------------------------------------------------------------------------------- | -| `start` | The index of the first character of the span. ~~int~~ | -| `end` | The index of the last character after the span. ~int~~ | -| `label` | A label to attach to the span, e.g. for named entities. ~~Union[int, str]~~ | -| `kb_id` 2.2 | An ID from a knowledge base to capture the meaning of a named entity. ~~Union[int, str]~~ | -| `vector` | A meaning representation of the span. ~~numpy.ndarray[ndim=1, dtype=float32]~~ | -| **RETURNS** | The newly constructed object or `None`. ~~Optional[Span]~~ | +| Name | Description | +| ------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `start` | The index of the first character of the span. ~~int~~ | +| `end` | The index of the last character after the span. ~int~~ | +| `label` | A label to attach to the span, e.g. for named entities. ~~Union[int, str]~~ | +| `kb_id` 2.2 | An ID from a knowledge base to capture the meaning of a named entity. ~~Union[int, str]~~ | +| `vector` | A meaning representation of the span. ~~numpy.ndarray[ndim=1, dtype=float32]~~ | +| `mode` | How character indices snap to token boundaries. Options: `"strict"` (no snapping), `"inside"` (span of all tokens completely within the character span), `"outside"` (span of all tokens at least partially covered by the character span). Defaults to `"strict"`. ~~str~~ | +| **RETURNS** | The newly constructed object or `None`. ~~Optional[Span]~~ | ## Doc.similarity {#similarity tag="method" model="vectors"} diff --git a/website/docs/api/entitylinker.md b/website/docs/api/entitylinker.md index 679c3c0c2..8cde6c490 100644 --- a/website/docs/api/entitylinker.md +++ b/website/docs/api/entitylinker.md @@ -13,8 +13,8 @@ An `EntityLinker` component disambiguates textual mentions (tagged as named entities) to unique identifiers, grounding the named entities into the "real world". It requires a `KnowledgeBase`, as well as a function to generate plausible candidates from that `KnowledgeBase` given a certain textual mention, -and a ML model to pick the right candidate, given the local context of the -mention. +and a machine learning model to pick the right candidate, given the local +context of the mention. ## Config and implementation {#config} @@ -34,8 +34,8 @@ architectures and their arguments and hyperparameters. 
> "incl_prior": True, > "incl_context": True, > "model": DEFAULT_NEL_MODEL, -> "kb_loader": {'@assets': 'spacy.EmptyKB.v1', 'entity_vector_length': 64}, -> "get_candidates": {'@assets': 'spacy.CandidateGenerator.v1'}, +> "kb_loader": {'@misc': 'spacy.EmptyKB.v1', 'entity_vector_length': 64}, +> "get_candidates": {'@misc': 'spacy.CandidateGenerator.v1'}, > } > nlp.add_pipe("entity_linker", config=config) > ``` @@ -66,7 +66,7 @@ https://github.com/explosion/spaCy/blob/develop/spacy/pipeline/entity_linker.py > entity_linker = nlp.add_pipe("entity_linker", config=config) > > # Construction via add_pipe with custom KB and candidate generation -> config = {"kb": {"@assets": "my_kb.v1"}} +> config = {"kb": {"@misc": "my_kb.v1"}} > entity_linker = nlp.add_pipe("entity_linker", config=config) > > # Construction from class diff --git a/website/docs/api/language.md b/website/docs/api/language.md index e2668c522..7799f103b 100644 --- a/website/docs/api/language.md +++ b/website/docs/api/language.md @@ -7,9 +7,9 @@ source: spacy/language.py Usually you'll load this once per process as `nlp` and pass the instance around your application. The `Language` class is created when you call -[`spacy.load()`](/api/top-level#spacy.load) and contains the shared vocabulary -and [language data](/usage/adding-languages), optional model data loaded from a -[model package](/models) or a path, and a +[`spacy.load`](/api/top-level#spacy.load) and contains the shared vocabulary and +[language data](/usage/adding-languages), optional binary weights, e.g. provided +by a [trained pipeline](/models), and the [processing pipeline](/usage/processing-pipelines) containing components like the tagger or parser that are called on a document in order. You can also add your own processing pipeline components that take a `Doc` object, modify it and @@ -37,7 +37,7 @@ Initialize a `Language` object. | `vocab` | A `Vocab` object. If `True`, a vocab is created using the default language data settings. ~~Vocab~~ | | _keyword-only_ | | | `max_length` | Maximum number of characters allowed in a single text. Defaults to `10 ** 6`. ~~int~~ | -| `meta` | Custom meta data for the `Language` class. Is written to by models to add model meta data. ~~dict~~ | +| `meta` | Custom meta data for the `Language` class. Is written to by pipelines to add meta data. ~~dict~~ | | `create_tokenizer` | Optional function that receives the `nlp` object and returns a tokenizer. ~~Callable[[Language], Callable[[str], Doc]]~~ | ## Language.from_config {#from_config tag="classmethod" new="3"} @@ -232,7 +232,7 @@ tuples of `Doc` and `GoldParse` objects. ## Language.resume_training {#resume_training tag="method,experimental" new="3"} -Continue training a pretrained model. Create and return an optimizer, and +Continue training a trained pipeline. Create and return an optimizer, and initialize "rehearsal" for any pipeline component that has a `rehearse` method. Rehearsal is used to prevent models from "forgetting" their initialized "knowledge". To perform rehearsal, collect samples of text you want the models @@ -314,7 +314,7 @@ the "catastrophic forgetting" problem. This feature is experimental. ## Language.evaluate {#evaluate tag="method"} -Evaluate a model's pipeline components. +Evaluate a pipeline's components. @@ -386,24 +386,24 @@ component, adds it to the pipeline and returns it. 
> nlp.add_pipe("component", before="ner") > component = nlp.add_pipe("component", name="custom_name", last=True) > -> # Add component from source model +> # Add component from source pipeline > source_nlp = spacy.load("en_core_web_sm") > nlp.add_pipe("ner", source=source_nlp) > ``` -| Name | Description | -| ------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `factory_name` | Name of the registered component factory. ~~str~~ | -| `name` | Optional unique name of pipeline component instance. If not set, the factory name is used. An error is raised if the name already exists in the pipeline. ~~Optional[str]~~ | -| _keyword-only_ | | -| `before` | Component name or index to insert component directly before. ~~Optional[Union[str, int]]~~ | -| `after` | Component name or index to insert component directly after. ~~Optional[Union[str, int]]~~ | -| `first` | Insert component first / not first in the pipeline. ~~Optional[bool]~~ | -| `last` | Insert component last / not last in the pipeline. ~~Optional[bool]~~ | -| `config` 3 | Optional config parameters to use for this component. Will be merged with the `default_config` specified by the component factory. ~~Optional[Dict[str, Any]]~~ | -| `source` 3 | Optional source model to copy component from. If a source is provided, the `factory_name` is interpreted as the name of the component in the source pipeline. Make sure that the vocab, vectors and settings of the source model match the target model. ~~Optional[Language]~~ | -| `validate` 3 | Whether to validate the component config and arguments against the types expected by the factory. Defaults to `True`. ~~bool~~ | -| **RETURNS** | The pipeline component. ~~Callable[[Doc], Doc]~~ | +| Name | Description | +| ------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `factory_name` | Name of the registered component factory. ~~str~~ | +| `name` | Optional unique name of pipeline component instance. If not set, the factory name is used. An error is raised if the name already exists in the pipeline. ~~Optional[str]~~ | +| _keyword-only_ | | +| `before` | Component name or index to insert component directly before. ~~Optional[Union[str, int]]~~ | +| `after` | Component name or index to insert component directly after. ~~Optional[Union[str, int]]~~ | +| `first` | Insert component first / not first in the pipeline. ~~Optional[bool]~~ | +| `last` | Insert component last / not last in the pipeline. ~~Optional[bool]~~ | +| `config` 3 | Optional config parameters to use for this component. Will be merged with the `default_config` specified by the component factory. ~~Optional[Dict[str, Any]]~~ | +| `source` 3 | Optional source pipeline to copy component from. If a source is provided, the `factory_name` is interpreted as the name of the component in the source pipeline. Make sure that the vocab, vectors and settings of the source pipeline match the target pipeline. 
~~Optional[Language]~~ | +| `validate` 3 | Whether to validate the component config and arguments against the types expected by the factory. Defaults to `True`. ~~bool~~ | +| **RETURNS** | The pipeline component. ~~Callable[[Doc], Doc]~~ | ## Language.create_pipe {#create_pipe tag="method" new="2"} @@ -790,9 +790,10 @@ token.ent_iob, token.ent_type ## Language.meta {#meta tag="property"} -Custom meta data for the Language class. If a model is loaded, contains meta -data of the model. The `Language.meta` is also what's serialized as the -[`meta.json`](/api/data-formats#meta) when you save an `nlp` object to disk. +Custom meta data for the Language class. If a trained pipeline is loaded, this +contains meta data of the pipeline. The `Language.meta` is also what's +serialized as the [`meta.json`](/api/data-formats#meta) when you save an `nlp` +object to disk. > #### Example > @@ -827,13 +828,15 @@ subclass of the built-in `dict`. It supports the additional methods `to_disk` ## Language.to_disk {#to_disk tag="method" new="2"} -Save the current state to a directory. If a model is loaded, this will **include -the model**. +Save the current state to a directory. Under the hood, this method delegates to +the `to_disk` methods of the individual pipeline components, if available. This +means that if a trained pipeline is loaded, all components and their weights +will be saved to disk. > #### Example > > ```python -> nlp.to_disk("/path/to/models") +> nlp.to_disk("/path/to/pipeline") > ``` | Name | Description | @@ -844,22 +847,28 @@ the model**. ## Language.from_disk {#from_disk tag="method" new="2"} -Loads state from a directory. Modifies the object in place and returns it. If -the saved `Language` object contains a model, the model will be loaded. Note -that this method is commonly used via the subclasses like `English` or `German` -to make language-specific functionality like the -[lexical attribute getters](/usage/adding-languages#lex-attrs) available to the -loaded object. +Loads state from a directory, including all data that was saved with the +`Language` object. Modifies the object in place and returns it. + + + +Keep in mind that this method **only loads serialized state** and doesn't set up +the `nlp` object. This means that it requires the correct language class to be +initialized and all pipeline components to be added to the pipeline. If you want +to load a serialized pipeline from a directory, you should use +[`spacy.load`](/api/top-level#spacy.load), which will set everything up for you. + + > #### Example > > ```python > from spacy.language import Language -> nlp = Language().from_disk("/path/to/model") +> nlp = Language().from_disk("/path/to/pipeline") > -> # using language-specific subclass +> # Using language-specific subclass > from spacy.lang.en import English -> nlp = English().from_disk("/path/to/en_model") +> nlp = English().from_disk("/path/to/pipeline") > ``` | Name | Description | @@ -924,7 +933,7 @@ available to the loaded object. | `components` 3 | List of all available `(name, component)` tuples, including components that are currently disabled. ~~List[Tuple[str, Callable[[Doc], Doc]]]~~ | | `component_names` 3 | List of all available component names, including components that are currently disabled. ~~List[str]~~ | | `disabled` 3 | Names of components that are currently disabled and don't run as part of the pipeline. ~~List[str]~~ | -| `path` 2 | Path to the model data directory, if a model is loaded. Otherwise `None`. 
~~Optional[Path]~~ | +| `path` 2 | Path to the pipeline data directory, if a pipeline is loaded from a path or package. Otherwise `None`. ~~Optional[Path]~~ | ## Class attributes {#class-attributes} @@ -1004,7 +1013,7 @@ serialization by passing in the string names via the `exclude` argument. > > ```python > data = nlp.to_bytes(exclude=["tokenizer", "vocab"]) -> nlp.from_disk("./model-data", exclude=["ner"]) +> nlp.from_disk("/pipeline", exclude=["ner"]) > ``` | Name | Description | diff --git a/website/docs/api/pipe.md b/website/docs/api/pipe.md index 9c3a4104e..57b2af44d 100644 --- a/website/docs/api/pipe.md +++ b/website/docs/api/pipe.md @@ -286,7 +286,7 @@ context, the original parameters are restored. ## Pipe.add_label {#add_label tag="method"} -Add a new label to the pipe. It's possible to extend pretrained models with new +Add a new label to the pipe. It's possible to extend trained models with new labels, but care should be taken to avoid the "catastrophic forgetting" problem. > #### Example diff --git a/website/docs/api/top-level.md b/website/docs/api/top-level.md index d437ecc07..7f2eb2e66 100644 --- a/website/docs/api/top-level.md +++ b/website/docs/api/top-level.md @@ -12,14 +12,14 @@ menu: ## spaCy {#spacy hidden="true"} -### spacy.load {#spacy.load tag="function" model="any"} +### spacy.load {#spacy.load tag="function"} -Load a model using the name of an installed -[model package](/usage/training#models-generating), a string path or a -`Path`-like object. spaCy will try resolving the load argument in this order. If -a model is loaded from a model name, spaCy will assume it's a Python package and -import it and call the model's own `load()` method. If a model is loaded from a -path, spaCy will assume it's a data directory, load its +Load a pipeline using the name of an installed +[package](/usage/saving-loading#models), a string path or a `Path`-like object. +spaCy will try resolving the load argument in this order. If a pipeline is +loaded from a string name, spaCy will assume it's a Python package and import it +and call the package's own `load()` method. If a pipeline is loaded from a path, +spaCy will assume it's a data directory, load its [`config.cfg`](/api/data-formats#config) and use the language and pipeline information to construct the `Language` class. The data will be loaded in via [`Language.from_disk`](/api/language#from_disk). @@ -36,38 +36,38 @@ specified separately using the new `exclude` keyword argument. > > ```python > nlp = spacy.load("en_core_web_sm") # package -> nlp = spacy.load("/path/to/en") # string path -> nlp = spacy.load(Path("/path/to/en")) # pathlib Path +> nlp = spacy.load("/path/to/pipeline") # string path +> nlp = spacy.load(Path("/path/to/pipeline")) # pathlib Path > > nlp = spacy.load("en_core_web_sm", exclude=["parser", "tagger"]) > ``` | Name | Description | | ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `name` | Model to load, i.e. package name or path. ~~Union[str, Path]~~ | +| `name` | Pipeline to load, i.e. package name or path. ~~Union[str, Path]~~ | | _keyword-only_ | | | `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). 
Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~List[str]~~ | | `exclude` 3 | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `config` 3 | Optional config overrides, either as nested dict or dict keyed by section value in dot notation, e.g. `"components.name.value"`. ~~Union[Dict[str, Any], Config]~~ | -| **RETURNS** | A `Language` object with the loaded model. ~~Language~~ | +| **RETURNS** | A `Language` object with the loaded pipeline. ~~Language~~ | -Essentially, `spacy.load()` is a convenience wrapper that reads the model's +Essentially, `spacy.load()` is a convenience wrapper that reads the pipeline's [`config.cfg`](/api/data-formats#config), uses the language and pipeline information to construct a `Language` object, loads in the model data and -returns it. +weights, and returns it. ```python ### Abstract example -cls = util.get_lang_class(lang) # get language for ID, e.g. "en" -nlp = cls() # initialize the language +cls = spacy.util.get_lang_class(lang) # 1. Get Language class, e.g. English +nlp = cls() # 2. Initialize it for name in pipeline: - nlp.add_pipe(name) # add component to pipeline -nlp.from_disk(model_data_path) # load in model data + nlp.add_pipe(name) # 3. Add the component to the pipeline +nlp.from_disk(data_path) # 4. Load in the binary data ``` ### spacy.blank {#spacy.blank tag="function" new="2"} -Create a blank model of a given language class. This function is the twin of +Create a blank pipeline of a given language class. This function is the twin of `spacy.load()`. > #### Example @@ -85,9 +85,7 @@ Create a blank model of a given language class. This function is the twin of ### spacy.info {#spacy.info tag="function"} The same as the [`info` command](/api/cli#info). Pretty-print information about -your installation, models and local setup from within spaCy. To get the model -meta data as a dictionary instead, you can use the `meta` attribute on your -`nlp` object with a loaded model, e.g. `nlp.meta`. +your installation, installed pipelines and local setup from within spaCy. > #### Example > @@ -97,12 +95,12 @@ meta data as a dictionary instead, you can use the `meta` attribute on your > markdown = spacy.info(markdown=True, silent=True) > ``` -| Name | Description | -| -------------- | ------------------------------------------------------------------ | -| `model` | A model, i.e. a package name or path (optional). ~~Optional[str]~~ | -| _keyword-only_ | | -| `markdown` | Print information as Markdown. ~~bool~~ | -| `silent` | Don't print anything, just return. ~~bool~~ | +| Name | Description | +| -------------- | ---------------------------------------------------------------------------- | +| `model` | Optional pipeline, i.e. a package name or path (optional). ~~Optional[str]~~ | +| _keyword-only_ | | +| `markdown` | Print information as Markdown. ~~bool~~ | +| `silent` | Don't print anything, just return. ~~bool~~ | ### spacy.explain {#spacy.explain tag="function"} @@ -133,7 +131,7 @@ list of available terms, see Allocate data and perform operations on [GPU](/usage/#gpu), if available. If data has already been allocated on CPU, it will not be moved. Ideally, this function should be called right after importing spaCy and _before_ loading any -models. +pipelines. > #### Example > @@ -152,7 +150,7 @@ models. Allocate data and perform operations on [GPU](/usage/#gpu). 
Will raise an error if no GPU is available. If data has already been allocated on CPU, it will not be moved. Ideally, this function should be called right after importing spaCy -and _before_ loading any models. +and _before_ loading any pipelines. > #### Example > @@ -271,9 +269,9 @@ If a setting is not present in the options, the default value will be used. | `template` 2.2 | Optional template to overwrite the HTML used to render entity spans. Should be a format string and can use `{bg}`, `{text}` and `{label}`. See [`templates.py`](https://github.com/explosion/spaCy/blob/master/spacy/displacy/templates.py) for examples. ~~Optional[str]~~ | By default, displaCy comes with colors for all entity types used by -[spaCy models](/models). If you're using custom entity types, you can use the -`colors` setting to add your own colors for them. Your application or model -package can also expose a +[spaCy's trained pipelines](/models). If you're using custom entity types, you +can use the `colors` setting to add your own colors for them. Your application +or pipeline package can also expose a [`spacy_displacy_colors` entry point](/usage/saving-loading#entry-points-displacy) to add custom labels and their colors automatically. @@ -309,7 +307,6 @@ factories. | Registry name | Description | | ----------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `architectures` | Registry for functions that create [model architectures](/api/architectures). Can be used to register custom model architectures and reference them in the `config.cfg`. | -| `assets` | Registry for data assets, knowledge bases etc. | | `batchers` | Registry for training and evaluation [data batchers](#batchers). | | `callbacks` | Registry for custom callbacks to [modify the `nlp` object](/usage/training#custom-code-nlp-callbacks) before training. | | `displacy_colors` | Registry for custom color scheme for the [`displacy` NER visualizer](/usage/visualizers). Automatically reads from [entry points](/usage/saving-loading#entry-points). | @@ -320,6 +317,7 @@ factories. | `loggers` | Registry for functions that log [training results](/usage/training). | | `lookups` | Registry for large lookup tables available via `vocab.lookups`. | | `losses` | Registry for functions that create [losses](https://thinc.ai/docs/api-loss). | +| `misc` | Registry for miscellaneous functions that return data assets, knowledge bases or anything else you may need. | | `optimizers` | Registry for functions that create [optimizers](https://thinc.ai/docs/api-optimizers). | | `readers` | Registry for training and evaluation data readers like [`Corpus`](/api/corpus). | | `schedules` | Registry for functions that create [schedules](https://thinc.ai/docs/api-schedules). | @@ -366,7 +364,7 @@ results to a [Weights & Biases](https://www.wandb.com/) dashboard. Instead of using one of the built-in loggers listed here, you can also [implement your own](/usage/training#custom-logging). -#### spacy.ConsoleLogger.v1 {#ConsoleLogger tag="registered function"} +#### spacy.ConsoleLogger {#ConsoleLogger tag="registered function"} > #### Example config > @@ -412,7 +410,7 @@ start decreasing across epochs. 
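Circling back to the `@assets` to `@misc` rename earlier in this diff: anything referenced as `{"@misc": ...}` in a config is created by a function registered in the new `misc` registry. A minimal sketch, where the name `my_stop_entities.v1` and the returned data are purely hypothetical:

```python
import spacy

@spacy.registry.misc("my_stop_entities.v1")
def create_stop_entities():
    # The registered function can return any object you need at runtime,
    # e.g. a small data asset consumed by a custom component or KB loader
    return {"PERSON": ["someone", "anyone"]}
```

The registered name can then be referenced from a config block or an `add_pipe` config dict as `{"@misc": "my_stop_entities.v1"}`, the same way the built-in `spacy.EmptyKB.v1` and `spacy.CandidateGenerator.v1` functions are referenced above.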
-#### spacy.WandbLogger.v1 {#WandbLogger tag="registered function"} +#### spacy.WandbLogger {#WandbLogger tag="registered function"} > #### Installation > @@ -468,7 +466,7 @@ Instead of using one of the built-in batchers listed here, you can also [implement your own](/usage/training#custom-code-readers-batchers), which may or may not use a custom schedule. -#### batch_by_words.v1 {#batch_by_words tag="registered function"} +#### batch_by_words {#batch_by_words tag="registered function"} Create minibatches of roughly a given number of words. If any examples are longer than the specified batch length, they will appear in a batch by @@ -480,7 +478,7 @@ themselves, or be discarded if `discard_oversize` is set to `True`. The argument > > ```ini > [training.batcher] -> @batchers = "batch_by_words.v1" +> @batchers = "spacy.batch_by_words.v1" > size = 100 > tolerance = 0.2 > discard_oversize = false @@ -495,13 +493,13 @@ themselves, or be discarded if `discard_oversize` is set to `True`. The argument | `discard_oversize` | Whether to discard sequences that by themselves exceed the tolerated size. ~~bool~~ | | `get_length` | Optional function that receives a sequence item and returns its length. Defaults to the built-in `len()` if not set. ~~Optional[Callable[[Any], int]]~~ | -#### batch_by_sequence.v1 {#batch_by_sequence tag="registered function"} +#### batch_by_sequence {#batch_by_sequence tag="registered function"} > #### Example config > > ```ini > [training.batcher] -> @batchers = "batch_by_sequence.v1" +> @batchers = "spacy.batch_by_sequence.v1" > size = 32 > get_length = null > ``` @@ -513,13 +511,13 @@ Create a batcher that creates batches of the specified size. | `size` | The target number of items per batch. Can also be a block referencing a schedule, e.g. [`compounding`](https://thinc.ai/docs/api-schedules/#compounding). ~~Union[int, Sequence[int]]~~ | | `get_length` | Optional function that receives a sequence item and returns its length. Defaults to the built-in `len()` if not set. ~~Optional[Callable[[Any], int]]~~ | -#### batch_by_padded.v1 {#batch_by_padded tag="registered function"} +#### batch_by_padded {#batch_by_padded tag="registered function"} > #### Example config > > ```ini > [training.batcher] -> @batchers = "batch_by_padded.v1" +> @batchers = "spacy.batch_by_padded.v1" > size = 100 > buffer = 256 > discard_oversize = false @@ -666,8 +664,8 @@ loaded lazily, to avoid expensive setup code associated with the language data. ### util.load_model {#util.load_model tag="function" new="2"} -Load a model from a package or data path. If called with a package name, spaCy -will assume the model is a Python package and import and call its `load()` +Load a pipeline from a package or data path. If called with a string name, spaCy +will assume the pipeline is a Python package and import and call its `load()` method. If called with a path, spaCy will assume it's a data directory, read the language and pipeline settings from the [`config.cfg`](/api/data-formats#config) and create a `Language` object. The model data will then be loaded in via @@ -683,16 +681,16 @@ and create a `Language` object. The model data will then be loaded in via | Name | Description | | ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `name` | Package name or model path. 
~~str~~ | +| `name` | Package name or path. ~~str~~ | | `vocab` 3 | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~. | | `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~List[str]~~ | | `exclude` 3 | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `config` 3 | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ | -| **RETURNS** | `Language` class with the loaded model. ~~Language~~ | +| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ | ### util.load_model_from_init_py {#util.load_model_from_init_py tag="function" new="2"} -A helper function to use in the `load()` method of a model package's +A helper function to use in the `load()` method of a pipeline package's [`__init__.py`](https://github.com/explosion/spacy-models/tree/master/template/model/xx_model_name/__init__.py). > #### Example @@ -706,70 +704,72 @@ A helper function to use in the `load()` method of a model package's | Name | Description | | ------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `init_file` | Path to model's `__init__.py`, i.e. `__file__`. ~~Union[str, Path]~~ | +| `init_file` | Path to package's `__init__.py`, i.e. `__file__`. ~~Union[str, Path]~~ | | `vocab` 3 | Optional shared vocab to pass in on initialization. If `True` (default), a new `Vocab` object will be created. ~~Union[Vocab, bool]~~. | | `disable` | Names of pipeline components to [disable](/usage/processing-pipelines#disabling). Disabled pipes will be loaded but they won't be run unless you explicitly enable them by calling [nlp.enable_pipe](/api/language#enable_pipe). ~~List[str]~~ | | `exclude` 3 | Names of pipeline components to [exclude](/usage/processing-pipelines#disabling). Excluded components won't be loaded. ~~List[str]~~ | | `config` 3 | Config overrides as nested dict or flat dict keyed by section values in dot notation, e.g. `"nlp.pipeline"`. ~~Union[Dict[str, Any], Config]~~ | -| **RETURNS** | `Language` class with the loaded model. ~~Language~~ | +| **RETURNS** | `Language` class with the loaded pipeline. ~~Language~~ | ### util.load_config {#util.load_config tag="function" new="3"} -Load a model's [`config.cfg`](/api/data-formats#config) from a file path. The -config typically includes details about the model pipeline and how its -components are created, as well as all training settings and hyperparameters. +Load a pipeline's [`config.cfg`](/api/data-formats#config) from a file path. The +config typically includes details about the components and how they're created, +as well as all training settings and hyperparameters. 
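Beyond the basic call shown in the example below, the `overrides` and `interpolate` arguments listed in the table can be combined, for instance to fill in a path variable before inspecting the config. A small sketch with hypothetical file paths:

```python
from spacy import util

# "./config.cfg" and "./train.spacy" are placeholder paths
config = util.load_config(
    "./config.cfg",
    overrides={"paths.train": "./train.spacy"},
    interpolate=True,
)
print(config["paths"]["train"])  # "./train.spacy"
```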
> #### Example > > ```python -> config = util.load_config("/path/to/model/config.cfg") +> config = util.load_config("/path/to/config.cfg") > print(config.to_str()) > ``` | Name | Description | | ------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `path` | Path to the model's `config.cfg`. ~~Union[str, Path]~~ | +| `path` | Path to the pipeline's `config.cfg`. ~~Union[str, Path]~~ | | `overrides` | Optional config overrides to replace in loaded config. Can be provided as nested dict, or as flat dict with keys in dot notation, e.g. `"nlp.pipeline"`. ~~Dict[str, Any]~~ | | `interpolate` | Whether to interpolate the config and replace variables like `${paths.train}` with their values. Defaults to `False`. ~~bool~~ | -| **RETURNS** | The model's config. ~~Config~~ | +| **RETURNS** | The pipeline's config. ~~Config~~ | ### util.load_meta {#util.load_meta tag="function" new="3"} -Get a model's [`meta.json`](/api/data-formats#meta) from a file path and -validate its contents. +Get a pipeline's [`meta.json`](/api/data-formats#meta) from a file path and +validate its contents. The meta typically includes details about author, +licensing, data sources and version. > #### Example > > ```python -> meta = util.load_meta("/path/to/model/meta.json") +> meta = util.load_meta("/path/to/meta.json") > ``` -| Name | Description | -| ----------- | ----------------------------------------------------- | -| `path` | Path to the model's `meta.json`. ~~Union[str, Path]~~ | -| **RETURNS** | The model's meta data. ~~Dict[str, Any]~~ | +| Name | Description | +| ----------- | -------------------------------------------------------- | +| `path` | Path to the pipeline's `meta.json`. ~~Union[str, Path]~~ | +| **RETURNS** | The pipeline's meta data. ~~Dict[str, Any]~~ | ### util.get_installed_models {#util.get_installed_models tag="function" new="3"} -List all model packages installed in the current environment. This will include -any spaCy model that was packaged with [`spacy package`](/api/cli#package). -Under the hood, model packages expose a Python entry point that spaCy can check, -without having to load the model. +List all pipeline packages installed in the current environment. This will +include any spaCy pipeline that was packaged with +[`spacy package`](/api/cli#package). Under the hood, pipeline packages expose a +Python entry point that spaCy can check, without having to load the `nlp` +object. > #### Example > > ```python -> model_names = util.get_installed_models() +> names = util.get_installed_models() > ``` -| Name | Description | -| ----------- | ---------------------------------------------------------------------------------- | -| **RETURNS** | The string names of the models installed in the current environment. ~~List[str]~~ | +| Name | Description | +| ----------- | ------------------------------------------------------------------------------------- | +| **RETURNS** | The string names of the pipelines installed in the current environment. ~~List[str]~~ | ### util.is_package {#util.is_package tag="function"} Check if string maps to a package installed via pip. Mainly used to validate -[model packages](/usage/models). +[pipeline packages](/usage/models). > #### Example > @@ -786,7 +786,8 @@ Check if string maps to a package installed via pip. 
Mainly used to validate ### util.get_package_path {#util.get_package_path tag="function" new="2"} Get path to an installed package. Mainly used to resolve the location of -[model packages](/usage/models). Currently imports the package to find its path. +[pipeline packages](/usage/models). Currently imports the package to find its +path. > #### Example > @@ -795,10 +796,10 @@ Get path to an installed package. Mainly used to resolve the location of > # /usr/lib/python3.6/site-packages/en_core_web_sm > ``` -| Name | Description | -| -------------- | ----------------------------------------- | -| `package_name` | Name of installed package. ~~str~~ | -| **RETURNS** | Path to model package directory. ~~Path~~ | +| Name | Description | +| -------------- | -------------------------------------------- | +| `package_name` | Name of installed package. ~~str~~ | +| **RETURNS** | Path to pipeline package directory. ~~Path~~ | ### util.is_in_jupyter {#util.is_in_jupyter tag="function" new="2"} diff --git a/website/docs/api/transformer.md b/website/docs/api/transformer.md index 5ac95cb29..b41a18890 100644 --- a/website/docs/api/transformer.md +++ b/website/docs/api/transformer.md @@ -453,7 +453,7 @@ using the `@spacy.registry.span_getters` decorator. > #### Example > > ```python -> @spacy.registry.span_getters("sent_spans.v1") +> @spacy.registry.span_getters("custom_sent_spans") > def configure_get_sent_spans() -> Callable: > def get_sent_spans(docs: Iterable[Doc]) -> List[List[Span]]: > return [list(doc.sents) for doc in docs] @@ -472,7 +472,7 @@ using the `@spacy.registry.span_getters` decorator. > > ```ini > [transformer.model.get_spans] -> @span_getters = "doc_spans.v1" +> @span_getters = "spacy-transformers.doc_spans.v1" > ``` Create a span getter that uses the whole document as its spans. This is the best @@ -485,7 +485,7 @@ texts. > > ```ini > [transformer.model.get_spans] -> @span_getters = "sent_spans.v1" +> @span_getters = "spacy-transformers.sent_spans.v1" > ``` Create a span getter that uses sentence boundary markers to extract the spans. @@ -500,7 +500,7 @@ more meaningful windows to attend over. 
> > ```ini > [transformer.model.get_spans] -> @span_getters = "strided_spans.v1" +> @span_getters = "spacy-transformers.strided_spans.v1" > window = 128 > stride = 96 > ``` diff --git a/website/docs/images/dep-match-diagram.svg b/website/docs/images/dep-match-diagram.svg new file mode 100644 index 000000000..676be4137 --- /dev/null +++ b/website/docs/images/dep-match-diagram.svg @@ -0,0 +1,39 @@ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/website/docs/images/displacy-dep-founded.html b/website/docs/images/displacy-dep-founded.html new file mode 100644 index 000000000..e22984ee1 --- /dev/null +++ b/website/docs/images/displacy-dep-founded.html @@ -0,0 +1,58 @@ + + + Smith + + + + + founded + + + + + a + + + + + healthcare + + + + + company + + + + + + + nsubj + + + + + + + + det + + + + + + + + compound + + + + + + + + dobj + + + + diff --git a/website/docs/models/index.md b/website/docs/models/index.md index d5f87d3b5..64e719f37 100644 --- a/website/docs/models/index.md +++ b/website/docs/models/index.md @@ -1,6 +1,6 @@ --- -title: Models -teaser: Downloadable pretrained models for spaCy +title: Trained Models & Pipelines +teaser: Downloadable trained pipelines and weights for spaCy menu: - ['Quickstart', 'quickstart'] - ['Conventions', 'conventions'] @@ -8,15 +8,15 @@ menu: -The models directory includes two types of pretrained models: +This directory includes two types of packages: -1. **Core models:** General-purpose pretrained models to predict named entities, - part-of-speech tags and syntactic dependencies. Can be used out-of-the-box - and fine-tuned on more specific data. -2. **Starter models:** Transfer learning starter packs with pretrained weights - you can initialize your models with to achieve better accuracy. They can +1. **Trained pipelines:** General-purpose spaCy pipelines to predict named + entities, part-of-speech tags and syntactic dependencies. Can be used + out-of-the-box and fine-tuned on more specific data. +2. **Starters:** Transfer learning starter packs with pretrained weights you can + initialize your pipeline models with to achieve better accuracy. They can include word vectors (which will be used as features during training) or - other pretrained representations like BERT. These models don't include + other pretrained representations like BERT. These packages don't include components for specific tasks like NER or text classification and are intended to be used as base models when training your own models. @@ -28,43 +28,42 @@ import QuickstartModels from 'widgets/quickstart-models.js' -For more details on how to use models with spaCy, see the -[usage guide on models](/usage/models). +For more details on how to use trained pipelines with spaCy, see the +[usage guide](/usage/models). -## Model naming conventions {#conventions} +## Package naming conventions {#conventions} -In general, spaCy expects all model packages to follow the naming convention of -`[lang`\_[name]]. For spaCy's models, we also chose to divide the name into -three components: +In general, spaCy expects all pipeline packages to follow the naming convention +of `[lang`\_[name]]. For spaCy's pipelines, we also chose to divide the name +into three components: -1. **Type:** Model capabilities (e.g. `core` for general-purpose model with +1. **Type:** Capabilities (e.g. `core` for general-purpose pipeline with vocabulary, syntax, entities and word vectors, or `depent` for only vocab, syntax and entities). -2. 
**Genre:** Type of text the model is trained on, e.g. `web` or `news`. -3. **Size:** Model size indicator, `sm`, `md` or `lg`. +2. **Genre:** Type of text the pipeline is trained on, e.g. `web` or `news`. +3. **Size:** Package size indicator, `sm`, `md` or `lg`. For example, [`en_core_web_sm`](/models/en#en_core_web_sm) is a small English -model trained on written web text (blogs, news, comments), that includes +pipeline trained on written web text (blogs, news, comments), that includes vocabulary, vectors, syntax and entities. -### Model versioning {#model-versioning} +### Package versioning {#model-versioning} -Additionally, the model versioning reflects both the compatibility with spaCy, -as well as the major and minor model version. A model version `a.b.c` translates -to: +Additionally, the pipeline package versioning reflects both the compatibility +with spaCy, as well as the major and minor version. A package version `a.b.c` +translates to: - `a`: **spaCy major version**. For example, `2` for spaCy v2.x. -- `b`: **Model major version**. Models with a different major version can't be - loaded by the same code. For example, changing the width of the model, adding - hidden layers or changing the activation changes the model major version. -- `c`: **Model minor version**. Same model structure, but different parameter - values, e.g. from being trained on different data, for different numbers of - iterations, etc. +- `b`: **Package major version**. Pipelines with a different major version can't + be loaded by the same code. For example, changing the width of the model, + adding hidden layers or changing the activation changes the major version. +- `c`: **Package minor version**. Same pipeline structure, but different + parameter values, e.g. from being trained on different data, for different + numbers of iterations, etc. For a detailed compatibility overview, see the -[`compatibility.json`](https://github.com/explosion/spacy-models/tree/master/compatibility.json) -in the models repository. This is also the source of spaCy's internal -compatibility check, performed when you run the [`download`](/api/cli#download) -command. +[`compatibility.json`](https://github.com/explosion/spacy-models/tree/master/compatibility.json). +This is also the source of spaCy's internal compatibility check, performed when +you run the [`download`](/api/cli#download) command. diff --git a/website/docs/usage/101/_pipelines.md b/website/docs/usage/101/_pipelines.md index 0aa821223..9a63ee42d 100644 --- a/website/docs/usage/101/_pipelines.md +++ b/website/docs/usage/101/_pipelines.md @@ -1,9 +1,9 @@ When you call `nlp` on a text, spaCy first tokenizes the text to produce a `Doc` object. The `Doc` is then processed in several different steps – this is also referred to as the **processing pipeline**. The pipeline used by the -[default models](/models) typically include a tagger, a lemmatizer, a parser and -an entity recognizer. Each pipeline component returns the processed `Doc`, which -is then passed on to the next component. +[trained pipelines](/models) typically include a tagger, a lemmatizer, a parser +and an entity recognizer. Each pipeline component returns the processed `Doc`, +which is then passed on to the next component. ![The processing pipeline](../../images/pipeline.svg) @@ -23,14 +23,15 @@ is then passed on to the next component. | **textcat** | [`TextCategorizer`](/api/textcategorizer) | `Doc.cats` | Assign document labels. 
| | **custom** | [custom components](/usage/processing-pipelines#custom-components) | `Doc._.xxx`, `Token._.xxx`, `Span._.xxx` | Assign custom attributes, methods or properties. | -The processing pipeline always **depends on the statistical model** and its -capabilities. For example, a pipeline can only include an entity recognizer -component if the model includes data to make predictions of entity labels. This -is why each model will specify the pipeline to use in its meta data and -[config](/usage/training#config), as a simple list containing the component -names: +The capabilities of a processing pipeline always depend on the components, their +models and how they were trained. For example, a pipeline for named entity +recognition needs to include a trained named entity recognizer component with a +statistical model and weights that enable it to **make predictions** of entity +labels. This is why each pipeline specifies its components and their settings in +the [config](/usage/training#config): ```ini +[nlp] pipeline = ["tagger", "parser", "ner"] ``` diff --git a/website/docs/usage/101/_pos-deps.md b/website/docs/usage/101/_pos-deps.md index 1e8960edf..a531b245e 100644 --- a/website/docs/usage/101/_pos-deps.md +++ b/website/docs/usage/101/_pos-deps.md @@ -1,9 +1,9 @@ After tokenization, spaCy can **parse** and **tag** a given `Doc`. This is where -the statistical model comes in, which enables spaCy to **make a prediction** of -which tag or label most likely applies in this context. A model consists of -binary data and is produced by showing a system enough examples for it to make -predictions that generalize across the language – for example, a word following -"the" in English is most likely a noun. +the trained pipeline and its statistical models come in, which enable spaCy to +**make predictions** of which tag or label most likely applies in this context. +A trained component includes binary data that is produced by showing a system +enough examples for it to make predictions that generalize across the language – +for example, a word following "the" in English is most likely a noun. Linguistic annotations are available as [`Token` attributes](/api/token#attributes). Like many NLP libraries, spaCy @@ -25,7 +25,8 @@ for token in doc: > - **Text:** The original word text. > - **Lemma:** The base form of the word. -> - **POS:** The simple [UPOS](https://universaldependencies.org/docs/u/pos/) part-of-speech tag. +> - **POS:** The simple [UPOS](https://universaldependencies.org/docs/u/pos/) +> part-of-speech tag. > - **Tag:** The detailed part-of-speech tag. > - **Dep:** Syntactic dependency, i.e. the relation between tokens. > - **Shape:** The word shape – capitalization, punctuation, digits. diff --git a/website/docs/usage/101/_serialization.md b/website/docs/usage/101/_serialization.md index 01a9c39d1..ce34ea6e9 100644 --- a/website/docs/usage/101/_serialization.md +++ b/website/docs/usage/101/_serialization.md @@ -1,9 +1,9 @@ If you've been modifying the pipeline, vocabulary, vectors and entities, or made -updates to the model, you'll eventually want to **save your progress** – for -example, everything that's in your `nlp` object. This means you'll have to -translate its contents and structure into a format that can be saved, like a -file or a byte string. This process is called serialization. 
spaCy comes with -**built-in serialization methods** and supports the +updates to the component models, you'll eventually want to **save your +progress** – for example, everything that's in your `nlp` object. This means +you'll have to translate its contents and structure into a format that can be +saved, like a file or a byte string. This process is called serialization. spaCy +comes with **built-in serialization methods** and supports the [Pickle protocol](https://www.diveinto.org/python3/serializing.html#dump). > #### What's pickle? diff --git a/website/docs/usage/101/_training.md b/website/docs/usage/101/_training.md index 4573f5ea3..b73a83d6a 100644 --- a/website/docs/usage/101/_training.md +++ b/website/docs/usage/101/_training.md @@ -1,25 +1,25 @@ spaCy's tagger, parser, text categorizer and many other components are powered by **statistical models**. Every "decision" these components make – for example, which part-of-speech tag to assign, or whether a word is a named entity – is a -**prediction** based on the model's current **weight values**. The weight -values are estimated based on examples the model has seen -during **training**. To train a model, you first need training data – examples -of text, and the labels you want the model to predict. This could be a -part-of-speech tag, a named entity or any other information. +**prediction** based on the model's current **weight values**. The weight values +are estimated based on examples the model has seen during **training**. To train +a model, you first need training data – examples of text, and the labels you +want the model to predict. This could be a part-of-speech tag, a named entity or +any other information. -Training is an iterative process in which the model's predictions are compared +Training is an iterative process in which the model's predictions are compared against the reference annotations in order to estimate the **gradient of the loss**. The gradient of the loss is then used to calculate the gradient of the weights through [backpropagation](https://thinc.ai/backprop101). The gradients -indicate how the weight values should be changed so that the model's -predictions become more similar to the reference labels over time. +indicate how the weight values should be changed so that the model's predictions +become more similar to the reference labels over time. > - **Training data:** Examples and their annotations. > - **Text:** The input text the model should predict a label for. > - **Label:** The label the model should predict. > - **Gradient:** The direction and rate of change for a numeric value. -> Minimising the gradient of the weights should result in predictions that -> are closer to the reference labels on the training data. +> Minimising the gradient of the weights should result in predictions that are +> closer to the reference labels on the training data. 
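To make "nudging the weights along the gradient" concrete, here is a toy update loop for a single logistic prediction, written in plain NumPy. It is only an illustration of the idea described above, not spaCy's actual training internals:

```python
import numpy as np

x = np.array([0.5, -1.2, 3.0])  # one toy feature vector
y = 1.0                         # reference label the model should predict

weights = np.zeros(3)
learning_rate = 0.1

for step in range(10):
    prediction = 1 / (1 + np.exp(-weights.dot(x)))  # current model output
    loss = (prediction - y) ** 2
    # Gradient of the loss with respect to the weights (chain rule)
    gradient = 2 * (prediction - y) * prediction * (1 - prediction) * x
    # Move the weights against the gradient so the prediction
    # gets closer to the reference label over time
    weights -= learning_rate * gradient
    print(step, round(float(loss), 4))
```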
![The training process](../../images/training.svg) diff --git a/website/docs/usage/101/_vectors-similarity.md b/website/docs/usage/101/_vectors-similarity.md index 92df1b331..cf5b70af2 100644 --- a/website/docs/usage/101/_vectors-similarity.md +++ b/website/docs/usage/101/_vectors-similarity.md @@ -24,12 +24,12 @@ array([2.02280000e-01, -7.66180009e-02, 3.70319992e-01, -To make them compact and fast, spaCy's small [models](/models) (all packages -that end in `sm`) **don't ship with word vectors**, and only include +To make them compact and fast, spaCy's small [pipeline packages](/models) (all +packages that end in `sm`) **don't ship with word vectors**, and only include context-sensitive **tensors**. This means you can still use the `similarity()` methods to compare documents, spans and tokens – but the result won't be as good, and individual tokens won't have any vectors assigned. So in order to use -_real_ word vectors, you need to download a larger model: +_real_ word vectors, you need to download a larger pipeline package: ```diff - python -m spacy download en_core_web_sm @@ -38,11 +38,11 @@ _real_ word vectors, you need to download a larger model: -Models that come with built-in word vectors make them available as the -[`Token.vector`](/api/token#vector) attribute. [`Doc.vector`](/api/doc#vector) -and [`Span.vector`](/api/span#vector) will default to an average of their token -vectors. You can also check if a token has a vector assigned, and get the L2 -norm, which can be used to normalize vectors. +Pipeline packages that come with built-in word vectors make them available as +the [`Token.vector`](/api/token#vector) attribute. +[`Doc.vector`](/api/doc#vector) and [`Span.vector`](/api/span#vector) will +default to an average of their token vectors. You can also check if a token has +a vector assigned, and get the L2 norm, which can be used to normalize vectors. ```python ### {executable="true"} @@ -62,12 +62,12 @@ for token in tokens: > - **OOV**: Out-of-vocabulary The words "dog", "cat" and "banana" are all pretty common in English, so they're -part of the model's vocabulary, and come with a vector. The word "afskfsd" on +part of the pipeline's vocabulary, and come with a vector. The word "afskfsd" on the other hand is a lot less common and out-of-vocabulary – so its vector representation consists of 300 dimensions of `0`, which means it's practically nonexistent. If your application will benefit from a **large vocabulary** with -more vectors, you should consider using one of the larger models or loading in a -full vector package, for example, +more vectors, you should consider using one of the larger pipeline packages or +loading in a full vector package, for example, [`en_vectors_web_lg`](/models/en-starters#en_vectors_web_lg), which includes over **1 million unique vectors**. @@ -82,7 +82,7 @@ Each [`Doc`](/api/doc), [`Span`](/api/span), [`Token`](/api/token) and method that lets you compare it with another object, and determine the similarity. Of course similarity is always subjective – whether two words, spans or documents are similar really depends on how you're looking at it. spaCy's -similarity model usually assumes a pretty general-purpose definition of +similarity implementation usually assumes a pretty general-purpose definition of similarity. > #### 📝 Things to try @@ -99,7 +99,7 @@ similarity. ### {executable="true"} import spacy -nlp = spacy.load("en_core_web_md") # make sure to use larger model! +nlp = spacy.load("en_core_web_md") # make sure to use larger package! 
doc1 = nlp("I like salty fries and hamburgers.") doc2 = nlp("Fast food tastes very good.") @@ -143,10 +143,9 @@ us that builds on top of spaCy and lets you train and query more interesting and detailed word vectors. It combines noun phrases like "fast food" or "fair game" and includes the part-of-speech tags and entity labels. The library also includes annotation recipes for our annotation tool [Prodigy](https://prodi.gy) -that let you evaluate vector models and create terminology lists. For more -details, check out -[our blog post](https://explosion.ai/blog/sense2vec-reloaded). To explore the -semantic similarities across all Reddit comments of 2015 and 2019, see the -[interactive demo](https://explosion.ai/demos/sense2vec). +that let you evaluate vectors and create terminology lists. For more details, +check out [our blog post](https://explosion.ai/blog/sense2vec-reloaded). To +explore the semantic similarities across all Reddit comments of 2015 and 2019, +see the [interactive demo](https://explosion.ai/demos/sense2vec). diff --git a/website/docs/usage/embeddings-transformers.md b/website/docs/usage/embeddings-transformers.md index 7792ce124..abd92a8ac 100644 --- a/website/docs/usage/embeddings-transformers.md +++ b/website/docs/usage/embeddings-transformers.md @@ -331,7 +331,7 @@ name = "bert-base-cased" tokenizer_config = {"use_fast": true} [components.transformer.model.get_spans] -@span_getters = "doc_spans.v1" +@span_getters = "spacy-transformers.doc_spans.v1" [components.transformer.annotation_setter] @annotation_setters = "spacy-transformers.null_annotation_setter.v1" @@ -369,8 +369,9 @@ all defaults. To change any of the settings, you can edit the `config.cfg` and re-run the training. To change any of the functions, like the span getter, you can replace -the name of the referenced function – e.g. `@span_getters = "sent_spans.v1"` to -process sentences. You can also register your own functions using the +the name of the referenced function – e.g. +`@span_getters = "spacy-transformers.sent_spans.v1"` to process sentences. You +can also register your own functions using the [`span_getters` registry](/api/top-level#registry). For instance, the following custom function returns [`Span`](/api/span) objects following sentence boundaries, unless a sentence succeeds a certain amount of tokens, in which case diff --git a/website/docs/usage/index.md b/website/docs/usage/index.md index 76858213c..ee5fd0a3b 100644 --- a/website/docs/usage/index.md +++ b/website/docs/usage/index.md @@ -35,10 +35,10 @@ Using pip, spaCy releases are available as source packages and binary wheels. $ pip install -U spacy ``` -> #### Download models +> #### Download pipelines > -> After installation you need to download a language model. For more info and -> available models, see the [docs on models](/models). +> After installation you typically want to download a trained pipeline. For more +> info and available packages, see the [models directory](/models). > > ```cli > $ python -m spacy download en_core_web_sm @@ -54,7 +54,7 @@ To install additional data tables for lemmatization you can run [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data) separately. The lookups package is needed to provide normalization and lemmatization data for new models and to lemmatize in languages that don't yet -come with pretrained models and aren't powered by third-party libraries. +come with trained pipelines and aren't powered by third-party libraries. 
@@ -88,23 +88,21 @@ and pull requests to the recipe and setup are always appreciated. > spaCy v2.x to v3.x may still require some changes to your code base. For > details see the sections on [backwards incompatibilities](/usage/v3#incompat) > and [migrating](/usage/v3#migrating). Also remember to download the new -> models, and retrain your own models. +> trained pipelines, and retrain your own pipelines. When updating to a newer version of spaCy, it's generally recommended to start with a clean virtual environment. If you're upgrading to a new major version, -make sure you have the latest **compatible models** installed, and that there -are no old and incompatible model packages left over in your environment, as -this can often lead to unexpected results and errors. If you've trained your own -models, keep in mind that your train and runtime inputs must match. This means -you'll have to **retrain your models** with the new version. +make sure you have the latest **compatible trained pipelines** installed, and +that there are no old and incompatible packages left over in your environment, +as this can often lead to unexpected results and errors. If you've trained your +own models, keep in mind that your train and runtime inputs must match. This +means you'll have to **retrain your pipelines** with the new version. spaCy also provides a [`validate`](/api/cli#validate) command, which lets you -verify that all installed models are compatible with your spaCy version. If -incompatible models are found, tips and installation instructions are printed. -The command is also useful to detect out-of-sync model links resulting from -links created in different virtual environments. It's recommended to run the -command with `python -m` to make sure you're executing the correct version of -spaCy. +verify that all installed pipeline packages are compatible with your spaCy +version. If incompatible packages are found, tips and installation instructions +are printed. It's recommended to run the command with `python -m` to make sure +you're executing the correct version of spaCy. ```cli $ pip install -U spacy @@ -132,8 +130,8 @@ $ pip install -U spacy[cuda92] Once you have a GPU-enabled installation, the best way to activate it is to call [`spacy.prefer_gpu`](/api/top-level#spacy.prefer_gpu) or [`spacy.require_gpu()`](/api/top-level#spacy.require_gpu) somewhere in your -script before any models have been loaded. `require_gpu` will raise an error if -no GPU is available. +script before any pipelines have been loaded. `require_gpu` will raise an error +if no GPU is available. ```python import spacy @@ -238,16 +236,16 @@ installing, loading and using spaCy, as well as their solutions. ``` -No compatible model found for [lang] (spaCy vX.X.X). +No compatible package found for [lang] (spaCy vX.X.X). ``` -This usually means that the model you're trying to download does not exist, or -isn't available for your version of spaCy. Check the +This usually means that the trained pipeline you're trying to download does not +exist, or isn't available for your version of spaCy. Check the [compatibility table](https://github.com/explosion/spacy-models/tree/master/compatibility.json) -to see which models are available for your spaCy version. If you're using an old -version, consider upgrading to the latest release. Note that while spaCy +to see which packages are available for your spaCy version. If you're using an +old version, consider upgrading to the latest release. 
Note that while spaCy supports tokenization for [a variety of languages](/usage/models#languages), not -all of them come with statistical models. To only use the tokenizer, import the +all of them come with trained pipelines. To only use the tokenizer, import the language's `Language` class instead, for example `from spacy.lang.fr import French`. @@ -259,7 +257,7 @@ language's `Language` class instead, for example no such option: --no-cache-dir ``` -The `download` command uses pip to install the models and sets the +The `download` command uses pip to install the pipeline packages and sets the `--no-cache-dir` flag to prevent it from requiring too much memory. [This setting](https://pip.pypa.io/en/stable/reference/pip_install/#caching) requires pip v6.0 or newer. Run `pip install -U pip` to upgrade to the latest @@ -323,19 +321,19 @@ also run `which python` to find out where your Python executable is located. - + ``` ImportError: No module named 'en_core_web_sm' ``` -As of spaCy v1.7, all models can be installed as Python packages. This means -that they'll become importable modules of your application. If this fails, it's -usually a sign that the package is not installed in the current environment. Run -`pip list` or `pip freeze` to check which model packages you have installed, and -install the [correct models](/models) if necessary. If you're importing a model -manually at the top of a file, make sure to use the name of the package, not the -shortcut link you've created. +As of spaCy v1.7, all trained pipelines can be installed as Python packages. +This means that they'll become importable modules of your application. If this +fails, it's usually a sign that the package is not installed in the current +environment. Run `pip list` or `pip freeze` to check which pipeline packages you +have installed, and install the [correct package](/models) if necessary. If +you're importing a package manually at the top of a file, make sure to use the +full name of the package. diff --git a/website/docs/usage/layers-architectures.md b/website/docs/usage/layers-architectures.md index 419048f65..e24b776c8 100644 --- a/website/docs/usage/layers-architectures.md +++ b/website/docs/usage/layers-architectures.md @@ -103,7 +103,7 @@ bit of validation goes a long way, especially if you tools to highlight these errors early. The config file is also validated at the beginning of training, to verify that all the types match correctly. - + If you're using a modern editor like Visual Studio Code, you can [set up `mypy`](https://thinc.ai/docs/usage-type-checking#install) with the @@ -143,11 +143,11 @@ nO = null spaCy has two additional built-in `textcat` architectures, and you can easily use those by swapping out the definition of the textcat's model. For instance, -to use the simpel and fast [bag-of-words model](/api/architectures#TextCatBOW), -you can change the config to: +to use the simple and fast bag-of-words model +[TextCatBOW](/api/architectures#TextCatBOW), you can change the config to: ```ini -### config.cfg (excerpt) +### config.cfg (excerpt) {highlight="6-10"} [components.textcat] factory = "textcat" labels = [] @@ -160,8 +160,9 @@ no_output_layer = false nO = null ``` -The details of all prebuilt architectures and their parameters, can be consulted -on the [API page for model architectures](/api/architectures). +For details on all pre-defined architectures shipped with spaCy and how to +configure them, check out the [model architectures](/api/architectures) +documentation. 
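As a rough Python counterpart to the config excerpt above – a sketch only, assuming the `spacy.TextCatBOW.v1` architecture name and the same settings shown in the excerpt – the model definition can also be passed in directly when adding the component to a blank pipeline:

```python
import spacy

nlp = spacy.blank("en")
# Mirror the config excerpt: a textcat component backed by the bag-of-words model
textcat = nlp.add_pipe(
    "textcat",
    config={
        "model": {
            "@architectures": "spacy.TextCatBOW.v1",
            "exclusive_classes": True,
            "ngram_size": 1,
            "no_output_layer": False,
        }
    },
)
textcat.add_label("POSITIVE")
textcat.add_label("NEGATIVE")
print(nlp.config["components"]["textcat"])
```

The exported `nlp.config` then contains the same block you would otherwise write by hand in `config.cfg`.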
### Defining sublayers {#sublayers} diff --git a/website/docs/usage/linguistic-features.md b/website/docs/usage/linguistic-features.md index 726cf0521..b36e9b71f 100644 --- a/website/docs/usage/linguistic-features.md +++ b/website/docs/usage/linguistic-features.md @@ -132,7 +132,7 @@ language can extend the `Lemmatizer` as part of its ### {executable="true"} import spacy -# English models include a rule-based lemmatizer +# English pipelines include a rule-based lemmatizer nlp = spacy.load("en_core_web_sm") lemmatizer = nlp.get_pipe("lemmatizer") print(lemmatizer.mode) # 'rule' @@ -156,14 +156,14 @@ component. The data for spaCy's lemmatizers is distributed in the package [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). The -provided models already include all the required tables, but if you are creating -new models, you'll probably want to install `spacy-lookups-data` to provide the -data when the lemmatizer is initialized. +provided trained pipelines already include all the required tables, but if you +are creating new pipelines, you'll probably want to install `spacy-lookups-data` +to provide the data when the lemmatizer is initialized. ### Lookup lemmatizer {#lemmatizer-lookup} -For models without a tagger or morphologizer, a lookup lemmatizer can be added -to the pipeline as long as a lookup table is provided, typically through +For pipelines without a tagger or morphologizer, a lookup lemmatizer can be +added to the pipeline as long as a lookup table is provided, typically through [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data). The lookup lemmatizer looks up the token surface form in the lookup table without reference to the token's part-of-speech or context. @@ -178,9 +178,9 @@ nlp.add_pipe("lemmatizer", config={"mode": "lookup"}) ### Rule-based lemmatizer {#lemmatizer-rule} -When training models that include a component that assigns POS (a morphologizer -or a tagger with a [POS mapping](#mappings-exceptions)), a rule-based lemmatizer -can be added using rule tables from +When training pipelines that include a component that assigns part-of-speech +tags (a morphologizer or a tagger with a [POS mapping](#mappings-exceptions)), a +rule-based lemmatizer can be added using rule tables from [`spacy-lookups-data`](https://github.com/explosion/spacy-lookups-data): ```python @@ -366,10 +366,10 @@ sequence of tokens. You can walk up the tree with the > #### Projective vs. non-projective > -> For the [default English model](/models/en), the parse tree is **projective**, -> which means that there are no crossing brackets. The tokens returned by -> `.subtree` are therefore guaranteed to be contiguous. This is not true for the -> German model, which has many +> For the [default English pipelines](/models/en), the parse tree is +> **projective**, which means that there are no crossing brackets. The tokens +> returned by `.subtree` are therefore guaranteed to be contiguous. This is not +> true for the German pipelines, which have many > [non-projective dependencies](https://explosion.ai/blog/german-model#word-order). ```python @@ -497,26 +497,27 @@ displaCy in our [online demo](https://explosion.ai/demos/displacy).. ### Disabling the parser {#disabling} -In the [default models](/models), the parser is loaded and enabled as part of -the [standard processing pipeline](/usage/processing-pipelines). If you don't -need any of the syntactic information, you should disable the parser. Disabling -the parser will make spaCy load and run much faster. 
If you want to load the -parser, but need to disable it for specific documents, you can also control its -use on the `nlp` object. +In the [trained pipelines](/models) provided by spaCy, the parser is loaded and +enabled by default as part of the +[standard processing pipeline](/usage/processing-pipelines). If you don't need +any of the syntactic information, you should disable the parser. Disabling the +parser will make spaCy load and run much faster. If you want to load the parser, +but need to disable it for specific documents, you can also control its use on +the `nlp` object. For more details, see the usage guide on +[disabling pipeline components](/usage/processing-pipelines/#disabling). ```python nlp = spacy.load("en_core_web_sm", disable=["parser"]) -nlp = English().from_disk("/model", disable=["parser"]) -doc = nlp("I don't want parsed", disable=["parser"]) ``` ## Named Entity Recognition {#named-entities} spaCy features an extremely fast statistical entity recognition system, that -assigns labels to contiguous spans of tokens. The default model identifies a -variety of named and numeric entities, including companies, locations, -organizations and products. You can add arbitrary classes to the entity -recognition system, and update the model with new examples. +assigns labels to contiguous spans of tokens. The default +[trained pipelines](/models) can identify a variety of named and numeric +entities, including companies, locations, organizations and products. You can +add arbitrary classes to the entity recognition system, and update the model +with new examples. ### Named Entity Recognition 101 {#named-entities-101} @@ -669,7 +670,7 @@ responsibility for ensuring that the data is left in a consistent state. -For details on the entity types available in spaCy's pretrained models, see the +For details on the entity types available in spaCy's trained pipelines, see the "label scheme" sections of the individual models in the [models directory](/models). @@ -710,9 +711,8 @@ import DisplacyEntHtml from 'images/displacy-ent2.html' To ground the named entities into the "real world", spaCy provides functionality to perform entity linking, which resolves a textual entity to a unique identifier from a knowledge base (KB). You can create your own -[`KnowledgeBase`](/api/kb) and -[train a new Entity Linking model](/usage/training#entity-linker) using that -custom-made KB. +[`KnowledgeBase`](/api/kb) and [train](/usage/training) a new +[`EntityLinker`](/api/entitylinker) using that custom knowledge base. ### Accessing entity identifiers {#entity-linking-accessing model="entity linking"} @@ -724,7 +724,7 @@ object, or the `ent_kb_id` and `ent_kb_id_` attributes of a ```python import spacy -nlp = spacy.load("my_custom_el_model") +nlp = spacy.load("my_custom_el_pipeline") doc = nlp("Ada Lovelace was born in London") # Document level @@ -1021,7 +1021,7 @@ expressions – for example, [`compile_suffix_regex`](/api/top-level#util.compile_suffix_regex): ```python -suffixes = nlp.Defaults.suffixes + (r'''-+$''',) +suffixes = nlp.Defaults.suffixes + [r'''-+$''',] suffix_regex = spacy.util.compile_suffix_regex(suffixes) nlp.tokenizer.suffix_search = suffix_regex.search ``` @@ -1042,13 +1042,15 @@ function that behaves the same way. 
-If you're using a statistical model, writing to the +If you've loaded a trained pipeline, writing to the [`nlp.Defaults`](/api/language#defaults) or `English.Defaults` directly won't -work, since the regular expressions are read from the model and will be compiled -when you load it. If you modify `nlp.Defaults`, you'll only see the effect if -you call [`spacy.blank`](/api/top-level#spacy.blank). If you want to modify the -tokenizer loaded from a statistical model, you should modify `nlp.tokenizer` -directly. +work, since the regular expressions are read from the pipeline data and will be +compiled when you load it. If you modify `nlp.Defaults`, you'll only see the +effect if you call [`spacy.blank`](/api/top-level#spacy.blank). If you want to +modify the tokenizer loaded from a trained pipeline, you should modify +`nlp.tokenizer` directly. If you're training your own pipeline, you can register +[callbacks](/usage/training/#custom-code-nlp-callbacks) to modify the `nlp` +object before training. @@ -1218,11 +1220,11 @@ print(doc.text, [token.text for token in doc]) -Keep in mind that your model's result may be less accurate if the tokenization +Keep in mind that your models' results may be less accurate if the tokenization during training differs from the tokenization at runtime. So if you modify a -pretrained model's tokenization afterwards, it may produce very different -predictions. You should therefore train your model with the **same tokenizer** -it will be using at runtime. See the docs on +trained pipeline's tokenization afterwards, it may produce very different +predictions. You should therefore train your pipeline with the **same +tokenizer** it will be using at runtime. See the docs on [training with custom tokenization](#custom-tokenizer-training) for details. @@ -1231,7 +1233,7 @@ it will be using at runtime. See the docs on spaCy's [training config](/usage/training#config) describe the settings, hyperparameters, pipeline and tokenizer used for constructing and training the -model. The `[nlp.tokenizer]` block refers to a **registered function** that +pipeline. The `[nlp.tokenizer]` block refers to a **registered function** that takes the `nlp` object and returns a tokenizer. Here, we're registering a function called `whitespace_tokenizer` in the [`@tokenizers` registry](/api/registry). To make sure spaCy knows how to @@ -1626,11 +1628,11 @@ spaCy provides four alternatives for sentence segmentation: Unlike other libraries, spaCy uses the dependency parse to determine sentence boundaries. This is usually the most accurate approach, but it requires a -**statistical model** that provides accurate predictions. If your texts are +**trained pipeline** that provides accurate predictions. If your texts are closer to general-purpose news or web text, this should work well out-of-the-box -with spaCy's provided models. For social media or conversational text that -doesn't follow the same rules, your application may benefit from a custom model -or rule-based component. +with spaCy's provided trained pipelines. For social media or conversational text +that doesn't follow the same rules, your application may benefit from a custom +trained or rule-based component. ```python ### {executable="true"} @@ -1652,8 +1654,8 @@ parses consistent with the sentence boundaries. The [`SentenceRecognizer`](/api/sentencerecognizer) is a simple statistical component that only provides sentence boundaries. 
Along with being faster and smaller than the parser, its primary advantage is that it's easier to train -custom models because it only requires annotated sentence boundaries rather than -full dependency parses. +because it only requires annotated sentence boundaries rather than full +dependency parses. @@ -1685,7 +1687,7 @@ need sentence boundaries without dependency parses. import spacy from spacy.lang.en import English -nlp = English() # just the language with no model +nlp = English() # just the language with no pipeline nlp.add_pipe("sentencizer") doc = nlp("This is a sentence. This is another sentence.") for sent in doc.sents: @@ -1827,11 +1829,11 @@ or Tomas Mikolov's original [Word2vec implementation](https://code.google.com/archive/p/word2vec/). Most word vector libraries output an easy-to-read text-based format, where each line consists of the word followed by its vector. For everyday use, we want to -convert the vectors model into a binary format that loads faster and takes up -less space on disk. The easiest way to do this is the -[`init model`](/api/cli#init-model) command-line utility. This will output a -spaCy model in the directory `/tmp/la_vectors_wiki_lg`, giving you access to -some nice Latin vectors. You can then pass the directory path to +convert the vectors into a binary format that loads faster and takes up less +space on disk. The easiest way to do this is the +[`init vocab`](/api/cli#init-vocab) command-line utility. This will output a +blank spaCy pipeline in the directory `/tmp/la_vectors_wiki_lg`, giving you +access to some nice Latin vectors. You can then pass the directory path to [`spacy.load`](/api/top-level#spacy.load). > #### Usage example @@ -1845,7 +1847,7 @@ some nice Latin vectors. You can then pass the directory path to ```cli $ wget https://s3-us-west-1.amazonaws.com/fasttext-vectors/word-vectors-v2/cc.la.300.vec.gz -$ python -m spacy init model en /tmp/la_vectors_wiki_lg --vectors-loc cc.la.300.vec.gz +$ python -m spacy init vocab en /tmp/la_vectors_wiki_lg --vectors-loc cc.la.300.vec.gz ``` @@ -1853,13 +1855,13 @@ $ python -m spacy init model en /tmp/la_vectors_wiki_lg --vectors-loc cc.la.300. To help you strike a good balance between coverage and memory usage, spaCy's [`Vectors`](/api/vectors) class lets you map **multiple keys** to the **same row** of the table. If you're using the -[`spacy init model`](/api/cli#init-model) command to create a vocabulary, +[`spacy init vocab`](/api/cli#init-vocab) command to create a vocabulary, pruning the vectors will be taken care of automatically if you set the `--prune-vectors` flag. You can also do it manually in the following steps: -1. Start with a **word vectors model** that covers a huge vocabulary. For +1. Start with a **word vectors package** that covers a huge vocabulary. For instance, the [`en_vectors_web_lg`](/models/en-starters#en_vectors_web_lg) - model provides 300-dimensional GloVe vectors for over 1 million terms of + starter provides 300-dimensional GloVe vectors for over 1 million terms of English. 2. If your vocabulary has values set for the `Lexeme.prob` attribute, the lexemes will be sorted by descending probability to determine which vectors @@ -1900,17 +1902,17 @@ the two words. In the example above, the vector for "Shore" was removed and remapped to the vector of "coast", which is deemed about 73% similar. "Leaving" was remapped to the vector of "leaving", which is identical. 
If you're using the -[`init model`](/api/cli#init-model) command, you can set the `--prune-vectors` +[`init vocab`](/api/cli#init-vocab) command, you can set the `--prune-vectors` option to easily reduce the size of the vectors as you add them to a spaCy -model: +pipeline: ```cli -$ python -m spacy init model en /tmp/la_vectors_web_md --vectors-loc la.300d.vec.tgz --prune-vectors 10000 +$ python -m spacy init vocab en /tmp/la_vectors_web_md --vectors-loc la.300d.vec.tgz --prune-vectors 10000 ``` -This will create a spaCy model with vectors for the first 10,000 words in the -vectors model. All other words in the vectors model are mapped to the closest -vector among those retained. +This will create a blank spaCy pipeline with vectors for the first 10,000 words +in the vectors. All other words in the vectors are mapped to the closest vector +among those retained. @@ -1925,8 +1927,8 @@ possible. You can modify the vectors via the [`Vocab`](/api/vocab) or if you have vectors in an arbitrary format, as you can read in the vectors with your own logic, and just set them with a simple loop. This method is likely to be slower than approaches that work with the whole vectors table at once, but -it's a great approach for once-off conversions before you save out your model to -disk. +it's a great approach for once-off conversions before you save out your `nlp` +object to disk. ```python ### Adding vectors @@ -1978,14 +1980,14 @@ print(nlp2.lang, [token.is_stop for token in nlp2("custom stop")]) The [`@spacy.registry.languages`](/api/top-level#registry) decorator lets you register a custom language class and assign it a string name. This means that you can call [`spacy.blank`](/api/top-level#spacy.blank) with your custom -language name, and even train models with it and refer to it in your +language name, and even train pipelines with it and refer to it in your [training config](/usage/training#config). > #### Config usage > > After registering your custom language class using the `languages` registry, > you can refer to it in your [training config](/usage/training#config). This -> means spaCy will train your model using the custom subclass. +> means spaCy will train your pipeline using the custom subclass. > > ```ini > [nlp] diff --git a/website/docs/usage/models.md b/website/docs/usage/models.md index ec0e02297..9b1e96e4e 100644 --- a/website/docs/usage/models.md +++ b/website/docs/usage/models.md @@ -8,25 +8,24 @@ menu: - ['Production Use', 'production'] --- -spaCy's models can be installed as **Python packages**. This means that they're -a component of your application, just like any other module. They're versioned -and can be defined as a dependency in your `requirements.txt`. Models can be -installed from a download URL or a local directory, manually or via -[pip](https://pypi.python.org/pypi/pip). Their data can be located anywhere on -your file system. +spaCy's trained pipelines can be installed as **Python packages**. This means +that they're a component of your application, just like any other module. +They're versioned and can be defined as a dependency in your `requirements.txt`. +Trained pipelines can be installed from a download URL or a local directory, +manually or via [pip](https://pypi.python.org/pypi/pip). Their data can be +located anywhere on your file system. > #### Important note > -> If you're upgrading to spaCy v3.x, you need to **download the new models**. 
If -> you've trained statistical models that use spaCy's annotations, you should -> **retrain your models** after updating spaCy. If you don't retrain, you may -> suffer train/test skew, which might decrease your accuracy. +> If you're upgrading to spaCy v3.x, you need to **download the new pipeline +> packages**. If you've trained your own pipelines, you need to **retrain** them +> after updating spaCy. ## Quickstart {hidden="true"} import QuickstartModels from 'widgets/quickstart-models.js' - + ## Language support {#languages} @@ -34,14 +33,14 @@ spaCy currently provides support for the following languages. You can help by [improving the existing language data](/usage/adding-languages#language-data) and extending the tokenization patterns. [See here](https://github.com/explosion/spaCy/issues/3056) for details on how to -contribute to model development. +contribute to development. > #### Usage note > -> If a model is available for a language, you can download it using the -> [`spacy download`](/api/cli#download) command. In order to use languages that -> don't yet come with a model, you have to import them directly, or use -> [`spacy.blank`](/api/top-level#spacy.blank): +> If a trained pipeline is available for a language, you can download it using +> the [`spacy download`](/api/cli#download) command. In order to use languages +> that don't yet come with a trained pipeline, you have to import them directly, +> or use [`spacy.blank`](/api/top-level#spacy.blank): > > ```python > from spacy.lang.fi import Finnish @@ -73,13 +72,13 @@ import Languages from 'widgets/languages.js' > nlp = spacy.blank("xx") > ``` -spaCy also supports models trained on more than one language. This is especially -useful for named entity recognition. The language ID used for multi-language or -language-neutral models is `xx`. The language class, a generic subclass -containing only the base language data, can be found in +spaCy also supports pipelines trained on more than one language. This is +especially useful for named entity recognition. The language ID used for +multi-language or language-neutral pipelines is `xx`. The language class, a +generic subclass containing only the base language data, can be found in [`lang/xx`](https://github.com/explosion/spaCy/tree/master/spacy/lang/xx). -To train a model using the neutral multi-language class, you can set +To train a pipeline using the neutral multi-language class, you can set `lang = "xx"` in your [training config](/usage/training#config). You can also import the `MultiLanguage` class directly, or call [`spacy.blank("xx")`](/api/top-level#spacy.blank) for lazy-loading. @@ -111,7 +110,7 @@ The Chinese language class supports three word segmentation options: 3. **PKUSeg**: As of spaCy v2.3.0, support for [PKUSeg](https://github.com/lancopku/PKUSeg-python) has been added to support better segmentation for Chinese OntoNotes and the provided - [Chinese models](/models/zh). Enable PKUSeg with the tokenizer option + [Chinese pipelines](/models/zh). Enable PKUSeg with the tokenizer option `{"segmenter": "pkuseg"}`. @@ -169,9 +168,9 @@ nlp.tokenizer.pkuseg_update_user_dict([], reset=True) - + -The [Chinese models](/models/zh) provided by spaCy include a custom `pkuseg` +The [Chinese pipelines](/models/zh) provided by spaCy include a custom `pkuseg` model trained only on [Chinese OntoNotes 5.0](https://catalog.ldc.upenn.edu/LDC2013T19), since the models provided by `pkuseg` include data restricted to research use. 
For @@ -208,29 +207,29 @@ nlp = Chinese(meta={"tokenizer": {"config": {"pkuseg_model": "/path/to/pkuseg_mo The Japanese language class uses [SudachiPy](https://github.com/WorksApplications/SudachiPy) for word segmentation and part-of-speech tagging. The default Japanese language class and -the provided Japanese models use SudachiPy split mode `A`. The `meta` argument -of the `Japanese` language class can be used to configure the split mode to `A`, -`B` or `C`. +the provided Japanese pipelines use SudachiPy split mode `A`. The `meta` +argument of the `Japanese` language class can be used to configure the split +mode to `A`, `B` or `C`. If you run into errors related to `sudachipy`, which is currently under active development, we suggest downgrading to `sudachipy==0.4.5`, which is the version -used for training the current [Japanese models](/models/ja). +used for training the current [Japanese pipelines](/models/ja). -## Installing and using models {#download} +## Installing and using trained pipelines {#download} -The easiest way to download a model is via spaCy's +The easiest way to download a trained pipeline is via spaCy's [`download`](/api/cli#download) command. It takes care of finding the -best-matching model compatible with your spaCy installation. +best-matching package compatible with your spaCy installation. > #### Important note for v3.0 > -> Note that as of spaCy v3.0, model shortcut links that create (potentially +> Note that as of spaCy v3.0, shortcut links like `en` that create (potentially > brittle) symlinks in your spaCy installation are **deprecated**. To download -> and load an installed model, use its full name: +> and load an installed pipeline package, use its full name: > > ```diff > - python -m spacy download en @@ -243,14 +242,14 @@ best-matching model compatible with your spaCy installation. > ``` ```cli -# Download best-matching version of a model for your spaCy installation +# Download best-matching version of a package for your spaCy installation $ python -m spacy download en_core_web_sm -# Download exact model version +# Download exact package version $ python -m spacy download en_core_web_sm-3.0.0 --direct ``` -The download command will [install the model](/usage/models#download-pip) via +The download command will [install the package](/usage/models#download-pip) via pip and place the package in your `site-packages` directory. ```cli @@ -266,11 +265,11 @@ doc = nlp("This is a sentence.") ### Installation via pip {#download-pip} -To download a model directly using [pip](https://pypi.python.org/pypi/pip), -point `pip install` to the URL or local path of the archive file. To find the -direct link to a model, head over to the -[model releases](https://github.com/explosion/spacy-models/releases), right -click on the archive link and copy it to your clipboard. +To download a trained pipeline directly using +[pip](https://pypi.python.org/pypi/pip), point `pip install` to the URL or local +path of the archive file. To find the direct link to a package, head over to the +[releases](https://github.com/explosion/spacy-models/releases), right click on +the archive link and copy it to your clipboard. ```bash # With external URL @@ -280,60 +279,61 @@ $ pip install https://github.com/explosion/spacy-models/releases/download/en_cor $ pip install /Users/you/en_core_web_sm-3.0.0.tar.gz ``` -By default, this will install the model into your `site-packages` directory. 
You -can then use `spacy.load()` to load it via its package name or +By default, this will install the pipeline package into your `site-packages` +directory. You can then use `spacy.load` to load it via its package name or [import it](#usage-import) explicitly as a module. If you need to download -models as part of an automated process, we recommend using pip with a direct -link, instead of relying on spaCy's [`download`](/api/cli#download) command. +pipeline packages as part of an automated process, we recommend using pip with a +direct link, instead of relying on spaCy's [`download`](/api/cli#download) +command. You can also add the direct download link to your application's `requirements.txt`. For more details, see the section on -[working with models in production](#production). +[working with pipeline packages in production](#production). ### Manual download and installation {#download-manual} In some cases, you might prefer downloading the data manually, for example to -place it into a custom directory. You can download the model via your browser +place it into a custom directory. You can download the package via your browser from the [latest releases](https://github.com/explosion/spacy-models/releases), or configure your own download script using the URL of the archive file. The -archive consists of a model directory that contains another directory with the -model data. +archive consists of a package directory that contains another directory with the +pipeline data. ```yaml ### Directory structure {highlight="6"} └── en_core_web_md-3.0.0.tar.gz # downloaded archive ├── setup.py # setup file for pip installation - ├── meta.json # copy of model meta - └── en_core_web_md # 📦 model package + ├── meta.json # copy of pipeline meta + └── en_core_web_md # 📦 pipeline package ├── __init__.py # init for pip installation - └── en_core_web_md-3.0.0 # model data - ├── config.cfg # model config - ├── meta.json # model meta + └── en_core_web_md-3.0.0 # pipeline data + ├── config.cfg # pipeline config + ├── meta.json # pipeline meta └── ... # directories with component data ``` -You can place the **model package directory** anywhere on your local file +You can place the **pipeline package directory** anywhere on your local file system. -### Using models with spaCy {#usage} +### Using trained pipelines with spaCy {#usage} -To load a model, use [`spacy.load`](/api/top-level#spacy.load) with the model's -package name or a path to the data directory: +To load a pipeline package, use [`spacy.load`](/api/top-level#spacy.load) with +the package name or a path to the data directory: > #### Important note for v3.0 > -> Note that as of spaCy v3.0, model shortcut links that create (potentially -> brittle) symlinks in your spaCy installation are **deprecated**. To load an -> installed model, use its full name: +> Note that as of spaCy v3.0, shortcut links like `en` that create (potentially +> brittle) symlinks in your spaCy installation are **deprecated**. 
To download +> and load an installed pipeline package, use its full name: > > ```diff -> - nlp = spacy.load("en") -> + nlp = spacy.load("en_core_web_sm") +> - python -m spacy download en +> + python -m spacy download en_core_web_sm > ``` ```python import spacy -nlp = spacy.load("en_core_web_sm") # load model package "en_core_web_sm" +nlp = spacy.load("en_core_web_sm") # load package "en_core_web_sm" nlp = spacy.load("/path/to/en_core_web_sm") # load package from a directory doc = nlp("This is a sentence.") @@ -342,17 +342,18 @@ doc = nlp("This is a sentence.") You can use the [`info`](/api/cli#info) command or -[`spacy.info()`](/api/top-level#spacy.info) method to print a model's meta data -before loading it. Each `Language` object with a loaded model also exposes the -model's meta data as the attribute `meta`. For example, `nlp.meta['version']` -will return the model's version. +[`spacy.info()`](/api/top-level#spacy.info) method to print a pipeline +package's meta data before loading it. Each `Language` object with a loaded +pipeline also exposes the pipeline's meta data as the attribute `meta`. For +example, `nlp.meta['version']` will return the package version. -### Importing models as modules {#usage-import} +### Importing pipeline packages as modules {#usage-import} -If you've installed a model via spaCy's downloader, or directly via pip, you can -also `import` it and then call its `load()` method with no arguments: +If you've installed a trained pipeline via [`spacy download`](/api/cli#download) +or directly via pip, you can also `import` it and then call its `load()` method +with no arguments: ```python ### {executable="true"} @@ -362,51 +363,38 @@ nlp = en_core_web_sm.load() doc = nlp("This is a sentence.") ``` -How you choose to load your models ultimately depends on personal preference. -However, **for larger code bases**, we usually recommend native imports, as this -will make it easier to integrate models with your existing build process, -continuous integration workflow and testing framework. It'll also prevent you -from ever trying to load a model that is not installed, as your code will raise -an `ImportError` immediately, instead of failing somewhere down the line when -calling `spacy.load()`. +How you choose to load your trained pipelines ultimately depends on personal +preference. However, **for larger code bases**, we usually recommend native +imports, as this will make it easier to integrate pipeline packages with your +existing build process, continuous integration workflow and testing framework. +It'll also prevent you from ever trying to load a package that is not installed, +as your code will raise an `ImportError` immediately, instead of failing +somewhere down the line when calling `spacy.load()`. For more details, see the +section on [working with pipeline packages in production](#production). -For more details, see the section on -[working with models in production](#production). +## Using trained pipelines in production {#production} -### Using your own models {#own-models} - -If you've trained your own model, for example for -[additional languages](/usage/adding-languages) or -[custom named entities](/usage/training#ner), you can save its state using the -[`Language.to_disk()`](/api/language#to_disk) method. To make the model more -convenient to deploy, we recommend wrapping it as a Python package. - -For more information and a detailed guide on how to package your model, see the -documentation on [saving and loading models](/usage/saving-loading#models). 
- -## Using models in production {#production} - -If your application depends on one or more models, you'll usually want to -integrate them into your continuous integration workflow and build process. -While spaCy provides a range of useful helpers for downloading, linking and -loading models, the underlying functionality is entirely based on native Python -packages. This allows your application to handle a model like any other package -dependency. +If your application depends on one or more trained pipeline packages, you'll +usually want to integrate them into your continuous integration workflow and +build process. While spaCy provides a range of useful helpers for downloading +and loading pipeline packages, the underlying functionality is entirely based on +native Python packaging. This allows your application to handle a spaCy pipeline +like any other package dependency. -### Downloading and requiring model dependencies {#models-download} +### Downloading and requiring package dependencies {#models-download} spaCy's built-in [`download`](/api/cli#download) command is mostly intended as a convenient, interactive wrapper. It performs compatibility checks and prints -detailed error messages and warnings. However, if you're downloading models as -part of an automated build process, this only adds an unnecessary layer of -complexity. If you know which models your application needs, you should be -specifying them directly. +detailed error messages and warnings. However, if you're downloading pipeline +packages as part of an automated build process, this only adds an unnecessary +layer of complexity. If you know which packages your application needs, you +should be specifying them directly. -Because all models are valid Python packages, you can add them to your +Because pipeline packages are valid Python packages, you can add them to your application's `requirements.txt`. If you're running your own internal PyPi -installation, you can upload the models there. pip's +installation, you can upload the pipeline packages there. pip's [requirements file format](https://pip.pypa.io/en/latest/reference/pip_install/#requirements-file-format) supports both package names to download via a PyPi server, as well as direct URLs. @@ -422,17 +410,17 @@ the download URL. This way, the package won't be re-downloaded and overwritten if it's already installed - just like when you're downloading a package from PyPi. -All models are versioned and specify their spaCy dependency. This ensures -cross-compatibility and lets you specify exact version requirements for each -model. If you've trained your own model, you can use the -[`package`](/api/cli#package) command to generate the required meta data and -turn it into a loadable package. +All pipeline packages are versioned and specify their spaCy dependency. This +ensures cross-compatibility and lets you specify exact version requirements for +each pipeline. If you've [trained](/usage/training) your own pipeline, you can +use the [`spacy package`](/api/cli#package) command to generate the required +meta data and turn it into a loadable package. 
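For instance, the packaging step in a build script might look roughly like this – a sketch where both paths are placeholders for your trained pipeline and an output directory:

```cli
# Generate an installable Python package from a trained pipeline
$ python -m spacy package /path/to/trained_pipeline /path/to/output
```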
-### Loading and testing models {#models-loading} +### Loading and testing pipeline packages {#models-loading} -Models are regular Python packages, so you can also import them as a package -using Python's native `import` syntax, and then call the `load` method to load -the model data and return an `nlp` object: +Pipeline packages are regular Python packages, so you can also import them as a +package using Python's native `import` syntax, and then call the `load` method +to load the data and return an `nlp` object: ```python import en_core_web_sm @@ -440,16 +428,17 @@ nlp = en_core_web_sm.load() ``` In general, this approach is recommended for larger code bases, as it's more -"native", and doesn't depend on symlinks or rely on spaCy's loader to resolve -string names to model packages. If a model can't be imported, Python will raise -an `ImportError` immediately. And if a model is imported but not used, any -linter will catch that. +"native", and doesn't rely on spaCy's loader to resolve string names to +packages. If a package can't be imported, Python will raise an `ImportError` +immediately. And if a package is imported but not used, any linter will catch +that. Similarly, it'll give you more flexibility when writing tests that require -loading models. For example, instead of writing your own `try` and `except` +loading pipelines. For example, instead of writing your own `try` and `except` logic around spaCy's loader, you can use [pytest](http://pytest.readthedocs.io/en/latest/)'s [`importorskip()`](https://docs.pytest.org/en/latest/builtin.html#_pytest.outcomes.importorskip) -method to only run a test if a specific model or model version is installed. -Each model package exposes a `__version__` attribute which you can also use to -perform your own version compatibility checks before loading a model. +method to only run a test if a specific pipeline package or version is +installed. Each pipeline package exposes a `__version__` attribute which +you can also use to perform your own version compatibility checks before loading +it. diff --git a/website/docs/usage/processing-pipelines.md b/website/docs/usage/processing-pipelines.md index 3636aa3c2..2885d9f50 100644 --- a/website/docs/usage/processing-pipelines.md +++ b/website/docs/usage/processing-pipelines.md @@ -42,8 +42,8 @@ texts = ["This is a text", "These are lots of texts", "..."] - Only apply the **pipeline components you need**. Getting predictions from the model that you don't actually need adds up and becomes very inefficient at scale. To prevent this, use the `disable` keyword argument to disable - components you don't need – either when loading a model, or during processing - with `nlp.pipe`. See the section on + components you don't need – either when loading a pipeline, or during + processing with `nlp.pipe`. See the section on [disabling pipeline components](#disabling) for more details and examples. @@ -95,7 +95,7 @@ spaCy makes it very easy to create your own pipelines consisting of reusable components – this includes spaCy's default tagger, parser and entity recognizer, but also your own custom processing functions. A pipeline component can be added to an already existing `nlp` object, specified when initializing a `Language` -class, or defined within a [model package](/usage/saving-loading#models). +class, or defined within a [pipeline package](/usage/saving-loading#models). > #### config.cfg (excerpt) > @@ -115,7 +115,7 @@ class, or defined within a [model package](/usage/saving-loading#models). 
> # Settings for the parser component > ``` -When you load a model, spaCy first consults the model's +When you load a pipeline, spaCy first consults the [`meta.json`](/usage/saving-loading#models) and [`config.cfg`](/usage/training#config). The config tells spaCy what language class to use, which components are in the pipeline, and how those components @@ -131,8 +131,7 @@ should be created. spaCy will then do the following: component with with [`add_pipe`](/api/language#add_pipe). The settings are passed into the factory. 3. Make the **model data** available to the `Language` class by calling - [`from_disk`](/api/language#from_disk) with the path to the model data - directory. + [`from_disk`](/api/language#from_disk) with the path to the data directory. So when you call this... @@ -140,27 +139,27 @@ So when you call this... nlp = spacy.load("en_core_web_sm") ``` -... the model's `config.cfg` tells spaCy to use the language `"en"` and the +... the pipeline's `config.cfg` tells spaCy to use the language `"en"` and the pipeline `["tagger", "parser", "ner"]`. spaCy will then initialize `spacy.lang.en.English`, and create each pipeline component and add it to the -processing pipeline. It'll then load in the model's data from its data directory +processing pipeline. It'll then load in the model data from the data directory and return the modified `Language` class for you to use as the `nlp` object. spaCy v3.0 introduces a `config.cfg`, which includes more detailed settings for -the model pipeline, its components and the -[training process](/usage/training#config). You can export the config of your -current `nlp` object by calling [`nlp.config.to_disk`](/api/language#config). +the pipeline, its components and the [training process](/usage/training#config). +You can export the config of your current `nlp` object by calling +[`nlp.config.to_disk`](/api/language#config). -Fundamentally, a [spaCy model](/models) consists of three components: **the -weights**, i.e. binary data loaded in from a directory, a **pipeline** of +Fundamentally, a [spaCy pipeline package](/models) consists of three components: +**the weights**, i.e. binary data loaded in from a directory, a **pipeline** of functions called in order, and **language data** like the tokenization rules and -language-specific settings. For example, a Spanish NER model requires different -weights, language data and pipeline components than an English parsing and -tagging model. This is also why the pipeline state is always held by the +language-specific settings. For example, a Spanish NER pipeline requires +different weights, language data and components than an English parsing and +tagging pipeline. This is also why the pipeline state is always held by the `Language` class. [`spacy.load`](/api/top-level#spacy.load) puts this all together and returns an instance of `Language` with a pipeline set and access to the binary data: @@ -175,7 +174,7 @@ cls = spacy.util.get_lang_class(lang) # 1. Get Language class, e.g. English nlp = cls() # 2. Initialize it for name in pipeline: nlp.add_pipe(name) # 3. Add the component to the pipeline -nlp.from_disk(model_data_path) # 4. Load in the binary data +nlp.from_disk(data_path) # 4. Load in the binary data ``` When you call `nlp` on a text, spaCy will **tokenize** it and then **call each @@ -243,28 +242,29 @@ tagger or the parser, you can **disable or exclude** it. This can sometimes make a big difference and improve loading and inference speed. There are two different mechanisms you can use: -1. 
**Disable:** The component and its data will be loaded with the model, but it - will be disabled by default and not run as part of the processing pipeline. - To run it, you can explicitly enable it by calling +1. **Disable:** The component and its data will be loaded with the pipeline, but + it will be disabled by default and not run as part of the processing + pipeline. To run it, you can explicitly enable it by calling [`nlp.enable_pipe`](/api/language#enable_pipe). When you save out the `nlp` object, the disabled component will be included but disabled by default. -2. **Exclude:** Don't load the component and its data with the model. Once the - model is loaded, there will be no reference to the excluded component. +2. **Exclude:** Don't load the component and its data with the pipeline. Once + the pipeline is loaded, there will be no reference to the excluded component. Disabled and excluded component names can be provided to [`spacy.load`](/api/top-level#spacy.load) as a list. -> #### 💡 Models with optional components +> #### 💡 Optional pipeline components > -> The `disable` mechanism makes it easy to distribute models with optional -> components that you can enable or disable at runtime. For instance, your model -> may include a statistical _and_ a rule-based component for sentence -> segmentation, and you can choose which one to run depending on your use case. +> The `disable` mechanism makes it easy to distribute pipeline packages with +> optional components that you can enable or disable at runtime. For instance, +> your pipeline may include a statistical _and_ a rule-based component for +> sentence segmentation, and you can choose which one to run depending on your +> use case. ```python -# Load the model without the entity recognizer +# Load the pipeline without the entity recognizer nlp = spacy.load("en_core_web_sm", exclude=["ner"]) # Load the tagger and parser but don't enable them @@ -358,25 +358,25 @@ run as part of the pipeline. | `nlp.component_names` | All component names, including disabled components. | | `nlp.disabled` | Names of components that are currently disabled. | -### Sourcing pipeline components from existing models {#sourced-components new="3"} +### Sourcing components from existing pipelines {#sourced-components new="3"} -Pipeline components that are independent can also be reused across models. -Instead of adding a new blank component to a pipeline, you can also copy an -existing component from a pretrained model by setting the `source` argument on +Pipeline components that are independent can also be reused across pipelines. +Instead of adding a new blank component, you can also copy an existing component +from a trained pipeline by setting the `source` argument on [`nlp.add_pipe`](/api/language#add_pipe). The first argument will then be interpreted as the name of the component in the source pipeline – for instance, `"ner"`. This is especially useful for -[training a model](/usage/training#config-components) because it lets you mix -and match components and create fully custom model packages with updated -pretrained components and new components trained on your data. +[training a pipeline](/usage/training#config-components) because it lets you mix +and match components and create fully custom pipeline packages with updated +trained components and new components trained on your data. - + -When reusing components across models, keep in mind that the **vocabulary**, -**vectors** and model settings **must match**. 
If a pretrained model includes +When reusing components across pipelines, keep in mind that the **vocabulary**, +**vectors** and model settings **must match**. If a trained pipeline includes [word vectors](/usage/linguistic-features#vectors-similarity) and the component -uses them as features, the model you copy it to needs to have the _same_ vectors -available – otherwise, it won't be able to make the same predictions. +uses them as features, the pipeline you copy it to needs to have the _same_ +vectors available – otherwise, it won't be able to make the same predictions. @@ -384,7 +384,7 @@ available – otherwise, it won't be able to make the same predictions. > > Instead of providing a `factory`, component blocks in the training > [config](/usage/training#config) can also define a `source`. The string needs -> to be a loadable spaCy model package or path. The +> to be a loadable spaCy pipeline package or path. The > > ```ini > [components.ner] @@ -404,11 +404,11 @@ available – otherwise, it won't be able to make the same predictions. ### {executable="true"} import spacy -# The source model with different components +# The source pipeline with different components source_nlp = spacy.load("en_core_web_sm") print(source_nlp.pipe_names) -# Add only the entity recognizer to the new blank model +# Add only the entity recognizer to the new blank pipeline nlp = spacy.blank("en") nlp.add_pipe("ner", source=source_nlp) print(nlp.pipe_names) @@ -535,8 +535,8 @@ only being able to modify it afterwards. The [`@Language.component`](/api/language#component) decorator lets you turn a simple function into a pipeline component. It takes at least one argument, the **name** of the component factory. You can use this name to add an instance of -your component to the pipeline. It can also be listed in your model config, so -you can save, load and train models using your component. +your component to the pipeline. It can also be listed in your pipeline config, +so you can save, load and train pipelines using your component. Custom components can be added to the pipeline using the [`add_pipe`](/api/language#add_pipe) method. Optionally, you can either specify @@ -838,16 +838,24 @@ If what you're passing in isn't JSON-serializable – e.g. a custom object like [model](#trainable-components) – saving out the component config becomes impossible because there's no way for spaCy to know _how_ that object was created, and what to do to create it again. This makes it much harder to save, -load and train custom models with custom components. A simple solution is to +load and train custom pipelines with custom components. A simple solution is to **register a function** that returns your resources. The [registry](/api/top-level#registry) lets you **map string names to functions** that create objects, so given a name and optional arguments, spaCy will know how -to recreate the object. To register a function that returns a custom asset, you -can use the `@spacy.registry.assets` decorator with a single argument, the name: +to recreate the object. To register a function that returns your custom +dictionary, you can use the `@spacy.registry.misc` decorator with a single +argument, the name: + +> #### What's the misc registry? +> +> The [`registry`](/api/top-level#registry) provides different categories for +> different types of functions – for example, model architectures, tokenizers or +> batchers. `misc` is intended for miscellaneous functions that don't fit +> anywhere else. 
```python ### Registered function for assets {highlight="1"} -@spacy.registry.assets("acronyms.slang_dict.v1") +@spacy.registry.misc("acronyms.slang_dict.v1") def create_acronyms_slang_dict(): dictionary = {"lol": "laughing out loud", "brb": "be right back"} dictionary.update({value: key for key, value in dictionary.items()}) @@ -856,9 +864,9 @@ def create_acronyms_slang_dict(): In your `default_config` (and later in your [training config](/usage/training#config)), you can now refer to the function -registered under the name `"acronyms.slang_dict.v1"` using the `@assets` key. -This tells spaCy how to create the value, and when your component is created, -the result of the registered function is passed in as the key `"dictionary"`. +registered under the name `"acronyms.slang_dict.v1"` using the `@misc` key. This +tells spaCy how to create the value, and when your component is created, the +result of the registered function is passed in as the key `"dictionary"`. > #### config.cfg > @@ -867,21 +875,21 @@ the result of the registered function is passed in as the key `"dictionary"`. > factory = "acronyms" > > [components.acronyms.dictionary] -> @assets = "acronyms.slang_dict.v1" +> @misc = "acronyms.slang_dict.v1" > ``` ```diff - default_config = {"dictionary:" DICTIONARY} -+ default_config = {"dictionary": {"@assets": "acronyms.slang_dict.v1"}} ++ default_config = {"dictionary": {"@misc": "acronyms.slang_dict.v1"}} ``` Using a registered function also means that you can easily include your custom -components in models that you [train](/usage/training). To make sure spaCy knows -where to find your custom `@assets` function, you can pass in a Python file via -the argument `--code`. If someone else is using your component, all they have to -do to customize the data is to register their own function and swap out the -name. Registered functions can also take **arguments** by the way that can be -defined in the config as well – you can read more about this in the docs on +components in pipelines that you [train](/usage/training). To make sure spaCy +knows where to find your custom `@misc` function, you can pass in a Python file +via the argument `--code`. If someone else is using your component, all they +have to do to customize the data is to register their own function and swap out +the name. Registered functions can also take **arguments** by the way that can +be defined in the config as well – you can read more about this in the docs on [training with custom code](/usage/training#custom-code). ### Python type hints and pydantic validation {#type-hints new="3"} @@ -1121,7 +1129,14 @@ loss is calculated and to add evaluation scores to the training output. | [`get_loss`](/api/pipe#get_loss) | Return a tuple of the loss and the gradient for a batch of [`Example`](/api/example) objects. | | [`score`](/api/pipe#score) | Score a batch of [`Example`](/api/example) objects and return a dictionary of scores. The [`@Language.factory`](/api/language#factory) decorator can define the `default_socre_weights` of the component to decide which keys of the scores to display during training and how they count towards the final score. | - + + +For more details on how to implement your own trainable components and model +architectures, and plug existing models implemented in PyTorch or TensorFlow +into your spaCy pipeline, see the usage guide on +[layers and model architectures](/usage/layers-architectures#components). 
+ + ## Extension attributes {#custom-components-attributes new="2"} @@ -1322,9 +1337,9 @@ While it's generally recommended to use the `Doc._`, `Span._` and `Token._` proxies to add your own custom attributes, spaCy offers a few exceptions to allow **customizing the built-in methods** like [`Doc.similarity`](/api/doc#similarity) or [`Doc.vector`](/api/doc#vector) with -your own hooks, which can rely on statistical models you train yourself. For -instance, you can provide your own on-the-fly sentence segmentation algorithm or -document similarity method. +your own hooks, which can rely on components you train yourself. For instance, +you can provide your own on-the-fly sentence segmentation algorithm or document +similarity method. Hooks let you customize some of the behaviors of the `Doc`, `Span` or `Token` objects by adding a component to the pipeline. For instance, to customize the @@ -1456,13 +1471,13 @@ function that takes a `Doc`, modifies it and returns it. method. However, a third-party extension should **never silently overwrite built-ins**, or attributes set by other extensions. -- If you're looking to publish a model that depends on a custom pipeline - component, you can either **require it** in the model package's dependencies, - or – if the component is specific and lightweight – choose to **ship it with - your model package**. Just make sure the +- If you're looking to publish a pipeline package that depends on a custom + pipeline component, you can either **require it** in the package's + dependencies, or – if the component is specific and lightweight – choose to + **ship it with your pipeline package**. Just make sure the [`@Language.component`](/api/language#component) or [`@Language.factory`](/api/language#factory) decorator that registers the - custom component runs in your model's `__init__.py` or is exposed via an + custom component runs in your package's `__init__.py` or is exposed via an [entry point](/usage/saving-loading#entry-points). - Once you're ready to share your extension with others, make sure to **add docs @@ -1511,9 +1526,9 @@ def custom_ner_wrapper(doc): return doc ``` -The `custom_ner_wrapper` can then be added to the pipeline of a blank model -using [`nlp.add_pipe`](/api/language#add_pipe). You can also replace the -existing entity recognizer of a pretrained model with +The `custom_ner_wrapper` can then be added to a blank pipeline using +[`nlp.add_pipe`](/api/language#add_pipe). You can also replace the existing +entity recognizer of a trained pipeline with [`nlp.replace_pipe`](/api/language#replace_pipe). Here's another example of a custom model, `your_custom_model`, that takes a list diff --git a/website/docs/usage/projects.md b/website/docs/usage/projects.md index 97a0caed8..b6688cd5d 100644 --- a/website/docs/usage/projects.md +++ b/website/docs/usage/projects.md @@ -20,17 +20,13 @@ menu: spaCy projects let you manage and share **end-to-end spaCy workflows** for different **use cases and domains**, and orchestrate training, packaging and -serving your custom models. You can start off by cloning a pre-defined project -template, adjust it to fit your needs, load in your data, train a model, export -it as a Python package, upload your outputs to a remote storage and share your -results with your team. spaCy projects can be used via the new +serving your custom pipelines. 
You can start off by cloning a pre-defined +project template, adjust it to fit your needs, load in your data, train a +pipeline, export it as a Python package, upload your outputs to a remote storage +and share your results with your team. spaCy projects can be used via the new [`spacy project`](/api/cli#project) command and we provide templates in our [`projects`](https://github.com/explosion/projects) repo. - - - - ![Illustration of project workflow and commands](../images/projects.svg) @@ -324,17 +320,17 @@ others are running your project with the same data. Each command defined in the `project.yml` can optionally define a list of dependencies and outputs. These are the files the command requires and creates. -For example, a command for training a model may depend on a +For example, a command for training a pipeline may depend on a [`config.cfg`](/usage/training#config) and the training and evaluation data, and -it will export a directory `model-best`, containing the best model, which you -can then re-use in other commands. +it will export a directory `model-best`, which you can then re-use in other +commands. ```yaml ### project.yml commands: - name: train - help: 'Train a spaCy model using the specified corpus and config' + help: 'Train a spaCy pipeline using the specified corpus and config' script: - 'python -m spacy train ./configs/config.cfg -o training/ --paths.train ./corpus/training.spacy --paths.dev ./corpus/evaluation.spacy' deps: @@ -392,14 +388,14 @@ directory: ├── project.yml # the project settings ├── project.lock # lockfile that tracks inputs/outputs ├── assets/ # downloaded data assets -├── configs/ # model config.cfg files used for training +├── configs/ # pipeline config.cfg files used for training ├── corpus/ # output directory for training corpus -├── metas/ # model meta.json templates used for packaging +├── metas/ # pipeline meta.json templates used for packaging ├── metrics/ # output directory for evaluation metrics ├── notebooks/ # directory for Jupyter notebooks -├── packages/ # output directory for model Python packages +├── packages/ # output directory for pipeline Python packages ├── scripts/ # directory for scripts, e.g. referenced in commands -├── training/ # output directory for trained models +├── training/ # output directory for trained pipelines └── ... # any other files, like a requirements.txt etc. ``` @@ -426,7 +422,7 @@ report: ### project.yml commands: - name: test - help: 'Test the trained model' + help: 'Test the trained pipeline' script: - 'pip install pytest pytest-html' - 'python -m pytest ./scripts/tests --html=metrics/test-report.html' @@ -440,8 +436,8 @@ commands: Adding `training/model-best` to the command's `deps` lets you ensure that the file is available. If not, spaCy will show an error and the command won't run. Setting `no_skip: true` means that the command will always run, even if the -dependencies (the trained model) hasn't changed. This makes sense here, because -you typically don't want to skip your tests. +dependencies (the trained pipeline) haven't changed. This makes sense here, +because you typically don't want to skip your tests. ### Writing custom scripts {#custom-scripts} @@ -554,7 +550,7 @@ notebooks with usage examples. -It's typically not a good idea to check large data assets, trained models or +It's typically not a good idea to check large data assets, trained pipelines or other artifacts into a Git repo and you should exclude them from your project template by adding a `.gitignore`. 
If you want to version your data and models, check out [Data Version Control](#dvc) (DVC), which integrates with spaCy @@ -566,7 +562,7 @@ projects. You can persist your project outputs to a remote storage using the [`project push`](/api/cli#project-push) command. This can help you **export** -your model packages, **share** work with your team, or **cache results** to +your pipeline packages, **share** work with your team, or **cache results** to avoid repeating work. The [`project pull`](/api/cli#project-pull) command will download any outputs that are in the remote storage and aren't available locally. @@ -622,7 +618,7 @@ For instance, let's say you had the following command in your `project.yml`: ```yaml ### project.yml - name: train - help: 'Train a spaCy model using the specified corpus and config' + help: 'Train a spaCy pipeline using the specified corpus and config' script: - 'spacy train ./config.cfg --output training/' deps: @@ -814,8 +810,8 @@ mattis pretium. [Streamlit](https://streamlit.io) is a Python framework for building interactive data apps. The [`spacy-streamlit`](https://github.com/explosion/spacy-streamlit) package helps you integrate spaCy visualizations into your Streamlit apps and -quickly spin up demos to explore your models interactively. It includes a full -embedded visualizer, as well as individual components. +quickly spin up demos to explore your pipelines interactively. It includes a +full embedded visualizer, as well as individual components. ```bash $ pip install spacy_streamlit @@ -829,11 +825,11 @@ $ pip install spacy_streamlit Using [`spacy-streamlit`](https://github.com/explosion/spacy-streamlit), your projects can easily define their own scripts that spin up an interactive -visualizer, using the latest model you trained, or a selection of models so you -can compare their results. The following script starts an +visualizer, using the latest pipeline you trained, or a selection of pipelines +so you can compare their results. The following script starts an [NER visualizer](/usage/visualizers#ent) and takes two positional command-line -argument you can pass in from your `config.yml`: a comma-separated list of model -paths and an example text to use as the default text. +argument you can pass in from your `config.yml`: a comma-separated list of paths +to load the pipelines from and an example text to use as the default text. ```python ### scripts/visualize.py @@ -841,8 +837,8 @@ import spacy_streamlit import sys DEFAULT_TEXT = sys.argv[2] if len(sys.argv) >= 3 else "" -MODELS = [name.strip() for name in sys.argv[1].split(",")] -spacy_streamlit.visualize(MODELS, DEFAULT_TEXT, visualizers=["ner"]) +PIPELINES = [name.strip() for name in sys.argv[1].split(",")] +spacy_streamlit.visualize(PIPELINES, DEFAULT_TEXT, visualizers=["ner"]) ``` > #### Example usage @@ -856,7 +852,7 @@ spacy_streamlit.visualize(MODELS, DEFAULT_TEXT, visualizers=["ner"]) ### project.yml commands: - name: visualize - help: "Visualize the model's output interactively using Streamlit" + help: "Visualize the pipeline's output interactively using Streamlit" script: - 'streamlit run ./scripts/visualize.py ./training/model-best "I like Adidas shoes."' deps: @@ -879,8 +875,8 @@ mattis pretium. for building REST APIs with Python, based on Python [type hints](https://fastapi.tiangolo.com/python-types/). It's become a popular library for serving machine learning models and you can use it in your spaCy -projects to quickly serve up a trained model and make it available behind a REST -API. 
+projects to quickly serve up a trained pipeline and make it available behind a +REST API. ```python # TODO: show an example that addresses some of the main concerns for serving ML (workers etc.) @@ -897,7 +893,7 @@ API. ### project.yml commands: - name: serve - help: "Serve the trained model with FastAPI" + help: "Serve the trained pipeline with FastAPI" script: - 'python ./scripts/serve.py ./training/model-best' deps: diff --git a/website/docs/usage/rule-based-matching.md b/website/docs/usage/rule-based-matching.md index a589c556e..01d60ddb8 100644 --- a/website/docs/usage/rule-based-matching.md +++ b/website/docs/usage/rule-based-matching.md @@ -4,6 +4,7 @@ teaser: Find phrases and tokens, and match entities menu: - ['Token Matcher', 'matcher'] - ['Phrase Matcher', 'phrasematcher'] + - ['Dependency Matcher', 'dependencymatcher'] - ['Entity Ruler', 'entityruler'] - ['Models & Rules', 'models-rules'] --- @@ -759,7 +760,7 @@ whitespace, making them easy to match as well. from spacy.lang.en import English from spacy.matcher import Matcher -nlp = English() # We only want the tokenizer, so no need to load a model +nlp = English() # We only want the tokenizer, so no need to load a pipeline matcher = Matcher(nlp.vocab) pos_emoji = ["😀", "😃", "😂", "🤣", "😊", "😍"] # Positive emoji @@ -893,12 +894,13 @@ pattern covering the exact tokenization of the term. To create the patterns, each phrase has to be processed with the `nlp` object. -If you have a model loaded, doing this in a loop or list comprehension can -easily become inefficient and slow. If you **only need the tokenization and -lexical attributes**, you can run [`nlp.make_doc`](/api/language#make_doc) -instead, which will only run the tokenizer. For an additional speed boost, you -can also use the [`nlp.tokenizer.pipe`](/api/tokenizer#pipe) method, which will -process the texts as a stream. +If you have a trained pipeline loaded, doing this in a loop or list +comprehension can easily become inefficient and slow. If you **only need the +tokenization and lexical attributes**, you can run +[`nlp.make_doc`](/api/language#make_doc) instead, which will only run the +tokenizer. For an additional speed boost, you can also use the +[`nlp.tokenizer.pipe`](/api/tokenizer#pipe) method, which will process the texts +as a stream. ```diff - patterns = [nlp(term) for term in LOTS_OF_TERMS] @@ -938,10 +940,10 @@ object patterns as efficiently as possible and without running any of the other pipeline components. If the token attribute you want to match on are set by a pipeline component, **make sure that the pipeline component runs** when you create the pattern. For example, to match on `POS` or `LEMMA`, the pattern `Doc` -objects need to have part-of-speech tags set by the `tagger`. You can either -call the `nlp` object on your pattern texts instead of `nlp.make_doc`, or use -[`nlp.select_pipes`](/api/language#select_pipes) to disable components -selectively. +objects need to have part-of-speech tags set by the `tagger` or `morphologizer`. +You can either call the `nlp` object on your pattern texts instead of +`nlp.make_doc`, or use [`nlp.select_pipes`](/api/language#select_pipes) to +disable components selectively. @@ -972,12 +974,289 @@ to match phrases with the same sequence of punctuation and non-punctuation tokens as the pattern. But this can easily get confusing and doesn't have much of an advantage over writing one or two token patterns. 
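
As a quick illustration of the point above, here's a minimal sketch – assuming `en_core_web_sm` is installed and its tagger and lemmatizer produce the expected lemmas – that creates `PhraseMatcher` patterns with `attr="LEMMA"` while using [`nlp.select_pipes`](/api/language#select_pipes) to switch off the components the patterns don't need:

```python
### Creating LEMMA patterns with select_pipes
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")
matcher = PhraseMatcher(nlp.vocab, attr="LEMMA")

# The patterns are matched on LEMMA, so the pattern texts need to be processed
# by the components that set lemmas – but we can switch off the components we
# don't need while the patterns are created.
with nlp.select_pipes(disable=["parser", "ner"]):
    patterns = list(nlp.pipe(["buy flower", "sell book"]))
matcher.add("PURCHASE_PHRASES", patterns)

doc = nlp("Alice bought flowers on her way home.")
for match_id, start, end in matcher(doc):
    print(doc[start:end].text)  # e.g. "bought flowers", if the lemmas line up
```

Disabling the parser and entity recognizer while the patterns are created keeps pattern construction fast, but still runs the components that set the `LEMMA` attribute the matcher compares against.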
+## Dependency Matcher {#dependencymatcher new="3" model="parser"} + +The [`DependencyMatcher`](/api/dependencymatcher) lets you match patterns within +the dependency parse using +[Semgrex](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html) +operators. It requires a model containing a parser such as the +[`DependencyParser`](/api/dependencyparser). Instead of defining a list of +adjacent tokens as in `Matcher` patterns, the `DependencyMatcher` patterns match +tokens in the dependency parse and specify the relations between them. + +> ```python +> ### Example +> from spacy.matcher import DependencyMatcher +> +> # "[subject] ... initially founded" +> pattern = [ +> # anchor token: founded +> { +> "RIGHT_ID": "founded", +> "RIGHT_ATTRS": {"ORTH": "founded"} +> }, +> # founded -> subject +> { +> "LEFT_ID": "founded", +> "REL_OP": ">", +> "RIGHT_ID": "subject", +> "RIGHT_ATTRS": {"DEP": "nsubj"} +> }, +> # "founded" follows "initially" +> { +> "LEFT_ID": "founded", +> "REL_OP": ";", +> "RIGHT_ID": "initially", +> "RIGHT_ATTRS": {"ORTH": "initially"} +> } +> ] +> +> matcher = DependencyMatcher(nlp.vocab) +> matcher.add("FOUNDED", [pattern]) +> matches = matcher(doc) +> ``` + +A pattern added to the dependency matcher consists of a **list of +dictionaries**, with each dictionary describing a **token to match** and its +**relation to an existing token** in the pattern. Except for the first +dictionary, which defines an anchor token using only `RIGHT_ID` and +`RIGHT_ATTRS`, each pattern should have the following keys: + +| Name | Description | +| ------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `LEFT_ID` | The name of the left-hand node in the relation, which has been defined in an earlier node. ~~str~~ | +| `REL_OP` | An operator that describes how the two nodes are related. ~~str~~ | +| `RIGHT_ID` | A unique name for the right-hand node in the relation. ~~str~~ | +| `RIGHT_ATTRS` | The token attributes to match for the right-hand node in the same format as patterns provided to the regular token-based [`Matcher`](/api/matcher). ~~Dict[str, Any]~~ | + +Each additional token added to the pattern is linked to an existing token +`LEFT_ID` by the relation `REL_OP`. The new token is given the name `RIGHT_ID` +and described by the attributes `RIGHT_ATTRS`. + + + +Because the unique token **names** in `LEFT_ID` and `RIGHT_ID` are used to +identify tokens, the order of the dicts in the patterns is important: a token +name needs to be defined as `RIGHT_ID` in one dict in the pattern **before** it +can be used as `LEFT_ID` in another dict. + + + +### Dependency matcher operators {#dependencymatcher-operators} + +The following operators are supported by the `DependencyMatcher`, most of which +come directly from +[Semgrex](https://nlp.stanford.edu/nlp/javadoc/javanlp/edu/stanford/nlp/semgraph/semgrex/SemgrexPattern.html): + +| Symbol | Description | +| --------- | -------------------------------------------------------------------------------------------------------------------- | +| `A < B` | `A` is the immediate dependent of `B`. | +| `A > B` | `A` is the immediate head of `B`. | +| `A << B` | `A` is the dependent in a chain to `B` following dep → head paths. | +| `A >> B` | `A` is the head in a chain to `B` following head → dep paths. | +| `A . B` | `A` immediately precedes `B`, i.e. 
`A.i == B.i - 1`, and both are within the same dependency tree. | +| `A .* B` | `A` precedes `B`, i.e. `A.i < B.i`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A ; B` | `A` immediately follows `B`, i.e. `A.i == B.i + 1`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A ;* B` | `A` follows `B`, i.e. `A.i > B.i`, and both are within the same dependency tree _(not in Semgrex)_. | +| `A $+ B` | `B` is a right immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i - 1`. | +| `A $- B` | `B` is a left immediate sibling of `A`, i.e. `A` and `B` have the same parent and `A.i == B.i + 1`. | +| `A $++ B` | `B` is a right sibling of `A`, i.e. `A` and `B` have the same parent and `A.i < B.i`. | +| `A $-- B` | `B` is a left sibling of `A`, i.e. `A` and `B` have the same parent and `A.i > B.i`. | + +### Designing dependency matcher patterns {#dependencymatcher-patterns} + +Let's say we want to find sentences describing who founded what kind of company: + +- _Smith founded a healthcare company in 2005._ +- _Williams initially founded an insurance company in 1987._ +- _Lee, an experienced CEO, has founded two AI startups._ + +The dependency parse for "Smith founded a healthcare company" shows types of +relations and tokens we want to match: + +> #### Visualizing the parse +> +> The [`displacy` visualizer](/usage/visualizer) lets you render `Doc` objects +> and their dependency parse and part-of-speech tags: +> +> ```python +> import spacy +> from spacy import displacy +> +> nlp = spacy.load("en_core_web_sm") +> doc = nlp("Smith founded a healthcare company") +> displacy.serve(doc) +> ``` + +import DisplaCyDepFoundedHtml from 'images/displacy-dep-founded.html' + +