Mirror of https://github.com/explosion/spaCy.git (synced 2025-01-07 07:46:29 +03:00)
Revert "Merge branch 'master' into spacy.io"
This reverts commitc8bb08b545
, reversing changes made tob6a509a8d1
.
This commit is contained in:
parent
c8bb08b545
commit
06d8c3a20f
.github/contributors/EARL_GREYT.md (vendored) | 106 lines deleted
@@ -1,106 +0,0 @@

# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the [Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf). The SCA applies to any contribution that you make to any product or project managed by us (the **"project"**), and sets out the intellectual property rights you grant to us in the contributed materials. The term **"us"** shall mean [ExplosionAI GmbH](https://explosion.ai/legal). The term **"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested below and include the filled-in version with your first pull request, under the folder [`.github/contributors/`](/.github/contributors/). The name of the file should be your GitHub username, with the extension `.md`. For example, the user example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code, object code, patch, tool, sample, graphic, specification, manual, documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and registrations, in your contribution:

   * you hereby assign to us joint ownership, and to the extent that such assignment is or becomes invalid, ineffective or unenforceable, you hereby grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, royalty-free, unrestricted license to exercise all rights under those copyrights. This includes, at our option, the right to sublicense these same rights to third parties through multiple levels of sublicensees or other licensing arrangements;

   * you agree that each of us can do all things in relation to your contribution as if each of us were the sole owners, and if one of us makes a derivative work of your contribution, the one who makes the derivative work (or has it made) will be the sole owner of that derivative work;

   * you agree that you will not assert any moral rights in your contribution against us, our licensees or transferees;

   * you agree that we may register a copyright in your contribution and exercise all ownership rights associated with it; and

   * you agree that neither of us has any duty to consult with, obtain the consent of, pay or render an accounting to the other for any use or distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment to any third party, you hereby grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge, royalty-free license to:

   * make, have made, use, sell, offer to sell, import, and otherwise transfer your contribution in whole or in part, alone or in combination with or included in any product, work or materials arising out of the project to which your contribution was submitted, and

   * at our option, to sublicense these same rights to third parties through multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your contribution. The rights that you grant to us under these terms are effective on the date you first submitted a contribution to us, even if your submission took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

   * Each contribution that you submit is and shall be an original work of authorship and you can legally grant the rights set out in this SCA;

   * to the best of your knowledge, each contribution will not violate any third party's copyrights, trademarks, patents, or other intellectual property rights; and

   * each contribution shall be in compliance with U.S. export control laws and other applicable export and import laws. You agree to notify us if you become aware of any circumstance which would make any of the foregoing representations inaccurate in any respect. We may publicly disclose your participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable U.S. Federal law. Any choice of law rules will not apply.

7. Please place an "x" on one of the applicable statements below. Please do NOT mark both statements:

   * [x] I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my contributions.

   * [ ] I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry         |
| ------------------------------ | ------------- |
| Name                           | David Weßling |
| Company name (if applicable)   |               |
| Title or role (if applicable)  |               |
| Date                           | 27.09.19      |
| GitHub username                | EarlGreyT     |
| Website (optional)             |               |
.github/contributors/Hazoom.md (vendored) | 106 lines deleted
@@ -1,106 +0,0 @@

# spaCy contributor agreement

The deleted file contains the same spaCy contributor agreement text as `.github/contributors/EARL_GREYT.md` above, except that it names [ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal) as the contracting entity, signed as follows:

* [x] I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my contributions.

* [ ] I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry             |
| ------------------------------ | ----------------- |
| Name                           | Moshe Hazoom      |
| Company name (if applicable)   | Amenity Analytics |
| Title or role (if applicable)  | NLP Engineer      |
| Date                           | 2019-09-15        |
| GitHub username                | Hazoom            |
| Website (optional)             |                   |
.github/contributors/jaydeepborkar.md (vendored) | 106 lines deleted
@@ -1,106 +0,0 @@

# spaCy contributor agreement

The deleted file contains the same spaCy contributor agreement text as `.github/contributors/EARL_GREYT.md` above, with neither signing statement marked:

* [ ] I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my contributions.

* [ ] I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                          |
| ------------------------------ | ------------------------------ |
| Name                           | Jaydeep Borkar                 |
| Company name (if applicable)   | Pune University, India         |
| Title or role (if applicable)  | CS Undergrad                   |
| Date                           | 9/26/2019                      |
| GitHub username                | jaydeepborkar                  |
| Website (optional)             | http://jaydeepborkar.github.io |
.github/contributors/seanBE.md (vendored) | 106 lines deleted
@@ -1,106 +0,0 @@

# spaCy contributor agreement

The deleted file contains the same spaCy contributor agreement text as `.github/contributors/EARL_GREYT.md` above, signed as follows:

* [x] I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my contributions.

* [ ] I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                   |
| ------------------------------ | ----------------------- |
| Name                           | Sean Löfgren            |
| Company name (if applicable)   |                         |
| Title or role (if applicable)  |                         |
| Date                           | 2019-09-17              |
| GitHub username                | seanBE                  |
| Website (optional)             | http://seanbe.github.io |
.github/contributors/zqianem.md (vendored) | 106 lines deleted
@@ -1,106 +0,0 @@

# spaCy contributor agreement

The deleted file contains the same spaCy contributor agreement text as `.github/contributors/EARL_GREYT.md` above, signed as follows:

* [x] I am signing on behalf of myself as an individual and no other person or entity, including my employer, has or will have rights with respect to my contributions.

* [ ] I am signing on behalf of my employer or a legal entity and I have the actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry      |
| ------------------------------ | ---------- |
| Name                           | Em Zhan    |
| Company name (if applicable)   |            |
| Title or role (if applicable)  |            |
| Date                           | 2019-09-25 |
| GitHub username                | zqianem    |
| Website (optional)             |            |
@@ -73,8 +73,9 @@ issue body. A few more tips:

### Issue labels

[See this page](https://github.com/explosion/spaCy/labels) for an overview of the system we use to tag our issues and pull requests.
To distinguish issues that are opened by us, the maintainers, we usually add a 💫 to the title. [See this page](https://github.com/explosion/spaCy/labels) for an overview of the system we use to tag our issues and pull requests.

## Contributing to the code base
Makefile | 16 changed lines

@@ -1,17 +1,7 @@
SHELL := /bin/bash
sha = $(shell "git" "rev-parse" "--short" "HEAD")
version = $(shell "bin/get-version.sh")
wheel = spacy-$(version)-cp36-cp36m-linux_x86_64.whl

dist/spacy.pex : dist/spacy-$(sha).pex
	cp dist/spacy-$(sha).pex dist/spacy.pex
	chmod a+rx dist/spacy.pex

dist/spacy-$(sha).pex : dist/$(wheel)
	env3.6/bin/python -m pip install pex==1.5.3
	env3.6/bin/pex pytest dist/$(wheel) -e spacy -o dist/spacy-$(sha).pex

dist/$(wheel) : setup.py spacy/*.py* spacy/*/*.py*
dist/spacy.pex : spacy/*.py* spacy/*/*.py*
	python3.6 -m venv env3.6
	source env3.6/bin/activate
	env3.6/bin/pip install wheel

@@ -19,6 +9,10 @@ dist/$(wheel) : setup.py spacy/*.py* spacy/*/*.py*
	env3.6/bin/python setup.py build_ext --inplace
	env3.6/bin/python setup.py sdist
	env3.6/bin/python setup.py bdist_wheel
	env3.6/bin/python -m pip install pex==1.5.3
	env3.6/bin/pex pytest dist/*.whl -e spacy -o dist/spacy-$(sha).pex
	cp dist/spacy-$(sha).pex dist/spacy.pex
	chmod a+rx dist/spacy.pex

.PHONY : clean
README.md | 13 changed lines

@@ -49,12 +49,9 @@ It's commercial open-source software, released under the MIT license.

## 💬 Where to ask questions

The spaCy project is maintained by [@honnibal](https://github.com/honnibal)
and [@ines](https://github.com/ines), along with core contributors [@svlandeg](https://github.com/svlandeg) and [@adrianeboyd](https://github.com/adrianeboyd). Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly, so that more people can benefit from it.
and [@ines](https://github.com/ines). Please understand that we won't be able to provide individual support via email. We also believe that help is much more valuable if it's shared publicly, so that more people can benefit from it.

| Type | Platforms |
| ------------------------ | ------------------------------------------------------ |

@@ -175,8 +172,8 @@ python -m spacy download en_core_web_sm
python -m spacy download en

# pip install .tar.gz archive from path or URL
pip install /Users/you/en_core_web_sm-2.2.0.tar.gz
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz
pip install /Users/you/en_core_web_sm-2.1.0.tar.gz
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.1.0/en_core_web_sm-2.1.0.tar.gz
```

### Loading and using models
@@ -79,24 +79,14 @@ jobs:
  # Downgrading pip is necessary to prevent a wheel version incompatibility.
  # Might be fixed in the future or some other way, so investigate again.
  - script: |
      python -m pip install -U pip==18.1 setuptools
      python -m pip install --upgrade pip==18.1
      pip install -r requirements.txt
    displayName: 'Install dependencies'

  - script: |
      python setup.py build_ext --inplace
      python setup.py sdist --formats=gztar
    displayName: 'Compile and build sdist'
      pip install -e .
    displayName: 'Build and install'

  - task: DeleteFiles@1
    inputs:
      contents: 'spacy'
    displayName: 'Delete source directory'

  - bash: |
      SDIST=$(python -c "import os;print(os.listdir('./dist')[-1])" 2>&1)
      pip install dist/$SDIST
    displayName: 'Install from sdist'

  - script: python -m pytest --pyargs spacy
  - script: python -m pytest --tb=native spacy
    displayName: 'Run tests'
@@ -1,12 +0,0 @@
#!/usr/bin/env bash

set -e

version=$(grep "__version__ = " spacy/about.py)
version=${version/__version__ = }
version=${version/\'/}
version=${version/\'/}
version=${version/\"/}
version=${version/\"/}

echo $version
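The deleted shell helper above pulls the `__version__` string out of `spacy/about.py` with `grep` and bash substring replacement. As a rough illustration only, the same lookup could be done in Python along these lines (the `get_version` function and its default path are assumptions for this sketch, not part of the repository):

```python
# Illustrative sketch only: read __version__ from spacy/about.py without
# importing the package, mirroring what the deleted bash helper did.
import re
from pathlib import Path

def get_version(about_path="spacy/about.py"):
    text = Path(about_path).read_text(encoding="utf8")
    match = re.search(r"__version__\s*=\s*['\"]([^'\"]+)['\"]", text)
    if match is None:
        raise ValueError("no __version__ assignment found in %s" % about_path)
    return match.group(1)

# With a line like  __version__ = "2.2.0"  in about.py, this prints 2.2.0:
# print(get_version())
```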
@@ -7,16 +7,14 @@ import datetime
from pathlib import Path
import xml.etree.ElementTree as ET

import conll17_ud_eval
from ud_train import write_conllu
from spacy.cli.ud import conll17_ud_eval
from spacy.cli.ud.ud_train import write_conllu
from spacy.lang.lex_attrs import word_shape
from spacy.util import get_lang_class

# All languages in spaCy - in UD format (note that Norwegian is 'no' instead of 'nb')
ALL_LANGUAGES = ("af, ar, bg, bn, ca, cs, da, de, el, en, es, et, fa, fi, fr,"
                 "ga, he, hi, hr, hu, id, is, it, ja, kn, ko, lt, lv, mr, no,"
                 "nl, pl, pt, ro, ru, si, sk, sl, sq, sr, sv, ta, te, th, tl,"
                 "tr, tt, uk, ur, vi, zh")
ALL_LANGUAGES = "ar, ca, da, de, el, en, es, fa, fi, fr, ga, he, hi, hr, hu, id, " \
                "it, ja, no, nl, pl, pt, ro, ru, sv, tr, ur, vi, zh"

# Non-parsing tasks that will be evaluated (works for default models)
EVAL_NO_PARSE = ['Tokens', 'Words', 'Lemmas', 'Sentences', 'Feats']

@@ -75,10 +73,10 @@ def _contains_blinded_text(stats_xml):
    tree = ET.parse(stats_xml)
    root = tree.getroot()
    total_tokens = int(root.find('size/total/tokens').text)
    unique_forms = int(root.find('forms').get('unique'))
    unique_lemmas = int(root.find('lemmas').get('unique'))

    # assume the corpus is largely blinded when there are less than 1% unique tokens
    return (unique_forms / total_tokens) < 0.01
    return (unique_lemmas / total_tokens) < 0.01


def fetch_all_treebanks(ud_dir, languages, corpus, best_per_language):
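For reference, the `_contains_blinded_text` check in the hunk above reduces to a unique-token ratio read from a treebank's `stats.xml`. Below is a minimal self-contained sketch of that check; the XML snippet and the `contains_blinded_text` wrapper name are invented for illustration and only approximate the real UD layout:

```python
# Self-contained sketch of the blinded-corpus heuristic shown above; the XML
# content is made up for illustration.
import xml.etree.ElementTree as ET
from io import StringIO

FAKE_STATS = """
<treebank>
  <size><total><tokens>100000</tokens></total></size>
  <forms unique="650"/>
  <lemmas unique="420"/>
</treebank>
"""

def contains_blinded_text(stats_xml):
    root = ET.parse(stats_xml).getroot()
    total_tokens = int(root.find("size/total/tokens").text)
    unique_lemmas = int(root.find("lemmas").get("unique"))
    # fewer than 1% unique lemmas: treat the corpus as blinded
    return (unique_lemmas / total_tokens) < 0.01

print(contains_blinded_text(StringIO(FAKE_STATS)))  # True, since 420 / 100000 < 0.01
```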
@@ -264,26 +262,22 @@ def main(out_path, ud_dir, check_parse=False, langs=ALL_LANGUAGES, exclude_train
    if not exclude_trained_models:
        if 'de' in models:
            models['de'].append(load_model('de_core_news_sm'))
            models['de'].append(load_model('de_core_news_md'))
        if 'el' in models:
            models['el'].append(load_model('el_core_news_sm'))
            models['el'].append(load_model('el_core_news_md'))
        if 'en' in models:
            models['en'].append(load_model('en_core_web_sm'))
            models['en'].append(load_model('en_core_web_md'))
            models['en'].append(load_model('en_core_web_lg'))
        if 'es' in models:
            models['es'].append(load_model('es_core_news_sm'))
            models['es'].append(load_model('es_core_news_md'))
        if 'fr' in models:
            models['fr'].append(load_model('fr_core_news_sm'))
            models['fr'].append(load_model('fr_core_news_md'))
        if 'pt' in models:
            models['pt'].append(load_model('pt_core_news_sm'))
        if 'it' in models:
            models['it'].append(load_model('it_core_news_sm'))
        if 'nl' in models:
            models['nl'].append(load_model('nl_core_news_sm'))
        if 'pt' in models:
            models['pt'].append(load_model('pt_core_news_sm'))
        if 'en' in models:
            models['en'].append(load_model('en_core_web_sm'))
            models['en'].append(load_model('en_core_web_md'))
            models['en'].append(load_model('en_core_web_lg'))
        if 'fr' in models:
            models['fr'].append(load_model('fr_core_news_sm'))
            models['fr'].append(load_model('fr_core_news_md'))

    with out_path.open(mode='w', encoding='utf-8') as out_file:
        run_all_evals(models, treebanks, out_file, check_parse, print_freq_tasks)
@@ -109,13 +109,15 @@ def write_conllu(docs, file_):
    merger = Matcher(docs[0].vocab)
    merger.add("SUBTOK", None, [{"DEP": "subtok", "op": "+"}])
    for i, doc in enumerate(docs):
        matches = []
        if doc.is_parsed:
            matches = merger(doc)
        matches = merger(doc)
        spans = [doc[start : end + 1] for _, start, end in matches]
        with doc.retokenize() as retokenizer:
            for span in spans:
                retokenizer.merge(span)
        # TODO: This shouldn't be necessary? Should be handled in merge
        for word in doc:
            if word.i == word.head.i:
                word.dep_ = "ROOT"
        file_.write("# newdoc id = {i}\n".format(i=i))
        for j, sent in enumerate(doc.sents):
            file_.write("# sent_id = {i}.{j}\n".format(i=i, j=j))
@@ -25,7 +25,7 @@ import itertools
import random
import numpy.random

import conll17_ud_eval
from . import conll17_ud_eval

from spacy import lang
from spacy.lang import zh

@@ -82,8 +82,6 @@ def read_data(
            head = int(head) - 1 if head != "0" else id_
            sent["words"].append(word)
            sent["tags"].append(tag)
            sent["morphology"].append(_parse_morph_string(morph))
            sent["morphology"][-1].add("POS_%s" % pos)
            sent["heads"].append(head)
            sent["deps"].append("ROOT" if dep == "root" else dep)
            sent["spaces"].append(space_after == "_")

@@ -92,12 +90,10 @@ def read_data(
        if oracle_segments:
            docs.append(Doc(nlp.vocab, words=sent["words"], spaces=sent["spaces"]))
            golds.append(GoldParse(docs[-1], **sent))
            assert golds[-1].morphology is not None

        sent_annots.append(sent)
        if raw_text and max_doc_length and len(sent_annots) >= max_doc_length:
            doc, gold = _make_gold(nlp, None, sent_annots)
            assert gold.morphology is not None
            sent_annots = []
            docs.append(doc)
            golds.append(gold)

@@ -112,17 +108,6 @@ def read_data(
    return docs, golds
    return docs, golds


def _parse_morph_string(morph_string):
    if morph_string == '_':
        return set()
    output = []
    replacements = {'1': 'one', '2': 'two', '3': 'three'}
    for feature in morph_string.split('|'):
        key, value = feature.split('=')
        value = replacements.get(value, value)
        value = value.split(',')[0]
        output.append('%s_%s' % (key, value.lower()))
    return set(output)

def read_conllu(file_):
    docs = []
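To make the behaviour of the `_parse_morph_string` helper in the hunk above concrete, here is a quick standalone check. The function body is copied from the hunk; the example FEATS strings are invented:

```python
# Copied from the hunk above, with a couple of example calls added.
def _parse_morph_string(morph_string):
    if morph_string == "_":
        return set()
    output = []
    replacements = {"1": "one", "2": "two", "3": "three"}
    for feature in morph_string.split("|"):
        key, value = feature.split("=")
        value = replacements.get(value, value)
        value = value.split(",")[0]  # keep only the first value of e.g. "Masc,Fem"
        output.append("%s_%s" % (key, value.lower()))
    return set(output)

print(_parse_morph_string("Case=Nom|Person=1"))  # {'Case_nom', 'Person_one'}
print(_parse_morph_string("_"))                  # set(): no morphological features
```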
@@ -156,8 +141,8 @@ def _make_gold(nlp, text, sent_annots, drop_deps=0.0):
    flat = defaultdict(list)
    sent_starts = []
    for sent in sent_annots:
        flat["heads"].extend(len(flat["words"])+head for head in sent["heads"])
        for field in ["words", "tags", "deps", "morphology", "entities", "spaces"]:
        flat["heads"].extend(len(flat["words"]) + head for head in sent["heads"])
        for field in ["words", "tags", "deps", "entities", "spaces"]:
            flat[field].extend(sent[field])
        sent_starts.append(True)
        sent_starts.extend([False] * (len(sent["words"]) - 1))

@@ -229,18 +214,11 @@ def write_conllu(docs, file_):
    merger = Matcher(docs[0].vocab)
    merger.add("SUBTOK", None, [{"DEP": "subtok", "op": "+"}])
    for i, doc in enumerate(docs):
        matches = []
        if doc.is_parsed:
            matches = merger(doc)
        matches = merger(doc)
        spans = [doc[start : end + 1] for _, start, end in matches]
        seen_tokens = set()
        with doc.retokenize() as retokenizer:
            for span in spans:
                span_tokens = set(range(span.start, span.end))
                if not span_tokens.intersection(seen_tokens):
                    retokenizer.merge(span)
                    seen_tokens.update(span_tokens)

                retokenizer.merge(span)
        file_.write("# newdoc id = {i}\n".format(i=i))
        for j, sent in enumerate(doc.sents):
            file_.write("# sent_id = {i}.{j}\n".format(i=i, j=j))

@@ -263,29 +241,27 @@ def write_conllu(docs, file_):
def print_progress(itn, losses, ud_scores):
    fields = {
        "dep_loss": losses.get("parser", 0.0),
        "morph_loss": losses.get("morphologizer", 0.0),
        "tag_loss": losses.get("tagger", 0.0),
        "words": ud_scores["Words"].f1 * 100,
        "sents": ud_scores["Sentences"].f1 * 100,
        "tags": ud_scores["XPOS"].f1 * 100,
        "uas": ud_scores["UAS"].f1 * 100,
        "las": ud_scores["LAS"].f1 * 100,
        "morph": ud_scores["Feats"].f1 * 100,
    }
    header = ["Epoch", "P.Loss", "M.Loss", "LAS", "UAS", "TAG", "MORPH", "SENT", "WORD"]
    header = ["Epoch", "Loss", "LAS", "UAS", "TAG", "SENT", "WORD"]
    if itn == 0:
        print("\t".join(header))
    tpl = "\t".join((
        "{:d}",
        "{dep_loss:.1f}",
        "{morph_loss:.1f}",
        "{las:.1f}",
        "{uas:.1f}",
        "{tags:.1f}",
        "{morph:.1f}",
        "{sents:.1f}",
        "{words:.1f}",
    ))
    tpl = "\t".join(
        (
            "{:d}",
            "{dep_loss:.1f}",
            "{las:.1f}",
            "{uas:.1f}",
            "{tags:.1f}",
            "{sents:.1f}",
            "{words:.1f}",
        )
    )
    print(tpl.format(itn, **fields))
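As a quick illustration of the progress-row formatting in the hunk above, the shorter (no-morphology) template produces one tab-separated line per epoch; the loss and score numbers below are invented:

```python
# Invented scores, just to show the tab-separated layout printed per epoch.
fields = {
    "dep_loss": 12345.6,
    "tag_loss": 0.0,
    "words": 98.1,
    "sents": 87.3,
    "tags": 93.4,
    "uas": 84.2,
    "las": 80.9,
}
header = ["Epoch", "Loss", "LAS", "UAS", "TAG", "SENT", "WORD"]
tpl = "\t".join(
    ("{:d}", "{dep_loss:.1f}", "{las:.1f}", "{uas:.1f}", "{tags:.1f}", "{sents:.1f}", "{words:.1f}")
)
print("\t".join(header))
print(tpl.format(0, **fields))
# Epoch  Loss     LAS   UAS   TAG   SENT  WORD
# 0      12345.6  80.9  84.2  93.4  87.3  98.1
```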
@@ -306,27 +282,25 @@ def get_token_conllu(token, i):
            head = 0
        else:
            head = i + (token.head.i - token.i) + 1
        features = list(token.morph)
        feat_str = []
        replacements = {"one": "1", "two": "2", "three": "3"}
        for feat in features:
            if not feat.startswith("begin") and not feat.startswith("end"):
                key, value = feat.split("_", 1)
                value = replacements.get(value, value)
                feat_str.append("%s=%s" % (key, value.title()))
        if not feat_str:
            feat_str = "_"
        else:
            feat_str = "|".join(feat_str)
        fields = [str(i+1), token.text, token.lemma_, token.pos_, token.tag_, feat_str,
                  str(head), token.dep_.lower(), "_", "_"]
        fields = [
            str(i + 1),
            token.text,
            token.lemma_,
            token.pos_,
            token.tag_,
            "_",
            str(head),
            token.dep_.lower(),
            "_",
            "_",
        ]
        lines.append("\t".join(fields))
    return "\n".join(lines)


Token.set_extension("get_conllu_lines", method=get_token_conllu, force=True)
Token.set_extension("begins_fused", default=False, force=True)
Token.set_extension("inside_fused", default=False, force=True)
Token.set_extension("get_conllu_lines", method=get_token_conllu)
Token.set_extension("begins_fused", default=False)
Token.set_extension("inside_fused", default=False)


##################

@@ -350,8 +324,7 @@ def load_nlp(corpus, config, vectors=None):


def initialize_pipeline(nlp, docs, golds, config, device):
    nlp.add_pipe(nlp.create_pipe("tagger", config={"set_morphology": False}))
    nlp.add_pipe(nlp.create_pipe("morphologizer"))
    nlp.add_pipe(nlp.create_pipe("tagger"))
    nlp.add_pipe(nlp.create_pipe("parser"))
    if config.multitask_tag:
        nlp.parser.add_multitask_objective("tag")

@@ -551,12 +524,14 @@ def main(
        out_path = parses_dir / corpus / "epoch-{i}.conllu".format(i=i)
        with nlp.use_params(optimizer.averages):
            if use_oracle_segments:
                parsed_docs, scores = evaluate(nlp, paths.dev.conllu,
                                               paths.dev.conllu, out_path)
                parsed_docs, scores = evaluate(
                    nlp, paths.dev.conllu, paths.dev.conllu, out_path
                )
            else:
                parsed_docs, scores = evaluate(nlp, paths.dev.text,
                                               paths.dev.conllu, out_path)
            print_progress(i, losses, scores)
                parsed_docs, scores = evaluate(
                    nlp, paths.dev.text, paths.dev.conllu, out_path
                )
            print_progress(i, losses, scores)


def _render_parses(i, to_render):
@@ -8,8 +8,8 @@ For more details, see the documentation:
* Knowledge base: https://spacy.io/api/kb
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking

Compatible with: spaCy v2.2
Last tested with: v2.2
Compatible with: spaCy vX.X
Last tested with: vX.X
"""
from __future__ import unicode_literals, print_function

@@ -73,6 +73,7 @@ def main(vocab_path=None, model=None, output_dir=None, n_iter=50):
        input_dim=INPUT_DIM,
        desc_width=DESC_WIDTH,
        epochs=n_iter,
        threshold=0.001,
    )
    encoder.train(description_list=descriptions, to_print=True)
@@ -1,121 +0,0 @@
Creative Commons Legal Code

CC0 1.0 Universal

CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS DOCUMENT DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM THE USE OF THIS DOCUMENT OR THE INFORMATION OR WORKS PROVIDED HEREUNDER.

Statement of Purpose

The laws of most jurisdictions throughout the world automatically confer exclusive Copyright and Related Rights (defined below) upon the creator and subsequent owner(s) (each and all, an "owner") of an original work of authorship and/or a database (each, a "Work").

Certain owners wish to permanently relinquish those rights to a Work for the purpose of contributing to a commons of creative, cultural and scientific works ("Commons") that the public can reliably and without fear of later claims of infringement build upon, modify, incorporate in other works, reuse and redistribute as freely as possible in any form whatsoever and for any purposes, including without limitation commercial purposes. These owners may contribute to the Commons to promote the ideal of a free culture and the further production of creative, cultural and scientific works, or to gain reputation or greater distribution for their Work in part through the use and efforts of others.

For these and/or other purposes and motivations, and without any expectation of additional consideration or compensation, the person associating CC0 with a Work (the "Affirmer"), to the extent that he or she is an owner of Copyright and Related Rights in the Work, voluntarily elects to apply CC0 to the Work and publicly distribute the Work under its terms, with knowledge of his or her Copyright and Related Rights in the Work and the meaning and intended legal effect of CC0 on those rights.

1. Copyright and Related Rights. A Work made available under CC0 may be protected by copyright and related or neighboring rights ("Copyright and Related Rights"). Copyright and Related Rights include, but are not limited to, the following:

i. the right to reproduce, adapt, distribute, perform, display, communicate, and translate a Work;
ii. moral rights retained by the original author(s) and/or performer(s);
iii. publicity and privacy rights pertaining to a person's image or likeness depicted in a Work;
iv. rights protecting against unfair competition in regards to a Work, subject to the limitations in paragraph 4(a), below;
v. rights protecting the extraction, dissemination, use and reuse of data in a Work;
vi. database rights (such as those arising under Directive 96/9/EC of the European Parliament and of the Council of 11 March 1996 on the legal protection of databases, and under any national implementation thereof, including any amended or successor version of such directive); and
vii. other similar, equivalent or corresponding rights throughout the world based on applicable law or treaty, and any national implementations thereof.

2. Waiver. To the greatest extent permitted by, but not in contravention of, applicable law, Affirmer hereby overtly, fully, permanently, irrevocably and unconditionally waives, abandons, and surrenders all of Affirmer's Copyright and Related Rights and associated claims and causes of action, whether now known or unknown (including existing as well as future claims and causes of action), in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "Waiver"). Affirmer makes the Waiver for the benefit of each member of the public at large and to the detriment of Affirmer's heirs and successors, fully intending that such Waiver shall not be subject to revocation, rescission, cancellation, termination, or any other legal or equitable action to disrupt the quiet enjoyment of the Work by the public as contemplated by Affirmer's express Statement of Purpose.

3. Public License Fallback. Should any part of the Waiver for any reason be judged legally invalid or ineffective under applicable law, then the Waiver shall be preserved to the maximum extent permitted taking into account Affirmer's express Statement of Purpose. In addition, to the extent the Waiver is so judged Affirmer hereby grants to each affected person a royalty-free, non transferable, non sublicensable, non exclusive, irrevocable and unconditional license to exercise Affirmer's Copyright and Related Rights in the Work (i) in all territories worldwide, (ii) for the maximum duration provided by applicable law or treaty (including future time extensions), (iii) in any current or future medium and for any number of copies, and (iv) for any purpose whatsoever, including without limitation commercial, advertising or promotional purposes (the "License"). The License shall be deemed effective as of the date CC0 was applied by Affirmer to the Work. Should any part of the License for any reason be judged legally invalid or ineffective under applicable law, such partial invalidity or ineffectiveness shall not invalidate the remainder of the License, and in such case Affirmer hereby affirms that he or she will not (i) exercise any of his or her remaining Copyright and Related Rights in the Work or (ii) assert any associated claims and causes of action with respect to the Work, in either case contrary to Affirmer's express Statement of Purpose.

4. Limitations and Disclaimers.

a. No trademark or patent rights held by Affirmer are waived, abandoned, surrendered, licensed or otherwise affected by this document.
b. Affirmer offers the Work as-is and makes no representations or warranties of any kind concerning the Work, express, implied, statutory or otherwise, including without limitation warranties of title, merchantability, fitness for a particular purpose, non infringement, or the absence of latent or other defects, accuracy, or the present or absence of errors, whether or not discoverable, all to the greatest extent permissible under applicable law.
c. Affirmer disclaims responsibility for clearing rights of other persons that may apply to the Work or any use thereof, including without limitation any person's Copyright and Related Rights in the Work. Further, Affirmer disclaims responsibility for obtaining any necessary consents, permissions or other rights required for any use of the Work.
d. Affirmer understands and acknowledges that Creative Commons is not a party to this document and has no duty or obligation with respect to this CC0 or use of the Work.
@@ -1,359 +0,0 @@
Creative Commons Legal Code

Attribution-ShareAlike 3.0 Unported

CREATIVE COMMONS CORPORATION IS NOT A LAW FIRM AND DOES NOT PROVIDE LEGAL SERVICES. DISTRIBUTION OF THIS LICENSE DOES NOT CREATE AN ATTORNEY-CLIENT RELATIONSHIP. CREATIVE COMMONS PROVIDES THIS INFORMATION ON AN "AS-IS" BASIS. CREATIVE COMMONS MAKES NO WARRANTIES REGARDING THE INFORMATION PROVIDED, AND DISCLAIMS LIABILITY FOR DAMAGES RESULTING FROM ITS USE.

License

THE WORK (AS DEFINED BELOW) IS PROVIDED UNDER THE TERMS OF THIS CREATIVE COMMONS PUBLIC LICENSE ("CCPL" OR "LICENSE"). THE WORK IS PROTECTED BY COPYRIGHT AND/OR OTHER APPLICABLE LAW. ANY USE OF THE WORK OTHER THAN AS AUTHORIZED UNDER THIS LICENSE OR COPYRIGHT LAW IS PROHIBITED.

BY EXERCISING ANY RIGHTS TO THE WORK PROVIDED HERE, YOU ACCEPT AND AGREE TO BE BOUND BY THE TERMS OF THIS LICENSE. TO THE EXTENT THIS LICENSE MAY BE CONSIDERED TO BE A CONTRACT, THE LICENSOR GRANTS YOU THE RIGHTS CONTAINED HERE IN CONSIDERATION OF YOUR ACCEPTANCE OF SUCH TERMS AND CONDITIONS.

1. Definitions

a. "Adaptation" means a work based upon the Work, or upon the Work and other pre-existing works, such as a translation, adaptation, derivative work, arrangement of music or other alterations of a literary or artistic work, or phonogram or performance and includes cinematographic adaptations or any other form in which the Work may be recast, transformed, or adapted including in any form recognizably derived from the original, except that a work that constitutes a Collection will not be considered an Adaptation for the purpose of this License. For the avoidance of doubt, where the Work is a musical work, performance or phonogram, the synchronization of the Work in timed-relation with a moving image ("synching") will be considered an Adaptation for the purpose of this License.
b. "Collection" means a collection of literary or artistic works, such as encyclopedias and anthologies, or performances, phonograms or broadcasts, or other works or subject matter other than works listed in Section 1(f) below, which, by reason of the selection and arrangement of their contents, constitute intellectual creations, in which the Work is included in its entirety in unmodified form along with one or more other contributions, each constituting separate and independent works in themselves, which together are assembled into a collective whole. A work that constitutes a Collection will not be considered an Adaptation (as defined below) for the purposes of this License.
c. "Creative Commons Compatible License" means a license that is listed at https://creativecommons.org/compatiblelicenses that has been approved by Creative Commons as being essentially equivalent to this License, including, at a minimum, because that license: (i) contains terms that have the same purpose, meaning and effect as the License Elements of this License; and, (ii) explicitly permits the relicensing of adaptations of works made available under that license under this License or a Creative Commons jurisdiction license with the same License Elements as this License.
d. "Distribute" means to make available to the public the original and copies of the Work or Adaptation, as appropriate, through sale or other transfer of ownership.
e. "License Elements" means the following high-level license attributes as selected by Licensor and indicated in the title of this License: Attribution, ShareAlike.
f. "Licensor" means the individual, individuals, entity or entities that offer(s) the Work under the terms of this License.
g. "Original Author" means, in the case of a literary or artistic work, the individual, individuals, entity or entities who created the Work or if no individual or entity can be identified, the publisher; and in addition (i) in the case of a performance the actors, singers, musicians, dancers, and other persons who act, sing, deliver, declaim, play in, interpret or otherwise perform literary or artistic works or expressions of folklore; (ii) in the case of a phonogram the producer being the person or legal entity who first fixes the sounds of a performance or other sounds; and, (iii) in the case of broadcasts, the organization that transmits the broadcast.
h. "Work" means the literary and/or artistic work offered under the terms of this License including without limitation any production in the literary, scientific and artistic domain, whatever may be the mode or form of its expression including digital form, such as a book, pamphlet and other writing; a lecture, address, sermon or other work of the same nature; a dramatic or dramatico-musical work; a choreographic work or entertainment in dumb show; a musical composition with or without words; a cinematographic work to which are assimilated works expressed by a process analogous to cinematography; a work of drawing, painting, architecture, sculpture, engraving or lithography; a photographic work to which are assimilated works expressed by a process analogous to photography; a work of applied art; an illustration, map, plan, sketch or three-dimensional work relative to geography, topography, architecture or science; a performance; a broadcast; a phonogram; a compilation of data to the extent it is protected as a copyrightable work; or a work performed by a variety or circus performer to the extent it is not otherwise considered a literary or artistic work.
i. "You" means an individual or entity exercising rights under this License who has not previously violated the terms of this License with respect to the Work, or who has received express permission from the Licensor to exercise rights under this License despite a previous violation.
j. "Publicly Perform" means to perform public recitations of the Work and
|
||||
to communicate to the public those public recitations, by any means or
|
||||
process, including by wire or wireless means or public digital
|
||||
performances; to make available to the public Works in such a way that
|
||||
members of the public may access these Works from a place and at a
|
||||
place individually chosen by them; to perform the Work to the public
|
||||
by any means or process and the communication to the public of the
|
||||
performances of the Work, including by public digital performance; to
|
||||
broadcast and rebroadcast the Work by any means including signs,
|
||||
sounds or images.
|
||||
k. "Reproduce" means to make copies of the Work by any means including
|
||||
without limitation by sound or visual recordings and the right of
|
||||
fixation and reproducing fixations of the Work, including storage of a
|
||||
protected performance or phonogram in digital form or other electronic
|
||||
medium.
|
||||
|
||||
2. Fair Dealing Rights. Nothing in this License is intended to reduce,
|
||||
limit, or restrict any uses free from copyright or rights arising from
|
||||
limitations or exceptions that are provided for in connection with the
|
||||
copyright protection under copyright law or other applicable laws.
|
||||
|
||||
3. License Grant. Subject to the terms and conditions of this License,
|
||||
Licensor hereby grants You a worldwide, royalty-free, non-exclusive,
|
||||
perpetual (for the duration of the applicable copyright) license to
|
||||
exercise the rights in the Work as stated below:
|
||||
|
||||
a. to Reproduce the Work, to incorporate the Work into one or more
|
||||
Collections, and to Reproduce the Work as incorporated in the
|
||||
Collections;
|
||||
b. to create and Reproduce Adaptations provided that any such Adaptation,
|
||||
including any translation in any medium, takes reasonable steps to
|
||||
clearly label, demarcate or otherwise identify that changes were made
|
||||
to the original Work. For example, a translation could be marked "The
|
||||
original work was translated from English to Spanish," or a
|
||||
modification could indicate "The original work has been modified.";
|
||||
c. to Distribute and Publicly Perform the Work including as incorporated
|
||||
in Collections; and,
|
||||
d. to Distribute and Publicly Perform Adaptations.
|
||||
e. For the avoidance of doubt:
|
||||
|
||||
i. Non-waivable Compulsory License Schemes. In those jurisdictions in
|
||||
which the right to collect royalties through any statutory or
|
||||
compulsory licensing scheme cannot be waived, the Licensor
|
||||
reserves the exclusive right to collect such royalties for any
|
||||
exercise by You of the rights granted under this License;
|
||||
ii. Waivable Compulsory License Schemes. In those jurisdictions in
|
||||
which the right to collect royalties through any statutory or
|
||||
compulsory licensing scheme can be waived, the Licensor waives the
|
||||
exclusive right to collect such royalties for any exercise by You
|
||||
of the rights granted under this License; and,
|
||||
iii. Voluntary License Schemes. The Licensor waives the right to
|
||||
collect royalties, whether individually or, in the event that the
|
||||
Licensor is a member of a collecting society that administers
|
||||
voluntary licensing schemes, via that society, from any exercise
|
||||
by You of the rights granted under this License.
|
||||
|
||||
The above rights may be exercised in all media and formats whether now
|
||||
known or hereafter devised. The above rights include the right to make
|
||||
such modifications as are technically necessary to exercise the rights in
|
||||
other media and formats. Subject to Section 8(f), all rights not expressly
|
||||
granted by Licensor are hereby reserved.
|
||||
|
||||
4. Restrictions. The license granted in Section 3 above is expressly made
|
||||
subject to and limited by the following restrictions:
|
||||
|
||||
a. You may Distribute or Publicly Perform the Work only under the terms
|
||||
of this License. You must include a copy of, or the Uniform Resource
|
||||
Identifier (URI) for, this License with every copy of the Work You
|
||||
Distribute or Publicly Perform. You may not offer or impose any terms
|
||||
on the Work that restrict the terms of this License or the ability of
|
||||
the recipient of the Work to exercise the rights granted to that
|
||||
recipient under the terms of the License. You may not sublicense the
|
||||
Work. You must keep intact all notices that refer to this License and
|
||||
to the disclaimer of warranties with every copy of the Work You
|
||||
Distribute or Publicly Perform. When You Distribute or Publicly
|
||||
Perform the Work, You may not impose any effective technological
|
||||
measures on the Work that restrict the ability of a recipient of the
|
||||
Work from You to exercise the rights granted to that recipient under
|
||||
the terms of the License. This Section 4(a) applies to the Work as
|
||||
incorporated in a Collection, but this does not require the Collection
|
||||
apart from the Work itself to be made subject to the terms of this
|
||||
License. If You create a Collection, upon notice from any Licensor You
|
||||
must, to the extent practicable, remove from the Collection any credit
|
||||
as required by Section 4(c), as requested. If You create an
|
||||
Adaptation, upon notice from any Licensor You must, to the extent
|
||||
practicable, remove from the Adaptation any credit as required by
|
||||
Section 4(c), as requested.
|
||||
b. You may Distribute or Publicly Perform an Adaptation only under the
|
||||
terms of: (i) this License; (ii) a later version of this License with
|
||||
the same License Elements as this License; (iii) a Creative Commons
|
||||
jurisdiction license (either this or a later license version) that
|
||||
contains the same License Elements as this License (e.g.,
|
||||
Attribution-ShareAlike 3.0 US); (iv) a Creative Commons Compatible
|
||||
License. If you license the Adaptation under one of the licenses
|
||||
mentioned in (iv), you must comply with the terms of that license. If
|
||||
you license the Adaptation under the terms of any of the licenses
|
||||
mentioned in (i), (ii) or (iii) (the "Applicable License"), you must
|
||||
comply with the terms of the Applicable License generally and the
|
||||
following provisions: (I) You must include a copy of, or the URI for,
|
||||
the Applicable License with every copy of each Adaptation You
|
||||
Distribute or Publicly Perform; (II) You may not offer or impose any
|
||||
terms on the Adaptation that restrict the terms of the Applicable
|
||||
License or the ability of the recipient of the Adaptation to exercise
|
||||
the rights granted to that recipient under the terms of the Applicable
|
||||
License; (III) You must keep intact all notices that refer to the
|
||||
Applicable License and to the disclaimer of warranties with every copy
|
||||
of the Work as included in the Adaptation You Distribute or Publicly
|
||||
Perform; (IV) when You Distribute or Publicly Perform the Adaptation,
|
||||
You may not impose any effective technological measures on the
|
||||
Adaptation that restrict the ability of a recipient of the Adaptation
|
||||
from You to exercise the rights granted to that recipient under the
|
||||
terms of the Applicable License. This Section 4(b) applies to the
|
||||
Adaptation as incorporated in a Collection, but this does not require
|
||||
the Collection apart from the Adaptation itself to be made subject to
|
||||
the terms of the Applicable License.
|
||||
c. If You Distribute, or Publicly Perform the Work or any Adaptations or
|
||||
Collections, You must, unless a request has been made pursuant to
|
||||
Section 4(a), keep intact all copyright notices for the Work and
|
||||
provide, reasonable to the medium or means You are utilizing: (i) the
|
||||
name of the Original Author (or pseudonym, if applicable) if supplied,
|
||||
and/or if the Original Author and/or Licensor designate another party
|
||||
or parties (e.g., a sponsor institute, publishing entity, journal) for
|
||||
attribution ("Attribution Parties") in Licensor's copyright notice,
|
||||
terms of service or by other reasonable means, the name of such party
|
||||
or parties; (ii) the title of the Work if supplied; (iii) to the
|
||||
extent reasonably practicable, the URI, if any, that Licensor
|
||||
specifies to be associated with the Work, unless such URI does not
|
||||
refer to the copyright notice or licensing information for the Work;
|
||||
and (iv), consistent with Section 3(b), in the case of an
|
||||
Adaptation, a credit identifying the use of the Work in the Adaptation
|
||||
(e.g., "French translation of the Work by Original Author," or
|
||||
"Screenplay based on original Work by Original Author"). The credit
|
||||
required by this Section 4(c) may be implemented in any reasonable
|
||||
manner; provided, however, that in the case of an Adaptation or
|
||||
Collection, at a minimum such credit will appear, if a credit for all
|
||||
contributing authors of the Adaptation or Collection appears, then as
|
||||
part of these credits and in a manner at least as prominent as the
|
||||
credits for the other contributing authors. For the avoidance of
|
||||
doubt, You may only use the credit required by this Section for the
|
||||
purpose of attribution in the manner set out above and, by exercising
|
||||
Your rights under this License, You may not implicitly or explicitly
|
||||
assert or imply any connection with, sponsorship or endorsement by the
|
||||
Original Author, Licensor and/or Attribution Parties, as appropriate,
|
||||
of You or Your use of the Work, without the separate, express prior
|
||||
written permission of the Original Author, Licensor and/or Attribution
|
||||
Parties.
|
||||
d. Except as otherwise agreed in writing by the Licensor or as may be
|
||||
otherwise permitted by applicable law, if You Reproduce, Distribute or
|
||||
Publicly Perform the Work either by itself or as part of any
|
||||
Adaptations or Collections, You must not distort, mutilate, modify or
|
||||
take other derogatory action in relation to the Work which would be
|
||||
prejudicial to the Original Author's honor or reputation. Licensor
|
||||
agrees that in those jurisdictions (e.g. Japan), in which any exercise
|
||||
of the right granted in Section 3(b) of this License (the right to
|
||||
make Adaptations) would be deemed to be a distortion, mutilation,
|
||||
modification or other derogatory action prejudicial to the Original
|
||||
Author's honor and reputation, the Licensor will waive or not assert,
|
||||
as appropriate, this Section, to the fullest extent permitted by the
|
||||
applicable national law, to enable You to reasonably exercise Your
|
||||
right under Section 3(b) of this License (right to make Adaptations)
|
||||
but not otherwise.
|
||||
|
||||
5. Representations, Warranties and Disclaimer
|
||||
|
||||
UNLESS OTHERWISE MUTUALLY AGREED TO BY THE PARTIES IN WRITING, LICENSOR
|
||||
OFFERS THE WORK AS-IS AND MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY
|
||||
KIND CONCERNING THE WORK, EXPRESS, IMPLIED, STATUTORY OR OTHERWISE,
|
||||
INCLUDING, WITHOUT LIMITATION, WARRANTIES OF TITLE, MERCHANTABILITY,
|
||||
FITNESS FOR A PARTICULAR PURPOSE, NONINFRINGEMENT, OR THE ABSENCE OF
|
||||
LATENT OR OTHER DEFECTS, ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS,
|
||||
WHETHER OR NOT DISCOVERABLE. SOME JURISDICTIONS DO NOT ALLOW THE EXCLUSION
|
||||
OF IMPLIED WARRANTIES, SO SUCH EXCLUSION MAY NOT APPLY TO YOU.
|
||||
|
||||
6. Limitation on Liability. EXCEPT TO THE EXTENT REQUIRED BY APPLICABLE
|
||||
LAW, IN NO EVENT WILL LICENSOR BE LIABLE TO YOU ON ANY LEGAL THEORY FOR
|
||||
ANY SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES
|
||||
ARISING OUT OF THIS LICENSE OR THE USE OF THE WORK, EVEN IF LICENSOR HAS
|
||||
BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
|
||||
|
||||
7. Termination
|
||||
|
||||
a. This License and the rights granted hereunder will terminate
|
||||
automatically upon any breach by You of the terms of this License.
|
||||
Individuals or entities who have received Adaptations or Collections
|
||||
from You under this License, however, will not have their licenses
|
||||
terminated provided such individuals or entities remain in full
|
||||
compliance with those licenses. Sections 1, 2, 5, 6, 7, and 8 will
|
||||
survive any termination of this License.
|
||||
b. Subject to the above terms and conditions, the license granted here is
|
||||
perpetual (for the duration of the applicable copyright in the Work).
|
||||
Notwithstanding the above, Licensor reserves the right to release the
|
||||
Work under different license terms or to stop distributing the Work at
|
||||
any time; provided, however that any such election will not serve to
|
||||
withdraw this License (or any other license that has been, or is
|
||||
required to be, granted under the terms of this License), and this
|
||||
License will continue in full force and effect unless terminated as
|
||||
stated above.
|
||||
|
||||
8. Miscellaneous
|
||||
|
||||
a. Each time You Distribute or Publicly Perform the Work or a Collection,
|
||||
the Licensor offers to the recipient a license to the Work on the same
|
||||
terms and conditions as the license granted to You under this License.
|
||||
b. Each time You Distribute or Publicly Perform an Adaptation, Licensor
|
||||
offers to the recipient a license to the original Work on the same
|
||||
terms and conditions as the license granted to You under this License.
|
||||
c. If any provision of this License is invalid or unenforceable under
|
||||
applicable law, it shall not affect the validity or enforceability of
|
||||
the remainder of the terms of this License, and without further action
|
||||
by the parties to this agreement, such provision shall be reformed to
|
||||
the minimum extent necessary to make such provision valid and
|
||||
enforceable.
|
||||
d. No term or provision of this License shall be deemed waived and no
|
||||
breach consented to unless such waiver or consent shall be in writing
|
||||
and signed by the party to be charged with such waiver or consent.
|
||||
e. This License constitutes the entire agreement between the parties with
|
||||
respect to the Work licensed here. There are no understandings,
|
||||
agreements or representations with respect to the Work not specified
|
||||
here. Licensor shall not be bound by any additional provisions that
|
||||
may appear in any communication from You. This License may not be
|
||||
modified without the mutual written agreement of the Licensor and You.
|
||||
f. The rights granted under, and the subject matter referenced, in this
|
||||
License were drafted utilizing the terminology of the Berne Convention
|
||||
for the Protection of Literary and Artistic Works (as amended on
|
||||
September 28, 1979), the Rome Convention of 1961, the WIPO Copyright
|
||||
Treaty of 1996, the WIPO Performances and Phonograms Treaty of 1996
|
||||
and the Universal Copyright Convention (as revised on July 24, 1971).
|
||||
These rights and subject matter take effect in the relevant
|
||||
jurisdiction in which the License terms are sought to be enforced
|
||||
according to the corresponding provisions of the implementation of
|
||||
those treaty provisions in the applicable national law. If the
|
||||
standard suite of rights granted under applicable copyright law
|
||||
includes additional rights not granted under this License, such
|
||||
additional rights are deemed to be included in the License; this
|
||||
License is not intended to restrict the license of any rights under
|
||||
applicable law.
|
||||
|
||||
|
||||
Creative Commons Notice
|
||||
|
||||
Creative Commons is not a party to this License, and makes no warranty
|
||||
whatsoever in connection with the Work. Creative Commons will not be
|
||||
liable to You or any party on any legal theory for any damages
|
||||
whatsoever, including without limitation any general, special,
|
||||
incidental or consequential damages arising in connection to this
|
||||
license. Notwithstanding the foregoing two (2) sentences, if Creative
|
||||
Commons has expressly identified itself as the Licensor hereunder, it
|
||||
shall have all rights and obligations of Licensor.
|
||||
|
||||
Except for the limited purpose of indicating to the public that the
|
||||
Work is licensed under the CCPL, Creative Commons does not authorize
|
||||
the use by either party of the trademark "Creative Commons" or any
|
||||
related trademark or logo of Creative Commons without the prior
|
||||
written consent of Creative Commons. Any permitted use will be in
|
||||
compliance with Creative Commons' then-current trademark usage
|
||||
guidelines, as may be published on its website or otherwise made
|
||||
available upon request from time to time. For the avoidance of doubt,
|
||||
this trademark restriction does not form part of the License.
|
||||
|
||||
Creative Commons may be contacted at https://creativecommons.org/.
|
|
@ -1,428 +0,0 @@
|
|||
Attribution-ShareAlike 4.0 International
|
||||
|
||||
=======================================================================
|
||||
|
||||
Creative Commons Corporation ("Creative Commons") is not a law firm and
|
||||
does not provide legal services or legal advice. Distribution of
|
||||
Creative Commons public licenses does not create a lawyer-client or
|
||||
other relationship. Creative Commons makes its licenses and related
|
||||
information available on an "as-is" basis. Creative Commons gives no
|
||||
warranties regarding its licenses, any material licensed under their
|
||||
terms and conditions, or any related information. Creative Commons
|
||||
disclaims all liability for damages resulting from their use to the
|
||||
fullest extent possible.
|
||||
|
||||
Using Creative Commons Public Licenses
|
||||
|
||||
Creative Commons public licenses provide a standard set of terms and
|
||||
conditions that creators and other rights holders may use to share
|
||||
original works of authorship and other material subject to copyright
|
||||
and certain other rights specified in the public license below. The
|
||||
following considerations are for informational purposes only, are not
|
||||
exhaustive, and do not form part of our licenses.
|
||||
|
||||
Considerations for licensors: Our public licenses are
|
||||
intended for use by those authorized to give the public
|
||||
permission to use material in ways otherwise restricted by
|
||||
copyright and certain other rights. Our licenses are
|
||||
irrevocable. Licensors should read and understand the terms
|
||||
and conditions of the license they choose before applying it.
|
||||
Licensors should also secure all rights necessary before
|
||||
applying our licenses so that the public can reuse the
|
||||
material as expected. Licensors should clearly mark any
|
||||
material not subject to the license. This includes other CC-
|
||||
licensed material, or material used under an exception or
|
||||
limitation to copyright. More considerations for licensors:
|
||||
wiki.creativecommons.org/Considerations_for_licensors
|
||||
|
||||
Considerations for the public: By using one of our public
|
||||
licenses, a licensor grants the public permission to use the
|
||||
licensed material under specified terms and conditions. If
|
||||
the licensor's permission is not necessary for any reason--for
|
||||
example, because of any applicable exception or limitation to
|
||||
copyright--then that use is not regulated by the license. Our
|
||||
licenses grant only permissions under copyright and certain
|
||||
other rights that a licensor has authority to grant. Use of
|
||||
the licensed material may still be restricted for other
|
||||
reasons, including because others have copyright or other
|
||||
rights in the material. A licensor may make special requests,
|
||||
such as asking that all changes be marked or described.
|
||||
Although not required by our licenses, you are encouraged to
|
||||
respect those requests where reasonable. More considerations
|
||||
for the public:
|
||||
wiki.creativecommons.org/Considerations_for_licensees
|
||||
|
||||
=======================================================================
|
||||
|
||||
Creative Commons Attribution-ShareAlike 4.0 International Public
|
||||
License
|
||||
|
||||
By exercising the Licensed Rights (defined below), You accept and agree
|
||||
to be bound by the terms and conditions of this Creative Commons
|
||||
Attribution-ShareAlike 4.0 International Public License ("Public
|
||||
License"). To the extent this Public License may be interpreted as a
|
||||
contract, You are granted the Licensed Rights in consideration of Your
|
||||
acceptance of these terms and conditions, and the Licensor grants You
|
||||
such rights in consideration of benefits the Licensor receives from
|
||||
making the Licensed Material available under these terms and
|
||||
conditions.
|
||||
|
||||
|
||||
Section 1 -- Definitions.
|
||||
|
||||
a. Adapted Material means material subject to Copyright and Similar
|
||||
Rights that is derived from or based upon the Licensed Material
|
||||
and in which the Licensed Material is translated, altered,
|
||||
arranged, transformed, or otherwise modified in a manner requiring
|
||||
permission under the Copyright and Similar Rights held by the
|
||||
Licensor. For purposes of this Public License, where the Licensed
|
||||
Material is a musical work, performance, or sound recording,
|
||||
Adapted Material is always produced where the Licensed Material is
|
||||
synched in timed relation with a moving image.
|
||||
|
||||
b. Adapter's License means the license You apply to Your Copyright
|
||||
and Similar Rights in Your contributions to Adapted Material in
|
||||
accordance with the terms and conditions of this Public License.
|
||||
|
||||
c. BY-SA Compatible License means a license listed at
|
||||
creativecommons.org/compatiblelicenses, approved by Creative
|
||||
Commons as essentially the equivalent of this Public License.
|
||||
|
||||
d. Copyright and Similar Rights means copyright and/or similar rights
|
||||
closely related to copyright including, without limitation,
|
||||
performance, broadcast, sound recording, and Sui Generis Database
|
||||
Rights, without regard to how the rights are labeled or
|
||||
categorized. For purposes of this Public License, the rights
|
||||
specified in Section 2(b)(1)-(2) are not Copyright and Similar
|
||||
Rights.
|
||||
|
||||
e. Effective Technological Measures means those measures that, in the
|
||||
absence of proper authority, may not be circumvented under laws
|
||||
fulfilling obligations under Article 11 of the WIPO Copyright
|
||||
Treaty adopted on December 20, 1996, and/or similar international
|
||||
agreements.
|
||||
|
||||
f. Exceptions and Limitations means fair use, fair dealing, and/or
|
||||
any other exception or limitation to Copyright and Similar Rights
|
||||
that applies to Your use of the Licensed Material.
|
||||
|
||||
g. License Elements means the license attributes listed in the name
|
||||
of a Creative Commons Public License. The License Elements of this
|
||||
Public License are Attribution and ShareAlike.
|
||||
|
||||
h. Licensed Material means the artistic or literary work, database,
|
||||
or other material to which the Licensor applied this Public
|
||||
License.
|
||||
|
||||
i. Licensed Rights means the rights granted to You subject to the
|
||||
terms and conditions of this Public License, which are limited to
|
||||
all Copyright and Similar Rights that apply to Your use of the
|
||||
Licensed Material and that the Licensor has authority to license.
|
||||
|
||||
j. Licensor means the individual(s) or entity(ies) granting rights
|
||||
under this Public License.
|
||||
|
||||
k. Share means to provide material to the public by any means or
|
||||
process that requires permission under the Licensed Rights, such
|
||||
as reproduction, public display, public performance, distribution,
|
||||
dissemination, communication, or importation, and to make material
|
||||
available to the public including in ways that members of the
|
||||
public may access the material from a place and at a time
|
||||
individually chosen by them.
|
||||
|
||||
l. Sui Generis Database Rights means rights other than copyright
|
||||
resulting from Directive 96/9/EC of the European Parliament and of
|
||||
the Council of 11 March 1996 on the legal protection of databases,
|
||||
as amended and/or succeeded, as well as other essentially
|
||||
equivalent rights anywhere in the world.
|
||||
|
||||
m. You means the individual or entity exercising the Licensed Rights
|
||||
under this Public License. Your has a corresponding meaning.
|
||||
|
||||
|
||||
Section 2 -- Scope.
|
||||
|
||||
a. License grant.
|
||||
|
||||
1. Subject to the terms and conditions of this Public License,
|
||||
the Licensor hereby grants You a worldwide, royalty-free,
|
||||
non-sublicensable, non-exclusive, irrevocable license to
|
||||
exercise the Licensed Rights in the Licensed Material to:
|
||||
|
||||
a. reproduce and Share the Licensed Material, in whole or
|
||||
in part; and
|
||||
|
||||
b. produce, reproduce, and Share Adapted Material.
|
||||
|
||||
2. Exceptions and Limitations. For the avoidance of doubt, where
|
||||
Exceptions and Limitations apply to Your use, this Public
|
||||
License does not apply, and You do not need to comply with
|
||||
its terms and conditions.
|
||||
|
||||
3. Term. The term of this Public License is specified in Section
|
||||
6(a).
|
||||
|
||||
4. Media and formats; technical modifications allowed. The
|
||||
Licensor authorizes You to exercise the Licensed Rights in
|
||||
all media and formats whether now known or hereafter created,
|
||||
and to make technical modifications necessary to do so. The
|
||||
Licensor waives and/or agrees not to assert any right or
|
||||
authority to forbid You from making technical modifications
|
||||
necessary to exercise the Licensed Rights, including
|
||||
technical modifications necessary to circumvent Effective
|
||||
Technological Measures. For purposes of this Public License,
|
||||
simply making modifications authorized by this Section 2(a)
|
||||
(4) never produces Adapted Material.
|
||||
|
||||
5. Downstream recipients.
|
||||
|
||||
a. Offer from the Licensor -- Licensed Material. Every
|
||||
recipient of the Licensed Material automatically
|
||||
receives an offer from the Licensor to exercise the
|
||||
Licensed Rights under the terms and conditions of this
|
||||
Public License.
|
||||
|
||||
b. Additional offer from the Licensor -- Adapted Material.
|
||||
Every recipient of Adapted Material from You
|
||||
automatically receives an offer from the Licensor to
|
||||
exercise the Licensed Rights in the Adapted Material
|
||||
under the conditions of the Adapter's License You apply.
|
||||
|
||||
c. No downstream restrictions. You may not offer or impose
|
||||
any additional or different terms or conditions on, or
|
||||
apply any Effective Technological Measures to, the
|
||||
Licensed Material if doing so restricts exercise of the
|
||||
Licensed Rights by any recipient of the Licensed
|
||||
Material.
|
||||
|
||||
6. No endorsement. Nothing in this Public License constitutes or
|
||||
may be construed as permission to assert or imply that You
|
||||
are, or that Your use of the Licensed Material is, connected
|
||||
with, or sponsored, endorsed, or granted official status by,
|
||||
the Licensor or others designated to receive attribution as
|
||||
provided in Section 3(a)(1)(A)(i).
|
||||
|
||||
b. Other rights.
|
||||
|
||||
1. Moral rights, such as the right of integrity, are not
|
||||
licensed under this Public License, nor are publicity,
|
||||
privacy, and/or other similar personality rights; however, to
|
||||
the extent possible, the Licensor waives and/or agrees not to
|
||||
assert any such rights held by the Licensor to the limited
|
||||
extent necessary to allow You to exercise the Licensed
|
||||
Rights, but not otherwise.
|
||||
|
||||
2. Patent and trademark rights are not licensed under this
|
||||
Public License.
|
||||
|
||||
3. To the extent possible, the Licensor waives any right to
|
||||
collect royalties from You for the exercise of the Licensed
|
||||
Rights, whether directly or through a collecting society
|
||||
under any voluntary or waivable statutory or compulsory
|
||||
licensing scheme. In all other cases the Licensor expressly
|
||||
reserves any right to collect such royalties.
|
||||
|
||||
|
||||
Section 3 -- License Conditions.
|
||||
|
||||
Your exercise of the Licensed Rights is expressly made subject to the
|
||||
following conditions.
|
||||
|
||||
a. Attribution.
|
||||
|
||||
1. If You Share the Licensed Material (including in modified
|
||||
form), You must:
|
||||
|
||||
a. retain the following if it is supplied by the Licensor
|
||||
with the Licensed Material:
|
||||
|
||||
i. identification of the creator(s) of the Licensed
|
||||
Material and any others designated to receive
|
||||
attribution, in any reasonable manner requested by
|
||||
the Licensor (including by pseudonym if
|
||||
designated);
|
||||
|
||||
ii. a copyright notice;
|
||||
|
||||
iii. a notice that refers to this Public License;
|
||||
|
||||
iv. a notice that refers to the disclaimer of
|
||||
warranties;
|
||||
|
||||
v. a URI or hyperlink to the Licensed Material to the
|
||||
extent reasonably practicable;
|
||||
|
||||
b. indicate if You modified the Licensed Material and
|
||||
retain an indication of any previous modifications; and
|
||||
|
||||
c. indicate the Licensed Material is licensed under this
|
||||
Public License, and include the text of, or the URI or
|
||||
hyperlink to, this Public License.
|
||||
|
||||
2. You may satisfy the conditions in Section 3(a)(1) in any
|
||||
reasonable manner based on the medium, means, and context in
|
||||
which You Share the Licensed Material. For example, it may be
|
||||
reasonable to satisfy the conditions by providing a URI or
|
||||
hyperlink to a resource that includes the required
|
||||
information.
|
||||
|
||||
3. If requested by the Licensor, You must remove any of the
|
||||
information required by Section 3(a)(1)(A) to the extent
|
||||
reasonably practicable.
|
||||
|
||||
b. ShareAlike.
|
||||
|
||||
In addition to the conditions in Section 3(a), if You Share
|
||||
Adapted Material You produce, the following conditions also apply.
|
||||
|
||||
1. The Adapter's License You apply must be a Creative Commons
|
||||
license with the same License Elements, this version or
|
||||
later, or a BY-SA Compatible License.
|
||||
|
||||
2. You must include the text of, or the URI or hyperlink to, the
|
||||
Adapter's License You apply. You may satisfy this condition
|
||||
in any reasonable manner based on the medium, means, and
|
||||
context in which You Share Adapted Material.
|
||||
|
||||
3. You may not offer or impose any additional or different terms
|
||||
or conditions on, or apply any Effective Technological
|
||||
Measures to, Adapted Material that restrict exercise of the
|
||||
rights granted under the Adapter's License You apply.
|
||||
|
||||
|
||||
Section 4 -- Sui Generis Database Rights.
|
||||
|
||||
Where the Licensed Rights include Sui Generis Database Rights that
|
||||
apply to Your use of the Licensed Material:
|
||||
|
||||
a. for the avoidance of doubt, Section 2(a)(1) grants You the right
|
||||
to extract, reuse, reproduce, and Share all or a substantial
|
||||
portion of the contents of the database;
|
||||
|
||||
b. if You include all or a substantial portion of the database
|
||||
contents in a database in which You have Sui Generis Database
|
||||
Rights, then the database in which You have Sui Generis Database
|
||||
Rights (but not its individual contents) is Adapted Material,
|
||||
including for purposes of Section 3(b); and
|
||||
c. You must comply with the conditions in Section 3(a) if You Share
|
||||
all or a substantial portion of the contents of the database.
|
||||
|
||||
For the avoidance of doubt, this Section 4 supplements and does not
|
||||
replace Your obligations under this Public License where the Licensed
|
||||
Rights include other Copyright and Similar Rights.
|
||||
|
||||
|
||||
Section 5 -- Disclaimer of Warranties and Limitation of Liability.
|
||||
|
||||
a. UNLESS OTHERWISE SEPARATELY UNDERTAKEN BY THE LICENSOR, TO THE
|
||||
EXTENT POSSIBLE, THE LICENSOR OFFERS THE LICENSED MATERIAL AS-IS
|
||||
AND AS-AVAILABLE, AND MAKES NO REPRESENTATIONS OR WARRANTIES OF
|
||||
ANY KIND CONCERNING THE LICENSED MATERIAL, WHETHER EXPRESS,
|
||||
IMPLIED, STATUTORY, OR OTHER. THIS INCLUDES, WITHOUT LIMITATION,
|
||||
WARRANTIES OF TITLE, MERCHANTABILITY, FITNESS FOR A PARTICULAR
|
||||
PURPOSE, NON-INFRINGEMENT, ABSENCE OF LATENT OR OTHER DEFECTS,
|
||||
ACCURACY, OR THE PRESENCE OR ABSENCE OF ERRORS, WHETHER OR NOT
|
||||
KNOWN OR DISCOVERABLE. WHERE DISCLAIMERS OF WARRANTIES ARE NOT
|
||||
ALLOWED IN FULL OR IN PART, THIS DISCLAIMER MAY NOT APPLY TO YOU.
|
||||
|
||||
b. TO THE EXTENT POSSIBLE, IN NO EVENT WILL THE LICENSOR BE LIABLE
|
||||
TO YOU ON ANY LEGAL THEORY (INCLUDING, WITHOUT LIMITATION,
|
||||
NEGLIGENCE) OR OTHERWISE FOR ANY DIRECT, SPECIAL, INDIRECT,
|
||||
INCIDENTAL, CONSEQUENTIAL, PUNITIVE, EXEMPLARY, OR OTHER LOSSES,
|
||||
COSTS, EXPENSES, OR DAMAGES ARISING OUT OF THIS PUBLIC LICENSE OR
|
||||
USE OF THE LICENSED MATERIAL, EVEN IF THE LICENSOR HAS BEEN
|
||||
ADVISED OF THE POSSIBILITY OF SUCH LOSSES, COSTS, EXPENSES, OR
|
||||
DAMAGES. WHERE A LIMITATION OF LIABILITY IS NOT ALLOWED IN FULL OR
|
||||
IN PART, THIS LIMITATION MAY NOT APPLY TO YOU.
|
||||
|
||||
c. The disclaimer of warranties and limitation of liability provided
|
||||
above shall be interpreted in a manner that, to the extent
|
||||
possible, most closely approximates an absolute disclaimer and
|
||||
waiver of all liability.
|
||||
|
||||
|
||||
Section 6 -- Term and Termination.
|
||||
|
||||
a. This Public License applies for the term of the Copyright and
|
||||
Similar Rights licensed here. However, if You fail to comply with
|
||||
this Public License, then Your rights under this Public License
|
||||
terminate automatically.
|
||||
|
||||
b. Where Your right to use the Licensed Material has terminated under
|
||||
Section 6(a), it reinstates:
|
||||
|
||||
1. automatically as of the date the violation is cured, provided
|
||||
it is cured within 30 days of Your discovery of the
|
||||
violation; or
|
||||
|
||||
2. upon express reinstatement by the Licensor.
|
||||
|
||||
For the avoidance of doubt, this Section 6(b) does not affect any
|
||||
right the Licensor may have to seek remedies for Your violations
|
||||
of this Public License.
|
||||
|
||||
c. For the avoidance of doubt, the Licensor may also offer the
|
||||
Licensed Material under separate terms or conditions or stop
|
||||
distributing the Licensed Material at any time; however, doing so
|
||||
will not terminate this Public License.
|
||||
|
||||
d. Sections 1, 5, 6, 7, and 8 survive termination of this Public
|
||||
License.
|
||||
|
||||
|
||||
Section 7 -- Other Terms and Conditions.
|
||||
|
||||
a. The Licensor shall not be bound by any additional or different
|
||||
terms or conditions communicated by You unless expressly agreed.
|
||||
|
||||
b. Any arrangements, understandings, or agreements regarding the
|
||||
Licensed Material not stated herein are separate from and
|
||||
independent of the terms and conditions of this Public License.
|
||||
|
||||
|
||||
Section 8 -- Interpretation.
|
||||
|
||||
a. For the avoidance of doubt, this Public License does not, and
|
||||
shall not be interpreted to, reduce, limit, restrict, or impose
|
||||
conditions on any use of the Licensed Material that could lawfully
|
||||
be made without permission under this Public License.
|
||||
|
||||
b. To the extent possible, if any provision of this Public License is
|
||||
deemed unenforceable, it shall be automatically reformed to the
|
||||
minimum extent necessary to make it enforceable. If the provision
|
||||
cannot be reformed, it shall be severed from this Public License
|
||||
without affecting the enforceability of the remaining terms and
|
||||
conditions.
|
||||
|
||||
c. No term or condition of this Public License will be waived and no
|
||||
failure to comply consented to unless expressly agreed to by the
|
||||
Licensor.
|
||||
|
||||
d. Nothing in this Public License constitutes or may be interpreted
|
||||
as a limitation upon, or waiver of, any privileges and immunities
|
||||
that apply to the Licensor or You, including from the legal
|
||||
processes of any jurisdiction or authority.
|
||||
|
||||
|
||||
=======================================================================
|
||||
|
||||
Creative Commons is not a party to its public
|
||||
licenses. Notwithstanding, Creative Commons may elect to apply one of
|
||||
its public licenses to material it publishes and in those instances
|
||||
will be considered the “Licensor.” The text of the Creative Commons
|
||||
public licenses is dedicated to the public domain under the CC0 Public
|
||||
Domain Dedication. Except for the limited purpose of indicating that
|
||||
material is shared under a Creative Commons public license or as
|
||||
otherwise permitted by the Creative Commons policies published at
|
||||
creativecommons.org/policies, Creative Commons does not authorize the
|
||||
use of the trademark "Creative Commons" or any other trademark or logo
|
||||
of Creative Commons without its prior written consent including,
|
||||
without limitation, in connection with any unauthorized modifications
|
||||
to any of its public licenses or any other arrangements,
|
||||
understandings, or agreements concerning use of licensed material. For
|
||||
the avoidance of doubt, this paragraph does not form part of the
|
||||
public licenses.
|
||||
|
||||
Creative Commons may be contacted at creativecommons.org.
|
||||
|
|
@ -1,34 +0,0 @@
|
|||
## Examples of textcat training data
|
||||
|
||||
The spaCy JSON training files were generated from the JSONL files with:
|
||||
|
||||
```
|
||||
python textcatjsonl_to_trainjson.py -m en file.jsonl .
|
||||
```
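
Each line of the JSONL input is a standalone JSON object with `text`, `cats` and `meta` fields (see the example data below). A minimal sketch of writing one such line with `srsly`, using the same field layout and values as the first entry of `cooking.jsonl`:

```
import srsly

# One training example per line: raw text, category scores and an arbitrary meta dict.
example = {
    "text": "How should I cook bacon in an oven?",
    "cats": {"baking": 0.0, "not_baking": 1.0},
    "meta": {"id": "2"},
}
with open("file.jsonl", "a", encoding="utf8") as f:
    f.write(srsly.json_dumps(example) + "\n")
```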
|
||||
|
||||
`cooking.json` is an example with mutually exclusive classes and two labels:
|
||||
|
||||
* `baking`
|
||||
* `not_baking`
|
||||
|
||||
`jigsaw-toxic-comment.json` is an example with multiple labels per instance:
|
||||
|
||||
* `insult`
|
||||
* `obscene`
|
||||
* `severe_toxic`
|
||||
* `toxic`
|
||||
|
||||
### Data Sources
|
||||
|
||||
* `cooking.jsonl`: https://cooking.stackexchange.com. The meta IDs link to the
|
||||
original question as `https://cooking.stackexchange.com/questions/ID`, e.g.,
|
||||
`https://cooking.stackexchange.com/questions/2` for the first instance.
|
||||
* `jigsaw-toxic-comment.jsonl`: [Jigsaw Toxic Comments Classification
|
||||
Challenge](https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge)
|
||||
|
||||
### Data Licenses
|
||||
|
||||
* `cooking.jsonl`: CC BY-SA 4.0 ([`CC_BY-SA-4.0.txt`](CC_BY-SA-4.0.txt))
|
||||
* `jigsaw-toxic-comment.jsonl`:
|
||||
* text: CC BY-SA 3.0 ([`CC_BY-SA-3.0.txt`](CC_BY-SA-3.0.txt))
|
||||
* annotation: CC0 ([`CC0.txt`](CC0.txt))
|
File diff suppressed because it is too large
|
@ -1,10 +0,0 @@
|
|||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "2"}, "text": "How should I cook bacon in an oven?\nI've heard of people cooking bacon in an oven by laying the strips out on a cookie sheet. When using this method, how long should I cook the bacon for, and at what temperature?\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "3"}, "text": "What is the difference between white and brown eggs?\nI always use brown extra large eggs, but I can't honestly say why I do this other than habit at this point. Are there any distinct advantages or disadvantages like flavor, shelf life, etc?\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "4"}, "text": "What is the difference between baking soda and baking powder?\nAnd can I use one in place of the other in certain recipes?\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "5"}, "text": "In a tomato sauce recipe, how can I cut the acidity?\nIt seems that every time I make a tomato sauce for pasta, the sauce is a little bit too acid for my taste. I've tried using sugar or sodium bicarbonate, but I'm not satisfied with the results.\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "6"}, "text": "What ingredients (available in specific regions) can I substitute for parsley?\nI have a recipe that calls for fresh parsley. I have substituted other fresh herbs for their dried equivalents but I don't have fresh or dried parsley. Is there something else (ex another dried herb) that I can use instead of parsley?\nI know it is used mainly for looks rather than taste but I have a pasta recipe that calls for 2 tablespoons of parsley in the sauce and then another 2 tablespoons on top when it is done. I know the parsley on top is more for looks but there must be something about the taste otherwise it would call for parsley within the sauce as well.\nI would especially like to hear about substitutes available in Southeast Asia and other parts of the world where the obvious answers (such as cilantro) are not widely available.\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "9"}, "text": "What is the internal temperature a steak should be cooked to for Rare/Medium Rare/Medium/Well?\nI'd like to know when to take my steaks off the grill and please everybody.\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "11"}, "text": "How should I poach an egg?\nWhat's the best method to poach an egg without it turning into an eggy soupy mess?\n"}
|
||||
{"cats": {"baking": 0.0, "not_baking": 1.0}, "meta": {"id": "12"}, "text": "How can I make my Ice Cream \"creamier\"\nMy ice cream doesn't feel creamy enough. I got the recipe from Good Eats, and I can't tell if it's just the recipe or maybe that I'm just not getting my \"batter\" cold enough before I try to make it (I let it chill overnight in the refrigerator, but it doesn't always come out of the machine looking like \"soft serve\" as he said on the show - it's usually a little thinner).\nRecipe: http://www.foodnetwork.com/recipes/alton-brown/serious-vanilla-ice-cream-recipe/index.html\nThanks!\n"}
|
||||
{"cats": {"baking": 1.0, "not_baking": 0.0}, "meta": {"id": "17"}, "text": "How long and at what temperature do the various parts of a chicken need to be cooked?\nI'm interested in baking thighs, legs, breasts and wings. How long do each of these items need to bake and at what temperature?\n"}
|
||||
{"cats": {"baking": 1.0, "not_baking": 0.0}, "meta": {"id": "27"}, "text": "Do I need to sift flour that is labeled sifted?\nIs there really an advantage to sifting flour that I bought that was labeled 'sifted'?\n"}
|
File diff suppressed because it is too large
Load Diff
|
@ -1,10 +0,0 @@
|
|||
{"meta": {"id": "0000997932d777bf"}, "text": "Explanation\nWhy the edits made under my username Hardcore Metallica Fan were reverted? They weren't vandalisms, just closure on some GAs after I voted at New York Dolls FAC. And please don't remove the template from the talk page since I'm retired now.89.205.38.27", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "001956c382006abd"}, "text": "I'm Sorry \n\nI'm sorry I screwed around with someones talk page. It was very bad to do. I know how having the templates on their talk page helps you assert your dominance over them. I know I should bow down to the almighty administrators. But then again, I'm going to go play outside....with your mom. 76.122.79.82", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 1}}
|
||||
{"meta": {"id": "0020e7119b96eeeb"}, "text": "Stupid peace of shit stop deleting my stuff asshole go die and fall in a hole go to hell!", "cats": {"insult": 1, "obscene": 1, "severe_toxic": 1, "toxic": 1}}
|
||||
{"meta": {"id": "000103f0d9cfb60f"}, "text": "D'aww! He matches this background colour I'm seemingly stuck with. Thanks. (talk) 21:51, January 11, 2016 (UTC)", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "001dc38a83d420cf"}, "text": "GET FUCKED UP. GET FUCKEEED UP. GOT A DRINK THAT YOU CANT PUT DOWN???/ GET FUCK UP GET FUCKED UP. I'M FUCKED UP RIGHT NOW!", "cats": {"insult": 0, "obscene": 1, "severe_toxic": 0, "toxic": 1}}
|
||||
{"meta": {"id": "000113f07ec002fd"}, "text": "Hey man, I'm really not trying to edit war. It's just that this guy is constantly removing relevant information and talking to me through edits instead of my talk page. He seems to care more about the formatting than the actual info.", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "0001b41b1c6bb37e"}, "text": "\"\nMore\nI can't make any real suggestions on improvement - I wondered if the section statistics should be later on, or a subsection of \"\"types of accidents\"\" -I think the references may need tidying so that they are all in the exact same format ie date format etc. I can do that later on, if no-one else does first - if you have any preferences for formatting style on references or want to do it yourself please let me know.\n\nThere appears to be a backlog on articles for review so I guess there may be a delay until a reviewer turns up. It's listed in the relevant form eg Wikipedia:Good_article_nominations#Transport \"", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "0001d958c54c6e35"}, "text": "You, sir, are my hero. Any chance you remember what page that's on?", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "00025465d4725e87"}, "text": "\"\n\nCongratulations from me as well, use the tools well. · talk \"", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 0}}
|
||||
{"meta": {"id": "002264ea4d5f2887"}, "text": "Why can't you believe how fat Artie is? Did you see him on his recent appearence on the Tonight Show with Jay Leno? He looks absolutely AWFUL! If I had to put money on it, I'd say that Artie Lange is a can't miss candidate for the 2007 Dead pool! \n\n \nKindly keep your malicious fingers off of my above comment, . Everytime you remove it, I will repost it!!!", "cats": {"insult": 0, "obscene": 0, "severe_toxic": 0, "toxic": 1}}
|
|
@ -1,53 +0,0 @@
|
|||
from pathlib import Path
|
||||
import plac
|
||||
import spacy
|
||||
from spacy.gold import docs_to_json
|
||||
import srsly
|
||||
import sys
|
||||
|
||||
@plac.annotations(
|
||||
model=("Model name. Defaults to 'en'.", "option", "m", str),
|
||||
input_file=("Input file (jsonl)", "positional", None, Path),
|
||||
output_dir=("Output directory", "positional", None, Path),
|
||||
n_texts=("Number of texts to convert", "option", "t", int),
|
||||
)
|
||||
def convert(model='en', input_file=None, output_dir=None, n_texts=0):
|
||||
# Load model with tokenizer + sentencizer only
|
||||
nlp = spacy.load(model)
|
||||
nlp.disable_pipes(*nlp.pipe_names)
|
||||
sentencizer = nlp.create_pipe("sentencizer")
|
||||
nlp.add_pipe(sentencizer, first=True)
|
||||
|
||||
texts = []
|
||||
cats = []
|
||||
count = 0
|
||||
|
||||
if not input_file.exists():
|
||||
print("Input file not found:", input_file)
|
||||
sys.exit(1)
|
||||
else:
|
||||
with open(input_file) as fileh:
|
||||
for line in fileh:
|
||||
data = srsly.json_loads(line)
|
||||
texts.append(data["text"])
|
||||
cats.append(data["cats"])
|
||||
|
||||
if output_dir is not None:
|
||||
output_dir = Path(output_dir)
|
||||
if not output_dir.exists():
|
||||
output_dir.mkdir()
|
||||
else:
|
||||
output_dir = Path(".")
|
||||
|
||||
docs = []
|
||||
for i, doc in enumerate(nlp.pipe(texts)):
|
||||
doc.cats = cats[i]
|
||||
docs.append(doc)
|
||||
if n_texts > 0 and count == n_texts:
|
||||
break
|
||||
count += 1
|
||||
|
||||
srsly.write_json(output_dir / input_file.with_suffix(".json"), [docs_to_json(docs)])
|
||||
|
||||
if __name__ == "__main__":
|
||||
plac.call(convert)
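
A minimal sketch for sanity-checking the converter's output, assuming the input was `cooking.jsonl` so that the script above writes `cooking.json` as a one-element list produced by `docs_to_json`:

```
import srsly

# The script writes [docs_to_json(docs)]: a list holding a single corpus entry.
data = srsly.read_json("cooking.json")
corpus = data[0]
print(len(corpus["paragraphs"]), "paragraphs converted")
# Each paragraph holds the raw text plus the category annotations.
print(sorted(corpus["paragraphs"][0].keys()))
```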
|
|
@ -8,8 +8,8 @@ For more details, see the documentation:
|
|||
* Training: https://spacy.io/usage/training
|
||||
* Entity Linking: https://spacy.io/usage/linguistic-features#entity-linking
|
||||
|
||||
Compatible with: spaCy v2.2
|
||||
Last tested with: v2.2
|
||||
Compatible with: spaCy vX.X
|
||||
Last tested with: vX.X
|
||||
"""
|
||||
from __future__ import unicode_literals, print_function
|
||||
|
||||
|
|
|
@ -8,7 +8,7 @@
|
|||
{
|
||||
"tokens": [
|
||||
{
|
||||
"head": 4,
|
||||
"head": 44,
|
||||
"dep": "prep",
|
||||
"tag": "IN",
|
||||
"orth": "In",
|
||||
|
|
122
fabfile.py
vendored
122
fabfile.py
vendored
|
@ -10,145 +10,113 @@ import sys
|
|||
|
||||
|
||||
PWD = path.dirname(__file__)
|
||||
ENV = environ["VENV_DIR"] if "VENV_DIR" in environ else ".env"
|
||||
ENV = environ['VENV_DIR'] if 'VENV_DIR' in environ else '.env'
|
||||
VENV_DIR = Path(PWD) / ENV
|
||||
|
||||
|
||||
@contextlib.contextmanager
|
||||
def virtualenv(name, create=False, python="/usr/bin/python3.6"):
|
||||
def virtualenv(name, create=False, python='/usr/bin/python3.6'):
|
||||
python = Path(python).resolve()
|
||||
env_path = VENV_DIR
|
||||
if create:
|
||||
if env_path.exists():
|
||||
shutil.rmtree(str(env_path))
|
||||
local("{python} -m venv {env_path}".format(python=python, env_path=VENV_DIR))
|
||||
|
||||
local('{python} -m venv {env_path}'.format(python=python, env_path=VENV_DIR))
|
||||
def wrapped_local(cmd, env_vars=[], capture=False, direct=False):
|
||||
return local(
|
||||
"source {}/bin/activate && {}".format(env_path, cmd),
|
||||
shell="/bin/bash",
|
||||
capture=False,
|
||||
)
|
||||
|
||||
return local('source {}/bin/activate && {}'.format(env_path, cmd),
|
||||
shell='/bin/bash', capture=False)
|
||||
yield wrapped_local
|
||||
|
||||
|
||||
def env(lang="python3.6"):
|
||||
def env(lang='python3.6'):
|
||||
if VENV_DIR.exists():
|
||||
local("rm -rf {env}".format(env=VENV_DIR))
|
||||
if lang.startswith("python3"):
|
||||
local("{lang} -m venv {env}".format(lang=lang, env=VENV_DIR))
|
||||
local('rm -rf {env}'.format(env=VENV_DIR))
|
||||
if lang.startswith('python3'):
|
||||
local('{lang} -m venv {env}'.format(lang=lang, env=VENV_DIR))
|
||||
else:
|
||||
local("{lang} -m pip install virtualenv --no-cache-dir".format(lang=lang))
|
||||
local(
|
||||
"{lang} -m virtualenv {env} --no-cache-dir".format(lang=lang, env=VENV_DIR)
|
||||
)
|
||||
local('{lang} -m pip install virtualenv --no-cache-dir'.format(lang=lang))
|
||||
local('{lang} -m virtualenv {env} --no-cache-dir'.format(lang=lang, env=VENV_DIR))
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
print(venv_local("python --version", capture=True))
|
||||
venv_local("pip install --upgrade setuptools --no-cache-dir")
|
||||
venv_local("pip install pytest --no-cache-dir")
|
||||
venv_local("pip install wheel --no-cache-dir")
|
||||
venv_local("pip install -r requirements.txt --no-cache-dir")
|
||||
venv_local("pip install pex --no-cache-dir")
|
||||
print(venv_local('python --version', capture=True))
|
||||
venv_local('pip install --upgrade setuptools --no-cache-dir')
|
||||
venv_local('pip install pytest --no-cache-dir')
|
||||
venv_local('pip install wheel --no-cache-dir')
|
||||
venv_local('pip install -r requirements.txt --no-cache-dir')
|
||||
venv_local('pip install pex --no-cache-dir')
|
||||
|
||||
|
||||
|
||||
def install():
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
venv_local("pip install dist/*.tar.gz")
|
||||
venv_local('pip install dist/*.tar.gz')
|
||||
|
||||
|
||||
def make():
|
||||
with lcd(path.dirname(__file__)):
|
||||
local(
|
||||
"export PYTHONPATH=`pwd` && source .env/bin/activate && python setup.py build_ext --inplace",
|
||||
shell="/bin/bash",
|
||||
)
|
||||
|
||||
local('export PYTHONPATH=`pwd` && source .env/bin/activate && python setup.py build_ext --inplace',
|
||||
shell='/bin/bash')
|
||||
|
||||
def sdist():
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
with lcd(path.dirname(__file__)):
|
||||
local("python -m pip install -U setuptools srsly")
|
||||
local("python setup.py sdist")
|
||||
|
||||
local('python -m pip install -U setuptools')
|
||||
local('python setup.py sdist')
|
||||
|
||||
def wheel():
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
with lcd(path.dirname(__file__)):
|
||||
venv_local("python setup.py bdist_wheel")
|
||||
|
||||
venv_local('python setup.py bdist_wheel')
|
||||
|
||||
def pex():
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
with lcd(path.dirname(__file__)):
|
||||
sha = local("git rev-parse --short HEAD", capture=True)
|
||||
venv_local(
|
||||
"pex dist/*.whl -e spacy -o dist/spacy-%s.pex" % sha, direct=True
|
||||
)
|
||||
sha = local('git rev-parse --short HEAD', capture=True)
|
||||
venv_local('pex dist/*.whl -e spacy -o dist/spacy-%s.pex' % sha,
|
||||
direct=True)
|
||||
|
||||
|
||||
def clean():
|
||||
with lcd(path.dirname(__file__)):
|
||||
local("rm -f dist/*.whl")
|
||||
local("rm -f dist/*.pex")
|
||||
local('rm -f dist/*.whl')
|
||||
local('rm -f dist/*.pex')
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
venv_local("python setup.py clean --all")
|
||||
venv_local('python setup.py clean --all')
|
||||
|
||||
|
||||
def test():
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
with lcd(path.dirname(__file__)):
|
||||
venv_local("pytest -x spacy/tests")
|
||||
|
||||
venv_local('pytest -x spacy/tests')
|
||||
|
||||
def train():
|
||||
args = environ.get("SPACY_TRAIN_ARGS", "")
|
||||
args = environ.get('SPACY_TRAIN_ARGS', '')
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
venv_local("spacy train {args}".format(args=args))
|
||||
venv_local('spacy train {args}'.format(args=args))
|
||||
|
||||
|
||||
def conll17(treebank_dir, experiment_dir, vectors_dir, config, corpus=""):
|
||||
is_not_clean = local("git status --porcelain", capture=True)
|
||||
def conll17(treebank_dir, experiment_dir, vectors_dir, config, corpus=''):
|
||||
is_not_clean = local('git status --porcelain', capture=True)
|
||||
if is_not_clean:
|
||||
print("Repository is not clean")
|
||||
print(is_not_clean)
|
||||
sys.exit(1)
|
||||
git_sha = local("git rev-parse --short HEAD", capture=True)
|
||||
config_checksum = local("sha256sum {config}".format(config=config), capture=True)
|
||||
experiment_dir = Path(experiment_dir) / "{}--{}".format(
|
||||
config_checksum[:6], git_sha
|
||||
)
|
||||
git_sha = local('git rev-parse --short HEAD', capture=True)
|
||||
config_checksum = local('sha256sum {config}'.format(config=config), capture=True)
|
||||
experiment_dir = Path(experiment_dir) / '{}--{}'.format(config_checksum[:6], git_sha)
|
||||
if not experiment_dir.exists():
|
||||
experiment_dir.mkdir()
|
||||
test_data_dir = Path(treebank_dir) / "ud-test-v2.0-conll2017"
|
||||
test_data_dir = Path(treebank_dir) / 'ud-test-v2.0-conll2017'
|
||||
assert test_data_dir.exists()
|
||||
assert test_data_dir.is_dir()
|
||||
if corpus:
|
||||
corpora = [corpus]
|
||||
else:
|
||||
corpora = ["UD_English", "UD_Chinese", "UD_Japanese", "UD_Vietnamese"]
|
||||
corpora = ['UD_English', 'UD_Chinese', 'UD_Japanese', 'UD_Vietnamese']
|
||||
|
||||
local(
|
||||
"cp {config} {experiment_dir}/config.json".format(
|
||||
config=config, experiment_dir=experiment_dir
|
||||
)
|
||||
)
|
||||
local('cp {config} {experiment_dir}/config.json'.format(config=config, experiment_dir=experiment_dir))
|
||||
with virtualenv(VENV_DIR) as venv_local:
|
||||
for corpus in corpora:
|
||||
venv_local(
|
||||
"spacy ud-train {treebank_dir} {experiment_dir} {config} {corpus} -v {vectors_dir}".format(
|
||||
treebank_dir=treebank_dir,
|
||||
experiment_dir=experiment_dir,
|
||||
config=config,
|
||||
corpus=corpus,
|
||||
vectors_dir=vectors_dir,
|
||||
)
|
||||
)
|
||||
venv_local(
|
||||
"spacy ud-run-test {test_data_dir} {experiment_dir} {corpus}".format(
|
||||
test_data_dir=test_data_dir,
|
||||
experiment_dir=experiment_dir,
|
||||
config=config,
|
||||
corpus=corpus,
|
||||
)
|
||||
)
|
||||
venv_local('spacy ud-train {treebank_dir} {experiment_dir} {config} {corpus} -v {vectors_dir}'.format(
|
||||
treebank_dir=treebank_dir, experiment_dir=experiment_dir, config=config, corpus=corpus, vectors_dir=vectors_dir))
|
||||
venv_local('spacy ud-run-test {test_data_dir} {experiment_dir} {corpus}'.format(
|
||||
test_data_dir=test_data_dir, experiment_dir=experiment_dir, config=config, corpus=corpus))
|
||||
|
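The conll17 task above refuses to run on a dirty working tree and stamps each experiment directory with the first six characters of the config's sha256 checksum plus the short git SHA, so results stay traceable to the exact config and code that produced them. A hedged sketch of that naming scheme; experiment_dir_name and base_dir are illustrative, not spaCy API.

import hashlib
import subprocess
from pathlib import Path

def experiment_dir_name(config_path, base_dir="experiments"):
    # Same recipe as above: sha256 of the config file plus the short git SHA.
    config_checksum = hashlib.sha256(Path(config_path).read_bytes()).hexdigest()
    git_sha = subprocess.check_output(
        ["git", "rev-parse", "--short", "HEAD"], text=True
    ).strip()
    return Path(base_dir) / "{}--{}".format(config_checksum[:6], git_sha)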
|
|
@@ -1,8 +1,8 @@
 # Our libraries
 cymem>=2.0.2,<2.1.0
-preshed>=3.0.2,<3.1.0
-thinc>=7.1.1,<7.2.0
-blis>=0.4.0,<0.5.0
+preshed>=2.0.1,<2.1.0
+thinc>=7.0.8,<7.1.0
+blis>=0.2.2,<0.3.0
 murmurhash>=0.28.0,<1.1.0
 wasabi>=0.2.0,<1.1.0
 srsly>=0.1.0,<1.1.0

setup.py (11 changes)
|
@ -27,7 +27,7 @@ def is_new_osx():
|
|||
return False
|
||||
|
||||
|
||||
PACKAGE_DATA = {"": ["*.pyx", "*.pxd", "*.txt", "*.tokens", "*.json", "*.json.gz"]}
|
||||
PACKAGE_DATA = {"": ["*.pyx", "*.pxd", "*.txt", "*.tokens", "*.json"]}
|
||||
|
||||
|
||||
PACKAGES = find_packages()
|
||||
|
@ -43,7 +43,6 @@ MOD_NAMES = [
|
|||
"spacy.kb",
|
||||
"spacy.morphology",
|
||||
"spacy.pipeline.pipes",
|
||||
"spacy.pipeline.morphologizer",
|
||||
"spacy.syntax.stateclass",
|
||||
"spacy.syntax._state",
|
||||
"spacy.tokenizer",
|
||||
|
@ -57,7 +56,6 @@ MOD_NAMES = [
|
|||
"spacy.tokens.doc",
|
||||
"spacy.tokens.span",
|
||||
"spacy.tokens.token",
|
||||
"spacy.tokens.morphanalysis",
|
||||
"spacy.tokens._retokenize",
|
||||
"spacy.matcher.matcher",
|
||||
"spacy.matcher.phrasematcher",
|
||||
|
@ -247,9 +245,9 @@ def setup_package():
|
|||
"numpy>=1.15.0",
|
||||
"murmurhash>=0.28.0,<1.1.0",
|
||||
"cymem>=2.0.2,<2.1.0",
|
||||
"preshed>=3.0.2,<3.1.0",
|
||||
"thinc>=7.1.1,<7.2.0",
|
||||
"blis>=0.4.0,<0.5.0",
|
||||
"preshed>=2.0.1,<2.1.0",
|
||||
"thinc>=7.0.8,<7.1.0",
|
||||
"blis>=0.2.2,<0.3.0",
|
||||
"plac<1.0.0,>=0.9.6",
|
||||
"requests>=2.13.0,<3.0.0",
|
||||
"wasabi>=0.2.0,<1.1.0",
|
||||
|
@ -283,6 +281,7 @@ def setup_package():
|
|||
"Programming Language :: Python :: 2",
|
||||
"Programming Language :: Python :: 2.7",
|
||||
"Programming Language :: Python :: 3",
|
||||
"Programming Language :: Python :: 3.4",
|
||||
"Programming Language :: Python :: 3.5",
|
||||
"Programming Language :: Python :: 3.6",
|
||||
"Programming Language :: Python :: 3.7",
|
||||
|
|
spacy/_ml.py (161 changes)
|
@ -15,7 +15,7 @@ from thinc.api import uniqued, wrap, noop
|
|||
from thinc.api import with_square_sequences
|
||||
from thinc.linear.linear import LinearModel
|
||||
from thinc.neural.ops import NumpyOps, CupyOps
|
||||
from thinc.neural.util import get_array_module, copy_array
|
||||
from thinc.neural.util import get_array_module
|
||||
from thinc.neural.optimizers import Adam
|
||||
|
||||
from thinc import describe
|
||||
|
@ -286,7 +286,10 @@ def link_vectors_to_models(vocab):
|
|||
if vectors.name is None:
|
||||
vectors.name = VECTORS_KEY
|
||||
if vectors.data.size != 0:
|
||||
user_warning(Warnings.W020.format(shape=vectors.data.shape))
|
||||
print(
|
||||
"Warning: Unnamed vectors -- this won't allow multiple vectors "
|
||||
"models to be loaded. (Shape: (%d, %d))" % vectors.data.shape
|
||||
)
|
||||
ops = Model.ops
|
||||
for word in vocab:
|
||||
if word.orth in vectors.key2row:
|
||||
|
@ -320,9 +323,6 @@ def Tok2Vec(width, embed_size, **kwargs):
|
|||
pretrained_vectors = kwargs.get("pretrained_vectors", None)
|
||||
cnn_maxout_pieces = kwargs.get("cnn_maxout_pieces", 3)
|
||||
subword_features = kwargs.get("subword_features", True)
|
||||
char_embed = kwargs.get("char_embed", False)
|
||||
if char_embed:
|
||||
subword_features = False
|
||||
conv_depth = kwargs.get("conv_depth", 4)
|
||||
bilstm_depth = kwargs.get("bilstm_depth", 0)
|
||||
cols = [ID, NORM, PREFIX, SUFFIX, SHAPE, ORTH]
|
||||
|
@ -362,14 +362,6 @@ def Tok2Vec(width, embed_size, **kwargs):
|
|||
>> LN(Maxout(width, width * 4, pieces=3)),
|
||||
column=cols.index(ORTH),
|
||||
)
|
||||
elif char_embed:
|
||||
embed = concatenate_lists(
|
||||
CharacterEmbed(nM=64, nC=8),
|
||||
FeatureExtracter(cols) >> with_flatten(norm),
|
||||
)
|
||||
reduce_dimensions = LN(
|
||||
Maxout(width, 64 * 8 + width, pieces=cnn_maxout_pieces)
|
||||
)
|
||||
else:
|
||||
embed = norm
|
||||
|
||||
|
@ -377,15 +369,9 @@ def Tok2Vec(width, embed_size, **kwargs):
|
|||
ExtractWindow(nW=1)
|
||||
>> LN(Maxout(width, width * 3, pieces=cnn_maxout_pieces))
|
||||
)
|
||||
if char_embed:
|
||||
tok2vec = embed >> with_flatten(
|
||||
reduce_dimensions >> convolution ** conv_depth, pad=conv_depth
|
||||
)
|
||||
else:
|
||||
tok2vec = FeatureExtracter(cols) >> with_flatten(
|
||||
embed >> convolution ** conv_depth, pad=conv_depth
|
||||
)
|
||||
|
||||
tok2vec = FeatureExtracter(cols) >> with_flatten(
|
||||
embed >> convolution ** conv_depth, pad=conv_depth
|
||||
)
|
||||
if bilstm_depth >= 1:
|
||||
tok2vec = tok2vec >> PyTorchBiLSTM(width, width, bilstm_depth)
|
||||
# Work around thinc API limitations :(. TODO: Revise in Thinc 7
|
||||
|
@ -518,46 +504,6 @@ def getitem(i):
|
|||
return layerize(getitem_fwd)
|
||||
|
||||
|
||||
@describe.attributes(
|
||||
W=Synapses("Weights matrix", lambda obj: (obj.nO, obj.nI), lambda W, ops: None)
|
||||
)
|
||||
class MultiSoftmax(Affine):
|
||||
"""Neural network layer that predicts several multi-class attributes at once.
|
||||
For instance, we might predict one class with 6 variables, and another with 5.
|
||||
We predict the 11 neurons required for this, and then softmax them such
|
||||
that columns 0-6 make a probability distribution and columns 6-11 make another.
|
||||
"""
|
||||
|
||||
name = "multisoftmax"
|
||||
|
||||
def __init__(self, out_sizes, nI=None, **kwargs):
|
||||
Model.__init__(self, **kwargs)
|
||||
self.out_sizes = out_sizes
|
||||
self.nO = sum(out_sizes)
|
||||
self.nI = nI
|
||||
|
||||
def predict(self, input__BI):
|
||||
output__BO = self.ops.affine(self.W, self.b, input__BI)
|
||||
i = 0
|
||||
for out_size in self.out_sizes:
|
||||
self.ops.softmax(output__BO[:, i : i + out_size], inplace=True)
|
||||
i += out_size
|
||||
return output__BO
|
||||
|
||||
def begin_update(self, input__BI, drop=0.0):
|
||||
output__BO = self.predict(input__BI)
|
||||
|
||||
def finish_update(grad__BO, sgd=None):
|
||||
self.d_W += self.ops.gemm(grad__BO, input__BI, trans1=True)
|
||||
self.d_b += grad__BO.sum(axis=0)
|
||||
grad__BI = self.ops.gemm(grad__BO, self.W)
|
||||
if sgd is not None:
|
||||
sgd(self._mem.weights, self._mem.gradient, key=self.id)
|
||||
return grad__BI
|
||||
|
||||
return output__BO, finish_update
|
||||
|
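The MultiSoftmax docstring above describes splitting one affine output into column groups and softmaxing each group separately, so every attribute gets its own probability distribution. A small numpy sketch of that predict step; multi_softmax and the shapes are illustrative, not the thinc layer itself.

import numpy as np

def multi_softmax(scores, out_sizes=(6, 5)):
    scores = scores.copy()
    i = 0
    for out_size in out_sizes:
        block = scores[:, i:i + out_size]
        block -= block.max(axis=1, keepdims=True)  # numerical stability
        np.exp(block, out=block)
        block /= block.sum(axis=1, keepdims=True)
        i += out_size
    return scores

probs = multi_softmax(np.random.randn(4, 11))
assert np.allclose(probs[:, :6].sum(axis=1), 1.0)   # first attribute
assert np.allclose(probs[:, 6:].sum(axis=1), 1.0)   # second attribute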
||||
|
||||
def build_tagger_model(nr_class, **cfg):
|
||||
embed_size = util.env_opt("embed_size", 2000)
|
||||
if "token_vector_width" in cfg:
|
||||
|
@ -584,33 +530,6 @@ def build_tagger_model(nr_class, **cfg):
|
|||
return model
|
||||
|
||||
|
||||
def build_morphologizer_model(class_nums, **cfg):
|
||||
embed_size = util.env_opt("embed_size", 7000)
|
||||
if "token_vector_width" in cfg:
|
||||
token_vector_width = cfg["token_vector_width"]
|
||||
else:
|
||||
token_vector_width = util.env_opt("token_vector_width", 128)
|
||||
pretrained_vectors = cfg.get("pretrained_vectors")
|
||||
char_embed = cfg.get("char_embed", True)
|
||||
with Model.define_operators({">>": chain, "+": add, "**": clone}):
|
||||
if "tok2vec" in cfg:
|
||||
tok2vec = cfg["tok2vec"]
|
||||
else:
|
||||
tok2vec = Tok2Vec(
|
||||
token_vector_width,
|
||||
embed_size,
|
||||
char_embed=char_embed,
|
||||
pretrained_vectors=pretrained_vectors,
|
||||
)
|
||||
softmax = with_flatten(MultiSoftmax(class_nums, token_vector_width))
|
||||
softmax.out_sizes = class_nums
|
||||
model = tok2vec >> softmax
|
||||
model.nI = None
|
||||
model.tok2vec = tok2vec
|
||||
model.softmax = softmax
|
||||
return model
|
||||
|
||||
|
||||
@layerize
|
||||
def SpacyVectors(docs, drop=0.0):
|
||||
batch = []
|
||||
|
@ -801,8 +720,7 @@ def concatenate_lists(*layers, **kwargs): # pragma: no cover
|
|||
concat = concatenate(*layers)
|
||||
|
||||
def concatenate_lists_fwd(Xs, drop=0.0):
|
||||
if drop is not None:
|
||||
drop *= drop_factor
|
||||
drop *= drop_factor
|
||||
lengths = ops.asarray([len(X) for X in Xs], dtype="i")
|
||||
flat_y, bp_flat_y = concat.begin_update(Xs, drop=drop)
|
||||
ys = ops.unflatten(flat_y, lengths)
|
||||
|
@ -892,67 +810,6 @@ def _replace_word(word, random_words, mask="[MASK]"):
|
|||
return word
|
||||
|
||||
|
||||
def _uniform_init(lo, hi):
|
||||
def wrapped(W, ops):
|
||||
copy_array(W, ops.xp.random.uniform(lo, hi, W.shape))
|
||||
|
||||
return wrapped
|
||||
|
||||
|
||||
@describe.attributes(
|
||||
nM=Dimension("Vector dimensions"),
|
||||
nC=Dimension("Number of characters per word"),
|
||||
vectors=Synapses(
|
||||
"Embed matrix", lambda obj: (obj.nC, obj.nV, obj.nM), _uniform_init(-0.1, 0.1)
|
||||
),
|
||||
d_vectors=Gradient("vectors"),
|
||||
)
|
||||
class CharacterEmbed(Model):
|
||||
def __init__(self, nM=None, nC=None, **kwargs):
|
||||
Model.__init__(self, **kwargs)
|
||||
self.nM = nM
|
||||
self.nC = nC
|
||||
|
||||
@property
|
||||
def nO(self):
|
||||
return self.nM * self.nC
|
||||
|
||||
@property
|
||||
def nV(self):
|
||||
return 256
|
||||
|
||||
def begin_update(self, docs, drop=0.0):
|
||||
if not docs:
|
||||
return []
|
||||
ids = []
|
||||
output = []
|
||||
weights = self.vectors
|
||||
# This assists in indexing; it's like looping over this dimension.
|
||||
# Still consider this weird witchcraft... but thanks to Mark Neumann
|
||||
# for the tip.
|
||||
nCv = self.ops.xp.arange(self.nC)
|
||||
for doc in docs:
|
||||
doc_ids = doc.to_utf8_array(nr_char=self.nC)
|
||||
doc_vectors = self.ops.allocate((len(doc), self.nC, self.nM))
|
||||
# Let's say I have a 2d array of indices, and a 3d table of data. What numpy
|
||||
# incantation do I chant to get
|
||||
# output[i, j, k] == data[j, ids[i, j], k]?
|
||||
doc_vectors[:, nCv] = weights[nCv, doc_ids[:, nCv]]
|
||||
output.append(doc_vectors.reshape((len(doc), self.nO)))
|
||||
ids.append(doc_ids)
|
||||
|
||||
def backprop_character_embed(d_vectors, sgd=None):
|
||||
gradient = self.d_vectors
|
||||
for doc_ids, d_doc_vectors in zip(ids, d_vectors):
|
||||
d_doc_vectors = d_doc_vectors.reshape((len(doc_ids), self.nC, self.nM))
|
||||
gradient[nCv, doc_ids[:, nCv]] += d_doc_vectors[:, nCv]
|
||||
if sgd is not None:
|
||||
sgd(self._mem.weights, self._mem.gradient, key=self.id)
|
||||
return None
|
||||
|
||||
return output, backprop_character_embed
|
||||
|
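The comment in CharacterEmbed.begin_update() above asks for the numpy incantation that gives output[i, j, k] == data[j, ids[i, j], k]; the answer is the arange-based indexing it then uses. A sketch with made-up sizes:

import numpy as np

nC, nV, nM = 8, 256, 64        # chars per word, byte vocab, vector width
table = np.random.randn(nC, nV, nM)
ids = np.random.randint(0, nV, size=(10, nC))   # 10 words, nC chars each

nCv = np.arange(nC)
output = table[nCv, ids[:, nCv]]   # shape (10, nC, nM)

i, j = 3, 5
assert np.allclose(output[i, j], table[j, ids[i, j]])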
||||
|
||||
def get_cossim_loss(yh, y):
|
||||
# Add a small constant to avoid 0 vectors
|
||||
yh = yh + 1e-8
|
||||
|
|
|
@ -1,12 +1,16 @@
|
|||
# inspired from:
|
||||
# https://python-packaging-user-guide.readthedocs.org/en/latest/single_source_version/
|
||||
# https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
|
||||
# fmt: off
|
||||
|
||||
__title__ = "spacy"
|
||||
__version__ = "2.2.0.dev15"
|
||||
__summary__ = "Industrial-strength Natural Language Processing (NLP) in Python"
|
||||
__version__ = "2.1.8"
|
||||
__summary__ = "Industrial-strength Natural Language Processing (NLP) with Python and Cython"
|
||||
__uri__ = "https://spacy.io"
|
||||
__author__ = "Explosion"
|
||||
__author__ = "Explosion AI"
|
||||
__email__ = "contact@explosion.ai"
|
||||
__license__ = "MIT"
|
||||
__release__ = False
|
||||
__release__ = True
|
||||
|
||||
__download_url__ = "https://github.com/explosion/spacy-models/releases/download"
|
||||
__compatibility__ = "https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json"
|
||||
|
|
|
@ -144,12 +144,8 @@ def intify_attrs(stringy_attrs, strings_map=None, _do_deprecated=False):
|
|||
for name, value in stringy_attrs.items():
|
||||
if isinstance(name, int):
|
||||
int_key = name
|
||||
elif name in IDS:
|
||||
int_key = IDS[name]
|
||||
elif name.upper() in IDS:
|
||||
int_key = IDS[name.upper()]
|
||||
else:
|
||||
continue
|
||||
int_key = IDS[name.upper()]
|
||||
if strings_map is not None and isinstance(value, basestring):
|
||||
if hasattr(strings_map, 'add'):
|
||||
value = strings_map.add(value)
|
||||
|
|
|
@ -34,6 +34,12 @@ BLANK_MODEL_THRESHOLD = 2000
|
|||
str,
|
||||
),
|
||||
ignore_warnings=("Ignore warnings, only show stats and errors", "flag", "IW", bool),
|
||||
ignore_validation=(
|
||||
"Don't exit if JSON format validation fails",
|
||||
"flag",
|
||||
"IV",
|
||||
bool,
|
||||
),
|
||||
verbose=("Print additional information and explanations", "flag", "V", bool),
|
||||
no_format=("Don't pretty-print the results", "flag", "NF", bool),
|
||||
)
|
||||
|
@ -44,14 +50,10 @@ def debug_data(
|
|||
base_model=None,
|
||||
pipeline="tagger,parser,ner",
|
||||
ignore_warnings=False,
|
||||
ignore_validation=False,
|
||||
verbose=False,
|
||||
no_format=False,
|
||||
):
|
||||
"""
|
||||
Analyze, debug and validate your training and development data, get useful
|
||||
stats, and find problems like invalid entity annotations, cyclic
|
||||
dependencies, low data labels and more.
|
||||
"""
|
||||
msg = Printer(pretty=not no_format, ignore_warnings=ignore_warnings)
|
||||
|
||||
# Make sure all files and paths exists if they are needed
|
||||
|
@ -70,9 +72,21 @@ def debug_data(
|
|||
|
||||
msg.divider("Data format validation")
|
||||
|
||||
# TODO: Validate data format using the JSON schema
|
||||
# Validate data format using the JSON schema
|
||||
# TODO: update once the new format is ready
|
||||
# TODO: move validation to GoldCorpus in order to be able to load from dir
|
||||
train_data_errors = [] # TODO: validate_json
|
||||
dev_data_errors = [] # TODO: validate_json
|
||||
if not train_data_errors:
|
||||
msg.good("Training data JSON format is valid")
|
||||
if not dev_data_errors:
|
||||
msg.good("Development data JSON format is valid")
|
||||
for error in train_data_errors:
|
||||
msg.fail("Training data: {}".format(error))
|
||||
for error in dev_data_errors:
|
||||
msg.fail("Development data: {}".format(error))
|
||||
if (train_data_errors or dev_data_errors) and not ignore_validation:
|
||||
sys.exit(1)
|
||||
|
||||
# Create the gold corpus to be able to better analyze data
|
||||
loading_train_error_message = ""
|
||||
|
@ -270,7 +284,7 @@ def debug_data(
|
|||
|
||||
if "textcat" in pipeline:
|
||||
msg.divider("Text Classification")
|
||||
labels = [label for label in gold_train_data["cats"]]
|
||||
labels = [label for label in gold_train_data["textcat"]]
|
||||
model_labels = _get_labels_from_model(nlp, "textcat")
|
||||
new_labels = [l for l in labels if l not in model_labels]
|
||||
existing_labels = [l for l in labels if l in model_labels]
|
||||
|
@ -281,45 +295,13 @@ def debug_data(
|
|||
)
|
||||
if new_labels:
|
||||
labels_with_counts = _format_labels(
|
||||
gold_train_data["cats"].most_common(), counts=True
|
||||
gold_train_data["textcat"].most_common(), counts=True
|
||||
)
|
||||
msg.text("New: {}".format(labels_with_counts), show=verbose)
|
||||
if existing_labels:
|
||||
msg.text(
|
||||
"Existing: {}".format(_format_labels(existing_labels)), show=verbose
|
||||
)
|
||||
if set(gold_train_data["cats"]) != set(gold_dev_data["cats"]):
|
||||
msg.fail(
|
||||
"The train and dev labels are not the same. "
|
||||
"Train labels: {}. "
|
||||
"Dev labels: {}.".format(
|
||||
_format_labels(gold_train_data["cats"]),
|
||||
_format_labels(gold_dev_data["cats"]),
|
||||
)
|
||||
)
|
||||
if gold_train_data["n_cats_multilabel"] > 0:
|
||||
msg.info(
|
||||
"The train data contains instances without "
|
||||
"mutually-exclusive classes. Use '--textcat-multilabel' "
|
||||
"when training."
|
||||
)
|
||||
if gold_dev_data["n_cats_multilabel"] == 0:
|
||||
msg.warn(
|
||||
"Potential train/dev mismatch: the train data contains "
|
||||
"instances without mutually-exclusive classes while the "
|
||||
"dev data does not."
|
||||
)
|
||||
else:
|
||||
msg.info(
|
||||
"The train data contains only instances with "
|
||||
"mutually-exclusive classes."
|
||||
)
|
||||
if gold_dev_data["n_cats_multilabel"] > 0:
|
||||
msg.fail(
|
||||
"Train/dev mismatch: the dev data contains instances "
|
||||
"without mutually-exclusive classes while the train data "
|
||||
"contains only instances with mutually-exclusive classes."
|
||||
)
|
||||
|
||||
if "tagger" in pipeline:
|
||||
msg.divider("Part-of-speech Tagging")
|
||||
|
@ -348,7 +330,6 @@ def debug_data(
|
|||
)
|
||||
|
||||
if "parser" in pipeline:
|
||||
has_low_data_warning = False
|
||||
msg.divider("Dependency Parsing")
|
||||
|
||||
# profile sentence length
|
||||
|
@ -537,7 +518,6 @@ def _compile_gold(train_docs, pipeline):
|
|||
"n_sents": 0,
|
||||
"n_nonproj": 0,
|
||||
"n_cycles": 0,
|
||||
"n_cats_multilabel": 0,
|
||||
"texts": set(),
|
||||
}
|
||||
for doc, gold in train_docs:
|
||||
|
@ -560,8 +540,6 @@ def _compile_gold(train_docs, pipeline):
|
|||
data["ner"]["-"] += 1
|
||||
if "textcat" in pipeline:
|
||||
data["cats"].update(gold.cats)
|
||||
if list(gold.cats.values()).count(1.0) != 1:
|
||||
data["n_cats_multilabel"] += 1
|
||||
if "tagger" in pipeline:
|
||||
data["tags"].update([x for x in gold.tags if x is not None])
|
||||
if "parser" in pipeline:
|
||||
|
|
|
@ -28,16 +28,6 @@ def download(model, direct=False, *pip_args):
|
|||
can be shortcut, model name or, if --direct flag is set, full model name
|
||||
with version. For direct downloads, the compatibility check will be skipped.
|
||||
"""
|
||||
if not require_package("spacy") and "--no-deps" not in pip_args:
|
||||
msg.warn(
|
||||
"Skipping model package dependencies and setting `--no-deps`. "
|
||||
"You don't seem to have the spaCy package itself installed "
|
||||
"(maybe because you've built from source?), so installing the "
|
||||
"model dependencies would cause spaCy to be downloaded, which "
|
||||
"probably isn't what you want. If the model package has other "
|
||||
"dependencies, you'll have to install them manually."
|
||||
)
|
||||
pip_args = pip_args + ("--no-deps",)
|
||||
dl_tpl = "{m}-{v}/{m}-{v}.tar.gz#egg={m}=={v}"
|
||||
if direct:
|
||||
components = model.split("-")
|
||||
|
@ -82,15 +72,12 @@ def download(model, direct=False, *pip_args):
|
|||
# is_package check currently fails, because pkg_resources.working_set
|
||||
# is not refreshed automatically (see #3923). We're trying to work
|
||||
# around this here by requiring the package explicitly.
|
||||
require_package(model_name)
|
||||
|
||||
|
||||
def require_package(name):
|
||||
try:
|
||||
pkg_resources.working_set.require(name)
|
||||
return True
|
||||
except: # noqa: E722
|
||||
return False
|
||||
try:
|
||||
pkg_resources.working_set.require(model_name)
|
||||
except: # noqa: E722
|
||||
# Maybe it's possible to remove this – mostly worried about cross-
|
||||
# platform and cross-Python compatibility here
|
||||
pass
|
||||
|
||||
|
||||
def get_json(url, desc):
|
||||
|
@ -130,7 +117,7 @@ def get_version(model, comp):
|
|||
|
||||
def download_model(filename, user_pip_args=None):
|
||||
download_url = about.__download_url__ + "/" + filename
|
||||
pip_args = ["--no-cache-dir"]
|
||||
pip_args = ["--no-cache-dir", "--no-deps"]
|
||||
if user_pip_args:
|
||||
pip_args.extend(user_pip_args)
|
||||
cmd = [sys.executable, "-m", "pip", "install"] + pip_args + [download_url]
|
||||
|
|
|
@ -61,7 +61,6 @@ def evaluate(
|
|||
"NER P": "%.2f" % scorer.ents_p,
|
||||
"NER R": "%.2f" % scorer.ents_r,
|
||||
"NER F": "%.2f" % scorer.ents_f,
|
||||
"Textcat": "%.2f" % scorer.textcat_score,
|
||||
}
|
||||
msg.table(results, title="Results")
|
||||
|
||||
|
|
|
@ -35,13 +35,6 @@ msg = Printer()
|
|||
clusters_loc=("Optional location of brown clusters data", "option", "c", str),
|
||||
vectors_loc=("Optional vectors file in Word2Vec format", "option", "v", str),
|
||||
prune_vectors=("Optional number of vectors to prune to", "option", "V", int),
|
||||
vectors_name=(
|
||||
"Optional name for the word vectors, e.g. en_core_web_lg.vectors",
|
||||
"option",
|
||||
"vn",
|
||||
str,
|
||||
),
|
||||
model_name=("Optional name for the model meta", "option", "mn", str),
|
||||
)
|
||||
def init_model(
|
||||
lang,
|
||||
|
@ -51,8 +44,6 @@ def init_model(
|
|||
jsonl_loc=None,
|
||||
vectors_loc=None,
|
||||
prune_vectors=-1,
|
||||
vectors_name=None,
|
||||
model_name=None,
|
||||
):
|
||||
"""
|
||||
Create a new model from raw data, like word frequencies, Brown clusters
|
||||
|
@ -84,10 +75,10 @@ def init_model(
|
|||
lex_attrs = read_attrs_from_deprecated(freqs_loc, clusters_loc)
|
||||
|
||||
with msg.loading("Creating model..."):
|
||||
nlp = create_model(lang, lex_attrs, name=model_name)
|
||||
nlp = create_model(lang, lex_attrs)
|
||||
msg.good("Successfully created model")
|
||||
if vectors_loc is not None:
|
||||
add_vectors(nlp, vectors_loc, prune_vectors, vectors_name)
|
||||
add_vectors(nlp, vectors_loc, prune_vectors)
|
||||
vec_added = len(nlp.vocab.vectors)
|
||||
lex_added = len(nlp.vocab)
|
||||
msg.good(
|
||||
|
@ -147,7 +138,7 @@ def read_attrs_from_deprecated(freqs_loc, clusters_loc):
|
|||
return lex_attrs
|
||||
|
||||
|
||||
def create_model(lang, lex_attrs, name=None):
|
||||
def create_model(lang, lex_attrs):
|
||||
lang_class = get_lang_class(lang)
|
||||
nlp = lang_class()
|
||||
for lexeme in nlp.vocab:
|
||||
|
@ -166,12 +157,10 @@ def create_model(lang, lex_attrs, name=None):
|
|||
else:
|
||||
oov_prob = DEFAULT_OOV_PROB
|
||||
nlp.vocab.cfg.update({"oov_prob": oov_prob})
|
||||
if name:
|
||||
nlp.meta["name"] = name
|
||||
return nlp
|
||||
|
||||
|
||||
def add_vectors(nlp, vectors_loc, prune_vectors, name=None):
|
||||
def add_vectors(nlp, vectors_loc, prune_vectors):
|
||||
vectors_loc = ensure_path(vectors_loc)
|
||||
if vectors_loc and vectors_loc.parts[-1].endswith(".npz"):
|
||||
nlp.vocab.vectors = Vectors(data=numpy.load(vectors_loc.open("rb")))
|
||||
|
@ -192,10 +181,7 @@ def add_vectors(nlp, vectors_loc, prune_vectors, name=None):
|
|||
lexeme.is_oov = False
|
||||
if vectors_data is not None:
|
||||
nlp.vocab.vectors = Vectors(data=vectors_data, keys=vector_keys)
|
||||
if name is None:
|
||||
nlp.vocab.vectors.name = "%s_model.vectors" % nlp.meta["lang"]
|
||||
else:
|
||||
nlp.vocab.vectors.name = name
|
||||
nlp.vocab.vectors.name = "%s_model.vectors" % nlp.meta["lang"]
|
||||
nlp.meta["vectors"]["name"] = nlp.vocab.vectors.name
|
||||
if prune_vectors >= 1:
|
||||
nlp.vocab.prune_vectors(prune_vectors)
|
||||
|
|
|
@ -21,35 +21,54 @@ from .. import about
|
|||
|
||||
|
||||
@plac.annotations(
|
||||
# fmt: off
|
||||
lang=("Model language", "positional", None, str),
|
||||
output_path=("Output directory to store model in", "positional", None, Path),
|
||||
train_path=("Location of JSON-formatted training data", "positional", None, Path),
|
||||
dev_path=("Location of JSON-formatted development data", "positional", None, Path),
|
||||
raw_text=("Path to jsonl file with unlabelled text documents.", "option", "rt", Path),
|
||||
raw_text=(
|
||||
"Path to jsonl file with unlabelled text documents.",
|
||||
"option",
|
||||
"rt",
|
||||
Path,
|
||||
),
|
||||
base_model=("Name of model to update (optional)", "option", "b", str),
|
||||
pipeline=("Comma-separated names of pipeline components", "option", "p", str),
|
||||
vectors=("Model to load vectors from", "option", "v", str),
|
||||
n_iter=("Number of iterations", "option", "n", int),
|
||||
n_early_stopping=("Maximum number of training epochs without dev accuracy improvement", "option", "ne", int),
|
||||
n_early_stopping=(
|
||||
"Maximum number of training epochs without dev accuracy improvement",
|
||||
"option",
|
||||
"ne",
|
||||
int,
|
||||
),
|
||||
n_examples=("Number of examples", "option", "ns", int),
|
||||
use_gpu=("Use GPU", "option", "g", int),
|
||||
version=("Model version", "option", "V", str),
|
||||
meta_path=("Optional path to meta.json to use as base.", "option", "m", Path),
|
||||
init_tok2vec=("Path to pretrained weights for the token-to-vector parts of the models. See 'spacy pretrain'. Experimental.", "option", "t2v", Path),
|
||||
parser_multitasks=("Side objectives for parser CNN, e.g. 'dep' or 'dep,tag'", "option", "pt", str),
|
||||
entity_multitasks=("Side objectives for NER CNN, e.g. 'dep' or 'dep,tag'", "option", "et", str),
|
||||
init_tok2vec=(
|
||||
"Path to pretrained weights for the token-to-vector parts of the models. See 'spacy pretrain'. Experimental.",
|
||||
"option",
|
||||
"t2v",
|
||||
Path,
|
||||
),
|
||||
parser_multitasks=(
|
||||
"Side objectives for parser CNN, e.g. 'dep' or 'dep,tag'",
|
||||
"option",
|
||||
"pt",
|
||||
str,
|
||||
),
|
||||
entity_multitasks=(
|
||||
"Side objectives for NER CNN, e.g. 'dep' or 'dep,tag'",
|
||||
"option",
|
||||
"et",
|
||||
str,
|
||||
),
|
||||
noise_level=("Amount of corruption for data augmentation", "option", "nl", float),
|
||||
orth_variant_level=("Amount of orthography variation for data augmentation", "option", "ovl", float),
|
||||
eval_beam_widths=("Beam widths to evaluate, e.g. 4,8", "option", "bw", str),
|
||||
gold_preproc=("Use gold preprocessing", "flag", "G", bool),
|
||||
learn_tokens=("Make parser learn gold-standard tokenization", "flag", "T", bool),
|
||||
textcat_multilabel=("Textcat classes aren't mutually exclusive (multilabel)", "flag", "TML", bool),
|
||||
textcat_arch=("Textcat model architecture", "option", "ta", str),
|
||||
textcat_positive_label=("Textcat positive label for binary classes with two labels", "option", "tpl", str),
|
||||
verbose=("Display more information for debug", "flag", "VV", bool),
|
||||
debug=("Run data diagnostics before training", "flag", "D", bool),
|
||||
# fmt: on
|
||||
)
|
||||
def train(
|
||||
lang,
|
||||
|
@ -70,13 +89,9 @@ def train(
|
|||
parser_multitasks="",
|
||||
entity_multitasks="",
|
||||
noise_level=0.0,
|
||||
orth_variant_level=0.0,
|
||||
eval_beam_widths="",
|
||||
gold_preproc=False,
|
||||
learn_tokens=False,
|
||||
textcat_multilabel=False,
|
||||
textcat_arch="bow",
|
||||
textcat_positive_label=None,
|
||||
verbose=False,
|
||||
debug=False,
|
||||
):
|
||||
|
@ -162,37 +177,9 @@ def train(
|
|||
if pipe not in nlp.pipe_names:
|
||||
if pipe == "parser":
|
||||
pipe_cfg = {"learn_tokens": learn_tokens}
|
||||
elif pipe == "textcat":
|
||||
pipe_cfg = {
|
||||
"exclusive_classes": not textcat_multilabel,
|
||||
"architecture": textcat_arch,
|
||||
"positive_label": textcat_positive_label,
|
||||
}
|
||||
else:
|
||||
pipe_cfg = {}
|
||||
nlp.add_pipe(nlp.create_pipe(pipe, config=pipe_cfg))
|
||||
else:
|
||||
if pipe == "textcat":
|
||||
textcat_cfg = nlp.get_pipe("textcat").cfg
|
||||
base_cfg = {
|
||||
"exclusive_classes": textcat_cfg["exclusive_classes"],
|
||||
"architecture": textcat_cfg["architecture"],
|
||||
"positive_label": textcat_cfg["positive_label"],
|
||||
}
|
||||
pipe_cfg = {
|
||||
"exclusive_classes": not textcat_multilabel,
|
||||
"architecture": textcat_arch,
|
||||
"positive_label": textcat_positive_label,
|
||||
}
|
||||
if base_cfg != pipe_cfg:
|
||||
msg.fail(
|
||||
"The base textcat model configuration does"
|
||||
"not match the provided training options. "
|
||||
"Existing cfg: {}, provided cfg: {}".format(
|
||||
base_cfg, pipe_cfg
|
||||
),
|
||||
exits=1,
|
||||
)
|
||||
else:
|
||||
msg.text("Starting with blank model '{}'".format(lang))
|
||||
lang_cls = util.get_lang_class(lang)
|
||||
|
@ -200,12 +187,6 @@ def train(
|
|||
for pipe in pipeline:
|
||||
if pipe == "parser":
|
||||
pipe_cfg = {"learn_tokens": learn_tokens}
|
||||
elif pipe == "textcat":
|
||||
pipe_cfg = {
|
||||
"exclusive_classes": not textcat_multilabel,
|
||||
"architecture": textcat_arch,
|
||||
"positive_label": textcat_positive_label,
|
||||
}
|
||||
else:
|
||||
pipe_cfg = {}
|
||||
nlp.add_pipe(nlp.create_pipe(pipe, config=pipe_cfg))
|
||||
|
@ -246,89 +227,12 @@ def train(
|
|||
components = _load_pretrained_tok2vec(nlp, init_tok2vec)
|
||||
msg.text("Loaded pretrained tok2vec for: {}".format(components))
|
||||
|
||||
# Verify textcat config
|
||||
if "textcat" in pipeline:
|
||||
textcat_labels = nlp.get_pipe("textcat").cfg["labels"]
|
||||
if textcat_positive_label and textcat_positive_label not in textcat_labels:
|
||||
msg.fail(
|
||||
"The textcat_positive_label (tpl) '{}' does not match any "
|
||||
"label in the training data.".format(textcat_positive_label),
|
||||
exits=1,
|
||||
)
|
||||
if textcat_positive_label and len(textcat_labels) != 2:
|
||||
msg.fail(
|
||||
"A textcat_positive_label (tpl) '{}' was provided for training "
|
||||
"data that does not appear to be a binary classification "
|
||||
"problem with two labels.".format(textcat_positive_label),
|
||||
exits=1,
|
||||
)
|
||||
train_docs = corpus.train_docs(
|
||||
nlp, noise_level=noise_level, gold_preproc=gold_preproc, max_length=0
|
||||
)
|
||||
train_labels = set()
|
||||
if textcat_multilabel:
|
||||
multilabel_found = False
|
||||
for text, gold in train_docs:
|
||||
train_labels.update(gold.cats.keys())
|
||||
if list(gold.cats.values()).count(1.0) != 1:
|
||||
multilabel_found = True
|
||||
if not multilabel_found and not base_model:
|
||||
msg.warn(
|
||||
"The textcat training instances look like they have "
|
||||
"mutually-exclusive classes. Remove the flag "
|
||||
"'--textcat-multilabel' to train a classifier with "
|
||||
"mutually-exclusive classes."
|
||||
)
|
||||
if not textcat_multilabel:
|
||||
for text, gold in train_docs:
|
||||
train_labels.update(gold.cats.keys())
|
||||
if list(gold.cats.values()).count(1.0) != 1 and not base_model:
|
||||
msg.warn(
|
||||
"Some textcat training instances do not have exactly "
|
||||
"one positive label. Modifying training options to "
|
||||
"include the flag '--textcat-multilabel' for classes "
|
||||
"that are not mutually exclusive."
|
||||
)
|
||||
nlp.get_pipe("textcat").cfg["exclusive_classes"] = False
|
||||
textcat_multilabel = True
|
||||
break
|
||||
if base_model and set(textcat_labels) != train_labels:
|
||||
msg.fail(
|
||||
"Cannot extend textcat model using data with different "
|
||||
"labels. Base model labels: {}, training data labels: "
|
||||
"{}.".format(textcat_labels, list(train_labels)),
|
||||
exits=1,
|
||||
)
|
||||
if textcat_multilabel:
|
||||
msg.text(
|
||||
"Textcat evaluation score: ROC AUC score macro-averaged across "
|
||||
"the labels '{}'".format(", ".join(textcat_labels))
|
||||
)
|
||||
elif textcat_positive_label and len(textcat_labels) == 2:
|
||||
msg.text(
|
||||
"Textcat evaluation score: F1-score for the "
|
||||
"label '{}'".format(textcat_positive_label)
|
||||
)
|
||||
elif len(textcat_labels) > 1:
|
||||
if len(textcat_labels) == 2:
|
||||
msg.warn(
|
||||
"If the textcat component is a binary classifier with "
|
||||
"exclusive classes, provide '--textcat_positive_label' for "
|
||||
"an evaluation on the positive class."
|
||||
)
|
||||
msg.text(
|
||||
"Textcat evaluation score: F1-score macro-averaged across "
|
||||
"the labels '{}'".format(", ".join(textcat_labels))
|
||||
)
|
||||
else:
|
||||
msg.fail(
|
||||
"Unsupported textcat configuration. Use `spacy debug-data` "
|
||||
"for more information."
|
||||
)
|
||||
|
||||
# fmt: off
|
||||
row_head, output_stats = _configure_training_output(pipeline, use_gpu, has_beam_widths)
|
||||
row_widths = [len(w) for w in row_head]
|
||||
row_head = ["Itn", "Dep Loss", "NER Loss", "UAS", "NER P", "NER R", "NER F", "Tag %", "Token %", "CPU WPS", "GPU WPS"]
|
||||
row_widths = [3, 10, 10, 7, 7, 7, 7, 7, 7, 7, 7]
|
||||
if has_beam_widths:
|
||||
row_head.insert(1, "Beam W.")
|
||||
row_widths.insert(1, 7)
|
||||
row_settings = {"widths": row_widths, "aligns": tuple(["r" for i in row_head]), "spacing": 2}
|
||||
# fmt: on
|
||||
print("")
|
||||
|
@ -339,11 +243,7 @@ def train(
|
|||
best_score = 0.0
|
||||
for i in range(n_iter):
|
||||
train_docs = corpus.train_docs(
|
||||
nlp,
|
||||
noise_level=noise_level,
|
||||
orth_variant_level=orth_variant_level,
|
||||
gold_preproc=gold_preproc,
|
||||
max_length=0,
|
||||
nlp, noise_level=noise_level, gold_preproc=gold_preproc, max_length=0
|
||||
)
|
||||
if raw_text:
|
||||
random.shuffle(raw_text)
|
||||
|
@ -386,7 +286,7 @@ def train(
|
|||
)
|
||||
nwords = sum(len(doc_gold[0]) for doc_gold in dev_docs)
|
||||
start_time = timer()
|
||||
scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
|
||||
scorer = nlp_loaded.evaluate(dev_docs, debug)
|
||||
end_time = timer()
|
||||
if use_gpu < 0:
|
||||
gpu_wps = None
|
||||
|
@ -402,7 +302,7 @@ def train(
|
|||
corpus.dev_docs(nlp_loaded, gold_preproc=gold_preproc)
|
||||
)
|
||||
start_time = timer()
|
||||
scorer = nlp_loaded.evaluate(dev_docs, verbose=verbose)
|
||||
scorer = nlp_loaded.evaluate(dev_docs)
|
||||
end_time = timer()
|
||||
cpu_wps = nwords / (end_time - start_time)
|
||||
acc_loc = output_path / ("model%d" % i) / "accuracy.json"
|
||||
|
@ -436,7 +336,6 @@ def train(
|
|||
}
|
||||
meta.setdefault("name", "model%d" % i)
|
||||
meta.setdefault("version", version)
|
||||
meta["labels"] = nlp.meta["labels"]
|
||||
meta_loc = output_path / ("model%d" % i) / "meta.json"
|
||||
srsly.write_json(meta_loc, meta)
|
||||
util.set_env_log(verbose)
|
||||
|
@ -445,19 +344,10 @@ def train(
|
|||
i,
|
||||
losses,
|
||||
scorer.scores,
|
||||
output_stats,
|
||||
beam_width=beam_width if has_beam_widths else None,
|
||||
cpu_wps=cpu_wps,
|
||||
gpu_wps=gpu_wps,
|
||||
)
|
||||
if i == 0 and "textcat" in pipeline:
|
||||
textcats_per_cat = scorer.scores.get("textcats_per_cat", {})
|
||||
for cat, cat_score in textcats_per_cat.items():
|
||||
if cat_score.get("roc_auc_score", 0) < 0:
|
||||
msg.warn(
|
||||
"Textcat ROC AUC score is undefined due to "
|
||||
"only one value in label '{}'.".format(cat)
|
||||
)
|
||||
msg.row(progress, **row_settings)
|
||||
# Early stopping
|
||||
if n_early_stopping is not None:
|
||||
|
@ -498,8 +388,6 @@ def _score_for_model(meta):
|
|||
mean_acc.append((acc["uas"] + acc["las"]) / 2)
|
||||
if "ner" in pipes:
|
||||
mean_acc.append((acc["ents_p"] + acc["ents_r"] + acc["ents_f"]) / 3)
|
||||
if "textcat" in pipes:
|
||||
mean_acc.append(acc["textcat_score"])
|
||||
return sum(mean_acc) / len(mean_acc)
|
||||
|
||||
|
||||
|
@ -583,55 +471,40 @@ def _get_metrics(component):
|
|||
return ("token_acc",)
|
||||
|
||||
|
||||
def _configure_training_output(pipeline, use_gpu, has_beam_widths):
|
||||
row_head = ["Itn"]
|
||||
output_stats = []
|
||||
for pipe in pipeline:
|
||||
if pipe == "tagger":
|
||||
row_head.extend(["Tag Loss ", " Tag % "])
|
||||
output_stats.extend(["tag_loss", "tags_acc"])
|
||||
elif pipe == "parser":
|
||||
row_head.extend(["Dep Loss ", " UAS ", " LAS "])
|
||||
output_stats.extend(["dep_loss", "uas", "las"])
|
||||
elif pipe == "ner":
|
||||
row_head.extend(["NER Loss ", "NER P ", "NER R ", "NER F "])
|
||||
output_stats.extend(["ner_loss", "ents_p", "ents_r", "ents_f"])
|
||||
elif pipe == "textcat":
|
||||
row_head.extend(["Textcat Loss", "Textcat"])
|
||||
output_stats.extend(["textcat_loss", "textcat_score"])
|
||||
row_head.extend(["Token %", "CPU WPS"])
|
||||
output_stats.extend(["token_acc", "cpu_wps"])
|
||||
|
||||
if use_gpu >= 0:
|
||||
row_head.extend(["GPU WPS"])
|
||||
output_stats.extend(["gpu_wps"])
|
||||
|
||||
if has_beam_widths:
|
||||
row_head.insert(1, "Beam W.")
|
||||
return row_head, output_stats
|
||||
|
||||
|
||||
def _get_progress(
|
||||
itn, losses, dev_scores, output_stats, beam_width=None, cpu_wps=0.0, gpu_wps=0.0
|
||||
):
|
||||
def _get_progress(itn, losses, dev_scores, beam_width=None, cpu_wps=0.0, gpu_wps=0.0):
|
||||
scores = {}
|
||||
for stat in output_stats:
|
||||
scores[stat] = 0.0
|
||||
for col in [
|
||||
"dep_loss",
|
||||
"tag_loss",
|
||||
"uas",
|
||||
"tags_acc",
|
||||
"token_acc",
|
||||
"ents_p",
|
||||
"ents_r",
|
||||
"ents_f",
|
||||
"cpu_wps",
|
||||
"gpu_wps",
|
||||
]:
|
||||
scores[col] = 0.0
|
||||
scores["dep_loss"] = losses.get("parser", 0.0)
|
||||
scores["ner_loss"] = losses.get("ner", 0.0)
|
||||
scores["tag_loss"] = losses.get("tagger", 0.0)
|
||||
scores["textcat_loss"] = losses.get("textcat", 0.0)
|
||||
scores.update(dev_scores)
|
||||
scores["cpu_wps"] = cpu_wps
|
||||
scores["gpu_wps"] = gpu_wps or 0.0
|
||||
scores.update(dev_scores)
|
||||
formatted_scores = []
|
||||
for stat in output_stats:
|
||||
format_spec = "{:.3f}"
|
||||
if stat.endswith("_wps"):
|
||||
format_spec = "{:.0f}"
|
||||
formatted_scores.append(format_spec.format(scores[stat]))
|
||||
result = [itn + 1]
|
||||
result.extend(formatted_scores)
|
||||
result = [
|
||||
itn,
|
||||
"{:.3f}".format(scores["dep_loss"]),
|
||||
"{:.3f}".format(scores["ner_loss"]),
|
||||
"{:.3f}".format(scores["uas"]),
|
||||
"{:.3f}".format(scores["ents_p"]),
|
||||
"{:.3f}".format(scores["ents_r"]),
|
||||
"{:.3f}".format(scores["ents_f"]),
|
||||
"{:.3f}".format(scores["tags_acc"]),
|
||||
"{:.3f}".format(scores["token_acc"]),
|
||||
"{:.0f}".format(scores["cpu_wps"]),
|
||||
"{:.0f}".format(scores["gpu_wps"]),
|
||||
]
|
||||
if beam_width is not None:
|
||||
result.insert(1, beam_width)
|
||||
return result
|
||||
|
|
|
@ -84,10 +84,6 @@ class Warnings(object):
|
|||
W018 = ("Entity '{entity}' already exists in the Knowledge base.")
|
||||
W019 = ("Changing vectors name from {old} to {new}, to avoid clash with "
|
||||
"previously loaded vectors. See Issue #3853.")
|
||||
W020 = ("Unnamed vectors. This won't allow multiple vectors models to be "
|
||||
"loaded. (Shape: {shape})")
|
||||
W021 = ("Unexpected hash collision in PhraseMatcher. Matches may be "
|
||||
"incorrect. Modify PhraseMatcher._terminal_hash to fix.")
|
||||
|
||||
|
||||
@add_codes
|
||||
|
@ -122,7 +118,7 @@ class Errors(object):
|
|||
E011 = ("Unknown operator: '{op}'. Options: {opts}")
|
||||
E012 = ("Cannot add pattern for zero tokens to matcher.\nKey: {key}")
|
||||
E013 = ("Error selecting action in matcher")
|
||||
E014 = ("Unknown tag ID: {tag}")
|
||||
E014 = ("Uknown tag ID: {tag}")
|
||||
E015 = ("Conflicting morphology exception for ({tag}, {orth}). Use "
|
||||
"`force=True` to overwrite.")
|
||||
E016 = ("MultitaskObjective target should be function or one of: dep, "
|
||||
|
@ -461,25 +457,6 @@ class Errors(object):
|
|||
E160 = ("Can't find language data file: {path}")
|
||||
E161 = ("Found an internal inconsistency when predicting entity links. "
|
||||
"This is likely a bug in spaCy, so feel free to open an issue.")
|
||||
E162 = ("Cannot evaluate textcat model on data with different labels.\n"
|
||||
"Labels in model: {model_labels}\nLabels in evaluation "
|
||||
"data: {eval_labels}")
|
||||
E163 = ("cumsum was found to be unstable: its last element does not "
|
||||
"correspond to sum")
|
||||
E164 = ("x is neither increasing nor decreasing: {}.")
|
||||
E165 = ("Only one class present in y_true. ROC AUC score is not defined in "
|
||||
"that case.")
|
||||
E166 = ("Can only merge DocBins with the same pre-defined attributes.\n"
|
||||
"Current DocBin: {current}\nOther DocBin: {other}")
|
||||
E167 = ("Unknown morphological feature: '{feat}' ({feat_id}). This can "
|
||||
"happen if the tagger was trained with a different set of "
|
||||
"morphological features. If you're using a pre-trained model, make "
|
||||
"sure that your models are up to date:\npython -m spacy validate")
|
||||
E168 = ("Unknown field: {field}")
|
||||
E169 = ("Can't find module: {module}")
|
||||
E170 = ("Cannot apply transition {name}: invalid for the current state.")
|
||||
E171 = ("Matcher.add received invalid on_match callback argument: expected "
|
||||
"callable or None, but got: {arg_type}")
|
||||
|
||||
|
||||
@add_codes
|
||||
|
|
|
@ -307,10 +307,4 @@ GLOSSARY = {
|
|||
# https://pdfs.semanticscholar.org/5744/578cc243d92287f47448870bb426c66cc941.pdf
|
||||
"PER": "Named person or family.",
|
||||
"MISC": "Miscellaneous entities, e.g. events, nationalities, products or works of art",
|
||||
# https://github.com/ltgoslo/norne
|
||||
"EVT": "Festivals, cultural events, sports events, weather phenomena, wars, etc.",
|
||||
"PROD": "Product, i.e. artificially produced entities including speeches, radio shows, programming languages, contracts, laws and ideas",
|
||||
"DRV": "Words (and phrases?) that are derived from a name, but not a name in themselves, e.g. 'Oslo-mannen' ('the man from Oslo')",
|
||||
"GPE_LOC": "Geo-political entity, with a locative sense, e.g. 'John lives in Spain'",
|
||||
"GPE_ORG": "Geo-political entity, with an organisation sense, e.g. 'Spain declined to meet with Belgium'",
|
||||
}
|
||||
|
|
|
@ -24,7 +24,6 @@ cdef class GoldParse:
|
|||
cdef public int loss
|
||||
cdef public list words
|
||||
cdef public list tags
|
||||
cdef public list morphology
|
||||
cdef public list heads
|
||||
cdef public list labels
|
||||
cdef public dict orths
|
||||
|
|
spacy/gold.pyx (168 changes)
|
@ -7,7 +7,6 @@ import random
|
|||
import numpy
|
||||
import tempfile
|
||||
import shutil
|
||||
import itertools
|
||||
from pathlib import Path
|
||||
import srsly
|
||||
|
||||
|
@ -57,7 +56,6 @@ def tags_to_entities(tags):
|
|||
def merge_sents(sents):
|
||||
m_deps = [[], [], [], [], [], []]
|
||||
m_brackets = []
|
||||
m_cats = sents.pop()
|
||||
i = 0
|
||||
for (ids, words, tags, heads, labels, ner), brackets in sents:
|
||||
m_deps[0].extend(id_ + i for id_ in ids)
|
||||
|
@ -69,7 +67,6 @@ def merge_sents(sents):
|
|||
m_brackets.extend((b["first"] + i, b["last"] + i, b["label"])
|
||||
for b in brackets)
|
||||
i += len(ids)
|
||||
m_deps.append(m_cats)
|
||||
return [(m_deps, m_brackets)]
|
||||
|
||||
|
||||
|
@ -201,7 +198,6 @@ class GoldCorpus(object):
|
|||
n = 0
|
||||
i = 0
|
||||
for raw_text, paragraph_tuples in self.train_tuples:
|
||||
cats = paragraph_tuples.pop()
|
||||
for sent_tuples, brackets in paragraph_tuples:
|
||||
n += len(sent_tuples[1])
|
||||
if self.limit and i >= self.limit:
|
||||
|
@ -210,14 +206,13 @@ class GoldCorpus(object):
|
|||
return n
|
||||
|
||||
def train_docs(self, nlp, gold_preproc=False, max_length=None,
|
||||
noise_level=0.0, orth_variant_level=0.0):
|
||||
noise_level=0.0):
|
||||
locs = list((self.tmp_dir / 'train').iterdir())
|
||||
random.shuffle(locs)
|
||||
train_tuples = self.read_tuples(locs, limit=self.limit)
|
||||
gold_docs = self.iter_gold_docs(nlp, train_tuples, gold_preproc,
|
||||
max_length=max_length,
|
||||
noise_level=noise_level,
|
||||
orth_variant_level=orth_variant_level,
|
||||
make_projective=True)
|
||||
yield from gold_docs
|
||||
|
||||
|
@ -231,132 +226,43 @@ class GoldCorpus(object):
|
|||
|
||||
@classmethod
|
||||
def iter_gold_docs(cls, nlp, tuples, gold_preproc, max_length=None,
|
||||
noise_level=0.0, orth_variant_level=0.0, make_projective=False):
|
||||
noise_level=0.0, make_projective=False):
|
||||
for raw_text, paragraph_tuples in tuples:
|
||||
if gold_preproc:
|
||||
raw_text = None
|
||||
else:
|
||||
paragraph_tuples = merge_sents(paragraph_tuples)
|
||||
docs, paragraph_tuples = cls._make_docs(nlp, raw_text,
|
||||
paragraph_tuples, gold_preproc, noise_level=noise_level,
|
||||
orth_variant_level=orth_variant_level)
|
||||
docs = cls._make_docs(nlp, raw_text, paragraph_tuples, gold_preproc,
|
||||
noise_level=noise_level)
|
||||
golds = cls._make_golds(docs, paragraph_tuples, make_projective)
|
||||
for doc, gold in zip(docs, golds):
|
||||
if (not max_length) or len(doc) < max_length:
|
||||
yield doc, gold
|
||||
|
||||
@classmethod
|
||||
def _make_docs(cls, nlp, raw_text, paragraph_tuples, gold_preproc, noise_level=0.0, orth_variant_level=0.0):
|
||||
def _make_docs(cls, nlp, raw_text, paragraph_tuples, gold_preproc, noise_level=0.0):
|
||||
if raw_text is not None:
|
||||
raw_text, paragraph_tuples = make_orth_variants(nlp, raw_text, paragraph_tuples, orth_variant_level=orth_variant_level)
|
||||
raw_text = add_noise(raw_text, noise_level)
|
||||
return [nlp.make_doc(raw_text)], paragraph_tuples
|
||||
return [nlp.make_doc(raw_text)]
|
||||
else:
|
||||
docs = []
|
||||
raw_text, paragraph_tuples = make_orth_variants(nlp, None, paragraph_tuples, orth_variant_level=orth_variant_level)
|
||||
return [Doc(nlp.vocab, words=add_noise(sent_tuples[1], noise_level))
|
||||
for (sent_tuples, brackets) in paragraph_tuples], paragraph_tuples
|
||||
|
||||
for (sent_tuples, brackets) in paragraph_tuples]
|
||||
|
||||
@classmethod
|
||||
def _make_golds(cls, docs, paragraph_tuples, make_projective):
|
||||
if len(docs) != len(paragraph_tuples):
|
||||
n_annots = len(paragraph_tuples)
|
||||
raise ValueError(Errors.E070.format(n_docs=len(docs), n_annots=n_annots))
|
||||
return [GoldParse.from_annot_tuples(doc, sent_tuples,
|
||||
if len(docs) == 1:
|
||||
return [GoldParse.from_annot_tuples(docs[0], paragraph_tuples[0][0],
|
||||
make_projective=make_projective)]
|
||||
else:
|
||||
return [GoldParse.from_annot_tuples(doc, sent_tuples,
|
||||
make_projective=make_projective)
|
||||
for doc, (sent_tuples, brackets)
|
||||
in zip(docs, paragraph_tuples)]
|
||||
|
||||
|
||||
def make_orth_variants(nlp, raw, paragraph_tuples, orth_variant_level=0.0):
|
||||
if random.random() >= orth_variant_level:
|
||||
return raw, paragraph_tuples
|
||||
if random.random() >= 0.5:
|
||||
lower = True
|
||||
if raw is not None:
|
||||
raw = raw.lower()
|
||||
ndsv = nlp.Defaults.single_orth_variants
|
||||
ndpv = nlp.Defaults.paired_orth_variants
|
||||
# modify words in paragraph_tuples
|
||||
variant_paragraph_tuples = []
|
||||
for sent_tuples, brackets in paragraph_tuples:
|
||||
ids, words, tags, heads, labels, ner, cats = sent_tuples
|
||||
if lower:
|
||||
words = [w.lower() for w in words]
|
||||
# single variants
|
||||
punct_choices = [random.choice(x["variants"]) for x in ndsv]
|
||||
for word_idx in range(len(words)):
|
||||
for punct_idx in range(len(ndsv)):
|
||||
if tags[word_idx] in ndsv[punct_idx]["tags"] \
|
||||
and words[word_idx] in ndsv[punct_idx]["variants"]:
|
||||
words[word_idx] = punct_choices[punct_idx]
|
||||
# paired variants
|
||||
punct_choices = [random.choice(x["variants"]) for x in ndpv]
|
||||
for word_idx in range(len(words)):
|
||||
for punct_idx in range(len(ndpv)):
|
||||
if tags[word_idx] in ndpv[punct_idx]["tags"] \
|
||||
and words[word_idx] in itertools.chain.from_iterable(ndpv[punct_idx]["variants"]):
|
||||
# backup option: random left vs. right from pair
|
||||
pair_idx = random.choice([0, 1])
|
||||
# best option: rely on paired POS tags like `` / ''
|
||||
if len(ndpv[punct_idx]["tags"]) == 2:
|
||||
pair_idx = ndpv[punct_idx]["tags"].index(tags[word_idx])
|
||||
# next best option: rely on position in variants
|
||||
# (may not be unambiguous, so order of variants matters)
|
||||
else:
|
||||
for pair in ndpv[punct_idx]["variants"]:
|
||||
if words[word_idx] in pair:
|
||||
pair_idx = pair.index(words[word_idx])
|
||||
words[word_idx] = punct_choices[punct_idx][pair_idx]
|
||||
|
||||
variant_paragraph_tuples.append(((ids, words, tags, heads, labels, ner, cats), brackets))
|
||||
# modify raw to match variant_paragraph_tuples
|
||||
if raw is not None:
|
||||
variants = []
|
||||
for single_variants in ndsv:
|
||||
variants.extend(single_variants["variants"])
|
||||
for paired_variants in ndpv:
|
||||
variants.extend(list(itertools.chain.from_iterable(paired_variants["variants"])))
|
||||
# store variants in reverse length order to be able to prioritize
|
||||
# longer matches (e.g., "---" before "--")
|
||||
variants = sorted(variants, key=lambda x: len(x))
|
||||
variants.reverse()
|
||||
variant_raw = ""
|
||||
raw_idx = 0
|
||||
# add initial whitespace
|
||||
while raw_idx < len(raw) and re.match("\s", raw[raw_idx]):
|
||||
variant_raw += raw[raw_idx]
|
||||
raw_idx += 1
|
||||
for sent_tuples, brackets in variant_paragraph_tuples:
|
||||
ids, words, tags, heads, labels, ner, cats = sent_tuples
|
||||
for word in words:
|
||||
match_found = False
|
||||
# add identical word
|
||||
if word not in variants and raw[raw_idx:].startswith(word):
|
||||
variant_raw += word
|
||||
raw_idx += len(word)
|
||||
match_found = True
|
||||
# add variant word
|
||||
else:
|
||||
for variant in variants:
|
||||
if not match_found and \
|
||||
raw[raw_idx:].startswith(variant):
|
||||
raw_idx += len(variant)
|
||||
variant_raw += word
|
||||
match_found = True
|
||||
# something went wrong, abort
|
||||
# (add a warning message?)
|
||||
if not match_found:
|
||||
return raw, paragraph_tuples
|
||||
# add following whitespace
|
||||
while raw_idx < len(raw) and re.match("\s", raw[raw_idx]):
|
||||
variant_raw += raw[raw_idx]
|
||||
raw_idx += 1
|
||||
return variant_raw, variant_paragraph_tuples
|
||||
return raw, variant_paragraph_tuples
|
||||
|
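make_orth_variants() above reads its substitution tables from the language Defaults. The German entries further down in this diff illustrate the expected shape, sketched here: "single" variants are interchangeable tokens, while "paired" variants are opening/closing pairs matched by paired tags or by position. The values are copied from that section; the variable names are illustrative.

single_orth_variants = [
    {"tags": ["$("], "variants": ["…", "..."]},
    {"tags": ["$("], "variants": ["-", "—", "–", "--", "---", "——"]},
]
paired_orth_variants = [
    {"tags": ["$("], "variants": [("``", "''"), ('"', '"'), ("„", "“")]},
]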
||||
|
||||
def add_noise(orig, noise_level):
|
||||
if random.random() >= noise_level:
|
||||
return orig
|
||||
|
@ -371,8 +277,12 @@ def add_noise(orig, noise_level):
|
|||
def _corrupt(c, noise_level):
|
||||
if random.random() >= noise_level:
|
||||
return c
|
||||
elif c in [".", "'", "!", "?", ","]:
|
||||
elif c == " ":
|
||||
return "\n"
|
||||
elif c == "\n":
|
||||
return " "
|
||||
elif c in [".", "'", "!", "?", ","]:
|
||||
return ""
|
||||
else:
|
||||
return c.lower()
|
||||
|
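Taken together with add_noise() above, a simplified, hedged sketch of the character-level noise augmentation: with probability noise_level a character is corrupted by swapping spaces and newlines, dropping common punctuation, or lowercasing. corrupt_char and the add_noise wrapper here are illustrative stand-ins, not the spaCy functions (the real add_noise shown above also returns the input untouched when the initial random draw exceeds noise_level).

import random

def corrupt_char(c, noise_level):
    if random.random() >= noise_level:
        return c
    elif c == " ":
        return "\n"
    elif c == "\n":
        return " "
    elif c in [".", "'", "!", "?", ","]:
        return ""
    else:
        return c.lower()

def add_noise(text, noise_level=0.1):
    return "".join(corrupt_char(c, noise_level) for c in text)

print(add_noise("This is a test sentence, OK?", noise_level=0.25))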
||||
|
@ -420,10 +330,6 @@ def json_to_tuple(doc):
|
|||
sents.append([
|
||||
[ids, words, tags, heads, labels, ner],
|
||||
sent.get("brackets", [])])
|
||||
cats = {}
|
||||
for cat in paragraph.get("cats", {}):
|
||||
cats[cat["label"]] = cat["value"]
|
||||
sents.append(cats)
|
||||
if sents:
|
||||
yield [paragraph.get("raw", None), sents]
|
||||
|
||||
|
@ -537,12 +443,11 @@ cdef class GoldParse:
|
|||
"""
|
||||
@classmethod
|
||||
def from_annot_tuples(cls, doc, annot_tuples, make_projective=False):
|
||||
_, words, tags, heads, deps, entities, cats = annot_tuples
|
||||
_, words, tags, heads, deps, entities = annot_tuples
|
||||
return cls(doc, words=words, tags=tags, heads=heads, deps=deps,
|
||||
entities=entities, cats=cats,
|
||||
make_projective=make_projective)
|
||||
entities=entities, make_projective=make_projective)
|
||||
|
||||
    def __init__(self, doc, annot_tuples=None, words=None, tags=None, morphology=None,
    def __init__(self, doc, annot_tuples=None, words=None, tags=None,
                 heads=None, deps=None, entities=None, make_projective=False,
                 cats=None, links=None, **_):
        """Create a GoldParse.
@@ -577,13 +482,11 @@ cdef class GoldParse:
        if words is None:
            words = [token.text for token in doc]
        if tags is None:
            tags = [None for _ in words]
            tags = [None for _ in doc]
        if heads is None:
            heads = [None for _ in words]
            heads = [None for token in doc]
        if deps is None:
            deps = [None for _ in words]
        if morphology is None:
            morphology = [None for _ in words]
            deps = [None for _ in doc]
        if entities is None:
            entities = ["-" for _ in doc]
        elif len(entities) == 0:
@@ -595,6 +498,7 @@ cdef class GoldParse:
            if not isinstance(entities[0], basestring):
                # Assume we have entities specified by character offset.
                entities = biluo_tags_from_offsets(doc, entities)

        self.mem = Pool()
        self.loss = 0
        self.length = len(doc)
@@ -614,7 +518,6 @@ cdef class GoldParse:
        self.heads = [None] * len(doc)
        self.labels = [None] * len(doc)
        self.ner = [None] * len(doc)
        self.morphology = [None] * len(doc)

        # This needs to be done before we align the words
        if make_projective and heads is not None and deps is not None:
@@ -641,13 +544,11 @@ cdef class GoldParse:
                self.tags[i] = "_SP"
                self.heads[i] = None
                self.labels[i] = None
                self.ner[i] = None
                self.morphology[i] = set()
                self.ner[i] = "O"
            if gold_i is None:
                if i in i2j_multi:
                    self.words[i] = words[i2j_multi[i]]
                    self.tags[i] = tags[i2j_multi[i]]
                    self.morphology[i] = morphology[i2j_multi[i]]
                    is_last = i2j_multi[i] != i2j_multi.get(i+1)
                    is_first = i2j_multi[i] != i2j_multi.get(i-1)
                    # Set next word in multi-token span as head, until last
@@ -684,7 +585,6 @@ cdef class GoldParse:
            else:
                self.words[i] = words[gold_i]
                self.tags[i] = tags[gold_i]
                self.morphology[i] = morphology[gold_i]
                if heads[gold_i] is None:
                    self.heads[i] = None
                else:
@@ -692,20 +592,9 @@ cdef class GoldParse:
                self.labels[i] = deps[gold_i]
                self.ner[i] = entities[gold_i]

        # Prevent whitespace that isn't within entities from being tagged as
        # an entity.
        for i in range(len(self.ner)):
            if self.tags[i] == "_SP":
                prev_ner = self.ner[i-1] if i >= 1 else None
                next_ner = self.ner[i+1] if (i+1) < len(self.ner) else None
                if prev_ner == "O" or next_ner == "O":
                    self.ner[i] = "O"

        cycle = nonproj.contains_cycle(self.heads)
        if cycle is not None:
            raise ValueError(Errors.E069.format(cycle=cycle,
                cycle_tokens=" ".join(["'{}'".format(self.words[tok_id]) for tok_id in cycle]),
                doc_tokens=" ".join(words[:50])))
            raise ValueError(Errors.E069.format(cycle=cycle, cycle_tokens=" ".join(["'{}'".format(self.words[tok_id]) for tok_id in cycle]), doc_tokens=" ".join(words[:50])))

    def __len__(self):
        """Get the number of gold-standard tokens.
@@ -749,10 +638,7 @@ def docs_to_json(docs, id=0):
        docs = [docs]
    json_doc = {"id": id, "paragraphs": []}
    for i, doc in enumerate(docs):
        json_para = {'raw': doc.text, "sentences": [], "cats": []}
        for cat, val in doc.cats.items():
            json_cat = {"label": cat, "value": val}
            json_para["cats"].append(json_cat)
        json_para = {'raw': doc.text, "sentences": []}
        ent_offsets = [(e.start_char, e.end_char, e.label_) for e in doc.ents]
        biluo_tags = biluo_tags_from_offsets(doc, ent_offsets)
        for j, sent in enumerate(doc.sents):

@@ -24,7 +24,7 @@ cdef class Candidate:
    algorithm which will disambiguate the various candidates to the correct one.
    Each candidate (alias, entity) pair is assigned to a certain prior probability.

    DOCS: https://spacy.io/api/kb/#candidate_init
    DOCS: https://spacy.io/api/candidate
    """

    def __init__(self, KnowledgeBase kb, entity_hash, entity_freq, entity_vector, alias_hash, prior_prob):

@@ -201,9 +201,7 @@ _ukrainian = r"а-щюяіїєґА-ЩЮЯІЇЄҐ"
_upper = LATIN_UPPER + _russian_upper + _tatar_upper + _greek_upper + _ukrainian_upper
_lower = LATIN_LOWER + _russian_lower + _tatar_lower + _greek_lower + _ukrainian_lower

_uncased = (
    _bengali + _hebrew + _persian + _sinhala + _hindi + _kannada + _tamil + _telugu
)
_uncased = _bengali + _hebrew + _persian + _sinhala + _hindi + _kannada + _tamil + _telugu

ALPHA = group_chars(LATIN + _russian + _tatar + _greek + _ukrainian + _uncased)
ALPHA_LOWER = group_chars(_lower + _uncased)

@@ -27,20 +27,6 @@ class GermanDefaults(Language.Defaults):
    stop_words = STOP_WORDS
    syntax_iterators = SYNTAX_ITERATORS
    resources = {"lemma_lookup": "lemma_lookup.json"}
    single_orth_variants = [
        {"tags": ["$("], "variants": ["…", "..."]},
        {"tags": ["$("], "variants": ["-", "—", "–", "--", "---", "——"]},
    ]
    paired_orth_variants = [
        {
            "tags": ["$("],
            "variants": [("'", "'"), (",", "'"), ("‚", "‘"), ("›", "‹"), ("‹", "›")],
        },
        {
            "tags": ["$("],
            "variants": [("``", "''"), ('"', '"'), ("„", "“"), ("»", "«"), ("«", "»")],
        },
    ]


class German(Language):

@@ -10,7 +10,7 @@ TAG_MAP = {
    "$,": {POS: PUNCT, "PunctType": "comm"},
    "$.": {POS: PUNCT, "PunctType": "peri"},
    "ADJA": {POS: ADJ},
    "ADJD": {POS: ADJ},
    "ADJD": {POS: ADJ, "Variant": "short"},
    "ADV": {POS: ADV},
    "APPO": {POS: ADP, "AdpType": "post"},
    "APPR": {POS: ADP, "AdpType": "prep"},
@@ -32,7 +32,7 @@ TAG_MAP = {
    "PDAT": {POS: DET, "PronType": "dem"},
    "PDS": {POS: PRON, "PronType": "dem"},
    "PIAT": {POS: DET, "PronType": "ind|neg|tot"},
    "PIDAT": {POS: DET, "PronType": "ind|neg|tot"},
    "PIDAT": {POS: DET, "AdjType": "pdt", "PronType": "ind|neg|tot"},
    "PIS": {POS: PRON, "PronType": "ind|neg|tot"},
    "PPER": {POS: PRON, "PronType": "prs"},
    "PPOSAT": {POS: DET, "Poss": "yes", "PronType": "prs"},
@@ -42,7 +42,7 @@ TAG_MAP = {
    "PRF": {POS: PRON, "PronType": "prs", "Reflex": "yes"},
    "PTKA": {POS: PART},
    "PTKANT": {POS: PART, "PartType": "res"},
    "PTKNEG": {POS: PART, "Polarity": "neg"},
    "PTKNEG": {POS: PART, "Polarity": "Neg"},
    "PTKVZ": {POS: PART, "PartType": "vbp"},
    "PTKZU": {POS: PART, "PartType": "inf"},
    "PWAT": {POS: DET, "PronType": "int"},

@@ -46,10 +46,9 @@ class GreekLemmatizer(object):
        )
        return lemmas

    def lookup(self, string, orth=None):
        key = orth if orth is not None else string
        if key in self.lookup_table:
            return self.lookup_table[key]
    def lookup(self, string):
        if string in self.lookup_table:
            return self.lookup_table[string]
        return string

@@ -38,14 +38,6 @@ class EnglishDefaults(Language.Defaults):
        "lemma_index": "lemmatizer/lemma_index.json",
        "lemma_exc": "lemmatizer/lemma_exc.json",
    }
    single_orth_variants = [
        {"tags": ["NFP"], "variants": ["…", "..."]},
        {"tags": [":"], "variants": ["-", "—", "–", "--", "---", "——"]},
    ]
    paired_orth_variants = [
        {"tags": ["``", "''"], "variants": [("'", "'"), ("‘", "’")]},
        {"tags": ["``", "''"], "variants": [('"', '"'), ("“", "”")]},
    ]


class English(Language):

@@ -20574,7 +20574,7 @@
    "lengthier": "lengthy",
    "lengthiest": "lengthy",
    "lengths": "length",
    "lenses": "lens",
    "lenses": "lense",
    "lent": "lend",
    "lenticels": "lenticel",
    "lentils": "lentil",

@ -3,59 +3,55 @@ from __future__ import unicode_literals
|
|||
|
||||
from ...symbols import LEMMA, PRON_LEMMA
|
||||
|
||||
# Several entries here look pretty suspicious. These will get the POS SCONJ
|
||||
# given the tag IN, when an adpositional reading seems much more likely for
|
||||
# a lot of these prepositions. I'm not sure what I was running in 04395ffa4
|
||||
# when I did this? It doesn't seem right.
|
||||
_subordinating_conjunctions = [
|
||||
"that",
|
||||
"if",
|
||||
"as",
|
||||
"because",
|
||||
# "of",
|
||||
# "for",
|
||||
# "before",
|
||||
# "in",
|
||||
"of",
|
||||
"for",
|
||||
"before",
|
||||
"in",
|
||||
"while",
|
||||
# "after",
|
||||
"after",
|
||||
"since",
|
||||
"like",
|
||||
# "with",
|
||||
"with",
|
||||
"so",
|
||||
# "to",
|
||||
# "by",
|
||||
# "on",
|
||||
# "about",
|
||||
"to",
|
||||
"by",
|
||||
"on",
|
||||
"about",
|
||||
"than",
|
||||
"whether",
|
||||
"although",
|
||||
# "from",
|
||||
"from",
|
||||
"though",
|
||||
# "until",
|
||||
"until",
|
||||
"unless",
|
||||
"once",
|
||||
# "without",
|
||||
# "at",
|
||||
# "into",
|
||||
"without",
|
||||
"at",
|
||||
"into",
|
||||
"cause",
|
||||
# "over",
|
||||
"over",
|
||||
"upon",
|
||||
"till",
|
||||
"whereas",
|
||||
# "beyond",
|
||||
"beyond",
|
||||
"whilst",
|
||||
"except",
|
||||
"despite",
|
||||
"wether",
|
||||
# "then",
|
||||
"then",
|
||||
"but",
|
||||
"becuse",
|
||||
"whie",
|
||||
# "below",
|
||||
# "against",
|
||||
"below",
|
||||
"against",
|
||||
"it",
|
||||
"w/out",
|
||||
# "toward",
|
||||
"toward",
|
||||
"albeit",
|
||||
"save",
|
||||
"besides",
|
||||
|
@ -67,17 +63,16 @@ _subordinating_conjunctions = [
|
|||
"out",
|
||||
"near",
|
||||
"seince",
|
||||
# "towards",
|
||||
"towards",
|
||||
"tho",
|
||||
"sice",
|
||||
"will",
|
||||
]
|
||||
|
||||
# This seems kind of wrong too?
|
||||
# _relative_pronouns = ["this", "that", "those", "these"]
|
||||
_relative_pronouns = ["this", "that", "those", "these"]
|
||||
|
||||
MORPH_RULES = {
|
||||
# "DT": {word: {"POS": "PRON"} for word in _relative_pronouns},
|
||||
"DT": {word: {"POS": "PRON"} for word in _relative_pronouns},
|
||||
"IN": {word: {"POS": "SCONJ"} for word in _subordinating_conjunctions},
|
||||
"NN": {
|
||||
"something": {"POS": "PRON"},
|
||||
|
|
|
@@ -14,10 +14,10 @@ TAG_MAP = {
    '""': {POS: PUNCT, "PunctType": "quot", "PunctSide": "fin"},
    "''": {POS: PUNCT, "PunctType": "quot", "PunctSide": "fin"},
    ":": {POS: PUNCT},
    "$": {POS: SYM},
    "#": {POS: SYM},
    "AFX": {POS: ADJ, "Hyph": "yes"},
    "CC": {POS: CCONJ, "ConjType": "comp"},
    "$": {POS: SYM, "Other": {"SymType": "currency"}},
    "#": {POS: SYM, "Other": {"SymType": "numbersign"}},
    "AFX": {POS: X, "Hyph": "yes"},
    "CC": {POS: CCONJ, "ConjType": "coor"},
    "CD": {POS: NUM, "NumType": "card"},
    "DT": {POS: DET},
    "EX": {POS: PRON, "AdvType": "ex"},
@@ -34,7 +34,7 @@ TAG_MAP = {
    "NNP": {POS: PROPN, "NounType": "prop", "Number": "sing"},
    "NNPS": {POS: PROPN, "NounType": "prop", "Number": "plur"},
    "NNS": {POS: NOUN, "Number": "plur"},
    "PDT": {POS: DET},
    "PDT": {POS: DET, "AdjType": "pdt", "PronType": "prn"},
    "POS": {POS: PART, "Poss": "yes"},
    "PRP": {POS: PRON, "PronType": "prs"},
    "PRP$": {POS: PRON, "PronType": "prs", "Poss": "yes"},
@@ -56,12 +56,12 @@ TAG_MAP = {
        "VerbForm": "fin",
        "Tense": "pres",
        "Number": "sing",
        "Person": "three",
        "Person": 3,
    },
    "WDT": {POS: PRON},
    "WP": {POS: PRON},
    "WP$": {POS: PRON, "Poss": "yes"},
    "WRB": {POS: ADV},
    "WDT": {POS: PRON, "PronType": "int|rel"},
    "WP": {POS: PRON, "PronType": "int|rel"},
    "WP$": {POS: PRON, "Poss": "yes", "PronType": "int|rel"},
    "WRB": {POS: ADV, "PronType": "int|rel"},
    "ADD": {POS: X},
    "NFP": {POS: PUNCT},
    "GW": {POS: X},

@@ -30,7 +30,14 @@ for pron in ["i"]:
    for orth in [pron, pron.title()]:
        _exc[orth + "'m"] = [
            {ORTH: orth, LEMMA: PRON_LEMMA, NORM: pron, TAG: "PRP"},
            {ORTH: "'m", LEMMA: "be", NORM: "am", TAG: "VBP"},
            {
                ORTH: "'m",
                LEMMA: "be",
                NORM: "am",
                TAG: "VBP",
                "tenspect": 1,
                "number": 1,
            },
        ]

        _exc[orth + "m"] = [

@@ -114,9 +114,9 @@ class FrenchLemmatizer(object):
    def punct(self, string, morphology=None):
        return self(string, "punct", morphology)

    def lookup(self, string, orth=None):
        if orth is not None and orth in self.lookup_table:
            return self.lookup_table[orth][0]
    def lookup(self, string):
        if string in self.lookup_table:
            return self.lookup_table[string][0]
        return string

@ -2,8 +2,7 @@
|
|||
from __future__ import unicode_literals
|
||||
|
||||
|
||||
# Source: https://github.com/taranjeet/hindi-tokenizer/blob/master/stopwords.txt, https://data.mendeley.com/datasets/bsr3frvvjc/1#file-a21d5092-99d7-45d8-b044-3ae9edd391c6
|
||||
|
||||
# Source: https://github.com/taranjeet/hindi-tokenizer/blob/master/stopwords.txt
|
||||
STOP_WORDS = set(
|
||||
"""
|
||||
अंदर
|
||||
|
@ -19,7 +18,6 @@ STOP_WORDS = set(
|
|||
अंदर
|
||||
आदि
|
||||
आप
|
||||
अगर
|
||||
इंहिं
|
||||
इंहें
|
||||
इंहों
|
||||
|
@ -173,9 +171,6 @@ STOP_WORDS = set(
|
|||
मानो
|
||||
मे
|
||||
में
|
||||
मैं
|
||||
मुझको
|
||||
मेरा
|
||||
यदि
|
||||
यह
|
||||
यहाँ
|
||||
|
@ -232,7 +227,6 @@ STOP_WORDS = set(
|
|||
है
|
||||
हैं
|
||||
हो
|
||||
हूँ
|
||||
होता
|
||||
होति
|
||||
होती
|
||||
|
|
|
@@ -37,11 +37,6 @@ def resolve_pos(token):
    in the sentence. This function adds information to the POS tag to
    resolve ambiguous mappings.
    """

    # this is only used for consecutive ascii spaces
    if token.pos == "空白":
        return "空白"

    # TODO: This is a first take. The rules here are crude approximations.
    # For many of these, full dependencies are needed to properly resolve
    # PoS mappings.
@@ -59,7 +54,6 @@ def detailed_tokens(tokenizer, text):
    node = tokenizer.parseToNode(text)
    node = node.next  # first node is beginning of sentence and empty, skip it
    words = []
    spaces = []
    while node.posid != 0:
        surface = node.surface
        base = surface  # a default value. Updated if available later.
@@ -70,20 +64,8 @@ def detailed_tokens(tokenizer, text):
            # dictionary
            base = parts[7]
        words.append(ShortUnitWord(surface, base, pos))

        # The way MeCab stores spaces is that the rlength of the next token is
        # the length of that token plus any preceding whitespace, **in bytes**.
        # also note that this is only for half-width / ascii spaces. Full width
        # spaces just become tokens.
        scount = node.next.rlength - node.next.length
        spaces.append(bool(scount))
        while scount > 1:
            words.append(ShortUnitWord(" ", " ", "空白"))
            spaces.append(False)
            scount -= 1

        node = node.next
    return words, spaces
    return words


class JapaneseTokenizer(DummyTokenizer):
@@ -93,8 +75,9 @@ class JapaneseTokenizer(DummyTokenizer):
        self.tokenizer.parseToNode("")  # see #2901

    def __call__(self, text):
        dtokens, spaces = detailed_tokens(self.tokenizer, text)
        dtokens = detailed_tokens(self.tokenizer, text)
        words = [x.surface for x in dtokens]
        spaces = [False] * len(words)
        doc = Doc(self.vocab, words=words, spaces=spaces)
        mecab_tags = []
        for token, dtoken in zip(doc, dtokens):

@@ -2,7 +2,7 @@
from __future__ import unicode_literals

from ...symbols import POS, PUNCT, INTJ, X, ADJ, AUX, ADP, PART, SCONJ, NOUN
from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET, SPACE
from ...symbols import SYM, PRON, VERB, ADV, PROPN, NUM, DET


TAG_MAP = {
@@ -21,8 +21,6 @@ TAG_MAP = {
    "感動詞,一般,*,*": {POS: INTJ},
    # this is specifically for unicode full-width space
    "空白,*,*,*": {POS: X},
    # This is used when sequential half-width spaces are present
    "空白": {POS: SPACE},
    "形状詞,一般,*,*": {POS: ADJ},
    "形状詞,タリ,*,*": {POS: ADJ},
    "形状詞,助動詞語幹,*,*": {POS: ADJ},

@ -1,6 +1,8 @@
|
|||
# encoding: utf8
|
||||
from __future__ import unicode_literals, print_function
|
||||
|
||||
import sys
|
||||
|
||||
from .stop_words import STOP_WORDS
|
||||
from .tag_map import TAG_MAP
|
||||
from ...attrs import LANG
|
||||
|
@ -8,12 +10,35 @@ from ...language import Language
|
|||
from ...tokens import Doc
|
||||
from ...compat import copy_reg
|
||||
from ...util import DummyTokenizer
|
||||
from ...compat import is_python3, is_python_pre_3_5
|
||||
|
||||
is_python_post_3_7 = is_python3 and sys.version_info[1] >= 7
|
||||
|
||||
# fmt: off
|
||||
if is_python_pre_3_5:
|
||||
from collections import namedtuple
|
||||
Morpheme = namedtuple("Morpheme", "surface lemma tag")
|
||||
elif is_python_post_3_7:
|
||||
from dataclasses import dataclass
|
||||
|
||||
@dataclass(frozen=True)
|
||||
class Morpheme:
|
||||
surface: str
|
||||
lemma: str
|
||||
tag: str
|
||||
else:
|
||||
from typing import NamedTuple
|
||||
|
||||
class Morpheme(NamedTuple):
|
||||
|
||||
surface = str("")
|
||||
lemma = str("")
|
||||
tag = str("")
|
||||
|
||||
|
||||
def try_mecab_import():
|
||||
try:
|
||||
from natto import MeCab
|
||||
|
||||
return MeCab
|
||||
except ImportError:
|
||||
raise ImportError(
|
||||
|
@ -21,8 +46,6 @@ def try_mecab_import():
|
|||
"[mecab-ko-dic](https://bitbucket.org/eunjeon/mecab-ko-dic), "
|
||||
"and [natto-py](https://github.com/buruzaemon/natto-py)"
|
||||
)
|
||||
|
||||
|
||||
# fmt: on
|
||||
|
||||
|
||||
|
@ -46,13 +69,13 @@ class KoreanTokenizer(DummyTokenizer):
|
|||
|
||||
def __call__(self, text):
|
||||
dtokens = list(self.detailed_tokens(text))
|
||||
surfaces = [dt["surface"] for dt in dtokens]
|
||||
surfaces = [dt.surface for dt in dtokens]
|
||||
doc = Doc(self.vocab, words=surfaces, spaces=list(check_spaces(text, surfaces)))
|
||||
for token, dtoken in zip(doc, dtokens):
|
||||
first_tag, sep, eomi_tags = dtoken["tag"].partition("+")
|
||||
first_tag, sep, eomi_tags = dtoken.tag.partition("+")
|
||||
token.tag_ = first_tag # stem(어간) or pre-final(선어말 어미)
|
||||
token.lemma_ = dtoken["lemma"]
|
||||
doc.user_data["full_tags"] = [dt["tag"] for dt in dtokens]
|
||||
token.lemma_ = dtoken.lemma
|
||||
doc.user_data["full_tags"] = [dt.tag for dt in dtokens]
|
||||
return doc
|
||||
|
||||
def detailed_tokens(self, text):
|
||||
|
@ -68,7 +91,7 @@ class KoreanTokenizer(DummyTokenizer):
|
|||
lemma, _, remainder = expr.partition("/")
|
||||
if lemma == "*":
|
||||
lemma = surface
|
||||
yield {"surface": surface, "lemma": lemma, "tag": tag}
|
||||
yield Morpheme(surface, lemma, tag)
|
||||
|
||||
|
||||
class KoreanDefaults(Language.Defaults):
|
||||
|
|
|
@ -1605,7 +1605,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1613,7 +1613,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1621,7 +1621,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1630,7 +1630,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1638,7 +1638,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1647,7 +1647,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1655,7 +1655,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1664,7 +1664,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1672,7 +1672,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1681,7 +1681,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1689,7 +1689,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1697,7 +1697,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1706,7 +1706,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1714,7 +1714,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1723,7 +1723,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1731,7 +1731,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1739,7 +1739,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1748,7 +1748,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Imp",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1756,21 +1756,21 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
"Vgm-3---n--ns-": {
|
||||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
"Vgm-3---n--ys-": {
|
||||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1778,14 +1778,14 @@ TAG_MAP = {
|
|||
"Vgm-3---y--ns-": {
|
||||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
"Vgm-3---y--ys-": {
|
||||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1794,7 +1794,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1802,7 +1802,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1811,7 +1811,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1819,7 +1819,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1827,7 +1827,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1836,7 +1836,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -1844,7 +1844,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Cnd",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1853,7 +1853,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1862,7 +1862,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1872,7 +1872,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1881,7 +1881,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1891,7 +1891,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1900,7 +1900,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1910,7 +1910,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1919,7 +1919,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1929,7 +1929,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1938,7 +1938,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1948,7 +1948,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1957,7 +1957,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1966,7 +1966,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1974,7 +1974,7 @@ TAG_MAP = {
|
|||
"Vgma3---n--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1982,7 +1982,7 @@ TAG_MAP = {
|
|||
"Vgma3---n--yi-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -1991,7 +1991,7 @@ TAG_MAP = {
|
|||
"Vgma3---y--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -1999,7 +1999,7 @@ TAG_MAP = {
|
|||
"Vgma3--y--ni-": {
|
||||
POS: VERB,
|
||||
"Case": "Nom",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
},
|
||||
|
@ -2007,7 +2007,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2016,7 +2016,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2026,7 +2026,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2035,7 +2035,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2045,7 +2045,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2054,7 +2054,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2064,7 +2064,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2074,7 +2074,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2083,7 +2083,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2093,7 +2093,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2102,7 +2102,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2112,7 +2112,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2121,7 +2121,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2130,7 +2130,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2140,7 +2140,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2149,7 +2149,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2158,7 +2158,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2168,7 +2168,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2177,7 +2177,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2187,7 +2187,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2196,7 +2196,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2205,7 +2205,7 @@ TAG_MAP = {
|
|||
"Vgmf3---n--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2213,7 +2213,7 @@ TAG_MAP = {
|
|||
"Vgmf3---y--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2222,7 +2222,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2231,7 +2231,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2241,7 +2241,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2250,7 +2250,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2259,7 +2259,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2269,7 +2269,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Fut",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2278,7 +2278,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Fut",
|
||||
|
@ -2288,7 +2288,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2297,7 +2297,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2307,7 +2307,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2316,7 +2316,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2326,7 +2326,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2335,7 +2335,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2344,7 +2344,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2354,7 +2354,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2363,7 +2363,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2373,7 +2373,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2382,7 +2382,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2392,7 +2392,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2401,7 +2401,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2411,7 +2411,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2420,7 +2420,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2430,7 +2430,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2438,7 +2438,7 @@ TAG_MAP = {
|
|||
"Vgmp3---n--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2446,7 +2446,7 @@ TAG_MAP = {
|
|||
"Vgmp3---n--yi-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2455,7 +2455,7 @@ TAG_MAP = {
|
|||
"Vgmp3---y--ni-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2463,7 +2463,7 @@ TAG_MAP = {
|
|||
"Vgmp3---y--yi-": {
|
||||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2473,7 +2473,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2482,7 +2482,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2492,7 +2492,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2501,7 +2501,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2511,7 +2511,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2520,7 +2520,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2529,7 +2529,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2538,7 +2538,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2548,7 +2548,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Pres",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2557,7 +2557,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Pres",
|
||||
|
@ -2568,7 +2568,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2578,7 +2578,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2589,7 +2589,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "one",
|
||||
"Person": "1",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2599,7 +2599,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "two",
|
||||
"Person": "2",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2608,7 +2608,7 @@ TAG_MAP = {
|
|||
POS: VERB,
|
||||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2618,7 +2618,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2628,7 +2628,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Plur",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2639,7 +2639,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2649,7 +2649,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Reflex": "Yes",
|
||||
"Tense": "Past",
|
||||
|
@ -2660,7 +2660,7 @@ TAG_MAP = {
|
|||
"Aspect": "Hab",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Neg",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
@ -2670,7 +2670,7 @@ TAG_MAP = {
|
|||
"Aspect": "Perf",
|
||||
"Mood": "Ind",
|
||||
"Number": "Sing",
|
||||
"Person": "three",
|
||||
"Person": "3",
|
||||
"Polarity": "Pos",
|
||||
"Tense": "Past",
|
||||
"VerbForm": "Fin",
|
||||
|
|
|
@@ -73,7 +73,7 @@ class DutchLemmatizer(object):
                return [lemma[0]]
            except KeyError:
                pass
        # string corresponds to key in lookup table
        # string corresponds to key in lookup table
        lookup_table = self.lookup_table
        looked_up_lemma = lookup_table.get(string)
        if looked_up_lemma and looked_up_lemma in lemma_index:
@@ -103,12 +103,9 @@ class DutchLemmatizer(object):
    # Overrides parent method so that a lowercased version of the string is
    # used to search the lookup table. This is necessary because our lookup
    # table consists entirely of lowercase keys.
    def lookup(self, string, orth=None):
    def lookup(self, string):
        string = string.lower()
        if orth is not None:
            return self.lookup_table.get(orth, string)
        else:
            return self.lookup_table.get(string, string)
        return self.lookup_table.get(string, string)

    def noun(self, string, morphology=None):
        return self(string, "noun", morphology)

@@ -73,7 +73,7 @@ class RussianLemmatizer(Lemmatizer):
                if (
                    feature in morphology
                    and feature in analysis_morph
                    and morphology[feature].lower() != analysis_morph[feature].lower()
                    and morphology[feature] != analysis_morph[feature]
                ):
                    break
            else:
@@ -115,7 +115,7 @@ class RussianLemmatizer(Lemmatizer):
    def pron(self, string, morphology=None):
        return self(string, "pron", morphology)

    def lookup(self, string, orth=None):
    def lookup(self, string):
        analyses = self._morph.parse(string)
        if len(analyses) == 1:
            return analyses[0].normal_form

@@ -70,7 +70,7 @@ class UkrainianLemmatizer(Lemmatizer):
                if (
                    feature in morphology
                    and feature in analysis_morph
                    and morphology[feature].lower() != analysis_morph[feature].lower()
                    and morphology[feature] != analysis_morph[feature]
                ):
                    break
            else:
@@ -112,7 +112,7 @@ class UkrainianLemmatizer(Lemmatizer):
    def pron(self, string, morphology=None):
        return self(string, "pron", morphology)

    def lookup(self, string, orth=None):
    def lookup(self, string):
        analyses = self._morph.parse(string)
        if len(analyses) == 1:
            return analyses[0].normal_form

@ -20,7 +20,6 @@ from .pipeline import Tensorizer, EntityRecognizer, EntityLinker
|
|||
from .pipeline import SimilarityHook, TextCategorizer, Sentencizer
|
||||
from .pipeline import merge_noun_chunks, merge_entities, merge_subtokens
|
||||
from .pipeline import EntityRuler
|
||||
from .pipeline import Morphologizer
|
||||
from .compat import izip, basestring_
|
||||
from .gold import GoldParse
|
||||
from .scorer import Scorer
|
||||
|
@ -39,8 +38,6 @@ from . import about
|
|||
class BaseDefaults(object):
|
||||
@classmethod
|
||||
def create_lemmatizer(cls, nlp=None, lookups=None):
|
||||
if lookups is None:
|
||||
lookups = cls.create_lookups(nlp=nlp)
|
||||
rules, index, exc, lookup = util.get_lemma_tables(lookups)
|
||||
return Lemmatizer(index, exc, rules, lookup)
|
||||
|
||||
|
@ -111,8 +108,6 @@ class BaseDefaults(object):
|
|||
syntax_iterators = {}
|
||||
resources = {}
|
||||
writing_system = {"direction": "ltr", "has_case": True, "has_letters": True}
|
||||
single_orth_variants = []
|
||||
paired_orth_variants = []
|
||||
|
||||
|
||||
class Language(object):
|
||||
|
@ -133,7 +128,6 @@ class Language(object):
|
|||
"tokenizer": lambda nlp: nlp.Defaults.create_tokenizer(nlp),
|
||||
"tensorizer": lambda nlp, **cfg: Tensorizer(nlp.vocab, **cfg),
|
||||
"tagger": lambda nlp, **cfg: Tagger(nlp.vocab, **cfg),
|
||||
"morphologizer": lambda nlp, **cfg: Morphologizer(nlp.vocab, **cfg),
|
||||
"parser": lambda nlp, **cfg: DependencyParser(nlp.vocab, **cfg),
|
||||
"ner": lambda nlp, **cfg: EntityRecognizer(nlp.vocab, **cfg),
|
||||
"entity_linker": lambda nlp, **cfg: EntityLinker(nlp.vocab, **cfg),
|
||||
|
@ -257,8 +251,7 @@ class Language(object):
|
|||
|
||||
@property
|
||||
def pipe_labels(self):
|
||||
"""Get the labels set by the pipeline components, if available (if
|
||||
the component exposes a labels property).
|
||||
"""Get the labels set by the pipeline components, if available.
|
||||
|
||||
RETURNS (dict): Labels keyed by component name.
|
||||
"""
|
||||
|
@ -449,25 +442,6 @@ class Language(object):
|
|||
def make_doc(self, text):
|
||||
return self.tokenizer(text)
|
||||
|
||||
def _format_docs_and_golds(self, docs, golds):
|
||||
"""Format golds and docs before update models."""
|
||||
expected_keys = ("words", "tags", "heads", "deps", "entities", "cats", "links")
|
||||
gold_objs = []
|
||||
doc_objs = []
|
||||
for doc, gold in zip(docs, golds):
|
||||
if isinstance(doc, basestring_):
|
||||
doc = self.make_doc(doc)
|
||||
if not isinstance(gold, GoldParse):
|
||||
unexpected = [k for k in gold if k not in expected_keys]
|
||||
if unexpected:
|
||||
err = Errors.E151.format(unexp=unexpected, exp=expected_keys)
|
||||
raise ValueError(err)
|
||||
gold = GoldParse(doc, **gold)
|
||||
doc_objs.append(doc)
|
||||
gold_objs.append(gold)
|
||||
|
||||
return doc_objs, gold_objs
|
||||
|
||||
def update(self, docs, golds, drop=0.0, sgd=None, losses=None, component_cfg=None):
|
||||
"""Update the models in the pipeline.
|
||||
|
||||
|
@ -481,6 +455,7 @@ class Language(object):
|
|||
|
||||
DOCS: https://spacy.io/api/language#update
|
||||
"""
|
||||
expected_keys = ("words", "tags", "heads", "deps", "entities", "cats", "links")
|
||||
if len(docs) != len(golds):
|
||||
raise IndexError(Errors.E009.format(n_docs=len(docs), n_golds=len(golds)))
|
||||
if len(docs) == 0:
|
||||
|
@ -490,7 +465,21 @@ class Language(object):
|
|||
self._optimizer = create_default_optimizer(Model.ops)
|
||||
sgd = self._optimizer
|
||||
# Allow dict of args to GoldParse, instead of GoldParse objects.
|
||||
docs, golds = self._format_docs_and_golds(docs, golds)
|
||||
gold_objs = []
|
||||
doc_objs = []
|
||||
for doc, gold in zip(docs, golds):
|
||||
if isinstance(doc, basestring_):
|
||||
doc = self.make_doc(doc)
|
||||
if not isinstance(gold, GoldParse):
|
||||
unexpected = [k for k in gold if k not in expected_keys]
|
||||
if unexpected:
|
||||
err = Errors.E151.format(unexp=unexpected, exp=expected_keys)
|
||||
raise ValueError(err)
|
||||
gold = GoldParse(doc, **gold)
|
||||
doc_objs.append(doc)
|
||||
gold_objs.append(gold)
|
||||
golds = gold_objs
|
||||
docs = doc_objs
|
||||
grads = {}
|
||||
|
||||
def get_grads(W, dW, key=None):
|
||||
|
@ -594,7 +583,6 @@ class Language(object):
|
|||
# Populate vocab
|
||||
else:
|
||||
for _, annots_brackets in get_gold_tuples():
|
||||
_ = annots_brackets.pop()
|
||||
for annots, _ in annots_brackets:
|
||||
for word in annots[1]:
|
||||
_ = self.vocab[word] # noqa: F841
|
||||
|
@ -663,7 +651,7 @@ class Language(object):
|
|||
DOCS: https://spacy.io/api/language#evaluate
|
||||
"""
|
||||
if scorer is None:
|
||||
scorer = Scorer(pipeline=self.pipeline)
|
||||
scorer = Scorer()
|
||||
if component_cfg is None:
|
||||
component_cfg = {}
|
||||
docs, golds = zip(*docs_golds)
|
||||
|
|
|
@ -2,7 +2,8 @@
|
|||
from __future__ import unicode_literals
|
||||
from collections import OrderedDict
|
||||
|
||||
from .symbols import NOUN, VERB, ADJ, PUNCT, PROPN
|
||||
from .symbols import POS, NOUN, VERB, ADJ, PUNCT, PROPN
|
||||
from .symbols import VerbForm_inf, VerbForm_none, Number_sing, Degree_pos
|
||||
|
||||
|
||||
class Lemmatizer(object):
|
||||
|
@ -54,8 +55,12 @@ class Lemmatizer(object):
|
|||
Check whether we're dealing with an uninflected paradigm, so we can
|
||||
avoid lemmatization entirely.
|
||||
"""
|
||||
if morphology is None:
|
||||
morphology = {}
|
||||
morphology = {} if morphology is None else morphology
|
||||
others = [
|
||||
key
|
||||
for key in morphology
|
||||
if key not in (POS, "Number", "POS", "VerbForm", "Tense")
|
||||
]
|
||||
if univ_pos == "noun" and morphology.get("Number") == "sing":
|
||||
return True
|
||||
elif univ_pos == "verb" and morphology.get("VerbForm") == "inf":
|
||||
|
@ -66,17 +71,18 @@ class Lemmatizer(object):
|
|||
morphology.get("VerbForm") == "fin"
|
||||
and morphology.get("Tense") == "pres"
|
||||
and morphology.get("Number") is None
|
||||
and not others
|
||||
):
|
||||
return True
|
||||
elif univ_pos == "adj" and morphology.get("Degree") == "pos":
|
||||
return True
|
||||
elif morphology.get("VerbForm") == "inf":
|
||||
elif VerbForm_inf in morphology:
|
||||
return True
|
||||
elif morphology.get("VerbForm") == "none":
|
||||
elif VerbForm_none in morphology:
|
||||
return True
|
||||
elif morphology.get("VerbForm") == "inf":
|
||||
elif Number_sing in morphology:
|
||||
return True
|
||||
elif morphology.get("Degree") == "pos":
|
||||
elif Degree_pos in morphology:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
|
@ -93,19 +99,9 @@ class Lemmatizer(object):
|
|||
def punct(self, string, morphology=None):
|
||||
return self(string, "punct", morphology)
|
||||
|
||||
def lookup(self, string, orth=None):
|
||||
"""Look up a lemma in the table, if available. If no lemma is found,
|
||||
the original string is returned.
|
||||
|
||||
string (unicode): The original string.
|
||||
orth (int): Optional hash of the string to look up. If not set, the
|
||||
string will be used and hashed.
|
||||
RETURNS (unicode): The lemma if the string was found, otherwise the
|
||||
original string.
|
||||
"""
|
||||
key = orth if orth is not None else string
|
||||
if key in self.lookup_table:
|
||||
return self.lookup_table[key]
|
||||
def lookup(self, string):
|
||||
if string in self.lookup_table:
|
||||
return self.lookup_table[string]
|
||||
return string
|
||||
|
||||
|
||||
|
|
159 spacy/lookups.py
|
@ -1,13 +1,11 @@
|
|||
# coding: utf-8
|
||||
# coding: utf8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import srsly
|
||||
from collections import OrderedDict
|
||||
from preshed.bloom import BloomFilter
|
||||
|
||||
from .errors import Errors
|
||||
from .util import SimpleFrozenDict, ensure_path
|
||||
from .strings import get_string_id
|
||||
|
||||
|
||||
class Lookups(object):
|
||||
|
@ -16,14 +14,16 @@ class Lookups(object):
|
|||
so they can be accessed before the pipeline components are applied (e.g.
|
||||
in the tokenizer and lemmatizer), as well as within the pipeline components
|
||||
via doc.vocab.lookups.
|
||||
|
||||
Important note: At the moment, this class only performs a very basic
|
||||
dictionary lookup. We're planning to replace this with a more efficient
|
||||
implementation. See #3971 for details.
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the Lookups object.
|
||||
|
||||
RETURNS (Lookups): The newly created object.
|
||||
|
||||
DOCS: https://spacy.io/api/lookups#init
|
||||
"""
|
||||
self._tables = OrderedDict()
|
||||
|
||||
|
@ -32,7 +32,7 @@ class Lookups(object):
|
|||
Lookups.has_table.
|
||||
|
||||
name (unicode): Name of the table.
|
||||
RETURNS (bool): Whether a table of that name is in the lookups.
|
||||
RETURNS (bool): Whether a table of that name exists.
|
||||
"""
|
||||
return self.has_table(name)
|
||||
|
||||
|
@ -51,12 +51,11 @@ class Lookups(object):
|
|||
name (unicode): Unique name of table.
|
||||
data (dict): Optional data to add to the table.
|
||||
RETURNS (Table): The newly added table.
|
||||
|
||||
DOCS: https://spacy.io/api/lookups#add_table
|
||||
"""
|
||||
if name in self.tables:
|
||||
raise ValueError(Errors.E158.format(name=name))
|
||||
table = Table(name=name, data=data)
|
||||
table = Table(name=name)
|
||||
table.update(data)
|
||||
self._tables[name] = table
|
||||
return table
|
||||
|
||||
|
@ -65,8 +64,6 @@ class Lookups(object):
|
|||
|
||||
name (unicode): Name of the table.
|
||||
RETURNS (Table): The table.
|
||||
|
||||
DOCS: https://spacy.io/api/lookups#get_table
|
||||
"""
|
||||
if name not in self._tables:
|
||||
raise KeyError(Errors.E159.format(name=name, tables=self.tables))
|
||||
|
@ -75,10 +72,8 @@ class Lookups(object):
|
|||
def remove_table(self, name):
|
||||
"""Remove a table. Raises an error if the table doesn't exist.
|
||||
|
||||
name (unicode): Name of the table to remove.
|
||||
name (unicode): The name to remove.
|
||||
RETURNS (Table): The removed table.
|
||||
|
||||
DOCS: https://spacy.io/api/lookups#remove_table
|
||||
"""
|
||||
if name not in self._tables:
|
||||
raise KeyError(Errors.E159.format(name=name, tables=self.tables))
|
||||
|
@@ -89,57 +84,45 @@ class Lookups(object):

        name (unicode): Name of the table.
        RETURNS (bool): Whether a table of that name exists.

        DOCS: https://spacy.io/api/lookups#has_table
        """
        return name in self._tables

    def to_bytes(self, **kwargs):
    def to_bytes(self, exclude=tuple(), **kwargs):
        """Serialize the lookups to a bytestring.

        exclude (list): String names of serialization fields to exclude.
        RETURNS (bytes): The serialized Lookups.

        DOCS: https://spacy.io/api/lookups#to_bytes
        """
        return srsly.msgpack_dumps(self._tables)

    def from_bytes(self, bytes_data, **kwargs):
    def from_bytes(self, bytes_data, exclude=tuple(), **kwargs):
        """Load the lookups from a bytestring.

        bytes_data (bytes): The data to load.
        RETURNS (Lookups): The loaded Lookups.

        DOCS: https://spacy.io/api/lookups#from_bytes
        exclude (list): String names of serialization fields to exclude.
        RETURNS (bytes): The loaded Lookups.
        """
        for key, value in srsly.msgpack_loads(bytes_data).items():
            self._tables[key] = Table(key)
            self._tables[key].update(value)
        self._tables = OrderedDict()
        msg = srsly.msgpack_loads(bytes_data)
        for key, value in msg.items():
            self._tables[key] = Table.from_dict(value)
        return self

    def to_disk(self, path, **kwargs):
        """Save the lookups to a directory as lookups.bin. Expects a path to a
        directory, which will be created if it doesn't exist.
        """Save the lookups to a directory as lookups.bin.

        path (unicode / Path): The file path.

        DOCS: https://spacy.io/api/lookups#to_disk
        """
        if len(self._tables):
            path = ensure_path(path)
            if not path.exists():
                path.mkdir()
            filepath = path / "lookups.bin"
            with filepath.open("wb") as file_:
                file_.write(self.to_bytes())

    def from_disk(self, path, **kwargs):
        """Load lookups from a directory containing a lookups.bin. Will skip
        loading if the file doesn't exist.
        """Load lookups from a directory containing a lookups.bin.

        path (unicode / Path): The directory path.
        path (unicode / Path): The file path.
        RETURNS (Lookups): The loaded lookups.

        DOCS: https://spacy.io/api/lookups#from_disk
        """
        path = ensure_path(path)
        filepath = path / "lookups.bin"

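The two serialization paths shown in this hunk (msgpack bytestrings and a lookups.bin file on disk) compose into a simple round trip. A minimal sketch, assuming the Lookups API as documented above; the table name and lemma data are invented for illustration:

    from spacy.lookups import Lookups

    lookups = Lookups()
    lookups.add_table("lemma_lookup", {"going": "go", "went": "go"})
    data = lookups.to_bytes()                      # msgpack-serialized tables

    restored = Lookups().from_bytes(data)          # rebuild the tables from bytes
    assert restored.has_table("lemma_lookup")
    print(len(restored.get_table("lemma_lookup")))  # 2 entries survived the round trip
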
@@ -153,118 +136,22 @@ class Lookups(object):
class Table(OrderedDict):
    """A table in the lookups. Subclass of builtin dict that implements a
    slightly more consistent and unified API.

    Includes a Bloom filter to speed up missed lookups.
    """

    @classmethod
    def from_dict(cls, data, name=None):
        """Initialize a new table from a dict.

        data (dict): The dictionary.
        name (unicode): Optional table name for reference.
        RETURNS (Table): The newly created object.

        DOCS: https://spacy.io/api/lookups#table.from_dict
        """
        self = cls(name=name)
        self.update(data)
        return self

    def __init__(self, name=None, data=None):
    def __init__(self, name=None):
        """Initialize a new table.

        name (unicode): Optional table name for reference.
        data (dict): Initial data, used to hint Bloom Filter.
        RETURNS (Table): The newly created object.

        DOCS: https://spacy.io/api/lookups#table.init
        """
        OrderedDict.__init__(self)
        self.name = name
        # Assume a default size of 1M items
        self.default_size = 1e6
        size = len(data) if data and len(data) > 0 else self.default_size
        self.bloom = BloomFilter.from_error_rate(size)
        if data:
            self.update(data)

    def __setitem__(self, key, value):
        """Set new key/value pair. String keys will be hashed.

        key (unicode / int): The key to set.
        value: The value to set.
        """
        key = get_string_id(key)
        OrderedDict.__setitem__(self, key, value)
        self.bloom.add(key)

    def set(self, key, value):
        """Set new key/value pair. String keys will be hashed.
        Same as table[key] = value.

        key (unicode / int): The key to set.
        value: The value to set.
        """
        """Set new key/value pair. Same as table[key] = value."""
        self[key] = value

    def __getitem__(self, key):
        """Get the value for a given key. String keys will be hashed.

        key (unicode / int): The key to get.
        RETURNS: The value.
        """
        key = get_string_id(key)
        return OrderedDict.__getitem__(self, key)

    def get(self, key, default=None):
        """Get the value for a given key. String keys will be hashed.

        key (unicode / int): The key to get.
        default: The default value to return.
        RETURNS: The value.
        """
        key = get_string_id(key)
        return OrderedDict.get(self, key, default)

    def __contains__(self, key):
        """Check whether a key is in the table. String keys will be hashed.

        key (unicode / int): The key to check.
        RETURNS (bool): Whether the key is in the table.
        """
        key = get_string_id(key)
        # This can give a false positive, so we need to check it after
        if key not in self.bloom:
            return False
        return OrderedDict.__contains__(self, key)

    def to_bytes(self):
        """Serialize table to a bytestring.

        RETURNS (bytes): The serialized table.

        DOCS: https://spacy.io/api/lookups#table.to_bytes
        """
        data = [
            ("name", self.name),
            ("dict", dict(self.items())),
            ("bloom", self.bloom.to_bytes()),
        ]
        return srsly.msgpack_dumps(OrderedDict(data))

    def from_bytes(self, bytes_data):
        """Load a table from a bytestring.

        bytes_data (bytes): The data to load.
        RETURNS (Table): The loaded table.

        DOCS: https://spacy.io/api/lookups#table.from_bytes
        """
        loaded = srsly.msgpack_loads(bytes_data)
        data = loaded.get("dict", {})
        self.name = loaded["name"]
        self.bloom = BloomFilter().from_bytes(loaded["bloom"])
        self.clear()
        self.update(data)
        return self

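The Table class above guards every __contains__ call with a Bloom filter so that most lookups for missing keys return early without touching the dict. A toy, self-contained illustration of that guard (this is not the BloomFilter class actually used here, whose API is only inferred from the calls above):

    import hashlib

    class TinyBloom:
        def __init__(self, n_bits=1 << 16, n_hashes=3):
            self.n_bits, self.n_hashes = n_bits, n_hashes
            self.bits = 0

        def _positions(self, key):
            # derive n_hashes bit positions from the key
            for i in range(self.n_hashes):
                h = hashlib.blake2b("{}:{}".format(i, key).encode(), digest_size=8)
                yield int.from_bytes(h.digest(), "little") % self.n_bits

        def add(self, key):
            for pos in self._positions(key):
                self.bits |= 1 << pos

        def __contains__(self, key):
            # may return a false positive, never a false negative
            return all(self.bits >> pos & 1 for pos in self._positions(key))

    bloom = TinyBloom()
    bloom.add("go")
    assert "go" in bloom       # added keys are always reported present
    # "walk" is almost certainly reported absent, so the exact dict lookup is skipped
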
@ -103,8 +103,6 @@ cdef class Matcher:
|
|||
*patterns (list): List of token descriptions.
|
||||
"""
|
||||
errors = {}
|
||||
if on_match is not None and not hasattr(on_match, "__call__"):
|
||||
raise ValueError(Errors.E171.format(arg_type=type(on_match)))
|
||||
for i, pattern in enumerate(patterns):
|
||||
if len(pattern) == 0:
|
||||
raise ValueError(Errors.E012.format(key=key))
|
||||
|
@@ -164,37 +162,18 @@ cdef class Matcher:
        return default
        return (self._callbacks[key], self._patterns[key])

    def pipe(self, docs, batch_size=1000, n_threads=-1, return_matches=False,
             as_tuples=False):
    def pipe(self, docs, batch_size=1000, n_threads=-1):
        """Match a stream of documents, yielding them in turn.

        docs (iterable): A stream of documents.
        batch_size (int): Number of documents to accumulate into a working set.
        return_matches (bool): Yield the match lists along with the docs, making
            results (doc, matches) tuples.
        as_tuples (bool): Interpret the input stream as (doc, context) tuples,
            and yield (result, context) tuples out.
            If both return_matches and as_tuples are True, the output will
            be a sequence of ((doc, matches), context) tuples.
        YIELDS (Doc): Documents, in order.
        """
        if n_threads != -1:
            deprecation_warning(Warnings.W016)

        if as_tuples:
            for doc, context in docs:
                matches = self(doc)
                if return_matches:
                    yield ((doc, matches), context)
                else:
                    yield (doc, context)
        else:
            for doc in docs:
                matches = self(doc)
                if return_matches:
                    yield (doc, matches)
                else:
                    yield doc
        for doc in docs:
            self(doc)
            yield doc

    def __call__(self, Doc doc):
        """Find all token sequences matching the supplied pattern.
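A hedged usage sketch of the streaming variant documented above, assuming the pipe() signature that accepts return_matches and as_tuples; the pattern, texts and context dicts are invented:

    import spacy
    from spacy.matcher import Matcher

    nlp = spacy.blank("en")
    matcher = Matcher(nlp.vocab)
    matcher.add("HELLO", None, [{"LOWER": "hello"}])   # v2-style add(key, on_match, *patterns)

    data = [("hello world", {"line": 1}), ("nothing here", {"line": 2})]
    docs = ((nlp(text), ctx) for text, ctx in data)

    for (doc, matches), context in matcher.pipe(docs, return_matches=True, as_tuples=True):
        print(context["line"], [doc[start:end].text for _, start, end in matches])
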
@ -1,27 +1,5 @@
|
|||
from libcpp.vector cimport vector
|
||||
|
||||
from cymem.cymem cimport Pool
|
||||
from preshed.maps cimport key_t, MapStruct
|
||||
from ..typedefs cimport hash_t
|
||||
|
||||
from ..attrs cimport attr_id_t
|
||||
from ..tokens.doc cimport Doc
|
||||
from ..vocab cimport Vocab
|
||||
|
||||
|
||||
cdef class PhraseMatcher:
|
||||
cdef Vocab vocab
|
||||
cdef attr_id_t attr
|
||||
cdef object _callbacks
|
||||
cdef object _docs
|
||||
cdef bint _validate
|
||||
cdef MapStruct* c_map
|
||||
cdef Pool mem
|
||||
cdef key_t _terminal_hash
|
||||
|
||||
cdef void find_matches(self, Doc doc, vector[MatchStruct] *matches) nogil
|
||||
|
||||
|
||||
cdef struct MatchStruct:
|
||||
key_t match_id
|
||||
int start
|
||||
int end
|
||||
ctypedef vector[hash_t] hash_vec
|
||||
|
|
|
@ -2,16 +2,28 @@
|
|||
# cython: profile=True
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from libc.stdint cimport uintptr_t
|
||||
from libcpp.vector cimport vector
|
||||
from cymem.cymem cimport Pool
|
||||
from murmurhash.mrmr cimport hash64
|
||||
from preshed.maps cimport PreshMap
|
||||
|
||||
from preshed.maps cimport map_init, map_set, map_get, map_clear, map_iter
|
||||
|
||||
from ..attrs cimport ORTH, POS, TAG, DEP, LEMMA
|
||||
from ..structs cimport TokenC
|
||||
from ..tokens.token cimport Token
|
||||
from .matcher cimport Matcher
|
||||
from ..attrs cimport ORTH, POS, TAG, DEP, LEMMA, attr_id_t
|
||||
from ..vocab cimport Vocab
|
||||
from ..tokens.doc cimport Doc, get_token_attr
|
||||
from ..typedefs cimport attr_t, hash_t
|
||||
|
||||
from ._schemas import TOKEN_PATTERN_SCHEMA
|
||||
from ..errors import Errors, Warnings, deprecation_warning, user_warning
|
||||
from ..attrs import FLAG61 as U_ENT
|
||||
from ..attrs import FLAG60 as B2_ENT
|
||||
from ..attrs import FLAG59 as B3_ENT
|
||||
from ..attrs import FLAG58 as B4_ENT
|
||||
from ..attrs import FLAG43 as L2_ENT
|
||||
from ..attrs import FLAG42 as L3_ENT
|
||||
from ..attrs import FLAG41 as L4_ENT
|
||||
from ..attrs import FLAG42 as I3_ENT
|
||||
from ..attrs import FLAG41 as I4_ENT
|
||||
|
||||
|
||||
cdef class PhraseMatcher:
|
||||
|
@ -21,11 +33,18 @@ cdef class PhraseMatcher:
|
|||
|
||||
DOCS: https://spacy.io/api/phrasematcher
|
||||
USAGE: https://spacy.io/usage/rule-based-matching#phrasematcher
|
||||
|
||||
Adapted from FlashText: https://github.com/vi3k6i5/flashtext
|
||||
MIT License (see `LICENSE`)
|
||||
Copyright (c) 2017 Vikash Singh (vikash.duliajan@gmail.com)
|
||||
"""
|
||||
cdef Pool mem
|
||||
cdef Vocab vocab
|
||||
cdef Matcher matcher
|
||||
cdef PreshMap phrase_ids
|
||||
cdef vector[hash_vec] ent_id_matrix
|
||||
cdef int max_length
|
||||
cdef attr_id_t attr
|
||||
cdef public object _callbacks
|
||||
cdef public object _patterns
|
||||
cdef public object _docs
|
||||
cdef public object _validate
|
||||
|
||||
def __init__(self, Vocab vocab, max_length=0, attr="ORTH", validate=False):
|
||||
"""Initialize the PhraseMatcher.
|
||||
|
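For orientation, a minimal usage sketch of the PhraseMatcher class described above, using the v2-style add(key, on_match, *docs) signature shown later in this diff; the example phrase and text are invented:

    import spacy
    from spacy.matcher import PhraseMatcher

    nlp = spacy.blank("en")
    matcher = PhraseMatcher(nlp.vocab)                 # matches on ORTH by default
    matcher.add("OBAMA", None, nlp("Barack Obama"))
    doc = nlp("Barack Obama was the 44th president")
    for match_id, start, end in matcher(doc):
        print(nlp.vocab.strings[match_id], doc[start:end].text)
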
@ -39,17 +58,11 @@ cdef class PhraseMatcher:
|
|||
"""
|
||||
if max_length != 0:
|
||||
deprecation_warning(Warnings.W010)
|
||||
self.vocab = vocab
|
||||
self._callbacks = {}
|
||||
self._docs = {}
|
||||
self._validate = validate
|
||||
|
||||
self.mem = Pool()
|
||||
self.c_map = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
|
||||
self._terminal_hash = 826361138722620965
|
||||
map_init(self.mem, self.c_map, 8)
|
||||
|
||||
if isinstance(attr, (int, long)):
|
||||
self.max_length = max_length
|
||||
self.vocab = vocab
|
||||
self.matcher = Matcher(self.vocab, validate=False)
|
||||
if isinstance(attr, long):
|
||||
self.attr = attr
|
||||
else:
|
||||
attr = attr.upper()
|
||||
|
@ -58,15 +71,28 @@ cdef class PhraseMatcher:
|
|||
if attr not in TOKEN_PATTERN_SCHEMA["items"]["properties"]:
|
||||
raise ValueError(Errors.E152.format(attr=attr))
|
||||
self.attr = self.vocab.strings[attr]
|
||||
self.phrase_ids = PreshMap()
|
||||
abstract_patterns = [
|
||||
[{U_ENT: True}],
|
||||
[{B2_ENT: True}, {L2_ENT: True}],
|
||||
[{B3_ENT: True}, {I3_ENT: True}, {L3_ENT: True}],
|
||||
[{B4_ENT: True}, {I4_ENT: True}, {I4_ENT: True, "OP": "+"}, {L4_ENT: True}],
|
||||
]
|
||||
self.matcher.add("Candidate", None, *abstract_patterns)
|
||||
self._callbacks = {}
|
||||
self._docs = {}
|
||||
self._validate = validate
|
||||
|
||||
def __len__(self):
|
||||
"""Get the number of match IDs added to the matcher.
|
||||
"""Get the number of rules added to the matcher. Note that this only
|
||||
returns the number of rules (identical with the number of IDs), not the
|
||||
number of individual patterns.
|
||||
|
||||
RETURNS (int): The number of rules.
|
||||
|
||||
DOCS: https://spacy.io/api/phrasematcher#len
|
||||
"""
|
||||
return len(self._callbacks)
|
||||
return len(self._docs)
|
||||
|
||||
def __contains__(self, key):
|
||||
"""Check whether the matcher contains rules for a match ID.
|
||||
|
@ -76,79 +102,13 @@ cdef class PhraseMatcher:
|
|||
|
||||
DOCS: https://spacy.io/api/phrasematcher#contains
|
||||
"""
|
||||
return key in self._callbacks
|
||||
cdef hash_t ent_id = self.matcher._normalize_key(key)
|
||||
return ent_id in self._callbacks
|
||||
|
||||
def __reduce__(self):
|
||||
data = (self.vocab, self._docs, self._callbacks, self.attr)
|
||||
data = (self.vocab, self._docs, self._callbacks)
|
||||
return (unpickle_matcher, data, None, None)
|
||||
|
||||
def remove(self, key):
|
||||
"""Remove a rule from the matcher by match ID. A KeyError is raised if
|
||||
the key does not exist.
|
||||
|
||||
key (unicode): The match ID.
|
||||
|
||||
DOCS: https://spacy.io/api/phrasematcher#remove
|
||||
"""
|
||||
if key not in self._docs:
|
||||
raise KeyError(key)
|
||||
cdef MapStruct* current_node
|
||||
cdef MapStruct* terminal_map
|
||||
cdef MapStruct* node_pointer
|
||||
cdef void* result
|
||||
cdef key_t terminal_key
|
||||
cdef void* value
|
||||
cdef int c_i = 0
|
||||
cdef vector[MapStruct*] path_nodes
|
||||
cdef vector[key_t] path_keys
|
||||
cdef key_t key_to_remove
|
||||
for keyword in self._docs[key]:
|
||||
current_node = self.c_map
|
||||
for token in keyword:
|
||||
result = map_get(current_node, token)
|
||||
if result:
|
||||
path_nodes.push_back(current_node)
|
||||
path_keys.push_back(token)
|
||||
current_node = <MapStruct*>result
|
||||
else:
|
||||
# if token is not found, break out of the loop
|
||||
current_node = NULL
|
||||
break
|
||||
# remove the tokens from trie node if there are no other
|
||||
# keywords with them
|
||||
result = map_get(current_node, self._terminal_hash)
|
||||
if current_node != NULL and result:
|
||||
terminal_map = <MapStruct*>result
|
||||
terminal_keys = []
|
||||
c_i = 0
|
||||
while map_iter(terminal_map, &c_i, &terminal_key, &value):
|
||||
terminal_keys.append(self.vocab.strings[terminal_key])
|
||||
# if this is the only remaining key, remove unnecessary paths
|
||||
if terminal_keys == [key]:
|
||||
while not path_nodes.empty():
|
||||
node_pointer = path_nodes.back()
|
||||
path_nodes.pop_back()
|
||||
key_to_remove = path_keys.back()
|
||||
path_keys.pop_back()
|
||||
result = map_get(node_pointer, key_to_remove)
|
||||
if node_pointer.filled == 1:
|
||||
map_clear(node_pointer, key_to_remove)
|
||||
self.mem.free(result)
|
||||
else:
|
||||
# more than one key means more than 1 path,
|
||||
# delete not required path and keep the others
|
||||
map_clear(node_pointer, key_to_remove)
|
||||
self.mem.free(result)
|
||||
break
|
||||
# otherwise simply remove the key
|
||||
else:
|
||||
result = map_get(current_node, self._terminal_hash)
|
||||
if result:
|
||||
map_clear(<MapStruct*>result, self.vocab.strings[key])
|
||||
|
||||
del self._callbacks[key]
|
||||
del self._docs[key]
|
||||
|
||||
def add(self, key, on_match, *docs):
|
||||
"""Add a match-rule to the phrase-matcher. A match-rule consists of: an ID
|
||||
key, an on_match callback, and one or more patterns.
|
||||
|
@ -159,53 +119,53 @@ cdef class PhraseMatcher:
|
|||
|
||||
DOCS: https://spacy.io/api/phrasematcher#add
|
||||
"""
|
||||
|
||||
_ = self.vocab[key]
|
||||
self._callbacks[key] = on_match
|
||||
self._docs.setdefault(key, set())
|
||||
|
||||
cdef MapStruct* current_node
|
||||
cdef MapStruct* internal_node
|
||||
cdef void* result
|
||||
|
||||
cdef Doc doc
|
||||
cdef hash_t ent_id = self.matcher._normalize_key(key)
|
||||
self._callbacks[ent_id] = on_match
|
||||
self._docs[ent_id] = docs
|
||||
cdef int length
|
||||
cdef int i
|
||||
cdef hash_t phrase_hash
|
||||
cdef Pool mem = Pool()
|
||||
for doc in docs:
|
||||
if len(doc) == 0:
|
||||
length = doc.length
|
||||
if length == 0:
|
||||
continue
|
||||
if isinstance(doc, Doc):
|
||||
if self.attr in (POS, TAG, LEMMA) and not doc.is_tagged:
|
||||
raise ValueError(Errors.E155.format())
|
||||
if self.attr == DEP and not doc.is_parsed:
|
||||
raise ValueError(Errors.E156.format())
|
||||
if self._validate and (doc.is_tagged or doc.is_parsed) \
|
||||
and self.attr not in (DEP, POS, TAG, LEMMA):
|
||||
string_attr = self.vocab.strings[self.attr]
|
||||
user_warning(Warnings.W012.format(key=key, attr=string_attr))
|
||||
keyword = self._convert_to_array(doc)
|
||||
if self.attr in (POS, TAG, LEMMA) and not doc.is_tagged:
|
||||
raise ValueError(Errors.E155.format())
|
||||
if self.attr == DEP and not doc.is_parsed:
|
||||
raise ValueError(Errors.E156.format())
|
||||
if self._validate and (doc.is_tagged or doc.is_parsed) \
|
||||
and self.attr not in (DEP, POS, TAG, LEMMA):
|
||||
string_attr = self.vocab.strings[self.attr]
|
||||
user_warning(Warnings.W012.format(key=key, attr=string_attr))
|
||||
tags = get_biluo(length)
|
||||
phrase_key = <attr_t*>mem.alloc(length, sizeof(attr_t))
|
||||
for i, tag in enumerate(tags):
|
||||
attr_value = self.get_lex_value(doc, i)
|
||||
lexeme = self.vocab[attr_value]
|
||||
lexeme.set_flag(tag, True)
|
||||
phrase_key[i] = lexeme.orth
|
||||
phrase_hash = hash64(phrase_key, length * sizeof(attr_t), 0)
|
||||
|
||||
if phrase_hash in self.phrase_ids:
|
||||
phrase_index = self.phrase_ids[phrase_hash]
|
||||
ent_id_list = self.ent_id_matrix[phrase_index]
|
||||
ent_id_list.append(ent_id)
|
||||
self.ent_id_matrix[phrase_index] = ent_id_list
|
||||
|
||||
else:
|
||||
keyword = doc
|
||||
self._docs[key].add(tuple(keyword))
|
||||
ent_id_list = hash_vec(1)
|
||||
ent_id_list[0] = ent_id
|
||||
new_index = self.ent_id_matrix.size()
|
||||
if new_index == 0:
|
||||
# PreshMaps can not contain 0 as value, so storing a dummy at 0
|
||||
self.ent_id_matrix.push_back(hash_vec(0))
|
||||
new_index = 1
|
||||
self.ent_id_matrix.push_back(ent_id_list)
|
||||
self.phrase_ids.set(phrase_hash, <void*>new_index)
|
||||
|
||||
current_node = self.c_map
|
||||
for token in keyword:
|
||||
if token == self._terminal_hash:
|
||||
user_warning(Warnings.W021)
|
||||
break
|
||||
result = <MapStruct*>map_get(current_node, token)
|
||||
if not result:
|
||||
internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
|
||||
map_init(self.mem, internal_node, 8)
|
||||
map_set(self.mem, current_node, token, internal_node)
|
||||
result = internal_node
|
||||
current_node = <MapStruct*>result
|
||||
result = <MapStruct*>map_get(current_node, self._terminal_hash)
|
||||
if not result:
|
||||
internal_node = <MapStruct*>self.mem.alloc(1, sizeof(MapStruct))
|
||||
map_init(self.mem, internal_node, 8)
|
||||
map_set(self.mem, current_node, self._terminal_hash, internal_node)
|
||||
result = internal_node
|
||||
map_set(self.mem, <MapStruct*>result, self.vocab.strings[key], NULL)
|
||||
|
||||
def __call__(self, doc):
|
||||
def __call__(self, Doc doc):
|
||||
"""Find all sequences matching the supplied patterns on the `Doc`.
|
||||
|
||||
doc (Doc): The document to match over.
|
||||
|
@ -216,63 +176,25 @@ cdef class PhraseMatcher:
|
|||
DOCS: https://spacy.io/api/phrasematcher#call
|
||||
"""
|
||||
matches = []
|
||||
if doc is None or len(doc) == 0:
|
||||
# if doc is empty or None just return empty list
|
||||
return matches
|
||||
|
||||
cdef vector[MatchStruct] c_matches
|
||||
self.find_matches(doc, &c_matches)
|
||||
for i in range(c_matches.size()):
|
||||
matches.append((c_matches[i].match_id, c_matches[i].start, c_matches[i].end))
|
||||
if self.attr == ORTH:
|
||||
match_doc = doc
|
||||
else:
|
||||
# If we're not matching on the ORTH, match_doc will be a Doc whose
|
||||
# token.orth values are the attribute values we're matching on,
|
||||
# e.g. Doc(nlp.vocab, words=[token.pos_ for token in doc])
|
||||
words = [self.get_lex_value(doc, i) for i in range(len(doc))]
|
||||
match_doc = Doc(self.vocab, words=words)
|
||||
for _, start, end in self.matcher(match_doc):
|
||||
ent_ids = self.accept_match(match_doc, start, end)
|
||||
if ent_ids is not None:
|
||||
for ent_id in ent_ids:
|
||||
matches.append((ent_id, start, end))
|
||||
for i, (ent_id, start, end) in enumerate(matches):
|
||||
on_match = self._callbacks.get(ent_id)
|
||||
if on_match is not None:
|
||||
on_match(self, doc, i, matches)
|
||||
return matches
|
||||
|
||||
cdef void find_matches(self, Doc doc, vector[MatchStruct] *matches) nogil:
|
||||
cdef MapStruct* current_node = self.c_map
|
||||
cdef int start = 0
|
||||
cdef int idx = 0
|
||||
cdef int idy = 0
|
||||
cdef key_t key
|
||||
cdef void* value
|
||||
cdef int i = 0
|
||||
cdef MatchStruct ms
|
||||
cdef void* result
|
||||
while idx < doc.length:
|
||||
start = idx
|
||||
token = Token.get_struct_attr(&doc.c[idx], self.attr)
|
||||
# look for sequences from this position
|
||||
result = map_get(current_node, token)
|
||||
if result:
|
||||
current_node = <MapStruct*>result
|
||||
idy = idx + 1
|
||||
while idy < doc.length:
|
||||
result = map_get(current_node, self._terminal_hash)
|
||||
if result:
|
||||
i = 0
|
||||
while map_iter(<MapStruct*>result, &i, &key, &value):
|
||||
ms = make_matchstruct(key, start, idy)
|
||||
matches.push_back(ms)
|
||||
inner_token = Token.get_struct_attr(&doc.c[idy], self.attr)
|
||||
result = map_get(current_node, inner_token)
|
||||
if result:
|
||||
current_node = <MapStruct*>result
|
||||
idy += 1
|
||||
else:
|
||||
break
|
||||
else:
|
||||
# end of doc reached
|
||||
result = map_get(current_node, self._terminal_hash)
|
||||
if result:
|
||||
i = 0
|
||||
while map_iter(<MapStruct*>result, &i, &key, &value):
|
||||
ms = make_matchstruct(key, start, idy)
|
||||
matches.push_back(ms)
|
||||
current_node = self.c_map
|
||||
idx += 1
|
||||
|
||||
def pipe(self, stream, batch_size=1000, n_threads=-1, return_matches=False,
|
||||
as_tuples=False):
|
||||
"""Match a stream of documents, yielding them in turn.
|
||||
|
@ -306,21 +228,53 @@ cdef class PhraseMatcher:
|
|||
else:
|
||||
yield doc
|
||||
|
||||
def _convert_to_array(self, Doc doc):
|
||||
return [Token.get_struct_attr(&doc.c[i], self.attr) for i in range(len(doc))]
|
||||
def accept_match(self, Doc doc, int start, int end):
|
||||
cdef int i, j
|
||||
cdef Pool mem = Pool()
|
||||
phrase_key = <attr_t*>mem.alloc(end-start, sizeof(attr_t))
|
||||
for i, j in enumerate(range(start, end)):
|
||||
phrase_key[i] = doc.c[j].lex.orth
|
||||
cdef hash_t key = hash64(phrase_key, (end-start) * sizeof(attr_t), 0)
|
||||
|
||||
ent_index = <hash_t>self.phrase_ids.get(key)
|
||||
if ent_index == 0:
|
||||
return None
|
||||
return self.ent_id_matrix[ent_index]
|
||||
|
||||
def get_lex_value(self, Doc doc, int i):
|
||||
if self.attr == ORTH:
|
||||
# Return the regular orth value of the lexeme
|
||||
return doc.c[i].lex.orth
|
||||
# Get the attribute value instead, e.g. token.pos
|
||||
attr_value = get_token_attr(&doc.c[i], self.attr)
|
||||
if attr_value in (0, 1):
|
||||
# Value is boolean, convert to string
|
||||
string_attr_value = str(attr_value)
|
||||
else:
|
||||
string_attr_value = self.vocab.strings[attr_value]
|
||||
string_attr_name = self.vocab.strings[self.attr]
|
||||
# Concatenate the attr name and value to not pollute lexeme space
|
||||
# e.g. 'POS-VERB' instead of just 'VERB', which could otherwise
|
||||
# create false positive matches
|
||||
return "matcher:{}-{}".format(string_attr_name, string_attr_value)
|
||||
|
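The namespacing trick in get_lex_value above is easy to see in isolation: when matching on an attribute other than ORTH, the attribute name is prefixed onto the value so that, say, the POS tag VERB can never collide with the literal word "VERB" in the same string space. A tiny standalone illustration of the same format string:

    def lex_value(string_attr_name, string_attr_value):
        # mirrors the format string used in get_lex_value above
        return "matcher:{}-{}".format(string_attr_name, string_attr_value)

    print(lex_value("POS", "VERB"))    # matcher:POS-VERB
    print(lex_value("LEMMA", "be"))    # matcher:LEMMA-be
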
||||
|
||||
def unpickle_matcher(vocab, docs, callbacks, attr):
|
||||
matcher = PhraseMatcher(vocab, attr=attr)
|
||||
def get_biluo(length):
|
||||
if length == 0:
|
||||
raise ValueError(Errors.E127)
|
||||
elif length == 1:
|
||||
return [U_ENT]
|
||||
elif length == 2:
|
||||
return [B2_ENT, L2_ENT]
|
||||
elif length == 3:
|
||||
return [B3_ENT, I3_ENT, L3_ENT]
|
||||
else:
|
||||
return [B4_ENT, I4_ENT] + [I4_ENT] * (length-3) + [L4_ENT]
|
||||
|
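get_biluo above maps a phrase length onto the B/I/L/U flag layout used by the older abstract patterns. A runnable stand-in with plain strings instead of the FLAG constants imported at the top of this file:

    def get_biluo_tags(length):
        # same length-to-tag logic as get_biluo, minus the length == 0 error
        if length == 1:
            return ["U"]
        if length == 2:
            return ["B2", "L2"]
        if length == 3:
            return ["B3", "I3", "L3"]
        return ["B4", "I4"] + ["I4"] * (length - 3) + ["L4"]

    assert get_biluo_tags(5) == ["B4", "I4", "I4", "I4", "L4"]
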
||||
|
||||
def unpickle_matcher(vocab, docs, callbacks):
|
||||
matcher = PhraseMatcher(vocab)
|
||||
for key, specs in docs.items():
|
||||
callback = callbacks.get(key, None)
|
||||
matcher.add(key, callback, *specs)
|
||||
return matcher
|
||||
|
||||
|
||||
cdef MatchStruct make_matchstruct(key_t match_id, int start, int end) nogil:
|
||||
cdef MatchStruct ms
|
||||
ms.match_id = match_id
|
||||
ms.start = start
|
||||
ms.end = end
|
||||
return ms
|
||||
|
|
|
@ -1,41 +1,301 @@
|
|||
from cymem.cymem cimport Pool
|
||||
from preshed.maps cimport PreshMap, PreshMapArray
|
||||
from preshed.maps cimport PreshMapArray
|
||||
from libc.stdint cimport uint64_t
|
||||
from murmurhash cimport mrmr
|
||||
|
||||
from .structs cimport TokenC, MorphAnalysisC
|
||||
from .structs cimport TokenC
|
||||
from .strings cimport StringStore
|
||||
from .typedefs cimport hash_t, attr_t, flags_t
|
||||
from .typedefs cimport attr_t, flags_t
|
||||
from .parts_of_speech cimport univ_pos_t
|
||||
|
||||
from . cimport symbols
|
||||
|
||||
|
||||
cdef struct RichTagC:
|
||||
uint64_t morph
|
||||
int id
|
||||
univ_pos_t pos
|
||||
attr_t name
|
||||
|
||||
|
||||
cdef struct MorphAnalysisC:
|
||||
RichTagC tag
|
||||
attr_t lemma
|
||||
|
||||
|
||||
cdef class Morphology:
|
||||
cdef readonly Pool mem
|
||||
cdef readonly StringStore strings
|
||||
cdef PreshMap tags # Keyed by hash, value is pointer to tag
|
||||
|
||||
cdef public object lemmatizer
|
||||
cdef readonly object tag_map
|
||||
cdef readonly object tag_names
|
||||
cdef readonly object reverse_index
|
||||
cdef readonly object exc
|
||||
cdef readonly object _feat_map
|
||||
cdef readonly PreshMapArray _cache
|
||||
cdef readonly int n_tags
|
||||
cdef public object n_tags
|
||||
cdef public object reverse_index
|
||||
cdef public object tag_names
|
||||
cdef public object exc
|
||||
|
||||
cdef RichTagC* rich_tags
|
||||
cdef PreshMapArray _cache
|
||||
|
||||
cpdef update(self, hash_t morph, features)
|
||||
cdef hash_t insert(self, MorphAnalysisC tag) except 0
|
||||
|
||||
cdef int assign_untagged(self, TokenC* token) except -1
|
||||
|
||||
cdef int assign_tag(self, TokenC* token, tag) except -1
|
||||
|
||||
cdef int assign_tag_id(self, TokenC* token, int tag_id) except -1
|
||||
|
||||
cdef int _assign_tag_from_exceptions(self, TokenC* token, int tag_id) except -1
|
||||
cdef int assign_feature(self, uint64_t* morph, univ_morph_t feat_id, bint value) except -1
|
||||
|
||||
|
||||
cdef int check_feature(const MorphAnalysisC* tag, attr_t feature) nogil
|
||||
cdef attr_t get_field(const MorphAnalysisC* tag, int field) nogil
|
||||
cdef list list_features(const MorphAnalysisC* tag)
|
||||
cdef enum univ_morph_t:
|
||||
NIL = 0
|
||||
Animacy_anim = symbols.Animacy_anim
|
||||
Animacy_inan
|
||||
Animacy_hum
|
||||
Animacy_nhum
|
||||
Aspect_freq
|
||||
Aspect_imp
|
||||
Aspect_mod
|
||||
Aspect_none
|
||||
Aspect_perf
|
||||
Case_abe
|
||||
Case_abl
|
||||
Case_abs
|
||||
Case_acc
|
||||
Case_ade
|
||||
Case_all
|
||||
Case_cau
|
||||
Case_com
|
||||
Case_dat
|
||||
Case_del
|
||||
Case_dis
|
||||
Case_ela
|
||||
Case_ess
|
||||
Case_gen
|
||||
Case_ill
|
||||
Case_ine
|
||||
Case_ins
|
||||
Case_loc
|
||||
Case_lat
|
||||
Case_nom
|
||||
Case_par
|
||||
Case_sub
|
||||
Case_sup
|
||||
Case_tem
|
||||
Case_ter
|
||||
Case_tra
|
||||
Case_voc
|
||||
Definite_two
|
||||
Definite_def
|
||||
Definite_red
|
||||
Definite_cons # U20
|
||||
Definite_ind
|
||||
Degree_cmp
|
||||
Degree_comp
|
||||
Degree_none
|
||||
Degree_pos
|
||||
Degree_sup
|
||||
Degree_abs
|
||||
Degree_com
|
||||
Degree_dim # du
|
||||
Gender_com
|
||||
Gender_fem
|
||||
Gender_masc
|
||||
Gender_neut
|
||||
Mood_cnd
|
||||
Mood_imp
|
||||
Mood_ind
|
||||
Mood_n
|
||||
Mood_pot
|
||||
Mood_sub
|
||||
Mood_opt
|
||||
Negative_neg
|
||||
Negative_pos
|
||||
Negative_yes
|
||||
Polarity_neg # U20
|
||||
Polarity_pos # U20
|
||||
Number_com
|
||||
Number_dual
|
||||
Number_none
|
||||
Number_plur
|
||||
Number_sing
|
||||
Number_ptan # bg
|
||||
Number_count # bg
|
||||
NumType_card
|
||||
NumType_dist
|
||||
NumType_frac
|
||||
NumType_gen
|
||||
NumType_mult
|
||||
NumType_none
|
||||
NumType_ord
|
||||
NumType_sets
|
||||
Person_one
|
||||
Person_two
|
||||
Person_three
|
||||
Person_none
|
||||
Poss_yes
|
||||
PronType_advPart
|
||||
PronType_art
|
||||
PronType_default
|
||||
PronType_dem
|
||||
PronType_ind
|
||||
PronType_int
|
||||
PronType_neg
|
||||
PronType_prs
|
||||
PronType_rcp
|
||||
PronType_rel
|
||||
PronType_tot
|
||||
PronType_clit
|
||||
PronType_exc # es, ca, it, fa
|
||||
Reflex_yes
|
||||
Tense_fut
|
||||
Tense_imp
|
||||
Tense_past
|
||||
Tense_pres
|
||||
VerbForm_fin
|
||||
VerbForm_ger
|
||||
VerbForm_inf
|
||||
VerbForm_none
|
||||
VerbForm_part
|
||||
VerbForm_partFut
|
||||
VerbForm_partPast
|
||||
VerbForm_partPres
|
||||
VerbForm_sup
|
||||
VerbForm_trans
|
||||
VerbForm_conv # U20
|
||||
VerbForm_gdv # la
|
||||
Voice_act
|
||||
Voice_cau
|
||||
Voice_pass
|
||||
Voice_mid # gkc
|
||||
Voice_int # hb
|
||||
Abbr_yes # cz, fi, sl, U
|
||||
AdpType_prep # cz, U
|
||||
AdpType_post # U
|
||||
AdpType_voc # cz
|
||||
AdpType_comprep # cz
|
||||
AdpType_circ # U
|
||||
AdvType_man
|
||||
AdvType_loc
|
||||
AdvType_tim
|
||||
AdvType_deg
|
||||
AdvType_cau
|
||||
AdvType_mod
|
||||
AdvType_sta
|
||||
AdvType_ex
|
||||
AdvType_adadj
|
||||
ConjType_oper # cz, U
|
||||
ConjType_comp # cz, U
|
||||
Connegative_yes # fi
|
||||
Derivation_minen # fi
|
||||
Derivation_sti # fi
|
||||
Derivation_inen # fi
|
||||
Derivation_lainen # fi
|
||||
Derivation_ja # fi
|
||||
Derivation_ton # fi
|
||||
Derivation_vs # fi
|
||||
Derivation_ttain # fi
|
||||
Derivation_ttaa # fi
|
||||
Echo_rdp # U
|
||||
Echo_ech # U
|
||||
Foreign_foreign # cz, fi, U
|
||||
Foreign_fscript # cz, fi, U
|
||||
Foreign_tscript # cz, U
|
||||
Foreign_yes # sl
|
||||
Gender_dat_masc # bq, U
|
||||
Gender_dat_fem # bq, U
|
||||
Gender_erg_masc # bq
|
||||
Gender_erg_fem # bq
|
||||
Gender_psor_masc # cz, sl, U
|
||||
Gender_psor_fem # cz, sl, U
|
||||
Gender_psor_neut # sl
|
||||
Hyph_yes # cz, U
|
||||
InfForm_one # fi
|
||||
InfForm_two # fi
|
||||
InfForm_three # fi
|
||||
NameType_geo # U, cz
|
||||
NameType_prs # U, cz
|
||||
NameType_giv # U, cz
|
||||
NameType_sur # U, cz
|
||||
NameType_nat # U, cz
|
||||
NameType_com # U, cz
|
||||
NameType_pro # U, cz
|
||||
NameType_oth # U, cz
|
||||
NounType_com # U
|
||||
NounType_prop # U
|
||||
NounType_class # U
|
||||
Number_abs_sing # bq, U
|
||||
Number_abs_plur # bq, U
|
||||
Number_dat_sing # bq, U
|
||||
Number_dat_plur # bq, U
|
||||
Number_erg_sing # bq, U
|
||||
Number_erg_plur # bq, U
|
||||
Number_psee_sing # U
|
||||
Number_psee_plur # U
|
||||
Number_psor_sing # cz, fi, sl, U
|
||||
Number_psor_plur # cz, fi, sl, U
|
||||
NumForm_digit # cz, sl, U
|
||||
NumForm_roman # cz, sl, U
|
||||
NumForm_word # cz, sl, U
|
||||
NumValue_one # cz, U
|
||||
NumValue_two # cz, U
|
||||
NumValue_three # cz, U
|
||||
PartForm_pres # fi
|
||||
PartForm_past # fi
|
||||
PartForm_agt # fi
|
||||
PartForm_neg # fi
|
||||
PartType_mod # U
|
||||
PartType_emp # U
|
||||
PartType_res # U
|
||||
PartType_inf # U
|
||||
PartType_vbp # U
|
||||
Person_abs_one # bq, U
|
||||
Person_abs_two # bq, U
|
||||
Person_abs_three # bq, U
|
||||
Person_dat_one # bq, U
|
||||
Person_dat_two # bq, U
|
||||
Person_dat_three # bq, U
|
||||
Person_erg_one # bq, U
|
||||
Person_erg_two # bq, U
|
||||
Person_erg_three # bq, U
|
||||
Person_psor_one # fi, U
|
||||
Person_psor_two # fi, U
|
||||
Person_psor_three # fi, U
|
||||
Polite_inf # bq, U
|
||||
Polite_pol # bq, U
|
||||
Polite_abs_inf # bq, U
|
||||
Polite_abs_pol # bq, U
|
||||
Polite_erg_inf # bq, U
|
||||
Polite_erg_pol # bq, U
|
||||
Polite_dat_inf # bq, U
|
||||
Polite_dat_pol # bq, U
|
||||
Prefix_yes # U
|
||||
PrepCase_npr # cz
|
||||
PrepCase_pre # U
|
||||
PunctSide_ini # U
|
||||
PunctSide_fin # U
|
||||
PunctType_peri # U
|
||||
PunctType_qest # U
|
||||
PunctType_excl # U
|
||||
PunctType_quot # U
|
||||
PunctType_brck # U
|
||||
PunctType_comm # U
|
||||
PunctType_colo # U
|
||||
PunctType_semi # U
|
||||
PunctType_dash # U
|
||||
Style_arch # cz, fi, U
|
||||
Style_rare # cz, fi, U
|
||||
Style_poet # cz, U
|
||||
Style_norm # cz, U
|
||||
Style_coll # cz, U
|
||||
Style_vrnc # cz, U
|
||||
Style_sing # cz, U
|
||||
Style_expr # cz, U
|
||||
Style_derg # cz, U
|
||||
Style_vulg # cz, U
|
||||
Style_yes # fi, U
|
||||
StyleVariant_styleShort # cz
|
||||
StyleVariant_styleBound # cz, sl
|
||||
VerbType_aux # U
|
||||
VerbType_cop # U
|
||||
VerbType_mod # U
|
||||
VerbType_light # U
|
||||
|
||||
|
||||
cdef tag_to_json(const MorphAnalysisC* tag)
|
||||
|
|
spacy/morphology.pyx (1376 lines changed): file diff suppressed because it is too large.
|
@ -3,7 +3,6 @@ from __future__ import unicode_literals
|
|||
|
||||
from .pipes import Tagger, DependencyParser, EntityRecognizer, EntityLinker
|
||||
from .pipes import TextCategorizer, Tensorizer, Pipe, Sentencizer
|
||||
from .morphologizer import Morphologizer
|
||||
from .entityruler import EntityRuler
|
||||
from .hooks import SentenceSegmenter, SimilarityHook
|
||||
from .functions import merge_entities, merge_noun_chunks, merge_subtokens
|
||||
|
@ -16,7 +15,6 @@ __all__ = [
|
|||
"TextCategorizer",
|
||||
"Tensorizer",
|
||||
"Pipe",
|
||||
"Morphologizer",
|
||||
"EntityRuler",
|
||||
"Sentencizer",
|
||||
"SentenceSegmenter",
|
||||
|
|
|
@ -180,28 +180,21 @@ class EntityRuler(object):
|
|||
|
||||
DOCS: https://spacy.io/api/entityruler#add_patterns
|
||||
"""
|
||||
# disable the nlp components after this one in case they hadn't been initialized / deserialised yet
|
||||
try:
|
||||
current_index = self.nlp.pipe_names.index(self.name)
|
||||
subsequent_pipes = [pipe for pipe in self.nlp.pipe_names[current_index + 1:]]
|
||||
except ValueError:
|
||||
subsequent_pipes = []
|
||||
with self.nlp.disable_pipes(*subsequent_pipes):
|
||||
for entry in patterns:
|
||||
label = entry["label"]
|
||||
if "id" in entry:
|
||||
label = self._create_label(label, entry["id"])
|
||||
pattern = entry["pattern"]
|
||||
if isinstance(pattern, basestring_):
|
||||
self.phrase_patterns[label].append(self.nlp(pattern))
|
||||
elif isinstance(pattern, list):
|
||||
self.token_patterns[label].append(pattern)
|
||||
else:
|
||||
raise ValueError(Errors.E097.format(pattern=pattern))
|
||||
for label, patterns in self.token_patterns.items():
|
||||
self.matcher.add(label, None, *patterns)
|
||||
for label, patterns in self.phrase_patterns.items():
|
||||
self.phrase_matcher.add(label, None, *patterns)
|
||||
for entry in patterns:
|
||||
label = entry["label"]
|
||||
if "id" in entry:
|
||||
label = self._create_label(label, entry["id"])
|
||||
pattern = entry["pattern"]
|
||||
if isinstance(pattern, basestring_):
|
||||
self.phrase_patterns[label].append(self.nlp(pattern))
|
||||
elif isinstance(pattern, list):
|
||||
self.token_patterns[label].append(pattern)
|
||||
else:
|
||||
raise ValueError(Errors.E097.format(pattern=pattern))
|
||||
for label, patterns in self.token_patterns.items():
|
||||
self.matcher.add(label, None, *patterns)
|
||||
for label, patterns in self.phrase_patterns.items():
|
||||
self.phrase_matcher.add(label, None, *patterns)
|
||||
|
||||
def _split_label(self, label):
|
||||
"""Split Entity label into ent_label and ent_id if it contains self.ent_id_sep
|
||||
|
|
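The add_patterns loop above accepts both phrase patterns (plain strings run through nlp) and token patterns (lists of dicts). A hedged usage sketch with invented patterns, assuming the v2 EntityRuler API:

    import spacy
    from spacy.pipeline import EntityRuler

    nlp = spacy.blank("en")
    ruler = EntityRuler(nlp)
    ruler.add_patterns([
        {"label": "ORG", "pattern": "Explosion AI"},                              # phrase pattern
        {"label": "GPE", "pattern": [{"LOWER": "san"}, {"LOWER": "francisco"}]},  # token pattern
    ])
    nlp.add_pipe(ruler)
    doc = nlp("Explosion AI is based far from San Francisco")
    print([(ent.text, ent.label_) for ent in doc.ents])
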
|
@ -1,164 +0,0 @@
|
|||
from __future__ import unicode_literals
|
||||
from collections import OrderedDict, defaultdict
|
||||
|
||||
import numpy
|
||||
cimport numpy as np
|
||||
|
||||
from thinc.api import chain
|
||||
from thinc.neural.util import to_categorical, copy_array, get_array_module
|
||||
from .. import util
|
||||
from .pipes import Pipe
|
||||
from .._ml import Tok2Vec, build_morphologizer_model
|
||||
from .._ml import link_vectors_to_models, zero_init, flatten
|
||||
from .._ml import create_default_optimizer
|
||||
from ..errors import Errors, TempErrors
|
||||
from ..compat import basestring_
|
||||
from ..tokens.doc cimport Doc
|
||||
from ..vocab cimport Vocab
|
||||
from ..morphology cimport Morphology
|
||||
|
||||
|
||||
class Morphologizer(Pipe):
|
||||
name = 'morphologizer'
|
||||
|
||||
@classmethod
|
||||
def Model(cls, **cfg):
|
||||
if cfg.get('pretrained_dims') and not cfg.get('pretrained_vectors'):
|
||||
raise ValueError(TempErrors.T008)
|
||||
class_map = Morphology.create_class_map()
|
||||
return build_morphologizer_model(class_map.field_sizes, **cfg)
|
||||
|
||||
def __init__(self, vocab, model=True, **cfg):
|
||||
self.vocab = vocab
|
||||
self.model = model
|
||||
self.cfg = OrderedDict(sorted(cfg.items()))
|
||||
self.cfg.setdefault('cnn_maxout_pieces', 2)
|
||||
self._class_map = self.vocab.morphology.create_class_map()
|
||||
|
||||
@property
|
||||
def labels(self):
|
||||
return self.vocab.morphology.tag_names
|
||||
|
||||
@property
|
||||
def tok2vec(self):
|
||||
if self.model in (None, True, False):
|
||||
return None
|
||||
else:
|
||||
return chain(self.model.tok2vec, flatten)
|
||||
|
||||
def __call__(self, doc):
|
||||
features, tokvecs = self.predict([doc])
|
||||
self.set_annotations([doc], features, tensors=tokvecs)
|
||||
return doc
|
||||
|
||||
def pipe(self, stream, batch_size=128, n_threads=-1):
|
||||
for docs in util.minibatch(stream, size=batch_size):
|
||||
docs = list(docs)
|
||||
features, tokvecs = self.predict(docs)
|
||||
self.set_annotations(docs, features, tensors=tokvecs)
|
||||
yield from docs
|
||||
|
||||
def predict(self, docs):
|
||||
if not any(len(doc) for doc in docs):
|
||||
# Handle case where there are no tokens in any docs.
|
||||
n_labels = self.model.nO
|
||||
guesses = [self.model.ops.allocate((0, n_labels)) for doc in docs]
|
||||
tokvecs = self.model.ops.allocate((0, self.model.tok2vec.nO))
|
||||
return guesses, tokvecs
|
||||
tokvecs = self.model.tok2vec(docs)
|
||||
scores = self.model.softmax(tokvecs)
|
||||
return scores, tokvecs
|
||||
|
||||
def set_annotations(self, docs, batch_scores, tensors=None):
|
||||
if isinstance(docs, Doc):
|
||||
docs = [docs]
|
||||
cdef Doc doc
|
||||
cdef Vocab vocab = self.vocab
|
||||
offsets = [self._class_map.get_field_offset(field)
|
||||
for field in self._class_map.fields]
|
||||
for i, doc in enumerate(docs):
|
||||
doc_scores = batch_scores[i]
|
||||
doc_guesses = scores_to_guesses(doc_scores, self.model.softmax.out_sizes)
|
||||
# Convert the neuron indices into feature IDs.
|
||||
doc_feat_ids = numpy.zeros((len(doc), len(self._class_map.fields)), dtype='i')
|
||||
for j in range(len(doc)):
|
||||
for k, offset in enumerate(offsets):
|
||||
if doc_guesses[j, k] == 0:
|
||||
doc_feat_ids[j, k] = 0
|
||||
else:
|
||||
doc_feat_ids[j, k] = offset + doc_guesses[j, k]
|
||||
# Get the set of feature names.
|
||||
feats = {self._class_map.col2info[f][2] for f in doc_feat_ids[j]}
|
||||
if "NIL" in feats:
|
||||
feats.remove("NIL")
|
||||
# Now add the analysis, and set the hash.
|
||||
doc.c[j].morph = self.vocab.morphology.add(feats)
|
||||
if doc[j].morph.pos != 0:
|
||||
doc.c[j].pos = doc[j].morph.pos
|
||||
|
||||
def update(self, docs, golds, drop=0., sgd=None, losses=None):
|
||||
if losses is not None and self.name not in losses:
|
||||
losses[self.name] = 0.
|
||||
|
||||
tag_scores, bp_tag_scores = self.model.begin_update(docs, drop=drop)
|
||||
loss, d_tag_scores = self.get_loss(docs, golds, tag_scores)
|
||||
bp_tag_scores(d_tag_scores, sgd=sgd)
|
||||
|
||||
if losses is not None:
|
||||
losses[self.name] += loss
|
||||
|
||||
def get_loss(self, docs, golds, scores):
|
||||
guesses = []
|
||||
for doc_scores in scores:
|
||||
guesses.append(scores_to_guesses(doc_scores, self.model.softmax.out_sizes))
|
||||
guesses = self.model.ops.xp.vstack(guesses)
|
||||
scores = self.model.ops.xp.vstack(scores)
|
||||
if not isinstance(scores, numpy.ndarray):
|
||||
scores = scores.get()
|
||||
if not isinstance(guesses, numpy.ndarray):
|
||||
guesses = guesses.get()
|
||||
cdef int idx = 0
|
||||
# Do this on CPU, as we can't vectorize easily.
|
||||
target = numpy.zeros(scores.shape, dtype='f')
|
||||
field_sizes = self.model.softmax.out_sizes
|
||||
for doc, gold in zip(docs, golds):
|
||||
for t, features in enumerate(gold.morphology):
|
||||
if features is None:
|
||||
target[idx] = scores[idx]
|
||||
else:
|
||||
gold_fields = {}
|
||||
for feature in features:
|
||||
field = self._class_map.feat2field[feature]
|
||||
gold_fields[field] = self._class_map.feat2offset[feature]
|
||||
for field in self._class_map.fields:
|
||||
field_id = self._class_map.field2id[field]
|
||||
col_offset = self._class_map.field2col[field]
|
||||
if field_id in gold_fields:
|
||||
target[idx, col_offset + gold_fields[field_id]] = 1.
|
||||
else:
|
||||
target[idx, col_offset] = 1.
|
||||
#print(doc[t])
|
||||
#for col, info in enumerate(self._class_map.col2info):
|
||||
# print(col, info, scores[idx, col], target[idx, col])
|
||||
idx += 1
|
||||
target = self.model.ops.asarray(target, dtype='f')
|
||||
scores = self.model.ops.asarray(scores, dtype='f')
|
||||
d_scores = scores - target
|
||||
loss = (d_scores**2).sum()
|
||||
d_scores = self.model.ops.unflatten(d_scores, [len(d) for d in docs])
|
||||
return float(loss), d_scores
|
||||
|
||||
def use_params(self, params):
|
||||
with self.model.use_params(params):
|
||||
yield
|
||||
|
||||
def scores_to_guesses(scores, out_sizes):
|
||||
xp = get_array_module(scores)
|
||||
guesses = xp.zeros((scores.shape[0], len(out_sizes)), dtype='i')
|
||||
offset = 0
|
||||
for i, size in enumerate(out_sizes):
|
||||
slice_ = scores[:, offset : offset + size]
|
||||
col_guesses = slice_.argmax(axis=1)
|
||||
guesses[:, i] = col_guesses
|
||||
offset += size
|
||||
return guesses
|
|
@ -69,7 +69,7 @@ class Pipe(object):
|
|||
predictions = self.predict([doc])
|
||||
if isinstance(predictions, tuple) and len(predictions) == 2:
|
||||
scores, tensors = predictions
|
||||
self.set_annotations([doc], scores, tensors=tensors)
|
||||
self.set_annotations([doc], scores, tensor=tensors)
|
||||
else:
|
||||
self.set_annotations([doc], predictions)
|
||||
return doc
|
||||
|
@ -90,7 +90,7 @@ class Pipe(object):
|
|||
predictions = self.predict(docs)
|
||||
if isinstance(predictions, tuple) and len(tuple) == 2:
|
||||
scores, tensors = predictions
|
||||
self.set_annotations(docs, scores, tensors=tensors)
|
||||
self.set_annotations(docs, scores, tensor=tensors)
|
||||
else:
|
||||
self.set_annotations(docs, predictions)
|
||||
yield from docs
|
||||
|
@ -424,22 +424,18 @@ class Tagger(Pipe):
|
|||
cdef Doc doc
|
||||
cdef int idx = 0
|
||||
cdef Vocab vocab = self.vocab
|
||||
assign_morphology = self.cfg.get("set_morphology", True)
|
||||
for i, doc in enumerate(docs):
|
||||
doc_tag_ids = batch_tag_ids[i]
|
||||
if hasattr(doc_tag_ids, "get"):
|
||||
doc_tag_ids = doc_tag_ids.get()
|
||||
for j, tag_id in enumerate(doc_tag_ids):
|
||||
# Don't clobber preset POS tags
|
||||
if doc.c[j].tag == 0:
|
||||
if doc.c[j].pos == 0 and assign_morphology:
|
||||
# Don't clobber preset lemmas
|
||||
lemma = doc.c[j].lemma
|
||||
vocab.morphology.assign_tag_id(&doc.c[j], tag_id)
|
||||
if lemma != 0 and lemma != doc.c[j].lex.orth:
|
||||
doc.c[j].lemma = lemma
|
||||
else:
|
||||
doc.c[j].tag = self.vocab.strings[self.labels[tag_id]]
|
||||
if doc.c[j].tag == 0 and doc.c[j].pos == 0:
|
||||
# Don't clobber preset lemmas
|
||||
lemma = doc.c[j].lemma
|
||||
vocab.morphology.assign_tag_id(&doc.c[j], tag_id)
|
||||
if lemma != 0 and lemma != doc.c[j].lex.orth:
|
||||
doc.c[j].lemma = lemma
|
||||
idx += 1
|
||||
if tensors is not None and len(tensors):
|
||||
if isinstance(doc.tensor, numpy.ndarray) \
|
||||
|
@ -504,7 +500,6 @@ class Tagger(Pipe):
|
|||
orig_tag_map = dict(self.vocab.morphology.tag_map)
|
||||
new_tag_map = OrderedDict()
|
||||
for raw_text, annots_brackets in get_gold_tuples():
|
||||
_ = annots_brackets.pop()
|
||||
for annots, brackets in annots_brackets:
|
||||
ids, words, tags, heads, deps, ents = annots
|
||||
for tag in tags:
|
||||
|
@ -937,6 +932,11 @@ class TextCategorizer(Pipe):
|
|||
def labels(self, value):
|
||||
self.cfg["labels"] = tuple(value)
|
||||
|
||||
def __call__(self, doc):
|
||||
scores, tensors = self.predict([doc])
|
||||
self.set_annotations([doc], scores, tensors=tensors)
|
||||
return doc
|
||||
|
||||
def pipe(self, stream, batch_size=128, n_threads=-1):
|
||||
for docs in util.minibatch(stream, size=batch_size):
|
||||
docs = list(docs)
|
||||
|
@ -1017,10 +1017,6 @@ class TextCategorizer(Pipe):
|
|||
return 1
|
||||
|
||||
def begin_training(self, get_gold_tuples=lambda: [], pipeline=None, sgd=None, **kwargs):
|
||||
for raw_text, annots_brackets in get_gold_tuples():
|
||||
cats = annots_brackets.pop()
|
||||
for cat in cats:
|
||||
self.add_label(cat)
|
||||
if self.model is True:
|
||||
self.cfg["pretrained_vectors"] = kwargs.get("pretrained_vectors")
|
||||
self.require_labels()
|
||||
|
|
spacy/scorer.py (385 lines changed)
|
@ -1,10 +1,7 @@
|
|||
# coding: utf8
|
||||
from __future__ import division, print_function, unicode_literals
|
||||
|
||||
import numpy as np
|
||||
|
||||
from .gold import tags_to_entities, GoldParse
|
||||
from .errors import Errors
|
||||
|
||||
|
||||
class PRFScore(object):
|
||||
|
@@ -37,39 +34,10 @@ class PRFScore(object):
        return 2 * ((p * r) / (p + r + 1e-100))


class ROCAUCScore(object):
    """
    An AUC ROC score.
    """

    def __init__(self):
        self.golds = []
        self.cands = []
        self.saved_score = 0.0
        self.saved_score_at_len = 0

    def score_set(self, cand, gold):
        self.cands.append(cand)
        self.golds.append(gold)

    @property
    def score(self):
        if len(self.golds) == self.saved_score_at_len:
            return self.saved_score
        try:
            self.saved_score = _roc_auc_score(self.golds, self.cands)
        # catch ValueError: Only one class present in y_true.
        # ROC AUC score is not defined in that case.
        except ValueError:
            self.saved_score = -float("inf")
        self.saved_score_at_len = len(self.golds)
        return self.saved_score


class Scorer(object):
    """Compute evaluation scores."""

    def __init__(self, eval_punct=False, pipeline=None):
    def __init__(self, eval_punct=False):
        """Initialize the Scorer.

        eval_punct (bool): Evaluate the dependency attachments to and from

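A small usage sketch of the ROCAUCScore accumulator defined above: feed (candidate score, gold label) pairs, then read the cached .score property. The toy scores below separate the two classes perfectly, so the AUC is 1.0:

    score = ROCAUCScore()
    for cand, gold in [(0.9, 1), (0.7, 1), (0.4, 0), (0.2, 0)]:
        score.score_set(cand, gold)
    print(score.score)   # 1.0 for this toy data
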
@ -86,24 +54,6 @@ class Scorer(object):
|
|||
self.ner = PRFScore()
|
||||
self.ner_per_ents = dict()
|
||||
self.eval_punct = eval_punct
|
||||
self.textcat = None
|
||||
self.textcat_per_cat = dict()
|
||||
self.textcat_positive_label = None
|
||||
self.textcat_multilabel = False
|
||||
|
||||
if pipeline:
|
||||
for name, model in pipeline:
|
||||
if name == "textcat":
|
||||
self.textcat_positive_label = model.cfg.get("positive_label", None)
|
||||
if self.textcat_positive_label:
|
||||
self.textcat = PRFScore()
|
||||
if not model.cfg.get("exclusive_classes", False):
|
||||
self.textcat_multilabel = True
|
||||
for label in model.cfg.get("labels", []):
|
||||
self.textcat_per_cat[label] = ROCAUCScore()
|
||||
else:
|
||||
for label in model.cfg.get("labels", []):
|
||||
self.textcat_per_cat[label] = PRFScore()
|
||||
|
||||
@property
|
||||
def tags_acc(self):
|
||||
|
@ -151,47 +101,10 @@ class Scorer(object):
|
|||
for k, v in self.ner_per_ents.items()
|
||||
}
|
||||
|
||||
@property
|
||||
def textcat_score(self):
|
||||
"""RETURNS (float): f-score on positive label for binary exclusive,
|
||||
macro-averaged f-score for 3+ exclusive,
|
||||
macro-averaged AUC ROC score for multilabel (-1 if undefined)
|
||||
"""
|
||||
if not self.textcat_multilabel:
|
||||
# binary multiclass
|
||||
if self.textcat_positive_label:
|
||||
return self.textcat.fscore * 100
|
||||
# other multiclass
|
||||
return (
|
||||
sum([score.fscore for label, score in self.textcat_per_cat.items()])
|
||||
/ (len(self.textcat_per_cat) + 1e-100)
|
||||
* 100
|
||||
)
|
||||
# multilabel
|
||||
return max(
|
||||
sum([score.score for label, score in self.textcat_per_cat.items()])
|
||||
/ (len(self.textcat_per_cat) + 1e-100),
|
||||
-1,
|
||||
)
|
||||
|
||||
@property
|
||||
def textcats_per_cat(self):
|
||||
"""RETURNS (dict): Scores per textcat label.
|
||||
"""
|
||||
if not self.textcat_multilabel:
|
||||
return {
|
||||
k: {"p": v.precision * 100, "r": v.recall * 100, "f": v.fscore * 100}
|
||||
for k, v in self.textcat_per_cat.items()
|
||||
}
|
||||
return {
|
||||
k: {"roc_auc_score": max(v.score, -1)}
|
||||
for k, v in self.textcat_per_cat.items()
|
||||
}
|
||||
|
||||
@property
|
||||
def scores(self):
|
||||
"""RETURNS (dict): All scores with keys `uas`, `las`, `ents_p`,
|
||||
`ents_r`, `ents_f`, `tags_acc`, `token_acc`, and `textcat_score`.
|
||||
`ents_r`, `ents_f`, `tags_acc` and `token_acc`.
|
||||
"""
|
||||
return {
|
||||
"uas": self.uas,
|
||||
|
@ -202,8 +115,6 @@ class Scorer(object):
|
|||
"ents_per_type": self.ents_per_type,
|
||||
"tags_acc": self.tags_acc,
|
||||
"token_acc": self.token_acc,
|
||||
"textcat_score": self.textcat_score,
|
||||
"textcats_per_cat": self.textcats_per_cat,
|
||||
}
|
||||
|
||||
def score(self, doc, gold, verbose=False, punct_labels=("p", "punct")):
|
||||
|
@ -281,301 +192,9 @@ class Scorer(object):
|
|||
self.unlabelled.score_set(
|
||||
set(item[:2] for item in cand_deps), set(item[:2] for item in gold_deps)
|
||||
)
|
||||
if (
|
||||
len(gold.cats) > 0
|
||||
and set(self.textcat_per_cat) == set(gold.cats)
|
||||
and set(gold.cats) == set(doc.cats)
|
||||
):
|
||||
goldcat = max(gold.cats, key=gold.cats.get)
|
||||
candcat = max(doc.cats, key=doc.cats.get)
|
||||
if self.textcat_positive_label:
|
||||
self.textcat.score_set(
|
||||
set([self.textcat_positive_label]) & set([candcat]),
|
||||
set([self.textcat_positive_label]) & set([goldcat]),
|
||||
)
|
||||
for label in self.textcat_per_cat:
|
||||
if self.textcat_multilabel:
|
||||
self.textcat_per_cat[label].score_set(
|
||||
doc.cats[label], gold.cats[label]
|
||||
)
|
||||
else:
|
||||
self.textcat_per_cat[label].score_set(
|
||||
set([label]) & set([candcat]), set([label]) & set([goldcat])
|
||||
)
|
||||
elif len(self.textcat_per_cat) > 0:
|
||||
model_labels = set(self.textcat_per_cat)
|
||||
eval_labels = set(gold.cats)
|
||||
raise ValueError(
|
||||
Errors.E162.format(model_labels=model_labels, eval_labels=eval_labels)
|
||||
)
|
||||
if verbose:
|
||||
gold_words = [item[1] for item in gold.orig_annot]
|
||||
for w_id, h_id, dep in cand_deps - gold_deps:
|
||||
print("F", gold_words[w_id], dep, gold_words[h_id])
|
||||
for w_id, h_id, dep in gold_deps - cand_deps:
|
||||
print("M", gold_words[w_id], dep, gold_words[h_id])
|
||||
|
||||
|
||||
#############################################################################
|
||||
#
|
||||
# The following implementation of roc_auc_score() is adapted from
|
||||
# scikit-learn, which is distributed under the following license:
|
||||
#
|
||||
# New BSD License
|
||||
#
|
||||
# Copyright (c) 2007–2019 The scikit-learn developers.
|
||||
# All rights reserved.
|
||||
#
|
||||
#
|
||||
# Redistribution and use in source and binary forms, with or without
|
||||
# modification, are permitted provided that the following conditions are met:
|
||||
#
|
||||
# a. Redistributions of source code must retain the above copyright notice,
|
||||
# this list of conditions and the following disclaimer.
|
||||
# b. Redistributions in binary form must reproduce the above copyright
|
||||
# notice, this list of conditions and the following disclaimer in the
|
||||
# documentation and/or other materials provided with the distribution.
|
||||
# c. Neither the name of the Scikit-learn Developers nor the names of
|
||||
# its contributors may be used to endorse or promote products
|
||||
# derived from this software without specific prior written
|
||||
# permission.
|
||||
#
|
||||
#
|
||||
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
|
||||
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
|
||||
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
|
||||
# ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE FOR
|
||||
# ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
|
||||
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
|
||||
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
|
||||
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
|
||||
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
|
||||
# OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
|
||||
# DAMAGE.
|
||||
|
||||
|
||||
def _roc_auc_score(y_true, y_score):
|
||||
"""Compute Area Under the Receiver Operating Characteristic Curve (ROC AUC)
|
||||
from prediction scores.
|
||||
|
||||
Note: this implementation is restricted to the binary classification task
|
||||
|
||||
Parameters
|
||||
----------
|
||||
y_true : array, shape = [n_samples] or [n_samples, n_classes]
|
||||
True binary labels or binary label indicators.
|
||||
The multiclass case expects shape = [n_samples] and labels
|
||||
with values in ``range(n_classes)``.
|
||||
|
||||
y_score : array, shape = [n_samples] or [n_samples, n_classes]
|
||||
Target scores, can either be probability estimates of the positive
|
||||
class, confidence values, or non-thresholded measure of decisions
|
||||
(as returned by "decision_function" on some classifiers). For binary
|
||||
y_true, y_score is supposed to be the score of the class with greater
|
||||
label. The multiclass case expects shape = [n_samples, n_classes]
|
||||
where the scores correspond to probability estimates.
|
||||
|
||||
Returns
|
||||
-------
|
||||
auc : float
|
||||
|
||||
References
|
||||
----------
|
||||
.. [1] `Wikipedia entry for the Receiver operating characteristic
|
||||
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_
|
||||
|
||||
.. [2] Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition
|
||||
Letters, 2006, 27(8):861-874.
|
||||
|
||||
.. [3] `Analyzing a portion of the ROC curve. McClish, 1989
|
||||
<https://www.ncbi.nlm.nih.gov/pubmed/2668680>`_
|
||||
"""
|
||||
if len(np.unique(y_true)) != 2:
|
||||
raise ValueError(Errors.E165)
|
||||
fpr, tpr, _ = _roc_curve(y_true, y_score)
|
||||
return _auc(fpr, tpr)
|
||||
|
||||
|
||||
def _roc_curve(y_true, y_score):
|
||||
"""Compute Receiver operating characteristic (ROC)
|
||||
|
||||
Note: this implementation is restricted to the binary classification task.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
|
||||
y_true : array, shape = [n_samples]
|
||||
True binary labels. If labels are not either {-1, 1} or {0, 1}, then
|
||||
pos_label should be explicitly given.
|
||||
|
||||
y_score : array, shape = [n_samples]
|
||||
Target scores, can either be probability estimates of the positive
|
||||
class, confidence values, or non-thresholded measure of decisions
|
||||
(as returned by "decision_function" on some classifiers).
|
||||
|
||||
Returns
|
||||
-------
|
||||
fpr : array, shape = [>2]
|
||||
Increasing false positive rates such that element i is the false
|
||||
positive rate of predictions with score >= thresholds[i].
|
||||
|
||||
tpr : array, shape = [>2]
|
||||
Increasing true positive rates such that element i is the true
|
||||
positive rate of predictions with score >= thresholds[i].
|
||||
|
||||
thresholds : array, shape = [n_thresholds]
|
||||
Decreasing thresholds on the decision function used to compute
|
||||
fpr and tpr. `thresholds[0]` represents no instances being predicted
|
||||
and is arbitrarily set to `max(y_score) + 1`.
|
||||
|
||||
Notes
|
||||
-----
|
||||
Since the thresholds are sorted from low to high values, they
|
||||
are reversed upon returning them to ensure they correspond to both ``fpr``
|
||||
and ``tpr``, which are sorted in reversed order during their calculation.
|
||||
|
||||
References
|
||||
----------
|
||||
.. [1] `Wikipedia entry for the Receiver operating characteristic
|
||||
<https://en.wikipedia.org/wiki/Receiver_operating_characteristic>`_
|
||||
|
||||
.. [2] Fawcett T. An introduction to ROC analysis[J]. Pattern Recognition
|
||||
Letters, 2006, 27(8):861-874.
|
||||
"""
|
||||
fps, tps, thresholds = _binary_clf_curve(y_true, y_score)
|
||||
|
||||
# Add an extra threshold position
|
||||
# to make sure that the curve starts at (0, 0)
|
||||
tps = np.r_[0, tps]
|
||||
fps = np.r_[0, fps]
|
||||
thresholds = np.r_[thresholds[0] + 1, thresholds]
|
||||
|
||||
if fps[-1] <= 0:
|
||||
fpr = np.repeat(np.nan, fps.shape)
|
||||
else:
|
||||
fpr = fps / fps[-1]
|
||||
|
||||
if tps[-1] <= 0:
|
||||
tpr = np.repeat(np.nan, tps.shape)
|
||||
else:
|
||||
tpr = tps / tps[-1]
|
||||
|
||||
return fpr, tpr, thresholds
|
||||
|
||||
|
||||
def _binary_clf_curve(y_true, y_score):
|
||||
"""Calculate true and false positives per binary classification threshold.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
y_true : array, shape = [n_samples]
|
||||
True targets of binary classification
|
||||
|
||||
y_score : array, shape = [n_samples]
|
||||
Estimated probabilities or decision function
|
||||
|
||||
Returns
|
||||
-------
|
||||
fps : array, shape = [n_thresholds]
|
||||
A count of false positives, at index i being the number of negative
|
||||
samples assigned a score >= thresholds[i]. The total number of
|
||||
negative samples is equal to fps[-1] (thus true negatives are given by
|
||||
fps[-1] - fps).
|
||||
|
||||
tps : array, shape = [n_thresholds <= len(np.unique(y_score))]
|
||||
An increasing count of true positives, at index i being the number
|
||||
of positive samples assigned a score >= thresholds[i]. The total
|
||||
number of positive samples is equal to tps[-1] (thus false negatives
|
||||
are given by tps[-1] - tps).
|
||||
|
||||
thresholds : array, shape = [n_thresholds]
|
||||
Decreasing score values.
|
||||
"""
|
||||
pos_label = 1.0
|
||||
|
||||
y_true = np.ravel(y_true)
|
||||
y_score = np.ravel(y_score)
|
||||
|
||||
# make y_true a boolean vector
|
||||
y_true = y_true == pos_label
|
||||
|
||||
# sort scores and corresponding truth values
|
||||
desc_score_indices = np.argsort(y_score, kind="mergesort")[::-1]
|
||||
y_score = y_score[desc_score_indices]
|
||||
y_true = y_true[desc_score_indices]
|
||||
weight = 1.0
|
||||
|
||||
# y_score typically has many tied values. Here we extract
|
||||
# the indices associated with the distinct values. We also
|
||||
# concatenate a value for the end of the curve.
|
||||
distinct_value_indices = np.where(np.diff(y_score))[0]
|
||||
threshold_idxs = np.r_[distinct_value_indices, y_true.size - 1]
|
||||
|
||||
# accumulate the true positives with decreasing threshold
|
||||
tps = _stable_cumsum(y_true * weight)[threshold_idxs]
|
||||
fps = 1 + threshold_idxs - tps
|
||||
return fps, tps, y_score[threshold_idxs]
|
||||
|
||||
|
||||
def _stable_cumsum(arr, axis=None, rtol=1e-05, atol=1e-08):
|
||||
"""Use high precision for cumsum and check that final value matches sum
|
||||
|
||||
Parameters
|
||||
----------
|
||||
arr : array-like
|
||||
To be cumulatively summed as flat
|
||||
axis : int, optional
|
||||
Axis along which the cumulative sum is computed.
|
||||
The default (None) is to compute the cumsum over the flattened array.
|
||||
rtol : float
|
||||
Relative tolerance, see ``np.allclose``
|
||||
atol : float
|
||||
Absolute tolerance, see ``np.allclose``
|
||||
"""
|
||||
out = np.cumsum(arr, axis=axis, dtype=np.float64)
|
||||
expected = np.sum(arr, axis=axis, dtype=np.float64)
|
||||
if not np.all(
|
||||
np.isclose(
|
||||
out.take(-1, axis=axis), expected, rtol=rtol, atol=atol, equal_nan=True
|
||||
)
|
||||
):
|
||||
raise ValueError(Errors.E163)
|
||||
return out
|
||||
|
||||
|
||||
def _auc(x, y):
|
||||
"""Compute Area Under the Curve (AUC) using the trapezoidal rule
|
||||
|
||||
This is a general function, given points on a curve. For computing the
|
||||
area under the ROC-curve, see :func:`roc_auc_score`.
|
||||
|
||||
Parameters
|
||||
----------
|
||||
x : array, shape = [n]
|
||||
x coordinates. These must be either monotonic increasing or monotonic
|
||||
decreasing.
|
||||
y : array, shape = [n]
|
||||
y coordinates.
|
||||
|
||||
Returns
|
||||
-------
|
||||
auc : float
|
||||
"""
|
||||
x = np.ravel(x)
|
||||
y = np.ravel(y)
|
||||
|
||||
direction = 1
|
||||
dx = np.diff(x)
|
||||
if np.any(dx < 0):
|
||||
if np.all(dx <= 0):
|
||||
direction = -1
|
||||
else:
|
||||
raise ValueError(Errors.E164.format(x))
|
||||
|
||||
area = direction * np.trapz(y, x)
|
||||
if isinstance(area, np.memmap):
|
||||
# Reductions such as .sum used internally in np.trapz do not return a
|
||||
# scalar by default for numpy.memmap instances contrary to
|
||||
# regular numpy.ndarray instances.
|
||||
area = area.dtype.type(area)
|
||||
return area
|
||||
|
|
|
@ -119,7 +119,9 @@ cdef class StringStore:
|
|||
return ""
|
||||
elif string_or_id in SYMBOLS_BY_STR:
|
||||
return SYMBOLS_BY_STR[string_or_id]
|
||||
|
||||
cdef hash_t key
|
||||
|
||||
if isinstance(string_or_id, unicode):
|
||||
key = hash_string(string_or_id)
|
||||
return key
|
||||
|
@ -137,20 +139,6 @@ cdef class StringStore:
|
|||
else:
|
||||
return decode_Utf8Str(utf8str)
|
||||
|
||||
def as_int(self, key):
|
||||
"""If key is an int, return it; otherwise, get the int value."""
|
||||
if not isinstance(key, basestring):
|
||||
return key
|
||||
else:
|
||||
return self[key]
|
||||
|
||||
def as_string(self, key):
|
||||
"""If key is a string, return it; otherwise, get the string value."""
|
||||
if isinstance(key, basestring):
|
||||
return key
|
||||
else:
|
||||
return self[key]
|
||||
|
||||
def add(self, string):
|
||||
"""Add a string to the StringStore.
|
||||
|
||||
|
|
|
@ -78,54 +78,6 @@ cdef struct TokenC:
|
|||
hash_t ent_id
|
||||
|
||||
|
||||
cdef struct MorphAnalysisC:
|
||||
univ_pos_t pos
|
||||
int length
|
||||
|
||||
attr_t abbr
|
||||
attr_t adp_type
|
||||
attr_t adv_type
|
||||
attr_t animacy
|
||||
attr_t aspect
|
||||
attr_t case
|
||||
attr_t conj_type
|
||||
attr_t connegative
|
||||
attr_t definite
|
||||
attr_t degree
|
||||
attr_t derivation
|
||||
attr_t echo
|
||||
attr_t foreign
|
||||
attr_t gender
|
||||
attr_t hyph
|
||||
attr_t inf_form
|
||||
attr_t mood
|
||||
attr_t negative
|
||||
attr_t number
|
||||
attr_t name_type
|
||||
attr_t noun_type
|
||||
attr_t num_form
|
||||
attr_t num_type
|
||||
attr_t num_value
|
||||
attr_t part_form
|
||||
attr_t part_type
|
||||
attr_t person
|
||||
attr_t polite
|
||||
attr_t polarity
|
||||
attr_t poss
|
||||
attr_t prefix
|
||||
attr_t prep_case
|
||||
attr_t pron_type
|
||||
attr_t punct_side
|
||||
attr_t punct_type
|
||||
attr_t reflex
|
||||
attr_t style
|
||||
attr_t style_variant
|
||||
attr_t tense
|
||||
attr_t typo
|
||||
attr_t verb_form
|
||||
attr_t voice
|
||||
attr_t verb_type
|
||||
|
||||
# Internal struct, for storage and disambiguation of entities.
|
||||
cdef struct KBEntryC:
|
||||
|
||||
|
|
|
@ -342,7 +342,6 @@ cdef class ArcEager(TransitionSystem):
|
|||
actions[RIGHT][label] = 1
|
||||
actions[REDUCE][label] = 1
|
||||
for raw_text, sents in kwargs.get('gold_parses', []):
|
||||
_ = sents.pop()
|
||||
for (ids, words, tags, heads, labels, iob), ctnts in sents:
|
||||
heads, labels = nonproj.projectivize(heads, labels)
|
||||
for child, head, label in zip(ids, heads, labels):
|
||||
|
|
|
@ -66,14 +66,12 @@ cdef class BiluoPushDown(TransitionSystem):
|
|||
UNIT: Counter(),
|
||||
OUT: Counter()
|
||||
}
|
||||
actions[OUT][''] = 1 # Represents a token predicted to be outside of any entity
|
||||
actions[UNIT][''] = 1 # Represents a token prohibited to be in an entity
|
||||
actions[OUT][''] = 1
|
||||
for entity_type in kwargs.get('entity_types', []):
|
||||
for action in (BEGIN, IN, LAST, UNIT):
|
||||
actions[action][entity_type] = 1
|
||||
moves = ('M', 'B', 'I', 'L', 'U')
|
||||
for raw_text, sents in kwargs.get('gold_parses', []):
|
||||
_ = sents.pop()
|
||||
for (ids, words, tags, heads, labels, biluo), _ in sents:
|
||||
for i, ner_tag in enumerate(biluo):
|
||||
if ner_tag != 'O' and ner_tag != '-':
|
||||
|
@ -163,7 +161,8 @@ cdef class BiluoPushDown(TransitionSystem):
|
|||
for i in range(self.n_moves):
|
||||
if self.c[i].move == move and self.c[i].label == label:
|
||||
return self.c[i]
|
||||
raise KeyError(Errors.E022.format(name=name))
|
||||
else:
|
||||
raise KeyError(Errors.E022.format(name=name))
|
||||
|
||||
cdef Transition init_transition(self, int clas, int move, attr_t label) except *:
|
||||
# TODO: Apparent Cython bug here when we try to use the Transition()
|
||||
|
@ -267,7 +266,7 @@ cdef class Begin:
|
|||
return False
|
||||
elif label == 0:
|
||||
return False
|
||||
elif preset_ent_iob == 1:
|
||||
elif preset_ent_iob == 1 or preset_ent_iob == 2:
|
||||
# Ensure we don't clobber preset entities. If no entity preset,
|
||||
# ent_iob is 0
|
||||
return False
|
||||
|
@ -283,8 +282,8 @@ cdef class Begin:
|
|||
# Otherwise, force acceptance, even if we're across a sentence
|
||||
# boundary or the token is whitespace.
|
||||
return True
|
||||
elif st.B_(1).ent_iob == 3:
|
||||
# If the next word is B, we can't B now
|
||||
elif st.B_(1).ent_iob == 2 or st.B_(1).ent_iob == 3:
|
||||
# If the next word is B or O, we can't B now
|
||||
return False
|
||||
elif st.B_(1).sent_start == 1:
|
||||
# Don't allow entities to extend across sentence boundaries
|
||||
|
@ -327,7 +326,6 @@ cdef class In:
|
|||
@staticmethod
|
||||
cdef bint is_valid(const StateC* st, attr_t label) nogil:
|
||||
cdef int preset_ent_iob = st.B_(0).ent_iob
|
||||
cdef attr_t preset_ent_label = st.B_(0).ent_type
|
||||
if label == 0:
|
||||
return False
|
||||
elif st.E_(0).ent_type != label:
|
||||
|
@ -337,22 +335,13 @@ cdef class In:
|
|||
elif st.B(1) == -1:
|
||||
# If we're at the end, we can't I.
|
||||
return False
|
||||
elif preset_ent_iob == 2:
|
||||
return False
|
||||
elif preset_ent_iob == 3:
|
||||
return False
|
||||
elif st.B_(1).ent_iob == 3:
|
||||
# If we know the next word is B, we can't be I (must be L)
|
||||
elif st.B_(1).ent_iob == 2 or st.B_(1).ent_iob == 3:
|
||||
# If we know the next word is B or O, we can't be I (must be L)
|
||||
return False
|
||||
elif preset_ent_iob == 1:
|
||||
if st.B_(1).ent_iob in (0, 2):
|
||||
# if next preset is missing or O, this can't be I (must be L)
|
||||
return False
|
||||
elif label != preset_ent_label:
|
||||
# If label isn't right, reject
|
||||
return False
|
||||
else:
|
||||
# Otherwise, force acceptance, even if we're across a sentence
|
||||
# boundary or the token is whitespace.
|
||||
return True
|
||||
elif st.B(1) != -1 and st.B_(1).sent_start == 1:
|
||||
# Don't allow entities to extend across sentence boundaries
|
||||
return False
|
||||
|
@ -398,24 +387,17 @@ cdef class In:
|
|||
else:
|
||||
return 1
|
||||
|
||||
|
||||
cdef class Last:
|
||||
@staticmethod
|
||||
cdef bint is_valid(const StateC* st, attr_t label) nogil:
|
||||
cdef int preset_ent_iob = st.B_(0).ent_iob
|
||||
cdef attr_t preset_ent_label = st.B_(0).ent_type
|
||||
if label == 0:
|
||||
return False
|
||||
elif not st.entity_is_open():
|
||||
return False
|
||||
elif preset_ent_iob == 1 and st.B_(1).ent_iob != 1:
|
||||
elif st.B_(0).ent_iob == 1 and st.B_(1).ent_iob != 1:
|
||||
# If a preset entity has I followed by not-I, is L
|
||||
if label != preset_ent_label:
|
||||
# If label isn't right, reject
|
||||
return False
|
||||
else:
|
||||
# Otherwise, force acceptance, even if we're across a sentence
|
||||
# boundary or the token is whitespace.
|
||||
return True
|
||||
return True
|
||||
elif st.E_(0).ent_type != label:
|
||||
return False
|
||||
elif st.B_(1).ent_iob == 1:
|
||||
|
@ -468,13 +450,12 @@ cdef class Unit:
|
|||
cdef int preset_ent_iob = st.B_(0).ent_iob
|
||||
cdef attr_t preset_ent_label = st.B_(0).ent_type
|
||||
if label == 0:
|
||||
# this is only allowed if it's a preset blocked annotation
|
||||
if preset_ent_label == 0 and preset_ent_iob == 3:
|
||||
return True
|
||||
else:
|
||||
return False
|
||||
return False
|
||||
elif st.entity_is_open():
|
||||
return False
|
||||
elif preset_ent_iob == 2:
|
||||
# Don't clobber preset O
|
||||
return False
|
||||
elif st.B_(1).ent_iob == 1:
|
||||
# If next token is In, we can't be Unit -- must be Begin
|
||||
return False
|
||||
|
|
|
@ -135,9 +135,7 @@ cdef class Parser:
|
|||
names = []
|
||||
for i in range(self.moves.n_moves):
|
||||
name = self.moves.move_name(self.moves.c[i].move, self.moves.c[i].label)
|
||||
# Explicitly removing the internal "U-" token used for blocking entities
|
||||
if name != "U-":
|
||||
names.append(name)
|
||||
names.append(name)
|
||||
return names
|
||||
|
||||
nr_feature = 8
|
||||
|
@ -163,16 +161,10 @@ cdef class Parser:
|
|||
added = self.moves.add_action(action, label)
|
||||
if added:
|
||||
resized = True
|
||||
if resized:
|
||||
self._resize()
|
||||
|
||||
def _resize(self):
|
||||
if "nr_class" in self.cfg:
|
||||
if resized and "nr_class" in self.cfg:
|
||||
self.cfg["nr_class"] = self.moves.n_moves
|
||||
if self.model not in (True, False, None):
|
||||
if self.model not in (True, False, None) and resized:
|
||||
self.model.resize_output(self.moves.n_moves)
|
||||
if self._rehearsal_model not in (True, False, None):
|
||||
self._rehearsal_model.resize_output(self.moves.n_moves)
|
||||
|
||||
def add_multitask_objective(self, target):
|
||||
# Defined in subclasses, to avoid circular import
|
||||
|
@ -243,9 +235,7 @@ cdef class Parser:
|
|||
if isinstance(docs, Doc):
|
||||
docs = [docs]
|
||||
if not any(len(doc) for doc in docs):
|
||||
result = self.moves.init_batch(docs)
|
||||
self._resize()
|
||||
return result
|
||||
return self.moves.init_batch(docs)
|
||||
if beam_width < 2:
|
||||
return self.greedy_parse(docs, drop=drop)
|
||||
else:
|
||||
|
@ -259,7 +249,7 @@ cdef class Parser:
|
|||
# This is pretty dirty, but the NER can resize itself in init_batch,
|
||||
# if labels are missing. We therefore have to check whether we need to
|
||||
# expand our model output.
|
||||
self._resize()
|
||||
self.model.resize_output(self.moves.n_moves)
|
||||
model = self.model(docs)
|
||||
weights = get_c_weights(model)
|
||||
for state in batch:
|
||||
|
@ -279,7 +269,7 @@ cdef class Parser:
|
|||
# This is pretty dirty, but the NER can resize itself in init_batch,
|
||||
# if labels are missing. We therefore have to check whether we need to
|
||||
# expand our model output.
|
||||
self._resize()
|
||||
self.model.resize_output(self.moves.n_moves)
|
||||
model = self.model(docs)
|
||||
token_ids = numpy.zeros((len(docs) * beam_width, self.nr_feature),
|
||||
dtype='i', order='C')
|
||||
|
@ -453,7 +443,8 @@ cdef class Parser:
|
|||
# This is pretty dirty, but the NER can resize itself in init_batch,
|
||||
# if labels are missing. We therefore have to check whether we need to
|
||||
# expand our model output.
|
||||
self._resize()
|
||||
self.model.resize_output(self.moves.n_moves)
|
||||
self._rehearsal_model.resize_output(self.moves.n_moves)
|
||||
# Prepare the stepwise model, and get the callback for finishing the batch
|
||||
tutor, _ = self._rehearsal_model.begin_update(docs, drop=0.0)
|
||||
model, finish_update = self.model.begin_update(docs, drop=0.0)
|
||||
|
@ -594,7 +585,6 @@ cdef class Parser:
|
|||
doc_sample = []
|
||||
gold_sample = []
|
||||
for raw_text, annots_brackets in islice(get_gold_tuples(), 1000):
|
||||
_ = annots_brackets.pop()
|
||||
for annots, brackets in annots_brackets:
|
||||
ids, words, tags, heads, deps, ents = annots
|
||||
doc_sample.append(Doc(self.vocab, words=words))
|
||||
|
|
|
@ -63,13 +63,6 @@ cdef class TransitionSystem:
|
|||
cdef Doc doc
|
||||
beams = []
|
||||
cdef int offset = 0
|
||||
|
||||
# Doc objects might contain labels that we need to register actions for. We need to check for that
|
||||
# *before* we create any Beam objects, because the Beam object needs the correct number of
|
||||
# actions. It's sort of dumb, but the best way is to just call init_batch() -- that triggers the additions,
|
||||
# and it doesn't matter that we create and discard the state objects.
|
||||
self.init_batch(docs)
|
||||
|
||||
for doc in docs:
|
||||
beam = Beam(self.n_moves, beam_width, min_density=beam_density)
|
||||
beam.initialize(self.init_beam_state, doc.length, doc.c)
|
||||
|
@ -103,7 +96,8 @@ cdef class TransitionSystem:
|
|||
|
||||
def apply_transition(self, StateClass state, name):
|
||||
if not self.is_valid(state, name):
|
||||
raise ValueError(Errors.E170.format(name=name))
|
||||
raise ValueError(
|
||||
"Cannot apply transition {name}: invalid for the current state.".format(name=name))
|
||||
action = self.lookup_transition(name)
|
||||
action.do(state.c, action.label)
|
||||
|
||||
|
|
|
@ -185,12 +185,6 @@ def ru_tokenizer():
|
|||
return get_lang_class("ru").Defaults.create_tokenizer()
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def ru_lemmatizer():
|
||||
pytest.importorskip("pymorphy2")
|
||||
return get_lang_class("ru").Defaults.create_lemmatizer()
|
||||
|
||||
|
||||
@pytest.fixture(scope="session")
|
||||
def sr_tokenizer():
|
||||
return get_lang_class("sr").Defaults.create_tokenizer()
|
||||
|
|
|
@ -1,11 +1,11 @@
|
|||
# coding: utf-8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from spacy.pipeline import EntityRecognizer
|
||||
from spacy.tokens import Span
|
||||
import pytest
|
||||
|
||||
from ...pipeline import EntityRecognizer
|
||||
from ..util import get_doc
|
||||
from ...tokens import Span
|
||||
|
||||
import pytest
|
||||
|
||||
|
||||
def test_doc_add_entities_set_ents_iob(en_vocab):
|
||||
|
@ -16,23 +16,10 @@ def test_doc_add_entities_set_ents_iob(en_vocab):
|
|||
ner(doc)
|
||||
assert len(list(doc.ents)) == 0
|
||||
assert [w.ent_iob_ for w in doc] == (["O"] * len(doc))
|
||||
|
||||
doc.ents = [(doc.vocab.strings["ANIMAL"], 3, 4)]
|
||||
assert [w.ent_iob_ for w in doc] == ["O", "O", "O", "B"]
|
||||
|
||||
assert [w.ent_iob_ for w in doc] == ["", "", "", "B"]
|
||||
doc.ents = [(doc.vocab.strings["WORD"], 0, 2)]
|
||||
assert [w.ent_iob_ for w in doc] == ["B", "I", "O", "O"]
|
||||
|
||||
|
||||
def test_ents_reset(en_vocab):
|
||||
text = ["This", "is", "a", "lion"]
|
||||
doc = get_doc(en_vocab, text)
|
||||
ner = EntityRecognizer(en_vocab)
|
||||
ner.begin_training([])
|
||||
ner(doc)
|
||||
assert [t.ent_iob_ for t in doc] == (["O"] * len(doc))
|
||||
doc.ents = list(doc.ents)
|
||||
assert [t.ent_iob_ for t in doc] == (["O"] * len(doc))
|
||||
assert [w.ent_iob_ for w in doc] == ["B", "I", "", ""]
|
||||
|
||||
|
||||
def test_add_overlapping_entities(en_vocab):
|
||||
|
|
|
@ -5,13 +5,11 @@ import pytest
|
|||
from spacy.vocab import Vocab
|
||||
from spacy.tokens import Doc
|
||||
from spacy.lemmatizer import Lemmatizer
|
||||
from spacy.lookups import Table
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def lemmatizer():
|
||||
lookup = Table(data={"dogs": "dog", "boxen": "box", "mice": "mouse"})
|
||||
return Lemmatizer(lookup=lookup)
|
||||
return Lemmatizer(lookup={"dogs": "dog", "boxen": "box", "mice": "mouse"})
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
|
|
|
@ -1,33 +0,0 @@
|
|||
# coding: utf-8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import pytest
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def i_has(en_tokenizer):
|
||||
doc = en_tokenizer("I has")
|
||||
doc[0].tag_ = "PRP"
|
||||
doc[1].tag_ = "VBZ"
|
||||
return doc
|
||||
|
||||
|
||||
def test_token_morph_id(i_has):
|
||||
assert i_has[0].morph.id
|
||||
assert i_has[1].morph.id != 0
|
||||
assert i_has[0].morph.id != i_has[1].morph.id
|
||||
|
||||
|
||||
def test_morph_props(i_has):
|
||||
assert i_has[0].morph.pron_type == i_has.vocab.strings["PronType_prs"]
|
||||
assert i_has[0].morph.pron_type_ == "PronType_prs"
|
||||
assert i_has[1].morph.pron_type == 0
|
||||
|
||||
|
||||
def test_morph_iter(i_has):
|
||||
assert list(i_has[0].morph) == ["PronType_prs"]
|
||||
assert list(i_has[1].morph) == ["Number_sing", "Person_three", "VerbForm_fin"]
|
||||
|
||||
|
||||
def test_morph_get(i_has):
|
||||
assert i_has[0].morph.get("pron_type") == "PronType_prs"
|
|
@ -47,10 +47,3 @@ def test_ja_tokenizer_tags(ja_tokenizer, text, expected_tags):
|
|||
def test_ja_tokenizer_pos(ja_tokenizer, text, expected_pos):
|
||||
pos = [token.pos_ for token in ja_tokenizer(text)]
|
||||
assert pos == expected_pos
|
||||
|
||||
|
||||
def test_extra_spaces(ja_tokenizer):
|
||||
# note: three spaces after "I"
|
||||
tokens = ja_tokenizer("I like cheese.")
|
||||
assert tokens[1].orth_ == " "
|
||||
assert tokens[2].orth_ == " "
|
||||
|
|
|
@ -17,4 +17,4 @@ TEST_CASES = [
|
|||
|
||||
@pytest.mark.parametrize("tokens,lemmas", TEST_CASES)
|
||||
def test_lt_lemmatizer(lt_lemmatizer, tokens, lemmas):
|
||||
assert lemmas == [lt_lemmatizer.lookup_table.get(token, token) for token in tokens]
|
||||
assert lemmas == [lt_lemmatizer.lookup(token) for token in tokens]
|
||||
|
|
|
@ -2,10 +2,17 @@
|
|||
from __future__ import unicode_literals
|
||||
|
||||
import pytest
|
||||
from spacy.lang.ru import Russian
|
||||
|
||||
from ...util import get_doc
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def ru_lemmatizer():
|
||||
pytest.importorskip("pymorphy2")
|
||||
return Russian.Defaults.create_lemmatizer()
|
||||
|
||||
|
||||
def test_ru_doc_lemmatization(ru_tokenizer):
|
||||
words = ["мама", "мыла", "раму"]
|
||||
tags = [
|
||||
|
|
|
@ -410,11 +410,3 @@ def test_matcher_schema_token_attributes(en_vocab, pattern, text):
|
|||
assert len(matcher) == 1
|
||||
matches = matcher(doc)
|
||||
assert len(matches) == 1
|
||||
|
||||
|
||||
def test_matcher_valid_callback(en_vocab):
|
||||
"""Test that on_match can only be None or callable."""
|
||||
matcher = Matcher(en_vocab)
|
||||
with pytest.raises(ValueError):
|
||||
matcher.add("TEST", [], [{"TEXT": "test"}])
|
||||
matcher(Doc(en_vocab, words=["test"]))
|
||||
|
|
|
@ -8,31 +8,10 @@ from ..util import get_doc
|
|||
|
||||
|
||||
def test_matcher_phrase_matcher(en_vocab):
|
||||
doc = Doc(en_vocab, words=["Google", "Now"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("COMPANY", None, doc)
|
||||
doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
|
||||
# intermediate phrase
|
||||
pattern = Doc(en_vocab, words=["Google", "Now"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("COMPANY", None, pattern)
|
||||
assert len(matcher(doc)) == 1
|
||||
# initial token
|
||||
pattern = Doc(en_vocab, words=["I"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("I", None, pattern)
|
||||
assert len(matcher(doc)) == 1
|
||||
# initial phrase
|
||||
pattern = Doc(en_vocab, words=["I", "like"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("ILIKE", None, pattern)
|
||||
assert len(matcher(doc)) == 1
|
||||
# final token
|
||||
pattern = Doc(en_vocab, words=["best"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("BEST", None, pattern)
|
||||
assert len(matcher(doc)) == 1
|
||||
# final phrase
|
||||
pattern = Doc(en_vocab, words=["Now", "best"])
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("NOWBEST", None, pattern)
|
||||
assert len(matcher(doc)) == 1
|
||||
|
||||
|
||||
|
@ -52,68 +31,6 @@ def test_phrase_matcher_contains(en_vocab):
|
|||
assert "TEST2" not in matcher
|
||||
|
||||
|
||||
def test_phrase_matcher_repeated_add(en_vocab):
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
# match ID only gets added once
|
||||
matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
|
||||
matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
|
||||
matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
|
||||
matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
|
||||
doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
|
||||
assert "TEST" in matcher
|
||||
assert "TEST2" not in matcher
|
||||
assert len(matcher(doc)) == 1
|
||||
|
||||
|
||||
def test_phrase_matcher_remove(en_vocab):
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("TEST1", None, Doc(en_vocab, words=["like"]))
|
||||
matcher.add("TEST2", None, Doc(en_vocab, words=["best"]))
|
||||
doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
|
||||
assert "TEST1" in matcher
|
||||
assert "TEST2" in matcher
|
||||
assert "TEST3" not in matcher
|
||||
assert len(matcher(doc)) == 2
|
||||
matcher.remove("TEST1")
|
||||
assert "TEST1" not in matcher
|
||||
assert "TEST2" in matcher
|
||||
assert "TEST3" not in matcher
|
||||
assert len(matcher(doc)) == 1
|
||||
matcher.remove("TEST2")
|
||||
assert "TEST1" not in matcher
|
||||
assert "TEST2" not in matcher
|
||||
assert "TEST3" not in matcher
|
||||
assert len(matcher(doc)) == 0
|
||||
with pytest.raises(KeyError):
|
||||
matcher.remove("TEST3")
|
||||
assert "TEST1" not in matcher
|
||||
assert "TEST2" not in matcher
|
||||
assert "TEST3" not in matcher
|
||||
assert len(matcher(doc)) == 0
|
||||
|
||||
|
||||
def test_phrase_matcher_overlapping_with_remove(en_vocab):
|
||||
matcher = PhraseMatcher(en_vocab)
|
||||
matcher.add("TEST", None, Doc(en_vocab, words=["like"]))
|
||||
# TEST2 is added alongside TEST
|
||||
matcher.add("TEST2", None, Doc(en_vocab, words=["like"]))
|
||||
doc = Doc(en_vocab, words=["I", "like", "Google", "Now", "best"])
|
||||
assert "TEST" in matcher
|
||||
assert len(matcher) == 2
|
||||
assert len(matcher(doc)) == 2
|
||||
# removing TEST does not remove the entry for TEST2
|
||||
matcher.remove("TEST")
|
||||
assert "TEST" not in matcher
|
||||
assert len(matcher) == 1
|
||||
assert len(matcher(doc)) == 1
|
||||
assert matcher(doc)[0][0] == en_vocab.strings["TEST2"]
|
||||
# removing TEST2 removes all
|
||||
matcher.remove("TEST2")
|
||||
assert "TEST2" not in matcher
|
||||
assert len(matcher) == 0
|
||||
assert len(matcher(doc)) == 0
|
||||
|
||||
|
||||
def test_phrase_matcher_string_attrs(en_vocab):
|
||||
words1 = ["I", "like", "cats"]
|
||||
pos1 = ["PRON", "VERB", "NOUN"]
|
||||
|
|
|
@ -1,48 +0,0 @@
|
|||
# coding: utf-8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import pytest
|
||||
from spacy.morphology import Morphology
|
||||
from spacy.strings import StringStore, get_string_id
|
||||
from spacy.lemmatizer import Lemmatizer
|
||||
|
||||
|
||||
@pytest.fixture
|
||||
def morphology():
|
||||
return Morphology(StringStore(), {}, Lemmatizer())
|
||||
|
||||
|
||||
def test_init(morphology):
|
||||
pass
|
||||
|
||||
|
||||
def test_add_morphology_with_string_names(morphology):
|
||||
morphology.add({"Case_gen", "Number_sing"})
|
||||
|
||||
|
||||
def test_add_morphology_with_int_ids(morphology):
|
||||
morphology.add({get_string_id("Case_gen"), get_string_id("Number_sing")})
|
||||
|
||||
|
||||
def test_add_morphology_with_mix_strings_and_ints(morphology):
|
||||
morphology.add({get_string_id("PunctSide_ini"), "VerbType_aux"})
|
||||
|
||||
|
||||
def test_morphology_tags_hash_distinctly(morphology):
|
||||
tag1 = morphology.add({"PunctSide_ini", "VerbType_aux"})
|
||||
tag2 = morphology.add({"Case_gen", "Number_sing"})
|
||||
assert tag1 != tag2
|
||||
|
||||
|
||||
def test_morphology_tags_hash_independent_of_order(morphology):
|
||||
tag1 = morphology.add({"Case_gen", "Number_sing"})
|
||||
tag2 = morphology.add({"Number_sing", "Case_gen"})
|
||||
assert tag1 == tag2
|
||||
|
||||
|
||||
def test_update_morphology_tag(morphology):
|
||||
tag1 = morphology.add({"Case_gen"})
|
||||
tag2 = morphology.update(tag1, {"Number_sing"})
|
||||
assert tag1 != tag2
|
||||
tag3 = morphology.add({"Number_sing", "Case_gen"})
|
||||
assert tag2 == tag3
|
|
@ -2,9 +2,7 @@
|
|||
from __future__ import unicode_literals
|
||||
|
||||
import pytest
|
||||
from spacy.lang.en import English
|
||||
|
||||
from spacy.pipeline import EntityRecognizer, EntityRuler
|
||||
from spacy.pipeline import EntityRecognizer
|
||||
from spacy.vocab import Vocab
|
||||
from spacy.syntax.ner import BiluoPushDown
|
||||
from spacy.gold import GoldParse
|
||||
|
@ -82,190 +80,14 @@ def test_get_oracle_moves_negative_O(tsys, vocab):
|
|||
assert names
|
||||
|
||||
|
||||
def test_oracle_moves_missing_B(en_vocab):
|
||||
words = ["B", "52", "Bomber"]
|
||||
biluo_tags = [None, None, "L-PRODUCT"]
|
||||
|
||||
doc = Doc(en_vocab, words=words)
|
||||
gold = GoldParse(doc, words=words, entities=biluo_tags)
|
||||
|
||||
moves = BiluoPushDown(en_vocab.strings)
|
||||
move_types = ("M", "B", "I", "L", "U", "O")
|
||||
for tag in biluo_tags:
|
||||
if tag is None:
|
||||
continue
|
||||
elif tag == "O":
|
||||
moves.add_action(move_types.index("O"), "")
|
||||
else:
|
||||
action, label = tag.split("-")
|
||||
moves.add_action(move_types.index("B"), label)
|
||||
moves.add_action(move_types.index("I"), label)
|
||||
moves.add_action(move_types.index("L"), label)
|
||||
moves.add_action(move_types.index("U"), label)
|
||||
moves.preprocess_gold(gold)
|
||||
seq = moves.get_oracle_sequence(doc, gold)
|
||||
|
||||
|
||||
def test_oracle_moves_whitespace(en_vocab):
|
||||
words = ["production", "\n", "of", "Northrop", "\n", "Corp.", "\n", "'s", "radar"]
|
||||
biluo_tags = ["O", "O", "O", "B-ORG", None, "I-ORG", "L-ORG", "O", "O"]
|
||||
|
||||
doc = Doc(en_vocab, words=words)
|
||||
gold = GoldParse(doc, words=words, entities=biluo_tags)
|
||||
|
||||
moves = BiluoPushDown(en_vocab.strings)
|
||||
move_types = ("M", "B", "I", "L", "U", "O")
|
||||
for tag in biluo_tags:
|
||||
if tag is None:
|
||||
continue
|
||||
elif tag == "O":
|
||||
moves.add_action(move_types.index("O"), "")
|
||||
else:
|
||||
action, label = tag.split("-")
|
||||
moves.add_action(move_types.index(action), label)
|
||||
moves.preprocess_gold(gold)
|
||||
moves.get_oracle_sequence(doc, gold)
|
||||
|
||||
|
||||
def test_accept_blocked_token():
|
||||
"""Test succesful blocking of tokens to be in an entity."""
|
||||
# 1. test normal behaviour
|
||||
nlp1 = English()
|
||||
doc1 = nlp1("I live in New York")
|
||||
ner1 = EntityRecognizer(doc1.vocab)
|
||||
assert [token.ent_iob_ for token in doc1] == ["", "", "", "", ""]
|
||||
assert [token.ent_type_ for token in doc1] == ["", "", "", "", ""]
|
||||
|
||||
# Add the OUT action
|
||||
ner1.moves.add_action(5, "")
|
||||
ner1.add_label("GPE")
|
||||
# Get into the state just before "New"
|
||||
state1 = ner1.moves.init_batch([doc1])[0]
|
||||
ner1.moves.apply_transition(state1, "O")
|
||||
ner1.moves.apply_transition(state1, "O")
|
||||
ner1.moves.apply_transition(state1, "O")
|
||||
# Check that B-GPE is valid.
|
||||
assert ner1.moves.is_valid(state1, "B-GPE")
|
||||
|
||||
# 2. test blocking behaviour
|
||||
nlp2 = English()
|
||||
doc2 = nlp2("I live in New York")
|
||||
ner2 = EntityRecognizer(doc2.vocab)
|
||||
|
||||
# set "New York" to a blocked entity
|
||||
doc2.ents = [(0, 3, 5)]
|
||||
assert [token.ent_iob_ for token in doc2] == ["", "", "", "B", "B"]
|
||||
assert [token.ent_type_ for token in doc2] == ["", "", "", "", ""]
|
||||
|
||||
# Check that B-GPE is now invalid.
|
||||
ner2.moves.add_action(4, "")
|
||||
ner2.moves.add_action(5, "")
|
||||
ner2.add_label("GPE")
|
||||
state2 = ner2.moves.init_batch([doc2])[0]
|
||||
ner2.moves.apply_transition(state2, "O")
|
||||
ner2.moves.apply_transition(state2, "O")
|
||||
ner2.moves.apply_transition(state2, "O")
|
||||
# we can only use U- for "New"
|
||||
assert not ner2.moves.is_valid(state2, "B-GPE")
|
||||
assert ner2.moves.is_valid(state2, "U-")
|
||||
ner2.moves.apply_transition(state2, "U-")
|
||||
# we can only use U- for "York"
|
||||
assert not ner2.moves.is_valid(state2, "B-GPE")
|
||||
assert ner2.moves.is_valid(state2, "U-")
|
||||
|
||||
|
||||
def test_overwrite_token():
|
||||
nlp = English()
|
||||
ner1 = nlp.create_pipe("ner")
|
||||
nlp.add_pipe(ner1, name="ner")
|
||||
nlp.begin_training()
|
||||
|
||||
# The untrained NER will predict O for each token
|
||||
doc = nlp("I live in New York")
|
||||
assert [token.ent_iob_ for token in doc] == ["O", "O", "O", "O", "O"]
|
||||
assert [token.ent_type_ for token in doc] == ["", "", "", "", ""]
|
||||
|
||||
# Check that a new ner can overwrite O
|
||||
ner2 = EntityRecognizer(doc.vocab)
|
||||
ner2.moves.add_action(5, "")
|
||||
ner2.add_label("GPE")
|
||||
state = ner2.moves.init_batch([doc])[0]
|
||||
assert ner2.moves.is_valid(state, "B-GPE")
|
||||
assert ner2.moves.is_valid(state, "U-GPE")
|
||||
ner2.moves.apply_transition(state, "B-GPE")
|
||||
assert ner2.moves.is_valid(state, "I-GPE")
|
||||
assert ner2.moves.is_valid(state, "L-GPE")
|
||||
|
||||
|
||||
def test_ruler_before_ner():
|
||||
""" Test that an NER works after an entity_ruler: the second can add annotations """
|
||||
nlp = English()
|
||||
|
||||
# 1 : Entity Ruler - should set "this" to B and everything else to empty
|
||||
ruler = EntityRuler(nlp)
|
||||
patterns = [{"label": "THING", "pattern": "This"}]
|
||||
ruler.add_patterns(patterns)
|
||||
nlp.add_pipe(ruler)
|
||||
|
||||
# 2: untrained NER - should set everything else to O
|
||||
untrained_ner = nlp.create_pipe("ner")
|
||||
untrained_ner.add_label("MY_LABEL")
|
||||
nlp.add_pipe(untrained_ner)
|
||||
nlp.begin_training()
|
||||
|
||||
doc = nlp("This is Antti Korhonen speaking in Finland")
|
||||
expected_iobs = ["B", "O", "O", "O", "O", "O", "O"]
|
||||
expected_types = ["THING", "", "", "", "", "", ""]
|
||||
assert [token.ent_iob_ for token in doc] == expected_iobs
|
||||
assert [token.ent_type_ for token in doc] == expected_types
|
||||
|
||||
|
||||
def test_ner_before_ruler():
|
||||
""" Test that an entity_ruler works after an NER: the second can overwrite O annotations """
|
||||
nlp = English()
|
||||
|
||||
# 1: untrained NER - should set everything to O
|
||||
untrained_ner = nlp.create_pipe("ner")
|
||||
untrained_ner.add_label("MY_LABEL")
|
||||
nlp.add_pipe(untrained_ner, name="uner")
|
||||
nlp.begin_training()
|
||||
|
||||
# 2 : Entity Ruler - should set "this" to B and keep everything else O
|
||||
ruler = EntityRuler(nlp)
|
||||
patterns = [{"label": "THING", "pattern": "This"}]
|
||||
ruler.add_patterns(patterns)
|
||||
nlp.add_pipe(ruler)
|
||||
|
||||
doc = nlp("This is Antti Korhonen speaking in Finland")
|
||||
expected_iobs = ["B", "O", "O", "O", "O", "O", "O"]
|
||||
expected_types = ["THING", "", "", "", "", "", ""]
|
||||
assert [token.ent_iob_ for token in doc] == expected_iobs
|
||||
assert [token.ent_type_ for token in doc] == expected_types
|
||||
|
||||
|
||||
def test_block_ner():
|
||||
""" Test functionality for blocking tokens so they can't be in a named entity """
|
||||
# block "Antti L Korhonen" from being a named entity
|
||||
nlp = English()
|
||||
nlp.add_pipe(BlockerComponent1(2, 5))
|
||||
untrained_ner = nlp.create_pipe("ner")
|
||||
untrained_ner.add_label("MY_LABEL")
|
||||
nlp.add_pipe(untrained_ner, name="uner")
|
||||
nlp.begin_training()
|
||||
doc = nlp("This is Antti L Korhonen speaking in Finland")
|
||||
expected_iobs = ["O", "O", "B", "B", "B", "O", "O", "O"]
|
||||
expected_types = ["", "", "", "", "", "", "", ""]
|
||||
assert [token.ent_iob_ for token in doc] == expected_iobs
|
||||
assert [token.ent_type_ for token in doc] == expected_types
|
||||
|
||||
|
||||
class BlockerComponent1(object):
|
||||
name = "my_blocker"
|
||||
|
||||
def __init__(self, start, end):
|
||||
self.start = start
|
||||
self.end = end
|
||||
|
||||
def __call__(self, doc):
|
||||
doc.ents = [(0, self.start, self.end)]
|
||||
return doc
|
||||
def test_doc_add_entities_set_ents_iob(en_vocab):
|
||||
doc = Doc(en_vocab, words=["This", "is", "a", "lion"])
|
||||
ner = EntityRecognizer(en_vocab)
|
||||
ner.begin_training([])
|
||||
ner(doc)
|
||||
assert len(list(doc.ents)) == 0
|
||||
assert [w.ent_iob_ for w in doc] == (["O"] * len(doc))
|
||||
doc.ents = [(doc.vocab.strings["ANIMAL"], 3, 4)]
|
||||
assert [w.ent_iob_ for w in doc] == ["", "", "", "B"]
|
||||
doc.ents = [(doc.vocab.strings["WORD"], 0, 2)]
|
||||
assert [w.ent_iob_ for w in doc] == ["B", "I", "", ""]
|
||||
|
|
|
@ -426,7 +426,7 @@ def test_issue957(en_tokenizer):
|
|||
def test_issue999(train_data):
|
||||
"""Test that adding entities and resuming training works passably OK.
|
||||
There are two issues here:
|
||||
1) We have to read labels. This isn't very nice.
|
||||
1) We have to readd labels. This isn't very nice.
|
||||
2) There's no way to set the learning rate for the weight update, so we
|
||||
end up out-of-scale, causing it to learn too fast.
|
||||
"""
|
||||
|
|
|
@ -187,7 +187,7 @@ def test_issue1799():
|
|||
|
||||
def test_issue1807():
|
||||
"""Test vocab.set_vector also adds the word to the vocab."""
|
||||
vocab = Vocab(vectors_name="test_issue1807")
|
||||
vocab = Vocab()
|
||||
assert "hello" not in vocab
|
||||
vocab.set_vector("hello", numpy.ones((50,), dtype="f"))
|
||||
assert "hello" in vocab
|
||||
|
|
|
@ -184,7 +184,7 @@ def test_issue2833(en_vocab):
|
|||
def test_issue2871():
|
||||
"""Test that vectors recover the correct key for spaCy reserved words."""
|
||||
words = ["dog", "cat", "SUFFIX"]
|
||||
vocab = Vocab(vectors_name="test_issue2871")
|
||||
vocab = Vocab()
|
||||
vocab.vectors.resize(shape=(3, 10))
|
||||
vector_data = numpy.zeros((3, 10), dtype="f")
|
||||
for word in words:
|
||||
|
|
|
@ -30,20 +30,20 @@ def test_issue3002():
|
|||
def test_issue3009(en_vocab):
|
||||
"""Test problem with matcher quantifiers"""
|
||||
patterns = [
|
||||
[{"LEMMA": "have"}, {"LOWER": "to"}, {"LOWER": "do"}, {"TAG": "IN"}],
|
||||
[{"LEMMA": "have"}, {"LOWER": "to"}, {"LOWER": "do"}, {"POS": "ADP"}],
|
||||
[
|
||||
{"LEMMA": "have"},
|
||||
{"IS_ASCII": True, "IS_PUNCT": False, "OP": "*"},
|
||||
{"LOWER": "to"},
|
||||
{"LOWER": "do"},
|
||||
{"TAG": "IN"},
|
||||
{"POS": "ADP"},
|
||||
],
|
||||
[
|
||||
{"LEMMA": "have"},
|
||||
{"IS_ASCII": True, "IS_PUNCT": False, "OP": "?"},
|
||||
{"LOWER": "to"},
|
||||
{"LOWER": "do"},
|
||||
{"TAG": "IN"},
|
||||
{"POS": "ADP"},
|
||||
],
|
||||
]
|
||||
words = ["also", "has", "to", "do", "with"]
|
||||
|
|
|
@ -1,82 +0,0 @@
|
|||
# coding: utf8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import spacy
|
||||
from spacy.pipeline import EntityRecognizer, EntityRuler
|
||||
from spacy.lang.en import English
|
||||
from spacy.tokens import Span
|
||||
from spacy.util import ensure_path
|
||||
|
||||
from ..util import make_tempdir
|
||||
|
||||
|
||||
def test_issue4042():
|
||||
"""Test that serialization of an EntityRuler before NER works fine."""
|
||||
nlp = English()
|
||||
|
||||
# add ner pipe
|
||||
ner = nlp.create_pipe("ner")
|
||||
ner.add_label("SOME_LABEL")
|
||||
nlp.add_pipe(ner)
|
||||
nlp.begin_training()
|
||||
|
||||
# Add entity ruler
|
||||
ruler = EntityRuler(nlp)
|
||||
patterns = [
|
||||
{"label": "MY_ORG", "pattern": "Apple"},
|
||||
{"label": "MY_GPE", "pattern": [{"lower": "san"}, {"lower": "francisco"}]},
|
||||
]
|
||||
ruler.add_patterns(patterns)
|
||||
nlp.add_pipe(ruler, before="ner") # works fine with "after"
|
||||
doc1 = nlp("What do you think about Apple ?")
|
||||
assert doc1.ents[0].label_ == "MY_ORG"
|
||||
|
||||
with make_tempdir() as d:
|
||||
output_dir = ensure_path(d)
|
||||
if not output_dir.exists():
|
||||
output_dir.mkdir()
|
||||
nlp.to_disk(output_dir)
|
||||
|
||||
nlp2 = spacy.load(output_dir)
|
||||
doc2 = nlp2("What do you think about Apple ?")
|
||||
assert doc2.ents[0].label_ == "MY_ORG"
|
||||
|
||||
|
||||
def test_issue4042_bug2():
|
||||
"""
|
||||
Test that serialization of an NER works fine when new labels were added.
|
||||
This is the second bug of two bugs underlying the issue 4042.
|
||||
"""
|
||||
nlp1 = English()
|
||||
vocab = nlp1.vocab
|
||||
|
||||
# add ner pipe
|
||||
ner1 = nlp1.create_pipe("ner")
|
||||
ner1.add_label("SOME_LABEL")
|
||||
nlp1.add_pipe(ner1)
|
||||
nlp1.begin_training()
|
||||
|
||||
# add a new label to the doc
|
||||
doc1 = nlp1("What do you think about Apple ?")
|
||||
assert len(ner1.labels) == 1
|
||||
assert "SOME_LABEL" in ner1.labels
|
||||
apple_ent = Span(doc1, 5, 6, label="MY_ORG")
|
||||
doc1.ents = list(doc1.ents) + [apple_ent]
|
||||
|
||||
# reapply the NER - at this point it should resize itself
|
||||
ner1(doc1)
|
||||
assert len(ner1.labels) == 2
|
||||
assert "SOME_LABEL" in ner1.labels
|
||||
assert "MY_ORG" in ner1.labels
|
||||
|
||||
with make_tempdir() as d:
|
||||
# assert IO goes fine
|
||||
output_dir = ensure_path(d)
|
||||
if not output_dir.exists():
|
||||
output_dir.mkdir()
|
||||
ner1.to_disk(output_dir)
|
||||
|
||||
nlp2 = English(vocab)
|
||||
ner2 = EntityRecognizer(vocab)
|
||||
ner2.from_disk(output_dir)
|
||||
assert len(ner2.labels) == 2
|
|
@ -2,12 +2,12 @@
|
|||
from __future__ import unicode_literals
|
||||
|
||||
from spacy.vocab import Vocab
|
||||
|
||||
import spacy
|
||||
from spacy.lang.en import English
|
||||
from spacy.tests.util import make_tempdir
|
||||
from spacy.util import ensure_path
|
||||
|
||||
from ..util import make_tempdir
|
||||
|
||||
|
||||
def test_issue4054(en_vocab):
|
||||
"""Test that a new blank model can be made with a vocab from file,
|
||||
|
|
|
@ -1,42 +0,0 @@
|
|||
# coding: utf8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
import pytest
|
||||
|
||||
import spacy
|
||||
|
||||
from spacy.lang.en import English
|
||||
from spacy.pipeline import EntityRuler
|
||||
from spacy.tokens import Span
|
||||
|
||||
|
||||
def test_issue4267():
|
||||
""" Test that running an entity_ruler after ner gives consistent results"""
|
||||
nlp = English()
|
||||
ner = nlp.create_pipe("ner")
|
||||
ner.add_label("PEOPLE")
|
||||
nlp.add_pipe(ner)
|
||||
nlp.begin_training()
|
||||
|
||||
assert "ner" in nlp.pipe_names
|
||||
|
||||
# assert that we have correct IOB annotations
|
||||
doc1 = nlp("hi")
|
||||
assert doc1.is_nered
|
||||
for token in doc1:
|
||||
assert token.ent_iob == 2
|
||||
|
||||
# add entity ruler and run again
|
||||
ruler = EntityRuler(nlp)
|
||||
patterns = [{"label": "SOFTWARE", "pattern": "spacy"}]
|
||||
|
||||
ruler.add_patterns(patterns)
|
||||
nlp.add_pipe(ruler)
|
||||
assert "entity_ruler" in nlp.pipe_names
|
||||
assert "ner" in nlp.pipe_names
|
||||
|
||||
# assert that we still have correct IOB annotations
|
||||
doc2 = nlp("hi")
|
||||
assert doc2.is_nered
|
||||
for token in doc2:
|
||||
assert token.ent_iob == 2
|
Some files were not shown because too many files have changed in this diff Show More
Loading…
Reference in New Issue
Block a user