Mirror of https://github.com/explosion/spaCy.git (synced 2025-02-03 13:14:11 +03:00)

Commit 35977bdbb9: Update better-parser branch with develop
10  .github/CONTRIBUTOR_AGREEMENT.md  (vendored)
@@ -87,8 +87,8 @@ U.S. Federal law. Any choice of law rules will not apply.
 7. Please place an “x” on one of the applicable statement below. Please do NOT
    mark both statements:

-    * [x] I am signing on behalf of myself as an individual and no other person
-    or entity, including my employer, has or will have rights with respect my
+    * [ ] I am signing on behalf of myself as an individual and no other person
+    or entity, including my employer, has or will have rights with respect to my
     contributions.

     * [ ] I am signing on behalf of my employer or a legal entity and I have the
@@ -98,9 +98,9 @@ mark both statements:

 | Field                          | Entry                |
 |------------------------------- | -------------------- |
-| Name                           | Shuvanon Razik       |
+| Name                           |                      |
 | Company name (if applicable)   |                      |
 | Title or role (if applicable)  |                      |
-| Date                           | 3/12/2017            |
-| GitHub username                | shuvanon             |
+| Date                           |                      |
+| GitHub username                |                      |
 | Website (optional)             |                      |
31  .github/PULL_REQUEST_TEMPLATE.md  (vendored)

@@ -1,20 +1,19 @@
-<!--- Provide a general summary of your changes in the Title -->
+<!--- Provide a general summary of your changes in the title. -->

 ## Description
-<!--- Use this section to describe your changes and how they're affecting the code. -->
-<!-- If your changes required testing, include information about the testing environment and the tests you ran. -->
+<!--- Use this section to describe your changes. If your changes required
+testing, include information about the testing environment and the tests you
+ran. If your test fixes a bug reported in an issue, don't forget to include the
+issue number. If your PR is still a work in progress, that's totally fine – just
+include a note to let us know. -->

+### Types of change
+<!-- What type of change does your PR cover? Is it a bug fix, an enhancement
+or new feature, or a change to the documentation? -->

-## Types of changes
-<!--- What types of changes does your code introduce? Put an `x` in all applicable boxes.: -->
-- [ ] **Bug fix** (non-breaking change fixing an issue)
-- [ ] **New feature** (non-breaking change adding functionality to spaCy)
-- [ ] **Breaking change** (fix or feature causing change to spaCy's existing functionality)
-- [ ] **Documentation** (addition to documentation of spaCy)

-## Checklist:
-<!--- Go over all the following points, and put an `x` in all applicable boxes.: -->
-- [ ] My change requires a change to spaCy's documentation.
-- [ ] I have updated the documentation accordingly.
-- [ ] I have added tests to cover my changes.
-- [ ] All new and existing tests passed.
+## Checklist
+<!--- Before you submit the PR, go over this checklist and make sure you can
+tick off all the boxes. [] -> [x] -->
+- [ ] I have submitted the spaCy Contributor Agreement.
+- [ ] I ran the tests, and all new and existing tests passed.
+- [ ] My changes don't require a change to the documentation, or if they do, I've added all required information.
106  .github/contributors/demfier.md  (vendored, new file)

@@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.
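As a minimal illustrative sketch of the filing convention described above (not
part of the agreement file itself; the template path matches this repository,
the username is a placeholder, and it assumes you run from the repo root):

```python
from pathlib import Path

# Convention from the paragraph above: the signed agreement lives at
# .github/contributors/<github-username>.md in the repository.
def contributor_agreement_path(github_username: str, repo_root: str = ".") -> Path:
    return Path(repo_root) / ".github" / "contributors" / f"{github_username}.md"

# Hypothetical user "example_user": copy the template, then fill in the
# Contributor Details table by hand and commit the file with the first PR.
template = Path(".github/CONTRIBUTOR_AGREEMENT.md").read_text(encoding="utf-8")
target = contributor_agreement_path("example_user")
target.parent.mkdir(parents=True, exist_ok=True)
target.write_text(template, encoding="utf-8")
print(f"signed agreement file: {target}")
```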
Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

    * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

    * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

    * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

    * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

    * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
to any third party, you hereby grant to us a perpetual, irrevocable,
non-exclusive, worldwide, no-charge, royalty-free license to:

    * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

    * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
contribution. The rights that you grant to us under these terms are effective
on the date you first submitted a contribution to us, even if your submission
took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

    * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

    * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

    * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” in one of the applicable statements below. Please do NOT
mark both statements:

    * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

    * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Gaurav Sahu          |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 2017-10-18           |
| GitHub username                | demfier              |
| Website (optional)             |                      |
106  .github/contributors/honnibal.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed on behalf of an employer or legal entity.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Matthew Honnibal     |
| Company name (if applicable)   | Explosion AI         |
| Title or role (if applicable)  | Founder              |
| Date                           | 2017-10-18           |
| GitHub username                | honnibal             |
| Website (optional)             | https://explosion.ai |
106  .github/contributors/ines.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed on behalf of an employer or legal entity.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Ines Montani         |
| Company name (if applicable)   | Explosion AI         |
| Title or role (if applicable)  | Founder              |
| Date                           | 2017/10/18           |
| GitHub username                | ines                 |
| Website (optional)             | https://explosion.ai |
106  .github/contributors/jerbob92.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Jeroen Bobbeldijk    |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 22-10-2017           |
| GitHub username                | jerbob92             |
| Website (optional)             |                      |
106  .github/contributors/johnhaley81.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | John Haley           |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 19/10/2017           |
| GitHub username                | johnhaley81          |
| Website (optional)             |                      |
106  .github/contributors/mdcclv.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                  |
|------------------------------- | ---------------------- |
| Name                           | Orion Montoya          |
| Company name (if applicable)   |                        |
| Title or role (if applicable)  |                        |
| Date                           | 04-10-2017             |
| GitHub username                | mdcclv                 |
| Website (optional)             | http://www.mdcclv.com/ |
106  .github/contributors/polm.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                 |
|------------------------------- | --------------------- |
| Name                           | Paul McCann           |
| Company name (if applicable)   |                       |
| Title or role (if applicable)  |                       |
| Date                           | 2017-10-14            |
| GitHub username                | polm                  |
| Website (optional)             | http://dampfkraft.com |
106  .github/contributors/ramananbalakrishnan.md  (vendored, new file)

@@ -0,0 +1,106 @@

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Ramanan Balakrishnan |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 2017-10-19           |
| GitHub username                | ramananbalakrishnan  |
| Website (optional)             |                      |
108  .github/contributors/shuvanon.md  (vendored, new file)

@@ -0,0 +1,108 @@

<!-- This agreement was mistakenly submitted as an update to the
CONTRIBUTOR_AGREEMENT.md template. Commit: 8a2d22222dec5cf910df5a378cbcd9ea2ab53ec4.
It was therefore moved over manually. -->

(Standard spaCy contributor agreement text, identical to demfier.md above;
signed as an individual.)

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Shuvanon Razik       |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 3/12/2017            |
| GitHub username                | shuvanon             |
| Website (optional)             |                      |
106
.github/contributors/yuukos.md
vendored
Normal file
106
.github/contributors/yuukos.md
vendored
Normal file
|
@ -0,0 +1,106 @@
# spaCy contributor agreement

This spaCy Contributor Agreement (**"SCA"**) is based on the
[Oracle Contributor Agreement](http://www.oracle.com/technetwork/oca-405177.pdf).
The SCA applies to any contribution that you make to any product or project
managed by us (the **"project"**), and sets out the intellectual property rights
you grant to us in the contributed materials. The term **"us"** shall mean
[ExplosionAI UG (haftungsbeschränkt)](https://explosion.ai/legal). The term
**"you"** shall mean the person or entity identified below.

If you agree to be bound by these terms, fill in the information requested
below and include the filled-in version with your first pull request, under the
folder [`.github/contributors/`](/.github/contributors/). The name of the file
should be your GitHub username, with the extension `.md`. For example, the user
example_user would create the file `.github/contributors/example_user.md`.

Read this agreement carefully before signing. These terms and conditions
constitute a binding legal agreement.

## Contributor Agreement

1. The term "contribution" or "contributed materials" means any source code,
object code, patch, tool, sample, graphic, specification, manual,
documentation, or any other material posted or submitted by you to the project.

2. With respect to any worldwide copyrights, or copyright applications and
registrations, in your contribution:

  * you hereby assign to us joint ownership, and to the extent that such
    assignment is or becomes invalid, ineffective or unenforceable, you hereby
    grant to us a perpetual, irrevocable, non-exclusive, worldwide, no-charge,
    royalty-free, unrestricted license to exercise all rights under those
    copyrights. This includes, at our option, the right to sublicense these same
    rights to third parties through multiple levels of sublicensees or other
    licensing arrangements;

  * you agree that each of us can do all things in relation to your
    contribution as if each of us were the sole owners, and if one of us makes
    a derivative work of your contribution, the one who makes the derivative
    work (or has it made) will be the sole owner of that derivative work;

  * you agree that you will not assert any moral rights in your contribution
    against us, our licensees or transferees;

  * you agree that we may register a copyright in your contribution and
    exercise all ownership rights associated with it; and

  * you agree that neither of us has any duty to consult with, obtain the
    consent of, pay or render an accounting to the other for any use or
    distribution of your contribution.

3. With respect to any patents you own, or that you can license without payment
   to any third party, you hereby grant to us a perpetual, irrevocable,
   non-exclusive, worldwide, no-charge, royalty-free license to:

  * make, have made, use, sell, offer to sell, import, and otherwise transfer
    your contribution in whole or in part, alone or in combination with or
    included in any product, work or materials arising out of the project to
    which your contribution was submitted, and

  * at our option, to sublicense these same rights to third parties through
    multiple levels of sublicensees or other licensing arrangements.

4. Except as set out above, you keep all right, title, and interest in your
   contribution. The rights that you grant to us under these terms are effective
   on the date you first submitted a contribution to us, even if your submission
   took place before the date you sign these terms.

5. You covenant, represent, warrant and agree that:

  * each contribution that you submit is and shall be an original work of
    authorship and you can legally grant the rights set out in this SCA;

  * to the best of your knowledge, each contribution will not violate any
    third party's copyrights, trademarks, patents, or other intellectual
    property rights; and

  * each contribution shall be in compliance with U.S. export control laws and
    other applicable export and import laws. You agree to notify us if you
    become aware of any circumstance which would make any of the foregoing
    representations inaccurate in any respect. We may publicly disclose your
    participation in the project, including the fact that you have signed the SCA.

6. This SCA is governed by the laws of the State of California and applicable
   U.S. Federal law. Any choice of law rules will not apply.

7. Please place an “x” on one of the applicable statements below. Please do NOT
   mark both statements:

  * [x] I am signing on behalf of myself as an individual and no other person
    or entity, including my employer, has or will have rights with respect to my
    contributions.

  * [ ] I am signing on behalf of my employer or a legal entity and I have the
    actual authority to contractually bind that entity.

## Contributor Details

| Field                          | Entry                |
|------------------------------- | -------------------- |
| Name                           | Alexey Kim           |
| Company name (if applicable)   |                      |
| Title or role (if applicable)  |                      |
| Date                           | 13-12-2017           |
| GitHub username                | yuukos               |
| Website (optional)             |                      |
@ -3,6 +3,8 @@
This is a list of everyone who has made significant contributions to spaCy, in alphabetical order. Thanks a lot for the great work!

* Adam Bittlingmayer, [@bittlingmayer](https://github.com/bittlingmayer)
* Alexey Kim, [@yuukos](https://github.com/yuukos)
* Alexis Eidelman, [@AlexisEidelman](https://github.com/AlexisEidelman)
* Andreas Grivas, [@andreasgrv](https://github.com/andreasgrv)
* Andrew Poliakov, [@pavlin99th](https://github.com/pavlin99th)
* Aniruddha Adhikary, [@aniruddha-adhikary](https://github.com/aniruddha-adhikary)

@ -16,6 +18,7 @@
* Daniel Vila Suero, [@dvsrepo](https://github.com/dvsrepo)
* Dmytro Sadovnychyi, [@sadovnychyi](https://github.com/sadovnychyi)
* Eric Zhao, [@ericzhao28](https://github.com/ericzhao28)
* Francisco Aranda, [@frascuchon](https://github.com/frascuchon)
* Greg Baker, [@solresol](https://github.com/solresol)
* Grégory Howard, [@Gregory-Howard](https://github.com/Gregory-Howard)
* György Orosz, [@oroszgy](https://github.com/oroszgy)

@ -24,6 +27,9 @@
* Ines Montani, [@ines](https://github.com/ines)
* J Nicolas Schrading, [@NSchrading](https://github.com/NSchrading)
* Janneke van der Zwaan, [@jvdzwaan](https://github.com/jvdzwaan)
* Jim Geovedi, [@geovedi](https://github.com/geovedi)
* Jim Regan, [@jimregan](https://github.com/jimregan)
* Jeffrey Gerard, [@IamJeffG](https://github.com/IamJeffG)
* Jordan Suchow, [@suchow](https://github.com/suchow)
* Josh Reeter, [@jreeter](https://github.com/jreeter)
* Juan Miguel Cejuela, [@juanmirocks](https://github.com/juanmirocks)

@ -38,6 +44,8 @@
* Michael Wallin, [@wallinm1](https://github.com/wallinm1)
* Miguel Almeida, [@mamoit](https://github.com/mamoit)
* Oleg Zd, [@olegzd](https://github.com/olegzd)
* Orion Montoya, [@mdcclv](https://github.com/mdcclv)
* Paul O'Leary McCann, [@polm](https://github.com/polm)
* Pokey Rule, [@pokey](https://github.com/pokey)
* Raphaël Bournhonesque, [@raphael0202](https://github.com/raphael0202)
* Rob van Nieuwpoort, [@RvanNieuwpoort](https://github.com/RvanNieuwpoort)

@ -45,12 +53,18 @@
* Sam Bozek, [@sambozek](https://github.com/sambozek)
* Sasho Savkov, [@savkov](https://github.com/savkov)
* Shuvanon Razik, [@shuvanon](https://github.com/shuvanon)
* Swier, [@swierh](https://github.com/swierh)
* Thomas Tanon, [@Tpt](https://github.com/Tpt)
* Tiago Rodrigues, [@TiagoMRodrigues](https://github.com/TiagoMRodrigues)
* Vimos Tan, [@Vimos](https://github.com/Vimos)
* Vsevolod Solovyov, [@vsolovyov](https://github.com/vsolovyov)
* Wah Loon Keng, [@kengz](https://github.com/kengz)
* Wannaphong Phatthiyaphaibun, [@wannaphongcom](https://github.com/wannaphongcom)
* Willem van Hage, [@wrvhage](https://github.com/wrvhage)
* Wolfgang Seeker, [@wbwseeker](https://github.com/wbwseeker)
* Yam, [@hscspring](https://github.com/hscspring)
* Yanhao Yang, [@YanhaoYang](https://github.com/YanhaoYang)
* Yasuaki Uechi, [@uetchy](https://github.com/uetchy)
* Yu-chun Huang, [@galaxyh](https://github.com/galaxyh)
* Yubing Dong, [@tomtung](https://github.com/tomtung)
* Yuval Pinter, [@yuvalpinter](https://github.com/yuvalpinter)
258
README.rst

@ -1,15 +1,16 @@
spaCy: Industrial-strength NLP
******************************

spaCy is a library for advanced natural language processing in Python and
Cython. spaCy is built on the very latest research, but it isn't researchware.
It was designed from day one to be used in real products. spaCy currently supports
English, German, French and Spanish, as well as tokenization for Italian,
Portuguese, Dutch, Swedish, Finnish, Norwegian, Danish, Hungarian, Polish,
Bengali, Hebrew, Chinese and Japanese. It's commercial open-source software,
released under the MIT license.

spaCy is a library for advanced Natural Language Processing in Python and Cython.
It's built on the very latest research, and was designed from day one to be
used in real products. spaCy comes with
`pre-trained statistical models <https://alpha.spacy.io/models>`_ and word
vectors, and currently supports tokenization for **20+ languages**. It features
the **fastest syntactic parser** in the world, convolutional **neural network models**
for tagging, parsing and **named entity recognition** and easy **deep learning**
integration. It's commercial open-source software, released under the MIT license.

💫 **Version 1.8 out now!** `Read the release notes here. <https://github.com/explosion/spaCy/releases/>`_
💫 **Version 2.0 out now!** `Check out the new features here. <https://alpha.spacy.io/usage/v2>`_

.. image:: https://img.shields.io/travis/explosion/spaCy/master.svg?style=flat-square
    :target: https://travis-ci.org/explosion/spaCy
@ -38,68 +39,72 @@ released under the MIT license.
📖 Documentation
================

=================== ===
`Usage Workflows`_  How to use spaCy and its features.
`API Reference`_    The detailed reference for spaCy's API.
`Troubleshooting`_  Common problems and solutions for beginners.
`Tutorials`_        End-to-end examples, with code you can modify and run.
`Showcase & Demos`_ Demos, libraries and products from the spaCy community.
`Contribute`_       How to contribute to the spaCy project and code base.
=================== ===

=================== ===
`spaCy 101`_        New to spaCy? Here's everything you need to know!
`Usage Guides`_     How to use spaCy and its features.
`New in v2.0`_      New features, backwards incompatibilities and migration guide.
`API Reference`_    The detailed reference for spaCy's API.
`Models`_           Download statistical language models for spaCy.
`Resources`_        Libraries, extensions, demos, books and courses.
`Changelog`_        Changes and version history.
`Contribute`_       How to contribute to the spaCy project and code base.
=================== ===

.. _Usage Workflows: https://spacy.io/docs/usage/
.. _API Reference: https://spacy.io/docs/api/
.. _Troubleshooting: https://spacy.io/docs/usage/troubleshooting
.. _Tutorials: https://spacy.io/docs/usage/tutorials
.. _Showcase & Demos: https://spacy.io/docs/usage/showcase
.. _spaCy 101: https://alpha.spacy.io/usage/spacy-101
.. _New in v2.0: https://alpha.spacy.io/usage/v2#migrating
.. _Usage Guides: https://alpha.spacy.io/usage/
.. _API Reference: https://alpha.spacy.io/api/
.. _Models: https://alpha.spacy.io/models
.. _Resources: https://alpha.spacy.io/usage/resources
.. _Changelog: https://alpha.spacy.io/usage/#changelog
.. _Contribute: https://github.com/explosion/spaCy/blob/master/CONTRIBUTING.md
💬 Where to ask questions
==========================

The spaCy project is maintained by `@honnibal <https://github.com/honnibal>`_
and `@ines <https://github.com/ines>`_. Please understand that we won't be able
to provide individual support via email. We also believe that help is much more
valuable if it's shared publicly, so that more people can benefit from it.

====================== ===
**Bug reports**        `GitHub issue tracker`_
**Usage questions**    `StackOverflow`_, `Gitter chat`_, `Reddit user group`_
**General discussion** `Gitter chat`_, `Reddit user group`_
**Commercial support** contact@explosion.ai
**Bug Reports**        `GitHub Issue Tracker`_
**Usage Questions**    `StackOverflow`_, `Gitter Chat`_, `Reddit User Group`_
**General Discussion** `Gitter Chat`_, `Reddit User Group`_
====================== ===

.. _GitHub issue tracker: https://github.com/explosion/spaCy/issues
.. _GitHub Issue Tracker: https://github.com/explosion/spaCy/issues
.. _StackOverflow: http://stackoverflow.com/questions/tagged/spacy
.. _Gitter chat: https://gitter.im/explosion/spaCy
.. _Reddit user group: https://www.reddit.com/r/spacynlp
.. _Gitter Chat: https://gitter.im/explosion/spaCy
.. _Reddit User Group: https://www.reddit.com/r/spacynlp
Features
========

* Non-destructive **tokenization**
* Syntax-driven sentence segmentation
* Pre-trained **word vectors**
* Part-of-speech tagging
* **Fastest syntactic parser** in the world
* **Named entity** recognition
* Labelled dependency parsing
* Convenient string-to-int mapping
* Export to numpy data arrays
* GIL-free **multi-threading**
* Efficient binary serialization

* Non-destructive **tokenization**
* Support for **20+ languages**
* Pre-trained `statistical models <https://alpha.spacy.io/models>`_ and word vectors
* Easy **deep learning** integration
* Statistical models for **English**, **German**, **French** and **Spanish**
* Part-of-speech tagging
* Labelled dependency parsing
* Syntax-driven sentence segmentation
* Built in **visualizers** for syntax and NER
* Convenient string-to-hash mapping
* Export to numpy data arrays
* Efficient binary serialization
* Easy **model packaging** and deployment
* State-of-the-art speed
* Robust, rigorously evaluated accuracy

See `facts, figures and benchmarks <https://spacy.io/docs/api/>`_.
📖 **For more details, see the** `facts, figures and benchmarks <https://alpha.spacy.io/usage/facts-figures>`_.
Top Performance
---------------
Install spaCy
=============

* Fastest in the world: <50ms per document. No faster system has ever been
  announced.
* Accuracy within 1% of the current state of the art on all tasks performed
  (parsing, named entity recognition, part-of-speech tagging). The only more
  accurate systems are an order of magnitude slower or more.

Supports
--------
For detailed installation instructions, see
the `documentation <https://alpha.spacy.io/usage>`_.

==================== ===
**Operating system** macOS / OS X, Linux, Windows (Cygwin, MinGW, Visual Studio)

@ -110,12 +115,6 @@ Supports
.. _pip: https://pypi.python.org/pypi/spacy
.. _conda: https://anaconda.org/conda-forge/spacy

Install spaCy
=============

Installation requires a working build environment. See notes on Ubuntu,
macOS/OS X and Windows for details.

pip
---

@ -123,7 +122,7 @@ Using pip, spaCy releases are currently only available as source packages.

.. code:: bash

    pip install -U spacy
    pip install spacy

When using pip it is generally recommended to install packages in a ``virtualenv``
to avoid modifying system state:
@ -149,25 +148,41 @@ For the feedstock including the build recipe and configuration,
check out `this repository <https://github.com/conda-forge/spacy-feedstock>`_.
Improvements and pull requests to the recipe and setup are always appreciated.

Updating spaCy
--------------

Some updates to spaCy may require downloading new statistical models. If you're
running spaCy v2.0 or higher, you can use the ``validate`` command to check if
your installed models are compatible and if not, print details on how to update
them:

.. code:: bash

    pip install -U spacy
    spacy validate

If you've trained your own models, keep in mind that your training and runtime
inputs must match. After updating spaCy, we recommend **retraining your models**
with the new version.

📖 **For details on upgrading from spaCy 1.x to spaCy 2.x, see the**
`migration guide <https://alpha.spacy.io/usage/v2#migrating>`_.

Download models
===============

As of v1.7.0, models for spaCy can be installed as **Python packages**.
This means that they're a component of your application, just like any
other module. They're versioned and can be defined as a dependency in your
``requirements.txt``. Models can be installed from a download URL or
a local directory, manually or via pip. Their data can be located anywhere on
your file system. To make a model available to spaCy, all you need to do is
create a "shortcut link", an internal alias that tells spaCy where to find the
data files for a specific model name.
other module. Models can be installed using spaCy's ``download`` command,
or manually by pointing pip to a path or URL.

======================= ===
`spaCy Models`_         Available models, latest releases and direct download.
`Available Models`_     Detailed model descriptions, accuracy figures and benchmarks.
`Models Documentation`_ Detailed usage instructions.
======================= ===

.. _spaCy Models: https://github.com/explosion/spacy-models/releases/
.. _Models Documentation: https://spacy.io/docs/usage/models
.. _Available Models: https://alpha.spacy.io/models
.. _Models Documentation: https://alpha.spacy.io/docs/usage/models

.. code:: bash
@ -175,17 +190,10 @@ data files for a specific model name.
    python -m spacy download en

    # download best-matching version of specific model for your spaCy installation
    python -m spacy download en_core_web_md
    python -m spacy download en_core_web_lg

    # pip install .tar.gz archive from path or URL
    pip install /Users/you/en_core_web_md-1.2.0.tar.gz
    pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-1.2.0/en_core_web_md-1.2.0.tar.gz

    # set up shortcut link to load installed package as "en_default"
    python -m spacy link en_core_web_md en_default

    # set up shortcut link to load local model as "my_amazing_model"
    python -m spacy link /Users/you/data my_amazing_model
    pip install /Users/you/en_core_web_sm-2.0.0.tar.gz

Loading and using models
------------------------
@ -199,24 +207,24 @@ To load a model, use ``spacy.load()`` with the model's shortcut link:
    doc = nlp(u'This is a sentence.')

If you've installed a model via pip, you can also ``import`` it directly and
then call its ``load()`` method with no arguments. This should also work for
older models in previous versions of spaCy.
then call its ``load()`` method:

.. code:: python

    import spacy
    import en_core_web_md
    import en_core_web_sm

    nlp = en_core_web_md.load()
    nlp = en_core_web_sm.load()
    doc = nlp(u'This is a sentence.')

📖 **For more info and examples, check out the** `models documentation <https://spacy.io/docs/usage/models>`_.
📖 **For more info and examples, check out the**
`models documentation <https://alpha.spacy.io/docs/usage/models>`_.

Support for older versions
--------------------------

If you're using an older version (v1.6.0 or below), you can still download and
install the old models from within spaCy using ``python -m spacy.en.download all``
If you're using an older version (``v1.6.0`` or below), you can still download
and install the old models from within spaCy using ``python -m spacy.en.download all``
or ``python -m spacy.de.download all``. The ``.tar.gz`` archives are also
`attached to the v1.6.0 release <https://github.com/explosion/spaCy/tree/v1.6.0>`_.
To download and install the models manually, unpack the archive, drop the
@ -248,11 +256,13 @@ details.
    pip install -r requirements.txt
    pip install -e .

Compared to regular install via pip `requirements.txt <requirements.txt>`_
Compared to regular install via pip, `requirements.txt <requirements.txt>`_
additionally installs developer dependencies such as Cython.

Instead of the above verbose commands, you can also use the following
`Fabric <http://www.fabfile.org/>`_ commands:
`Fabric <http://www.fabfile.org/>`_ commands. All commands assume that your
``virtualenv`` is located in a directory ``.env``. If you're using a different
directory, you can change it via the environment variable ``VENV_DIR``, for
example ``VENV_DIR=".custom-env" fab clean make``.

============= ===
``fab env``   Create ``virtualenv`` and delete previous one, if it exists.

@ -261,14 +271,6 @@ Instead of the above verbose commands, you can also use the following
``fab test``  Run basic tests, aborting after first failure.
============= ===

All commands assume that your ``virtualenv`` is located in a directory ``.env``.
If you're using a different directory, you can change it via the environment
variable ``VENV_DIR``, for example:

.. code:: bash

    VENV_DIR=".custom-env" fab clean make

Ubuntu
------
@ -310,76 +312,4 @@ and ``--model`` are optional and enable additional tests:
    # make sure you are using recent pytest version
    python -m pip install -U pytest

    python -m pytest <spacy-directory>

🛠 Changelog
============

=========== ============== ===========
Version     Date           Description
=========== ============== ===========
`v1.8.2`_   ``2017-04-26`` French model and small improvements
`v1.8.1`_   ``2017-04-23`` Saving, loading and training bug fixes
`v1.8.0`_   ``2017-04-16`` Better NER training, saving and loading
`v1.7.5`_   ``2017-04-07`` Bug fixes and new CLI commands
`v1.7.3`_   ``2017-03-26`` Alpha support for Hebrew, new CLI commands and bug fixes
`v1.7.2`_   ``2017-03-20`` Small fixes to beam parser and model linking
`v1.7.1`_   ``2017-03-19`` Fix data download for system installation
`v1.7.0`_   ``2017-03-18`` New 50 MB model, CLI, better downloads and lots of bug fixes
`v1.6.0`_   ``2017-01-16`` Improvements to tokenizer and tests
`v1.5.0`_   ``2016-12-27`` Alpha support for Swedish and Hungarian
`v1.4.0`_   ``2016-12-18`` Improved language data and alpha Dutch support
`v1.3.0`_   ``2016-12-03`` Improve API consistency
`v1.2.0`_   ``2016-11-04`` Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese
`v1.1.0`_   ``2016-10-23`` Bug fixes and adjustments
`v1.0.0`_   ``2016-10-18`` Support for deep learning workflows and entity-aware rule matcher
`v0.101.0`_ ``2016-05-10`` Fixed German model
`v0.100.7`_ ``2016-05-05`` German support
`v0.100.6`_ ``2016-03-08`` Add support for GloVe vectors
`v0.100.5`_ ``2016-02-07`` Fix incorrect use of header file
`v0.100.4`_ ``2016-02-07`` Fix OSX problem introduced in 0.100.3
`v0.100.3`_ ``2016-02-06`` Multi-threading, faster loading and bugfixes
`v0.100.2`_ ``2016-01-21`` Fix data version lock
`v0.100.1`_ ``2016-01-21`` Fix install for OSX
`v0.100`_   ``2016-01-19`` Revise setup.py, better model downloads, bug fixes
`v0.99`_    ``2015-11-08`` Improve span merging, internal refactoring
`v0.98`_    ``2015-11-03`` Smaller package, bug fixes
`v0.97`_    ``2015-10-23`` Load the StringStore from a json list, instead of a text file
`v0.96`_    ``2015-10-19`` Hotfix to .merge method
`v0.95`_    ``2015-10-18`` Bug fixes
`v0.94`_    ``2015-10-09`` Fix memory and parse errors
`v0.93`_    ``2015-09-22`` Bug fixes to word vectors
=========== ============== ===========

.. _v1.8.2: https://github.com/explosion/spaCy/releases/tag/v1.8.2
.. _v1.8.1: https://github.com/explosion/spaCy/releases/tag/v1.8.1
.. _v1.8.0: https://github.com/explosion/spaCy/releases/tag/v1.8.0
.. _v1.7.5: https://github.com/explosion/spaCy/releases/tag/v1.7.5
.. _v1.7.3: https://github.com/explosion/spaCy/releases/tag/v1.7.3
.. _v1.7.2: https://github.com/explosion/spaCy/releases/tag/v1.7.2
.. _v1.7.1: https://github.com/explosion/spaCy/releases/tag/v1.7.1
.. _v1.7.0: https://github.com/explosion/spaCy/releases/tag/v1.7.0
.. _v1.6.0: https://github.com/explosion/spaCy/releases/tag/v1.6.0
.. _v1.5.0: https://github.com/explosion/spaCy/releases/tag/v1.5.0
.. _v1.4.0: https://github.com/explosion/spaCy/releases/tag/v1.4.0
.. _v1.3.0: https://github.com/explosion/spaCy/releases/tag/v1.3.0
.. _v1.2.0: https://github.com/explosion/spaCy/releases/tag/v1.2.0
.. _v1.1.0: https://github.com/explosion/spaCy/releases/tag/v1.1.0
.. _v1.0.0: https://github.com/explosion/spaCy/releases/tag/v1.0.0
.. _v0.101.0: https://github.com/explosion/spaCy/releases/tag/0.101.0
.. _v0.100.7: https://github.com/explosion/spaCy/releases/tag/0.100.7
.. _v0.100.6: https://github.com/explosion/spaCy/releases/tag/0.100.6
.. _v0.100.5: https://github.com/explosion/spaCy/releases/tag/0.100.5
.. _v0.100.4: https://github.com/explosion/spaCy/releases/tag/0.100.4
.. _v0.100.3: https://github.com/explosion/spaCy/releases/tag/0.100.3
.. _v0.100.2: https://github.com/explosion/spaCy/releases/tag/0.100.2
.. _v0.100.1: https://github.com/explosion/spaCy/releases/tag/0.100.1
.. _v0.100: https://github.com/explosion/spaCy/releases/tag/0.100
.. _v0.99: https://github.com/explosion/spaCy/releases/tag/0.99
.. _v0.98: https://github.com/explosion/spaCy/releases/tag/0.98
.. _v0.97: https://github.com/explosion/spaCy/releases/tag/0.97
.. _v0.96: https://github.com/explosion/spaCy/releases/tag/0.96
.. _v0.95: https://github.com/explosion/spaCy/releases/tag/0.95
.. _v0.94: https://github.com/explosion/spaCy/releases/tag/0.94
.. _v0.93: https://github.com/explosion/spaCy/releases/tag/0.93
@ -56,8 +56,7 @@ def train_ner(nlp, train_data, output_dir):
        losses = {}
        for batch in minibatch(get_gold_parses(nlp.make_doc, train_data), size=3):
            docs, golds = zip(*batch)
            nlp.update(docs, golds, losses=losses, sgd=optimizer, update_shared=True,
                       drop=0.35)
            nlp.update(docs, golds, losses=losses, sgd=optimizer, drop=0.35)
        print(losses)
    if not output_dir:
        return

@ -100,9 +99,10 @@ def main(model_name, output_directory=None):
        )
    ]
    nlp.pipeline.append(TokenVectorEncoder(nlp.vocab))
    nlp.pipeline.append(NeuralEntityRecognizer(nlp.vocab))
    nlp.pipeline[-1].add_label('ANIMAL')
    nlp.add_pipe(TokenVectorEncoder(nlp.vocab))
    ner = NeuralEntityRecognizer(nlp.vocab)
    ner.add_label('ANIMAL')
    nlp.add_pipe(ner)
    train_ner(nlp, train_data, output_directory)

    # Test that the entity is recognized
641
examples/training/training-data.json
Normal file

@ -0,0 +1,641 @@
[
  {
    "id": "wsj_0200",
    "paragraphs": [
      {
        "raw": "In an Oct. 19 review of \"The Misanthrope\" at Chicago's Goodman Theatre (\"Revitalized Classics Take the Stage in Windy City,\" Leisure & Arts), the role of Celimene, played by Kim Cattrall, was mistakenly attributed to Christina Haag. Ms. Haag plays Elianti.",
        "sentences": [
          {
            "tokens": [
              {"head": 44, "dep": "prep", "tag": "IN", "orth": "In", "ner": "O", "id": 0},
              {"head": 3, "dep": "det", "tag": "DT", "orth": "an", "ner": "O", "id": 1},
              {"head": 2, "dep": "nmod", "tag": "NNP", "orth": "Oct.", "ner": "B-DATE", "id": 2},
              {"head": -1, "dep": "nummod", "tag": "CD", "orth": "19", "ner": "L-DATE", "id": 3},
              {"head": -4, "dep": "pobj", "tag": "NN", "orth": "review", "ner": "O", "id": 4},
              {"head": -1, "dep": "prep", "tag": "IN", "orth": "of", "ner": "O", "id": 5},
              {"head": 2, "dep": "punct", "tag": "``", "orth": "``", "ner": "O", "id": 6},
              {"head": 1, "dep": "det", "tag": "DT", "orth": "The", "ner": "B-WORK_OF_ART", "id": 7},
              {"head": -3, "dep": "pobj", "tag": "NN", "orth": "Misanthrope", "ner": "L-WORK_OF_ART", "id": 8},
              {"head": -1, "dep": "punct", "tag": "''", "orth": "''", "ner": "O", "id": 9},
              {"head": -2, "dep": "prep", "tag": "IN", "orth": "at", "ner": "O", "id": 10},
              {"head": 3, "dep": "poss", "tag": "NNP", "orth": "Chicago", "ner": "U-GPE", "id": 11},
              {"head": -1, "dep": "case", "tag": "POS", "orth": "'s", "ner": "O", "id": 12},
              {"head": 1, "dep": "compound", "tag": "NNP", "orth": "Goodman", "ner": "B-FAC", "id": 13},
              {"head": -4, "dep": "pobj", "tag": "NNP", "orth": "Theatre", "ner": "L-FAC", "id": 14},
              {"head": 4, "dep": "punct", "tag": "-LRB-", "orth": "(", "ner": "O", "id": 15},
              {"head": 3, "dep": "punct", "tag": "``", "orth": "``", "ner": "O", "id": 16},
              {"head": 1, "dep": "amod", "tag": "VBN", "orth": "Revitalized", "ner": "B-WORK_OF_ART", "id": 17},
              {"head": 1, "dep": "nsubj", "tag": "NNS", "orth": "Classics", "ner": "I-WORK_OF_ART", "id": 18},
              {"head": -15, "dep": "appos", "tag": "VBP", "orth": "Take", "ner": "I-WORK_OF_ART", "id": 19},
              {"head": 1, "dep": "det", "tag": "DT", "orth": "the", "ner": "I-WORK_OF_ART", "id": 20},
              {"head": -2, "dep": "dobj", "tag": "NN", "orth": "Stage", "ner": "I-WORK_OF_ART", "id": 21},
              {"head": -3, "dep": "prep", "tag": "IN", "orth": "in", "ner": "I-WORK_OF_ART", "id": 22},
              {"head": 1, "dep": "compound", "tag": "NNP", "orth": "Windy", "ner": "I-WORK_OF_ART", "id": 23},
              {"head": -2, "dep": "pobj", "tag": "NNP", "orth": "City", "ner": "L-WORK_OF_ART", "id": 24},
              {"head": -6, "dep": "punct", "tag": ",", "orth": ",", "ner": "O", "id": 25},
              {"head": -7, "dep": "punct", "tag": "''", "orth": "''", "ner": "O", "id": 26},
              {"head": -8, "dep": "npadvmod", "tag": "NN", "orth": "Leisure", "ner": "B-ORG", "id": 27},
              {"head": -1, "dep": "cc", "tag": "CC", "orth": "&", "ner": "I-ORG", "id": 28},
              {"head": -2, "dep": "conj", "tag": "NNS", "orth": "Arts", "ner": "L-ORG", "id": 29},
              {"head": -11, "dep": "punct", "tag": "-RRB-", "orth": ")", "ner": "O", "id": 30},
              {"head": 13, "dep": "punct", "tag": ",", "orth": ",", "ner": "O", "id": 31},
              {"head": 1, "dep": "det", "tag": "DT", "orth": "the", "ner": "O", "id": 32},
              {"head": 11, "dep": "nsubjpass", "tag": "NN", "orth": "role", "ner": "O", "id": 33},
              {"head": -1, "dep": "prep", "tag": "IN", "orth": "of", "ner": "O", "id": 34},
              {"head": -1, "dep": "pobj", "tag": "NNP", "orth": "Celimene", "ner": "U-PERSON", "id": 35},
              {"head": -3, "dep": "punct", "tag": ",", "orth": ",", "ner": "O", "id": 36},
              {"head": -4, "dep": "acl", "tag": "VBN", "orth": "played", "ner": "O", "id": 37},
              {"head": -1, "dep": "agent", "tag": "IN", "orth": "by", "ner": "O", "id": 38},
              {"head": 1, "dep": "compound", "tag": "NNP", "orth": "Kim", "ner": "B-PERSON", "id": 39},
              {"head": -2, "dep": "pobj", "tag": "NNP", "orth": "Cattrall", "ner": "L-PERSON", "id": 40},
              {"head": -8, "dep": "punct", "tag": ",", "orth": ",", "ner": "O", "id": 41},
              {"head": 2, "dep": "auxpass", "tag": "VBD", "orth": "was", "ner": "O", "id": 42},
              {"head": 1, "dep": "advmod", "tag": "RB", "orth": "mistakenly", "ner": "O", "id": 43},
              {"head": 0, "dep": "root", "tag": "VBN", "orth": "attributed", "ner": "O", "id": 44},
              {"head": -1, "dep": "prep", "tag": "IN", "orth": "to", "ner": "O", "id": 45},
              {"head": 1, "dep": "compound", "tag": "NNP", "orth": "Christina", "ner": "B-PERSON", "id": 46},
              {"head": -2, "dep": "pobj", "tag": "NNP", "orth": "Haag", "ner": "L-PERSON", "id": 47},
              {"head": -4, "dep": "punct", "tag": ".", "orth": ".", "ner": "O", "id": 48}
            ],
            "brackets": [
              {"first": 2, "last": 3, "label": "NML"},
              {"first": 1, "last": 4, "label": "NP"},
              {"first": 7, "last": 8, "label": "NP-TTL"},
              {"first": 11, "last": 12, "label": "NP"},
              {"first": 11, "last": 14, "label": "NP"},
              {"first": 10, "last": 14, "label": "PP-LOC"},
              {"first": 6, "last": 14, "label": "NP"},
              {"first": 5, "last": 14, "label": "PP"},
              {"first": 1, "last": 14, "label": "NP"},
              {"first": 17, "last": 18, "label": "NP-SBJ"},
              {"first": 20, "last": 21, "label": "NP"},
              {"first": 23, "last": 24, "label": "NP"},
              {"first": 22, "last": 24, "label": "PP-LOC"},
              {"first": 19, "last": 24, "label": "VP"},
              {"first": 17, "last": 24, "label": "S-HLN"},
              {"first": 27, "last": 29, "label": "NP-TMP"},
              {"first": 15, "last": 30, "label": "NP"},
              {"first": 1, "last": 30, "label": "NP"},
              {"first": 0, "last": 30, "label": "PP-LOC"},
              {"first": 32, "last": 33, "label": "NP"},
              {"first": 35, "last": 35, "label": "NP"},
              {"first": 34, "last": 35, "label": "PP"},
              {"first": 32, "last": 35, "label": "NP"},
              {"first": 39, "last": 40, "label": "NP-LGS"},
              {"first": 38, "last": 40, "label": "PP"},
              {"first": 37, "last": 40, "label": "VP"},
              {"first": 32, "last": 41, "label": "NP-SBJ-2"},
              {"first": 43, "last": 43, "label": "ADVP-MNR"},
              {"first": 46, "last": 47, "label": "NP"},
              {"first": 45, "last": 47, "label": "PP-CLR"},
              {"first": 44, "last": 47, "label": "VP"},
              {"first": 42, "last": 47, "label": "VP"},
              {"first": 0, "last": 48, "label": "S"}
            ]
          },
          {
            "tokens": [
              {"head": 1, "dep": "compound", "tag": "NNP", "orth": "Ms.", "ner": "O", "id": 0},
              {"head": 1, "dep": "nsubj", "tag": "NNP", "orth": "Haag", "ner": "U-PERSON", "id": 1},
              {"head": 0, "dep": "root", "tag": "VBZ", "orth": "plays", "ner": "O", "id": 2},
              {"head": -1, "dep": "dobj", "tag": "NNP", "orth": "Elianti", "ner": "U-PERSON", "id": 3},
              {"head": -2, "dep": "punct", "tag": ".", "orth": ".", "ner": "O", "id": 4}
            ],
            "brackets": [
              {"first": 0, "last": 1, "label": "NP-SBJ"},
              {"first": 3, "last": 3, "label": "NP"},
              {"first": 2, "last": 3, "label": "VP"},
              {"first": 0, "last": 4, "label": "S"}
            ]
          }
        ]
      }
    ]
  }
]
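The annotations above use relative head offsets and BILUO-style NER tags
(``B-``/``I-``/``L-``/``U-``/``O``). As a minimal sketch (not part of the
commit), the nesting documents -> paragraphs -> sentences -> tokens can be
walked like this; the file path is the one added above:

.. code:: python

    import json

    # Load the training corpus added in this commit.
    with open('examples/training/training-data.json') as f:
        corpus = json.load(f)

    for doc in corpus:
        for para in doc['paragraphs']:
            for sent in para['sentences']:
                for token in sent['tokens']:
                    # "head" is a relative offset: the absolute head id is
                    # token id + offset; the root points to itself (offset 0).
                    head_id = token['id'] + token['head']
                    print(token['orth'], token['tag'], token['dep'],
                          head_id, token['ner'])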
40
spacy/_ml.py

@ -462,46 +462,6 @@ def get_token_vectors(tokens_attrs_vectors, drop=0.):
    return vectors, backward


def fine_tune(embedding, combine=None):
    if combine is not None:
        raise NotImplementedError(
            "fine_tune currently only supports addition. Set combine=None")
    def fine_tune_fwd(docs_tokvecs, drop=0.):
        docs, tokvecs = docs_tokvecs

        lengths = model.ops.asarray([len(doc) for doc in docs], dtype='i')

        vecs, bp_vecs = embedding.begin_update(docs, drop=drop)
        flat_tokvecs = embedding.ops.flatten(tokvecs)
        flat_vecs = embedding.ops.flatten(vecs)
        output = embedding.ops.unflatten(
            (model.mix[0] * flat_tokvecs + model.mix[1] * flat_vecs), lengths)

        def fine_tune_bwd(d_output, sgd=None):
            flat_grad = model.ops.flatten(d_output)
            model.d_mix[0] += flat_tokvecs.dot(flat_grad.T).sum()
            model.d_mix[1] += flat_vecs.dot(flat_grad.T).sum()

            bp_vecs([d_o * model.mix[1] for d_o in d_output], sgd=sgd)
            if sgd is not None:
                sgd(model._mem.weights, model._mem.gradient, key=model.id)
            return [d_o * model.mix[0] for d_o in d_output]
        return output, fine_tune_bwd

    def fine_tune_predict(docs_tokvecs):
        docs, tokvecs = docs_tokvecs
        vecs = embedding(docs)
        return [model.mix[0]*tv + model.mix[1]*v
                for tv, v in zip(tokvecs, vecs)]

    model = wrap(fine_tune_fwd, embedding)
    model.mix = model._mem.add((model.id, 'mix'), (2,))
    model.mix.fill(0.5)
    model.d_mix = model._mem.add_gradient((model.id, 'd_mix'), (model.id, 'mix'))
    model.predict = fine_tune_predict
    return model


@layerize
def flatten(seqs, drop=0.):
    if isinstance(seqs[0], numpy.ndarray):
|
|||
# https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
|
||||
|
||||
__title__ = 'spacy-nightly'
|
||||
__version__ = '2.0.0a17'
|
||||
__version__ = '2.0.0a18'
|
||||
__summary__ = 'Industrial-strength Natural Language Processing (NLP) with Python and Cython'
|
||||
__uri__ = 'https://spacy.io'
|
||||
__author__ = 'Explosion AI'
|
||||
|
|
|
@ -101,7 +101,7 @@ def generate_meta():
def generate_pipeline():
    prints("If set to 'True', the default pipeline is used. If set to 'False', "
           "the pipeline will be disabled. Components should be specified as a "
           "comma-separated list of component names, e.g. tensorizer, tagger, "
           "comma-separated list of component names, e.g. tagger, "
           "parser, ner. For more information, see the docs on processing pipelines.",
           title="Enter your model's pipeline components")
    pipeline = util.get_raw_input("Pipeline components", True)
@ -12,11 +12,11 @@ MORPH_RULES = {
    'কি': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Gender': 'Neut', 'PronType': 'Int', 'Case': 'Acc'},
    'সে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Three', 'PronType': 'Prs', 'Case': 'Nom'},
    'কিসে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Gender': 'Neut', 'PronType': 'Int', 'Case': 'Acc'},
    'কাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'PronType': 'Int', 'Case': 'Acc'},
    'তাকে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Three', 'PronType': 'Prs', 'Case': 'Acc'},
    'স্বয়ং': {LEMMA: PRON_LEMMA, 'Reflex': 'Yes', 'PronType': 'Ref'},
    'কোনগুলো': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Gender': 'Neut', 'PronType': 'Int', 'Case': 'Acc'},
    'তুমি': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Nom'},
    'তুই': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Nom'},
    'তাদেরকে': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Three', 'PronType': 'Prs', 'Case': 'Acc'},
    'আমরা': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'One', 'PronType': 'Prs', 'Case': 'Nom'},
    'যিনি': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'PronType': 'Rel', 'Case': 'Nom'},

@ -24,12 +24,15 @@ MORPH_RULES = {
    'কোন': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'PronType': 'Int', 'Case': 'Acc'},
    'কারা': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'PronType': 'Int', 'Case': 'Acc'},
    'তোমাকে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Acc'},
    'তোকে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Acc'},
    'খোদ': {LEMMA: PRON_LEMMA, 'Reflex': 'Yes', 'PronType': 'Ref'},
    'কে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'PronType': 'Int', 'Case': 'Acc'},
    'যারা': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'PronType': 'Rel', 'Case': 'Nom'},
    'যে': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'PronType': 'Rel', 'Case': 'Nom'},
    'তোমরা': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Nom'},
    'তোরা': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Nom'},
    'তোমাদেরকে': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Acc'},
    'তোদেরকে': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Case': 'Acc'},
    'আপন': {LEMMA: PRON_LEMMA, 'Reflex': 'Yes', 'PronType': 'Ref'},
    'এ': {LEMMA: PRON_LEMMA, 'PronType': 'Dem'},
    'নিজ': {LEMMA: PRON_LEMMA, 'Reflex': 'Yes', 'PronType': 'Ref'},

@ -42,6 +45,10 @@ MORPH_RULES = {

    'আমার': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'One', 'PronType': 'Prs', 'Poss': 'Yes',
             'Case': 'Nom'},
    'মোর': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'One', 'PronType': 'Prs', 'Poss': 'Yes',
            'Case': 'Nom'},
    'মোদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'One', 'PronType': 'Prs', 'Poss': 'Yes',
              'Case': 'Nom'},
    'তার': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Three', 'PronType': 'Prs', 'Poss': 'Yes',
            'Case': 'Nom'},
    'তোমাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Poss': 'Yes',

@ -50,7 +57,13 @@ MORPH_RULES = {
             'Case': 'Nom'},
    'তোমার': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Poss': 'Yes',
              'Case': 'Nom'},
    'তোর': {LEMMA: PRON_LEMMA, 'Number': 'Sing', 'Person': 'Two', 'PronType': 'Prs', 'Poss': 'Yes',
            'Case': 'Nom'},
    'তাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Three', 'PronType': 'Prs', 'Poss': 'Yes',
              'Case': 'Nom'},
    'কাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'PronType': 'Int', 'Case': 'Acc'},
    'তোদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'Person': 'Two', 'PronType': 'Prs', 'Poss': 'Yes',
             'Case': 'Nom'},
    'যাদের': {LEMMA: PRON_LEMMA, 'Number': 'Plur', 'PronType': 'Int', 'Case': 'Acc'},
    }
}
@ -22,7 +22,7 @@ STOP_WORDS = set("""
টি
ঠিক
তখন তত তথা তবু তবে তা তাঁকে তাঁদের তাঁর তাঁরা তাঁহারা তাই তাও তাকে তাতে তাদের তার তারপর তারা তারই তাহলে তাহা তাহাতে তাহার তিনই
তিনি তিনিও তুমি তুলে তেমন তো তোমার
তিনি তিনিও তুমি তুলে তেমন তো তোমার তুই তোরা তোর তোমাদের তোদের
থাকবে থাকবেন থাকা থাকায় থাকে থাকেন থেকে থেকেই থেকেও থাকায়
দিকে দিতে দিয়ে দিয়েছে দিয়েছেন দিলেন দিয়ে দু দুটি দুটো দেওয়া দেওয়ার দেখতে দেখা দেখে দেন দেয় দেশের
দ্বারা দিয়েছে দিয়েছেন দেয় দেওয়া দেওয়ার দিন দুই

@ -32,7 +32,7 @@ STOP_WORDS = set("""
ফলে ফিরে ফের
বছর বদলে বরং বলতে বলল বললেন বলা বলে বলেছেন বলেন বসে বহু বা বাদে বার বিনা বিভিন্ন বিশেষ বিষয়টি বেশ ব্যবহার ব্যাপারে বক্তব্য বন বেশি
ভাবে ভাবেই
মত মতো মতোই মধ্যভাগে মধ্যে মধ্যেই মধ্যেও মনে মাত্র মাধ্যমে মানুষ মানুষের মোট মোটেই
মত মতো মতোই মধ্যভাগে মধ্যে মধ্যেই মধ্যেও মনে মাত্র মাধ্যমে মানুষ মানুষের মোট মোটেই মোদের মোর
যখন যত যতটা যথেষ্ট যদি যদিও যা যাঁর যাঁরা যাওয়া যাওয়ার যাকে যাচ্ছে যাতে যাদের যান যাবে যায় যার যারা যায় যিনি যে যেখানে যেতে যেন
যেমন
রকম রয়েছে রাখা রেখে রয়েছে
@ -3,6 +3,9 @@ from __future__ import unicode_literals

from .tokenizer_exceptions import TOKENIZER_EXCEPTIONS
from .stop_words import STOP_WORDS
from .lex_attrs import LEX_ATTRS
from .morph_rules import MORPH_RULES
from ..tag_map import TAG_MAP

from ..tokenizer_exceptions import BASE_EXCEPTIONS
from ..norm_exceptions import BASE_NORMS

@ -13,9 +16,12 @@ from ...util import update_exc, add_lookups

class DanishDefaults(Language.Defaults):
    lex_attr_getters = dict(Language.Defaults.lex_attr_getters)
    lex_attr_getters.update(LEX_ATTRS)
    lex_attr_getters[LANG] = lambda text: 'da'
    lex_attr_getters[NORM] = add_lookups(Language.Defaults.lex_attr_getters[NORM], BASE_NORMS)
    tokenizer_exceptions = update_exc(BASE_EXCEPTIONS, TOKENIZER_EXCEPTIONS)
    # morph_rules = MORPH_RULES
    tag_map = TAG_MAP
    stop_words = STOP_WORDS
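A rough usage sketch (an assumption for illustration, not part of the diff):
with these defaults wired up, instantiating the Danish language class directly
exposes the new lexical attributes:

.. code:: python

    from spacy.lang.da import Danish

    nlp = Danish()
    doc = nlp(u'Han købte tre æbler.')
    # 'tre' ("three") should come out number-like via LEX_ATTRS
    print([(token.text, token.like_num) for token in doc])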
52
spacy/lang/da/lex_attrs.py
Normal file

@ -0,0 +1,52 @@
# coding: utf8
from __future__ import unicode_literals

from ...attrs import LIKE_NUM


# Source http://fjern-uv.dk/tal.php
_num_words = """nul
en et to tre fire fem seks syv otte ni ti
elleve tolv tretten fjorten femten seksten sytten atten nitten tyve
enogtyve toogtyve treogtyve fireogtyve femogtyve seksogtyve syvogtyve otteogtyve niogtyve tredive
enogtredive toogtredive treogtredive fireogtredive femogtredive seksogtredive syvogtredive otteogtredive niogtredive fyrre
enogfyrre toogfyrre treogfyrre fireogfyrre femogfyrre seksogfyrre syvogfyrre otteogfyrre niogfyrre halvtreds
enoghalvtreds tooghalvtreds treoghalvtreds fireoghalvtreds femoghalvtreds seksoghalvtreds syvoghalvtreds otteoghalvtreds nioghalvtreds tres
enogtres toogtres treogtres fireogtres femogtres seksogtres syvogtres otteogtres niogtres halvfjerds
enoghalvfjerds tooghalvfjerds treoghalvfjerds fireoghalvfjerds femoghalvfjerds seksoghalvfjerds syvoghalvfjerds otteoghalvfjerds nioghalvfjerds firs
enogfirs toogfirs treogfirs fireogfirs femogfirs seksogfirs syvogfirs otteogfirs niogfirs halvfems
enoghalvfems tooghalvfems treoghalvfems fireoghalvfems femoghalvfems seksoghalvfems syvoghalvfems otteoghalvfems nioghalvfems hundrede
million milliard billion billiard trillion trilliard
""".split()

# source http://www.duda.dk/video/dansk/grammatik/talord/talord.html
_ordinal_words = """nulte
første anden tredje fjerde femte sjette syvende ottende niende tiende
elfte tolvte trettende fjortende femtende sekstende syttende attende nittende tyvende
enogtyvende toogtyvende treogtyvende fireogtyvende femogtyvende seksogtyvende syvogtyvende otteogtyvende niogtyvende tredivte enogtredivte toogtredivte treogtredivte fireogtredivte femogtredivte seksogtredivte syvogtredivte otteogtredivte niogtredivte fyrretyvende
enogfyrretyvende toogfyrretyvende treogfyrretyvende fireogfyrretyvende femogfyrretyvende seksogfyrretyvende syvogfyrretyvende otteogfyrretyvende niogfyrretyvende halvtredsindstyvende enoghalvtredsindstyvende
tooghalvtredsindstyvende treoghalvtredsindstyvende fireoghalvtredsindstyvende femoghalvtredsindstyvende seksoghalvtredsindstyvende syvoghalvtredsindstyvende otteoghalvtredsindstyvende nioghalvtredsindstyvende
tresindstyvende enogtresindstyvende toogtresindstyvende treogtresindstyvende fireogtresindstyvende femogtresindstyvende seksogtresindstyvende syvogtresindstyvende otteogtresindstyvende niogtresindstyvende halvfjerdsindstyvende
enoghalvfjerdsindstyvende tooghalvfjerdsindstyvende treoghalvfjerdsindstyvende fireoghalvfjerdsindstyvende femoghalvfjerdsindstyvende seksoghalvfjerdsindstyvende syvoghalvfjerdsindstyvende otteoghalvfjerdsindstyvende nioghalvfjerdsindstyvende firsindstyvende
enogfirsindstyvende toogfirsindstyvende treogfirsindstyvende fireogfirsindstyvende femogfirsindstyvende seksogfirsindstyvende syvogfirsindstyvende otteogfirsindstyvende niogfirsindstyvende halvfemsindstyvende
enoghalvfemsindstyvende tooghalvfemsindstyvende treoghalvfemsindstyvende fireoghalvfemsindstyvende femoghalvfemsindstyvende seksoghalvfemsindstyvende syvoghalvfemsindstyvende otteoghalvfemsindstyvende nioghalvfemsindstyvende
""".split()


def like_num(text):
    text = text.replace(',', '').replace('.', '')
    if text.isdigit():
        return True
    if text.count('/') == 1:
        num, denom = text.split('/')
        if num.isdigit() and denom.isdigit():
            return True
    if text in _num_words:
        return True
    if text in _ordinal_words:
        return True
    return False


LEX_ATTRS = {
    LIKE_NUM: like_num
}
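For reference, a few spot checks of ``like_num`` (a sketch assuming the module
path added by this commit):

.. code:: python

    from spacy.lang.da.lex_attrs import like_num

    assert like_num('10')        # plain digits
    assert like_num('10.000')    # thousands separators are stripped first
    assert like_num('2/3')       # simple fractions
    assert like_num('tre')       # cardinal number word
    assert like_num('tredje')    # ordinal number word
    assert not like_num('hund')  # ordinary noun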
41
spacy/lang/da/morph_rules.py
Normal file
41
spacy/lang/da/morph_rules.py
Normal file
|
@ -0,0 +1,41 @@
|
|||
# coding: utf8
|
||||
from __future__ import unicode_literals
|
||||
|
||||
from ...symbols import LEMMA
|
||||
from ...deprecated import PRON_LEMMA
|
||||
|
||||
MORPH_RULES = {
|
||||
"PRON": {
|
||||
"jeg": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Case": "Nom"},
|
||||
"mig": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Case": "Acc"},
|
||||
"du": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two"},
|
||||
"han": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Case": "Nom"},
|
||||
"ham": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Case": "Acc"},
|
||||
"hun": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Case": "Nom"},
|
||||
"hende": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Case": "Acc"},
|
||||
"den": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut"},
|
||||
"det": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut"},
|
||||
"vi": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Case": "Nom"},
|
||||
"os": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Case": "Acc"},
|
||||
"de": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Case": "Nom"},
|
||||
"dem": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Case": "Acc"},
|
||||
|
||||
"min": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Sing", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"din": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Two", "Number": "Sing", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"hans": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Masc", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"hendes": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Fem", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"dens": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"dets": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Sing", "Gender": "Neut", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"vores": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "One", "Number": "Plur", "Poss": "Yes", "Reflex": "Yes"},
|
||||
"deres": {LEMMA: PRON_LEMMA, "PronType": "Prs", "Person": "Three", "Number": "Plur", "Poss": "Yes", "Reflex": "Yes"},
|
||||
},
|
||||
|
||||
"VERB": {
|
||||
"er": {LEMMA: "være", "VerbForm": "Fin", "Tense": "Pres"},
|
||||
"var": {LEMMA: "være", "VerbForm": "Fin", "Tense": "Past"}
|
||||
}
|
||||
}
|
||||
|
||||
for tag, rules in MORPH_RULES.items():
|
||||
for key, attrs in dict(rules).items():
|
||||
rules[key.title()] = attrs
|
|
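A quick illustration of what the title-casing loop does, as a hypothetical REPL session (not part of the diff):

    >>> MORPH_RULES["PRON"]["Jeg"] == MORPH_RULES["PRON"]["jeg"]
    True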
@@ -1,47 +1,46 @@
# encoding: utf8
from __future__ import unicode_literals


# Source: https://github.com/stopwords-iso/stopwords-da
# Source: Handpicked by Jens Dahl Møllerhøj.

STOP_WORDS = set("""
ad af aldrig alle alt anden andet andre at
af aldrig alene alle allerede alligevel alt altid anden andet andre at

bare begge blev blive bliver
bag begge blandt blev blive bliver burde bør

da de dem den denne der deres det dette dig din dine disse dit dog du
da de dem den denne dens der derefter deres derfor derfra deri dermed derpå derved det dette dig din dine disse dog du

efter ej eller en end ene eneste enhver er et
efter egen eller ellers en end endnu ene eneste enhver ens enten er et

far fem fik fire flere fleste for fordi forrige fra få får før
flere flest fleste for foran fordi forrige fra få før først

god godt
gennem gjorde gjort god gør gøre gørende

ham han hans har havde have hej helt hende hendes her hos hun hvad hvem hver
hvilken hvis hvor hvordan hvorfor hvornår
ham han hans har havde have hel heller hen hende hendes henover her herefter heri hermed herpå hun hvad hvem hver hvilke hvilken hvilkes hvis hvor hvordan hvorefter hvorfor hvorfra hvorhen hvori hvorimod hvornår hvorved

i ikke ind ingen intet
i igen igennem ikke imellem imens imod ind indtil ingen intet

ja jeg jer jeres jo
jeg jer jeres jo

kan kom komme kommer kun kunne
kan kom kommer kun kunne

lad lav lidt lige lille
lad langs lav lave lavet lidt lige ligesom lille længere

man mand mange med meget men mens mere mig min mine mit mod må
man mange med meget mellem men mens mere mest mig min mindre mindst mine mit må måske

ned nej ni nogen noget nogle nu ny nyt når nær næste næsten
ned nemlig nogen nogensinde noget nogle nok nu ny nyt nær næste næsten

og også okay om op os otte over
og også om omkring op os over overalt

på

se seks selv ser ses sig sige sin sine sit skal skulle som stor store syv så
sådan
samme sammen selv selvom senere ses siden sig sige skal skulle som stadig synes syntes så sådan således

tag tage thi ti til to tre
temmelig tidligere til tilbage tit

ud under
ud uden udover under undtagen

var ved vi vil ville vor vores være været
var ved vi via vil ville vore vores vær være været

øvrigt
""".split())

@@ -1,11 +1,27 @@
# encoding: utf8
from __future__ import unicode_literals

from ...symbols import ORTH, LEMMA
from ...symbols import ORTH, LEMMA, NORM


_exc = {}

for exc_data in [
    {ORTH: "Kbh.", LEMMA: "København", NORM: "København"},

    {ORTH: "Jan.", LEMMA: "januar", NORM: "januar"},
    {ORTH: "Feb.", LEMMA: "februar", NORM: "februar"},
    {ORTH: "Mar.", LEMMA: "marts", NORM: "marts"},
    {ORTH: "Apr.", LEMMA: "april", NORM: "april"},
    {ORTH: "Maj.", LEMMA: "maj", NORM: "maj"},
    {ORTH: "Jun.", LEMMA: "juni", NORM: "juni"},
    {ORTH: "Jul.", LEMMA: "juli", NORM: "juli"},
    {ORTH: "Aug.", LEMMA: "august", NORM: "august"},
    {ORTH: "Sep.", LEMMA: "september", NORM: "september"},
    {ORTH: "Okt.", LEMMA: "oktober", NORM: "oktober"},
    {ORTH: "Nov.", LEMMA: "november", NORM: "november"},
    {ORTH: "Dec.", LEMMA: "december", NORM: "december"}]:
    _exc[exc_data[ORTH]] = [dict(exc_data)]

for orth in [
    "A/S", "beg.", "bl.a.", "ca.", "d.s.s.", "dvs.", "f.eks.", "fr.", "hhv.",
@@ -62,5 +62,5 @@ TAG_MAP = {
    "VVIZU": {POS: VERB, "VerbForm": "inf"},
    "VVPP": {POS: VERB, "Aspect": "perf", "VerbForm": "part"},
    "XY": {POS: X},
    "SP": {POS: SPACE}
    "_SP": {POS: SPACE}
}
@@ -16,7 +16,7 @@ call can cannot ca could

did do does doing done down due during

each eight either eleven else elsewhere empty enough etc even ever every
each eight either eleven else elsewhere empty enough even ever every
everyone everything everywhere except

few fifteen fifty first five for former formerly forty four from front full

@@ -27,7 +27,7 @@ get give go
had has have he hence her here hereafter hereby herein hereupon hers herself
him himself his how however hundred

i if in inc indeed into is it its itself
i if in indeed into is it its itself

keep
@@ -42,6 +42,7 @@ TAG_MAP = {
    "RBR": {POS: ADV, "Degree": "comp"},
    "RBS": {POS: ADV, "Degree": "sup"},
    "RP": {POS: PART},
    "SP": {POS: SPACE},
    "SYM": {POS: SYM},
    "TO": {POS: PART, "PartType": "inf", "VerbForm": "inf"},
    "UH": {POS: INTJ},

@@ -55,11 +56,11 @@ TAG_MAP = {
    "WP": {POS: NOUN, "PronType": "int|rel"},
    "WP$": {POS: ADJ, "Poss": "yes", "PronType": "int|rel"},
    "WRB": {POS: ADV, "PronType": "int|rel"},
    "SP": {POS: SPACE},
    "ADD": {POS: X},
    "NFP": {POS: PUNCT},
    "GW": {POS: X},
    "XX": {POS: X},
    "BES": {POS: VERB},
    "HVS": {POS: VERB}
    "HVS": {POS: VERB},
    "_SP": {POS: SPACE},
}
@@ -303,5 +303,5 @@ TAG_MAP = {
    "VERB__VerbForm=Ger": {"morph": "VerbForm=Ger", "pos": "VERB"},
    "VERB__VerbForm=Inf": {"morph": "VerbForm=Inf", "pos": "VERB"},
    "X___": {"morph": "_", "pos": "X"},
    "SP": {"morph": "_", "pos": "SPACE"},
    "_SP": {"morph": "_", "pos": "SPACE"},
}
@@ -33,8 +33,7 @@ class Japanese(Language):
    Defaults = JapaneseDefaults

    def make_doc(self, text):
        words = self.tokenizer(text)
        return Doc(self.vocab, words=words, spaces=[False]*len(words))
        return self.tokenizer(text)


__all__ = ['Japanese']
18
spacy/lang/ja/examples.py
Normal file

@@ -0,0 +1,18 @@
# coding: utf8
from __future__ import unicode_literals


"""
Example sentences to test spaCy and its language models.

>>> from spacy.lang.ja.examples import sentences
>>> docs = nlp.pipe(sentences)
"""


sentences = [
    'アップルがイギリスの新興企業を10億ドルで購入を検討',
    '自動運転車の損害賠償責任、自動車メーカーに一定の負担を求める',
    '歩道を走る自動配達ロボ、サンフランシスコ市が走行禁止を検討',
    'ロンドンはイギリスの大都市です。'
]
@@ -19,63 +19,64 @@ TAG_MAP = {
    "NPRP": {POS: PRON},
    # ADJ
    "ADJ": {POS: ADJ},
    "NONM": {POS: ADJ},
    "VATT": {POS: ADJ},
    "DONM": {POS: ADJ},
    # ADV
    "ADV": {POS: ADV},
    "ADVN": {POS: ADV},
    "ADVI": {POS: ADV},
    "ADVP": {POS: ADV},
    "ADVS": {POS: ADV},
    # INT
    "INT": {POS: INTJ},
    # PRON
    "PROPN": {POS: PROPN},
    "PPRS": {POS: PROPN},
    "PDMN": {POS: PROPN},
    "PNTR": {POS: PROPN},
    # DET
    "DET": {POS: DET},
    "DDAN": {POS: DET},
    "DDAC": {POS: DET},
    "DDBQ": {POS: DET},
    "DDAQ": {POS: DET},
    "DIAC": {POS: DET},
    "DIBQ": {POS: DET},
    "DIAQ": {POS: DET},
    "DCNM": {POS: DET},
    # NUM
    "NUM": {POS: NUM},
    "NCNM": {POS: NUM},
    "NLBL": {POS: NUM},
    "DCNM": {POS: NUM},
    # AUX
    "AUX": {POS: AUX},
    "XVBM": {POS: AUX},
    "XVAM": {POS: AUX},
    "XVMM": {POS: AUX},
    "XVBB": {POS: AUX},
    "XVAE": {POS: AUX},
    # ADP
    "ADP": {POS: ADP},
    "RPRE": {POS: ADP},
    # CCONJ
    "CCONJ": {POS: CCONJ},
    "JCRG": {POS: CCONJ},
    # SCONJ
    "SCONJ": {POS: SCONJ},
    "PREL": {POS: SCONJ},
    "JSBR": {POS: SCONJ},
    "JCMP": {POS: SCONJ},
    # PART
    "PART": {POS: PART},
    "FIXN": {POS: PART},
    "FIXV": {POS: PART},
    "EAFF": {POS: PART},
    "AITT": {POS: PART},
    "NEG": {POS: PART},
    # PUNCT
    "PUNCT": {POS: PUNCT},
    "PUNC": {POS: PUNCT}
    "PUNC": {POS: PUNCT},
    "_SP": {POS: SPACE}
}
18
spacy/lang/zh/examples.py
Normal file

@@ -0,0 +1,18 @@
# coding: utf8
from __future__ import unicode_literals


"""
Example sentences to test spaCy and its language models.

>>> from spacy.lang.zh.examples import sentences
>>> docs = nlp.pipe(sentences)
"""


sentences = [
    "蘋果公司正考量用一億元買下英國的新創公司",
    "自駕車將保險責任歸屬轉移至製造商",
    "舊金山考慮禁止送貨機器人在人行道上行駛",
    "倫敦是英國的大城市"
]
@@ -127,6 +127,7 @@ class Language(object):
        RETURNS (Language): The newly constructed object.
        """
        self._meta = dict(meta)
        self._path = None
        if vocab is True:
            factory = self.Defaults.create_vocab
            vocab = factory(self, **meta.get('vocab', {}))

@@ -142,6 +143,10 @@ class Language(object):
        bytes_data = self.to_bytes(vocab=False)
        return (unpickle_language, (self.vocab, self.meta, bytes_data))

    @property
    def path(self):
        return self._path

    @property
    def meta(self):
        self._meta.setdefault('lang', self.vocab.lang)

@@ -611,6 +616,7 @@ class Language(object):
        if not (path / 'vocab').exists():
            exclude['vocab'] = True
        util.from_disk(path, deserializers, exclude)
        self._path = path
        return self

    def to_bytes(self, disable=[], **exclude):
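A sketch of what the new path property exposes once a pipeline has been loaded from disk (the model name below is hypothetical):

    nlp = spacy.load('en_core_web_sm')   # from_disk() records the load location
    print(nlp.path)                      # the model directory, or None if built in memory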
@@ -7,8 +7,8 @@ from .symbols import VerbForm_inf, VerbForm_none, Number_sing, Degree_pos

class Lemmatizer(object):
    @classmethod
    def load(cls, path, index=None, exc=None, rules=None):
        return cls(index or {}, exc or {}, rules or {})
    def load(cls, path, index=None, exc=None, rules=None, lookup=None):
        return cls(index or {}, exc or {}, rules or {}, lookup or {})

    def __init__(self, index=None, exceptions=None, rules=None, lookup=None):
        self.index = index if index is not None else {}

@@ -26,10 +26,10 @@ class Lemmatizer(object):
        elif univ_pos in (PUNCT, 'PUNCT', 'punct'):
            univ_pos = 'punct'
        else:
            return set([string.lower()])
            return list(set([string.lower()]))
        # See Issue #435 for example of where this logic is required.
        if self.is_base_form(univ_pos, morphology):
            return set([string.lower()])
            return list(set([string.lower()]))
        lemmas = lemmatize(string, self.index.get(univ_pos, {}),
                           self.exc.get(univ_pos, {}),
                           self.rules.get(univ_pos, []))

@@ -108,4 +108,4 @@ def lemmatize(string, index, exceptions, rules):
    forms.extend(oov_forms)
    if not forms:
        forms.append(string)
    return set(forms)
    return list(set(forms))
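The switch from set to list gives callers a stable, indexable result; the updated regression tests later in this commit rely on it, e.g. (assuming an English pipeline `nlp`):

    lemmatizer = nlp.vocab.morphology.lemmatizer
    assert lemmatizer('works', 'verb') == ['work']   # now a list, not a set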
@@ -69,6 +69,7 @@ cdef enum action_t:
    REPEAT
    ACCEPT
    ADVANCE_ZERO
    ACCEPT_PREV
    PANIC

# A "match expression" consists of one or more token patterns
@@ -120,24 +121,27 @@ cdef attr_t get_pattern_key(const TokenPatternC* pattern) except 0:


cdef int get_action(const TokenPatternC* pattern, const TokenC* token) nogil:
    lookahead = &pattern[1]
    for attr in pattern.attrs[:pattern.nr_attr]:
        if get_token_attr(token, attr.attr) != attr.value:
            if pattern.quantifier == ONE:
                return REJECT
            elif pattern.quantifier == ZERO:
                return ACCEPT if (pattern+1).nr_attr == 0 else ADVANCE
                return ACCEPT if lookahead.nr_attr == 0 else ADVANCE
            elif pattern.quantifier in (ZERO_ONE, ZERO_PLUS):
                return ACCEPT if (pattern+1).nr_attr == 0 else ADVANCE_ZERO
                return ACCEPT_PREV if lookahead.nr_attr == 0 else ADVANCE_ZERO
            else:
                return PANIC
    if pattern.quantifier == ZERO:
        return REJECT
    elif lookahead.nr_attr == 0:
        return ACCEPT
    elif pattern.quantifier in (ONE, ZERO_ONE):
        return ACCEPT if (pattern+1).nr_attr == 0 else ADVANCE
        return ADVANCE
    elif pattern.quantifier == ZERO_PLUS:
        # This is a bandaid over the 'shadowing' problem described here:
        # https://github.com/explosion/spaCy/issues/864
        next_action = get_action(pattern+1, token)
        next_action = get_action(lookahead, token)
        if next_action is REJECT:
            return REPEAT
        else:
@@ -194,7 +198,6 @@ cdef class Matcher:
    cdef public object _patterns
    cdef public object _entities
    cdef public object _callbacks
    cdef public object _acceptors

    def __init__(self, vocab):
        """Create the Matcher.

@@ -205,7 +208,6 @@ cdef class Matcher:
        """
        self._patterns = {}
        self._entities = {}
        self._acceptors = {}
        self._callbacks = {}
        self.vocab = vocab
        self.mem = Pool()

@@ -228,7 +230,7 @@ cdef class Matcher:
        key (unicode): The match ID.
        RETURNS (bool): Whether the matcher contains rules for this match ID.
        """
        return len(self._patterns)
        return self._normalize_key(key) in self._patterns

    def add(self, key, on_match, *patterns):
        """Add a match-rule to the matcher. A match-rule consists of: an ID key,
|
|||
and '*' patterns in a row and their matches overlap, the first
|
||||
operator will behave non-greedily. This quirk in the semantics
|
||||
makes the matcher more efficient, by avoiding the need for back-tracking.
|
||||
|
||||
key (unicode): The match ID.
|
||||
on_match (callable): Callback executed on match.
|
||||
*patterns (list): List of token descritions.
|
||||
"""
|
||||
for pattern in patterns:
|
||||
if len(pattern) == 0:
|
||||
|
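A sketch of the non-greedy quirk the docstring describes, mirroring the test_matcher_end_zero_plus case added later in this commit:

    matcher = Matcher(vocab)
    matcher.add('TSTEND', None, [{'ORTH': 'a'}, {'ORTH': 'b', 'OP': '*'}])
    # 'a b b' yields a single match; consecutive '+'/'*' patterns never
    # back-track, so the first operator in a row behaves non-greedily.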
@@ -345,6 +351,9 @@ cdef class Matcher:
                while action == ADVANCE_ZERO:
                    state.second += 1
                    action = get_action(state.second, token)
                if action == PANIC:
                    raise Exception("Error selecting action in matcher")

                if action == REPEAT:
                    # Leave the state in the queue, and advance to next slot
                    # (i.e. we don't overwrite -- we want to greedily match more

@@ -356,14 +365,15 @@ cdef class Matcher:
                    partials[q] = state
                    partials[q].second += 1
                    q += 1
                elif action == ACCEPT:
                elif action in (ACCEPT, ACCEPT_PREV):
                    # TODO: What to do about patterns starting with ZERO? Need to
                    # adjust the start position.
                    start = state.first
                    end = token_i+1
                    end = token_i+1 if action == ACCEPT else token_i
                    ent_id = state.second[1].attrs[0].value
                    label = state.second[1].attrs[1].value
                    matches.append((ent_id, start, end))

            partials.resize(q)
            # Check whether we open any new patterns on this token
            for pattern in self.patterns:

@@ -383,15 +393,15 @@ cdef class Matcher:
                    state.first = token_i
                    state.second = pattern + 1
                    partials.push_back(state)
                elif action == ACCEPT:
                elif action in (ACCEPT, ACCEPT_PREV):
                    start = token_i
                    end = token_i+1
                    end = token_i+1 if action == ACCEPT else token_i
                    ent_id = pattern[1].attrs[0].value
                    label = pattern[1].attrs[1].value
                    matches.append((ent_id, start, end))
        # Look for open patterns that are actually satisfied
        for state in partials:
            while state.second.quantifier in (ZERO, ZERO_PLUS):
            while state.second.quantifier in (ZERO, ZERO_ONE, ZERO_PLUS):
                state.second += 1
            if state.second.nr_attr == 0:
                start = state.first
@@ -465,15 +475,34 @@ cdef class PhraseMatcher:
        self._callbacks = {}

    def __len__(self):
        raise NotImplementedError
        """Get the number of rules added to the matcher. Note that this only
        returns the number of rules (identical with the number of IDs), not the
        number of individual patterns.

        RETURNS (int): The number of rules.
        """
        return len(self.phrase_ids)

    def __contains__(self, key):
        raise NotImplementedError
        """Check whether the matcher contains rules for a match ID.

        key (unicode): The match ID.
        RETURNS (bool): Whether the matcher contains rules for this match ID.
        """
        cdef hash_t ent_id = self.matcher._normalize_key(key)
        return ent_id in self._callbacks

    def __reduce__(self):
        return (self.__class__, (self.vocab,), None, None)

    def add(self, key, on_match, *docs):
        """Add a match-rule to the matcher. A match-rule consists of: an ID key,
        an on_match callback, and one or more patterns.

        key (unicode): The match ID.
        on_match (callable): Callback executed on match.
        *docs (Doc): `Doc` objects representing match patterns.
        """
        cdef Doc doc
        for doc in docs:
            if len(doc) >= self.max_length:

@@ -502,6 +531,13 @@ cdef class PhraseMatcher:
        self.phrase_ids.set(phrase_hash, <void*>ent_id)

    def __call__(self, Doc doc):
        """Find all sequences matching the supplied patterns on the `Doc`.

        doc (Doc): The document to match over.
        RETURNS (list): A list of `(key, start, end)` tuples,
            describing the matches. A match tuple describes a span
            `doc[start:end]`. The `label_id` and `key` are both integers.
        """
        matches = []
        for _, start, end in self.matcher(doc):
            ent_id = self.accept_match(doc, start, end)

@@ -514,6 +550,14 @@ cdef class PhraseMatcher:
        return matches

    def pipe(self, stream, batch_size=1000, n_threads=2):
        """Match a stream of documents, yielding them in turn.

        docs (iterable): A stream of documents.
        batch_size (int): The number of documents to accumulate into a working set.
        n_threads (int): The number of threads with which to work on the buffer
            in parallel, if the `Matcher` implementation supports multi-threading.
        YIELDS (Doc): Documents, in order.
        """
        for doc in stream:
            self(doc)
            yield doc
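Usage sketch for the newly implemented __len__ and __contains__, in line with the phrase-matcher tests added in this commit (en_vocab and get_doc come from the test helpers):

    matcher = PhraseMatcher(en_vocab)
    matcher.add('TEST', None, get_doc(en_vocab, ['test']))
    assert len(matcher) == 1
    assert 'TEST' in matcher and 'TEST2' not in matcher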
@@ -44,7 +44,7 @@ cdef class Morphology:
    cdef int assign_feature(self, uint64_t* morph, univ_morph_t feat_id, bint value) except -1


cpdef enum univ_morph_t:
cdef enum univ_morph_t:
    NIL = 0
    Animacy_anim = symbols.Animacy_anim
    Animacy_inam
@@ -4,7 +4,7 @@ from __future__ import unicode_literals

from libc.string cimport memset

from .parts_of_speech cimport ADJ, VERB, NOUN, PUNCT
from .parts_of_speech cimport ADJ, VERB, NOUN, PUNCT, SPACE
from .attrs cimport POS, IS_SPACE
from .parts_of_speech import IDS as POS_IDS
from .lexeme cimport Lexeme

@@ -36,14 +36,22 @@ cdef class Morphology:
    def __init__(self, StringStore string_store, tag_map, lemmatizer, exc=None):
        self.mem = Pool()
        self.strings = string_store
        # Add special space symbol. We prefix with underscore, to make sure it
        # always sorts to the end.
        space_attrs = tag_map.get('SP', {POS: SPACE})
        if '_SP' not in tag_map:
            self.strings.add('_SP')
            tag_map = dict(tag_map)
            tag_map['_SP'] = space_attrs
        self.tag_names = tuple(sorted(tag_map.keys()))
        self.tag_map = {}
        self.lemmatizer = lemmatizer
        self.n_tags = len(tag_map)
        self.tag_names = tuple(sorted(tag_map.keys()))
        self.reverse_index = {}

        self.rich_tags = <RichTagC*>self.mem.alloc(self.n_tags+1, sizeof(RichTagC))
        for i, (tag_str, attrs) in enumerate(sorted(tag_map.items())):
            self.strings.add(tag_str)
            self.tag_map[tag_str] = dict(attrs)
            attrs = _normalize_props(attrs)
            attrs = intify_attrs(attrs, self.strings, _do_deprecated=True)

@@ -93,7 +101,7 @@ cdef class Morphology:
        # the statistical model fails.
        # Related to Issue #220
        if Lexeme.c_check_flag(token.lex, IS_SPACE):
            tag_id = self.reverse_index[self.strings.add('SP')]
            tag_id = self.reverse_index[self.strings.add('_SP')]
        rich_tag = self.rich_tags[tag_id]
        analysis = <MorphAnalysisC*>self._cache.get(tag_id, token.lex.orth)
        if analysis is NULL:

@@ -164,7 +172,7 @@ cdef class Morphology:
        cdef unicode py_string = self.strings[orth]
        if self.lemmatizer is None:
            return self.strings.add(py_string.lower())
        cdef set lemma_strings
        cdef list lemma_strings
        cdef unicode lemma_string
        lemma_strings = self.lemmatizer(py_string, univ_pos, morphology)
        lemma_string = sorted(lemma_strings)[0]

@@ -426,3 +434,7 @@ IDS = {

NAMES = [key for key, value in sorted(IDS.items(), key=lambda item: item[1])]
# Unfortunate hack here, to work around problem with long cpdef enum
# (which is generating an enormous amount of C++ in Cython 0.24+)
# We keep the enum cdef, and just make sure the names are available to Python
locals().update(IDS)
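Why the underscore prefix keeps the space tag last: '_' (0x5F) sorts after the uppercase ASCII letters, so in any tag map made of conventional uppercase tags the space symbol lands at the end of tag_names (hypothetical example):

    sorted(['NN', 'VB', '_SP'])   # ['NN', 'VB', '_SP']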
@@ -13,12 +13,12 @@ cdef enum symbol_t:
    LIKE_EMAIL
    IS_STOP
    IS_OOV
    IS_BRACKET
    IS_QUOTE
    IS_LEFT_PUNCT
    IS_RIGHT_PUNCT

    FLAG14 = 14
    FLAG15
    FLAG16
    FLAG17
    FLAG18
    FLAG18 = 18
    FLAG19
    FLAG20
    FLAG21

@@ -455,15 +455,5 @@ cdef enum symbol_t:
    root
    xcomp

    # Move these up to FLAG14--FLAG18 once we finish the functionality
    # and are ready to regenerate the model.
    #IS_BRACKET
    #IS_QUOTE
    #IS_LEFT_PUNCT
    #IS_RIGHT_PUNCT

    # These symbols are currently missing. However, if we add them currently,
    # we'll throw off the integer index and the model will have to be retrained.
    # We therefore wait until the next data version to add them.
    # acl

    acl
    LAW
@@ -18,10 +18,11 @@ IDS = {
    "LIKE_EMAIL": LIKE_EMAIL,
    "IS_STOP": IS_STOP,
    "IS_OOV": IS_OOV,
    "FLAG14": FLAG14,
    "FLAG15": FLAG15,
    "FLAG16": FLAG16,
    "FLAG17": FLAG17,
    "IS_BRACKET": IS_BRACKET,
    "IS_QUOTE": IS_QUOTE,
    "IS_LEFT_PUNCT": IS_LEFT_PUNCT,
    "IS_RIGHT_PUNCT": IS_RIGHT_PUNCT,

    "FLAG18": FLAG18,
    "FLAG19": FLAG19,
    "FLAG20": FLAG20,

@@ -457,7 +458,10 @@ IDS = {
    "quantmod": quantmod,
    "rcmod": rcmod,
    "root": root,
    "xcomp": xcomp
    "xcomp": xcomp,

    "acl": acl,
    "LAW": LAW
}


def sort_nums(x):
@@ -2,6 +2,8 @@ from libc.string cimport memcpy, memset, memmove
from libc.stdlib cimport malloc, calloc, free
from libc.stdint cimport uint32_t, uint64_t

from cpython.exc cimport PyErr_CheckSignals, PyErr_SetFromErrno

from murmurhash.mrmr cimport hash64

from ..vocab cimport EMPTY_LEXEME

@@ -55,6 +57,11 @@ cdef cppclass StateC:
        this.shifted = <bint*>calloc(length + (PADDING * 2), sizeof(bint))
        this._sent = <TokenC*>calloc(length + (PADDING * 2), sizeof(TokenC))
        this._ents = <Entity*>calloc(length + (PADDING * 2), sizeof(Entity))
        if not (this._buffer and this._stack and this.shifted
                and this._sent and this._ents):
            with gil:
                PyErr_SetFromErrno(MemoryError)
                PyErr_CheckSignals()
        memset(&this._hist, 0, sizeof(this._hist))
        this.offset = 0
        cdef int i
@@ -212,7 +212,8 @@ cdef class LeftArc:
cdef class RightArc:
    @staticmethod
    cdef bint is_valid(const StateC* st, attr_t label) nogil:
        return st.B_(0).sent_start != 1
        # If there's (perhaps partial) parse pre-set, don't allow cycle.
        return st.B_(0).sent_start != 1 and st.H(st.S(0)) != st.B(0)

    @staticmethod
    cdef int transition(StateC* st, attr_t label) nogil:

@@ -446,14 +447,19 @@ cdef class ArcEager(TransitionSystem):

    cdef int initialize_state(self, StateC* st) nogil:
        for i in range(st.length):
            st._sent[i].l_edge = i
            st._sent[i].r_edge = i
            if st._sent[i].dep == 0:
                st._sent[i].l_edge = i
                st._sent[i].r_edge = i
                st._sent[i].head = 0
                st._sent[i].dep = 0
                st._sent[i].l_kids = 0
                st._sent[i].r_kids = 0
        st.fast_forward()

    cdef int finalize_state(self, StateC* st) nogil:
        cdef int i
        for i in range(st.length):
            if st._sent[i].head == 0 and st._sent[i].dep == 0:
            if st._sent[i].head == 0:
                st._sent[i].dep = self.root_label

    def finalize_doc(self, doc):
@@ -22,7 +22,7 @@ cimport numpy as np

from libcpp.vector cimport vector
from cpython.ref cimport PyObject, Py_INCREF, Py_XDECREF
from cpython.exc cimport PyErr_CheckSignals
from cpython.exc cimport PyErr_CheckSignals, PyErr_SetFromErrno
from libc.stdint cimport uint32_t, uint64_t
from libc.string cimport memset, memcpy
from libc.stdlib cimport malloc, calloc, free

@@ -48,7 +48,7 @@ from thinc.neural.util import get_array_module
from .. import util
from ..util import get_async, get_cuda_stream
from .._ml import zero_init, PrecomputableAffine
from .._ml import Tok2Vec, doc2feats, rebatch, fine_tune
from .._ml import Tok2Vec, doc2feats, rebatch
from .._ml import Residual, drop_layer, flatten
from .._ml import link_vectors_to_models
from .._ml import HistoryFeatures

@@ -262,7 +262,7 @@ cdef class Parser:
        hist_width = util.env_opt('history_width', cfg.get('hist_width', 0))
        if hist_size != 0:
            raise ValueError("Currently history size is hard-coded to 0")
        if hist_width != 0:
            raise ValueError("Currently history width is hard-coded to 0")
        tok2vec = Tok2Vec(token_vector_width, embed_size,
                          pretrained_dims=cfg.get('pretrained_dims', 0))

@@ -440,6 +440,7 @@ cdef class Parser:
            self._parseC(states[i],
                feat_weights, bias, hW, hb,
                nr_class, nr_hidden, nr_feat, nr_piece)
        PyErr_CheckSignals()
        return state_objs

    cdef void _parseC(self, StateC* state,

@@ -450,6 +451,10 @@ cdef class Parser:
        is_valid = <int*>calloc(nr_class, sizeof(int))
        vectors = <float*>calloc(nr_hidden * nr_piece, sizeof(float))
        scores = <float*>calloc(nr_class, sizeof(float))
        if not (token_ids and is_valid and vectors and scores):
            with gil:
                PyErr_SetFromErrno(MemoryError)
                PyErr_CheckSignals()

        while not state.is_final():
            state.set_context_tokens(token_ids, nr_feat)
@@ -117,6 +117,9 @@ def he_tokenizer():
def nb_tokenizer():
    return util.get_lang_class('nb').Defaults.create_tokenizer()

@pytest.fixture
def da_tokenizer():
    return util.get_lang_class('da').Defaults.create_tokenizer()

@pytest.fixture
def ja_tokenizer():
@@ -17,6 +17,26 @@ def test_doc_array_attr_of_token(en_tokenizer, en_vocab):
    assert feats_array[0][0] != feats_array[0][1]


def test_doc_stringy_array_attr_of_token(en_tokenizer, en_vocab):
    text = "An example sentence"
    tokens = en_tokenizer(text)
    example = tokens.vocab["example"]
    assert example.orth != example.shape
    feats_array = tokens.to_array((ORTH, SHAPE))
    feats_array_stringy = tokens.to_array(("ORTH", "SHAPE"))
    assert feats_array_stringy[0][0] == feats_array[0][0]
    assert feats_array_stringy[0][1] == feats_array[0][1]


def test_doc_scalar_attr_of_token(en_tokenizer, en_vocab):
    text = "An example sentence"
    tokens = en_tokenizer(text)
    example = tokens.vocab["example"]
    assert example.orth != example.shape
    feats_array = tokens.to_array(ORTH)
    assert feats_array.shape == (3,)


def test_doc_array_tag(en_tokenizer):
    text = "A nice sentence."
    pos = ['DET', 'ADJ', 'NOUN', 'PUNCT']
@@ -2,6 +2,8 @@
from __future__ import unicode_literals

from ..util import get_doc
from ...tokens import Doc
from ...vocab import Vocab

import pytest
import numpy

@@ -204,19 +206,20 @@ def test_doc_api_right_edge(en_tokenizer):
    assert doc[6].right_edge.text == ','


@pytest.mark.xfail
@pytest.mark.parametrize('text,vectors', [
    ("apple orange pear", ["apple -1 -1 -1", "orange -1 -1 0", "pear -1 0 -1"])
])
def test_doc_api_has_vector(en_tokenizer, text_file, text, vectors):
    text_file.write('\n'.join(vectors))
    text_file.seek(0)
    vector_length = en_tokenizer.vocab.load_vectors(text_file)
    assert vector_length == 3

    doc = en_tokenizer(text)
def test_doc_api_has_vector():
    vocab = Vocab()
    vocab.clear_vectors(2)
    vocab.vectors.add('kitten', numpy.asarray([0., 2.], dtype='f'))
    doc = Doc(vocab, words=['kitten'])
    assert doc.has_vector

def test_lowest_common_ancestor(en_tokenizer):
    tokens = en_tokenizer('the lazy dog slept')
    doc = get_doc(tokens.vocab, [t.text for t in tokens], heads=[2, 1, 1, 0])
    lca = doc.get_lca_matrix()
    assert(lca[1, 1] == 1)
    assert(lca[0, 1] == 2)
    assert(lca[1, 2] == 2)

def test_parse_tree(en_tokenizer):
    """Tests doc.print_tree() method."""
@@ -3,6 +3,8 @@ from __future__ import unicode_literals

from ...attrs import IS_ALPHA, IS_DIGIT, IS_LOWER, IS_PUNCT, IS_TITLE, IS_STOP
from ..util import get_doc
from ...vocab import Vocab
from ...tokens import Doc

import pytest
import numpy

@@ -68,26 +70,21 @@ def test_doc_token_api_is_properties(en_vocab):
    assert doc[5].like_email


@pytest.mark.xfail
@pytest.mark.parametrize('text,vectors', [
    ("apples oranges ldskbjls", ["apples -1 -1 -1", "oranges -1 -1 0"])
])
def test_doc_token_api_vectors(en_tokenizer, text_file, text, vectors):
    text_file.write('\n'.join(vectors))
    text_file.seek(0)
    vector_length = en_tokenizer.vocab.load_vectors(text_file)
    assert vector_length == 3
def test_doc_token_api_vectors():
    vocab = Vocab()
    vocab.clear_vectors(2)
    vocab.vectors.add('apples', numpy.asarray([0., 2.], dtype='f'))
    vocab.vectors.add('oranges', numpy.asarray([0., 1.], dtype='f'))
    doc = Doc(vocab, words=['apples', 'oranges', 'oov'])
    assert doc.has_vector

    tokens = en_tokenizer(text)
    assert tokens[0].has_vector
    assert tokens[1].has_vector
    assert not tokens[2].has_vector
    assert tokens[0].similarity(tokens[1]) > tokens[0].similarity(tokens[2])
    assert tokens[0].similarity(tokens[1]) == tokens[1].similarity(tokens[0])
    assert sum(tokens[0].vector) != sum(tokens[1].vector)
    assert numpy.isclose(
        tokens[0].vector_norm,
        numpy.sqrt(numpy.dot(tokens[0].vector, tokens[0].vector)))
    assert doc[0].has_vector
    assert doc[1].has_vector
    assert not doc[2].has_vector
    apples_norm = (0*0 + 2*2) ** 0.5
    oranges_norm = (0*0 + 1*1) ** 0.5
    cosine = ((0*0) + (2*1)) / (apples_norm * oranges_norm)
    assert doc[0].similarity(doc[1]) == cosine


def test_doc_token_api_ancestors(en_tokenizer):
0
spacy/tests/lang/da/__init__.py
Normal file

15
spacy/tests/lang/da/test_exceptions.py
Normal file

@@ -0,0 +1,15 @@
# coding: utf-8
from __future__ import unicode_literals

import pytest

@pytest.mark.parametrize('text', ["ca.", "m.a.o.", "Jan.", "Dec."])
def test_da_tokenizer_handles_abbr(da_tokenizer, text):
    tokens = da_tokenizer(text)
    assert len(tokens) == 1

def test_da_tokenizer_handles_exc_in_text(da_tokenizer):
    text = "Det er bl.a. ikke meningen"
    tokens = da_tokenizer(text)
    assert len(tokens) == 5
    assert tokens[2].text == "bl.a."

27
spacy/tests/lang/da/test_text.py
Normal file

@@ -0,0 +1,27 @@
# coding: utf-8
"""Test that longer and mixed texts are tokenized correctly."""


from __future__ import unicode_literals

import pytest

def test_da_tokenizer_handles_long_text(da_tokenizer):
    text = """Der var så dejligt ude på landet. Det var sommer, kornet stod gult, havren grøn,
høet var rejst i stakke nede i de grønne enge, og der gik storken på sine lange,
røde ben og snakkede ægyptisk, for det sprog havde han lært af sin moder.

Rundt om ager og eng var der store skove, og midt i skovene dybe søer; jo, der var rigtignok dejligt derude på landet!"""
    tokens = da_tokenizer(text)
    assert len(tokens) == 84

@pytest.mark.parametrize('text,match', [
    ('10', True), ('1', True), ('10.000', True), ('10.00', True),
    ('999,0', True), ('en', True), ('treoghalvfemsindstyvende', True), ('hundrede', True),
    ('hund', False), (',', False), ('1/2', True)])
def test_lex_attrs_like_number(da_tokenizer, text, match):
    tokens = da_tokenizer(text)
    assert len(tokens) == 1
    print(tokens[0])
    assert tokens[0].like_num == match
23
spacy/tests/regression/test_issue1242.py
Normal file

@@ -0,0 +1,23 @@
from __future__ import unicode_literals
import pytest
from ...lang.en import English
from ...util import load_model


def test_issue1242_empty_strings():
    nlp = English()
    doc = nlp('')
    assert len(doc) == 0
    docs = list(nlp.pipe(['', 'hello']))
    assert len(docs[0]) == 0
    assert len(docs[1]) == 1


@pytest.mark.models('en')
def test_issue1242_empty_strings_en_core_web_sm():
    nlp = load_model('en_core_web_sm')
    doc = nlp('')
    assert len(doc) == 0
    docs = list(nlp.pipe(['', 'hello']))
    assert len(docs[0]) == 0
    assert len(docs[1]) == 1

13
spacy/tests/regression/test_issue1250.py
Normal file

@@ -0,0 +1,13 @@
from __future__ import unicode_literals
from ...tokenizer import Tokenizer
from ...symbols import ORTH, LEMMA, POS
from ...lang.en import English

def test_issue1250_cached_special_cases():
    nlp = English()
    nlp.tokenizer.add_special_case(u'reimbur', [{ORTH: u'reimbur', LEMMA: u'reimburse', POS: u'VERB'}])

    lemmas = [w.lemma_ for w in nlp(u'reimbur, reimbur...')]
    assert lemmas == ['reimburse', ',', 'reimburse', '...']
    lemmas = [w.lemma_ for w in nlp(u'reimbur, reimbur...')]
    assert lemmas == ['reimburse', ',', 'reimburse', '...']

20
spacy/tests/regression/test_issue1253.py
Normal file

@@ -0,0 +1,20 @@
from __future__ import unicode_literals
import pytest
import spacy


def ss(tt):
    for i in range(len(tt)-1):
        for j in range(i+1, len(tt)):
            tt[i:j].root


@pytest.mark.models('en')
def test_access_parse_for_merged():
    nlp = spacy.load('en_core_web_sm')
    t_t = nlp.tokenizer("Highly rated - I'll definitely")
    nlp.tagger(t_t)
    nlp.parser(t_t)
    nlp.parser(t_t)
    ss(t_t)
@@ -1,8 +1,10 @@
import pytest
import spacy

@pytest.mark.models('en')
def test_issue1305(EN):
def test_issue1305():
    '''Test lemmatization of English VBZ'''
    assert EN.vocab.morphology.lemmatizer('works', 'verb') == set(['work'])
    doc = EN(u'This app works well')
    nlp = spacy.load('en_core_web_sm')
    assert nlp.vocab.morphology.lemmatizer('works', 'verb') == ['work']
    doc = nlp(u'This app works well')
    assert doc[2].lemma_ == 'work'
16
spacy/tests/regression/test_issue1375.py
Normal file

@@ -0,0 +1,16 @@
from __future__ import unicode_literals
import pytest
from ...vocab import Vocab
from ...tokens.doc import Doc


def test_issue1375():
    '''Test that token.nbor() raises IndexError for out-of-bounds access.'''
    doc = Doc(Vocab(), words=['0', '1', '2'])
    with pytest.raises(IndexError):
        assert doc[0].nbor(-1)
    assert doc[1].nbor(-1).text == '0'
    with pytest.raises(IndexError):
        assert doc[2].nbor(1)
    assert doc[1].nbor(1).text == '2'

22
spacy/tests/regression/test_issue1434.py
Normal file

@@ -0,0 +1,22 @@
from __future__ import unicode_literals

from ...vocab import Vocab
from ...lang.lex_attrs import LEX_ATTRS
from ...tokens import Doc
from ...matcher import Matcher


def test_issue1434():
    '''Test matches occur when optional element at end of short doc'''
    vocab = Vocab(lex_attr_getters=LEX_ATTRS)
    hello_world = Doc(vocab, words=['Hello', 'World'])
    hello = Doc(vocab, words=['Hello'])

    matcher = Matcher(vocab)
    matcher.add('MyMatcher', None,
                [{'ORTH': 'Hello'}, {'IS_ALPHA': True, 'OP': '?'}])

    matches = matcher(hello_world)
    assert matches
    matches = matcher(hello)
    assert matches

58
spacy/tests/regression/test_issue1450.py
Normal file

@@ -0,0 +1,58 @@
from __future__ import unicode_literals
import pytest

from ...matcher import Matcher
from ...tokens import Doc
from ...vocab import Vocab


@pytest.mark.parametrize(
    'string,start,end',
    [
        ('a', 0, 1),
        ('a b', 0, 2),
        ('a c', 0, 1),
        ('a b c', 0, 2),
        ('a b b c', 0, 2),
        ('a b b', 0, 2),
    ]
)
def test_issue1450_matcher_end_zero_plus(string, start, end):
    '''Test matcher works when patterns end with * operator.

    Original example (rewritten to avoid model usage)

    nlp = spacy.load('en_core_web_sm')
    matcher = Matcher(nlp.vocab)
    matcher.add(
        "TSTEND",
        on_match_1,
        [
            {TAG: "JJ", LOWER: "new"},
            {TAG: "NN", 'OP': "*"}
        ]
    )
    doc = nlp(u'Could you create a new ticket for me?')
    print([(w.tag_, w.text, w.lower_) for w in doc])
    matches = matcher(doc)
    print(matches)
    assert len(matches) == 1
    assert matches[0][1] == 4
    assert matches[0][2] == 5
    '''
    matcher = Matcher(Vocab())
    matcher.add(
        "TSTEND",
        None,
        [
            {'ORTH': "a"},
            {'ORTH': "b", 'OP': "*"}
        ]
    )
    doc = Doc(Vocab(), words=string.split())
    matches = matcher(doc)
    if start is None or end is None:
        assert matches == []

    assert matches[0][1] == start
    assert matches[0][2] == end
@@ -9,4 +9,4 @@ import pytest
@pytest.mark.parametrize('word,lemmas', [("chromosomes", ["chromosome"]), ("endosomes", ["endosome"]), ("colocalizes", ["colocalize", "colocaliz"])])
def test_issue781(EN, word, lemmas):
    lemmatizer = EN.Defaults.create_lemmatizer()
    assert lemmatizer(word, 'noun', morphology={'number': 'plur'}) == set(lemmas)
    assert lemmatizer(word, 'noun', morphology={'number': 'plur'}) == lemmas
@@ -55,6 +55,17 @@ def test_spans_span_sent(doc):
    assert doc[6:7].sent.root.left_edge.text == 'This'


def test_spans_lca_matrix(en_tokenizer):
    """Test span's lca matrix generation"""
    tokens = en_tokenizer('the lazy dog slept')
    doc = get_doc(tokens.vocab, [t.text for t in tokens], heads=[2, 1, 1, 0])
    lca = doc[:2].get_lca_matrix()
    assert(lca[0, 0] == 0)
    assert(lca[0, 1] == -1)
    assert(lca[1, 0] == -1)
    assert(lca[1, 1] == 1)


def test_spans_default_sentiment(en_tokenizer):
    """Test span.sentiment property's default averaging behaviour"""
    text = "good stuff bad stuff"

@@ -89,7 +100,7 @@ def test_spans_are_hashable(en_tokenizer):
    assert hash(span1) != hash(span2)
    span3 = tokens[0:2]
    assert hash(span3) == hash(span1)


def test_spans_by_character(doc):
    span1 = doc[1:-2]

@@ -106,3 +117,9 @@ def test_span_to_array(doc):
    assert arr[0, 0] == span[0].orth
    assert arr[0, 1] == len(span[0])


@pytest.mark.xfail
def test_span_as_doc(doc):
    span = doc[4:10]
    span_doc = span.as_doc()
    assert span.text == span_doc.text
@@ -3,6 +3,7 @@ from __future__ import unicode_literals

from ..matcher import Matcher, PhraseMatcher
from .util import get_doc
from ..tokens import Doc

import pytest

@@ -63,6 +64,12 @@ def test_matcher_init(en_vocab, words):
    assert matcher(doc) == []


def test_matcher_contains(matcher):
    matcher.add('TEST', None, [{'ORTH': 'test'}])
    assert 'TEST' in matcher
    assert 'TEST2' not in matcher


def test_matcher_no_match(matcher):
    words = ["I", "like", "cheese", "."]
    doc = get_doc(matcher.vocab, words)

@@ -111,7 +118,8 @@ def test_matcher_empty_dict(en_vocab):
    matcher.add('A.', None, [{'ORTH': 'a'}, {}])
    matches = matcher(doc)
    assert matches[0][1:] == (0, 2)


def test_matcher_operator_shadow(en_vocab):
    matcher = Matcher(en_vocab)
    abc = ["a", "b", "c"]

@@ -122,7 +130,8 @@ def test_matcher_operator_shadow(en_vocab):
    matches = matcher(doc)
    assert len(matches) == 1
    assert matches[0][1:] == (0, 3)


def test_matcher_phrase_matcher(en_vocab):
    words = ["Google", "Now"]
    doc = get_doc(en_vocab, words)

@@ -133,6 +142,22 @@ def test_matcher_phrase_matcher(en_vocab):
    assert len(matcher(doc)) == 1


def test_phrase_matcher_length(en_vocab):
    matcher = PhraseMatcher(en_vocab)
    assert len(matcher) == 0
    matcher.add('TEST', None, get_doc(en_vocab, ['test']))
    assert len(matcher) == 1
    matcher.add('TEST2', None, get_doc(en_vocab, ['test2']))
    assert len(matcher) == 2


def test_phrase_matcher_contains(en_vocab):
    matcher = PhraseMatcher(en_vocab)
    matcher.add('TEST', None, get_doc(en_vocab, ['test']))
    assert 'TEST' in matcher
    assert 'TEST2' not in matcher


def test_matcher_match_zero(matcher):
    words1 = 'He said , " some words " ...'.split()
    words2 = 'He said , " some three words " ...'.split()

@@ -212,3 +237,24 @@ def test_operator_combos(matcher):
        assert matches, (string, pattern_str)
    else:
        assert not matches, (string, pattern_str)


def test_matcher_end_zero_plus(matcher):
    '''Test matcher works when patterns end with * operator. (issue 1450)'''
    matcher = Matcher(matcher.vocab)
    matcher.add(
        "TSTEND",
        None,
        [
            {'ORTH': "a"},
            {'ORTH': "b", 'OP': "*"}
        ]
    )
    nlp = lambda string: Doc(matcher.vocab, words=string.split())
    assert len(matcher(nlp(u'a'))) == 1
    assert len(matcher(nlp(u'a b'))) == 1
    assert len(matcher(nlp(u'a b'))) == 1
    assert len(matcher(nlp(u'a c'))) == 1
    assert len(matcher(nlp(u'a b c'))) == 1
    assert len(matcher(nlp(u'a b b c'))) == 1
    assert len(matcher(nlp(u'a b b'))) == 1
@@ -35,18 +35,18 @@ def vocab(en_vocab, vectors):


def test_init_vectors_with_data(strings, data):
    v = Vectors(strings, data)
    v = Vectors(strings, data=data)
    assert v.shape == data.shape

def test_init_vectors_with_width(strings):
    v = Vectors(strings, 3)
    v = Vectors(strings, width=3)
    for string in strings:
        v.add(string)
    assert v.shape == (len(strings), 3)


def test_get_vector(strings, data):
    v = Vectors(strings, data)
    v = Vectors(strings, data=data)
    for string in strings:
        v.add(string)
    assert list(v[strings[0]]) == list(data[0])

@@ -56,7 +56,7 @@ def test_get_vector(strings, data):

def test_set_vector(strings, data):
    orig = data.copy()
    v = Vectors(strings, data)
    v = Vectors(strings, data=data)
    for string in strings:
        v.add(string)
    assert list(v[strings[0]]) == list(orig[0])
@@ -27,8 +27,9 @@ cdef class Tokenizer:
    cdef int _try_cache(self, hash_t key, Doc tokens) except -1
    cdef int _tokenize(self, Doc tokens, unicode span, hash_t key) except -1
    cdef unicode _split_affixes(self, Pool mem, unicode string, vector[LexemeC*] *prefixes,
                                vector[LexemeC*] *suffixes)
                                vector[LexemeC*] *suffixes, int* has_special)
    cdef int _attach_tokens(self, Doc tokens, unicode string,
        vector[LexemeC*] *prefixes, vector[LexemeC*] *suffixes) except -1

    cdef int _save_cached(self, const TokenC* tokens, hash_t key, int n) except -1
    cdef int _save_cached(self, const TokenC* tokens, hash_t key, int has_special,
                          int n) except -1
@@ -20,7 +20,8 @@ cdef class Tokenizer:
    """Segment text, and create Doc objects with the discovered segment
    boundaries.
    """
    def __init__(self, Vocab vocab, rules, prefix_search, suffix_search, infix_finditer, token_match=None):
    def __init__(self, Vocab vocab, rules=None, prefix_search=None,
                 suffix_search=None, infix_finditer=None, token_match=None):
        """Create a `Tokenizer`, to create `Doc` objects given unicode text.

        vocab (Vocab): A storage container for lexical types.

@@ -48,8 +49,9 @@ cdef class Tokenizer:
        self.infix_finditer = infix_finditer
        self.vocab = vocab
        self._rules = {}
        for chunk, substrings in sorted(rules.items()):
            self.add_special_case(chunk, substrings)
        if rules is not None:
            for chunk, substrings in sorted(rules.items()):
                self.add_special_case(chunk, substrings)

    def __reduce__(self):
        args = (self.vocab,

@@ -61,11 +63,8 @@ cdef class Tokenizer:
        return (self.__class__, args, None, None)

    cpdef Doc tokens_from_list(self, list strings):
        # TODO: deprecation warning
        return Doc(self.vocab, words=strings)
        #raise NotImplementedError(
        #    "Method deprecated in 1.0.\n"
        #    "Old: tokenizer.tokens_from_list(strings)\n"
        #    "New: Doc(tokenizer.vocab, words=strings)")

    @cython.boundscheck(False)
    def __call__(self, unicode string):

@@ -148,14 +147,18 @@ cdef class Tokenizer:
        cdef vector[LexemeC*] prefixes
        cdef vector[LexemeC*] suffixes
        cdef int orig_size
        cdef int has_special
        orig_size = tokens.length
        span = self._split_affixes(tokens.mem, span, &prefixes, &suffixes)
        span = self._split_affixes(tokens.mem, span, &prefixes, &suffixes,
                                   &has_special)
        self._attach_tokens(tokens, span, &prefixes, &suffixes)
        self._save_cached(&tokens.c[orig_size], orig_key, tokens.length - orig_size)
        self._save_cached(&tokens.c[orig_size], orig_key, has_special,
                          tokens.length - orig_size)

    cdef unicode _split_affixes(self, Pool mem, unicode string,
                                vector[const LexemeC*] *prefixes,
                                vector[const LexemeC*] *suffixes):
                                vector[const LexemeC*] *suffixes,
                                int* has_special):
        cdef size_t i
        cdef unicode prefix
        cdef unicode suffix

@@ -174,6 +177,7 @@ cdef class Tokenizer:
                if minus_pre and self._specials.get(hash_string(minus_pre)) != NULL:
                    string = minus_pre
                    prefixes.push_back(self.vocab.get(mem, prefix))
                    has_special[0] = 1
                    break
            if self.token_match and self.token_match(string):
                break

@@ -185,6 +189,7 @@ cdef class Tokenizer:
                if minus_suf and (self._specials.get(hash_string(minus_suf)) != NULL):
                    string = minus_suf
                    suffixes.push_back(self.vocab.get(mem, suffix))
                    has_special[0] = 1
                    break
            if pre_len and suf_len and (pre_len + suf_len) <= len(string):
                string = string[pre_len:-suf_len]

@@ -197,6 +202,7 @@ cdef class Tokenizer:
                string = minus_suf
                suffixes.push_back(self.vocab.get(mem, suffix))
            if string and (self._specials.get(hash_string(string)) != NULL):
                has_special[0] = 1
                break
        return string


@@ -256,11 +262,15 @@ cdef class Tokenizer:
            preinc(it)
            tokens.push_back(lexeme, False)

    cdef int _save_cached(self, const TokenC* tokens, hash_t key, int n) except -1:
    cdef int _save_cached(self, const TokenC* tokens, hash_t key,
                          int has_special, int n) except -1:
        cdef int i
        for i in range(n):
            if tokens[i].lex.id == 0:
                return 0
        # See https://github.com/explosion/spaCy/issues/1250
        if has_special:
            return 0
        cached = <_Cached*>self.mem.alloc(1, sizeof(_Cached))
        cached.length = n
        cached.is_lex = True
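The has_special flag exists so that substrings containing a special-case rule are never written to the tokenizer cache; previously a rule added via add_special_case() could be shadowed by a stale cache entry. A sketch of the behaviour it fixes, following the issue #1250 regression test added in this commit:

    nlp = English()
    nlp.tokenizer.add_special_case(u'reimbur',
        [{ORTH: u'reimbur', LEMMA: u'reimburse'}])
    # tokenizing the same text twice now yields the same result both times
    [w.lemma_ for w in nlp(u'reimbur, reimbur...')]   # ['reimburse', ',', 'reimburse', '...']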
@@ -21,7 +21,7 @@ from .token cimport Token
from .printers import parse_tree
from ..lexeme cimport Lexeme, EMPTY_LEXEME
from ..typedefs cimport attr_t, flags_t
from ..attrs import intify_attrs
from ..attrs import intify_attrs, IDS
from ..attrs cimport attr_id_t
from ..attrs cimport ID, ORTH, NORM, LOWER, SHAPE, PREFIX, SUFFIX, LENGTH, CLUSTER
from ..attrs cimport LENGTH, POS, LEMMA, TAG, DEP, HEAD, SPACY, ENT_IOB, ENT_TYPE

@@ -536,11 +536,15 @@ cdef class Doc:

    @cython.boundscheck(False)
    cpdef np.ndarray to_array(self, object py_attr_ids):
        """Given a list of M attribute IDs, export the tokens to a numpy
        `ndarray` of shape `(N, M)`, where `N` is the length of the document.
        The values will be 32-bit integers.
        """Export given token attributes to a numpy `ndarray`.

        attr_ids (list[int]): A list of attribute ID ints.
        If `attr_ids` is a sequence of M attributes, the output array will
        be of shape `(N, M)`, where N is the length of the `Doc`
        (in tokens). If `attr_ids` is a single attribute, the output shape will
        be (N,). You can specify attributes by integer ID (e.g. spacy.attrs.LEMMA)
        or string name (e.g. 'LEMMA' or 'lemma').

        attr_ids (list[]): A list of attributes (int IDs or string names).
        RETURNS (numpy.ndarray[long, ndim=2]): A feature matrix, with one row
            per word, and one column per attribute indicated in the input
            `attr_ids`.

@@ -553,15 +557,25 @@ cdef class Doc:
        """
        cdef int i, j
        cdef attr_id_t feature
        cdef np.ndarray[attr_t, ndim=1] attr_ids
        cdef np.ndarray[attr_t, ndim=2] output
        # Handle scalar/list inputs of strings/ints for py_attr_ids
        if not hasattr(py_attr_ids, '__iter__'):
            py_attr_ids = [py_attr_ids]

        # Allow strings, e.g. 'lemma' or 'LEMMA'
        py_attr_ids = [(IDS[id_.upper()] if hasattr(id_, 'upper') else id_)
                       for id_ in py_attr_ids]
        # Make an array from the attributes --- otherwise our inner loop is Python
        # dict iteration.
        cdef np.ndarray[attr_t, ndim=1] attr_ids = numpy.asarray(py_attr_ids, dtype=numpy.uint64)
        attr_ids = numpy.asarray(py_attr_ids, dtype=numpy.uint64)
        output = numpy.ndarray(shape=(self.length, len(attr_ids)), dtype=numpy.uint64)
        for i in range(self.length):
            for j, feature in enumerate(attr_ids):
                output[i, j] = get_token_attr(&self.c[i], feature)
        return output
        # Handle 1d case
        return output if len(attr_ids) >= 2 else output.reshape((self.length,))

    def count_by(self, attr_id_t attr_id, exclude=None, PreshCounter counts=None):
        """Count the frequencies of a given attribute. Produces a dict of
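With the string-name support, the following calls are now equivalent (sketch, mirroring test_doc_stringy_array_attr_of_token above):

    doc.to_array([ORTH, SHAPE])
    doc.to_array(('ORTH', 'SHAPE'))
    doc.to_array(('orth', 'shape'))
    doc.to_array(ORTH)   # single attribute: 1d array of shape (len(doc),)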
@@ -660,6 +674,54 @@ cdef class Doc:
        self.is_tagged = bool(TAG in attrs or POS in attrs)
        return self

    def get_lca_matrix(self):
        '''
        Calculates the lowest common ancestor matrix
        for a given spaCy doc.
        Returns LCA matrix containing the integer index
        of the ancestor, or -1 if no common ancestor is
        found (e.g. if the span excludes a necessary ancestor).
        Apologies about the recursion, but the
        impact on performance is negligible given
        the natural limitations on the depth of a typical human sentence.
        '''
        # Efficiency notes:
        #
        # We can easily improve the performance here by iterating in Cython.
        # To loop over the tokens in Cython, the easiest way is:
        # for token in doc.c[:doc.c.length]:
        #     head = token + token.head
        # Both token and head will be TokenC* here. The token.head attribute
        # is an integer offset.
        def __pairwise_lca(token_j, token_k, lca_matrix):
            if lca_matrix[token_j.i][token_k.i] != -2:
                return lca_matrix[token_j.i][token_k.i]
            elif token_j == token_k:
                lca_index = token_j.i
            elif token_k.head == token_j:
                lca_index = token_j.i
            elif token_j.head == token_k:
                lca_index = token_k.i
            elif (token_j.head == token_j) and (token_k.head == token_k):
                lca_index = -1
            else:
                lca_index = __pairwise_lca(token_j.head, token_k.head, lca_matrix)
            lca_matrix[token_j.i][token_k.i] = lca_index
            lca_matrix[token_k.i][token_j.i] = lca_index

            return lca_index

        lca_matrix = numpy.empty((len(self), len(self)), dtype=numpy.int32)
        lca_matrix.fill(-2)
        for j in range(len(self)):
            token_j = self[j]
            for k in range(j, len(self)):
                token_k = self[k]
                lca_matrix[j][k] = __pairwise_lca(token_j, token_k, lca_matrix)
                lca_matrix[k][j] = lca_matrix[j][k]

        return lca_matrix

    def to_disk(self, path, **exclude):
        """Save the current state to a directory.
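A usage sketch, taken from the test_lowest_common_ancestor test added in this commit (heads constructed via the test helper get_doc):

    doc = get_doc(vocab, ['the', 'lazy', 'dog', 'slept'], heads=[2, 1, 1, 0])
    lca = doc.get_lca_matrix()
    lca[1, 1]   # 1: a token is its own lowest common ancestor
    lca[0, 1]   # 2: 'the' and 'lazy' meet at 'dog'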
@ -129,6 +129,7 @@ cdef class Span:
    def _(self):
        return Underscore(Underscore.span_extensions, self,
                          start=self.start_char, end=self.end_char)

    def as_doc(self):
        '''Create a Doc object view of the Span's data.

@ -177,6 +178,56 @@ cdef class Span:
            return 0.0
        return numpy.dot(self.vector, other.vector) / (self.vector_norm * other.vector_norm)

    def get_lca_matrix(self):
        '''
        Calculates the lowest common ancestor matrix
        for a given spaCy span.
        Returns LCA matrix containing the integer index
        of the ancestor, or -1 if no common ancestor is
        found (e.g. if the span excludes a necessary ancestor).
        Apologies about the recursion, but the
        impact on performance is negligible given
        the natural limitations on the depth of a typical human sentence.
        '''

        def __pairwise_lca(token_j, token_k, lca_matrix, margins):
            offset = margins[0]
            token_k_head = token_k.head if token_k.head.i in range(*margins) else token_k
            token_j_head = token_j.head if token_j.head.i in range(*margins) else token_j
            token_j_i = token_j.i - offset
            token_k_i = token_k.i - offset

            if lca_matrix[token_j_i][token_k_i] != -2:
                return lca_matrix[token_j_i][token_k_i]
            elif token_j == token_k:
                lca_index = token_j_i
            elif token_k_head == token_j:
                lca_index = token_j_i
            elif token_j_head == token_k:
                lca_index = token_k_i
            elif (token_j_head == token_j) and (token_k_head == token_k):
                lca_index = -1
            else:
                lca_index = __pairwise_lca(token_j_head, token_k_head, lca_matrix, margins)

            lca_matrix[token_j_i][token_k_i] = lca_index
            lca_matrix[token_k_i][token_j_i] = lca_index

            return lca_index

        lca_matrix = numpy.empty((len(self), len(self)), dtype=numpy.int32)
        lca_matrix.fill(-2)
        margins = [self.start, self.end]

        for j in range(len(self)):
            token_j = self[j]
            for k in range(len(self)):
                token_k = self[k]
                lca_matrix[j][k] = __pairwise_lca(token_j, token_k, lca_matrix, margins)
                lca_matrix[k][j] = lca_matrix[j][k]

        return lca_matrix

    cpdef np.ndarray to_array(self, object py_attr_ids):
        """Given a list of M attribute IDs, export the tokens to a numpy
        `ndarray` of shape `(N, M)`, where `N` is the length of the document.
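Unlike the `Doc` version, the span variant works in span-relative coordinates: `margins = [self.start, self.end]`, heads falling outside the span are clamped to the token itself, and indices in the returned matrix are offsets from `span.start`. A short sketch under the same assumptions as above:

    doc = nlp(u'I like New York in Autumn.')
    span = doc[2:4]                # "New York"
    lca = span.get_lca_matrix()
    assert lca.shape == (2, 2)     # one row/column per span token
    assert lca[0][0] == 0          # indices are relative to the span start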
@ -127,6 +127,9 @@ cdef class Token:
        i (int): The relative position of the token to get. Defaults to 1.
        RETURNS (Token): The token at position `self.doc[self.i+i]`.
        """
        if self.i+i < 0 or (self.i+i >= len(self.doc)):
            msg = "Error accessing doc[%d].nbor(%d), for doc of length %d"
            raise IndexError(msg % (self.i, i, len(self.doc)))
        return self.doc[self.i+i]

    def similarity(self, other):
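With the new bounds check, walking off either end of the `Doc` raises a clear `IndexError` instead of silently wrapping around via Python's negative indexing. For example (with `nlp` as above):

    doc = nlp(u'Give it back')
    assert doc[0].nbor().text == 'it'    # i defaults to 1
    try:
        doc[0].nbor(-1)                  # out of range for doc[0]
    except IndexError:
        pass                             # raised by the new bounds check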
@ -32,22 +32,24 @@ cdef class Vectors:
    cdef public object keys
    cdef public int i

    def __init__(self, strings, data_or_width=0):
    def __init__(self, strings, width=0, data=None):
        if isinstance(strings, StringStore):
            self.strings = strings
        else:
            self.strings = StringStore()
            for string in strings:
                self.strings.add(string)
        if isinstance(data_or_width, int):
            self.data = data = numpy.zeros((len(strings), data_or_width),
                                           dtype='f')
        if data is not None:
            self.data = numpy.asarray(data, dtype='f')
        else:
            data = data_or_width
            self.data = numpy.zeros((len(self.strings), width), dtype='f')
        self.i = 0
        self.data = data
        self.key2row = {}
        self.keys = np.ndarray((self.data.shape[0],), dtype='uint64')
        self.keys = numpy.zeros((self.data.shape[0],), dtype='uint64')
        for i, string in enumerate(self.strings):
            if i >= self.data.shape[0]:
                break
            self.add(self.strings[string], self.data[i])

    def __reduce__(self):
        return (Vectors, (self.strings, self.data))
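The constructor now separates the two use cases that `data_or_width` used to overload: an explicit `width` for an empty table, or a `data` array to wrap. A sketch of both (imports as in the spaCy 2.x API):

    import numpy
    from spacy.strings import StringStore
    from spacy.vectors import Vectors

    vectors = Vectors([u'cat'], width=300)          # empty 300-d table, keyed by 'cat'
    table = numpy.zeros((3, 300), dtype='f')
    vectors = Vectors(StringStore(), data=table)    # wrap an existing array instead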

@ -62,12 +62,9 @@ cdef class Vocab:
        if strings:
            for string in strings:
                _ = self[string]
        for name in tag_map.keys():
            if name:
                self.strings.add(name)
        self.lex_attr_getters = lex_attr_getters
        self.morphology = Morphology(self.strings, tag_map, lemmatizer)
        self.vectors = Vectors(self.strings)
        self.vectors = Vectors(self.strings, width=0)

    property lang:
        def __get__(self):
@ -255,7 +252,7 @@ cdef class Vocab:
        """
        if new_dim is None:
            new_dim = self.vectors.data.shape[1]
        self.vectors = Vectors(self.strings, new_dim)
        self.vectors = Vectors(self.strings, width=new_dim)

    def get_vector(self, orth):
        """Retrieve a vector for a word in the vocabulary.
@ -338,7 +335,7 @@ cdef class Vocab:
        if self.vectors is None:
            return None
        else:
            return self.vectors.to_bytes(exclude='strings.json')
            return self.vectors.to_bytes()

    getters = OrderedDict((
        ('strings', lambda: self.strings.to_bytes()),
@ -358,7 +355,7 @@ cdef class Vocab:
        if self.vectors is None:
            return None
        else:
            return self.vectors.from_bytes(b, exclude='strings')
            return self.vectors.from_bytes(b)

    setters = OrderedDict((
        ('strings', lambda b: self.strings.from_bytes(b)),
        ('lexemes', lambda b: self.lexemes_from_bytes(b)),
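Dropping the `exclude` arguments means the vector table's own strings now round-trip together with the rest of the vocabulary. A quick sanity check of the byte-level round trip (a sketch against the spaCy 2.x serialization API):

    from spacy.vocab import Vocab

    vocab = Vocab()
    data = vocab.to_bytes()
    vocab2 = Vocab().from_bytes(data)   # restores strings, lexemes and vectors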
@ -400,6 +397,7 @@ cdef class Vocab:
        cdef int j = 0
        cdef SerializedLexemeC lex_data
        chunk_size = sizeof(lex_data.data)
        cdef void* ptr
        cdef unsigned char* bytes_ptr = bytes_data
        for i in range(0, len(bytes_data), chunk_size):
            lexeme = <LexemeC*>self.mem.alloc(1, sizeof(LexemeC))
@ -407,6 +405,9 @@ cdef class Vocab:
                lex_data.data[j] = bytes_ptr[i+j]
            Lexeme.c_from_bytes(lexeme, lex_data)

            ptr = self.strings._map.get(lexeme.orth)
            if ptr == NULL:
                continue
            py_str = self.strings[lexeme.orth]
            assert self.strings[py_str] == lexeme.orth, (py_str, lexeme.orth)
            key = hash_string(py_str)
@ -181,7 +181,7 @@ mixin codepen(slug, height, default_tab)
    alt_file - [string] alternative file path used in footer and link button
    height   - [integer] height of code preview in px

mixin github(repo, file, alt_file, height)
mixin github(repo, file, alt_file, height, language)
    - var branch = ALPHA ? "develop" : "master"
    - var height = height || 250
@ -1,6 +1,6 @@
//- 💫 DOCS > API > ANNOTATION > BILUO

+table([ "Tag", "Description" ])
+table(["Tag", "Description"])
    +row
        +cell #[code #[span.u-color-theme B] EGIN]
        +cell The first token of a multi-token entity.

@ -37,6 +37,10 @@
        +cell #[code WORK_OF_ART]
        +cell Titles of books, songs, etc.

    +row
        +cell #[code LAW]
        +cell Named documents made into laws.

    +row
        +cell #[code LANGUAGE]
        +cell Any named language.
46
website/api/_annotation/_training.jade
Normal file

@ -0,0 +1,46 @@
//- 💫 DOCS > API > ANNOTATION > TRAINING

p
    | spaCy takes training data in JSON format. The built-in
    | #[+api("cli#convert") #[code convert]] command helps you convert the
    | #[code .conllu] format used by the
    | #[+a("https://github.com/UniversalDependencies") Universal Dependencies corpora]
    | to spaCy's training format.

+aside("Annotating entities")
    | Named entities are provided in the #[+a("/api/annotation#biluo") BILUO]
    | notation. Tokens outside an entity are set to #[code "O"] and tokens
    | that are part of an entity are set to the entity label, prefixed by the
    | BILUO marker. For example #[code "B-ORG"] describes the first token of
    | a multi-token #[code ORG] entity and #[code "U-PERSON"] a single
    | token representing a #[code PERSON] entity.

+code("Example structure").
    [{
        "id": int,                  # ID of the document within the corpus
        "paragraphs": [{            # list of paragraphs in the corpus
            "raw": string,          # raw text of the paragraph
            "sentences": [{         # list of sentences in the paragraph
                "tokens": [{        # list of tokens in the sentence
                    "id": int,      # index of the token in the document
                    "dep": string,  # dependency label
                    "head": int,    # offset of token head relative to token index
                    "tag": string,  # part-of-speech tag
                    "orth": string, # verbatim text of the token
                    "ner": string   # BILUO label, e.g. "O" or "B-ORG"
                }],
                "brackets": [{      # phrase structure (NOT USED by current models)
                    "first": int,   # index of first token
                    "last": int,    # index of last token
                    "label": string # phrase label
                }]
            }]
        }]
    }]

p
    | Here's an example of dependencies, part-of-speech tags and named
    | entities, taken from the English Wall Street Journal portion of the Penn
    | Treebank:

+github("spacy", "examples/training/training-data.json", false, false, "json")
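To make the shape concrete, here is a hypothetical minimal corpus with a single one-sentence document, written out with plain json (field values are illustrative, not taken from the spaCy repo; "train.json" is a made-up filename):

    import json

    corpus = [{
        "id": 0,
        "paragraphs": [{
            "raw": "I like London.",
            "sentences": [{
                "tokens": [
                    {"id": 0, "orth": "I", "tag": "PRP", "head": 1, "dep": "nsubj", "ner": "O"},
                    {"id": 1, "orth": "like", "tag": "VBP", "head": 0, "dep": "ROOT", "ner": "O"},
                    {"id": 2, "orth": "London", "tag": "NNP", "head": -1, "dep": "dobj", "ner": "U-GPE"},
                    {"id": 3, "orth": ".", "tag": ".", "head": -2, "dep": "punct", "ner": "O"}
                ],
                "brackets": []
            }]
        }]
    }]

    with open("train.json", "w") as f:
        json.dump(corpus, f, indent=4)   # heads are offsets, e.g. -1 = previous token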
@ -154,13 +154,16 @@

    "tokenizer": {
        "title": "Tokenizer",
        "teaser": "Segment text into words, punctuation marks etc.",
        "tag": "class",
        "source": "spacy/tokenizer.pyx"
    },

    "lemmatizer": {
        "title": "Lemmatizer",
        "tag": "class"
        "teaser": "Assign the base forms of words.",
        "tag": "class",
        "source": "spacy/lemmatizer.py"
    },

    "tagger": {
@ -101,31 +101,4 @@ p This document describes the target annotations spaCy is trained to predict.

+section("training")
    +h(2, "json-input") JSON input format for training

    +under-construction

    p spaCy takes training data in the following format:

    +code("Example structure").
        doc: {
            id: string,
            paragraphs: [{
                raw: string,
                sents: [int],
                tokens: [{
                    start: int,
                    tag: string,
                    head: int,
                    dep: string
                }],
                ner: [{
                    start: int,
                    end: int,
                    label: string
                }],
                brackets: [{
                    start: int,
                    end: int,
                    label: string
                }]
            }]
        }

    include _annotation/_training
@ -336,28 +336,40 @@ p
+tag method

p
    | Export the document annotations to a numpy array of shape #[code N*M]
    | where #[code N] is the length of the document and #[code M] is the number
    | of attribute IDs to export. The values will be 32-bit integers.
    | Export given token attributes to a numpy #[code ndarray].
    | If #[code attr_ids] is a sequence of #[code M] attributes,
    | the output array will be of shape #[code (N, M)], where #[code N]
    | is the length of the #[code Doc] (in tokens). If #[code attr_ids] is
    | a single attribute, the output shape will be #[code (N,)]. You can
    | specify attributes by integer ID (e.g. #[code spacy.attrs.LEMMA])
    | or string name (e.g. 'LEMMA' or 'lemma'). The values will be 64-bit
    | integers.

+aside-code("Example").
    from spacy.attrs import LOWER, POS, ENT_TYPE, IS_ALPHA
    doc = nlp(text)
    # All strings mapped to integers, for easy export to numpy
    np_array = doc.to_array([LOWER, POS, ENT_TYPE, IS_ALPHA])
    np_array = doc.to_array("POS")

+table(["Name", "Type", "Description"])
    +row
        +cell #[code attr_ids]
        +cell list
        +cell A list of attribute ID ints.
        +cell list or int or string
        +cell
            | A list of attributes (int IDs or string names) or
            | a single attribute (int ID or string name).

    +row("foot")
        +cell returns
        +cell #[code.u-break numpy.ndarray[ndim=2, dtype='int32']]
        +cell
            | #[code.u-break numpy.ndarray[ndim=2, dtype='uint64']] or
            | #[code.u-break numpy.ndarray[ndim=1, dtype='uint64']]
        +cell
            | The exported attributes as a 2D numpy array, with one row per
            | token and one column per attribute.
            | token and one column per attribute (when #[code attr_ids] is a
            | list), or as a 1D numpy array, with one item per attribute (when
            | #[code attr_ids] is a single value).

+h(2, "from_array") Doc.from_array
+tag method
@ -229,6 +229,7 @@ p
    +cell Config parameters.

+h(2, "preprocess_gold") Language.preprocess_gold
+tag method

p
    | Can be called before training to pre-process gold data. By default, it
@ -609,6 +610,14 @@ p Load state from a binary string.
            | Custom meta data for the Language class. If a model is loaded,
            | contains meta data of the model.

    +row
        +cell #[code path]
        +tag-new(2)
        +cell #[code Path]
        +cell
            | Path to the model data directory, if a model is loaded. Otherwise
            | #[code None].

+h(2, "class-attributes") Class attributes

+table(["Name", "Type", "Description"])
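The new `path` attribute documented here makes it easy to locate a loaded model's data directory on disk. A small sketch (assuming an English model installed as 'en'):

    import spacy

    nlp = spacy.load('en')
    print(nlp.path)   # pathlib.Path to the model data directory, or None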
@ -2,4 +2,159 @@

include ../_includes/_mixins

+under-construction
p
    | The #[code Lemmatizer] supports simple part-of-speech-sensitive suffix
    | rules and lookup tables.

+h(2, "init") Lemmatizer.__init__
+tag method

p Create a #[code Lemmatizer].

+aside-code("Example").
    from spacy.lemmatizer import Lemmatizer
    lemmatizer = Lemmatizer()

+table(["Name", "Type", "Description"])
    +row
        +cell #[code index]
        +cell dict / #[code None]
        +cell Inventory of lemmas in the language.

    +row
        +cell #[code exceptions]
        +cell dict / #[code None]
        +cell Mapping of string forms to lemmas that bypass the #[code rules].

    +row
        +cell #[code rules]
        +cell dict / #[code None]
        +cell List of suffix rewrite rules.

    +row
        +cell #[code lookup]
        +cell dict / #[code None]
        +cell Lookup table mapping strings to their lemmas.

    +row("foot")
        +cell returns
        +cell #[code Lemmatizer]
        +cell The newly created object.

+h(2, "call") Lemmatizer.__call__
+tag method

p Lemmatize a string.

+aside-code("Example").
    from spacy.lemmatizer import Lemmatizer
    from spacy.lang.en import LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES
    lemmatizer = Lemmatizer(LEMMA_INDEX, LEMMA_EXC, LEMMA_RULES)
    lemmas = lemmatizer(u'ducks', u'NOUN')
    assert lemmas == [u'duck']

+table(["Name", "Type", "Description"])
    +row
        +cell #[code string]
        +cell unicode
        +cell The string to lemmatize, e.g. the token text.

    +row
        +cell #[code univ_pos]
        +cell unicode / int
        +cell The token's universal part-of-speech tag.

    +row
        +cell #[code morphology]
        +cell dict / #[code None]
        +cell
            | Morphological features following the
            | #[+a("http://universaldependencies.org/") Universal Dependencies]
            | scheme.

    +row("foot")
        +cell returns
        +cell list
        +cell The available lemmas for the string.

+h(2, "lookup") Lemmatizer.lookup
+tag method
+tag-new(2)

p
    | Look up a lemma in the lookup table, if available. If no lemma is found,
    | the original string is returned. Languages can provide a
    | #[+a("/usage/adding-languages#lemmatizer") lookup table] via the
    | #[code lemma_lookup] variable, set on the individual #[code Language]
    | class.

+aside-code("Example").
    lookup = {u'going': u'go'}
    lemmatizer = Lemmatizer(lookup=lookup)
    assert lemmatizer.lookup(u'going') == u'go'

+table(["Name", "Type", "Description"])
    +row
        +cell #[code string]
        +cell unicode
        +cell The string to look up.

    +row("foot")
        +cell returns
        +cell unicode
        +cell The lemma if the string was found, otherwise the original string.

+h(2, "is_base_form") Lemmatizer.is_base_form
+tag method

p
    | Check whether we're dealing with an uninflected paradigm, so we can
    | avoid lemmatization entirely.

+aside-code("Example").
    pos = 'verb'
    morph = {'VerbForm': 'inf'}
    is_base_form = lemmatizer.is_base_form(pos, morph)
    assert is_base_form == True

+table(["Name", "Type", "Description"])
    +row
        +cell #[code univ_pos]
        +cell unicode / int
        +cell The token's universal part-of-speech tag.

    +row
        +cell #[code morphology]
        +cell dict
        +cell The token's morphological features.

    +row("foot")
        +cell returns
        +cell bool
        +cell
            | Whether the token's part-of-speech tag and morphological features
            | describe a base form.

+h(2, "attributes") Attributes

+table(["Name", "Type", "Description"])
    +row
        +cell #[code index]
        +cell dict / #[code None]
        +cell Inventory of lemmas in the language.

    +row
        +cell #[code exc]
        +cell dict / #[code None]
        +cell Mapping of string forms to lemmas that bypass the #[code rules].

    +row
        +cell #[code rules]
        +cell dict / #[code None]
        +cell List of suffix rewrite rules.

    +row
        +cell #[code lookup_table]
        +tag-new(2)
        +cell dict / #[code None]
        +cell The lemma lookup table, if available.
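To make the expected data shapes concrete, here is a hypothetical toy setup wiring the three rule tables together (illustrative data, not the shipped English tables):

    from spacy.lemmatizer import Lemmatizer

    index = {'noun': {'duck'}}                   # known lemmas, per POS
    exceptions = {'noun': {'geese': ['goose']}}  # overrides that bypass the rules
    rules = {'noun': [['s', '']]}                # suffix rewrite: strip a plural 's'
    lemmatizer = Lemmatizer(index, exceptions, rules)
    assert lemmatizer(u'ducks', u'noun') == [u'duck']
    assert lemmatizer(u'geese', u'noun') == [u'goose']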
@ -284,7 +284,7 @@ p Retokenize the document, such that the span is merged into a single token.

+aside-code("Example").
    doc = nlp(u'I like New York in Autumn.')
    span = doc[2:3]
    span = doc[2:4]
    span.merge()
    assert len(doc) == 6
    assert doc[2].text == 'New York'
@ -302,6 +302,25 @@ p Retokenize the document, such that the span is merged into a single token.
        +cell #[code Token]
        +cell The newly merged token.

+h(2, "as_doc") Span.as_doc

p
    | Create a #[code Doc] object view of the #[code Span]'s data. Mostly
    | useful for C-typed interfaces.

+aside-code("Example").
    doc = nlp(u'I like New York in Autumn.')
    span = doc[2:4]
    doc2 = span.as_doc()
    assert doc2.text == 'New York'

+table(["Name", "Type", "Description"])
    +row("foot")
        +cell returns
        +cell #[code Doc]
        +cell A #[code Doc] object of the #[code Span]'s content.


+h(2, "root") Span.root
+tag property
+tag-model("parse")
@ -586,6 +586,16 @@ p The L2 norm of the token's vector representation.
        +cell bool
        +cell Is the token punctuation?

    +row
        +cell #[code is_left_punct]
        +cell bool
        +cell Is the token a left punctuation mark, e.g. #[code (]?

    +row
        +cell #[code is_right_punct]
        +cell bool
        +cell Is the token a right punctuation mark, e.g. #[code )]?

    +row
        +cell #[code is_space]
        +cell bool

@ -593,6 +603,16 @@ p The L2 norm of the token's vector representation.
            | Does the token consist of whitespace characters? Equivalent to
            | #[code token.text.isspace()].

    +row
        +cell #[code is_bracket]
        +cell bool
        +cell Is the token a bracket?

    +row
        +cell #[code is_quote]
        +cell bool
        +cell Is the token a quotation mark?

    +row
        +cell #[code like_url]
        +cell bool
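A quick illustration of the new punctuation flags (using the English tokenizer, which splits the brackets off as separate tokens; `nlp` as in the earlier sketches):

    doc = nlp(u'(Hello)')
    assert doc[0].is_left_punct and doc[0].is_bracket
    assert doc[2].is_right_punct and doc[2].is_bracket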
@ -12,7 +12,7 @@ p

p
    | Create a new vector store. To keep the vector table empty, pass
    | #[code data_or_width=0]. You can also create the vector table and add
    | #[code width=0]. You can also create the vector table and add
    | vectors one by one, or set the vector values directly on initialisation.

+aside-code("Example").

@ -21,11 +21,11 @@ p

    empty_vectors = Vectors(StringStore())

    vectors = Vectors([u'cat'], 300)
    vectors = Vectors([u'cat'], width=300)
    vectors[u'cat'] = numpy.random.uniform(-1, 1, (300,))

    vector_table = numpy.zeros((3, 300), dtype='f')
    vectors = Vectors(StringStore(), vector_table)
    vectors = Vectors(StringStore(), data=vector_table)

+table(["Name", "Type", "Description"])
    +row

@ -36,9 +36,12 @@ p
    | that maps strings to hash values, and vice versa.

    +row
        +cell #[code data_or_width]
        +cell #[code.u-break numpy.ndarray[ndim=1, dtype='float32']] or int
        +cell Vector data or number of dimensions.
        +cell #[code data]
        +cell #[code.u-break numpy.ndarray[ndim=1, dtype='float32']]
        +cell The vector data.

    +row
        +cell #[code width]
        +cell int
        +cell Number of dimensions.

    +row("foot")
        +cell returns
@ -63,7 +63,6 @@ code
    padding: 0.2rem 0.4rem
    border-radius: 0.25rem
    font-family: $font-code
    white-space: nowrap
    margin: 0
    box-decoration-break: clone
    white-space: nowrap
@ -14,9 +14,6 @@
    width: 100%
    box-shadow: $box-shadow

    //@include breakpoint(min, md)
    //    position: fixed

    &.is-fixed
        animation: slideInDown 0.5s ease-in-out
        position: fixed
@ -79,12 +79,12 @@ include _includes/_mixins
+h(2) Features
+list
    +item Non-destructive #[strong tokenization]
    +item #[strong Named entity] recognition
    +item Support for #[strong #{LANG_COUNT}+ languages]
    +item #[strong #{MODEL_COUNT} statistical models] for #{MODEL_LANG_COUNT} languages
    +item Pre-trained #[strong word vectors]
    +item Easy #[strong deep learning] integration
    +item Part-of-speech tagging
    +item #[strong Named entity] recognition
    +item Labelled dependency parsing
    +item Syntax-driven sentence segmentation
    +item Built in #[strong visualizers] for syntax and NER
@ -1,3 +1,7 @@
//- 💫 DOCS > USAGE > TRAINING > TAGGER & PARSER

+under-construction

+h(3, "training-json") JSON format for training

include ../../api/_annotation/_training
@ -497,6 +497,7 @@ p

+code-new.
    nlp = spacy.load('en', disable=['tagger', 'ner'])
    doc = nlp(u"I don't want parsed", disable=['parser'])
    nlp.remove_pipe('parser')

+code-old.
    nlp = spacy.load('en', tagger=False, entity=False)