Merge branch 'master' into develop

ines committed on 2017-03-20 18:02:47 +01:00
commit da7cc4f9b8
232 changed files with 20197 additions and 908983 deletions


@@ -87,7 +87,7 @@ U.S. Federal law. Any choice of law rules will not apply.
7. Please place an “x” on one of the applicable statement below. Please do NOT
mark both statements:
* [ ] I am signing on behalf of myself as an individual and no other person
* [x] I am signing on behalf of myself as an individual and no other person
or entity, including my employer, has or will have rights with respect my
contributions.
@@ -98,9 +98,9 @@ mark both statements:
| Field | Entry |
|------------------------------- | -------------------- |
| Name | |
| Name | Shuvanon Razik |
| Company name (if applicable) | |
| Title or role (if applicable) | |
| Date | |
| GitHub username | |
| Date | 3/12/2017 |
| GitHub username | shuvanon |
| Website (optional) | |


@@ -7,7 +7,8 @@ http://stackoverflow.com/questions/tagged/spacy -->
## Your Environment
<!-- Include details of your environment -->
<!-- Include details of your environment. If you're using spaCy 1.7+, you can also type
`python -m spacy info --markdown` and copy-paste the result here.-->
* Operating System:
* Python Version Used:
* spaCy Version Used:

.gitignore (vendored): 4 changes

@@ -105,3 +105,7 @@ website/package.json
website/announcement.jade
website/www/
website/.gitignore
# Python virtualenv
venv
venv/*


@@ -6,7 +6,7 @@ group: edge
python:
- "2.7"
- "3.5"
- "3.6"
os:
- linux


@@ -19,9 +19,13 @@ First, [do a quick search](https://github.com/issues?q=+is%3Aissue+user%3Aexplos
If you're looking for help with your code, consider posting a question on [StackOverflow](http://stackoverflow.com/questions/tagged/spacy) instead. If you tag it `spacy` and `python`, more people will see it and hopefully be able to help.
When opening an issue, use a descriptive title and include your environment (operating system, Python version, spaCy version). Our [issue template](https://github.com/explosion/spaCy/issues/new) helps you remember the most important details to include. **Pro tip:** If you need to share long blocks of code or logs, you can wrap them in `<details>` and `</details>`. This [collapses the content](https://developer.mozilla.org/en/docs/Web/HTML/Element/details) so it only becomes visible on click, making the issue easier to read and follow.
When opening an issue, use a descriptive title and include your environment (operating system, Python version, spaCy version). Our [issue template](https://github.com/explosion/spaCy/issues/new) helps you remember the most important details to include. If you've discovered a bug, you can also submit a [regression test](#fixing-bugs) straight away. When you're opening an issue to report the bug, simply refer to your pull request in the issue body.
If you've discovered a bug, you can also submit a [regression test](#fixing-bugs) straight away. When you're opening an issue to report the bug, simply refer to your pull request in the issue body.
### Tips
* **Getting info about your spaCy installation and environment**: If you're using spaCy v1.7+, you can use the command line interface to print details and even format them as Markdown to copy-paste into GitHub issues: `python -m spacy info --markdown`.
* **Sharing long blocks of code or logs**: If you need to include long code, logs or tracebacks, you can wrap them in `<details>` and `</details>`. This [collapses the content](https://developer.mozilla.org/en/docs/Web/HTML/Element/details) so it only becomes visible on click, making the issue easier to read and follow.
### Issue labels
@@ -34,14 +38,13 @@ To distinguish issues that are opened by us, the maintainers, we usually add a
| [`install`](https://github.com/explosion/spaCy/labels/install) | Installation problems |
| [`performance`](https://github.com/explosion/spaCy/labels/performance) | Accuracy, speed and memory use problems |
| [`tests`](https://github.com/explosion/spaCy/labels/tests) | Missing or incorrect [tests](spacy/tests) |
| [`examples`](https://github.com/explosion/spaCy/labels/examples) | Issues related to the [examples](spacy/examples) |
| [`english`](https://github.com/explosion/spaCy/labels/english), [`german`](https://github.com/explosion/spaCy/labels/german) | Issues related to the specific languages, models and data |
| [`docs`](https://github.com/explosion/spaCy/labels/docs), [`examples`](https://github.com/explosion/spaCy/labels/examples) | Issues related to the [documentation](https://spacy.io/docs) and [examples](spacy/examples) |
| [`models`](https://github.com/explosion/spaCy/labels/models), [`english`](https://github.com/explosion/spaCy/labels/english), [`german`](https://github.com/explosion/spaCy/labels/german) | Issues related to the specific [models](https://github.com/explosion/spacy-models), languages and data |
| [`linux`](https://github.com/explosion/spaCy/labels/linux), [`osx`](https://github.com/explosion/spaCy/labels/osx), [`windows`](https://github.com/explosion/spaCy/labels/windows) | Issues related to the specific operating systems |
| [`pip`](https://github.com/explosion/spaCy/labels/pip), [`conda`](https://github.com/explosion/spaCy/labels/conda) | Issues related to the specific package managers |
| [`duplicate`](https://github.com/explosion/spaCy/labels/duplicate) | Duplicates, i.e. issues that have been reported before |
| [`meta`](https://github.com/explosion/spaCy/labels/meta) | Meta topics, e.g. repo organisation and issue management |
| [`help wanted`](https://github.com/explosion/spaCy/labels/help%20wanted) | Requests for contributions |
| [`help wanted (easy)`](https://github.com/explosion/spaCy/labels/help%20wanted%20%28easy%29) | Requests for contributions suitable for beginners |
| [`help wanted`](https://github.com/explosion/spaCy/labels/help%20wanted), [`help wanted (easy)`](https://github.com/explosion/spaCy/labels/help%20wanted%20%28easy%29) | Requests for contributions |
## Contributing to the code base


@@ -30,7 +30,8 @@ This is a list of everyone who has made significant contributions to spaCy, in a
* Raphaël Bournhonesque, [@raphael0202](https://github.com/raphael0202)
* Rob van Nieuwpoort, [@RvanNieuwpoort](https://github.com/RvanNieuwpoort)
* Sam Bozek, [@sambozek](https://github.com/sambozek)
* Sasho Savkov [@savkov](https://github.com/savkov)
* Sasho Savkov, [@savkov](https://github.com/savkov)
* Shuvanon Razik, [@shuvanon](https://github.com/shuvanon)
* Thomas Tanon, [@Tpt](https://github.com/Tpt)
* Tiago Rodrigues, [@TiagoMRodrigues](https://github.com/TiagoMRodrigues)
* Vsevolod Solovyov, [@vsolovyov](https://github.com/vsolovyov)


@@ -1,4 +1,3 @@
recursive-include include *.h
include buildbot.json
include LICENSE
include README.rst


@@ -8,7 +8,7 @@ English and German, as well as tokenization for Chinese, Spanish, Italian, Fren
Portuguese, Dutch, Swedish, Finnish, Hungarian and Bengali. It's commercial open-source
software, released under the MIT license.
💫 **Version 1.6 out now!** `Read the release notes here. <https://github.com/explosion/spaCy/releases/>`_
💫 **Version 1.7 out now!** `Read the release notes here. <https://github.com/explosion/spaCy/releases/>`_
.. image:: https://img.shields.io/travis/explosion/spaCy/master.svg?style=flat-square
:target: https://travis-ci.org/explosion/spaCy
@@ -37,32 +37,34 @@ software, released under the MIT license.
📖 Documentation
================
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Usage Workflows <https://spacy.io/docs/usage/>`_   | How to use spaCy and its features.              |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `API Reference <https://spacy.io/docs/api/>`_   | The detailed reference for spaCy's API. |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Tutorials <https://spacy.io/docs/usage/tutorials>`_ | End-to-end examples, with code you can modify and run. |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Showcase & Demos <https://spacy.io/docs/usage/showcase>`_ | Demos, libraries and products from the spaCy community. |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
| `Contribute <https://github.com/explosion/spaCy/blob/master/CONTRIBUTING.md>`_ | How to contribute to the spaCy project and code base. |
+--------------------------------------------------------------------------------+---------------------------------------------------------+
=================== ===
`Usage Workflows`_ How to use spaCy and its features.
`API Reference`_ The detailed reference for spaCy's API.
`Tutorials`_ End-to-end examples, with code you can modify and run.
`Showcase & Demos`_ Demos, libraries and products from the spaCy community.
`Contribute`_ How to contribute to the spaCy project and code base.
=================== ===
.. _Usage Workflows: https://spacy.io/docs/usage/
.. _API Reference: https://spacy.io/docs/api/
.. _Tutorials: https://spacy.io/docs/usage/tutorials
.. _Showcase & Demos: https://spacy.io/docs/usage/showcase
.. _Contribute: https://github.com/explosion/spaCy/blob/master/CONTRIBUTING.md
💬 Where to ask questions
==========================
+---------------------------+------------------------------------------------------------------------------------------------------------+
| **Bug reports**     | `GitHub Issue tracker <https://github.com/explosion/spaCy/issues>`_                                     |
+---------------------------+------------------------------------------------------------------------------------------------------------+
| **Usage questions**   | `StackOverflow <http://stackoverflow.com/questions/tagged/spacy>`_, `Reddit usergroup                     |
| | <https://www.reddit.com/r/spacynlp>`_, `Gitter chat <https://gitter.im/explosion/spaCy>`_ |
+---------------------------+------------------------------------------------------------------------------------------------------------+
| **General discussion** | `Reddit usergroup <https://www.reddit.com/r/spacynlp>`_, |
| | `Gitter chat <https://gitter.im/explosion/spaCy>`_  |
+---------------------------+------------------------------------------------------------------------------------------------------------+
| **Commercial support** | contact@explosion.ai                                                                                     |
+---------------------------+------------------------------------------------------------------------------------------------------------+
====================== ===
**Bug reports** `GitHub issue tracker`_
**Usage questions** `StackOverflow`_, `Gitter chat`_, `Reddit user group`_
**General discussion** `Gitter chat`_, `Reddit user group`_
**Commercial support** contact@explosion.ai
====================== ===
.. _GitHub issue tracker: https://github.com/explosion/spaCy/issues
.. _StackOverflow: http://stackoverflow.com/questions/tagged/spacy
.. _Gitter chat: https://gitter.im/explosion/spaCy
.. _Reddit user group: https://www.reddit.com/r/spacynlp
Features
========
@@ -85,7 +87,7 @@ Features
See `facts, figures and benchmarks <https://spacy.io/docs/api/>`_.
Top Performance
===============
---------------
* Fastest in the world: <50ms per document. No faster system has ever been
announced.
@@ -94,21 +96,22 @@ Top Performance
accurate systems are an order of magnitude slower or more.
Supports
========
--------
* CPython 2.6, 2.7, 3.3, 3.4, 3.5 (only 64 bit)
* macOS / OS X
* Linux
* Windows (Cygwin, MinGW, Visual Studio)
==================== ===
**Operating system** macOS / OS X, Linux, Windows (Cygwin, MinGW, Visual Studio)
**Python version** CPython 2.6, 2.7, 3.3, 3.4, 3.5. Only 64 bit.
**Package managers** `pip`_ (source packages only), `conda`_ (via ``conda-forge``)
==================== ===
.. _pip: https://pypi.python.org/pypi/spacy
.. _conda: https://anaconda.org/conda-forge/spacy
Install spaCy
=============
spaCy is compatible with **64-bit CPython 2.6+/3.3+** and runs on **Unix/Linux**,
**macOS/OS X** and **Windows**. The latest spaCy releases are available over
`pip <https://pypi.python.org/pypi/spacy>`_ (source packages only) and
`conda <https://anaconda.org/conda-forge/spacy>`_. Installation requires a working
build environment. See notes on Ubuntu, macOS/OS X and Windows for details.
Installation requires a working build environment. See notes on Ubuntu,
macOS/OS X and Windows for details.
pip
---
@@ -146,52 +149,76 @@ Improvements and pull requests to the recipe and setup are always appreciated.
Download models
===============
After installation you need to download a language model. Models for English
(``en``) and German (``de``) are available.
As of v1.7.0, models for spaCy can be installed as **Python packages**.
This means that they're a component of your application, just like any
other module. They're versioned and can be defined as a dependency in your
``requirements.txt``. Models can be installed from a download URL or
a local directory, manually or via pip. Their data can be located anywhere on
your file system. To make a model available to spaCy, all you need to do is
create a "shortcut link", an internal alias that tells spaCy where to find the
data files for a specific model name.
======================= ===
`spaCy Models`_ Available models, latest releases and direct download.
`Models Documentation`_ Detailed usage instructions.
======================= ===
.. _spaCy Models: https://github.com/explosion/spacy-models/releases/
.. _Models Documentation: https://spacy.io/docs/usage/models
.. code:: bash
python -m spacy.en.download all
python -m spacy.de.download all
# out-of-the-box: download best-matching default model
python -m spacy download en
The download command fetches about 1 GB of data which it installs
within the ``spacy`` package directory.
# download best-matching version of specific model for your spaCy installation
python -m spacy download en_core_web_md
Sometimes new releases require a new language model. Then you will have to
upgrade to a new model, too. You can also force re-downloading and installing a
new language model:
# pip install .tar.gz archive from path or URL
pip install /Users/you/en_core_web_md-1.2.0.tar.gz
pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_md-1.2.0/en_core_web_md-1.2.0.tar.gz
.. code:: bash
# set up shortcut link to load installed package as "en_default"
python -m spacy link en_core_web_md en_default
python -m spacy.en.download --force
# set up shortcut link to load local model as "my_amazing_model"
python -m spacy link /Users/you/data my_amazing_model
Download model to custom location
---------------------------------
You can specify where ``spacy.en.download`` and ``spacy.de.download`` download
the language model to using the ``--data-path`` or ``-d`` argument:
.. code:: bash
python -m spacy.en.download all --data-path /some/dir
If you choose to download to a custom location, you will need to tell spaCy where to load the model
from in order to use it. You can do this either by calling ``spacy.util.set_data_path()`` before
calling ``spacy.load()``, or by passing a ``path`` argument to the ``spacy.en.English`` or
``spacy.de.German`` constructors.
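A minimal sketch of the first option above, for the older (pre-v1.7) download mechanism; the directory ``/some/dir`` is purely illustrative:

.. code:: python

    import spacy
    from spacy import util

    # Point spaCy at the custom download location before loading the model.
    util.set_data_path('/some/dir')
    nlp = spacy.load('en')
    doc = nlp(u'This is a sentence.')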
Download models manually
Loading and using models
------------------------
As of v1.6, the models and word vectors are also available as direct downloads
from GitHub, attached to the `releases <https://github.com/explosion/spacy/releases>`_
as ``.tar.gz`` archives.
To load a model, use ``spacy.load()`` with the model's shortcut link:
To install the models manually, first find the default data path. You can use
``spacy.util.get_data_path()`` to find the directory where spaCy will look for
its models, or change the default data path with ``spacy.util.set_data_path()``.
Then simply unpack the archive and place the contained folder in that directory.
You can now load the models via ``spacy.load()``.
.. code:: python
import spacy
nlp = spacy.load('en_default')
doc = nlp(u'This is a sentence.')
If you've installed a model via pip, you can also ``import`` it directly and
then call its ``load()`` method with no arguments. This should also work for
older models in previous versions of spaCy.
.. code:: python
import spacy
import en_core_web_md
nlp = en_core_web_md.load()
doc = nlp(u'This is a sentence.')
📖 **For more info and examples, check out the** `models documentation <https://spacy.io/docs/usage/models>`_.
Support for older versions
--------------------------
If you're using an older version (v1.6.0 or below), you can still download and
install the old models from within spaCy using ``python -m spacy.en.download all``
or ``python -m spacy.de.download all``. The ``.tar.gz`` archives are also
`attached to the v1.6.0 release <https://github.com/explosion/spaCy/tree/v1.6.0>`_.
To download and install the models manually, unpack the archive, drop the
contained directory into ``spacy/data`` and load the model via ``spacy.load('en')``
or ``spacy.load('de')``.
Compile from source
===================
@@ -224,15 +251,12 @@ additionally installs developer dependencies such as Cython.
Instead of the above verbose commands, you can also use the following
`Fabric <http://www.fabfile.org/>`_ commands:
+---------------+--------------------------------------------------------------+
| ``fab env`` | Create ``virtualenv`` and delete previous one, if it exists. |
+---------------+--------------------------------------------------------------+
| ``fab make`` | Compile the source. |
+---------------+--------------------------------------------------------------+
| ``fab clean`` | Remove compiled objects, including the generated C++. |
+---------------+--------------------------------------------------------------+
| ``fab test`` | Run basic tests, aborting after first failure. |
+---------------+--------------------------------------------------------------+
============= ===
``fab env`` Create ``virtualenv`` and delete previous one, if it exists.
``fab make`` Compile the source.
``fab clean`` Remove compiled objects, including the generated C++.
``fab test`` Run basic tests, aborting after first failure.
============= ===
All commands assume that your ``virtualenv`` is located in a directory ``.env``.
If you're using a different directory, you can change it via the environment
@@ -284,374 +308,65 @@ and ``--model`` are optional and enable additional tests:
# make sure you are using recent pytest version
python -m pip install -U pytest
python -m pytest <spacy-directory> --vectors --model --slow
Changelog
=========
2017-01-16 `v1.6.0 <https://github.com/explosion/spaCy/releases/>`_: *Improvements to tokenizer and tests*
----------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**
* Updated token exception handling mechanism to allow the usage of arbitrary functions as token exception matchers.
* Improve how tokenizer exceptions for English contractions and punctuations are generated.
* Update language data for Hungarian and Swedish tokenization.
* Update to use `Thinc v6 <https://github.com/explosion/thinc/>`_ to prepare for `spaCy v2.0 <https://github.com/explosion/spaCy/projects/3>`_.
**🔴 Bug fixes**
* Fix issue `#326 <https://github.com/explosion/spaCy/issues/326>`_: Tokenizer is now more consistent and handles abbreviations correctly.
* Fix issue `#344 <https://github.com/explosion/spaCy/issues/344>`_: Tokenizer now handles URLs correctly.
* Fix issue `#483 <https://github.com/explosion/spaCy/issues/483>`_: Period after two or more uppercase letters is split off in tokenizer exceptions.
* Fix issue `#631 <https://github.com/explosion/spaCy/issues/631>`_: Add ``richcmp`` method to ``Token``.
* Fix issue `#718 <https://github.com/explosion/spaCy/issues/718>`_: Contractions with ``She`` are now handled correctly.
* Fix issue `#736 <https://github.com/explosion/spaCy/issues/736>`_: Times are now tokenized with correct string values.
* Fix issue `#743 <https://github.com/explosion/spaCy/issues/743>`_: ``Token`` is now hashable.
* Fix issue `#744 <https://github.com/explosion/spaCy/issues/744>`_: ``were`` and ``Were`` are now excluded correctly from contractions.
**📋 Tests**
* Modernise and reorganise all tests and remove model dependencies where possible.
* Improve test speed to ~20s for basic tests (from previously >80s) and ~100s including models (from previously >200s).
* Add fixtures for spaCy components and test utilities, e.g. to create ``Doc`` object manually.
* Add `documentation for tests <https://github.com/explosion/spaCy/tree/master/spacy/tests>`_ to explain conventions and organisation.
**👥 Contributors**
Thanks to `@oroszgy <https://github.com/oroszgy>`_, `@magnusburton <https://github.com/magnusburton>`_, `@guyrosin <https://github.com/guyrosin>`_ and `@danielhers <https://github.com/danielhers>`_ for the pull requests!
2016-12-27 `v1.5.0 <https://github.com/explosion/spaCy/releases/tag/v1.5.0>`_: *Alpha support for Swedish and Hungarian*
------------------------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**
* **NEW:** Alpha support for Swedish tokenization.
* **NEW:** Alpha support for Hungarian tokenization.
* Update language data for Spanish tokenization.
* Speed up tokenization when no data is preloaded by caching the first 10,000 vocabulary items seen.
**🔴 Bug fixes**
* List the ``language_data`` package in the ``setup.py``.
* Fix missing ``vec_path`` declaration that was failing if ``add_vectors`` was set.
* Allow ``Vocab`` to load without ``serializer_freqs``.
**📖 Documentation and examples**
* **NEW:** `spaCy Jupyter notebooks <https://github.com/explosion/spacy-notebooks>`_ repo: ongoing collection of easy-to-run spaCy examples and tutorials.
* Fix issue `#657 <https://github.com/explosion/spaCy/issues/657>`_: Generalise dependency parsing `annotation specs <https://spacy.io/docs/api/annotation>`_ beyond English.
* Fix various typos and inconsistencies.
**👥 Contributors**
Thanks to `@oroszgy <https://github.com/oroszgy>`_, `@magnusburton <https://github.com/magnusburton>`_, `@jmizgajski <https://github.com/jmizgajski>`_, `@aikramer2 <https://github.com/aikramer2>`_, `@fnorf <https://github.com/fnorf>`_ and `@bhargavvader <https://github.com/bhargavvader>`_ for the pull requests!
2016-12-18 `v1.4.0 <https://github.com/explosion/spaCy/releases/tag/v1.4.0>`_: *Improved language data and alpha Dutch support*
-------------------------------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**
* **NEW:** Alpha support for Dutch tokenization.
* Reorganise and improve format for language data.
* Add shared tag map, entity rules, emoticons and punctuation to language data.
* Convert entity rules, morphological rules and lemmatization rules from JSON to Python.
* Update language data for English, German, Spanish, French, Italian and Portuguese.
**🔴 Bug fixes**
* Fix issue `#649 <https://github.com/explosion/spaCy/issues/649>`_: Update and reorganise stop lists.
* Fix issue `#672 <https://github.com/explosion/spaCy/issues/672>`_: Make ``token.ent_iob_`` return unicode.
* Fix issue `#674 <https://github.com/explosion/spaCy/issues/674>`_: Add missing lemmas for contracted forms of "be" to ``TOKENIZER_EXCEPTIONS``.
* Fix issue `#683 <https://github.com/explosion/spaCy/issues/683>`_ ``Morphology`` class now supplies tag map value for the special space tag if it's missing.
* Fix issue `#684 <https://github.com/explosion/spaCy/issues/684>`_: Ensure ``spacy.en.English()`` loads the Glove vector data if available. Previously was inconsistent with behaviour of ``spacy.load('en')``.
* Fix issue `#685 <https://github.com/explosion/spaCy/issues/685>`_: Expand ``TOKENIZER_EXCEPTIONS`` with unicode apostrophe (``’``).
* Fix issue `#689 <https://github.com/explosion/spaCy/issues/689>`_: Correct typo in ``STOP_WORDS``.
* Fix issue `#691 <https://github.com/explosion/spaCy/issues/691>`_: Add tokenizer exceptions for "gonna" and "Gonna".
**⚠️ Backwards incompatibilities**
No changes to the public, documented API, but the previously undocumented language data and model initialisation processes have been refactored and reorganised. If you were relying on the ``bin/init_model.py`` script, see the new `spaCy Developer Resources <https://github.com/explosion/spacy-dev-resources>`_ repo. Code that references internals of the ``spacy.en`` or ``spacy.de`` packages should also be reviewed before updating to this version.
**📖 Documentation and examples**
* **NEW:** `"Adding languages" <https://spacy.io/docs/usage/adding-languages>`_ workflow.
* **NEW:** `"Part-of-speech tagging" <https://spacy.io/docs/usage/pos-tagging>`_ workflow.
* **NEW:** `spaCy Developer Resources <https://github.com/explosion/spacy-dev-resources>`_ repo scripts, tools and resources for developing spaCy.
* Fix various typos and inconsistencies.
**👥 Contributors**
Thanks to `@dafnevk <https://github.com/dafnevk>`_, `@jvdzwaan <https://github.com/jvdzwaan>`_, `@RvanNieuwpoort <https://github.com/RvanNieuwpoort>`_, `@wrvhage <https://github.com/wrvhage>`_, `@jaspb <https://github.com/jaspb>`_, `@savvopoulos <https://github.com/savvopoulos>`_ and `@davedwards <https://github.com/davedwards>`_ for the pull requests!
2016-12-03 `v1.3.0 <https://github.com/explosion/spaCy/releases/tag/v1.3.0>`_: *Improve API consistency*
--------------------------------------------------------------------------------------------------------
**✨ API improvements**
* Add ``Span.sentiment`` attribute.
* `#658 <https://github.com/explosion/spaCy/pull/658>`_: Add ``Span.noun_chunks`` iterator (thanks `@pokey <https://github.com/pokey>`_).
* `#642 <https://github.com/explosion/spaCy/pull/642>`_: Let ``--data-path`` be specified when running download.py scripts (thanks `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_).
* `#638 <https://github.com/explosion/spaCy/pull/638>`_: Add German stopwords (thanks `@souravsingh <https://github.com/souravsingh>`_).
* `#614 <https://github.com/explosion/spaCy/pull/614>`_: Fix ``PhraseMatcher`` to work with new ``Matcher`` (thanks `@sadovnychyi <https://github.com/sadovnychyi>`_).
**🔴 Bug fixes**
* Fix issue `#605 <https://github.com/explosion/spaCy/issues/605>`_: ``accept`` argument to ``Matcher`` now rejects matches as expected.
* Fix issue `#617 <https://github.com/explosion/spaCy/issues/617>`_: ``Vocab.load()`` now works with string paths, as well as ``Path`` objects.
* Fix issue `#639 <https://github.com/explosion/spaCy/issues/639>`_: Stop words in ``Language`` class now used as expected.
* Fix issues `#656 <https://github.com/explosion/spaCy/issues/656>`_, `#624 <https://github.com/explosion/spaCy/issues/624>`_: ``Tokenizer`` special-case rules now support arbitrary token attributes.
**📖 Documentation and examples**
* Add `"Customizing the tokenizer" <https://spacy.io/docs/usage/customizing-tokenizer>`_ workflow.
* Add `"Training the tagger, parser and entity recognizer" <https://spacy.io/docs/usage/training>`_ workflow.
* Add `"Entity recognition" <https://spacy.io/docs/usage/entity-recognition>`_ workflow.
* Fix various typos and inconsistencies.
**👥 Contributors**
Thanks to `@pokey <https://github.com/pokey>`_, `@ExplodingCabbage <https://github.com/ExplodingCabbage>`_, `@souravsingh <https://github.com/souravsingh>`_, `@sadovnychyi <https://github.com/sadovnychyi>`_, `@manojsakhwar <https://github.com/manojsakhwar>`_, `@TiagoMRodrigues <https://github.com/TiagoMRodrigues>`_, `@savkov <https://github.com/savkov>`_, `@pspiegelhalter <https://github.com/pspiegelhalter>`_, `@chenb67 <https://github.com/chenb67>`_, `@kylepjohnson <https://github.com/kylepjohnson>`_, `@YanhaoYang <https://github.com/YanhaoYang>`_, `@tjrileywisc <https://github.com/tjrileywisc>`_, `@dechov <https://github.com/dechov>`_, `@wjt <https://github.com/wjt>`_, `@jsmootiv <https://github.com/jsmootiv>`_ and `@blarghmatey <https://github.com/blarghmatey>`_ for the pull requests!
2016-11-04 `v1.2.0 <https://github.com/explosion/spaCy/releases/tag/v1.2.0>`_: *Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese*
------------------------------------------------------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**
* **NEW:** Support Chinese tokenization, via `Jieba <https://github.com/fxsjy/jieba>`_.
* **NEW:** Alpha support for French, Spanish, Italian and Portuguese tokenization.
**🔴 Bug fixes**
* Fix issue `#376 <https://github.com/explosion/spaCy/issues/376>`_: POS tags for "and/or" are now correct.
* Fix issue `#578 <https://github.com/explosion/spaCy/issues/578>`_: ``--force`` argument on download command now operates correctly.
* Fix issue `#595 <https://github.com/explosion/spaCy/issues/595>`_: Lemmatization corrected for some base forms.
* Fix issue `#588 <https://github.com/explosion/spaCy/issues/588>`_: `Matcher` now rejects empty patterns.
* Fix issue `#592 <https://github.com/explosion/spaCy/issues/592>`_: Added exception rule for tokenization of "Ph.D."
* Fix issue `#599 <https://github.com/explosion/spaCy/issues/599>`_: Empty documents now considered tagged and parsed.
* Fix issue `#600 <https://github.com/explosion/spaCy/issues/600>`_: Add missing ``token.tag`` and ``token.tag_`` setters.
* Fix issue `#596 <https://github.com/explosion/spaCy/issues/596>`_: Added missing unicode import when compiling regexes that led to incorrect tokenization.
* Fix issue `#587 <https://github.com/explosion/spaCy/issues/587>`_: Resolved bug that caused ``Matcher`` to sometimes segfault.
* Fix issue `#429 <https://github.com/explosion/spaCy/issues/429>`_: Ensure missing entity types are added to the entity recognizer.
2016-10-23 `v1.1.0 <https://github.com/explosion/spaCy/releases/tag/v1.1.0>`_: *Bug fixes and adjustments*
----------------------------------------------------------------------------------------------------------
* Rename new ``pipeline`` keyword argument of ``spacy.load()`` to ``create_pipeline``.
* Rename new ``vectors`` keyword argument of ``spacy.load()`` to ``add_vectors``.
**🔴 Bug fixes**
* Fix issue `#544 <https://github.com/explosion/spaCy/issues/544>`_: Add ``vocab.resize_vectors()`` method, to support changing to vectors of different dimensionality.
* Fix issue `#536 <https://github.com/explosion/spaCy/issues/536>`_: Default probability was incorrect for OOV words.
* Fix issue `#539 <https://github.com/explosion/spaCy/issues/539>`_: Unspecified encoding when opening some JSON files.
* Fix issue `#541 <https://github.com/explosion/spaCy/issues/541>`_: GloVe vectors were being loaded incorrectly.
* Fix issue `#522 <https://github.com/explosion/spaCy/issues/522>`_: Similarities and vector norms were calculated incorrectly.
* Fix issue `#461 <https://github.com/explosion/spaCy/issues/461>`_: ``ent_iob`` attribute was incorrect after setting entities via ``doc.ents``
* Fix issue `#459 <https://github.com/explosion/spaCy/issues/459>`_: Deserialiser failed on empty doc
* Fix issue `#514 <https://github.com/explosion/spaCy/issues/514>`_: Serialization failed after adding a new entity label.
2016-10-18 `v1.0.0 <https://github.com/explosion/spaCy/releases/tag/v1.0.0>`_: *Support for deep learning workflows and entity-aware rule matcher*
--------------------------------------------------------------------------------------------------------------------------------------------------
**✨ Major features and improvements**
* **NEW:** `custom processing pipelines <https://spacy.io/docs/usage/customizing-pipeline>`_, to support deep learning workflows
* **NEW:** `Rule matcher <https://spacy.io/docs/usage/rule-based-matching>`_ now supports entity IDs and attributes
* **NEW:** Official/documented `training APIs <https://github.com/explosion/spaCy/tree/master/examples/training>`_ and `GoldParse` class
* Download and use GloVe vectors by default
* Make it easier to load and unload word vectors
* Improved rule matching functionality
* Move basic data into the code, rather than the json files. This makes it simpler to use the tokenizer without the models installed, and makes adding new languages much easier.
* Replace file-system strings with ``Path`` objects. You can now load resources over your network, or do similar trickery, by passing any object that supports the ``Path`` protocol.
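A rough sketch of the ``Path``-based loading described in the last point, assuming a hypothetical local data directory; the ``path`` keyword is the renamed ``data_dir`` argument noted under the backwards incompatibilities below:

.. code:: python

    import pathlib
    from spacy.en import English

    # Any object supporting the Path protocol can be passed instead of a string.
    data_dir = pathlib.Path('/path/to/en/data')  # hypothetical path
    nlp = English(path=data_dir)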
**⚠️ Backwards incompatibilities**
* The data_dir keyword argument of ``Language.__init__`` (and its subclasses ``English.__init__`` and ``German.__init__``) has been renamed to ``path``.
* Details of how the Language base-class and its sub-classes are loaded, and how defaults are accessed, have been heavily changed. If you have your own subclasses, you should review the changes.
* The deprecated ``token.repvec`` name has been removed.
* The ``.train()`` method of Tagger and Parser has been renamed to ``.update()``
* The previously undocumented ``GoldParse`` class has a new ``__init__()`` method. The old method has been preserved in ``GoldParse.from_annot_tuples()``.
* Previously undocumented details of the ``Parser`` class have changed.
* The previously undocumented ``get_package`` and ``get_package_by_name`` helper functions have been moved into a new module, ``spacy.deprecated``, in case you still need them while you update.
**🔴 Bug fixes**
* Fix ``get_lang_class`` bug when GloVe vectors are used.
* Fix Issue `#411 <https://github.com/explosion/spaCy/issues/411>`_: ``doc.sents`` raised IndexError on empty string.
* Fix Issue `#455 <https://github.com/explosion/spaCy/issues/455>`_: Correct lemmatization logic
* Fix Issue `#371 <https://github.com/explosion/spaCy/issues/371>`_: Make ``Lexeme`` objects hashable
* Fix Issue `#469 <https://github.com/explosion/spaCy/issues/469>`_: Make ``noun_chunks`` detect root NPs
**👥 Contributors**
Thanks to `@daylen <https://github.com/daylen>`_, `@RahulKulhari <https://github.com/RahulKulhari>`_, `@stared <https://github.com/stared>`_, `@adamhadani <https://github.com/adamhadani>`_, `@izeye <https://github.com/izeye>`_ and `@crawfordcomeaux <https://github.com/crawfordcomeaux>`_ for the pull requests!
2016-05-10 `v0.101.0 <https://github.com/explosion/spaCy/releases/tag/0.101.0>`_: *Fixed German model*
------------------------------------------------------------------------------------------------------
* Fixed bug that prevented German parses from being deprojectivised.
* Bug fixes to sentence boundary detection.
* Add rich comparison methods to the Lexeme class.
* Add missing ``Doc.has_vector`` and ``Span.has_vector`` properties.
* Add missing ``Span.sent`` property.
2016-05-05 `v0.100.7 <https://github.com/explosion/spaCy/releases/tag/0.100.7>`_: *German!*
-------------------------------------------------------------------------------------------
spaCy finally supports another language, in addition to English. We're lucky
to have Wolfgang Seeker on the team, and the new German model is just the
beginning. Now that there are multiple languages, you should consider loading
spaCy via the ``load()`` function. This function also makes it easier to load extra
word vector data for English:
.. code:: python
import spacy
en_nlp = spacy.load('en', vectors='en_glove_cc_300_1m_vectors')
de_nlp = spacy.load('de')
To support use of the load function, there are also two new helper functions:
``spacy.get_lang_class`` and ``spacy.set_lang_class``. Once the German model is
loaded, you can use it just like the English model:
.. code:: python
doc = nlp(u'''Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus freien Inhalten, zu dem du mit deinem Wissen beitragen kannst. Seit Mai 2001 sind 1.936.257 Artikel in deutscher Sprache entstanden.''')
for sent in doc.sents:
print(sent.root.text, sent.root.n_lefts, sent.root.n_rights)
# (u'ist', 1, 2)
# (u'sind', 1, 3)
The German model provides tokenization, POS tagging, sentence boundary detection,
syntactic dependency parsing, recognition of organisation, location and person
entities, and word vector representations trained on a mix of open subtitles and
Wikipedia data. It doesn't yet provide lemmatisation or morphological analysis,
and it doesn't yet recognise numeric entities such as numbers and dates.
**Bugfixes**
* spaCy < 0.100.7 had a bug in the semantics of the ``Token.__str__`` and ``Token.__unicode__`` built-ins: they included a trailing space.
* Improve handling of "infixed" hyphens. Previously the tokenizer struggled with multiple hyphens, such as "well-to-do".
* Improve handling of periods after mixed-case tokens
* Improve lemmatization for English special-case tokens
* Fix bug that allowed spaces to be treated as heads in the syntactic parse
* Fix bug that led to inconsistent sentence boundaries before and after serialisation.
* Fix bug from deserialising untagged documents.
2016-03-08 `v0.100.6 <https://github.com/explosion/spaCy/releases/tag/0.100.6>`_: *Add support for GloVe vectors*
-----------------------------------------------------------------------------------------------------------------
This release offers improved support for replacing the word vectors used by spaCy.
To install Stanford's GloVe vectors, trained on the Common Crawl, just run:
.. code:: bash
sputnik --name spacy install en_glove_cc_300_1m_vectors
To reduce memory usage and loading time, we've trimmed the vocabulary down to 1m entries.
This release also integrates all the code necessary for German parsing. A German model
will be released shortly. To assist in multi-lingual processing, we've added a ``load()``
function. To load the English model with the GloVe vectors:
.. code:: python
spacy.load('en', vectors='en_glove_cc_300_1m_vectors')
2016-02-07 `v0.100.5 <https://github.com/explosion/spaCy/releases/tag/0.100.5>`_
--------------------------------------------------------------------------------
Fix incorrect use of header file, caused by a problem with thinc
2016-02-07 `v0.100.4 <https://github.com/explosion/spaCy/releases/tag/0.100.4>`_: *Fix OSX problem introduced in 0.100.3*
-------------------------------------------------------------------------------------------------------------------------
Small correction to right_edge calculation
2016-02-06 `v0.100.3 <https://github.com/explosion/spaCy/releases/tag/0.100.3>`_
--------------------------------------------------------------------------------
Support multi-threading, via the ``.pipe`` method. spaCy now releases the GIL around the
parser and entity recognizer, so systems that support OpenMP should be able to do
shared memory parallelism at close to full efficiency.
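A minimal sketch of the ``.pipe`` usage described above; the ``batch_size`` and ``n_threads`` keyword arguments are illustrative and may vary between versions:

.. code:: python

    from spacy.en import English

    nlp = English()
    texts = [u'This is one document.', u'And this is another.']
    # Stream texts through the pipeline; the parser and entity recognizer
    # release the GIL, so worker threads can run in parallel.
    for doc in nlp.pipe(texts, batch_size=50, n_threads=4):
        print(len(doc))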
We've also greatly reduced loading time, and fixed a number of bugs.
2016-01-21 `v0.100.2 <https://github.com/explosion/spaCy/releases/tag/0.100.2>`_
--------------------------------------------------------------------------------
Fix data version lock that affected v0.100.1
2016-01-21 `v0.100.1 <https://github.com/explosion/spaCy/releases/tag/0.100.1>`_: *Fix install for OSX*
-------------------------------------------------------------------------------------------------------
v0.100 included header files built on Linux that caused installation to fail on OSX.
This should now be corrected. We also update the default data distribution, to
include a small fix to the tokenizer.
2016-01-19 `v0.100 <https://github.com/explosion/spaCy/releases/tag/0.100>`_: *Revise setup.py, better model downloads, bug fixes*
----------------------------------------------------------------------------------------------------------------------------------
* Redo setup.py, and remove ugly headers_workaround hack. Should result in fewer install problems.
* Update data downloading and installation functionality, by migrating to the Sputnik data-package manager. This will allow us to offer finer grained control of data installation in future.
* Fix bug when using custom entity types in ``Matcher``. This should work by default when using the
``English.__call__`` method of running the pipeline. If invoking ``Parser.__call__`` directly to do NER,
you should call the ``Parser.add_label()`` method to register your entity type.
* Fix head-finding rules in ``Span``.
* Fix problem that caused ``doc.merge()`` to sometimes hang
* Fix problems in handling of whitespace
2015-11-08 `v0.99 <https://github.com/explosion/spaCy/releases/tag/0.99>`_: *Improve span merging, internal refactoring*
------------------------------------------------------------------------------------------------------------------------
* Merging multi-word tokens into one, via the ``doc.merge()`` and ``span.merge()`` methods, no longer invalidates existing ``Span`` objects. This makes it much easier to merge multiple spans, e.g. to merge all named entities, or all base noun phrases. Thanks to @andreasgrv for help on this patch.
* Lots of internal refactoring, especially around the machine learning module, thinc. The thinc API has now been improved, and the spacy._ml wrapper module is no longer necessary.
* The lemmatizer now lower-cases non-noun, non-verb and non-adjective words.
* A new attribute, ``.rank``, is added to Token and Lexeme objects, giving the frequency rank of the word.
2015-11-03 `v0.98 <https://github.com/explosion/spaCy/releases/tag/0.98>`_: *Smaller package, bug fixes*
---------------------------------------------------------------------------------------------------------
* Remove binary data from PyPi package.
* Delete archive after downloading data
* Use updated cymem, preshed and thinc packages
* Fix information loss in deserialize
* Fix ``__str__`` methods for Python2
2015-10-23 `v0.97 <https://github.com/explosion/spaCy/releases/tag/0.97>`_: *Load the StringStore from a json list, instead of a text file*
-------------------------------------------------------------------------------------------------------------------------------------------
* Fix bugs in download.py
* Require ``--force`` to over-write the data directory in download.py
* Fix bugs in ``Matcher`` and ``doc.merge()``
2015-10-19 `v0.96 <https://github.com/explosion/spaCy/releases/tag/0.96>`_: *Hotfix to .merge method*
-----------------------------------------------------------------------------------------------------
* Fix bug that caused text to be lost after ``.merge``
* Fix bug in Matcher when matched entities overlapped
2015-10-18 `v0.95 <https://github.com/explosion/spaCy/releases/tag/0.95>`_: *Bugfixes*
--------------------------------------------------------------------------------------
* Reform encoding of symbols
* Fix bugs in ``Matcher``
* Fix bugs in ``Span``
* Add tokenizer rule to fix numeric range tokenization
* Add specific string-length cap in Tokenizer
* Fix ``token.conjuncts``
2015-10-09 `v0.94 <https://github.com/explosion/spaCy/releases/tag/0.94>`_
--------------------------------------------------------------------------
* Fix memory error that caused crashes on 32bit platforms
* Fix parse errors caused by smart quotes and em-dashes
2015-09-22 `v0.93 <https://github.com/explosion/spaCy/releases/tag/0.93>`_
--------------------------------------------------------------------------
Bug fixes to word vectors
python -m pytest <spacy-directory> --vectors --models --slow
🛠 Changelog
============
=========== ============== ===========
Version Date Description
=========== ============== ===========
`v1.7.2`_ ``2017-03-20`` Small fixes to beam parser and model linking
`v1.7.1`_ ``2017-03-19`` Fix data download for system installation
`v1.7.0`_ ``2017-03-18`` New 50 MB model, CLI, better downloads and lots of bug fixes
`v1.6.0`_ ``2017-01-16`` Improvements to tokenizer and tests
`v1.5.0`_ ``2016-12-27`` Alpha support for Swedish and Hungarian
`v1.4.0`_ ``2016-12-18`` Improved language data and alpha Dutch support
`v1.3.0`_ ``2016-12-03`` Improve API consistency
`v1.2.0`_ ``2016-11-04`` Alpha tokenizers for Chinese, French, Spanish, Italian and Portuguese
`v1.1.0`_ ``2016-10-23`` Bug fixes and adjustments
`v1.0.0`_ ``2016-10-18`` Support for deep learning workflows and entity-aware rule matcher
`v0.101.0`_ ``2016-05-10`` Fixed German model
`v0.100.7`_ ``2016-05-05`` German support
`v0.100.6`_ ``2016-03-08`` Add support for GloVe vectors
`v0.100.5`_ ``2016-02-07`` Fix incorrect use of header file
`v0.100.4`_ ``2016-02-07`` Fix OSX problem introduced in 0.100.3
`v0.100.3`_ ``2016-02-06`` Multi-threading, faster loading and bugfixes
`v0.100.2`_ ``2016-01-21`` Fix data version lock
`v0.100.1`_ ``2016-01-21`` Fix install for OSX
`v0.100`_ ``2016-01-19`` Revise setup.py, better model downloads, bug fixes
`v0.99`_ ``2015-11-08`` Improve span merging, internal refactoring
`v0.98`_ ``2015-11-03`` Smaller package, bug fixes
`v0.97`_ ``2015-10-23`` Load the StringStore from a json list, instead of a text file
`v0.96`_ ``2015-10-19`` Hotfix to .merge method
`v0.95`_ ``2015-10-18`` Bug fixes
`v0.94`_ ``2015-10-09`` Fix memory and parse errors
`v0.93`_ ``2015-09-22`` Bug fixes to word vectors
=========== ============== ===========
.. _v1.7.2: https://github.com/explosion/spaCy/releases/tag/v1.7.2
.. _v1.7.1: https://github.com/explosion/spaCy/releases/tag/v1.7.1
.. _v1.7.0: https://github.com/explosion/spaCy/releases/tag/v1.7.0
.. _v1.6.0: https://github.com/explosion/spaCy/releases/tag/v1.6.0
.. _v1.5.0: https://github.com/explosion/spaCy/releases/tag/v1.5.0
.. _v1.4.0: https://github.com/explosion/spaCy/releases/tag/v1.4.0
.. _v1.3.0: https://github.com/explosion/spaCy/releases/tag/v1.3.0
.. _v1.2.0: https://github.com/explosion/spaCy/releases/tag/v1.2.0
.. _v1.1.0: https://github.com/explosion/spaCy/releases/tag/v1.1.0
.. _v1.0.0: https://github.com/explosion/spaCy/releases/tag/v1.0.0
.. _v0.101.0: https://github.com/explosion/spaCy/releases/tag/0.101.0
.. _v0.100.7: https://github.com/explosion/spaCy/releases/tag/0.100.7
.. _v0.100.6: https://github.com/explosion/spaCy/releases/tag/0.100.6
.. _v0.100.5: https://github.com/explosion/spaCy/releases/tag/0.100.5
.. _v0.100.4: https://github.com/explosion/spaCy/releases/tag/0.100.4
.. _v0.100.3: https://github.com/explosion/spaCy/releases/tag/0.100.3
.. _v0.100.2: https://github.com/explosion/spaCy/releases/tag/0.100.2
.. _v0.100.1: https://github.com/explosion/spaCy/releases/tag/0.100.1
.. _v0.100: https://github.com/explosion/spaCy/releases/tag/0.100
.. _v0.99: https://github.com/explosion/spaCy/releases/tag/0.99
.. _v0.98: https://github.com/explosion/spaCy/releases/tag/0.98
.. _v0.97: https://github.com/explosion/spaCy/releases/tag/0.97
.. _v0.96: https://github.com/explosion/spaCy/releases/tag/0.96
.. _v0.95: https://github.com/explosion/spaCy/releases/tag/0.95
.. _v0.94: https://github.com/explosion/spaCy/releases/tag/0.94
.. _v0.93: https://github.com/explosion/spaCy/releases/tag/0.93


@@ -14,7 +14,7 @@ from spacy.language import Language
from spacy.gold import GoldParse
from spacy.vocab import Vocab
from spacy.tagger import Tagger
from spacy.pipeline import DependencyParser
from spacy.pipeline import DependencyParser, BeamDependencyParser
from spacy.syntax.parser import get_templates
from spacy.syntax.arc_eager import ArcEager
from spacy.scorer import Scorer
@@ -35,8 +35,8 @@ def read_conllx(loc, n=0):
lines.pop(0)
tokens = []
for line in lines:
id_, word, lemma, tag, pos, morph, head, dep, _1, _2 = line.split()
if '-' in id_:
id_, word, lemma, pos, tag, morph, head, dep, _1, _2 = line.split()
if '-' in id_ or '.' in id_:
continue
try:
id_ = int(id_) - 1
@@ -66,12 +66,8 @@ def score_model(vocab, tagger, parser, gold_docs, verbose=False):
return scorer
def main(train_loc, dev_loc, model_dir, tag_map_loc=None):
if tag_map_loc:
with open(tag_map_loc) as file_:
tag_map = json.loads(file_.read())
else:
tag_map = DEFAULT_TAG_MAP
def main(lang_name, train_loc, dev_loc, model_dir, clusters_loc=None):
LangClass = spacy.util.get_lang_class(lang_name)
train_sents = list(read_conllx(train_loc))
train_sents = PseudoProjectivity.preprocess_training_data(train_sents)
@@ -79,13 +75,37 @@ def main(train_loc, dev_loc, model_dir, tag_map_loc=None):
features = get_templates('basic')
model_dir = pathlib.Path(model_dir)
if not model_dir.exists():
model_dir.mkdir()
if not (model_dir / 'deps').exists():
(model_dir / 'deps').mkdir()
if not (model_dir / 'pos').exists():
(model_dir / 'pos').mkdir()
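# Save the parser configuration (projectivity flag, labels and feature templates) to deps/config.json.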
with (model_dir / 'deps' / 'config.json').open('wb') as file_:
file_.write(
json.dumps(
{'pseudoprojective': True, 'labels': actions, 'features': features}).encode('utf8'))
vocab = Vocab(lex_attr_getters=Language.Defaults.lex_attr_getters, tag_map=tag_map)
vocab = LangClass.Defaults.create_vocab()
if not (model_dir / 'vocab').exists():
(model_dir / 'vocab').mkdir()
else:
if (model_dir / 'vocab' / 'strings.json').exists():
with (model_dir / 'vocab' / 'strings.json').open() as file_:
vocab.strings.load(file_)
if (model_dir / 'vocab' / 'lexemes.bin').exists():
vocab.load_lexemes(model_dir / 'vocab' / 'lexemes.bin')
if clusters_loc is not None:
clusters_loc = pathlib.Path(clusters_loc)
with clusters_loc.open() as file_:
for line in file_:
try:
cluster, word, freq = line.split()
except ValueError:
continue
lex = vocab[word]
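# Cluster labels are bit strings (e.g. Brown clusters); reverse before parsing as a binary integer.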
lex.cluster = int(cluster[::-1], 2)
# Populate vocab
for _, doc_sents in train_sents:
for (ids, words, tags, heads, deps, ner), _ in doc_sents:
@@ -95,13 +115,13 @@ def main(train_loc, dev_loc, model_dir, tag_map_loc=None):
_ = vocab[dep]
for tag in tags:
_ = vocab[tag]
if tag_map:
if vocab.morphology.tag_map:
for tag in tags:
assert tag in tag_map, repr(tag)
tagger = Tagger(vocab, tag_map=tag_map)
assert tag in vocab.morphology.tag_map, repr(tag)
tagger = Tagger(vocab)
parser = DependencyParser(vocab, actions=actions, features=features, L1=0.0)
for itn in range(15):
for itn in range(30):
loss = 0.
for _, doc_sents in train_sents:
for (ids, words, tags, heads, deps, ner), _ in doc_sents:


@@ -1,25 +0,0 @@
{
"build": {
"sdist": [
"pip install -r requirements.txt",
"pip install \"numpy<1.8\"",
"python setup.py sdist"
],
"install": [
"pip install -v source.tar.gz"
],
"wheel": [
"python untar.py source.tar.gz .",
"python setup.py bdist_wheel",
"python cpdist.py dist"
]
},
"test": {
"after": ["install", "wheel"],
"run": [
"python -m spacy.en.download --force"
],
"package": "spacy",
"args": "--tb=native -x --models --vectors --slow"
}
}

File diff suppressed because it is too large.


@@ -1,6 +0,0 @@
Cognitive Science Laboratory
Princeton University
http://wordnet.princeton.edu
wordnet@princeton.edu


@@ -1,31 +0,0 @@
WordNet Release 3.0
This software and database is being provided to you, the LICENSEE, by
Princeton University under the following license. By obtaining, using
and/or copying this software and database, you agree that you have
read, understood, and will comply with these terms and conditions.:
Permission to use, copy, modify and distribute this software and
database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with
the following copyright notice and statements, including the disclaimer,
and that the same appear on ALL copies of the software, database and
documentation, including modifications that you make for internal
use or for distribution.
WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT-
ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR
OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in
advertising or publicity pertaining to distribution of the software
and/or database. Title to copyright in this software, database and
any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.


@@ -1,9 +0,0 @@
Changes between WordNet 2.1 and 3.0
Some changes were made to the graphical interface and WordNet library
with regard to adjective and adverb searches. The adjective search
"Synonyms/Related Nouns" was relabeled "Synonyms", and, similarly, the
adverb search "Synonyms/Stem Adjectives" was relabled "Synonyms". A
separate "Related Noun" search was inserted for adjectives, and a
separate "Base Adjective" search was added for adverbs.


@@ -1,312 +0,0 @@
WordNet 3.0 Installation Instructions
Beginning with Version 2.1, we have changed the Unix package to a GNU
Autotools package. With Autotools, a system independent installation
process builds and installs WordNet on your specific platform. Read
both the `Basic Installation' and `WordNet Installation' sections
below before attempting to build and install WordNet.
See the `Running WordNet' section for important information concerning
environment variables and the commands to run WordNet.
The WordNet browser makes use of the open source Tcl and Tk
packages. Many systems come with either or both pre-installed. If
your system doesn't (some systems have Tcl installed, but not Tk)
Tcl/Tk can be downloaded from:
Linux - http://www.tcl.tk/
OS X - http://tcltkaqua.sourceforge.net/ (note that 10.4 comes with
Tcl/Tk preinstalled, but earlier versions may not)
Some Linux systems come with the Tcl/Tk libraries installed, but not
all the header files. If your build fails due to missing Tk headers, a
subset that may be sufficient on your system can be found in the
"include/tk" directory. Copy the header files to the "include" directory
and try the make again. If it fails, you should download and install
a full copy of Tcl and/or Tk from the site above.
Tcl and Tk must be installed BEFORE you build and install WordNet. You
must also have a C compiler before installing Tcl/Tk or WordNet.
WordNet has been built and tested with the GNU gcc compiler. This is
pre-installed on most Unix systems, and can be downloaded from:
http://gcc.gnu.org/
Basic Installation
==================
********************************************************************
These are generic installation instructions. Details specific to
WordNet follow in the `WordNet Installation' section below.
********************************************************************
The `configure' shell script attempts to guess correct values for
various system-dependent variables used during compilation. It uses
those values to create a `Makefile' in each directory of the package.
It may also create one or more `.h' files containing system-dependent
definitions. Finally, it creates a shell script `config.status' that
you can run in the future to recreate the current configuration, and a
file `config.log' containing compiler output (useful mainly for
debugging `configure').
It can also use an optional file (typically called `config.cache'
and enabled with `--cache-file=config.cache' or simply `-C') that saves
the results of its tests to speed up reconfiguring. (Caching is
disabled by default to prevent problems with accidental use of stale
cache files.)
The simplest way to compile this package is:
1. `cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes awhile. While running, it prints some
messages telling which features it is checking for.
2. Type `make' to compile the package.
3. Type `make install' to install the programs and any data files and
documentation.
4. You can remove the program binaries and object files from the
source code directory by typing `make clean'. To also remove the
files that `configure' created (so you can compile the package for
a different kind of computer), type `make distclean'. There is
also a `make maintainer-clean' target, but that is intended mainly
for the package's developers. If you use it, you may have to get
all sorts of other programs in order to regenerate files that came
with the distribution.
Compilers and Options
=====================
Some systems require unusual options for compilation or linking that
the `configure' script does not know about. Run `./configure --help'
for details on some of the pertinent environment variables.
You can give `configure' initial values for configuration parameters
by setting variables in the command line or in the environment. Here
is an example:
./configure CC=c89 CFLAGS=-O2 LIBS=-lposix
*Note Defining Variables::, for more details.
Compiling For Multiple Architectures
====================================
You can compile the package for more than one kind of computer at the
same time, by placing the object files for each architecture in their
own directory. To do this, you must use a version of `make' that
supports the `VPATH' variable, such as GNU `make'. `cd' to the
directory where you want the object files and executables to go and run
the `configure' script. `configure' automatically checks for the
source code in the directory that `configure' is in and in `..'.
If you have to use a `make' that does not support the `VPATH'
variable, you have to compile the package for one architecture at a
time in the source code directory. After you have installed the
package for one architecture, use `make distclean' before reconfiguring
for another architecture.
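For example, with GNU `make' you can keep a separate build tree per
architecture outside the source directory. The directory names below are
illustrative; replace `/path/to/WordNet-3.0' with wherever the sources
were unpacked:
mkdir /tmp/wordnet-build
cd /tmp/wordnet-build
/path/to/WordNet-3.0/configure
make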
Installation Names
==================
By default, `make install' will install the package's files in
`/usr/local/bin', `/usr/local/man', etc. You can specify an
installation prefix other than `/usr/local' by giving `configure' the
option `--prefix=PATH'.
You can specify separate installation prefixes for
architecture-specific files and architecture-independent files. If you
give `configure' the option `--exec-prefix=PATH', the package will use
PATH as the prefix for installing programs and libraries.
Documentation and other data files will still use the regular prefix.
In addition, if you use an unusual directory layout you can give
options like `--bindir=PATH' to specify different values for particular
kinds of files. Run `configure --help' for a list of the directories
you can set and what kinds of files go in them.
If the package supports it, you can cause programs to be installed
with an extra prefix or suffix on their names by giving `configure' the
option `--program-prefix=PREFIX' or `--program-suffix=SUFFIX'.
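For instance, to install everything under your home directory but keep
the executables in a directory that is already on your PATH, you might run
something along these lines (the paths are only examples):
./configure --prefix=$HOME/wordnet --bindir=$HOME/bin
make
make install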
Optional Features
=================
Some packages pay attention to `--enable-FEATURE' options to
`configure', where FEATURE indicates an optional part of the package.
They may also pay attention to `--with-PACKAGE' options, where PACKAGE
is something like `gnu-as' or `x' (for the X Window System). The
`README' should mention any `--enable-' and `--with-' options that the
package recognizes.
For packages that use the X Window System, `configure' can usually
find the X include and library files automatically, but if it doesn't,
you can use the `configure' options `--x-includes=DIR' and
`--x-libraries=DIR' to specify their locations.
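As an illustration, a package that checks for X could be pointed at
non-standard X headers and libraries like this (the directories are
examples only):
./configure --x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib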
Specifying the System Type
==========================
There may be some features `configure' cannot figure out
automatically, but needs to determine by the type of machine the package
will run on. Usually, assuming the package is built to be run on the
_same_ architectures, `configure' can figure that out, but if it prints
a message saying it cannot guess the machine type, give it the
`--build=TYPE' option. TYPE can either be a short name for the system
type, such as `sun4', or a canonical name which has the form:
CPU-COMPANY-SYSTEM
where SYSTEM can have one of these forms:
OS KERNEL-OS
See the file `config.sub' for the possible values of each field. If
`config.sub' isn't included in this package, then this package doesn't
need to know the machine type.
If you are _building_ compiler tools for cross-compiling, you should
use the `--target=TYPE' option to select the type of system they will
produce code for.
If you want to _use_ a cross compiler, that generates code for a
platform different from the build platform, you should specify the
"host" platform (i.e., that on which the generated programs will
eventually be run) with `--host=TYPE'.
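As a sketch (the system types below are purely illustrative), a build that
cannot guess its own machine type, and a cross build targeting a different
host, might be configured as:
./configure --build=sun4
./configure --build=i686-pc-linux-gnu --host=sparc-sun-solaris2.8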
Sharing Defaults
================
If you want to set default values for `configure' scripts to share,
you can create a site shell script called `config.site' that gives
default values for variables like `CC', `cache_file', and `prefix'.
`configure' looks for `PREFIX/share/config.site' if it exists, then
`PREFIX/etc/config.site' if it exists. Or, you can set the
`CONFIG_SITE' environment variable to the location of the site script.
A warning: not all `configure' scripts look for a site script.
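A site script is just a shell fragment that presets variables. A minimal
example, with illustrative values, might contain:
CC=gcc
CFLAGS=-O2
prefix=/opt/wordnet
and you can point `configure' at it explicitly with:
CONFIG_SITE=/opt/site/config.site ./configure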
Defining Variables
==================
Variables not defined in a site shell script can be set in the
environment passed to `configure'. However, some packages may run
configure again during the build, and the customized values of these
variables may be lost. In order to avoid this problem, you should set
them in the `configure' command line, using `VAR=value'. For example:
./configure CC=/usr/local2/bin/gcc
will cause the specified gcc to be used as the C compiler (unless it is
overridden in the site shell script).
`configure' Invocation
======================
`configure' recognizes the following options to control how it
operates.
`--help'
`-h'
Print a summary of the options to `configure', and exit.
`--version'
`-V'
Print the version of Autoconf used to generate the `configure'
script, and exit.
`--cache-file=FILE'
Enable the cache: use and save the results of the tests in FILE,
traditionally `config.cache'. FILE defaults to `/dev/null' to
disable caching.
`--config-cache'
`-C'
Alias for `--cache-file=config.cache'.
`--quiet'
`--silent'
`-q'
Do not print messages saying which checks are being made. To
suppress all normal output, redirect it to `/dev/null' (any error
messages will still be shown).
`--srcdir=DIR'
Look for the package's source code in directory DIR. Usually
`configure' can determine that directory automatically.
`configure' also accepts some other, not widely useful, options. Run
`configure --help' for more details.
WordNet Installation
====================
By default, WordNet is installed in `/usr/local/WordNet-3.0'. You
must usually be the `root' user to install something here. If you
choose to install WordNet in a different location, you must use the
`--prefix=' option to `configure' and specify an installation
directory.
WordNet relies on the Tcl/Tk package, which you must have installed on
your system prior to building the WordNet package. If you have
installed Tcl/Tk in a non-standard location, you must specify the
`--with-tcl=' and `--with-tk=' options to `configure' and specify the
directory that contains the `tclConfig.sh' and `tkConfig.sh'
configuration scripts, respectively. (Note that these are usually the
same directories.)
If you're running OS X and installed the Aqua Tcl/Tk package from the
web site above, use the following settings:
--with-tcl=/Library/Frameworks/Tcl.framework
--with-tk=/Library/Frameworks/Tk.framework
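On a Linux system where Tcl/Tk live under `/usr/local', the corresponding
invocation might look like this; the directories are only examples, so
point the options at whichever directories actually contain `tclConfig.sh'
and `tkConfig.sh':
./configure --prefix=/usr/local/WordNet-3.0 --with-tcl=/usr/local/lib --with-tk=/usr/local/lib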
If `configure' can't find either `tclConfig.sh' or `tkConfig.sh', it
will print an error and stop processing.
After successfully running `configure', you must then build and
install WordNet using these commands:
make
make install
Running WordNet
===============
In order to run WordNet, you must set your PATH variable to include
the directory that contains the WordNet binaries. By default, WordNet
is installed in `/usr/local/WordNet-3.0'.
Several other environment variables may need to be set in order to
run WordNet on your system:
PATH - should include either `/usr/local/WordNet-3.0/bin' or the `bin'
subdirectory of the path you specified with the `--prefix=' option to
`configure', unless you installed WordNet in a directory that is already
in your path.
WNHOME - if you did not install in the default location, you must set
this environment variable to the value you specified on the `--prefix='
option. This tells the WordNet browser where to find the database files.
LD_LIBRARY_PATH - may need to be set to the location of the Tcl/Tk
libraries.
TK_LIBRARY - on OS X, may need to be set to the directory that
contains the `tk.tcl' file (usually a subdirectory of where the Tk
library is installed).
The command `wnb' starts the WordNet browser application. If any
of the above variables is not set, or not set properly, an error will
occur when you run `wnb'.
The command line interface is run with the `wn' command. The `PATH' and
`WNHOME' environment variables must also be set.
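For a Bourne-compatible shell, a setup along these lines should work. The
paths assume the default installation prefix and an illustrative Tcl/Tk
library location, so adjust them for your system (WNHOME is only strictly
required if you installed to a non-default prefix):
PATH=$PATH:/usr/local/WordNet-3.0/bin
WNHOME=/usr/local/WordNet-3.0
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
export PATH WNHOME LD_LIBRARY_PATH
wnb &
wn
Running `wn' with no arguments should print a usage summary, which is a
quick way to check that the environment is set up correctly.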

View File

@@ -1,2 +0,0 @@
EXTRA_DIST = README ChangeLog COPYING INSTALL AUTHORS LICENSE doc dict include
SUBDIRS = doc dict include lib src

View File

@@ -1,569 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = .
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = .
DIST_COMMON = README $(am__configure_deps) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in $(srcdir)/config.h.in \
$(top_srcdir)/configure AUTHORS COPYING ChangeLog INSTALL NEWS \
compile depcomp install-sh missing
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
am__CONFIG_DISTCLEAN_FILES = config.status config.cache config.log \
configure.lineno configure.status.lineno
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-exec-recursive install-info-recursive \
install-recursive installcheck-recursive installdirs-recursive \
pdf-recursive ps-recursive uninstall-info-recursive \
uninstall-recursive
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
distdir = $(PACKAGE)-$(VERSION)
top_distdir = $(distdir)
am__remove_distdir = \
{ test ! -d $(distdir) \
|| { find $(distdir) -type d ! -perm -200 -exec chmod u+w {} ';' \
&& rm -fr $(distdir); }; }
DIST_ARCHIVES = $(distdir).tar.gz
GZIP_ENV = --best
distuninstallcheck_listfiles = find . -type f -print
distcleancheck_listfiles = find . -type f -print
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
EXTRA_DIST = README ChangeLog COPYING INSTALL AUTHORS LICENSE doc dict include
SUBDIRS = doc dict include lib src
all: config.h
$(MAKE) $(AM_MAKEFLAGS) all-recursive
.SUFFIXES:
am--refresh:
@:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
echo ' cd $(srcdir) && $(AUTOMAKE) --gnu '; \
cd $(srcdir) && $(AUTOMAKE) --gnu \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
echo ' $(SHELL) ./config.status'; \
$(SHELL) ./config.status;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
$(SHELL) ./config.status --recheck
$(top_srcdir)/configure: $(am__configure_deps)
cd $(srcdir) && $(AUTOCONF)
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(srcdir) && $(ACLOCAL) $(ACLOCAL_AMFLAGS)
config.h: stamp-h1
@if test ! -f $@; then \
rm -f stamp-h1; \
$(MAKE) stamp-h1; \
else :; fi
stamp-h1: $(srcdir)/config.h.in $(top_builddir)/config.status
@rm -f stamp-h1
cd $(top_builddir) && $(SHELL) ./config.status config.h
$(srcdir)/config.h.in: $(am__configure_deps)
cd $(top_srcdir) && $(AUTOHEADER)
rm -f stamp-h1
touch $@
distclean-hdr:
-rm -f config.h stamp-h1
uninstall-info-am:
# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
# To change the values of `make' variables: instead of editing Makefiles,
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.
$(RECURSIVE_TARGETS):
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
dot_seen=yes; \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"
mostlyclean-recursive clean-recursive distclean-recursive \
maintainer-clean-recursive:
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: tags-recursive $(HEADERS) $(SOURCES) config.h.in $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS) config.h.in $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) config.h.in $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) config.h.in $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
$(am__remove_distdir)
mkdir $(distdir)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -d "$(distdir)/$$subdir" \
|| $(mkdir_p) "$(distdir)/$$subdir" \
|| exit 1; \
distdir=`$(am__cd) $(distdir) && pwd`; \
top_distdir=`$(am__cd) $(top_distdir) && pwd`; \
(cd $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$top_distdir" \
distdir="$$distdir/$$subdir" \
distdir) \
|| exit 1; \
fi; \
done
-find $(distdir) -type d ! -perm -777 -exec chmod a+rwx {} \; -o \
! -type d ! -perm -444 -links 1 -exec chmod a+r {} \; -o \
! -type d ! -perm -400 -exec chmod a+r {} \; -o \
! -type d ! -perm -444 -exec $(SHELL) $(install_sh) -c -m a+r {} {} \; \
|| chmod -R a+r $(distdir)
dist-gzip: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
$(am__remove_distdir)
dist-bzip2: distdir
tardir=$(distdir) && $(am__tar) | bzip2 -9 -c >$(distdir).tar.bz2
$(am__remove_distdir)
dist-tarZ: distdir
tardir=$(distdir) && $(am__tar) | compress -c >$(distdir).tar.Z
$(am__remove_distdir)
dist-shar: distdir
shar $(distdir) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).shar.gz
$(am__remove_distdir)
dist-zip: distdir
-rm -f $(distdir).zip
zip -rq $(distdir).zip $(distdir)
$(am__remove_distdir)
dist dist-all: distdir
tardir=$(distdir) && $(am__tar) | GZIP=$(GZIP_ENV) gzip -c >$(distdir).tar.gz
$(am__remove_distdir)
# This target untars the dist file and tries a VPATH configuration. Then
# it guarantees that the distribution is self-contained by making another
# tarfile.
distcheck: dist
case '$(DIST_ARCHIVES)' in \
*.tar.gz*) \
GZIP=$(GZIP_ENV) gunzip -c $(distdir).tar.gz | $(am__untar) ;;\
*.tar.bz2*) \
bunzip2 -c $(distdir).tar.bz2 | $(am__untar) ;;\
*.tar.Z*) \
uncompress -c $(distdir).tar.Z | $(am__untar) ;;\
*.shar.gz*) \
GZIP=$(GZIP_ENV) gunzip -c $(distdir).shar.gz | unshar ;;\
*.zip*) \
unzip $(distdir).zip ;;\
esac
chmod -R a-w $(distdir); chmod a+w $(distdir)
mkdir $(distdir)/_build
mkdir $(distdir)/_inst
chmod a-w $(distdir)
dc_install_base=`$(am__cd) $(distdir)/_inst && pwd | sed -e 's,^[^:\\/]:[\\/],/,'` \
&& dc_destdir="$${TMPDIR-/tmp}/am-dc-$$$$/" \
&& cd $(distdir)/_build \
&& ../configure --srcdir=.. --prefix="$$dc_install_base" \
$(DISTCHECK_CONFIGURE_FLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) \
&& $(MAKE) $(AM_MAKEFLAGS) dvi \
&& $(MAKE) $(AM_MAKEFLAGS) check \
&& $(MAKE) $(AM_MAKEFLAGS) install \
&& $(MAKE) $(AM_MAKEFLAGS) installcheck \
&& $(MAKE) $(AM_MAKEFLAGS) uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) distuninstallcheck_dir="$$dc_install_base" \
distuninstallcheck \
&& chmod -R a-w "$$dc_install_base" \
&& ({ \
(cd ../.. && umask 077 && mkdir "$$dc_destdir") \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" install \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" uninstall \
&& $(MAKE) $(AM_MAKEFLAGS) DESTDIR="$$dc_destdir" \
distuninstallcheck_dir="$$dc_destdir" distuninstallcheck; \
} || { rm -rf "$$dc_destdir"; exit 1; }) \
&& rm -rf "$$dc_destdir" \
&& $(MAKE) $(AM_MAKEFLAGS) dist \
&& rm -rf $(DIST_ARCHIVES) \
&& $(MAKE) $(AM_MAKEFLAGS) distcleancheck
$(am__remove_distdir)
@(echo "$(distdir) archives ready for distribution: "; \
list='$(DIST_ARCHIVES)'; for i in $$list; do echo $$i; done) | \
sed -e '1{h;s/./=/g;p;x;}' -e '$${p;x;}'
distuninstallcheck:
@cd $(distuninstallcheck_dir) \
&& test `$(distuninstallcheck_listfiles) | wc -l` -le 1 \
|| { echo "ERROR: files left after uninstall:" ; \
if test -n "$(DESTDIR)"; then \
echo " (check DESTDIR support)"; \
fi ; \
$(distuninstallcheck_listfiles) ; \
exit 1; } >&2
distcleancheck: distclean
@if test '$(srcdir)' = . ; then \
echo "ERROR: distcleancheck can only run from a VPATH build" ; \
exit 1 ; \
fi
@test `$(distcleancheck_listfiles) | wc -l` -eq 0 \
|| { echo "ERROR: files left in build directory after distclean:" ; \
$(distcleancheck_listfiles) ; \
exit 1; } >&2
check-am: all-am
check: check-recursive
all-am: Makefile config.h
installdirs: installdirs-recursive
installdirs-am:
install: install-recursive
install-exec: install-exec-recursive
install-data: install-data-recursive
uninstall: uninstall-recursive
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive
clean-am: clean-generic mostlyclean-am
distclean: distclean-recursive
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -f Makefile
distclean-am: clean-am distclean-generic distclean-hdr distclean-tags
dvi: dvi-recursive
dvi-am:
html: html-recursive
info: info-recursive
info-am:
install-data-am:
install-exec-am:
install-info: install-info-recursive
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-recursive
-rm -f $(am__CONFIG_DISTCLEAN_FILES)
-rm -rf $(top_srcdir)/autom4te.cache
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-recursive
mostlyclean-am: mostlyclean-generic
pdf: pdf-recursive
pdf-am:
ps: ps-recursive
ps-am:
uninstall-am: uninstall-info-am
uninstall-info: uninstall-info-recursive
.PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am am--refresh check \
check-am clean clean-generic clean-recursive ctags \
ctags-recursive dist dist-all dist-bzip2 dist-gzip dist-shar \
dist-tarZ dist-zip distcheck distclean distclean-generic \
distclean-hdr distclean-recursive distclean-tags \
distcleancheck distdir distuninstallcheck dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-exec install-exec-am install-info \
install-info-am install-man install-strip installcheck \
installcheck-am installdirs installdirs-am maintainer-clean \
maintainer-clean-generic maintainer-clean-recursive \
mostlyclean mostlyclean-generic mostlyclean-recursive pdf \
pdf-am ps ps-am tags tags-recursive uninstall uninstall-am \
uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@@ -1,101 +0,0 @@
This is the README file for WordNet 3.0
1. About WordNet
WordNet was developed at Princeton University's Cognitive Science
Laboratory under the direction of George Miller, James S. McDonnell
Distinguished University Professor of Psychology, Emeritus. Over the
years many linguists, lexicographers, students, and software engineers
have contributed to the project.
WordNet is an online lexical reference system. Word forms in WordNet
are represented in their familiar orthography; word meanings are
represented by synonym sets (synsets) - lists of synonymous word forms
that are interchangeable in some context. Two kinds of relations are
recognized: lexical and semantic. Lexical relations hold between word
forms; semantic relations hold between word meanings.
To learn more about WordNet, the book "WordNet: An Electronic Lexical
Database," containing an updated version of "Five Papers on WordNet"
and additional papers by WordNet users, is available from MIT Press:
http://mitpress.mit.edu/book-home.tcl?isbn=026206197X
2. The WordNet Web Site
We maintain a Web site at:
http://wordnet.princeton.edu
Information about WordNet, access to our online interface, and the
various WordNet packages that you can download are available from our
web site. All of the software documentation is available online, as
well as a FAQ. On this site we also have information about other
applications that use WordNet. If you have an application that you
would like included, please contact us (see section 3 below).
3. Contacting Us
Ongoing development work and WordNet-related projects are done by a
small group of researchers, lexicographers, and systems programmers.
Since our resources are VERY limited, we request that you please
confine correspondence to WordNet topics only. Please check the
documentation, FAQ, and other resources for the answer to your
question or problem before contacting us.
If you have trouble installing or downloading WordNet, have a bug to
report, or any other problem, please refer to the online FAQ file
first. If you can heal thyself, please do so. The FAQ will be
updated over time. And if you do find a previously unreported
problem, please use our Bug Report Form:
http://wordnet.princeton.edu/cgi-bin/bugsubmit.pl
When reporting a problem, please be as specific as possible, stating
the computer platform you are using, which interface you are using,
and the exact error. The more details you can provide, the more
likely it is that you will get an answer.
There is a WordNet user discussion group mailing list that we invite
our users to join. Users use this list to ask questions of one
another, to announce extensions to WordNet that they've developed, and
to discuss other topics of general usefulness to the user community.
Information on joining the user discussion list, reporting bugs, and other
contact information is found on our website at:
http://wordnet.princeton.edu/contact
4. Current Release
WordNet Version 3.0 is the latest version available for download. Two
basic database packages are available - one for Windows and one for
Unix platforms (including Mac OS X). See the file ChangeLog (Unix) or
CHANGES.txt (Windows) for a list of changes from previous versions.
WordNet packages can be downloaded from our web site:
http://wordnet.princeton.edu/obtain
The Windows package is a self-extracting archive that installs itself
when you double-click on it.
Beginning with Version 2.1, we changed the Unix package to a GNU Autotools
package. The WordNet browser makes use of the open source Tcl and Tk
packages. Many systems come with either or both pre-installed. If
your system doesn't (some systems have Tcl installed, but not Tk),
Tcl/Tk can be downloaded from:
http://www.tcl.tk/
Tcl and Tk must be installed BEFORE you compile WordNet. You must also
have a C compiler before installing Tcl/Tk or WordNet. WordNet has
been built and tested with the GNU gcc compiler. This is
pre-installed on most Unix systems, and can be downloaded from:
http://gcc.gnu.org/
See the file INSTALL for detailed WordNet installation instructions.

View File

@@ -1,333 +0,0 @@
#------------------------------------------------------------------------
# SC_PATH_TCLCONFIG --
#
# Locate the tclConfig.sh file and perform a sanity check on
# the Tcl compile flags
#
# Arguments:
# none
#
# Results:
#
# Adds the following arguments to configure:
# --with-tcl=...
#
# Defines the following vars:
# TCL_BIN_DIR Full path to the directory containing
# the tclConfig.sh file
#------------------------------------------------------------------------
AC_DEFUN(SC_PATH_TCLCONFIG, [
#
# Ok, lets find the tcl configuration
# First, look for one uninstalled.
# the alternative search directory is invoked by --with-tcl
#
if test x"${no_tcl}" = x ; then
# we reset no_tcl in case something fails here
no_tcl=true
AC_ARG_WITH(tcl, [ --with-tcl directory containing tcl configuration (tclConfig.sh)], with_tclconfig=${withval})
AC_MSG_CHECKING([for Tcl configuration])
AC_CACHE_VAL(ac_cv_c_tclconfig,[
# First check to see if --with-tcl was specified.
if test x"${with_tclconfig}" != x ; then
if test -f "${with_tclconfig}/tclConfig.sh" ; then
ac_cv_c_tclconfig=`(cd ${with_tclconfig}; pwd)`
else
AC_MSG_ERROR([${with_tclconfig} directory doesn't contain tclConfig.sh])
fi
fi
# then check for a private Tcl installation
if test x"${ac_cv_c_tclconfig}" = x ; then
for i in \
../tcl \
`ls -dr ../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../tcl[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../tcl[[8-9]].[[0-9]]* 2>/dev/null` \
../../tcl \
`ls -dr ../../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../../tcl[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../../tcl[[8-9]].[[0-9]]* 2>/dev/null` \
../../../tcl \
`ls -dr ../../../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../../../tcl[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../../../tcl[[8-9]].[[0-9]]* 2>/dev/null` ; do
if test -f "$i/unix/tclConfig.sh" ; then
ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
break
fi
done
fi
# check in a few common install locations
if test x"${ac_cv_c_tclconfig}" = x ; then
for i in `ls -d ${libdir} 2>/dev/null` \
`ls -d /usr/local/lib 2>/dev/null` \
`ls -d /usr/contrib/lib 2>/dev/null` \
`ls -d /usr/lib 2>/dev/null` \
`ls -d /usr/lib64 2>/dev/null` \
; do
if test -f "$i/tclConfig.sh" ; then
ac_cv_c_tclconfig=`(cd $i; pwd)`
break
fi
done
fi
# check in a few other private locations
if test x"${ac_cv_c_tclconfig}" = x ; then
for i in \
${srcdir}/../tcl \
`ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ${srcdir}/../tcl[[8-9]].[[0-9]]* 2>/dev/null` ; do
if test -f "$i/unix/tclConfig.sh" ; then
ac_cv_c_tclconfig=`(cd $i/unix; pwd)`
break
fi
done
fi
])
if test x"${ac_cv_c_tclconfig}" = x ; then
TCL_BIN_DIR="# no Tcl configs found"
AC_MSG_WARN(Can't find Tcl configuration definitions)
exit 0
else
no_tcl=
TCL_BIN_DIR=${ac_cv_c_tclconfig}
AC_MSG_RESULT(found $TCL_BIN_DIR/tclConfig.sh)
fi
fi
])
#------------------------------------------------------------------------
# SC_PATH_TKCONFIG --
#
# Locate the tkConfig.sh file
#
# Arguments:
# none
#
# Results:
#
# Adds the following arguments to configure:
# --with-tk=...
#
# Defines the following vars:
# TK_BIN_DIR Full path to the directory containing
# the tkConfig.sh file
#------------------------------------------------------------------------
AC_DEFUN(SC_PATH_TKCONFIG, [
#
# Ok, lets find the tk configuration
# First, look for one uninstalled.
# the alternative search directory is invoked by --with-tk
#
if test x"${no_tk}" = x ; then
# we reset no_tk in case something fails here
no_tk=true
AC_ARG_WITH(tk, [ --with-tk directory containing tk configuration (tkConfig.sh)], with_tkconfig=${withval})
AC_MSG_CHECKING([for Tk configuration])
AC_CACHE_VAL(ac_cv_c_tkconfig,[
# First check to see if --with-tkconfig was specified.
if test x"${with_tkconfig}" != x ; then
if test -f "${with_tkconfig}/tkConfig.sh" ; then
ac_cv_c_tkconfig=`(cd ${with_tkconfig}; pwd)`
else
AC_MSG_ERROR([${with_tkconfig} directory doesn't contain tkConfig.sh])
fi
fi
# then check for a private Tk library
if test x"${ac_cv_c_tkconfig}" = x ; then
for i in \
../tk \
`ls -dr ../tk[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../tk[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../tk[[8-9]].[[0-9]]* 2>/dev/null` \
../../tk \
`ls -dr ../../tk[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../../tk[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../../tk[[8-9]].[[0-9]]* 2>/dev/null` \
../../../tk \
`ls -dr ../../../tk[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ../../../tk[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ../../../tk[[8-9]].[[0-9]]* 2>/dev/null` ; do
if test -f "$i/unix/tkConfig.sh" ; then
ac_cv_c_tkconfig=`(cd $i/unix; pwd)`
break
fi
done
fi
# check in a few common install locations
if test x"${ac_cv_c_tkconfig}" = x ; then
for i in `ls -d ${libdir} 2>/dev/null` \
`ls -d /usr/local/lib 2>/dev/null` \
`ls -d /usr/contrib/lib 2>/dev/null` \
`ls -d /usr/lib 2>/dev/null` \
`ls -d /usr/lib64 2>/dev/null` \
; do
if test -f "$i/tkConfig.sh" ; then
ac_cv_c_tkconfig=`(cd $i; pwd)`
break
fi
done
fi
# check in a few other private locations
if test x"${ac_cv_c_tkconfig}" = x ; then
for i in \
${srcdir}/../tk \
`ls -dr ${srcdir}/../tk[[8-9]].[[0-9]].[[0-9]]* 2>/dev/null` \
`ls -dr ${srcdir}/../tk[[8-9]].[[0-9]] 2>/dev/null` \
`ls -dr ${srcdir}/../tk[[8-9]].[[0-9]]* 2>/dev/null` ; do
if test -f "$i/unix/tkConfig.sh" ; then
ac_cv_c_tkconfig=`(cd $i/unix; pwd)`
break
fi
done
fi
])
if test x"${ac_cv_c_tkconfig}" = x ; then
TK_BIN_DIR="# no Tk configs found"
AC_MSG_WARN(Can't find Tk configuration definitions)
exit 0
else
no_tk=
TK_BIN_DIR=${ac_cv_c_tkconfig}
AC_MSG_RESULT(found $TK_BIN_DIR/tkConfig.sh)
fi
fi
])
#------------------------------------------------------------------------
# SC_LOAD_TCLCONFIG --
#
# Load the tclConfig.sh file
#
# Arguments:
#
# Requires the following vars to be set:
# TCL_BIN_DIR
#
# Results:
#
# Subst the following vars:
# TCL_BIN_DIR
# TCL_SRC_DIR
# TCL_LIB_FILE
#
#------------------------------------------------------------------------
AC_DEFUN(SC_LOAD_TCLCONFIG, [
AC_MSG_CHECKING([for existence of $TCL_BIN_DIR/tclConfig.sh])
if test -f "$TCL_BIN_DIR/tclConfig.sh" ; then
AC_MSG_RESULT([loading])
. $TCL_BIN_DIR/tclConfig.sh
else
AC_MSG_RESULT([file not found])
fi
#
# If the TCL_BIN_DIR is the build directory (not the install directory),
# then set the common variable name to the value of the build variables.
# For example, the variable TCL_LIB_SPEC will be set to the value
# of TCL_BUILD_LIB_SPEC. An extension should make use of TCL_LIB_SPEC
# instead of TCL_BUILD_LIB_SPEC since it will work with both an
# installed and uninstalled version of Tcl.
#
if test -f $TCL_BIN_DIR/Makefile ; then
TCL_LIB_SPEC=${TCL_BUILD_LIB_SPEC}
TCL_STUB_LIB_SPEC=${TCL_BUILD_STUB_LIB_SPEC}
TCL_STUB_LIB_PATH=${TCL_BUILD_STUB_LIB_PATH}
fi
#
# eval is required to do the TCL_DBGX substitution
#
eval "TCL_LIB_FILE=\"${TCL_LIB_FILE}\""
eval "TCL_LIB_FLAG=\"${TCL_LIB_FLAG}\""
eval "TCL_LIB_SPEC=\"${TCL_LIB_SPEC}\""
eval "TCL_INCLUDE_SPEC=\"${TCL_INCLUDE_SPEC}\""
eval "TCL_STUB_LIB_FILE=\"${TCL_STUB_LIB_FILE}\""
eval "TCL_STUB_LIB_FLAG=\"${TCL_STUB_LIB_FLAG}\""
eval "TCL_STUB_LIB_SPEC=\"${TCL_STUB_LIB_SPEC}\""
# AC_SUBST(TCL_VERSION)
# AC_SUBST(TCL_BIN_DIR)
# AC_SUBST(TCL_SRC_DIR)
# AC_SUBST(TCL_LIB_FILE)
# AC_SUBST(TCL_LIB_FLAG)
AC_SUBST(TCL_LIB_SPEC)
AC_SUBST(TCL_INCLUDE_SPEC)
# AC_SUBST(TCL_STUB_LIB_FILE)
# AC_SUBST(TCL_STUB_LIB_FLAG)
# AC_SUBST(TCL_STUB_LIB_SPEC)
])
#------------------------------------------------------------------------
# SC_LOAD_TKCONFIG --
#
# Load the tkConfig.sh file
#
# Arguments:
#
# Requires the following vars to be set:
# TK_BIN_DIR
#
# Results:
#
# Sets the following vars that should be in tkConfig.sh:
# TK_BIN_DIR
#------------------------------------------------------------------------
AC_DEFUN(SC_LOAD_TKCONFIG, [
AC_MSG_CHECKING([for existence of $TK_BIN_DIR/tkConfig.sh])
if test -f "$TK_BIN_DIR/tkConfig.sh" ; then
AC_MSG_RESULT([loading])
. $TK_BIN_DIR/tkConfig.sh
else
AC_MSG_RESULT([could not find $TK_BIN_DIR/tkConfig.sh])
fi
AC_SUBST(TK_LIB_SPEC)
AC_SUBST(TK_LIBS)
AC_SUBST(TK_XINCLUDES)
AC_SUBST(TK_PREFIX)
# AC_SUBST(TK_VERSION)
# AC_SUBST(TK_BIN_DIR)
# AC_SUBST(TK_SRC_DIR)
# AC_SUBST(TK_LIB_FILE)
])
dnl From Bruno Haible.
AC_DEFUN([AC_LANGINFO_CODESET],
[
AC_CACHE_CHECK([for nl_langinfo and CODESET], am_cv_langinfo_codeset,
[AC_TRY_LINK([#include <langinfo.h>],
[char* cs = nl_langinfo(CODESET);],
am_cv_langinfo_codeset=yes,
am_cv_langinfo_codeset=no)
])
if test $am_cv_langinfo_codeset = yes; then
AC_DEFINE(HAVE_LANGINFO_CODESET, 1,
[Define if you have <langinfo.h> and nl_langinfo(CODESET).])
fi
])

File diff suppressed because it is too large

View File

@@ -1,136 +0,0 @@
#! /bin/sh
# Wrapper for compilers which do not understand `-c -o'.
scriptversion=2003-11-09.00
# Copyright (C) 1999, 2000, 2003 Free Software Foundation, Inc.
# Written by Tom Tromey <tromey@cygnus.com>.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA 02111-1307, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# This file is maintained in Automake, please report
# bugs to <bug-automake@gnu.org> or send patches to
# <automake-patches@gnu.org>.
case $1 in
'')
echo "$0: No command. Try \`$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: compile [--help] [--version] PROGRAM [ARGS]
Wrapper for compilers which do not understand `-c -o'.
Remove `-o dest.o' from ARGS, run PROGRAM with the remaining
arguments, and rename the output as expected.
If you are trying to build a whole package this is not the
right script to run: please start by reading the file `INSTALL'.
Report bugs to <bug-automake@gnu.org>.
EOF
exit 0
;;
-v | --v*)
echo "compile $scriptversion"
exit 0
;;
esac
prog=$1
shift
ofile=
cfile=
args=
while test $# -gt 0; do
case "$1" in
-o)
# configure might choose to run compile as `compile cc -o foo foo.c'.
# So we do something ugly here.
ofile=$2
shift
case "$ofile" in
*.o | *.obj)
;;
*)
args="$args -o $ofile"
ofile=
;;
esac
;;
*.c)
cfile=$1
args="$args $1"
;;
*)
args="$args $1"
;;
esac
shift
done
if test -z "$ofile" || test -z "$cfile"; then
# If no `-o' option was seen then we might have been invoked from a
# pattern rule where we don't need one. That is ok -- this is a
# normal compilation that the losing compiler can handle. If no
# `.c' file was seen then we are probably linking. That is also
# ok.
exec "$prog" $args
fi
# Name of file we expect compiler to create.
cofile=`echo $cfile | sed -e 's|^.*/||' -e 's/\.c$/.o/'`
# Create the lock directory.
# Note: use `[/.-]' here to ensure that we don't use the same name
# that we are using for the .o file. Also, base the name on the expected
# object file name, since that is what matters with a parallel build.
lockdir=`echo $cofile | sed -e 's|[/.-]|_|g'`.d
while true; do
if mkdir $lockdir > /dev/null 2>&1; then
break
fi
sleep 1
done
# FIXME: race condition here if user kills between mkdir and trap.
trap "rmdir $lockdir; exit 1" 1 2 15
# Run the compile.
"$prog" $args
status=$?
if test -f "$cofile"; then
mv "$cofile" "$ofile"
fi
rmdir $lockdir
exit $status
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

View File

@@ -1,86 +0,0 @@
/* config.h.in. Generated from configure.ac by autoheader. */
/* Default installation prefix. */
#undef DEFAULTPATH
/* Define to 1 if you have the <inttypes.h> header file. */
#undef HAVE_INTTYPES_H
/* Define if you have <langinfo.h> and nl_langinfo(CODESET). */
#undef HAVE_LANGINFO_CODESET
/* Define to 1 if you have the <locale.h> header file. */
#undef HAVE_LOCALE_H
/* Define to 1 if your system has a GNU libc compatible `malloc' function, and
to 0 otherwise. */
#undef HAVE_MALLOC
/* Define to 1 if you have the <malloc.h> header file. */
#undef HAVE_MALLOC_H
/* Define to 1 if you have the <memory.h> header file. */
#undef HAVE_MEMORY_H
/* Define to 1 if you have the <stdint.h> header file. */
#undef HAVE_STDINT_H
/* Define to 1 if you have the <stdlib.h> header file. */
#undef HAVE_STDLIB_H
/* Define to 1 if you have the `strchr' function. */
#undef HAVE_STRCHR
/* Define to 1 if you have the `strdup' function. */
#undef HAVE_STRDUP
/* Define to 1 if you have the <strings.h> header file. */
#undef HAVE_STRINGS_H
/* Define to 1 if you have the <string.h> header file. */
#undef HAVE_STRING_H
/* Define to 1 if you have the `strrchr' function. */
#undef HAVE_STRRCHR
/* Define to 1 if you have the `strstr' function. */
#undef HAVE_STRSTR
/* Define to 1 if you have the `strtol' function. */
#undef HAVE_STRTOL
/* Define to 1 if you have the <sys/stat.h> header file. */
#undef HAVE_SYS_STAT_H
/* Define to 1 if you have the <sys/types.h> header file. */
#undef HAVE_SYS_TYPES_H
/* Define to 1 if you have the <unistd.h> header file. */
#undef HAVE_UNISTD_H
/* Name of package */
#undef PACKAGE
/* Define to the address where bug reports for this package should be sent. */
#undef PACKAGE_BUGREPORT
/* Define to the full name of this package. */
#undef PACKAGE_NAME
/* Define to the full name and version of this package. */
#undef PACKAGE_STRING
/* Define to the one symbol short name of this package. */
#undef PACKAGE_TARNAME
/* Define to the version of this package. */
#undef PACKAGE_VERSION
/* Define to 1 if you have the ANSI C header files. */
#undef STDC_HEADERS
/* Version number of package */
#undef VERSION
/* Define to rpl_malloc if the replacement function should be used. */
#undef malloc

File diff suppressed because it is too large

View File

@@ -1,74 +0,0 @@
# -*- Autoconf -*-
# Process this file with autoconf to produce a configure script.
AC_PREREQ(2.59)
AC_INIT(WordNet, 3.0, [wordnet@princeton.edu], wordnet)
AC_CONFIG_SRCDIR([config.h.in])
AC_CONFIG_HEADER([config.h])
# Checks for programs.
AC_PROG_CC
AC_PROG_RANLIB
AC_PROG_INSTALL
# Checks for header files.
AC_HEADER_STDC
AC_CHECK_HEADERS([locale.h malloc.h stdlib.h string.h])
# Checks for typedefs, structures, and compiler characteristics.
# Checks for library functions.
AC_FUNC_MALLOC
AC_CHECK_FUNCS([strchr strdup strrchr strstr strtol])
# Set HAVE_LANGINFO_CODESET if nl_langinfo is found
AC_LANGINFO_CODESET
AM_INIT_AUTOMAKE(WordNet, 3.0)
SC_PATH_TCLCONFIG
SC_PATH_TKCONFIG
SC_LOAD_TCLCONFIG
SC_LOAD_TKCONFIG
# Set default installation prefix.
AC_PREFIX_DEFAULT([/usr/local/WordNet-3.0])
ac_prefix=$prefix
if test "x$ac_prefix" = "xNONE"; then
ac_prefix=$ac_default_prefix
fi
AC_SUBST(ac_prefix)
AH_TEMPLATE([DEFAULTPATH],[The default search path for WordNet data files])
AC_DEFINE_UNQUOTED(DEFAULTPATH, ["$ac_prefix/dict"], [Default installation prefix.])
#AC_DEFINE_UNQUOTED(DEFAULTPATH,"${prefix}/dict")
# This doesn't do anything
AC_CONFIG_COMMANDS([default])
AC_CONFIG_FILES(Makefile dict/Makefile doc/Makefile doc/html/Makefile doc/man/Makefile doc/pdf/Makefile doc/ps/Makefile include/Makefile include/tk/Makefile
src/Makefile lib/Makefile lib/wnres/Makefile)
AC_OUTPUT
AC_MSG_RESULT(
[
WordNet is now configured
Installation directory: ${prefix}
To build and install WordNet:
make
make install
To run, environment variables should be set as follows:
PATH - include ${bindir}
WNHOME - if not using default installation location, set to ${prefix}
See INSTALL file for details and additional environment variables
which may need to be set on your system.
])

View File

@@ -1,522 +0,0 @@
#! /bin/sh
# depcomp - compile a program generating dependencies as side-effects
scriptversion=2004-05-31.23
# Copyright (C) 1999, 2000, 2003, 2004 Free Software Foundation, Inc.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
# Originally written by Alexandre Oliva <oliva@dcc.unicamp.br>.
case $1 in
'')
echo "$0: No command. Try \`$0 --help' for more information." 1>&2
exit 1;
;;
-h | --h*)
cat <<\EOF
Usage: depcomp [--help] [--version] PROGRAM [ARGS]
Run PROGRAMS ARGS to compile a file, generating dependencies
as side-effects.
Environment variables:
depmode Dependency tracking mode.
source Source file read by `PROGRAMS ARGS'.
object Object file output by `PROGRAMS ARGS'.
DEPDIR directory where to store dependencies.
depfile Dependency file to output.
tmpdepfile Temporary file to use when outputing dependencies.
libtool Whether libtool is used (yes/no).
Report bugs to <bug-automake@gnu.org>.
EOF
exit 0
;;
-v | --v*)
echo "depcomp $scriptversion"
exit 0
;;
esac
if test -z "$depmode" || test -z "$source" || test -z "$object"; then
echo "depcomp: Variables source, object and depmode must be set" 1>&2
exit 1
fi
# Dependencies for sub/bar.o or sub/bar.obj go into sub/.deps/bar.Po.
depfile=${depfile-`echo "$object" |
sed 's|[^\\/]*$|'${DEPDIR-.deps}'/&|;s|\.\([^.]*\)$|.P\1|;s|Pobj$|Po|'`}
tmpdepfile=${tmpdepfile-`echo "$depfile" | sed 's/\.\([^.]*\)$/.T\1/'`}
rm -f "$tmpdepfile"
# Some modes work just like other modes, but use different flags. We
# parameterize here, but still list the modes in the big case below,
# to make depend.m4 easier to write. Note that we *cannot* use a case
# here, because this file can only contain one case statement.
if test "$depmode" = hp; then
# HP compiler uses -M and no extra arg.
gccflag=-M
depmode=gcc
fi
if test "$depmode" = dashXmstdout; then
# This is just like dashmstdout with a different argument.
dashmflag=-xM
depmode=dashmstdout
fi
case "$depmode" in
gcc3)
## gcc 3 implements dependency tracking that does exactly what
## we want. Yay! Note: for some reason libtool 1.4 doesn't like
## it if -MD -MP comes after the -MF stuff. Hmm.
"$@" -MT "$object" -MD -MP -MF "$tmpdepfile"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
mv "$tmpdepfile" "$depfile"
;;
gcc)
## There are various ways to get dependency output from gcc. Here's
## why we pick this rather obscure method:
## - Don't want to use -MD because we'd like the dependencies to end
## up in a subdir. Having to rename by hand is ugly.
## (We might end up doing this anyway to support other compilers.)
## - The DEPENDENCIES_OUTPUT environment variable makes gcc act like
## -MM, not -M (despite what the docs say).
## - Using -M directly means running the compiler twice (even worse
## than renaming).
if test -z "$gccflag"; then
gccflag=-MD,
fi
"$@" -Wp,"$gccflag$tmpdepfile"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
echo "$object : \\" > "$depfile"
alpha=ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz
## The second -e expression handles DOS-style file names with drive letters.
sed -e 's/^[^:]*: / /' \
-e 's/^['$alpha']:\/[^:]*: / /' < "$tmpdepfile" >> "$depfile"
## This next piece of magic avoids the `deleted header file' problem.
## The problem is that when a header file which appears in a .P file
## is deleted, the dependency causes make to die (because there is
## typically no way to rebuild the header). We avoid this by adding
## dummy dependencies for each header file. Too bad gcc doesn't do
## this for us directly.
tr ' ' '
' < "$tmpdepfile" |
## Some versions of gcc put a space before the `:'. On the theory
## that the space means something, we add a space to the output as
## well.
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
hp)
# This case exists only to let depend.m4 do its work. It works by
# looking at the text of this script. This case will never be run,
# since it is checked for above.
exit 1
;;
sgi)
if test "$libtool" = yes; then
"$@" "-Wp,-MDupdate,$tmpdepfile"
else
"$@" -MDupdate "$tmpdepfile"
fi
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
if test -f "$tmpdepfile"; then # yes, the sourcefile depend on other files
echo "$object : \\" > "$depfile"
# Clip off the initial element (the dependent). Don't try to be
# clever and replace this with sed code, as IRIX sed won't handle
# lines with more than a fixed number of characters (4096 in
# IRIX 6.2 sed, 8192 in IRIX 6.5). We also remove comment lines;
# the IRIX cc adds comments like `#:fec' to the end of the
# dependency line.
tr ' ' '
' < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' | \
tr '
' ' ' >> $depfile
echo >> $depfile
# The second pass generates a dummy entry for each header file.
tr ' ' '
' < "$tmpdepfile" \
| sed -e 's/^.*\.o://' -e 's/#.*$//' -e '/^$/ d' -e 's/$/:/' \
>> $depfile
else
# The sourcefile does not contain any dependencies, so just
# store a dummy comment line, to avoid errors with the Makefile
# "include basename.Plo" scheme.
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
aix)
# The C for AIX Compiler uses -M and outputs the dependencies
# in a .u file. In older versions, this file always lives in the
# current directory. Also, the AIX compiler puts `$object:' at the
# start of each line; $object doesn't have directory information.
# Version 6 uses the directory in both cases.
stripped=`echo "$object" | sed 's/\(.*\)\..*$/\1/'`
tmpdepfile="$stripped.u"
if test "$libtool" = yes; then
"$@" -Wc,-M
else
"$@" -M
fi
stat=$?
if test -f "$tmpdepfile"; then :
else
stripped=`echo "$stripped" | sed 's,^.*/,,'`
tmpdepfile="$stripped.u"
fi
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
if test -f "$tmpdepfile"; then
outname="$stripped.o"
# Each line is of the form `foo.o: dependent.h'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed -e "s,^$outname:,$object :," < "$tmpdepfile" > "$depfile"
sed -e "s,^$outname: \(.*\)$,\1:," < "$tmpdepfile" >> "$depfile"
else
# The sourcefile does not contain any dependencies, so just
# store a dummy comment line, to avoid errors with the Makefile
# "include basename.Plo" scheme.
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
icc)
# Intel's C compiler understands `-MD -MF file'. However on
# icc -MD -MF foo.d -c -o sub/foo.o sub/foo.c
# ICC 7.0 will fill foo.d with something like
# foo.o: sub/foo.c
# foo.o: sub/foo.h
# which is wrong. We want:
# sub/foo.o: sub/foo.c
# sub/foo.o: sub/foo.h
# sub/foo.c:
# sub/foo.h:
# ICC 7.1 will output
# foo.o: sub/foo.c sub/foo.h
# and will wrap long lines using \ :
# foo.o: sub/foo.c ... \
# sub/foo.h ... \
# ...
"$@" -MD -MF "$tmpdepfile"
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile"
exit $stat
fi
rm -f "$depfile"
# Each line is of the form `foo.o: dependent.h',
# or `foo.o: dep1.h dep2.h \', or ` dep3.h dep4.h \'.
# Do two passes, one to just change these to
# `$object: dependent.h' and one to simply `dependent.h:'.
sed "s,^[^:]*:,$object :," < "$tmpdepfile" > "$depfile"
# Some versions of the HPUX 10.20 sed can't process this invocation
# correctly. Breaking it into two sed invocations is a workaround.
sed 's,^[^:]*: \(.*\)$,\1,;s/^\\$//;/^$/d;/:$/d' < "$tmpdepfile" |
sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
tru64)
# The Tru64 compiler uses -MD to generate dependencies as a side
# effect. `cc -MD -o foo.o ...' puts the dependencies into `foo.o.d'.
# At least on Alpha/Redhat 6.1, Compaq CCC V6.2-504 seems to put
# dependencies in `foo.d' instead, so we check for that too.
# Subdirectories are respected.
dir=`echo "$object" | sed -e 's|/[^/]*$|/|'`
test "x$dir" = "x$object" && dir=
base=`echo "$object" | sed -e 's|^.*/||' -e 's/\.o$//' -e 's/\.lo$//'`
if test "$libtool" = yes; then
# Dependencies are output in .lo.d with libtool 1.4.
# With libtool 1.5 they are output both in $dir.libs/$base.o.d
# and in $dir.libs/$base.o.d and $dir$base.o.d. We process the
# latter, because the former will be cleaned when $dir.libs is
# erased.
tmpdepfile1="$dir.libs/$base.lo.d"
tmpdepfile2="$dir$base.o.d"
tmpdepfile3="$dir.libs/$base.d"
"$@" -Wc,-MD
else
tmpdepfile1="$dir$base.o.d"
tmpdepfile2="$dir$base.d"
tmpdepfile3="$dir$base.d"
"$@" -MD
fi
stat=$?
if test $stat -eq 0; then :
else
rm -f "$tmpdepfile1" "$tmpdepfile2" "$tmpdepfile3"
exit $stat
fi
if test -f "$tmpdepfile1"; then
tmpdepfile="$tmpdepfile1"
elif test -f "$tmpdepfile2"; then
tmpdepfile="$tmpdepfile2"
else
tmpdepfile="$tmpdepfile3"
fi
if test -f "$tmpdepfile"; then
sed -e "s,^.*\.[a-z]*:,$object:," < "$tmpdepfile" > "$depfile"
# That's a tab and a space in the [].
sed -e 's,^.*\.[a-z]*:[ ]*,,' -e 's,$,:,' < "$tmpdepfile" >> "$depfile"
else
echo "#dummy" > "$depfile"
fi
rm -f "$tmpdepfile"
;;
#nosideeffect)
# This comment above is used by automake to tell side-effect
# dependency tracking mechanisms from slower ones.
dashmstdout)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# Remove `-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
test -z "$dashmflag" && dashmflag=-M
# Require at least two characters before searching for `:'
# in the target name. This is to cope with DOS-style filenames:
# a dependency such as `c:/foo/bar' could be seen as target `c' otherwise.
"$@" $dashmflag |
sed 's:^[ ]*[^: ][^:][^:]*\:[ ]*:'"$object"'\: :' > "$tmpdepfile"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
tr ' ' '
' < "$tmpdepfile" | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
dashXmstdout)
# This case only exists to satisfy depend.m4. It is never actually
# run, as this mode is specially recognized in the preamble.
exit 1
;;
makedepend)
"$@" || exit $?
# Remove any Libtool call
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# X makedepend
shift
cleared=no
for arg in "$@"; do
case $cleared in
no)
set ""; shift
cleared=yes ;;
esac
case "$arg" in
-D*|-I*)
set fnord "$@" "$arg"; shift ;;
# Strip any option that makedepend may not understand. Remove
# the object too, otherwise makedepend will parse it as a source file.
-*|$object)
;;
*)
set fnord "$@" "$arg"; shift ;;
esac
done
obj_suffix="`echo $object | sed 's/^.*\././'`"
touch "$tmpdepfile"
${MAKEDEPEND-makedepend} -o"$obj_suffix" -f"$tmpdepfile" "$@"
rm -f "$depfile"
cat < "$tmpdepfile" > "$depfile"
sed '1,2d' "$tmpdepfile" | tr ' ' '
' | \
## Some versions of the HPUX 10.20 sed can't process this invocation
## correctly. Breaking it into two sed invocations is a workaround.
sed -e 's/^\\$//' -e '/^$/d' -e '/:$/d' | sed -e 's/$/ :/' >> "$depfile"
rm -f "$tmpdepfile" "$tmpdepfile".bak
;;
cpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout.
"$@" || exit $?
# Remove the call to Libtool.
if test "$libtool" = yes; then
while test $1 != '--mode=compile'; do
shift
done
shift
fi
# Remove `-o $object'.
IFS=" "
for arg
do
case $arg in
-o)
shift
;;
$object)
shift
;;
*)
set fnord "$@" "$arg"
shift # fnord
shift # $arg
;;
esac
done
"$@" -E |
sed -n '/^# [0-9][0-9]* "\([^"]*\)".*/ s:: \1 \\:p' |
sed '$ s: \\$::' > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
cat < "$tmpdepfile" >> "$depfile"
sed < "$tmpdepfile" '/^$/d;s/^ //;s/ \\$//;s/$/ :/' >> "$depfile"
rm -f "$tmpdepfile"
;;
msvisualcpp)
# Important note: in order to support this mode, a compiler *must*
# always write the preprocessed file to stdout, regardless of -o,
# because we must use -o when running libtool.
"$@" || exit $?
IFS=" "
for arg
do
case "$arg" in
"-Gm"|"/Gm"|"-Gi"|"/Gi"|"-ZI"|"/ZI")
set fnord "$@"
shift
shift
;;
*)
set fnord "$@" "$arg"
shift
shift
;;
esac
done
"$@" -E |
sed -n '/^#line [0-9][0-9]* "\([^"]*\)"/ s::echo "`cygpath -u \\"\1\\"`":p' | sort | uniq > "$tmpdepfile"
rm -f "$depfile"
echo "$object : \\" > "$depfile"
. "$tmpdepfile" | sed 's% %\\ %g' | sed -n '/^\(.*\)$/ s:: \1 \\:p' >> "$depfile"
echo " " >> "$depfile"
. "$tmpdepfile" | sed 's% %\\ %g' | sed -n '/^\(.*\)$/ s::\1\::p' >> "$depfile"
rm -f "$tmpdepfile"
;;
none)
exec "$@"
;;
*)
echo "Unknown depmode $depmode" 1>&2
exit 1
;;
esac
exit 0
# Local Variables:
# mode: shell-script
# sh-indentation: 2
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:
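
For reference, the depmode branches above are selected entirely through environment variables: depmode picks the case branch, while source, object, depfile and tmpdepfile name the files involved and libtool marks libtool-wrapped compiles; the real compiler command follows as the positional arguments. A minimal sketch of a manual invocation, assuming a hypothetical foo.c/foo.o and the gcc3 depmode that this tree's configure output selects (CCDEPMODE = depmode=gcc3 in the Makefiles below); the flags mirror DEFS/CFLAGS from those Makefiles:

    # Hypothetical example, not part of the original script: an automake
    # Makefile rule normally wraps the compile in depcomp like this so the
    # dependency file ends up next to the object (here .deps/foo.Po).
    source='foo.c' object='foo.o' depfile='.deps/foo.Po' depmode=gcc3 \
      sh ./depcomp gcc -DHAVE_CONFIG_H -I. -g -O2 -c -o foo.o foo.c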

View File

@ -1,314 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# dict/Makefile. Generated from Makefile.in by configure.
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
srcdir = .
top_srcdir = ..
pkgdatadir = $(datadir)/WordNet
pkglibdir = $(libdir)/WordNet
pkgincludedir = $(includedir)/WordNet
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = /usr/csl/bin/install -c
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = dict
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(dictdir)"
dictDATA_INSTALL = $(INSTALL_DATA)
DATA = $(dict_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run aclocal-1.9
AMDEP_FALSE = #
AMDEP_TRUE =
AMTAR = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run tar
AUTOCONF = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoconf
AUTOHEADER = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoheader
AUTOMAKE = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run automake-1.9
AWK = nawk
CC = gcc
CCDEPMODE = depmode=gcc3
CFLAGS = -g -O2
CPP = gcc -E
CPPFLAGS =
CYGPATH_W = echo
DEFS = -DHAVE_CONFIG_H
DEPDIR = .deps
ECHO_C =
ECHO_N = -n
ECHO_T =
EGREP = egrep
EXEEXT =
INSTALL_DATA = ${INSTALL} -m 644
INSTALL_PROGRAM = ${INSTALL}
INSTALL_SCRIPT = ${INSTALL}
INSTALL_STRIP_PROGRAM = ${SHELL} $(install_sh) -c -s
LDFLAGS =
LIBOBJS =
LIBS =
LTLIBOBJS =
MAKEINFO = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run makeinfo
OBJEXT = o
PACKAGE = WordNet
PACKAGE_BUGREPORT = wordnet@princeton.edu
PACKAGE_NAME = WordNet
PACKAGE_STRING = WordNet 3.0
PACKAGE_TARNAME = wordnet
PACKAGE_VERSION = 3.0
PATH_SEPARATOR = :
RANLIB = ranlib
SET_MAKE =
SHELL = /bin/bash
STRIP =
TCL_INCLUDE_SPEC = -I/usr/csl/include
TCL_LIB_SPEC = -L/usr/csl/lib -ltcl8.4
TK_LIBS = -L/usr/openwin/lib -lX11 -ldl -lpthread -lsocket -lnsl -lm
TK_LIB_SPEC = -L/usr/csl/lib -ltk8.4
TK_PREFIX = /usr/csl
TK_XINCLUDES = -I/usr/openwin/include
VERSION = 3.0
ac_ct_CC = gcc
ac_ct_RANLIB = ranlib
ac_ct_STRIP =
ac_prefix = /usr/local/WordNet-3.0
am__fastdepCC_FALSE = #
am__fastdepCC_TRUE =
am__include = include
am__leading_dot = .
am__quote =
am__tar = ${AMTAR} chof - "$$tardir"
am__untar = ${AMTAR} xf -
bindir = ${exec_prefix}/bin
build_alias =
datadir = ${prefix}/share
exec_prefix = ${prefix}
host_alias =
includedir = ${prefix}/include
infodir = ${prefix}/info
install_sh = /people/wn/src/Release/3.0/Unix/install-sh
libdir = ${exec_prefix}/lib
libexecdir = ${exec_prefix}/libexec
localstatedir = ${prefix}/var
mandir = ${prefix}/man
mkdir_p = $(install_sh) -d
oldincludedir = /usr/include
prefix = /usr/local/WordNet-3.0
program_transform_name = s,x,x,
sbindir = ${exec_prefix}/sbin
sharedstatedir = ${prefix}/com
sysconfdir = ${prefix}/etc
target_alias =
dictdir = $(prefix)/dict
dict_DATA = adj.exc adv.exc cntlist cntlist.rev data.adj data.adv data.noun data.verb frames.vrb index.adj index.adv index.noun index.sense index.verb log.grind.3.0 noun.exc sentidx.vrb sents.vrb verb.Framestext verb.exc lexnames
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu dict/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu dict/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-dictDATA: $(dict_DATA)
@$(NORMAL_INSTALL)
test -z "$(dictdir)" || $(mkdir_p) "$(DESTDIR)$(dictdir)"
@list='$(dict_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(dictDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(dictdir)/$$f'"; \
$(dictDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(dictdir)/$$f"; \
done
uninstall-dictDATA:
@$(NORMAL_UNINSTALL)
@list='$(dict_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(dictdir)/$$f'"; \
rm -f "$(DESTDIR)$(dictdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(DATA)
installdirs:
for dir in "$(DESTDIR)$(dictdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-dictDATA
install-exec-am:
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-dictDATA uninstall-info-am
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am \
install-dictDATA install-exec install-exec-am install-info \
install-info-am install-man install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-generic pdf \
pdf-am ps ps-am uninstall uninstall-am uninstall-dictDATA \
uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,2 +0,0 @@
dictdir = $(prefix)/dict
dict_DATA = adj.exc adv.exc cntlist cntlist.rev data.adj data.adv data.noun data.verb frames.vrb index.adj index.adv index.noun index.sense index.verb log.grind.3.0 noun.exc sentidx.vrb sents.vrb verb.Framestext verb.exc lexnames

View File

@ -1,314 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = dict
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(dictdir)"
dictDATA_INSTALL = $(INSTALL_DATA)
DATA = $(dict_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
dictdir = $(prefix)/dict
dict_DATA = adj.exc adv.exc cntlist cntlist.rev data.adj data.adv data.noun data.verb frames.vrb index.adj index.adv index.noun index.sense index.verb log.grind.3.0 noun.exc sentidx.vrb sents.vrb verb.Framestext verb.exc lexnames
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu dict/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu dict/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-dictDATA: $(dict_DATA)
@$(NORMAL_INSTALL)
test -z "$(dictdir)" || $(mkdir_p) "$(DESTDIR)$(dictdir)"
@list='$(dict_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(dictDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(dictdir)/$$f'"; \
$(dictDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(dictdir)/$$f"; \
done
uninstall-dictDATA:
@$(NORMAL_UNINSTALL)
@list='$(dict_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(dictdir)/$$f'"; \
rm -f "$(DESTDIR)$(dictdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(DATA)
installdirs:
for dir in "$(DESTDIR)$(dictdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-dictDATA
install-exec-am:
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-dictDATA uninstall-info-am
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am \
install-dictDATA install-exec install-exec-am install-info \
install-info-am install-man install-strip installcheck \
installcheck-am installdirs maintainer-clean \
maintainer-clean-generic mostlyclean mostlyclean-generic pdf \
pdf-am ps ps-am uninstall uninstall-am uninstall-dictDATA \
uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

File diff suppressed because it is too large

View File

@ -1,7 +0,0 @@
best well
better well
deeper deeply
farther far
further far
harder hard
hardest hard

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

File diff suppressed because one or more lines are too long

View File

@ -1,35 +0,0 @@
1 Something ----s
2 Somebody ----s
3 It is ----ing
4 Something is ----ing PP
5 Something ----s something Adjective/Noun
6 Something ----s Adjective/Noun
7 Somebody ----s Adjective
8 Somebody ----s something
9 Somebody ----s somebody
10 Something ----s somebody
11 Something ----s something
12 Something ----s to somebody
13 Somebody ----s on something
14 Somebody ----s somebody something
15 Somebody ----s something to somebody
16 Somebody ----s something from somebody
17 Somebody ----s somebody with something
18 Somebody ----s somebody of something
19 Somebody ----s something on somebody
20 Somebody ----s somebody PP
21 Somebody ----s something PP
22 Somebody ----s PP
23 Somebody's (body part) ----s
24 Somebody ----s somebody to INFINITIVE
25 Somebody ----s somebody INFINITIVE
26 Somebody ----s that CLAUSE
27 Somebody ----s to somebody
28 Somebody ----s to INFINITIVE
29 Somebody ----s whether INFINITIVE
30 Somebody ----s somebody into V-ing something
31 Somebody ----s something with something
32 Somebody ----s INFINITIVE
33 Somebody ----s VERB-ing
34 It ----s that CLAUSE
35 Something ----s INFINITIVE

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,45 +0,0 @@
00 adj.all 3
01 adj.pert 3
02 adv.all 4
03 noun.Tops 1
04 noun.act 1
05 noun.animal 1
06 noun.artifact 1
07 noun.attribute 1
08 noun.body 1
09 noun.cognition 1
10 noun.communication 1
11 noun.event 1
12 noun.feeling 1
13 noun.food 1
14 noun.group 1
15 noun.location 1
16 noun.motive 1
17 noun.object 1
18 noun.person 1
19 noun.phenomenon 1
20 noun.plant 1
21 noun.possession 1
22 noun.process 1
23 noun.quantity 1
24 noun.relation 1
25 noun.shape 1
26 noun.state 1
27 noun.substance 1
28 noun.time 1
29 verb.body 2
30 verb.change 2
31 verb.cognition 2
32 verb.communication 2
33 verb.competition 2
34 verb.consumption 2
35 verb.contact 2
36 verb.creation 2
37 verb.emotion 2
38 verb.motion 2
39 verb.perception 2
40 verb.possession 2
41 verb.social 2
42 verb.stative 2
43 verb.weather 2
44 adj.ppl 3

View File

@ -1,89 +0,0 @@
Processing adj.all...
Processing adj.pert...
Processing adv.all...
Processing noun.Tops...
noun.Tops, line 7: warning: No hypernyms in synset
Processing noun.act...
Processing noun.animal...
Processing noun.artifact...
Processing noun.attribute...
Processing noun.body...
Processing noun.cognition...
Processing noun.communication...
Processing noun.event...
Processing noun.feeling...
Processing noun.food...
Processing noun.group...
Processing noun.location...
Processing noun.motive...
Processing noun.object...
Processing noun.person...
Processing noun.phenomenon...
Processing noun.plant...
Processing noun.possession...
Processing noun.process...
Processing noun.quantity...
Processing noun.relation...
Processing noun.shape...
Processing noun.state...
Processing noun.substance...
Processing noun.time...
Processing verb.body...
Processing verb.change...
Processing verb.cognition...
Processing verb.communication...
Processing verb.competition...
Processing verb.consumption...
Processing verb.contact...
Processing verb.creation...
Processing verb.emotion...
Processing verb.motion...
Processing verb.perception...
Processing verb.possession...
Processing verb.social...
Processing verb.stative...
Processing verb.weather...
Processing adj.ppl...
*** Statistics for ground files:
82115 noun synsets
13767 verb synsets
3812 adj synsets
3621 adv synsets
3651 pertainym synsets
10693 adjective satellite synsets
117659 synsets in total (including satellite and pertainym synsets)
82115 noun synsets have definitional glosses
13767 verb synsets have definitional glosses
3812 adj synsets have definitional glosses
3621 adv synsets have definitional glosses
3651 pertainym synsets have definitional glosses
10693 adjective satellite synsets have definitional glosses
117659 definitional glosses in total (including adjective satellite synsets)
225000 pointers in total
206978 synonyms in synsets
147306 unique word phrases
83118 word phrases of length 1
54533 word phrases of length 2
7766 word phrases of length 3
1454 word phrases of length 4
298 word phrases of length 5
80 word phrases of length 6
28 word phrases of length 7
20 word phrases of length 8
9 word phrases of length 9
Resolving pointers...
Done resolving pointers...
Getting sense counts...
Done with sense counts...
Figuring out byte offsets...
Dumping data files...
Done dumping data files...
Cannot open file: cntlist
Cannot order senses
Dumping index files...
Done dumping index files...
Dumping sense index...
Done dumping sense index...

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,170 +0,0 @@
1 The children %s to the playground
10 The cars %s down the avenue
100 These glasses %s easily
101 These fabrics %s easily
102 They %s their earnings this year
103 Their earnings %s this year
104 The water %ss
105 They %s the water
106 The animals %s
107 They %s a long time
108 The car %ss the tree
109 John will %s angry
11 They %s the car down the avenue
110 They %s in the city
111 They won't %s the story
112 They %s that there was a traffic accident
113 They %s whether there was a traffic accident
114 They %s her vice president
115 Did he %s his major works over a short period of time?
116 The chefs %s the vegetables
117 They %s the cape
118 The food does %s good
119 The music does %s good
12 They %s the glass tubes
120 The cool air does %s good
121 This food does %s well
122 It was %sing all day long
123 They %s him to write the letter
124 They %s him into writing the letter
125 They %s him from writing the letter
126 The bad news will %s him
127 The good news will %s her
128 The chef wants to %s the eggs
129 Sam wants to %s with Sue
13 The glass tubes %s
130 The fighter managed to %s his opponent
131 These cars won't %s
132 The branches %s from the trees
133 The stock market is going to %s
134 The moon will soon %s
135 The business is going to %s
136 The airplane is sure to %s
137 They %s to move
138 They %s moving
139 Sam and Sue %s the movie
14 Sam and Sue %s
140 They want to %s the prisoners
141 They want to %s the doors
142 The doors %s
143 Did he %s his foot?
144 Did his feet %s?
145 They will %s the duet
146 They %s their hair
147 They %s the trees
148 They %s him of all his money
149 Lights %s on the horizon
15 Sam cannot %s Sue
150 The horizon is %sing with lights
151 The crowds %s in the streets
152 The streets %s with crowds
153 Cars %s in the streets
154 The streets %s with cars
155 You can hear animals %s in the meadows
156 The meadows %s with animals
157 The birds %s in the woods
158 The woods %s with many kinds of birds
159 The performance is likely to %s Sue
16 The ropes %s
160 Sam and Sue %s over the results of the experiment
161 In the summer they like to go out and %s
162 The children %s in the rocking chair
163 There %s some children in the rocking chair
164 Some big birds %s in the tree
165 There %s some big birds in the tree
166 The men %s the area for animals
167 The men %s for animals in the area
168 The customs agents %s the bags for drugs
169 They %s him as chairman
17 The strong winds %s the rope
170 They %s him "Bobby"
18 They %s the sheets
19 The sheets didn't %s
2 The banks %s the check
20 The horses %s across the field
21 They %s the bags on the table
22 The men %s the horses across the field
23 Our properties %s at this point
24 His fields %s mine at this point
25 They %s the hill
26 They %s up the hill
27 They %s the river
28 They %s down the river
29 They %s the countryside
3 The checks %s
30 They %s in the countryside
31 These men %s across the river
32 These men %s the river
33 They %s the food to the people
34 They %s the people the food
35 They %s more bread
36 They %s the object in the water
37 The men %s the bookshelves
38 They %s the money in the closet
39 The lights %s from the ceiling
4 The children %s the ball
40 They %s the lights from the ceiling
41 They %s their rifles on the cabinet
42 The chairs %s in the corner
43 The men %s the chairs
44 The women %s water into the bowl
45 Water and oil %s into the bowl
46 They %s the wire around the stick
47 The wires %s around the stick
48 They %s the bread with melted butter
49 They %s the cart with boxes
5 The balls %s
50 They %s the books into the box
51 They %s sugar over the cake
52 They %s the cake with sugar
53 They %s the fruit with a chemical
54 They %s a chemical into the fruit
55 They %s the field with rye
56 They %s rye in the field
57 They %s notices on the doors
58 They %s the doors with notices
59 They %s money on their grandchild
6 The girls %s the wooden sticks
60 They %s their grandchild with money
61 They %s coins on the image
62 They %s the image with coins
63 They %s butter on the bread
64 They %s the lake with fish
65 The children %s the paper with grease
66 The children %s grease onto the paper
67 They %s papers over the floor
68 They %s the floor with papers
69 They %s the money
7 The wooden sticks %s
70 They %s the newspapers
71 They %s the goods
72 The men %s the boat
73 They %s the animals
74 The books %s the box
75 They %s the halls with holly
76 Holly flowers %s the halls
77 The wind storms %s the area with dust and dirt
78 Dust and dirt %s the area
79 The swollen rivers %s the area with water
8 The coins %s
80 The waters %s the area
81 They %s the cloth with water and alcohol
82 Water and alcohol %s the cloth
83 They %s the snow from the path
84 They %s the path of the snow
85 They %s the water from the sink
86 They %s the sink of water
87 They %s the parcel to their parents
88 They %s them the parcel
89 They %s cars to the tourists
9 They %s the coin
90 They %s the tourists their cars
91 They %s the money to them
92 They %s them the money
93 They %s them the information
94 They %s the information to them
95 The parents %s a French poem to the children
96 The parents %s the children a French poem
97 They %s
98 They %s themselves
99 These balls %s easily

View File

@ -1,35 +0,0 @@
1 Something ----s
2 Somebody ----s
3 It is ----ing
4 Something is ----ing PP
5 Something ----s something Adjective/Noun
6 Something ----s Adjective/Noun
7 Somebody ----s Adjective
8 Somebody ----s something
9 Somebody ----s somebody
10 Something ----s somebody
11 Something ----s something
12 Something ----s to somebody
13 Somebody ----s on something
14 Somebody ----s somebody something
15 Somebody ----s something to somebody
16 Somebody ----s something from somebody
17 Somebody ----s somebody with something
18 Somebody ----s somebody of something
19 Somebody ----s something on somebody
20 Somebody ----s somebody PP
21 Somebody ----s something PP
22 Somebody ----s PP
23 Somebody's (body part) ----s
24 Somebody ----s somebody to INFINITIVE
25 Somebody ----s somebody INFINITIVE
26 Somebody ----s that CLAUSE
27 Somebody ----s to somebody
28 Somebody ----s to INFINITIVE
29 Somebody ----s whether INFINITIVE
30 Somebody ----s somebody into V-ing something
31 Somebody ----s something with something
32 Somebody ----s INFINITIVE
33 Somebody ----s VERB-ing
34 It ----s that CLAUSE
35 Something ----s INFINITIVE

File diff suppressed because it is too large

View File

@ -1,459 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# include/Makefile. Generated from Makefile.in by configure.
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
srcdir = .
top_srcdir = ..
pkgdatadir = $(datadir)/WordNet
pkglibdir = $(libdir)/WordNet
pkgincludedir = $(includedir)/WordNet
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = /usr/csl/bin/install -c
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = include
DIST_COMMON = $(include_HEADERS) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-exec-recursive install-info-recursive \
install-recursive installcheck-recursive installdirs-recursive \
pdf-recursive ps-recursive uninstall-info-recursive \
uninstall-recursive
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(includedir)"
includeHEADERS_INSTALL = $(INSTALL_HEADER)
HEADERS = $(include_HEADERS)
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run aclocal-1.9
AMDEP_FALSE = #
AMDEP_TRUE =
AMTAR = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run tar
AUTOCONF = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoconf
AUTOHEADER = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoheader
AUTOMAKE = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run automake-1.9
AWK = nawk
CC = gcc
CCDEPMODE = depmode=gcc3
CFLAGS = -g -O2
CPP = gcc -E
CPPFLAGS =
CYGPATH_W = echo
DEFS = -DHAVE_CONFIG_H
DEPDIR = .deps
ECHO_C =
ECHO_N = -n
ECHO_T =
EGREP = egrep
EXEEXT =
INSTALL_DATA = ${INSTALL} -m 644
INSTALL_PROGRAM = ${INSTALL}
INSTALL_SCRIPT = ${INSTALL}
INSTALL_STRIP_PROGRAM = ${SHELL} $(install_sh) -c -s
LDFLAGS =
LIBOBJS =
LIBS =
LTLIBOBJS =
MAKEINFO = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run makeinfo
OBJEXT = o
PACKAGE = WordNet
PACKAGE_BUGREPORT = wordnet@princeton.edu
PACKAGE_NAME = WordNet
PACKAGE_STRING = WordNet 3.0
PACKAGE_TARNAME = wordnet
PACKAGE_VERSION = 3.0
PATH_SEPARATOR = :
RANLIB = ranlib
SET_MAKE =
SHELL = /bin/bash
STRIP =
TCL_INCLUDE_SPEC = -I/usr/csl/include
TCL_LIB_SPEC = -L/usr/csl/lib -ltcl8.4
TK_LIBS = -L/usr/openwin/lib -lX11 -ldl -lpthread -lsocket -lnsl -lm
TK_LIB_SPEC = -L/usr/csl/lib -ltk8.4
TK_PREFIX = /usr/csl
TK_XINCLUDES = -I/usr/openwin/include
VERSION = 3.0
ac_ct_CC = gcc
ac_ct_RANLIB = ranlib
ac_ct_STRIP =
ac_prefix = /usr/local/WordNet-3.0
am__fastdepCC_FALSE = #
am__fastdepCC_TRUE =
am__include = include
am__leading_dot = .
am__quote =
am__tar = ${AMTAR} chof - "$$tardir"
am__untar = ${AMTAR} xf -
bindir = ${exec_prefix}/bin
build_alias =
datadir = ${prefix}/share
exec_prefix = ${prefix}
host_alias =
includedir = ${prefix}/include
infodir = ${prefix}/info
install_sh = /people/wn/src/Release/3.0/Unix/install-sh
libdir = ${exec_prefix}/lib
libexecdir = ${exec_prefix}/libexec
localstatedir = ${prefix}/var
mandir = ${prefix}/man
mkdir_p = $(install_sh) -d
oldincludedir = /usr/include
prefix = /usr/local/WordNet-3.0
program_transform_name = s,x,x,
sbindir = ${exec_prefix}/sbin
sharedstatedir = ${prefix}/com
sysconfdir = ${prefix}/etc
target_alias =
include_HEADERS = wn.h
SUBDIRS = tk
all: all-recursive
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu include/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu include/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-includeHEADERS: $(include_HEADERS)
@$(NORMAL_INSTALL)
test -z "$(includedir)" || $(mkdir_p) "$(DESTDIR)$(includedir)"
@list='$(include_HEADERS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(includeHEADERS_INSTALL) '$$d$$p' '$(DESTDIR)$(includedir)/$$f'"; \
$(includeHEADERS_INSTALL) "$$d$$p" "$(DESTDIR)$(includedir)/$$f"; \
done
uninstall-includeHEADERS:
@$(NORMAL_UNINSTALL)
@list='$(include_HEADERS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(includedir)/$$f'"; \
rm -f "$(DESTDIR)$(includedir)/$$f"; \
done
# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
# To change the values of `make' variables: instead of editing Makefiles,
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.
$(RECURSIVE_TARGETS):
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
dot_seen=yes; \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"
mostlyclean-recursive clean-recursive distclean-recursive \
maintainer-clean-recursive:
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -d "$(distdir)/$$subdir" \
|| $(mkdir_p) "$(distdir)/$$subdir" \
|| exit 1; \
distdir=`$(am__cd) $(distdir) && pwd`; \
top_distdir=`$(am__cd) $(top_distdir) && pwd`; \
(cd $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$top_distdir" \
distdir="$$distdir/$$subdir" \
distdir) \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-recursive
all-am: Makefile $(HEADERS)
installdirs: installdirs-recursive
installdirs-am:
for dir in "$(DESTDIR)$(includedir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-recursive
install-exec: install-exec-recursive
install-data: install-data-recursive
uninstall: uninstall-recursive
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive
clean-am: clean-generic mostlyclean-am
distclean: distclean-recursive
-rm -f Makefile
distclean-am: clean-am distclean-generic distclean-tags
dvi: dvi-recursive
dvi-am:
html: html-recursive
info: info-recursive
info-am:
install-data-am: install-includeHEADERS
install-exec-am:
install-info: install-info-recursive
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-recursive
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-recursive
mostlyclean-am: mostlyclean-generic
pdf: pdf-recursive
pdf-am:
ps: ps-recursive
ps-am:
uninstall-am: uninstall-includeHEADERS uninstall-info-am
uninstall-info: uninstall-info-recursive
.PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am check check-am \
clean clean-generic clean-recursive ctags ctags-recursive \
distclean distclean-generic distclean-recursive distclean-tags \
distdir dvi dvi-am html html-am info info-am install \
install-am install-data install-data-am install-exec \
install-exec-am install-includeHEADERS install-info \
install-info-am install-man install-strip installcheck \
installcheck-am installdirs installdirs-am maintainer-clean \
maintainer-clean-generic maintainer-clean-recursive \
mostlyclean mostlyclean-generic mostlyclean-recursive pdf \
pdf-am ps ps-am tags tags-recursive uninstall uninstall-am \
uninstall-includeHEADERS uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,2 +0,0 @@
include_HEADERS = wn.h
SUBDIRS = tk

View File

@ -1,459 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = include
DIST_COMMON = $(include_HEADERS) $(srcdir)/Makefile.am \
$(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-exec-recursive install-info-recursive \
install-recursive installcheck-recursive installdirs-recursive \
pdf-recursive ps-recursive uninstall-info-recursive \
uninstall-recursive
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(includedir)"
includeHEADERS_INSTALL = $(INSTALL_HEADER)
HEADERS = $(include_HEADERS)
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
include_HEADERS = wn.h
SUBDIRS = tk
all: all-recursive
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu include/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu include/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-includeHEADERS: $(include_HEADERS)
@$(NORMAL_INSTALL)
test -z "$(includedir)" || $(mkdir_p) "$(DESTDIR)$(includedir)"
@list='$(include_HEADERS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(includeHEADERS_INSTALL) '$$d$$p' '$(DESTDIR)$(includedir)/$$f'"; \
$(includeHEADERS_INSTALL) "$$d$$p" "$(DESTDIR)$(includedir)/$$f"; \
done
uninstall-includeHEADERS:
@$(NORMAL_UNINSTALL)
@list='$(include_HEADERS)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(includedir)/$$f'"; \
rm -f "$(DESTDIR)$(includedir)/$$f"; \
done
# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
# To change the values of `make' variables: instead of editing Makefiles,
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.
$(RECURSIVE_TARGETS):
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
dot_seen=yes; \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"
mostlyclean-recursive clean-recursive distclean-recursive \
maintainer-clean-recursive:
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -d "$(distdir)/$$subdir" \
|| $(mkdir_p) "$(distdir)/$$subdir" \
|| exit 1; \
distdir=`$(am__cd) $(distdir) && pwd`; \
top_distdir=`$(am__cd) $(top_distdir) && pwd`; \
(cd $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$top_distdir" \
distdir="$$distdir/$$subdir" \
distdir) \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-recursive
all-am: Makefile $(HEADERS)
installdirs: installdirs-recursive
installdirs-am:
for dir in "$(DESTDIR)$(includedir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-recursive
install-exec: install-exec-recursive
install-data: install-data-recursive
uninstall: uninstall-recursive
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive
clean-am: clean-generic mostlyclean-am
distclean: distclean-recursive
-rm -f Makefile
distclean-am: clean-am distclean-generic distclean-tags
dvi: dvi-recursive
dvi-am:
html: html-recursive
info: info-recursive
info-am:
install-data-am: install-includeHEADERS
install-exec-am:
install-info: install-info-recursive
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-recursive
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-recursive
mostlyclean-am: mostlyclean-generic
pdf: pdf-recursive
pdf-am:
ps: ps-recursive
ps-am:
uninstall-am: uninstall-includeHEADERS uninstall-info-am
uninstall-info: uninstall-info-recursive
.PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am check check-am \
clean clean-generic clean-recursive ctags ctags-recursive \
distclean distclean-generic distclean-recursive distclean-tags \
distdir dvi dvi-am html html-am info info-am install \
install-am install-data install-data-am install-exec \
install-exec-am install-includeHEADERS install-info \
install-info-am install-man install-strip installcheck \
installcheck-am installdirs installdirs-am maintainer-clean \
maintainer-clean-generic maintainer-clean-recursive \
mostlyclean mostlyclean-generic mostlyclean-recursive pdf \
pdf-am ps ps-am tags tags-recursive uninstall uninstall-am \
uninstall-includeHEADERS uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,314 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# include/tk/Makefile. Generated from Makefile.in by configure.
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
srcdir = .
top_srcdir = ../..
pkgdatadir = $(datadir)/WordNet
pkglibdir = $(libdir)/WordNet
pkgincludedir = $(includedir)/WordNet
top_builddir = ../..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = /usr/csl/bin/install -c
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = include/tk
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(tkdir)"
tkDATA_INSTALL = $(INSTALL_DATA)
DATA = $(tk_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run aclocal-1.9
AMDEP_FALSE = #
AMDEP_TRUE =
AMTAR = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run tar
AUTOCONF = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoconf
AUTOHEADER = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run autoheader
AUTOMAKE = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run automake-1.9
AWK = nawk
CC = gcc
CCDEPMODE = depmode=gcc3
CFLAGS = -g -O2
CPP = gcc -E
CPPFLAGS =
CYGPATH_W = echo
DEFS = -DHAVE_CONFIG_H
DEPDIR = .deps
ECHO_C =
ECHO_N = -n
ECHO_T =
EGREP = egrep
EXEEXT =
INSTALL_DATA = ${INSTALL} -m 644
INSTALL_PROGRAM = ${INSTALL}
INSTALL_SCRIPT = ${INSTALL}
INSTALL_STRIP_PROGRAM = ${SHELL} $(install_sh) -c -s
LDFLAGS =
LIBOBJS =
LIBS =
LTLIBOBJS =
MAKEINFO = ${SHELL} /people/wn/src/Release/3.0/Unix/missing --run makeinfo
OBJEXT = o
PACKAGE = WordNet
PACKAGE_BUGREPORT = wordnet@princeton.edu
PACKAGE_NAME = WordNet
PACKAGE_STRING = WordNet 3.0
PACKAGE_TARNAME = wordnet
PACKAGE_VERSION = 3.0
PATH_SEPARATOR = :
RANLIB = ranlib
SET_MAKE =
SHELL = /bin/bash
STRIP =
TCL_INCLUDE_SPEC = -I/usr/csl/include
TCL_LIB_SPEC = -L/usr/csl/lib -ltcl8.4
TK_LIBS = -L/usr/openwin/lib -lX11 -ldl -lpthread -lsocket -lnsl -lm
TK_LIB_SPEC = -L/usr/csl/lib -ltk8.4
TK_PREFIX = /usr/csl
TK_XINCLUDES = -I/usr/openwin/include
VERSION = 3.0
ac_ct_CC = gcc
ac_ct_RANLIB = ranlib
ac_ct_STRIP =
ac_prefix = /usr/local/WordNet-3.0
am__fastdepCC_FALSE = #
am__fastdepCC_TRUE =
am__include = include
am__leading_dot = .
am__quote =
am__tar = ${AMTAR} chof - "$$tardir"
am__untar = ${AMTAR} xf -
bindir = ${exec_prefix}/bin
build_alias =
datadir = ${prefix}/share
exec_prefix = ${prefix}
host_alias =
includedir = ${prefix}/include
infodir = ${prefix}/info
install_sh = /people/wn/src/Release/3.0/Unix/install-sh
libdir = ${exec_prefix}/lib
libexecdir = ${exec_prefix}/libexec
localstatedir = ${prefix}/var
mandir = ${prefix}/man
mkdir_p = $(install_sh) -d
oldincludedir = /usr/include
prefix = /usr/local/WordNet-3.0
program_transform_name = s,x,x,
sbindir = ${exec_prefix}/sbin
sharedstatedir = ${prefix}/com
sysconfdir = ${prefix}/etc
target_alias =
EXTRA_DIST = tk.h tkDecls.h
tkdir = $(prefix)/include/tk
tk_DATA = tk.h tkDecls.h
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu include/tk/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu include/tk/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-tkDATA: $(tk_DATA)
@$(NORMAL_INSTALL)
test -z "$(tkdir)" || $(mkdir_p) "$(DESTDIR)$(tkdir)"
@list='$(tk_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(tkDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(tkdir)/$$f'"; \
$(tkDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(tkdir)/$$f"; \
done
uninstall-tkDATA:
@$(NORMAL_UNINSTALL)
@list='$(tk_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(tkdir)/$$f'"; \
rm -f "$(DESTDIR)$(tkdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(DATA)
installdirs:
for dir in "$(DESTDIR)$(tkdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-tkDATA
install-exec-am:
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-info-am uninstall-tkDATA
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am install-exec \
install-exec-am install-info install-info-am install-man \
install-strip install-tkDATA installcheck installcheck-am \
installdirs maintainer-clean maintainer-clean-generic \
mostlyclean mostlyclean-generic pdf pdf-am ps ps-am uninstall \
uninstall-am uninstall-info-am uninstall-tkDATA
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,3 +0,0 @@
EXTRA_DIST = tk.h tkDecls.h
tkdir = $(prefix)/include/tk
tk_DATA = tk.h tkDecls.h

View File

@ -1,314 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ../..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = include/tk
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(tkdir)"
tkDATA_INSTALL = $(INSTALL_DATA)
DATA = $(tk_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
EXTRA_DIST = tk.h tkDecls.h
tkdir = $(prefix)/include/tk
tk_DATA = tk.h tkDecls.h
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu include/tk/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu include/tk/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-tkDATA: $(tk_DATA)
@$(NORMAL_INSTALL)
test -z "$(tkdir)" || $(mkdir_p) "$(DESTDIR)$(tkdir)"
@list='$(tk_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(tkDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(tkdir)/$$f'"; \
$(tkDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(tkdir)/$$f"; \
done
uninstall-tkDATA:
@$(NORMAL_UNINSTALL)
@list='$(tk_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(tkdir)/$$f'"; \
rm -f "$(DESTDIR)$(tkdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(DATA)
installdirs:
for dir in "$(DESTDIR)$(tkdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-tkDATA
install-exec-am:
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-info-am uninstall-tkDATA
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am install-exec \
install-exec-am install-info install-info-am install-man \
install-strip install-tkDATA installcheck installcheck-am \
installdirs maintainer-clean maintainer-clean-generic \
mostlyclean mostlyclean-generic pdf pdf-am ps ps-am uninstall \
uninstall-am uninstall-info-am uninstall-tkDATA
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@ -1,523 +0,0 @@
/*
wn.h - header file needed to use WordNet Run Time Library
$Id: wn.h,v 1.61 2006/11/14 20:58:30 wn Exp $
*/
#ifndef _WN_
#define _WN_
#include <stdio.h>
/* Platform specific path and filename specifications */
#ifdef _WINDOWS
#define DICTDIR "\\dict"
#ifndef DEFAULTPATH
#define DEFAULTPATH "C:\\Program Files\\WordNet\\3.0\\dict"
#endif
#define DATAFILE "%s\\data.%s"
#define INDEXFILE "%s\\index.%s"
#define SENSEIDXFILE "%s\\index.sense"
#define KEYIDXFILE "%s\\index.key"
#define REVKEYIDXFILE "%s\\index.key.rev"
#define VRBSENTFILE "%s\\sents.vrb"
#define VRBIDXFILE "%s\\sentidx.vrb"
#define CNTLISTFILE "%s\\cntlist.rev"
#else
#define DICTDIR "/dict"
#ifndef DEFAULTPATH
#define DEFAULTPATH "/usr/local/WordNet-3.0/dict"
#endif
#define DATAFILE "%s/data.%s"
#define INDEXFILE "%s/index.%s"
#define SENSEIDXFILE "%s/index.sense"
#define KEYIDXFILE "%s/index.key"
#define REVKEYIDXFILE "%s/index.key.rev"
#define VRBSENTFILE "%s/sents.vrb"
#define VRBIDXFILE "%s/sentidx.vrb"
#define CNTLISTFILE "%s/cntlist.rev"
#endif
/* Various buffer sizes */
#define SEARCHBUF ((long)(200*(long)1024))
#define LINEBUF (15*1024) /* 15K buffer to read index & data files */
#define SMLINEBUF (3*1024) /* small buffer for output lines */
#define WORDBUF (256) /* buffer for one word or collocation */
#define ALLSENSES 0 /* pass to findtheinfo() if want all senses */
#define MAXID 15 /* maximum id number in lexicographer file */
#define MAXDEPTH 20 /* maximum tree depth - used to find cycles */
#define MAXSENSE 75 /* maximum number of senses in database */
#define MAX_FORMS 5 /* max # of different 'forms' word can have */
#define MAXFNUM 44 /* maximum number of lexicographer files */
/* Pointer type and search type counts */
/* Pointers */
#define ANTPTR 1 /* ! */
#define HYPERPTR 2 /* @ */
#define HYPOPTR 3 /* ~ */
#define ENTAILPTR 4 /* * */
#define SIMPTR 5 /* & */
#define ISMEMBERPTR 6 /* #m */
#define ISSTUFFPTR 7 /* #s */
#define ISPARTPTR 8 /* #p */
#define HASMEMBERPTR 9 /* %m */
#define HASSTUFFPTR 10 /* %s */
#define HASPARTPTR 11 /* %p */
#define MERONYM 12 /* % (not valid in lexicographer file) */
#define HOLONYM 13 /* # (not valid in lexicographer file) */
#define CAUSETO 14 /* > */
#define PPLPTR 15 /* < */
#define SEEALSOPTR 16 /* ^ */
#define PERTPTR 17 /* \ */
#define ATTRIBUTE 18 /* = */
#define VERBGROUP 19 /* $ */
#define DERIVATION 20 /* + */
#define CLASSIFICATION 21 /* ; */
#define CLASS 22 /* - */
#define LASTTYPE CLASS
/* Misc searches */
#define SYNS (LASTTYPE + 1)
#define FREQ (LASTTYPE + 2)
#define FRAMES (LASTTYPE + 3)
#define COORDS (LASTTYPE + 4)
#define RELATIVES (LASTTYPE + 5)
#define HMERONYM (LASTTYPE + 6)
#define HHOLONYM (LASTTYPE + 7)
#define WNGREP (LASTTYPE + 8)
#define OVERVIEW (LASTTYPE + 9)
#define MAXSEARCH OVERVIEW
#define CLASSIF_START (MAXSEARCH + 1)
#define CLASSIF_CATEGORY (CLASSIF_START) /* ;c */
#define CLASSIF_USAGE (CLASSIF_START + 1) /* ;u */
#define CLASSIF_REGIONAL (CLASSIF_START + 2) /* ;r */
#define CLASSIF_END CLASSIF_REGIONAL
#define CLASS_START (CLASSIF_END + 1)
#define CLASS_CATEGORY (CLASS_START) /* -c */
#define CLASS_USAGE (CLASS_START + 1) /* -u */
#define CLASS_REGIONAL (CLASS_START + 2) /* -r */
#define CLASS_END CLASS_REGIONAL
#define INSTANCE (CLASS_END + 1) /* @i */
#define INSTANCES (CLASS_END + 2) /* ~i */
#define MAXPTR INSTANCES
/* WordNet part of speech stuff */
#define NUMPARTS 4 /* number of parts of speech */
#define NUMFRAMES 35 /* number of verb frames */
/* Generic names for part of speech */
#define NOUN 1
#define VERB 2
#define ADJ 3
#define ADV 4
#define SATELLITE 5 /* not really a part of speech */
#define ADJSAT SATELLITE
#define ALL_POS 0 /* passed to in_wn() to check all POS */
#define bit(n) ((unsigned int)((unsigned int)1<<((unsigned int)n)))
/* Adjective markers */
#define PADJ 1 /* (p) */
#define NPADJ 2 /* (a) */
#define IPADJ 3 /* (ip) */
#define UNKNOWN_MARKER 0
#define ATTRIBUTIVE NPADJ
#define PREDICATIVE PADJ
#define IMMED_POSTNOMINAL IPADJ
extern char *wnrelease; /* WordNet release/version number */
extern char *lexfiles[]; /* names of lexicographer files */
extern char *ptrtyp[]; /* pointer characters */
extern char *partnames[]; /* POS strings */
extern char partchars[]; /* single chars for each POS */
extern char *adjclass[]; /* adjective class strings */
extern char *frametext[]; /* text of verb frames */
/* Data structures used by search code functions. */
/* Structure for index file entry */
typedef struct {
long idxoffset; /* byte offset of entry in index file */
char *wd; /* word string */
char *pos; /* part of speech */
int sense_cnt; /* sense (collins) count */
int off_cnt; /* number of offsets */
int tagged_cnt; /* number of senses that are tagged */
unsigned long *offset; /* offsets of synsets containing word */
int ptruse_cnt; /* number of pointers used */
int *ptruse; /* pointers used */
} Index;
typedef Index *IndexPtr;
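For illustration only (not part of the original header): a minimal sketch of reading an Index entry through index_lookup(), which is declared further down in this file. The example word "dog" and the 0-on-success convention assumed for wninit() are not stated above.
#include <stdio.h>
#include "wn.h"

int main(void)
{
    IndexPtr idx;
    int i;

    if (wninit() != 0)                 /* open database files; 0 assumed on success */
        return 1;
    idx = index_lookup("dog", NOUN);   /* exact-match lookup in the noun index */
    if (idx != NULL) {
        printf("%s: %d synset offsets\n", idx->wd, idx->off_cnt);
        for (i = 0; i < idx->off_cnt; i++)
            printf("  %lu\n", idx->offset[i]);
        free_index(idx);               /* release the parsed entry */
    }
    return 0;
}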
/* Structure for data file synset */
typedef struct ss {
long hereiam; /* current file position */
int sstype; /* type of ADJ synset */
int fnum; /* file number that synset comes from */
char *pos; /* part of speech */
int wcount; /* number of words in synset */
char **words; /* words in synset */
int *lexid; /* unique id in lexicographer file */
int *wnsns; /* sense number in wordnet */
int whichword; /* which word in synset we're looking for */
int ptrcount; /* number of pointers */
int *ptrtyp; /* pointer types */
long *ptroff; /* pointer offsets */
int *ppos; /* pointer part of speech */
int *pto; /* pointer 'to' fields */
int *pfrm; /* pointer 'from' fields */
int fcount; /* number of verb frames */
int *frmid; /* frame numbers */
int *frmto; /* frame 'to' fields */
char *defn; /* synset gloss (definition) */
unsigned int key; /* unique synset key */
/* these fields are used if a data structure is returned
instead of a text buffer */
struct ss *nextss; /* ptr to next synset containing searchword */
struct ss *nextform; /* ptr to list of synsets for alternate
spelling of wordform */
int searchtype; /* type of search performed */
struct ss *ptrlist; /* ptr to synset list result of search */
char *headword; /* if pos is "s", this is cluster head word */
short headsense; /* sense number of headword */
} Synset;
typedef Synset *SynsetPtr;
typedef struct si {
char *sensekey; /* sense key */
char *word; /* word string */
long loc; /* synset offset */
int wnsense; /* WordNet sense number */
int tag_cnt; /* number of semantic tags to sense */
struct si *nextsi; /* ptr to next sense index entry */
} SnsIndex;
typedef SnsIndex *SnsIndexPtr;
typedef struct {
int SenseCount[MAX_FORMS]; /* number of senses word form has */
int OutSenseCount[MAX_FORMS]; /* number of senses printed for word form */
int numforms; /* number of word forms searchword has */
int printcnt; /* number of senses printed by search */
char *searchbuf; /* buffer containing formatted results */
SynsetPtr searchds; /* data structure containing search results */
} SearchResults;
typedef SearchResults *SearchResultsPtr;
/* Global variables and flags */
extern SearchResults wnresults; /* structure containing results of search */
extern int fnflag; /* if set, print lex filename after sense */
extern int dflag; /* if set, print definitional glosses */
extern int saflag; /* if set, print SEE ALSO pointers */
extern int fileinfoflag; /* if set, print lex file info on synsets */
extern int frflag; /* if set, print verb frames after synset */
extern int abortsearch; /* if set, stop search algorithm */
extern int offsetflag; /* if set, print byte offset of each synset */
extern int wnsnsflag; /* if set, print WN sense # for each word */
/* File pointers for database files */
extern int OpenDB; /* if non-zero, database files are open */
extern FILE *datafps[NUMPARTS + 1],
*indexfps[NUMPARTS + 1],
*sensefp,
*cntlistfp,
*keyindexfp, *revkeyindexfp,
*vidxfilefp, *vsentfilefp;
/* Method for interface to check for events while search is running */
extern void (*interface_doevents_func)(void);
/* callback for interruptible searches in */
/* single-threaded interfaces */
/* General error message handler - can be defined by interface.
Default function provided in library returns -1 */
extern int default_display_message(char *);
extern int (*display_message)(char *);
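For illustration only (not part of the original header): a minimal sketch of installing a custom error handler through the display_message pointer declared above. The 0-on-success convention for wninit() is an assumption.
#include <stdio.h>
#include "wn.h"

/* Route library messages to stderr instead of the default handler,
   which simply returns -1. */
static int log_message(char *msg)
{
    fprintf(stderr, "wordnet: %s\n", msg);
    return 0;
}

int main(void)
{
    display_message = log_message;     /* install the custom handler */
    if (wninit() != 0)                 /* open errors now go through log_message */
        return 1;
    return 0;
}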
/* Make all the functions compatible with c++ files */
#ifdef __cplusplus
extern "C" {
#endif
/* External library function prototypes */
/*** Search and database functions (search.c) ***/
/* Primary search algorithm for use with user interfaces */
extern char *findtheinfo(char *, int, int, int);
/* Primary search algorithm for use with programs (returns data structure) */
extern SynsetPtr findtheinfo_ds(char *, int, int, int);
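For illustration only (not part of the original header): a hedged sketch of driving the two search entry points. The argument order (word, POS, pointer/search type, sense number) and the example word "dog" are assumptions; the constants and free_syns() come from this header.
#include <stdio.h>
#include "wn.h"

int main(void)
{
    char *out;
    SynsetPtr syns;

    if (wninit() != 0)
        return 1;

    /* Formatted text buffer: hypernyms of the noun "dog", all senses. */
    out = findtheinfo("dog", NOUN, HYPERPTR, ALLSENSES);
    if (out != NULL)
        printf("%s", out);

    /* Same search, returned as a linked list of Synset structures. */
    syns = findtheinfo_ds("dog", NOUN, HYPERPTR, ALLSENSES);
    if (syns != NULL) {
        printf("first gloss: %s\n", syns->defn);
        free_syns(syns);               /* free the list allocated by findtheinfo_ds() */
    }
    return 0;
}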
/* Set bit for each search type that is valid for the search word
passed and return bit mask. */
extern unsigned int is_defined(char *, int);
/* Set bit for each POS that search word is in. 0 returned if
word is not in WordNet. */
extern unsigned int in_wn(char *, int);
/* Find word in index file and return parsed entry in data structure.
Input word must be exact match of string in database. */
extern IndexPtr index_lookup(char *, int);
/* 'smart' search of index file. Find word in index file, trying different
techniques - replace hyphens with underscores, replace underscores with
hyphens, strip hyphens and underscores, strip periods. */
extern IndexPtr getindex(char *, int);
extern IndexPtr parse_index(long, int, char *);
/* Read synset from data file at byte offset passed and return parsed
entry in data structure. */
extern SynsetPtr read_synset(int, long, char *);
/* Read synset at current byte offset in file and return parsed entry
in data structure. */
extern SynsetPtr parse_synset(FILE *, int, char *);
/* Free a synset linked list allocated by findtheinfo_ds() */
extern void free_syns(SynsetPtr);
/* Free a synset */
extern void free_synset(SynsetPtr);
/* Free an index structure */
extern void free_index(IndexPtr);
/* Recursive search algorithm to trace a pointer tree and return results
in linked list of data structures. */
SynsetPtr traceptrs_ds(SynsetPtr, int, int, int);
/* Do requested search on synset passed, returning output in buffer. */
extern char *do_trace(SynsetPtr, int, int, int);
/*** Morphology functions (morph.c) ***/
/* Open exception list files */
extern int morphinit();
/* Close exception list files and reopen */
extern int re_morphinit();
/* Try to find baseform (lemma) of word or collocation in POS. */
extern char *morphstr(char *, int);
/* Try to find baseform (lemma) of individual word in POS. */
extern char *morphword(char *, int);
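For illustration only (not part of the original header): a hedged sketch of the morphology calls. The convention of passing NULL to morphstr() on follow-up calls, the example words, and the 0-on-success return of morphinit() are assumptions.
#include <stdio.h>
#include "wn.h"

int main(void)
{
    char *base;

    if (morphinit() != 0)              /* open the exception list files */
        return 1;

    base = morphword("geese", NOUN);   /* single word -> one base form */
    if (base != NULL)
        printf("morphword: %s\n", base);

    /* Word or collocation: first call with the string, then NULL
       until no further base forms remain (assumed convention). */
    for (base = morphstr("axes", NOUN); base != NULL; base = morphstr(NULL, NOUN))
        printf("morphstr: %s\n", base);
    return 0;
}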
/*** Utility functions (wnutil.c) ***/
/* Top level function to open database files, initialize wn_filenames,
and open exception lists. */
extern int wninit();
/* Top level function to close and reopen database files, initialize
wn_filenames and open exception lists. */
extern int re_wninit();
/* Count the number of underscore or space separated words in a string. */
extern int cntwords(char *, char);
/* Convert string to lower case and remove trailing adjective marker if found */
extern char *strtolower(char *);
/* Convert string passed to lower case */
extern char *ToLowerCase(char *);
/* Replace all occurrences of 'from' with 'to' in 'str' */
extern char *strsubst(char *, char, char);
/* Return pointer code for pointer type character passed. */
extern int getptrtype(char *);
/* Return part of speech code for string passed */
extern int getpos(char *);
/* Return synset type code for string passed. */
extern int getsstype(char *);
/* Reconstruct synset from synset pointer and return ptr to buffer */
extern char *FmtSynset(SynsetPtr, int);
/* Find string for 'searchstr' as it is in index file */
extern char *GetWNStr(char *, int);
/* Pass in string for POS, return corresponding integer value */
extern int StrToPos(char *);
/* Return synset for sense key passed. */
extern SynsetPtr GetSynsetForSense(char *);
/* Find offset of sense key in data file */
extern long GetDataOffset(char *);
/* Find polysemy (collins) count for sense key passed. */
extern int GetPolyCount(char *);
/* Return word part of sense key */
extern char *GetWORD(char *);
/* Return POS code for sense key passed. */
extern int GetPOS(char *);
/* Convert WordNet sense number passed of IndexPtr entry to sense key. */
extern char *WNSnsToStr(IndexPtr, int);
/* Search for string and/or baseform of word in database and return
index structure for word if found in database. */
extern IndexPtr GetValidIndexPointer(char *, int);
/* Return sense number in database for word and lexsn passed. */
int GetWNSense(char *, char *);
SnsIndexPtr GetSenseIndex(char *);
char *GetOffsetForKey(unsigned int);
unsigned int GetKeyForOffset(char *);
char *SetSearchdir();
/* Return number of times sense is tagged */
int GetTagcnt(IndexPtr, int);
/*
** Wrapper functions for strstr that allow you to retrieve each
** occurrence of a word within a longer string, not just the first.
**
** strstr_init is called with the same arguments as normal strstr,
** but does not return any value.
**
** strstr_getnext returns the position offset (not a pointer, as does
** normal strstr) of the next occurrence, or -1 if none remain.
*/
extern void strstr_init (char *, char *);
extern int strstr_getnext (void);
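For illustration only (not part of the original header): a tiny usage sketch of the wrapper pair described in the comment above.
#include <stdio.h>
#include "wn.h"

int main(void)
{
    int pos;

    strstr_init("the cat and the dog", "the");   /* same arguments as strstr */
    while ((pos = strstr_getnext()) != -1)       /* -1 once no matches remain */
        printf("match at offset %d\n", pos);
    return 0;
}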
/*** Binary search functions (binsearch.c) ***/
/* General purpose binary search function to search for key as first
item on line in open file. Item is delimited by space. */
extern char *bin_search(char *, FILE *);
extern char *read_index(long, FILE *);
/* Copy contents from one file to another. */
extern void copyfile(FILE *, FILE *);
/* Function to replace a line in a file. Returns the original line,
or NULL in case of error. */
extern char *replace_line(char *, char *, FILE *);
/* Find location to insert line at in file. If line with this
key is already in file, return NULL. */
extern char *insert_line(char *, char *, FILE *);
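For illustration only (not part of the original header): a hedged sketch of bin_search() against one of the open index files declared earlier in this file. The key format (the bare search word) and the example word are assumptions.
#include <stdio.h>
#include "wn.h"

int main(void)
{
    char *line;

    if (wninit() != 0)
        return 1;
    /* Key is the first space-delimited item on each line of the index file. */
    line = bin_search("dog", indexfps[NOUN]);
    if (line != NULL)
        printf("%s", line);
    return 0;
}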
#ifdef __cplusplus
}
#endif
extern char **helptext[NUMPARTS + 1];
static char *license = "\
This software and database is being provided to you, the LICENSEE, by \n\
Princeton University under the following license. By obtaining, using \n\
and/or copying this software and database, you agree that you have \n\
read, understood, and will comply with these terms and conditions.: \n\
\n\
Permission to use, copy, modify and distribute this software and \n\
database and its documentation for any purpose and without fee or \n\
royalty is hereby granted, provided that you agree to comply with \n\
the following copyright notice and statements, including the disclaimer, \n\
and that the same appear on ALL copies of the software, database and \n\
documentation, including modifications that you make for internal \n\
use or for distribution. \n\
\n\
WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved. \n\
\n\
THIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \n\
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \n\
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \n\
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \n\
ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \n\
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \n\
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \n\
OTHER RIGHTS. \n\
\n\
The name of Princeton University or Princeton may not be used in \n\
advertising or publicity pertaining to distribution of the software \n\
and/or database. Title to copyright in this software, database and \n\
any associated documentation shall at all times remain with \n\
Princeton University and LICENSEE agrees to preserve same. \n"
;
static char dblicense[] = "\
1 This software and database is being provided to you, the LICENSEE, by \n\
2 Princeton University under the following license. By obtaining, using \n\
3 and/or copying this software and database, you agree that you have \n\
4 read, understood, and will comply with these terms and conditions.: \n\
5 \n\
6 Permission to use, copy, modify and distribute this software and \n\
7 database and its documentation for any purpose and without fee or \n\
8 royalty is hereby granted, provided that you agree to comply with \n\
9 the following copyright notice and statements, including the disclaimer, \n\
10 and that the same appear on ALL copies of the software, database and \n\
11 documentation, including modifications that you make for internal \n\
12 use or for distribution. \n\
13 \n\
14 WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved. \n\
15 \n\
16 THIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \n\
17 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \n\
18 IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \n\
19 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \n\
20 ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \n\
21 OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \n\
22 INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \n\
23 OTHER RIGHTS. \n\
24 \n\
25 The name of Princeton University or Princeton may not be used in \n\
26 advertising or publicity pertaining to distribution of the software \n\
27 and/or database. Title to copyright in this software, database and \n\
28 any associated documentation shall at all times remain with \n\
29 Princeton University and LICENSEE agrees to preserve same. \n"
;
#define DBLICENSE_SIZE (sizeof(dblicense))
#endif /*_WN_*/

View File

@ -1,519 +0,0 @@
/*
wn.h - header file needed to use WordNet Run Time Library
$Id: wn.h,v 1.61 2006/11/14 20:58:30 wn Exp $
*/
#ifndef _WN_
#define _WN_
#include <stdio.h>
/* Platform specific path and filename specifications */
#ifdef _WINDOWS
#define DICTDIR "\\dict"
#define DEFAULTPATH "C:\\Program Files\\WordNet\\3.0\\dict"
#define DATAFILE "%s\\data.%s"
#define INDEXFILE "%s\\index.%s"
#define SENSEIDXFILE "%s\\index.sense"
#define KEYIDXFILE "%s\\index.key"
#define REVKEYIDXFILE "%s\\index.key.rev"
#define VRBSENTFILE "%s\\sents.vrb"
#define VRBIDXFILE "%s\\sentidx.vrb"
#define CNTLISTFILE "%s\\cntlist.rev"
#else
#define DICTDIR "/dict"
#define DEFAULTPATH "/usr/local/WordNet-3.0/dict"
#define DATAFILE "%s/data.%s"
#define INDEXFILE "%s/index.%s"
#define SENSEIDXFILE "%s/index.sense"
#define KEYIDXFILE "%s/index.key"
#define REVKEYIDXFILE "%s/index.key.rev"
#define VRBSENTFILE "%s/sents.vrb"
#define VRBIDXFILE "%s/sentidx.vrb"
#define CNTLISTFILE "%s/cntlist.rev"
#endif
/* Various buffer sizes */
#define SEARCHBUF ((long)(200*(long)1024))
#define LINEBUF (15*1024) /* 15K buffer to read index & data files */
#define SMLINEBUF (3*1024) /* small buffer for output lines */
#define WORDBUF (256) /* buffer for one word or collocation */
#define ALLSENSES 0 /* pass to findtheinfo() if want all senses */
#define MAXID 15 /* maximum id number in lexicographer file */
#define MAXDEPTH 20 /* maximum tree depth - used to find cycles */
#define MAXSENSE 75 /* maximum number of senses in database */
#define MAX_FORMS 5 /* max # of different 'forms' word can have */
#define MAXFNUM 44 /* maximum number of lexicographer files */
/* Pointer type and search type counts */
/* Pointers */
#define ANTPTR 1 /* ! */
#define HYPERPTR 2 /* @ */
#define HYPOPTR 3 /* ~ */
#define ENTAILPTR 4 /* * */
#define SIMPTR 5 /* & */
#define ISMEMBERPTR 6 /* #m */
#define ISSTUFFPTR 7 /* #s */
#define ISPARTPTR 8 /* #p */
#define HASMEMBERPTR 9 /* %m */
#define HASSTUFFPTR 10 /* %s */
#define HASPARTPTR 11 /* %p */
#define MERONYM 12 /* % (not valid in lexicographer file) */
#define HOLONYM 13 /* # (not valid in lexicographer file) */
#define CAUSETO 14 /* > */
#define PPLPTR 15 /* < */
#define SEEALSOPTR 16 /* ^ */
#define PERTPTR 17 /* \ */
#define ATTRIBUTE 18 /* = */
#define VERBGROUP 19 /* $ */
#define DERIVATION 20 /* + */
#define CLASSIFICATION 21 /* ; */
#define CLASS 22 /* - */
#define LASTTYPE CLASS
/* Misc searches */
#define SYNS (LASTTYPE + 1)
#define FREQ (LASTTYPE + 2)
#define FRAMES (LASTTYPE + 3)
#define COORDS (LASTTYPE + 4)
#define RELATIVES (LASTTYPE + 5)
#define HMERONYM (LASTTYPE + 6)
#define HHOLONYM (LASTTYPE + 7)
#define WNGREP (LASTTYPE + 8)
#define OVERVIEW (LASTTYPE + 9)
#define MAXSEARCH OVERVIEW
#define CLASSIF_START (MAXSEARCH + 1)
#define CLASSIF_CATEGORY (CLASSIF_START) /* ;c */
#define CLASSIF_USAGE (CLASSIF_START + 1) /* ;u */
#define CLASSIF_REGIONAL (CLASSIF_START + 2) /* ;r */
#define CLASSIF_END CLASSIF_REGIONAL
#define CLASS_START (CLASSIF_END + 1)
#define CLASS_CATEGORY (CLASS_START) /* -c */
#define CLASS_USAGE (CLASS_START + 1) /* -u */
#define CLASS_REGIONAL (CLASS_START + 2) /* -r */
#define CLASS_END CLASS_REGIONAL
#define INSTANCE (CLASS_END + 1) /* @i */
#define INSTANCES (CLASS_END + 2) /* ~i */
#define MAXPTR INSTANCES
/* WordNet part of speech stuff */
#define NUMPARTS 4 /* number of parts of speech */
#define NUMFRAMES 35 /* number of verb frames */
/* Generic names for part of speech */
#define NOUN 1
#define VERB 2
#define ADJ 3
#define ADV 4
#define SATELLITE 5 /* not really a part of speech */
#define ADJSAT SATELLITE
#define ALL_POS 0 /* passed to in_wn() to check all POS */
#define bit(n) ((unsigned int)((unsigned int)1<<((unsigned int)n)))
/* Adjective markers */
#define PADJ 1 /* (p) */
#define NPADJ 2 /* (a) */
#define IPADJ 3 /* (ip) */
#define UNKNOWN_MARKER 0
#define ATTRIBUTIVE NPADJ
#define PREDICATIVE PADJ
#define IMMED_POSTNOMINAL IPADJ
extern char *wnrelease; /* WordNet release/version number */
extern char *lexfiles[]; /* names of lexicographer files */
extern char *ptrtyp[]; /* pointer characters */
extern char *partnames[]; /* POS strings */
extern char partchars[]; /* single chars for each POS */
extern char *adjclass[]; /* adjective class strings */
extern char *frametext[]; /* text of verb frames */
/* Data structures used by search code functions. */
/* Structure for index file entry */
typedef struct {
long idxoffset; /* byte offset of entry in index file */
char *wd; /* word string */
char *pos; /* part of speech */
int sense_cnt; /* sense (collins) count */
int off_cnt; /* number of offsets */
int tagged_cnt; /* number of senses that are tagged */
unsigned long *offset; /* offsets of synsets containing word */
int ptruse_cnt; /* number of pointers used */
int *ptruse; /* pointers used */
} Index;
typedef Index *IndexPtr;
/* Structure for data file synset */
typedef struct ss {
long hereiam; /* current file position */
int sstype; /* type of ADJ synset */
int fnum; /* file number that synset comes from */
char *pos; /* part of speech */
int wcount; /* number of words in synset */
char **words; /* words in synset */
int *lexid; /* unique id in lexicographer file */
int *wnsns; /* sense number in wordnet */
int whichword; /* which word in synset we're looking for */
int ptrcount; /* number of pointers */
int *ptrtyp; /* pointer types */
long *ptroff; /* pointer offsets */
int *ppos; /* pointer part of speech */
int *pto; /* pointer 'to' fields */
int *pfrm; /* pointer 'from' fields */
int fcount; /* number of verb frames */
int *frmid; /* frame numbers */
int *frmto; /* frame 'to' fields */
char *defn; /* synset gloss (definition) */
unsigned int key; /* unique synset key */
/* these fields are used if a data structure is returned
instead of a text buffer */
struct ss *nextss; /* ptr to next synset containing searchword */
struct ss *nextform; /* ptr to list of synsets for alternate
spelling of wordform */
int searchtype; /* type of search performed */
struct ss *ptrlist; /* ptr to synset list result of search */
char *headword; /* if pos is "s", this is cluster head word */
short headsense; /* sense number of headword */
} Synset;
typedef Synset *SynsetPtr;
typedef struct si {
char *sensekey; /* sense key */
char *word; /* word string */
long loc; /* synset offset */
int wnsense; /* WordNet sense number */
int tag_cnt; /* number of semantic tags to sense */
struct si *nextsi; /* ptr to next sense index entry */
} SnsIndex;
typedef SnsIndex *SnsIndexPtr;
typedef struct {
int SenseCount[MAX_FORMS]; /* number of senses word form has */
int OutSenseCount[MAX_FORMS]; /* number of senses printed for word form */
int numforms; /* number of word forms searchword has */
int printcnt; /* number of senses printed by search */
char *searchbuf; /* buffer containing formatted results */
SynsetPtr searchds; /* data structure containing search results */
} SearchResults;
typedef SearchResults *SearchResultsPtr;
/* Global variables and flags */
extern SearchResults wnresults; /* structure containing results of search */
extern int fnflag; /* if set, print lex filename after sense */
extern int dflag; /* if set, print definitional glosses */
extern int saflag; /* if set, print SEE ALSO pointers */
extern int fileinfoflag; /* if set, print lex file info on synsets */
extern int frflag; /* if set, print verb frames after synset */
extern int abortsearch; /* if set, stop search algorithm */
extern int offsetflag; /* if set, print byte offset of each synset */
extern int wnsnsflag; /* if set, print WN sense # for each word */
/* File pointers for database files */
extern int OpenDB; /* if non-zero, database files are open */
extern FILE *datafps[NUMPARTS + 1],
*indexfps[NUMPARTS + 1],
*sensefp,
*cntlistfp,
*keyindexfp, *revkeyindexfp,
*vidxfilefp, *vsentfilefp;
/* Method for interface to check for events while search is running */
extern void (*interface_doevents_func)(void);
/* callback for interruptible searches in */
/* single-threaded interfaces */
/* General error message handler - can be defined by interface.
Default function provided in library returns -1 */
extern int default_display_message(char *);
extern int (*display_message)(char *);
/* Make all the functions compatible with c++ files */
#ifdef __cplusplus
extern "C" {
#endif
/* External library function prototypes */
/*** Search and database functions (search.c) ***/
/* Primary search algorithm for use with user interfaces */
extern char *findtheinfo(char *, int, int, int);
/* Primary search algorithm for use with programs (returns data structure) */
extern SynsetPtr findtheinfo_ds(char *, int, int, int);
/* Set bit for each search type that is valid for the search word
passed and return bit mask. */
extern unsigned int is_defined(char *, int);
/* Set bit for each POS that search word is in. 0 returned if
word is not in WordNet. */
extern unsigned int in_wn(char *, int);
/* Find word in index file and return parsed entry in data structure.
Input word must be exact match of string in database. */
extern IndexPtr index_lookup(char *, int);
/* 'smart' search of index file. Find word in index file, trying different
techniques - replace hyphens with underscores, replace underscores with
hyphens, strip hyphens and underscores, strip periods. */
extern IndexPtr getindex(char *, int);
extern IndexPtr parse_index(long, int, char *);
/* Read synset from data file at byte offset passed and return parsed
entry in data structure. */
extern SynsetPtr read_synset(int, long, char *);
/* Read synset at current byte offset in file and return parsed entry
in data structure. */
extern SynsetPtr parse_synset(FILE *, int, char *);
/* Free a synset linked list allocated by findtheinfo_ds() */
extern void free_syns(SynsetPtr);
/* Free a synset */
extern void free_synset(SynsetPtr);
/* Free an index structure */
extern void free_index(IndexPtr);
/* Recursive search algorithm to trace a pointer tree and return results
in linked list of data structures. */
SynsetPtr traceptrs_ds(SynsetPtr, int, int, int);
/* Do requested search on synset passed, returning output in buffer. */
extern char *do_trace(SynsetPtr, int, int, int);
/*** Morphology functions (morph.c) ***/
/* Open exception list files */
extern int morphinit();
/* Close exception list files and reopen */
extern int re_morphinit();
/* Try to find baseform (lemma) of word or collocation in POS. */
extern char *morphstr(char *, int);
/* Try to find baseform (lemma) of individual word in POS. */
extern char *morphword(char *, int);
/*** Utility functions (wnutil.c) ***/
/* Top level function to open database files, initialize wn_filenames,
and open exception lists. */
extern int wninit();
/* Top level function to close and reopen database files, initialize
wn_filenames and open exception lists. */
extern int re_wninit();
/* Count the number of underscore or space separated words in a string. */
extern int cntwords(char *, char);
/* Convert string to lower case and remove trailing adjective marker if found */
extern char *strtolower(char *);
/* Convert string passed to lower case */
extern char *ToLowerCase(char *);
/* Replace all occurrences of 'from' with 'to' in 'str' */
extern char *strsubst(char *, char, char);
/* Return pointer code for pointer type character passed. */
extern int getptrtype(char *);
/* Return part of speech code for string passed */
extern int getpos(char *);
/* Return synset type code for string passed. */
extern int getsstype(char *);
/* Reconstruct synset from synset pointer and return ptr to buffer */
extern char *FmtSynset(SynsetPtr, int);
/* Find string for 'searchstr' as it is in index file */
extern char *GetWNStr(char *, int);
/* Pass in string for POS, return corresponding integer value */
extern int StrToPos(char *);
/* Return synset for sense key passed. */
extern SynsetPtr GetSynsetForSense(char *);
/* Find offset of sense key in data file */
extern long GetDataOffset(char *);
/* Find polysemy (collins) count for sense key passed. */
extern int GetPolyCount(char *);
/* Return word part of sense key */
extern char *GetWORD(char *);
/* Return POS code for sense key passed. */
extern int GetPOS(char *);
/* Convert WordNet sense number passed of IndexPtr entry to sense key. */
extern char *WNSnsToStr(IndexPtr, int);
/* Search for string and/or baseform of word in database and return
index structure for word if found in database. */
extern IndexPtr GetValidIndexPointer(char *, int);
/* Return sense number in database for word and lexsn passed. */
int GetWNSense(char *, char *);
SnsIndexPtr GetSenseIndex(char *);
char *GetOffsetForKey(unsigned int);
unsigned int GetKeyForOffset(char *);
char *SetSearchdir();
/* Return number of times sense is tagged */
int GetTagcnt(IndexPtr, int);
/*
** Wrapper functions for strstr that allow you to retrieve each
** occurrence of a word within a longer string, not just the first.
**
** strstr_init is called with the same arguments as normal strstr,
** but does not return any value.
**
** strstr_getnext returns the position offset (not a pointer, as does
** normal strstr) of the next occurrence, or -1 if none remain.
*/
extern void strstr_init (char *, char *);
extern int strstr_getnext (void);
/*** Binary search functions (binsearch.c) ***/
/* General purpose binary search function to search for key as first
item on line in open file. Item is delimited by space. */
extern char *bin_search(char *, FILE *);
extern char *read_index(long, FILE *);
/* Copy contents from one file to another. */
extern void copyfile(FILE *, FILE *);
/* Function to replace a line in a file. Returns the original line,
or NULL in case of error. */
extern char *replace_line(char *, char *, FILE *);
/* Find location to insert line at in file. If line with this
key is already in file, return NULL. */
extern char *insert_line(char *, char *, FILE *);
#ifdef __cplusplus
}
#endif
extern char **helptext[NUMPARTS + 1];
static char *license = "\
This software and database is being provided to you, the LICENSEE, by \n\
Princeton University under the following license. By obtaining, using \n\
and/or copying this software and database, you agree that you have \n\
read, understood, and will comply with these terms and conditions.: \n\
\n\
Permission to use, copy, modify and distribute this software and \n\
database and its documentation for any purpose and without fee or \n\
royalty is hereby granted, provided that you agree to comply with \n\
the following copyright notice and statements, including the disclaimer, \n\
and that the same appear on ALL copies of the software, database and \n\
documentation, including modifications that you make for internal \n\
use or for distribution. \n\
\n\
WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved. \n\
\n\
THIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \n\
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \n\
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \n\
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \n\
ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \n\
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \n\
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \n\
OTHER RIGHTS. \n\
\n\
The name of Princeton University or Princeton may not be used in \n\
advertising or publicity pertaining to distribution of the software \n\
and/or database. Title to copyright in this software, database and \n\
any associated documentation shall at all times remain with \n\
Princeton University and LICENSEE agrees to preserve same. \n"
;
static char dblicense[] = "\
1 This software and database is being provided to you, the LICENSEE, by \n\
2 Princeton University under the following license. By obtaining, using \n\
3 and/or copying this software and database, you agree that you have \n\
4 read, understood, and will comply with these terms and conditions.: \n\
5 \n\
6 Permission to use, copy, modify and distribute this software and \n\
7 database and its documentation for any purpose and without fee or \n\
8 royalty is hereby granted, provided that you agree to comply with \n\
9 the following copyright notice and statements, including the disclaimer, \n\
10 and that the same appear on ALL copies of the software, database and \n\
11 documentation, including modifications that you make for internal \n\
12 use or for distribution. \n\
13 \n\
14 WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved. \n\
15 \n\
16 THIS SOFTWARE AND DATABASE IS PROVIDED \"AS IS\" AND PRINCETON \n\
17 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR \n\
18 IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON \n\
19 UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT- \n\
20 ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE \n\
21 OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT \n\
22 INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR \n\
23 OTHER RIGHTS. \n\
24 \n\
25 The name of Princeton University or Princeton may not be used in \n\
26 advertising or publicity pertaining to distribution of the software \n\
27 and/or database. Title to copyright in this software, database and \n\
28 any associated documentation shall at all times remain with \n\
29 Princeton University and LICENSEE agrees to preserve same. \n"
;
#define DBLICENSE_SIZE (sizeof(dblicense))
#endif /*_WN_*/


@ -1,175 +0,0 @@
/*
grind.h - grinder include file
*/
/* $Id: wngrind.h,v 1.1 2005/02/01 17:58:21 wn Rel $ */
#ifndef _GRIND_
#include "wn.h"
#ifndef NULL
#define NULL 0
#endif
#define FALSE 0
#define TRUE 1
/* Bit positions for legalptrs[] */
#define P_NOUN 1
#define P_VERB 2
#define P_ADJ 4
#define P_ADV 8
/* Pointer status values */
#define UNRESOLVED 0
#define RESOLVED 1
#define DUPLICATE 2
#define SELF_REF 3
#define ALLWORDS (short)0
#define NOSENSE (unsigned char)0xff
#ifdef FOOP
#define HASHSIZE 100003 /* some large prime # */
#endif
#define HASHSIZE 500009 /* some large prime # */
#define ptrkind(p) arraypos(ptrsymbols, p)
/* Structure for representing a synset */
typedef struct synset {
struct synset *ssnext; /* next synset */
struct synset *fans; /* if adjective cluster head, list of fans
if fan, pointer to cluster head */
struct synonym *syns; /* list of synonyms in synset */
struct pointer *ptrs; /* list of pointers from this synset */
struct framelist *frames; /* for verbs - list of framelists */
char *defn; /* textual gloss (optional) */
unsigned int key; /* unique synset key */
unsigned char part; /* part of speech */
unsigned char isfanss; /* TRUE - synset is fan synset */
unsigned char filenum; /* file number (from cmdline) synset is in */
int clusnum; /* cluster # if synset is part of cluster */
int lineno; /* line number in file of synset */
long filepos; /* byte offset of synset in output file */
} G_Ss, *G_Synset; /* Grinder Synset */
/* A pointer from one synset to another */
typedef struct pointer {
struct pointer *pnext; /* next pointer from synset */
struct symbol *pword; /* word used to identify target synset */
struct symbol *pslite; /* label of satellite pointed to (optional) */
struct synset *psynset; /* target synset */
unsigned char pfilenum; /* file containing target synset */
unsigned char psensenum; /* sense number of word */
unsigned char pslite_sense; /* sense number of satellite (optional) */
unsigned char phead; /* TRUE - pointer is to cluster head word */
unsigned char ptype; /* pointer type */
unsigned char status; /* status of pointer */
short fromwdnum; /* word number in this synset ptr is from */
short towdnum; /* word number in target synset ptr is to */
} Ptr, *Pointer;
/* Verb frame list */
typedef struct framelist {
struct framelist *fnext; /* next framelist */
unsigned long frames[(NUMFRAMES/32) + 1]; /* bits for verb frames */
unsigned char frwdnum; /* word number that frame list is for */
} Fr, *Framelist;
/* A word in a synset */
typedef struct synonym {
struct synonym *synnext; /* next word in synset */
struct synset *ss; /* synset this synonym is in */
struct symbol *word; /* symbol table entry for word string */
short sswdnum; /* word number in synset ( <0, headword ) */
short tagcnt; /* num times sense is tagged in concordance */
unsigned char wnsensenum; /* sense number in wn database */
unsigned char sensenum; /* sense number in lexicographer's file */
unsigned char adjclass; /* adjective class of word */
unsigned char infanss; /* TRUE - synonym is in fan synset */
/* FALSE - synonym is not in fan */
char *label; /* only used if string is not lowercase
if lowercase, use word->label */
} Syn, *Synonym;
/* Structure for storing word strings */
typedef struct symbol {
struct symbol *symnext; /* next symbol in this slot */
struct synlist *syns; /* uses of this word as a synonym */
unsigned char sensecnt[NUMPARTS + 1]; /* senses for all parts of speech */
char *label; /* word */
} Sym, *Symbol;
/* List of use of this word as a synonym */
typedef struct synlist {
struct synlist *snext; /* next item on synonym list */
struct synonym *psyn; /* pointer to synonym structure */
} Synl, *SynList;
typedef struct flist {
char *fname; /* file name */
int present; /* file entered on command line? */
} Flist;
extern Flist filelist[];
extern int yylineno;
extern G_Synset headss;
extern int pcount;
extern int errcount;
extern int verifyflag;
extern int nowarn;
extern int ordersenses;
extern int synsetkeys;
extern char *ptrsymbols[];
extern char *legalptrs;
extern char *legalptrsets[];
extern char *ptrreflects[];
extern char **Argv;
extern int Argc;
extern FILE *logfile;
extern char partprefix[];
extern char partseen[];
extern char *adjclass[];
extern Symbol hashtab[];
/* External functions */
extern int arraypos(char **, char *);
extern int filenum(char *);
extern char *strclone(char *);
extern char *strupper(char *);
extern char *strlower(char *);
extern char *PrintFileName(int);
extern char *PrintPointer(Pointer);
extern char *PrintSynonym(Synonym);
extern char *NextFile();
extern int filemode();
extern G_Synset CreateSynset(unsigned char, Synonym, Pointer,
Framelist, char *, unsigned int, int, unsigned char);
extern Pointer CreatePointer(Symbol, Symbol, unsigned char,
unsigned char, unsigned char, unsigned char,
short, short);
extern Synonym CreateSynonym(Symbol, unsigned char, short,
unsigned char, char *);
extern Framelist CreateFramelist(int);
extern Symbol CreateSymbol(char *);
extern Symbol FindSymbol(char *);
extern void ResolvePointers();
extern void FindOffsets();
extern void DumpData();
extern void DumpIndex();
extern void DumpSenseIndex();
extern void ReadCntlist();
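/* A minimal sketch of interning a word string in the grinder's symbol
   table declared above, assuming FindSymbol() returns NULL when the
   string is not yet present (an assumption; this header does not say). */
#ifdef GRIND_SYMBOL_EXAMPLE
static Symbol intern_word(char *w)
{
    Symbol s = FindSymbol(w);    /* assumed NULL when the word is unknown */
    return s != NULL ? s : CreateSymbol(w);
}
#endif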
#endif /* _GRIND_ */


@ -1,322 +0,0 @@
#!/bin/sh
# install - install a program, script, or datafile
scriptversion=2004-07-05.00
# This originates from X11R5 (mit/util/scripts/install.sh), which was
# later released in X11R6 (xc/config/util/install.sh) with the
# following copyright and license.
#
# Copyright (C) 1994 X Consortium
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to
# deal in the Software without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
# sell copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# X CONSORTIUM BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
# AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNEC-
# TION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
#
# Except as contained in this notice, the name of the X Consortium shall not
# be used in advertising or otherwise to promote the sale, use or other deal-
# ings in this Software without prior written authorization from the X Consor-
# tium.
#
#
# FSF changes to this file are in the public domain.
#
# Calling this script install-sh is preferred over install.sh, to prevent
# `make' implicit rules from creating a file called install from it
# when there is no Makefile.
#
# This script is compatible with the BSD install script, but was written
# from scratch. It can only install one file at a time, a restriction
# shared with many OS's install programs.
# set DOITPROG to echo to test this script
# Don't use :- since 4.3BSD and earlier shells don't like it.
doit="${DOITPROG-}"
# put in absolute paths if you don't have them in your path; or use env. vars.
mvprog="${MVPROG-mv}"
cpprog="${CPPROG-cp}"
chmodprog="${CHMODPROG-chmod}"
chownprog="${CHOWNPROG-chown}"
chgrpprog="${CHGRPPROG-chgrp}"
stripprog="${STRIPPROG-strip}"
rmprog="${RMPROG-rm}"
mkdirprog="${MKDIRPROG-mkdir}"
chmodcmd="$chmodprog 0755"
chowncmd=
chgrpcmd=
stripcmd=
rmcmd="$rmprog -f"
mvcmd="$mvprog"
src=
dst=
dir_arg=
dstarg=
no_target_directory=
usage="Usage: $0 [OPTION]... [-T] SRCFILE DSTFILE
or: $0 [OPTION]... SRCFILES... DIRECTORY
or: $0 [OPTION]... -t DIRECTORY SRCFILES...
or: $0 [OPTION]... -d DIRECTORIES...
In the 1st form, copy SRCFILE to DSTFILE.
In the 2nd and 3rd, copy all SRCFILES to DIRECTORY.
In the 4th, create DIRECTORIES.
Options:
-c (ignored)
-d create directories instead of installing files.
-g GROUP $chgrpprog installed files to GROUP.
-m MODE $chmodprog installed files to MODE.
-o USER $chownprog installed files to USER.
-s $stripprog installed files.
-t DIRECTORY install into DIRECTORY.
-T report an error if DSTFILE is a directory.
--help display this help and exit.
--version display version info and exit.
Environment variables override the default commands:
CHGRPPROG CHMODPROG CHOWNPROG CPPROG MKDIRPROG MVPROG RMPROG STRIPPROG
"
while test -n "$1"; do
case $1 in
-c) shift
continue;;
-d) dir_arg=true
shift
continue;;
-g) chgrpcmd="$chgrpprog $2"
shift
shift
continue;;
--help) echo "$usage"; exit 0;;
-m) chmodcmd="$chmodprog $2"
shift
shift
continue;;
-o) chowncmd="$chownprog $2"
shift
shift
continue;;
-s) stripcmd=$stripprog
shift
continue;;
-t) dstarg=$2
shift
shift
continue;;
-T) no_target_directory=true
shift
continue;;
--version) echo "$0 $scriptversion"; exit 0;;
*) # When -d is used, all remaining arguments are directories to create.
# When -t is used, the destination is already specified.
test -n "$dir_arg$dstarg" && break
# Otherwise, the last argument is the destination. Remove it from $@.
for arg
do
if test -n "$dstarg"; then
# $@ is not empty: it contains at least $arg.
set fnord "$@" "$dstarg"
shift # fnord
fi
shift # arg
dstarg=$arg
done
break;;
esac
done
if test -z "$1"; then
if test -z "$dir_arg"; then
echo "$0: no input file specified." >&2
exit 1
fi
# It's OK to call `install-sh -d' without argument.
# This can happen when creating conditional directories.
exit 0
fi
for src
do
# Protect names starting with `-'.
case $src in
-*) src=./$src ;;
esac
if test -n "$dir_arg"; then
dst=$src
src=
if test -d "$dst"; then
mkdircmd=:
chmodcmd=
else
mkdircmd=$mkdirprog
fi
else
# Waiting for this to be detected by the "$cpprog $src $dsttmp" command
# might cause directories to be created, which would be especially bad
# if $src (and thus $dsttmp) contains '*'.
if test ! -f "$src" && test ! -d "$src"; then
echo "$0: $src does not exist." >&2
exit 1
fi
if test -z "$dstarg"; then
echo "$0: no destination specified." >&2
exit 1
fi
dst=$dstarg
# Protect names starting with `-'.
case $dst in
-*) dst=./$dst ;;
esac
# If destination is a directory, append the input filename; won't work
# if double slashes aren't ignored.
if test -d "$dst"; then
if test -n "$no_target_directory"; then
echo "$0: $dstarg: Is a directory" >&2
exit 1
fi
dst=$dst/`basename "$src"`
fi
fi
# This sed command emulates the dirname command.
dstdir=`echo "$dst" | sed -e 's,[^/]*$,,;s,/$,,;s,^$,.,'`
# Make sure that the destination directory exists.
# Skip lots of stat calls in the usual case.
if test ! -d "$dstdir"; then
defaultIFS='
'
IFS="${IFS-$defaultIFS}"
oIFS=$IFS
# Some sh's can't handle IFS=/ for some reason.
IFS='%'
set - `echo "$dstdir" | sed -e 's@/@%@g' -e 's@^%@/@'`
IFS=$oIFS
pathcomp=
while test $# -ne 0 ; do
pathcomp=$pathcomp$1
shift
if test ! -d "$pathcomp"; then
$mkdirprog "$pathcomp"
# mkdir can fail with a `File exist' error in case several
# install-sh are creating the directory concurrently. This
# is OK.
test -d "$pathcomp" || exit
fi
pathcomp=$pathcomp/
done
fi
if test -n "$dir_arg"; then
$doit $mkdircmd "$dst" \
&& { test -z "$chowncmd" || $doit $chowncmd "$dst"; } \
&& { test -z "$chgrpcmd" || $doit $chgrpcmd "$dst"; } \
&& { test -z "$stripcmd" || $doit $stripcmd "$dst"; } \
&& { test -z "$chmodcmd" || $doit $chmodcmd "$dst"; }
else
dstfile=`basename "$dst"`
# Make a couple of temp file names in the proper directory.
dsttmp=$dstdir/_inst.$$_
rmtmp=$dstdir/_rm.$$_
# Trap to clean up those temp files at exit.
trap 'status=$?; rm -f "$dsttmp" "$rmtmp" && exit $status' 0
trap '(exit $?); exit' 1 2 13 15
# Copy the file name to the temp name.
$doit $cpprog "$src" "$dsttmp" &&
# and set any options; do chmod last to preserve setuid bits.
#
# If any of these fail, we abort the whole thing. If we want to
# ignore errors from any of these, just make sure not to ignore
# errors from the above "$doit $cpprog $src $dsttmp" command.
#
{ test -z "$chowncmd" || $doit $chowncmd "$dsttmp"; } \
&& { test -z "$chgrpcmd" || $doit $chgrpcmd "$dsttmp"; } \
&& { test -z "$stripcmd" || $doit $stripcmd "$dsttmp"; } \
&& { test -z "$chmodcmd" || $doit $chmodcmd "$dsttmp"; } &&
# Now rename the file to the real destination.
{ $doit $mvcmd -f "$dsttmp" "$dstdir/$dstfile" 2>/dev/null \
|| {
# The rename failed, perhaps because mv can't rename something else
# to itself, or perhaps because mv is so ancient that it does not
# support -f.
# Now remove or move aside any old file at destination location.
# We try this two ways since rm can't unlink itself on some
# systems and the destination file might be busy for other
# reasons. In this case, the final cleanup might fail but the new
# file should still install successfully.
{
if test -f "$dstdir/$dstfile"; then
$doit $rmcmd -f "$dstdir/$dstfile" 2>/dev/null \
|| $doit $mvcmd -f "$dstdir/$dstfile" "$rmtmp" 2>/dev/null \
|| {
echo "$0: cannot unlink or rename $dstdir/$dstfile" >&2
(exit 1); exit
}
else
:
fi
} &&
# Now rename the file to the real destination.
$doit $mvcmd "$dsttmp" "$dstdir/$dstfile"
}
}
fi || { (exit 1); exit; }
done
# The final little trick to "correctly" pass the exit status to the exit trap.
{
(exit 0); exit
}
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:


@ -1,5 +0,0 @@
lib_LIBRARIES = libWN.a
libWN_a_SOURCES = binsrch.c morph.c search.c wnglobal.c wnhelp.c wnrtl.c wnutil.c
libWN_a_CPPFLAGS = $(INCLUDES)
INCLUDES = -I$(top_srcdir) -I$(top_srcdir)/include
SUBDIRS = wnres


@ -1,626 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
SOURCES = $(libWN_a_SOURCES)
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = lib
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(libdir)"
libLIBRARIES_INSTALL = $(INSTALL_DATA)
LIBRARIES = $(lib_LIBRARIES)
AR = ar
ARFLAGS = cru
libWN_a_AR = $(AR) $(ARFLAGS)
libWN_a_LIBADD =
am_libWN_a_OBJECTS = libWN_a-binsrch.$(OBJEXT) libWN_a-morph.$(OBJEXT) \
libWN_a-search.$(OBJEXT) libWN_a-wnglobal.$(OBJEXT) \
libWN_a-wnhelp.$(OBJEXT) libWN_a-wnrtl.$(OBJEXT) \
libWN_a-wnutil.$(OBJEXT)
libWN_a_OBJECTS = $(am_libWN_a_OBJECTS)
DEFAULT_INCLUDES = -I. -I$(srcdir) -I$(top_builddir)
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
SOURCES = $(libWN_a_SOURCES)
DIST_SOURCES = $(libWN_a_SOURCES)
RECURSIVE_TARGETS = all-recursive check-recursive dvi-recursive \
html-recursive info-recursive install-data-recursive \
install-exec-recursive install-info-recursive \
install-recursive installcheck-recursive installdirs-recursive \
pdf-recursive ps-recursive uninstall-info-recursive \
uninstall-recursive
ETAGS = etags
CTAGS = ctags
DIST_SUBDIRS = $(SUBDIRS)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
lib_LIBRARIES = libWN.a
libWN_a_SOURCES = binsrch.c morph.c search.c wnglobal.c wnhelp.c wnrtl.c wnutil.c
libWN_a_CPPFLAGS = $(INCLUDES)
INCLUDES = -I$(top_srcdir) -I$(top_srcdir)/include
SUBDIRS = wnres
all: all-recursive
.SUFFIXES:
.SUFFIXES: .c .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu lib/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu lib/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
install-libLIBRARIES: $(lib_LIBRARIES)
@$(NORMAL_INSTALL)
test -z "$(libdir)" || $(mkdir_p) "$(DESTDIR)$(libdir)"
@list='$(lib_LIBRARIES)'; for p in $$list; do \
if test -f $$p; then \
f=$(am__strip_dir) \
echo " $(libLIBRARIES_INSTALL) '$$p' '$(DESTDIR)$(libdir)/$$f'"; \
$(libLIBRARIES_INSTALL) "$$p" "$(DESTDIR)$(libdir)/$$f"; \
else :; fi; \
done
@$(POST_INSTALL)
@list='$(lib_LIBRARIES)'; for p in $$list; do \
if test -f $$p; then \
p=$(am__strip_dir) \
echo " $(RANLIB) '$(DESTDIR)$(libdir)/$$p'"; \
$(RANLIB) "$(DESTDIR)$(libdir)/$$p"; \
else :; fi; \
done
uninstall-libLIBRARIES:
@$(NORMAL_UNINSTALL)
@list='$(lib_LIBRARIES)'; for p in $$list; do \
p=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(libdir)/$$p'"; \
rm -f "$(DESTDIR)$(libdir)/$$p"; \
done
clean-libLIBRARIES:
-test -z "$(lib_LIBRARIES)" || rm -f $(lib_LIBRARIES)
libWN.a: $(libWN_a_OBJECTS) $(libWN_a_DEPENDENCIES)
-rm -f libWN.a
$(libWN_a_AR) libWN.a $(libWN_a_OBJECTS) $(libWN_a_LIBADD)
$(RANLIB) libWN.a
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-binsrch.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-morph.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-search.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-wnglobal.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-wnhelp.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-wnrtl.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/libWN_a-wnutil.Po@am__quote@
.c.o:
@am__fastdepCC_TRUE@ if $(COMPILE) -MT $@ -MD -MP -MF "$(DEPDIR)/$*.Tpo" -c -o $@ $<; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/$*.Tpo" "$(DEPDIR)/$*.Po"; else rm -f "$(DEPDIR)/$*.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c $<
.c.obj:
@am__fastdepCC_TRUE@ if $(COMPILE) -MT $@ -MD -MP -MF "$(DEPDIR)/$*.Tpo" -c -o $@ `$(CYGPATH_W) '$<'`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/$*.Tpo" "$(DEPDIR)/$*.Po"; else rm -f "$(DEPDIR)/$*.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c `$(CYGPATH_W) '$<'`
libWN_a-binsrch.o: binsrch.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-binsrch.o -MD -MP -MF "$(DEPDIR)/libWN_a-binsrch.Tpo" -c -o libWN_a-binsrch.o `test -f 'binsrch.c' || echo '$(srcdir)/'`binsrch.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-binsrch.Tpo" "$(DEPDIR)/libWN_a-binsrch.Po"; else rm -f "$(DEPDIR)/libWN_a-binsrch.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='binsrch.c' object='libWN_a-binsrch.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-binsrch.o `test -f 'binsrch.c' || echo '$(srcdir)/'`binsrch.c
libWN_a-binsrch.obj: binsrch.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-binsrch.obj -MD -MP -MF "$(DEPDIR)/libWN_a-binsrch.Tpo" -c -o libWN_a-binsrch.obj `if test -f 'binsrch.c'; then $(CYGPATH_W) 'binsrch.c'; else $(CYGPATH_W) '$(srcdir)/binsrch.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-binsrch.Tpo" "$(DEPDIR)/libWN_a-binsrch.Po"; else rm -f "$(DEPDIR)/libWN_a-binsrch.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='binsrch.c' object='libWN_a-binsrch.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-binsrch.obj `if test -f 'binsrch.c'; then $(CYGPATH_W) 'binsrch.c'; else $(CYGPATH_W) '$(srcdir)/binsrch.c'; fi`
libWN_a-morph.o: morph.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-morph.o -MD -MP -MF "$(DEPDIR)/libWN_a-morph.Tpo" -c -o libWN_a-morph.o `test -f 'morph.c' || echo '$(srcdir)/'`morph.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-morph.Tpo" "$(DEPDIR)/libWN_a-morph.Po"; else rm -f "$(DEPDIR)/libWN_a-morph.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='morph.c' object='libWN_a-morph.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-morph.o `test -f 'morph.c' || echo '$(srcdir)/'`morph.c
libWN_a-morph.obj: morph.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-morph.obj -MD -MP -MF "$(DEPDIR)/libWN_a-morph.Tpo" -c -o libWN_a-morph.obj `if test -f 'morph.c'; then $(CYGPATH_W) 'morph.c'; else $(CYGPATH_W) '$(srcdir)/morph.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-morph.Tpo" "$(DEPDIR)/libWN_a-morph.Po"; else rm -f "$(DEPDIR)/libWN_a-morph.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='morph.c' object='libWN_a-morph.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-morph.obj `if test -f 'morph.c'; then $(CYGPATH_W) 'morph.c'; else $(CYGPATH_W) '$(srcdir)/morph.c'; fi`
libWN_a-search.o: search.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-search.o -MD -MP -MF "$(DEPDIR)/libWN_a-search.Tpo" -c -o libWN_a-search.o `test -f 'search.c' || echo '$(srcdir)/'`search.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-search.Tpo" "$(DEPDIR)/libWN_a-search.Po"; else rm -f "$(DEPDIR)/libWN_a-search.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='search.c' object='libWN_a-search.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-search.o `test -f 'search.c' || echo '$(srcdir)/'`search.c
libWN_a-search.obj: search.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-search.obj -MD -MP -MF "$(DEPDIR)/libWN_a-search.Tpo" -c -o libWN_a-search.obj `if test -f 'search.c'; then $(CYGPATH_W) 'search.c'; else $(CYGPATH_W) '$(srcdir)/search.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-search.Tpo" "$(DEPDIR)/libWN_a-search.Po"; else rm -f "$(DEPDIR)/libWN_a-search.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='search.c' object='libWN_a-search.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-search.obj `if test -f 'search.c'; then $(CYGPATH_W) 'search.c'; else $(CYGPATH_W) '$(srcdir)/search.c'; fi`
libWN_a-wnglobal.o: wnglobal.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnglobal.o -MD -MP -MF "$(DEPDIR)/libWN_a-wnglobal.Tpo" -c -o libWN_a-wnglobal.o `test -f 'wnglobal.c' || echo '$(srcdir)/'`wnglobal.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnglobal.Tpo" "$(DEPDIR)/libWN_a-wnglobal.Po"; else rm -f "$(DEPDIR)/libWN_a-wnglobal.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnglobal.c' object='libWN_a-wnglobal.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnglobal.o `test -f 'wnglobal.c' || echo '$(srcdir)/'`wnglobal.c
libWN_a-wnglobal.obj: wnglobal.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnglobal.obj -MD -MP -MF "$(DEPDIR)/libWN_a-wnglobal.Tpo" -c -o libWN_a-wnglobal.obj `if test -f 'wnglobal.c'; then $(CYGPATH_W) 'wnglobal.c'; else $(CYGPATH_W) '$(srcdir)/wnglobal.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnglobal.Tpo" "$(DEPDIR)/libWN_a-wnglobal.Po"; else rm -f "$(DEPDIR)/libWN_a-wnglobal.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnglobal.c' object='libWN_a-wnglobal.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnglobal.obj `if test -f 'wnglobal.c'; then $(CYGPATH_W) 'wnglobal.c'; else $(CYGPATH_W) '$(srcdir)/wnglobal.c'; fi`
libWN_a-wnhelp.o: wnhelp.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnhelp.o -MD -MP -MF "$(DEPDIR)/libWN_a-wnhelp.Tpo" -c -o libWN_a-wnhelp.o `test -f 'wnhelp.c' || echo '$(srcdir)/'`wnhelp.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnhelp.Tpo" "$(DEPDIR)/libWN_a-wnhelp.Po"; else rm -f "$(DEPDIR)/libWN_a-wnhelp.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnhelp.c' object='libWN_a-wnhelp.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnhelp.o `test -f 'wnhelp.c' || echo '$(srcdir)/'`wnhelp.c
libWN_a-wnhelp.obj: wnhelp.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnhelp.obj -MD -MP -MF "$(DEPDIR)/libWN_a-wnhelp.Tpo" -c -o libWN_a-wnhelp.obj `if test -f 'wnhelp.c'; then $(CYGPATH_W) 'wnhelp.c'; else $(CYGPATH_W) '$(srcdir)/wnhelp.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnhelp.Tpo" "$(DEPDIR)/libWN_a-wnhelp.Po"; else rm -f "$(DEPDIR)/libWN_a-wnhelp.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnhelp.c' object='libWN_a-wnhelp.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnhelp.obj `if test -f 'wnhelp.c'; then $(CYGPATH_W) 'wnhelp.c'; else $(CYGPATH_W) '$(srcdir)/wnhelp.c'; fi`
libWN_a-wnrtl.o: wnrtl.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnrtl.o -MD -MP -MF "$(DEPDIR)/libWN_a-wnrtl.Tpo" -c -o libWN_a-wnrtl.o `test -f 'wnrtl.c' || echo '$(srcdir)/'`wnrtl.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnrtl.Tpo" "$(DEPDIR)/libWN_a-wnrtl.Po"; else rm -f "$(DEPDIR)/libWN_a-wnrtl.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnrtl.c' object='libWN_a-wnrtl.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnrtl.o `test -f 'wnrtl.c' || echo '$(srcdir)/'`wnrtl.c
libWN_a-wnrtl.obj: wnrtl.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnrtl.obj -MD -MP -MF "$(DEPDIR)/libWN_a-wnrtl.Tpo" -c -o libWN_a-wnrtl.obj `if test -f 'wnrtl.c'; then $(CYGPATH_W) 'wnrtl.c'; else $(CYGPATH_W) '$(srcdir)/wnrtl.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnrtl.Tpo" "$(DEPDIR)/libWN_a-wnrtl.Po"; else rm -f "$(DEPDIR)/libWN_a-wnrtl.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnrtl.c' object='libWN_a-wnrtl.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnrtl.obj `if test -f 'wnrtl.c'; then $(CYGPATH_W) 'wnrtl.c'; else $(CYGPATH_W) '$(srcdir)/wnrtl.c'; fi`
libWN_a-wnutil.o: wnutil.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnutil.o -MD -MP -MF "$(DEPDIR)/libWN_a-wnutil.Tpo" -c -o libWN_a-wnutil.o `test -f 'wnutil.c' || echo '$(srcdir)/'`wnutil.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnutil.Tpo" "$(DEPDIR)/libWN_a-wnutil.Po"; else rm -f "$(DEPDIR)/libWN_a-wnutil.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnutil.c' object='libWN_a-wnutil.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnutil.o `test -f 'wnutil.c' || echo '$(srcdir)/'`wnutil.c
libWN_a-wnutil.obj: wnutil.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT libWN_a-wnutil.obj -MD -MP -MF "$(DEPDIR)/libWN_a-wnutil.Tpo" -c -o libWN_a-wnutil.obj `if test -f 'wnutil.c'; then $(CYGPATH_W) 'wnutil.c'; else $(CYGPATH_W) '$(srcdir)/wnutil.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/libWN_a-wnutil.Tpo" "$(DEPDIR)/libWN_a-wnutil.Po"; else rm -f "$(DEPDIR)/libWN_a-wnutil.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wnutil.c' object='libWN_a-wnutil.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(libWN_a_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o libWN_a-wnutil.obj `if test -f 'wnutil.c'; then $(CYGPATH_W) 'wnutil.c'; else $(CYGPATH_W) '$(srcdir)/wnutil.c'; fi`
uninstall-info-am:
# This directory's subdirectories are mostly independent; you can cd
# into them and run `make' without going through this Makefile.
# To change the values of `make' variables: instead of editing Makefiles,
# (1) if the variable is set in `config.status', edit `config.status'
# (which will cause the Makefiles to be regenerated when you run `make');
# (2) otherwise, pass the desired values on the `make' command line.
$(RECURSIVE_TARGETS):
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
target=`echo $@ | sed s/-recursive//`; \
list='$(SUBDIRS)'; for subdir in $$list; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
dot_seen=yes; \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done; \
if test "$$dot_seen" = "no"; then \
$(MAKE) $(AM_MAKEFLAGS) "$$target-am" || exit 1; \
fi; test -z "$$fail"
mostlyclean-recursive clean-recursive distclean-recursive \
maintainer-clean-recursive:
@set fnord $$MAKEFLAGS; amf=$$2; \
dot_seen=no; \
case "$@" in \
distclean-* | maintainer-clean-*) list='$(DIST_SUBDIRS)' ;; \
*) list='$(SUBDIRS)' ;; \
esac; \
rev=''; for subdir in $$list; do \
if test "$$subdir" = "."; then :; else \
rev="$$subdir $$rev"; \
fi; \
done; \
rev="$$rev ."; \
target=`echo $@ | sed s/-recursive//`; \
for subdir in $$rev; do \
echo "Making $$target in $$subdir"; \
if test "$$subdir" = "."; then \
local_target="$$target-am"; \
else \
local_target="$$target"; \
fi; \
(cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) $$local_target) \
|| case "$$amf" in *=*) exit 1;; *k*) fail=yes;; *) exit 1;; esac; \
done && test -z "$$fail"
tags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) tags); \
done
ctags-recursive:
list='$(SUBDIRS)'; for subdir in $$list; do \
test "$$subdir" = . || (cd $$subdir && $(MAKE) $(AM_MAKEFLAGS) ctags); \
done
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: tags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
if ($(ETAGS) --etags-include --version) >/dev/null 2>&1; then \
include_option=--etags-include; \
empty_fix=.; \
else \
include_option=--include; \
empty_fix=; \
fi; \
list='$(SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test ! -f $$subdir/TAGS || \
tags="$$tags $$include_option=$$here/$$subdir/TAGS"; \
fi; \
done; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: ctags-recursive $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
list='$(DIST_SUBDIRS)'; for subdir in $$list; do \
if test "$$subdir" = .; then :; else \
test -d "$(distdir)/$$subdir" \
|| $(mkdir_p) "$(distdir)/$$subdir" \
|| exit 1; \
distdir=`$(am__cd) $(distdir) && pwd`; \
top_distdir=`$(am__cd) $(top_distdir) && pwd`; \
(cd $$subdir && \
$(MAKE) $(AM_MAKEFLAGS) \
top_distdir="$$top_distdir" \
distdir="$$distdir/$$subdir" \
distdir) \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-recursive
all-am: Makefile $(LIBRARIES)
installdirs: installdirs-recursive
installdirs-am:
for dir in "$(DESTDIR)$(libdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-recursive
install-exec: install-exec-recursive
install-data: install-data-recursive
uninstall: uninstall-recursive
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-recursive
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-recursive
clean-am: clean-generic clean-libLIBRARIES mostlyclean-am
distclean: distclean-recursive
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-recursive
dvi-am:
html: html-recursive
info: info-recursive
info-am:
install-data-am:
install-exec-am: install-libLIBRARIES
install-info: install-info-recursive
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-recursive
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-recursive
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-recursive
pdf-am:
ps: ps-recursive
ps-am:
uninstall-am: uninstall-info-am uninstall-libLIBRARIES
uninstall-info: uninstall-info-recursive
.PHONY: $(RECURSIVE_TARGETS) CTAGS GTAGS all all-am check check-am \
clean clean-generic clean-libLIBRARIES clean-recursive ctags \
ctags-recursive distclean distclean-compile distclean-generic \
distclean-recursive distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-data \
install-data-am install-exec install-exec-am install-info \
install-info-am install-libLIBRARIES install-man install-strip \
installcheck installcheck-am installdirs installdirs-am \
maintainer-clean maintainer-clean-generic \
maintainer-clean-recursive mostlyclean mostlyclean-compile \
mostlyclean-generic mostlyclean-recursive pdf pdf-am ps ps-am \
tags tags-recursive uninstall uninstall-am uninstall-info-am \
uninstall-libLIBRARIES
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:


@ -1,225 +0,0 @@
/*
binsearch.c - general binary search functions
*/
#include <stdio.h>
#include <string.h>
static char *Id = "$Id: binsrch.c,v 1.15 2005/02/01 16:46:43 wn Rel $";
/* Binary search - looks for the key passed at the start of a line
in the file associated with open file descriptor fp, and returns
a buffer containing the line in the file. */
#define KEY_LEN (1024)
#define LINE_LEN (1024*25)
static char line[LINE_LEN];
long last_bin_search_offset = 0;
/* General purpose binary search function to search for key as first
item on line in open file. Item is delimited by space. */
#undef getc
char *read_index(long offset, FILE *fp) {
char *linep;
linep = line;
line[0] = '\0';
fseek( fp, offset, SEEK_SET );
fgets(linep, LINE_LEN, fp);
return(line);
}
char *bin_search(char *searchkey, FILE *fp)
{
int c;
long top, mid, bot, diff;
char *linep, key[KEY_LEN];
int length;
diff=666;
linep = line;
line[0] = '\0';
fseek(fp, 0L, 2);
top = 0;
bot = ftell(fp);
mid = (bot - top) / 2;
do {
fseek(fp, mid - 1, 0);
if(mid != 1)
while((c = getc(fp)) != '\n' && c != EOF);
last_bin_search_offset = ftell( fp );
fgets(linep, LINE_LEN, fp);
length = (int)(strchr(linep, ' ') - linep);
strncpy(key, linep, length);
key[length] = '\0';
if(strcmp(key, searchkey) < 0) {
top = mid;
diff = (bot - top) / 2;
mid = top + diff;
}
if(strcmp(key, searchkey) > 0) {
bot = mid;
diff = (bot - top) / 2;
mid = top + diff;
}
} while((strcmp(key, searchkey)) && (diff != 0));
if(!strcmp(key, searchkey))
return(line);
else
return(NULL);
}
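/* Usage sketch for bin_search(): the file must be sorted on the first,
   space-delimited field of each line.  The file name and search key
   below are hypothetical. */
#ifdef BIN_SEARCH_EXAMPLE
static void example_lookup(void)
{
    FILE *fp = fopen("index.noun", "r");        /* hypothetical index file */
    if (fp != NULL) {
        char *entry = bin_search("dog", fp);    /* whole matching line, or NULL */
        if (entry != NULL)
            printf("%s", entry);
        fclose(fp);
    }
}
#endif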
static long offset;
static int bin_search_key(char *searchkey, FILE *fp)
{
int c;
long top, mid, bot, diff;
char *linep, key[KEY_LEN];
int length, offset1, offset2;
/* do binary search to find correct place in file to insert line */
diff=666;
linep = line;
line[0] = '\0';
fseek(fp, 0L, 2);
top = 0;
bot = ftell(fp);
if (bot == 0) {
offset = 0;
return(0); /* empty file */
}
mid = (bot - top) / 2;
/* If only one line in file, don't work through loop */
length = 0;
rewind(fp);
while((c = getc(fp)) != '\n' && c != EOF)
line[length++] = c;
if (getc(fp) == EOF) { /* only 1 line in file */
length = (int)(strchr(linep, ' ') - linep);
strncpy(key, linep, length);
key[length] = '\0';
if(strcmp(key, searchkey) > 0) {
offset = 0;
return(0); /* line with key is not found */
} else if (strcmp(key, searchkey) < 0) {
offset = ftell(fp);
return(0); /* line with key is not found */
} else {
offset = 0;
return(1); /* line with key is found */
}
}
do {
fseek(fp, mid - 1, 0);
if(mid != 1)
while((c = getc(fp)) != '\n' && c != EOF);
offset1 = ftell(fp); /* offset at start of line */
if (fgets(linep, LINE_LEN, fp) != NULL) {
offset2 = ftell(fp); /* offset at start of next line */
length = (int)(strchr(linep, ' ') - linep);
strncpy(key, linep, length);
key[length] = '\0';
if(strcmp(key, searchkey) < 0) { /* further in file */
top = mid;
diff = (bot - top) / 2;
mid = top + diff;
offset = offset2;
}
if(strcmp(key, searchkey) > 0) { /* earlier in file */
bot = mid;
diff = (bot - top) / 2;
mid = top + diff;
offset = offset1;
}
} else {
bot = mid;
diff = (bot - top) / 2;
mid = top + diff;
}
} while((strcmp(key, searchkey)) && (diff != 0));
if(!strcmp(key, searchkey)) {
offset = offset1; /* get to start of current line */
return(1); /* line with key is found */
} else
return(0); /* line with key is not found */
}
/* Copy contents from one file to another. */
void copyfile(FILE *fromfp, FILE *tofp)
{
int c;
while ((c = getc(fromfp)) != EOF)
putc(c, tofp);
}
/* Function to replace a line in a file. Returns the original line,
or NULL in case of error. */
char *replace_line(char *new_line, char *searchkey, FILE *fp)
{
FILE *tfp; /* temporary file pointer */
if (!bin_search_key(searchkey, fp))
return(NULL); /* line with key not found */
if ((tfp = tmpfile()) == NULL)
return(NULL); /* could not create temp file */
fseek(fp, offset, 0);
fgets(line, LINE_LEN, fp); /* read original */
copyfile(fp, tfp);
if (fseek(fp, offset, 0) == -1)
return(NULL); /* could not seek to offset */
fprintf(fp, "%s", new_line); /* write line, not treating it as a format string */
rewind(tfp);
copyfile(tfp, fp);
fclose(tfp);
fflush(fp);
return(line);
}
/* Find location to insert line at in file. If line with this
key is already in file, return NULL. */
char *insert_line(char *new_line, char *searchkey, FILE *fp)
{
FILE *tfp;
if (bin_search_key(searchkey, fp))
return(NULL);
if ((tfp = tmpfile()) == NULL)
return(NULL); /* could not create temp file */
if (fseek(fp, offset, 0) == -1)
return(NULL); /* could not seek to offset */
copyfile(fp, tfp);
if (fseek(fp, offset, 0) == -1)
return(NULL); /* could not seek to offset */
fprintf(fp, "%s", new_line); /* write line, not treating it as a format string */
rewind(tfp);
copyfile(tfp, fp);
fclose(tfp);
fflush(fp);
return(new_line);
}
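/* Usage sketch for the two line editors above: both expect a file opened
   for update ("r+") whose lines are sorted by their first, space-delimited
   field.  The file name and record contents here are hypothetical. */
#ifdef LINE_EDIT_EXAMPLE
static void example_edit(void)
{
    FILE *fp = fopen("cntlist.rev", "r+");      /* hypothetical data file */
    if (fp != NULL) {
        /* insert the record if its key is new; otherwise overwrite it */
        if (insert_line("somekey 42\n", "somekey", fp) == NULL)
            replace_line("somekey 42\n", "somekey", fp);
        fclose(fp);
    }
}
#endif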


@ -1,472 +0,0 @@
/*
morph.c - WordNet search code morphology functions
*/
#include <stdio.h>
#include <ctype.h>
#include <string.h>
#include <stdlib.h>
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include "wn.h"
#ifdef _WINDOWS
#include <windows.h>
#include <windowsx.h>
#define EXCFILE "%s\\%s.exc"
#else
#define EXCFILE "%s/%s.exc"
#endif
static char *Id = "$Id: morph.c,v 1.67 2006/11/14 21:00:23 wn Exp $";
static char *sufx[] ={
/* Noun suffixes */
"s", "ses", "xes", "zes", "ches", "shes", "men", "ies",
/* Verb suffixes */
"s", "ies", "es", "es", "ed", "ed", "ing", "ing",
/* Adjective suffixes */
"er", "est", "er", "est"
};
static char *addr[] ={
/* Noun endings */
"", "s", "x", "z", "ch", "sh", "man", "y",
/* Verb endings */
"", "y", "e", "", "e", "", "e", "",
/* Adjective endings */
"", "", "e", "e"
};
static int offsets[NUMPARTS] = { 0, 0, 8, 16 };
static int cnts[NUMPARTS] = { 0, 8, 8, 4 };
static char msgbuf[256];
#define NUMPREPS 15
static struct {
char *str;
int strlen;
} prepositions[NUMPREPS] = {
"to", 2,
"at", 2,
"of", 2,
"on", 2,
"off", 3,
"in", 2,
"out", 3,
"up", 2,
"down", 4,
"from", 4,
"with", 4,
"into", 4,
"for", 3,
"about", 5,
"between", 7,
};
static FILE *exc_fps[NUMPARTS + 1];
static int do_init();
static int strend(char *, char *);
static char *wordbase(char *, int);
static int hasprep(char *, int);
static char *exc_lookup(char *, int);
static char *morphprep(char *);
/* Open exception list files */
int morphinit(void)
{
static int done = 0;
static int openerr = 0;
if (!done) {
if (OpenDB) { /* make sure WN database files are open */
if (!(openerr = do_init()))
done = 1;
} else
openerr = -1;
}
return(openerr);
}
/* Close exception list files and reopen */
int re_morphinit(void)
{
int i;
for (i = 1; i <= NUMPARTS; i++) {
if (exc_fps[i] != NULL) {
fclose(exc_fps[i]); exc_fps[i] = NULL;
}
}
return(OpenDB ? do_init() : -1);
}
static int do_init(void)
{
int i, openerr;
#ifdef _WINDOWS
HKEY hkey;
DWORD dwType, dwSize;
#else
char *env;
#endif
char searchdir[256], fname[256];
openerr = 0;
/* Find base directory for database. If set, use WNSEARCHDIR.
If not set, check for WNHOME/dict, otherwise use DEFAULTPATH. */
#ifdef _WINDOWS
if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("Software\\WordNet\\3.0"),
0, KEY_READ, &hkey) == ERROR_SUCCESS) {
dwSize = sizeof(searchdir);
RegQueryValueEx(hkey, TEXT("WNHome"),
NULL, &dwType, searchdir, &dwSize);
RegCloseKey(hkey);
strcat(searchdir, DICTDIR);
}
else if (RegOpenKeyEx(HKEY_CURRENT_USER, TEXT("Software\\WordNet\\3.0"),
0, KEY_READ, &hkey) == ERROR_SUCCESS) {
dwSize = sizeof(searchdir);
RegQueryValueEx(hkey, TEXT("WNHome"),
NULL, &dwType, searchdir, &dwSize);
RegCloseKey(hkey);
strcat(searchdir, DICTDIR);
} else
sprintf(searchdir, DEFAULTPATH);
#else
if ((env = getenv("WNSEARCHDIR")) != NULL)
strcpy(searchdir, env);
else if ((env = getenv("WNHOME")) != NULL)
sprintf(searchdir, "%s%s", env, DICTDIR);
else
strcpy(searchdir, DEFAULTPATH);
#endif
for (i = 1; i <= NUMPARTS; i++) {
sprintf(fname, EXCFILE, searchdir, partnames[i]);
if ((exc_fps[i] = fopen(fname, "r")) == NULL) {
sprintf(msgbuf,
"WordNet library error: Can't open exception file(%s)\n\n",
fname);
display_message(msgbuf);
openerr = -1;
}
}
return(openerr);
}
/* Try to find baseform (lemma) of word or collocation in POS.
Works like strtok() - first call is with string, subsequent calls
with NULL argument return additional baseforms for original string. */
char *morphstr(char *origstr, int pos)
{
static char searchstr[WORDBUF], str[WORDBUF];
static int svcnt, svprep;
char word[WORDBUF], *tmp;
int cnt, st_idx = 0, end_idx;
int prep;
char *end_idx1, *end_idx2;
char *append;
if (pos == SATELLITE)
pos = ADJ;
/* First time through for this string */
if (origstr != NULL) {
/* Assume string hasn't had spaces substituted with '_' */
strtolower(strsubst(strcpy(str, origstr), ' ', '_'));
searchstr[0] = '\0';
cnt = cntwords(str, '_');
svprep = 0;
/* first try exception list */
if ((tmp = exc_lookup(str, pos)) && strcmp(tmp, str)) {
svcnt = 1; /* force next time to pass NULL */
return(tmp);
}
/* Then try simply morph on original string */
if (pos != VERB && (tmp = morphword(str, pos)) && strcmp(tmp, str))
return(tmp);
if (pos == VERB && cnt > 1 && (prep = hasprep(str, cnt))) {
/* assume we have a verb followed by a preposition */
svprep = prep;
return(morphprep(str));
} else {
svcnt = cnt = cntwords(str, '-');
while (origstr && --cnt) {
end_idx1 = strchr(str + st_idx, '_');
end_idx2 = strchr(str + st_idx, '-');
if (end_idx1 && end_idx2) {
if (end_idx1 < end_idx2) {
end_idx = (int)(end_idx1 - str);
append = "_";
} else {
end_idx = (int)(end_idx2 - str);
append = "-";
}
} else {
if (end_idx1) {
end_idx = (int)(end_idx1 - str);
append = "_";
} else {
end_idx = (int)(end_idx2 - str);
append = "-";
}
}
if (end_idx < 0) return(NULL); /* shouldn't do this */
strncpy(word, str + st_idx, end_idx - st_idx);
word[end_idx - st_idx] = '\0';
if(tmp = morphword(word, pos))
strcat(searchstr,tmp);
else
strcat(searchstr,word);
strcat(searchstr, append);
st_idx = end_idx + 1;
}
if(tmp = morphword(strcpy(word, str + st_idx), pos))
strcat(searchstr,tmp);
else
strcat(searchstr,word);
if(strcmp(searchstr, str) && is_defined(searchstr,pos))
return(searchstr);
else
return(NULL);
}
} else { /* subsequent call on string */
if (svprep) { /* if verb has preposition, no more morphs */
svprep = 0;
return(NULL);
} else if (svcnt == 1)
return(exc_lookup(NULL, pos));
else {
svcnt = 1;
if ((tmp = exc_lookup(str, pos)) && strcmp(tmp, str))
return(tmp);
else
return(NULL);
}
}
}
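/* Usage sketch for morphstr(): like strtok(), the first call passes the
   string and later calls pass NULL to pick up any further base forms.
   The word "axes" is only an example. */
#ifdef MORPHSTR_EXAMPLE
static void example_morphstr(void)
{
    char *base;
    for (base = morphstr("axes", NOUN); base != NULL; base = morphstr(NULL, NOUN))
        printf("base form: %s\n", base);   /* e.g. "axis" and "ax" */
}
#endif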
/* Try to find baseform (lemma) of individual word in POS */
char *morphword(char *word, int pos)
{
int offset, cnt;
int i;
static char retval[WORDBUF];
char *tmp, tmpbuf[WORDBUF], *end;
sprintf(retval,"");
sprintf(tmpbuf, "");
end = "";
if(word == NULL)
return(NULL);
/* first look for word on exception list */
if((tmp = exc_lookup(word, pos)) != NULL)
return(tmp); /* found it in exception list */
if (pos == ADV) { /* only use exception list for adverbs */
return(NULL);
}
if (pos == NOUN) {
if (strend(word, "ful")) {
cnt = strrchr(word, 'f') - word;
strncat(tmpbuf, word, cnt);
end = "ful";
} else
/* check for noun ending with 'ss' or short words */
if (strend(word, "ss") || (strlen(word) <= 2))
return(NULL);
}
/* If not in exception list, try applying rules from tables */
if (tmpbuf[0] == '\0')
strcpy(tmpbuf, word);
offset = offsets[pos];
cnt = cnts[pos];
for(i = 0; i < cnt; i++){
strcpy(retval, wordbase(tmpbuf, (i + offset)));
if(strcmp(retval, tmpbuf) && is_defined(retval, pos)) {
strcat(retval, end);
return(retval);
}
}
return(NULL);
}
static int strend(char *str1, char *str2)
{
char *pt1;
if(strlen(str2) >= strlen(str1))
return(0);
else {
pt1=str1;
pt1=strchr(str1,0);
pt1=pt1-strlen(str2);
return(!strcmp(pt1,str2));
}
}
static char *wordbase(char *word, int ender)
{
char *pt1;
static char copy[WORDBUF];
strcpy(copy, word);
if(strend(copy,sufx[ender])) {
pt1=strchr(copy,'\0');
pt1 -= strlen(sufx[ender]);
*pt1='\0';
strcat(copy,addr[ender]);
}
return(copy);
}
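/* Worked example for the detachment tables above: noun rule 7 strips the
   suffix "ies" and appends "y", so wordbase("ponies", offsets[NOUN] + 7)
   yields "pony".  morphword() only keeps such a candidate when
   is_defined() confirms it exists in the database. */
#ifdef WORDBASE_EXAMPLE
static void example_wordbase(void)
{
    printf("%s\n", wordbase("ponies", offsets[NOUN] + 7));   /* prints "pony" */
}
#endif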
static int hasprep(char *s, int wdcnt)
{
/* Find a preposition in the verb string and return its
corresponding word number. */
int i, wdnum;
for (wdnum = 2; wdnum <= wdcnt; wdnum++) {
s = strchr(s, '_');
for (s++, i = 0; i < NUMPREPS; i++)
if (!strncmp(s, prepositions[i].str, prepositions[i].strlen) &&
(s[prepositions[i].strlen] == '_' ||
s[prepositions[i].strlen] == '\0'))
return(wdnum);
}
return(0);
}
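/* Example for hasprep(): in the 3-word collocation "ask_for_it" the
   preposition "for" is word 2, so hasprep("ask_for_it", 3) returns 2;
   morphstr() then routes such strings through morphprep(). */
#ifdef HASPREP_EXAMPLE
static void example_hasprep(void)
{
    printf("%d\n", hasprep("ask_for_it", 3));    /* prints 2 */
}
#endif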
static char *exc_lookup(char *word, int pos)
{
static char line[WORDBUF], *beglp, *endlp;
char *excline;
int found = 0;
if (exc_fps[pos] == NULL)
return(NULL);
/* first time through load line from exception file */
if(word != NULL){
if ((excline = bin_search(word, exc_fps[pos])) != NULL) {
strcpy(line, excline);
endlp = strchr(line,' ');
} else
endlp = NULL;
}
if(endlp && *(endlp + 1) != ' '){
beglp = endlp + 1;
while(*beglp && *beglp == ' ') beglp++;
endlp = beglp;
while(*endlp && *endlp != ' ' && *endlp != '\n') endlp++;
if(endlp != beglp){
*endlp='\0';
return(beglp);
}
}
beglp = NULL;
endlp = NULL;
return(NULL);
}
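/* Usage sketch for exc_lookup(): if the noun exception file contains a
   line such as "axes axis ax" (assumed here for illustration), the first
   call returns "axis" and later calls with a NULL word walk the remaining
   base forms on that line until NULL is returned. */
#ifdef EXC_LOOKUP_EXAMPLE
static void example_exc_lookup(void)
{
    char *base;
    for (base = exc_lookup("axes", NOUN); base != NULL; base = exc_lookup(NULL, NOUN))
        printf("exception base: %s\n", base);
}
#endif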
static char *morphprep(char *s)
{
char *rest, *exc_word, *lastwd = NULL, *last;
int i, offset, cnt;
char word[WORDBUF], end[WORDBUF];
static char retval[WORDBUF];
/* Assume that the verb is the first word in the phrase. Strip it
off, check for validity, then try various morphs with the
rest of the phrase tacked on, trying to find a match. */
rest = strchr(s, '_');
last = strrchr(s, '_');
if (rest != last) { /* more than 2 words */
if (lastwd = morphword(last + 1, NOUN)) {
strncpy(end, rest, last - rest + 1);
end[last-rest+1] = '\0';
strcat(end, lastwd);
}
}
strncpy(word, s, rest - s);
word[rest - s] = '\0';
for (i = 0, cnt = strlen(word); i < cnt; i++)
if (!isalnum((unsigned char)(word[i]))) return(NULL);
offset = offsets[VERB];
cnt = cnts[VERB];
/* First try to find the verb in the exception list */
if ((exc_word = exc_lookup(word, VERB)) &&
strcmp(exc_word, word)) {
sprintf(retval, "%s%s", exc_word, rest);
if(is_defined(retval, VERB))
return(retval);
else if (lastwd) {
sprintf(retval, "%s%s", exc_word, end);
if(is_defined(retval, VERB))
return(retval);
}
}
for (i = 0; i < cnt; i++) {
if ((exc_word = wordbase(word, (i + offset))) &&
strcmp(word, exc_word)) { /* ending is different */
sprintf(retval, "%s%s", exc_word, rest);
if(is_defined(retval, VERB))
return(retval);
else if (lastwd) {
sprintf(retval, "%s%s", exc_word, end);
if(is_defined(retval, VERB))
return(retval);
}
}
}
sprintf(retval, "%s%s", word, rest);
if (strcmp(s, retval))
return(retval);
if (lastwd) {
sprintf(retval, "%s%s", word, end);
if (strcmp(s, retval))
return(retval);
}
return(NULL);
}
/*
* Revision 1.1 91/09/25 15:39:47 wn
* Initial revision
*
*/

File diff suppressed because it is too large

View File

@ -1,156 +0,0 @@
/*
wnglobal.c - global variables used by various WordNet applications
$Id: wnglobal.c,v 1.56 2006/11/14 21:00:34 wn Exp $
*/
#ifndef NULL
#define NULL 0
#endif
char *wnrelease = "3.0";
/* Lexicographer file names and numbers */
char *lexfiles[] = {
"adj.all", /* 0 */
"adj.pert", /* 1 */
"adv.all", /* 2 */
"noun.Tops", /* 3 */
"noun.act", /* 4 */
"noun.animal", /* 5 */
"noun.artifact", /* 6 */
"noun.attribute", /* 7 */
"noun.body", /* 8 */
"noun.cognition", /* 9 */
"noun.communication", /* 10 */
"noun.event", /* 11 */
"noun.feeling", /* 12 */
"noun.food", /* 13 */
"noun.group", /* 14 */
"noun.location", /* 15 */
"noun.motive", /* 16 */
"noun.object", /* 17 */
"noun.person", /* 18 */
"noun.phenomenon", /* 19 */
"noun.plant", /* 20 */
"noun.possession", /* 21 */
"noun.process", /* 22 */
"noun.quantity", /* 23 */
"noun.relation", /* 24 */
"noun.shape", /* 25 */
"noun.state", /* 26 */
"noun.substance", /* 27 */
"noun.time", /* 28 */
"verb.body", /* 29 */
"verb.change", /* 30 */
"verb.cognition", /* 31 */
"verb.communication", /* 32 */
"verb.competition", /* 33 */
"verb.consumption", /* 34 */
"verb.contact", /* 35 */
"verb.creation", /* 36 */
"verb.emotion", /* 37 */
"verb.motion", /* 38 */
"verb.perception", /* 39 */
"verb.possession", /* 40 */
"verb.social", /* 41 */
"verb.stative", /* 42 */
"verb.weather", /* 43 */
"adj.ppl", /* 44 */
};
/* Pointer characters and searches */
char *ptrtyp[]={
"", /* 0 not used */
"!", /* 1 ANTPTR */
"@", /* 2 HYPERPTR */
"~", /* 3 HYPOPTR */
"*", /* 4 ENTAILPTR */
"&", /* 5 SIMPTR */
"#m", /* 6 ISMEMBERPTR */
"#s", /* 7 ISSTUFFPTR */
"#p", /* 8 ISPARTPTR */
"%m", /* 9 HASMEMBERPTR */
"%s", /* 10 HASSTUFFPTR */
"%p", /* 11 HASPARTPTR */
"%", /* 12 MERONYM */
"#", /* 13 HOLONYM */
">", /* 14 CAUSETO */
"<", /* 15 PPLPTR */
"^", /* 16 SEEALSO */
"\\", /* 17 PERTPTR */
"=", /* 18 ATTRIBUTE */
"$", /* 19 VERBGROUP */
"+", /* 20 NOMINALIZATIONS */
";", /* 21 CLASSIFICATION */
"-", /* 22 CLASS */
/* additional searches, but not pointers. */
"", /* SYNS */
"", /* FREQ */
"+", /* FRAMES */
"", /* COORDS */
"", /* RELATIVES */
"", /* HMERONYM */
"", /* HHOLONYM */
"", /* WNGREP */
"", /* OVERVIEW */
";c", /* CLASSIF_CATEGORY */
";u", /* CLASSIF_USAGE */
";r", /* CLASSIF_REGIONAL */
"-c", /* CLASS_CATEGORY */
"-u", /* CLASS_USAGE */
"-r", /* CLASS_REGIONAL */
"@i", /* INSTANCE */
"~i", /* INSTANCES */
NULL,
};
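/* Hypothetical helper (an illustration only, not part of the original
 * file): map a pointer symbol read from a data file back to its index
 * in ptrtyp[]; the indices line up with the ANTPTR, HYPERPTR, ...
 * constants defined in wn.h.  Assumes <string.h> for strcmp(). */
#if 0
static int ptrsymbol_to_index(const char *sym)
{
    int i;
    for (i = 1; ptrtyp[i] != NULL; i++)
        if (ptrtyp[i][0] != '\0' && !strcmp(ptrtyp[i], sym))
            return i;             /* e.g. "@" maps to 2 (HYPERPTR) */
    return 0;                     /* unknown symbol or non-pointer search */
}
#endif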
char *partnames[]={ "", "noun", "verb", "adj", "adv", NULL };
char partchars[] = " nvara"; /* add char for satellites to end */
char *adjclass[] = { "", "(p)", "(a)", "(ip)" };
/* Text of verb sentence frames */
char *frametext[] = {
"",
"Something ----s",
"Somebody ----s",
"It is ----ing",
"Something is ----ing PP",
"Something ----s something Adjective/Noun",
"Something ----s Adjective/Noun",
"Somebody ----s Adjective",
"Somebody ----s something",
"Somebody ----s somebody",
"Something ----s somebody",
"Something ----s something",
"Something ----s to somebody",
"Somebody ----s on something",
"Somebody ----s somebody something",
"Somebody ----s something to somebody",
"Somebody ----s something from somebody",
"Somebody ----s somebody with something",
"Somebody ----s somebody of something",
"Somebody ----s something on somebody",
"Somebody ----s somebody PP",
"Somebody ----s something PP",
"Somebody ----s PP",
"Somebody's (body part) ----s",
"Somebody ----s somebody to INFINITIVE",
"Somebody ----s somebody INFINITIVE",
"Somebody ----s that CLAUSE",
"Somebody ----s to somebody",
"Somebody ----s to INFINITIVE",
"Somebody ----s whether INFINITIVE",
"Somebody ----s somebody into V-ing something",
"Somebody ----s something with something",
"Somebody ----s INFINITIVE",
"Somebody ----s VERB-ing",
"It ----s that CLAUSE",
"Something ----s INFINITIVE",
""
};

View File

@ -1,375 +0,0 @@
/*
wnhelp.c
*/
/* $Id: wnhelp.c,v 1.14 2005/02/01 17:03:46 wn Rel $ */
#include "wn.h"
#ifndef NULL
#define NULL 0
#endif
/* Help Strings */
static char freq_help[] = /* FREQ */
"Display familiarity and polysemy information for the search string. \n\
The polysemy count is the number of senses in WordNet. \n\
";
static char grep_help[] = /* WNGREP */
"Print all strings in the database containing the search string \n\
as an individual word, or as the first or last string in a word or \n\
collocation. \n\
";
static char coord_help[] = /* COORDS */
"Display the coordinates (sisters) of the search string. This search \n\
prints the immediate hypernym for each synset that contains the \n\
search string and the hypernym's immediate `hyponyms'. \n\
\n\
Hypernym is the generic term used to designate a whole class of \n\
specific instances. Y is a hypernym of X if X is a (kind of) Y. \n\
\n\
Hyponym is the generic term used to designate a member of a class. \n\
X is a hyponym of Y if X is a (kind of) Y. \n\
\n\
Coordinate words are words that have the same hypernym.\n\
\n\
Hypernym synsets are preceded by \"->\", and hyponym synsets are \n\
preceded by \"=>\". \n\
";
static char hyper_help[] = /* HYPERPTR */
"Display synonyms and immediate hypernyms of synsets containing \n\
the search string. Synsets are ordered by frequency of occurrence. \n\
\n\
Hypernym is the generic term used to designate a whole class of \n\
specific instances. Y is a hypernym of X if X is a (kind of) Y. \n\
\n\
Hypernym synsets are preceded by \"=>\". \n\
";
static char relatives_help[] = /* RELATIVES */
"Display synonyms and immediate hypernyms of synsets containing \n\
the search string. Synsets are grouped by similarity of meaning. \n\
\n\
Hypernym is the generic term used to designate a whole class of \n\
specific instances. Y is a hypernym of X if X is a (kind of) Y. \n\
\n\
Hypernym synsets are preceded by \"=>\". \n\
";
static char ant_help[] = /* ANTPTR */
"Display synsets containing direct anotnyms of the search string. \n\
\n\
Direct antonyms are a pair of words between which there is an \n\
associative bond built up by co-occurrences. \n\
\n\
Antonym synsets are preceded by \"=>\". \n\
";
static char hypertree_help[] = /* -HYPERPTR */
"Recursively display hypernym (superordinate) tree for the search \n\
string. \n\
\n\
Hypernym is the generic term used to designate a whole class of \n\
specific instances. Y is a hypernym of X if X is a (kind of) Y. \n\
\n\
Hypernym synsets are preceded by \"=>\", and are indented from \n\
the left according to their level in the hierarchy. \n\
";
static char hypo_help[] = /* HYPONYM */
"Display immediate hyponyms (subordinates) for the search string. \n\
\n\
Hyponym is the generic term used to designate a member of a class. \n\
X is a hyponym of Y if X is a (kind of) Y. \n\
\n\
Hyponym synsets are preceded by \"=>\". \n\
";
static char hypotree_help[] = /* -HYPONYM */
"Display hyponym (subordinate) tree for the search string. This is \n\
a recursive search that finds the hyponyms of each hyponym. \n\
\n\
Hyponym is the generic term used to designate a member of a class. \n\
X is a hyponym of Y if X is a (kind of) Y. \n\
\n\
Hyponym synsets are preceded by \"=>\", and are indented from the left \n\
according to their level in the hierarchy. \n\
";
static char holo_help[] = /* HOLONYM */
"Display all holonyms of the search string. \n\
\n\
A holonym is the name of the whole of which the 'meronym' names a part. \n\
Y is a holonym of X if X is a part of Y. \n\
\n\
A meronym is the name of a constituent part, the substance of, or a \n\
member of something. X is a meronym of Y if X is a part of Y. \n\
\n\
Holonym synsets are preceded with either the string \"MEMBER OF\", \n\
\"PART OF\" or \"SUBSTANCE OF\" depending on the specific type of holonym. \n\
";
static char holotree_help[] = /* -HOLONYM */
"Display holonyms for search string tree. This is a recursive search \n\
that prints all the holonyms of the search string and all of the \n\
holonym's holonyms. \n\
\n\
A holonym is the name of the whole of which the meronym names a part. \n\
Y is a holonym of X if X is a part of Y. \n\
\n\
A meronym is the name of a constituent part, the substance of, or a \n\
member of something. X is a meronym of Y if X is a part of Y. \n\
\n\
Holonym synsets are preceded with either the string \"MEMBER OF\", \n\
\"PART OF\" or \"SUBSTANCE OF\" depending on the specific \n\
type of holonym. Synsets are indented from the left according to \n\
their level in the hierarchy. \n\
";
static char mero_help[] = /* MERONYM */
"Display all meronyms of the search string. \n\
\n\
A meronym is the name of a constituent part, the substance of, or a \n\
member of something. X is a meronym of Y if X is a part of Y. \n\
\n\
A holonym is the name of the whole of which the meronym names a part. \n\
Y is a holonym of X if X is a part of Y. \n\
\n\
Meronym synsets are preceded with either the string \"HAS MEMBER\", \n\
\"HAS PART\" or \"HAS SUBSTANCE\" depending on the specific type of holonym. \n\
";
static char merotree_help[] = /* -HMERONYM */
"Display meronyms for search string tree. This is a recursive search \n\
that prints all the meronyms of the search string and all of its \n\
hypernyms. \n\
\n\
A meronym is the name of a constituent part, the substance of, or a \n\
member of something. X is a meronym of Y if X is a part of Y. \n\
\n\
A holonym is the name of the whole of which the meronym names a part. \n\
Y is a holonym of X if X is a part of Y. \n\
\n\
Hypernym is the generic term used to designate a whole class of \n\
specific instances. Y is a hypernym of X if X is a (kind of) Y. \n\
\n\
Meronym synsets are preceded with either the string \"HAS MEMBER\", \n\
\"HAS PART\" or \"HAS SUBSTANCE\" depending on the specific type of \n\
holonym. Synsets are indented from the left according to their level \n\
in the hierarchy. \n\
";
static char deriv_help[] = /* DERIVATION */
"Display derived forms - nouns and verbs that are related morphologically. \n\
Each related synset is preceded by its part of speech. Each word in the \n\
synset is followed by its sense number. \n\
";
static char domain_help[] = /* CLASSIFICATION */
"Display domain to which this synset belongs. \n\
\n\
Each domain synset is preceded by \"TOPIC\", \"REGION\", or \"USAGE\" to \n\
distinguish topical, geographic and functional classifications, and \n\
its part of speech. Each word is followed by its sense number. \n\
";
static char domainterms_help[] = /* CLASS */
"Display all synsets belonging to the domain. \n\
\n\
Each domain term synset is preceded by \"TOPIC TERM\", \"REGION TERM\", or \n\
\"USAGE TERM\" to distinguish topical, geographic and functional classes, \n\
and its part of speech. Each word is followed by its sense number. \n\
";
static char nattrib_help[] = /* ATTRIBUTE */
"Display adjectives for which search string is an attribute. \n\
";
static char aattrib_help[] = /* ATTRIBUTE */
"Display nouns that are attributes of search string. \n\
";
static char tropo_help[] = /* -HYPOPTR */
"Display hyponym tree for the search string. This is \n\
a recursive search that finds the hyponyms of each hyponym. \n\
\n\
For verbs, hyponyms are referred to as troponyms. Troponyms indicate particular ways \n\
to perform a function. X is a hyponym of Y if to X is a particular way to Y. \n\
\n\
Troponym synsets are preceded by \"=>\", and are indented from the left \n\
according to their level in the hierarchy. \n\
";
static char entail_help[] = /* ENTAILPTR */
"Recursively display entailment relations of the search string. \n\
\n\
The action represented by the verb X entails Y if X cannot be done \n\
unless Y is, or has been, done. \n\
\n\
Entailment synsets are preceded by \"=>\", and are indented from the left \n\
according to their level in the hierarchy. \n\
";
static char causeto_help[] = /* CAUSETO */
"Recursively display CAUSE TO relations of the search string. \n\
\n\
The action represented by the verb X causes the action represented by \n\
the verb Y. \n\
\n\
CAUSE TO synsets are preceded by \"=>\", and are indented from the left \n\
according to their level in the hierarchy. \n\
";
static char frames_help[] = /* FRAMES */
"Display applicable verb sentence frames for the search string. \n\
\n\
A frame is a sentence template illustrating the usage of a verb. \n\
\n\
Verb sentence frames are preceded with the string \"*>\" if a sentence \n\
frame is acceptable for all of the words in the synset, and with \"=>\" \n\
if a sentence frame is acceptable for the search string only. \n\
\n\
Some verb senses have example sentences. These are preceded with \"EX:\". \n\
";
static char *nounhelps[] = {
hyper_help,
relatives_help,
ant_help,
coord_help,
hypertree_help,
hypo_help,
hypotree_help,
holo_help,
holotree_help,
mero_help,
merotree_help,
deriv_help,
nattrib_help,
domain_help,
domainterms_help,
freq_help,
grep_help
};
static char *verbhelps[] = {
hyper_help,
relatives_help,
ant_help,
coord_help,
hypertree_help,
tropo_help,
entail_help,
causeto_help,
deriv_help,
frames_help,
domain_help,
domainterms_help,
freq_help,
grep_help
};
static char *adjhelps[] = {
/* SIMPTR */
"Display synonyms and synsets related to synsets containing \n\
the search string. If the search string is in a head synset \n\
the 'cluster's' satellite synsets are displayed. If the search \n\
string is in a satellite synset, its head synset is displayed. \n\
If the search string is a pertainym the word or synset that it \n\
pertains to is displayed. \n\
\n\
A cluster is a group of adjective synsets that are organized around \n\
antonymous pairs or triplets. An adjective cluster contains two or more \n\
head synsets that contain antonyms. Each head synset has one or more \n\
satellite synsets. \n\
\n\
A head synset contains at least one word that has a direct antonym \n\
in another head synset of the same cluster. \n\
\n\
A satellite synset represents a concept that is similar in meaning to \n\
the concept represented by its head synset. \n\
\n\
Direct antonyms are a pair of words between which there is an \n\
associative bond built up by co-occurrences. \n\
\n\
Direct antonyms are printed in parentheses following the adjective. \n\
The position of an adjective in relation to the noun may be restricted \n\
to the prenominal, postnominal or predicative position. Where present \n\
these restrictions are noted in parentheses. \n\
\n\
A pertainym is a relational adjective, usually defined by such phrases \n\
as \"of or pertaining to\" and that does not have an antonym. It pertains \n\
to a noun or another pertainym. \n\
\n\
Senses contained in head synsets are displayed above the satellites, \n\
which are indented and preceded by \"=>\". Senses contained in \n\
satellite synsets are displayed with the head synset below. The head \n\
synset is preceded by \"=>\". \n\
\n\
Pertainym senses display the word or synsets that the search string \n\
pertains to. \n\
",
/* ANTPTR */
"Display synsets containing antonyms of the search string. If the \n\
search string is in a head synset the direct antonym is displayed \n\
along with the head synset's satellite synsets. If the search \n\
string is in a satellite synset, its indirect antonym is displayed \n\
via the head synset. \n\
\n\
A head synset contains at least one word that has a direct antonym \n\
in another head synset of the same cluster. \n\
\n\
A satellite synset represents a concept that is similar in meaning to \n\
the concept represented by its head synset. \n\
\n\
Direct antonyms are a pair of words between which there is an \n\
associative bond built up by co-occurrences. \n\
\n\
Direct antonyms are printed in parentheses following the adjective. \n\
The position of an adjective in relation to the noun may be restricted \n\
to the prenominal, postnominal or predicative position. Where present \n\
these restrictions are noted in parentheses. \n\
\n\
Senses contained in head synsets are displayed, followed by the \n\
head synset containing the search string's direct antonym and its \n\
similar synsets, which are indented and preceded by \"=>\". Senses \n\
contained in satellite synsets are displayed followed by the indirect \n\
antonym via the satellite's head synset. \n\
",
aattrib_help,
domain_help,
domainterms_help,
freq_help,
grep_help
};
static char *advhelps[] = {
/* SIMPTR */
"Display synonyms and synsets related to synsets containing \n\
the search string. If the search string is a pertainym the word \n\
or synset that it pertains to is displayed. \n\
\n\
A pertainym is a relational adverb that is derived from an adjective. \n\
\n\
Pertainym senses display the word that the search string is derived from \n\
and the adjective synset that contains the word. If the adjective synset \n\
is a satellite synset, its head synset is also displayed. \n\
",
ant_help,
domain_help,
domainterms_help,
freq_help,
grep_help
};
char **helptext[NUMPARTS + 1] = {
NULL, nounhelps, verbhelps, adjhelps, advhelps
};

View File

@ -1,3 +0,0 @@
EXTRA_DIST = license.txt wn.xbm wnb.man wngloss.man
wnresdir = $(prefix)/lib/wnres
wnres_DATA = license.txt wn.xbm wnb.man wngloss.man

View File

@ -1,314 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ../..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
subdir = lib/wnres
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
SOURCES =
DIST_SOURCES =
am__vpath_adj_setup = srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`;
am__vpath_adj = case $$p in \
$(srcdir)/*) f=`echo "$$p" | sed "s|^$$srcdirstrip/||"`;; \
*) f=$$p;; \
esac;
am__strip_dir = `echo $$p | sed -e 's|^.*/||'`;
am__installdirs = "$(DESTDIR)$(wnresdir)"
wnresDATA_INSTALL = $(INSTALL_DATA)
DATA = $(wnres_DATA)
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
EXTRA_DIST = license.txt wn.xbm wnb.man wngloss.man
wnresdir = $(prefix)/lib/wnres
wnres_DATA = license.txt wn.xbm wnb.man wngloss.man
all: all-am
.SUFFIXES:
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu lib/wnres/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu lib/wnres/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
uninstall-info-am:
install-wnresDATA: $(wnres_DATA)
@$(NORMAL_INSTALL)
test -z "$(wnresdir)" || $(mkdir_p) "$(DESTDIR)$(wnresdir)"
@list='$(wnres_DATA)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
f=$(am__strip_dir) \
echo " $(wnresDATA_INSTALL) '$$d$$p' '$(DESTDIR)$(wnresdir)/$$f'"; \
$(wnresDATA_INSTALL) "$$d$$p" "$(DESTDIR)$(wnresdir)/$$f"; \
done
uninstall-wnresDATA:
@$(NORMAL_UNINSTALL)
@list='$(wnres_DATA)'; for p in $$list; do \
f=$(am__strip_dir) \
echo " rm -f '$(DESTDIR)$(wnresdir)/$$f'"; \
rm -f "$(DESTDIR)$(wnresdir)/$$f"; \
done
tags: TAGS
TAGS:
ctags: CTAGS
CTAGS:
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(DATA)
installdirs:
for dir in "$(DESTDIR)$(wnresdir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-generic mostlyclean-am
distclean: distclean-am
-rm -f Makefile
distclean-am: clean-am distclean-generic
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am: install-wnresDATA
install-exec-am:
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-info-am uninstall-wnresDATA
.PHONY: all all-am check check-am clean clean-generic distclean \
distclean-generic distdir dvi dvi-am html html-am info info-am \
install install-am install-data install-data-am install-exec \
install-exec-am install-info install-info-am install-man \
install-strip install-wnresDATA installcheck installcheck-am \
installdirs maintainer-clean maintainer-clean-generic \
mostlyclean mostlyclean-generic pdf pdf-am ps ps-am uninstall \
uninstall-am uninstall-info-am uninstall-wnresDATA
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,32 +0,0 @@
WordNet Release 3.0
This software and database is being provided to you, the LICENSEE, by
Princeton University under the following license. By obtaining, using
and/or copying this software and database, you agree that you have
read, understood, and will comply with these terms and conditions.:
Permission to use, copy, modify and distribute this software and
database and its documentation for any purpose and without fee or
royalty is hereby granted, provided that you agree to comply with
the following copyright notice and statements, including the disclaimer,
and that the same appear on ALL copies of the software, database and
documentation, including modifications that you make for internal
use or for distribution.
WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
THIS SOFTWARE AND DATABASE IS PROVIDED "AS IS" AND PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PRINCETON
UNIVERSITY MAKES NO REPRESENTATIONS OR WARRANTIES OF MERCHANT-
ABILITY OR FITNESS FOR ANY PARTICULAR PURPOSE OR THAT THE USE
OF THE LICENSED SOFTWARE, DATABASE OR DOCUMENTATION WILL NOT
INFRINGE ANY THIRD PARTY PATENTS, COPYRIGHTS, TRADEMARKS OR
OTHER RIGHTS.
The name of Princeton University or Princeton may not be used in
advertising or publicity pertaining to distribution of the software
and/or database. Title to copyright in this software, database and
any associated documentation shall at all times remain with
Princeton University and LICENSEE agrees to preserve same.

View File

@ -1,168 +0,0 @@
#define wn_width 135
#define wn_height 116
static char wn_bits[] = {
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0xff,
0x07, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xf0, 0x00, 0x78, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3c, 0xfe, 0x87, 0x0f,
0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x86, 0xff, 0x3f, 0x30, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x63, 0x00, 0xf0, 0xc7, 0x00, 0x00,
0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x19,
0x00, 0x80, 0x1f, 0x01, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0xc0, 0x04, 0x00, 0x00, 0x78, 0x02, 0x00, 0x00, 0x80,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc0, 0x06, 0x00, 0x00,
0xc0, 0x04, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x60, 0x02, 0x00, 0xc0, 0x80, 0x09, 0x00, 0x00, 0x80, 0x00, 0x00,
0x00, 0x0c, 0x00, 0x00, 0x00, 0x00, 0x70, 0x02, 0x00, 0x80, 0x00, 0x13,
0x00, 0x00, 0x80, 0x00, 0x78, 0x00, 0x1e, 0x00, 0x00, 0x00, 0x00, 0x1f,
0x01, 0x00, 0xc0, 0x00, 0x36, 0x00, 0x00, 0x80, 0x00, 0x7f, 0x00, 0x7e,
0x00, 0x00, 0x00, 0x02, 0xff, 0xc0, 0x03, 0x80, 0x01, 0xee, 0x00, 0x00,
0x80, 0xc0, 0x7f, 0x00, 0xfc, 0x00, 0x00, 0x00, 0x06, 0xff, 0x80, 0x0f,
0x80, 0x01, 0xcc, 0x00, 0x00, 0x80, 0xe0, 0x7f, 0x00, 0xfc, 0x03, 0x00,
0x00, 0x06, 0xff, 0x80, 0x1f, 0x80, 0x03, 0xcc, 0x01, 0x00, 0x80, 0xf0,
0x7f, 0x00, 0xfc, 0x0f, 0x00, 0x00, 0x0e, 0xfe, 0x81, 0x1f, 0x80, 0x07,
0x5c, 0x02, 0x00, 0x80, 0xe0, 0x7f, 0x00, 0xfc, 0x3f, 0x00, 0x00, 0x0c,
0xfe, 0x81, 0x1f, 0x80, 0x07, 0x58, 0x04, 0x00, 0x80, 0xc0, 0x7f, 0x00,
0xfc, 0x3f, 0x00, 0x00, 0x1c, 0xfe, 0x81, 0x1f, 0x80, 0x07, 0x58, 0x08,
0x00, 0x80, 0xc0, 0x7f, 0x00, 0xfc, 0x3f, 0x00, 0x00, 0x3c, 0xfe, 0x83,
0x1f, 0x80, 0x07, 0x58, 0x10, 0x00, 0x80, 0x80, 0xff, 0x00, 0xfc, 0x1f,
0x00, 0x00, 0x7c, 0xfe, 0x83, 0x1f, 0x80, 0x07, 0x58, 0x20, 0x00, 0x80,
0x80, 0xff, 0x00, 0xfc, 0x0f, 0x00, 0x00, 0x7c, 0xfe, 0x87, 0x1f, 0x80,
0x07, 0xd8, 0x20, 0x00, 0x80, 0x00, 0xff, 0x00, 0xfc, 0x0f, 0x00, 0x00,
0x7c, 0xfe, 0x87, 0x0f, 0x80, 0x07, 0x58, 0x41, 0x00, 0x80, 0x00, 0xff,
0x00, 0xfc, 0x07, 0x00, 0x00, 0x3c, 0xfe, 0x87, 0x0f, 0x80, 0x07, 0x5c,
0x81, 0x00, 0x80, 0x00, 0xff, 0x00, 0xfc, 0x07, 0x00, 0x00, 0xbc, 0xfe,
0x8f, 0x0f, 0x00, 0x07, 0x2c, 0x03, 0x01, 0x80, 0x00, 0xfe, 0x00, 0xfc,
0x03, 0x00, 0x00, 0xbc, 0xfe, 0x8f, 0x0f, 0x00, 0x07, 0x6c, 0x02, 0x02,
0x80, 0x00, 0xfe, 0x18, 0xfe, 0x03, 0x00, 0x00, 0xbc, 0xfe, 0x9f, 0x0f,
0x00, 0x07, 0xa4, 0x03, 0x02, 0x80, 0x00, 0xfe, 0x19, 0xfe, 0x03, 0x00,
0x00, 0x3c, 0xfe, 0x9f, 0x0f, 0x00, 0x07, 0x24, 0xfe, 0x07, 0x80, 0x00,
0xfe, 0x39, 0xfe, 0x03, 0x00, 0x00, 0x3e, 0xfe, 0x9f, 0x0f, 0x00, 0x07,
0x16, 0x04, 0x04, 0x80, 0x00, 0xfe, 0x39, 0xfe, 0x01, 0x00, 0x00, 0x3e,
0xfe, 0xbf, 0x0f, 0x00, 0x07, 0x12, 0x04, 0x0c, 0x80, 0x00, 0xfc, 0x3d,
0xfe, 0x01, 0x00, 0x03, 0xbe, 0xfe, 0xbf, 0x0f, 0x00, 0x07, 0x0b, 0x04,
0x08, 0x80, 0x00, 0xfc, 0x7d, 0xfe, 0x61, 0xd8, 0x07, 0xbe, 0xbe, 0xff,
0x8f, 0x0f, 0x87, 0x05, 0x04, 0x08, 0x80, 0x00, 0xfc, 0x7f, 0xfe, 0xf0,
0xf8, 0x87, 0x9f, 0xbe, 0xff, 0xcf, 0xdf, 0xcf, 0x05, 0x04, 0x10, 0x80,
0x00, 0xfc, 0xff, 0xfe, 0xf8, 0xf9, 0xcf, 0x3f, 0x3f, 0xff, 0xcf, 0xdf,
0xcf, 0x02, 0x08, 0x10, 0x80, 0x00, 0xfc, 0xff, 0xff, 0xfc, 0xf1, 0xef,
0x3f, 0x3f, 0xff, 0xef, 0xd9, 0x6f, 0x01, 0x08, 0x20, 0x80, 0x00, 0xf8,
0xff, 0xff, 0xfc, 0xf9, 0xef, 0x3f, 0x3f, 0xfe, 0xef, 0xdf, 0x37, 0x01,
0x08, 0x20, 0x80, 0x00, 0xf8, 0xff, 0x7f, 0xde, 0xfb, 0xe6, 0x3e, 0x3f,
0xfe, 0xef, 0x1f, 0x93, 0x00, 0x08, 0x20, 0x80, 0x00, 0xf8, 0xff, 0x7f,
0xde, 0x79, 0xf0, 0xbc, 0x3f, 0xfe, 0xef, 0x07, 0x4b, 0x07, 0x08, 0x20,
0x80, 0x00, 0xf8, 0xff, 0x7f, 0xde, 0x79, 0xf0, 0xbc, 0x3f, 0xfe, 0xef,
0x90, 0xa7, 0x1c, 0x08, 0x20, 0x80, 0x00, 0xf8, 0xff, 0x3f, 0xfe, 0x7d,
0xf0, 0xbe, 0x3f, 0xfc, 0xef, 0x9d, 0x9b, 0xf1, 0x0d, 0x20, 0x80, 0x00,
0xf8, 0xff, 0x3f, 0xfe, 0x7c, 0xf0, 0x9f, 0x3f, 0xfc, 0xef, 0x9f, 0x2d,
0x01, 0x1f, 0x20, 0x80, 0x00, 0xf0, 0xef, 0x3f, 0x7c, 0x7c, 0xf0, 0xdf,
0x3f, 0xfc, 0xc7, 0x9f, 0x3e, 0x01, 0xf0, 0x21, 0x80, 0x00, 0xf0, 0xc7,
0x1f, 0x38, 0xfe, 0xe0, 0x9f, 0x3f, 0xf8, 0x87, 0x4f, 0x3e, 0x02, 0x10,
0x3e, 0x80, 0x00, 0xf0, 0xc7, 0x1f, 0x00, 0x7e, 0xe0, 0x17, 0x3c, 0x78,
0x00, 0x30, 0x01, 0x02, 0x10, 0x40, 0x80, 0x00, 0xf0, 0x87, 0x1f, 0x00,
0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0xd8, 0x00, 0x02, 0x10, 0x40, 0x80,
0x00, 0xe0, 0x83, 0x03, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x6c,
0x00, 0x02, 0x10, 0x40, 0x80, 0x00, 0xe0, 0x00, 0x00, 0x00, 0x00, 0x00,
0x16, 0x00, 0x00, 0x00, 0x36, 0x00, 0x02, 0x10, 0x40, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0x00, 0x3b, 0x00, 0x02,
0x10, 0x40, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x16, 0x00,
0x00, 0x80, 0x35, 0x00, 0x02, 0x10, 0x40, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x16, 0x00, 0x00, 0xe0, 0xf6, 0x00, 0x02, 0x10, 0x20,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x2c, 0x00, 0x00, 0x38,
0x11, 0x07, 0x02, 0x00, 0x20, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x2c, 0x00, 0x00, 0x8e, 0x08, 0x7c, 0x02, 0x10, 0x20, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x58, 0x00, 0x80, 0x61, 0x08, 0x00,
0x0f, 0x10, 0x20, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x90,
0x01, 0x60, 0x1e, 0x08, 0x00, 0xf6, 0x10, 0x20, 0x80, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x30, 0x0e, 0x1c, 0x01, 0x0c, 0x00, 0x81, 0xdf,
0x1f, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xe0, 0xf0, 0xc7,
0x00, 0x04, 0x00, 0x00, 0x10, 0x18, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xe0, 0x03, 0xb8, 0x01, 0x04, 0x00, 0x03, 0x10, 0x10, 0x80,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x30, 0xff, 0x07, 0x03, 0x02,
0x00, 0x01, 0x10, 0x10, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0xb0, 0x78, 0x00, 0x0e, 0x00, 0x80, 0x00, 0x18, 0x10, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x58, 0xc0, 0x01, 0x38, 0x02, 0x80, 0x00,
0x08, 0x10, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4c, 0x00,
0x02, 0xe0, 0x02, 0x80, 0x00, 0x08, 0x10, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x2c, 0x00, 0x3c, 0x80, 0x03, 0x40, 0x00, 0x08, 0x10,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0x00, 0xc0, 0x03,
0x07, 0x40, 0x00, 0x0c, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x17, 0x00, 0x00, 0x9c, 0x1c, 0x20, 0x00, 0x04, 0x08, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00, 0x00, 0xe0, 0xe0, 0x20,
0x00, 0x04, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x0b,
0x00, 0x00, 0x00, 0x81, 0x13, 0x00, 0x06, 0x08, 0x80, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x80, 0x09, 0x00, 0x00, 0x00, 0x06, 0x1e, 0x00, 0x02,
0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc0, 0x04, 0x00, 0x00,
0x00, 0x08, 0xb0, 0x01, 0x02, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0xc0, 0x06, 0x00, 0x00, 0x00, 0x10, 0x18, 0x07, 0x02, 0x08, 0x80,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x60, 0x02, 0x00, 0x00, 0x00, 0x20,
0x08, 0x38, 0x02, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x60,
0x01, 0x00, 0x00, 0x00, 0x40, 0x0c, 0xe0, 0x03, 0x08, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x30, 0x01, 0x00, 0x00, 0x00, 0x40, 0x04, 0x00,
0x3f, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x98, 0x00, 0x00,
0x00, 0x00, 0x80, 0x06, 0x00, 0xc1, 0x0f, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x58, 0x00, 0x00, 0x00, 0x00, 0x00, 0x03, 0x80, 0x01, 0x08,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4c, 0x00, 0x00, 0x00, 0x00,
0x00, 0x01, 0x80, 0x00, 0x08, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01, 0x80, 0x00, 0x0c, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x01,
0x40, 0x00, 0x0c, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x13, 0x00,
0x00, 0x00, 0x00, 0x00, 0x02, 0x40, 0x00, 0x14, 0x80, 0x00, 0x00, 0x00,
0x00, 0x00, 0x80, 0x13, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x40, 0x00,
0x16, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x09, 0x00, 0x00, 0x00,
0x00, 0x00, 0x02, 0x20, 0x00, 0x22, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00,
0xc0, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x20, 0x00, 0x22, 0x80,
0x00, 0x00, 0x00, 0x00, 0x00, 0x60, 0x04, 0x00, 0x00, 0x00, 0x00, 0x00,
0x06, 0x20, 0x00, 0x22, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x60, 0x02,
0x00, 0x00, 0x00, 0x00, 0x00, 0x0e, 0x20, 0x00, 0x41, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x30, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x3a, 0x30,
0x00, 0x41, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0xe2, 0x10, 0x00, 0x41, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x98, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x82, 0x1b, 0x80, 0x81,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x8c, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x02, 0x1e, 0x80, 0x80, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x4e,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0x76, 0xc0, 0x80, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x26, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x04,
0xc2, 0x4f, 0x80, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x23, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x04, 0x03, 0x78, 0x80, 0x80, 0x00, 0x00, 0x00,
0x00, 0x80, 0x13, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x84, 0x00, 0xe0,
0x81, 0x80, 0x00, 0x00, 0x00, 0x00, 0x80, 0x09, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0xc4, 0x00, 0x20, 0xff, 0x81, 0x00, 0x00, 0x00, 0x00, 0xc0,
0x09, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x64, 0x00, 0x10, 0x40, 0x81,
0x00, 0x00, 0x00, 0x00, 0xc0, 0x05, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x34, 0x00, 0x18, 0x60, 0x81, 0x00, 0x00, 0x00, 0x00, 0xe0, 0x04, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00, 0x08, 0x20, 0x81, 0x00, 0x00,
0x00, 0x00, 0x60, 0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x08, 0x00,
0x0c, 0x10, 0x81, 0x00, 0x00, 0x00, 0x00, 0x70, 0x01, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x38, 0x00, 0x04, 0x10, 0x81, 0x00, 0x00, 0x00, 0x00,
0x38, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x70, 0x00, 0x04, 0x10,
0x81, 0x00, 0x00, 0x00, 0x00, 0xb8, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x90, 0x01, 0x06, 0x90, 0x80, 0x00, 0x00, 0x00, 0x00, 0x9c, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20, 0x06, 0x02, 0x88, 0x80, 0x00,
0x00, 0x00, 0x00, 0x4c, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x20,
0x38, 0x03, 0x88, 0x80, 0x00, 0x00, 0x00, 0x00, 0x2e, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x40, 0xc0, 0x03, 0x44, 0x80, 0x00, 0x00, 0x00,
0x00, 0x2f, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x40, 0x80, 0x7d,
0x26, 0x80, 0x00, 0x00, 0x00, 0x00, 0x19, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x80, 0x80, 0xc0, 0x1b, 0x80, 0x00, 0x00, 0x00, 0x00, 0x0e,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xc1, 0x00, 0x0d, 0x80,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x66, 0x80, 0x05, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x38, 0x80, 0x03, 0x80, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xf0,
0x7f, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x80, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x80};

View File

@ -1,450 +0,0 @@
Table of Contents
NAME
wnb - WordNet window-based browser interface
SYNOPSIS
wnb
DESCRIPTION
wnb() provides a window-based interface for browsing the WordNet
database, allowing synsets and relations to be displayed as formatted
text. For each search word, different searches are available based on
syntactic category and information available in the database.
wnb is written in Tcl/Tk, which is available for Unix and Windows
platforms. This allows the same code to work on all supported WordNet
platforms without modification.
WNB WINDOWS
wnb() was developed with the philosophy that only those searches and
buttons that are applicable at the current time are displayed. As a
result, the appearance of the interface changes as it is used. Use the
standard windowing system mouse functions to open and close the WordNet
Browser Window, move the window, and change its size.
The WordNet Browser Window contains the following areas, from top to
bottom:
Menubar
A menubar runs along the top of the browser window with pulldown
menus and buttons entitled File, History, Options, and Help.
Search Word Entry
Below the Menubar is a line for entering the search word. A search
word can be a single word, hyphenated string, or a collocation. Case
is ignored. Although only uninflected forms of words are usually
stored in WordNet, users may search for inflected forms. WordNet's
morphological processor finds the base form automatically.
Search Selection
Below the Search Word Entry line is an area for selecting the search
type and senses to search. Until a search word is entered this area
is blank. After a search word is entered, buttons appear
corresponding to each syntactic category (Noun , Verb , Adjective ,
Adverb ) in which the search string is defined in WordNet.
At the right edge of the Search Selection line is a box for entering
sense numbers. When this box is empty, search results for all senses of
the search word that match the search type are displayed. The search may
be restricted to one or more specific senses by entering a comma or
space separated list of sense numbers in the Senses box. These sense
numbers remain in effect until either the user changes or deletes them,
or a new search word is entered.
Results Window
Most of the browser window consists of a large text buffer for
displaying the results of WordNet searches. Horizontal and vertical
scroll bars are present for scrolling through the output.
Status Line
A status line is at the bottom of the browser window. When search
results are displayed in the Results Window, this status line
reflects the type of search selected. When there is no search word
entered, you are prompted to "Enter search word and press return."
If the search word entered is not in WordNet, the message "Sorry, no
matches found." is displayed.
SEARCHING THE DATABASE
The WordNet browser navigates through WordNet in two steps. First a
search word is entered and an overview of all the senses of the word in
all syntactic categories is displayed in the Results Window. The senses
are grouped by syntactic category, and each synset is annotated as
described above with synset_offset , lex_filename , and sense_number as
dictated by the advanced search options set. The overview search also
indicates how many of the senses in each syntactic category are
represented in the tagged texts. This is a way for the user to determine
whether a sense's sense number is based on semantic tagging data, or was
arbitrarily assigned. For each sense that has appeared in such texts,
the number of semantic tags to that sense are indicated in parentheses
after the sense number.
Then, within a syntactic category, a specific search is selected. The
desired search is performed and the search results are displayed in the
Results Window. Additional searches on the same word can be performed,
or a new search word can be entered.
To enter a search word, click the mouse in the horizontal box labeled
Search Word , type a single word, hyphenated string, or collocation and
press RETURN.
wnb() responds by making a set of Part of Speech buttons appear in the
Search Selection line. Each button corresponds to a syntactic category
in which the search string is defined in WordNet. At the same time, an
Overview of the synsets for all senses of the search word is displayed
in the Results Window. The Overview includes the gloss for each synset
and also indicates which of the senses have appeared in the semantically
tagged texts. For each sense that has appeared in such texts, the number
of semantic tags to that sense are indicated in parentheses after the
sense number.
The pulldown menus in the Search Selection line list all of the WordNet
searches that can be performed for the search word in that part of
speech. To select a search, highlight it by dragging the mouse to it,
and release the mouse while it is highlighted. Drag the mouse outside of
the pulldown list and release to hide the menu without making a
selection. Dragging the mouse across the Part of Speech buttons displays
the available searches for each syntactic category.
To restrict a search to one or more senses within a syntactic category,
enter a comma or space separated list of sense numbers in the Senses box
before selecting a search.
After a search is selected, wnb() performs the search on the WordNet
database and displays the formatted results in the Results Window.
Whenever search results are displayed, a button entitled Redisplay
Overview is present at the right edge of the Search Word Entry line.
Clicking on this button redisplays the Overview of all synsets for the
search word in the Results Window.
Changing the Search Word
A new search word can be entered at any time by moving to the Search
Word Entry box, if necessary highlighting it by clicking, erasing the
old string, typing a new one and pressing RETURN. The Senses box is
cleared if necessary, the Part of Speech buttons applicable to the new
search word appear, and the Overview for the new search word is displayed.
The middle mouse button can also be used to select a new search word by
placing the mouse over any word in the Results Window and clicking. The
selected word will replace the text in the Search Word Entry box, and
the overview for that word will automatically be displayed.
To select a new search string collocation from text in the Results
Window, highlight the text with the mouse and press CONTROL-S.
Interrupting a Search
When a search is in progress the message "Searching...(press escape to
abort)" is displayed in the Status Line. Note that most searches return
very quickly, so this message isn't noticeable. As indicated, pressing
the ESCAPE key will interrupt the search. The results of the search
obtained before the time the search was interrupted are displayed in the
Results Window.
MENUS
File Menu
Find keywords by substring
Display a popup window for specifying a search of WordNet for
words or collocations that contain a specific substring. If a
search word is currently entered in the Search Word box, it is
used as the substring to search for by default. The Substring
Search Window contains a box for entering a substring, a
pulldown menu to its right for specifying the part of speech to
search, a large area for displaying the search results, and
action buttons at the bottom entitled Search, Save, Print and
Dismiss.
Once a substring is entered and a part of speech selected, clicking
on the Search button causes a search to be done for all words and
collocations in WordNet, in that syntactic category, that contain
the substring according to the following criteria:
1. The substring can appear at the beginning or end of a word,
hyphenated string or collocation.
2. The substring can appear in the middle of a hyphenated string or
collocation, but only delimited on both sides by spaces or hyphens.
The search results are displayed in the large buffer. Clicking on an
item from the search results list causes wnb() to automatically
enter that word in the Search Word box of the WordNet Browser Window
and perform the Overview search.
Clicking the Save button generates a popup dialog for specifying a
filename to save the substring search results to. Clicking the Print
button generates a popup dialog in which a print command can be
specified.
Selecting Dismiss closes the Substring Search Window.
Save current display
Display a popup dialog for specifying a filename to save the
current Results Window contents to.
Print current display
Display a popup dialog in which to specify a print command to
which the current Results Window contents can be piped. Note -
this option does not exist in the Windows version.
Clear current display
Clear the Search Word and Senses boxes, and Results Window.
Exit
Does what you would expect.
History
This pulldown menu contains a list of the last searches performed.
Selecting an item from this list performs that search again. The maximum
number of searches stored in the list can be adjusted from the Options
menu. The default is 10.
Options
Show help with each search
When this checkbox is selected search results are preceded by
some explanatory text about the type of search selected. This is
off by default.
Show descriptive gloss
When this checkbox is selected, synset glosses are displayed in
all search results. This is set by default. Note that glosses
are always displayed in the Overview.
Wrap Lines
When this checkbox is selected, lines in the Results Window that
are wider than the window are automatically wrapped. This is set
by default. If not selected, a horizontal scroll bar is present
if any lines are longer than the width of the window.
Set advanced search options...
Selecting this item displays a popup window for setting the
following search options: Lexical file information; Synset
location in database file; Sense number . Choices for each are:
Don't show (default)
Show with searches
Show with searches and overview
When lexical file information is shown, the name of the
lexicographer file is printed before each synset, enclosed in angle
brackets (< ... > ). When both lexical file information and synset
location information are displayed, the synset location information
appears first. If within one lexicographer file more than one sense
of a word is entered, an integer lex_id is appended onto all but one
of the word's instances to uniquely identify it. In each synset,
each word having a non-zero lex_id is printed with the lex_id value
printed immediately following the word. If both lexicographer
information and sense numbers are displayed, lex_id s, if present,
precede sense numbers.
When synset location is shown, the byte offset of the synset in the
database "data" file corresponding to the syntactic category of the
synset is printed before each synset, enclosed in curly braces
({ ... } ). When both lexical file information and synset location
information are displayed, the synset location information appears
first.
When sense numbers are shown, the sense number of each word in each
synset is printed immediately after the word, and is preceded by a
number sign (# ).
Set maximum history length...
Display a popup dialog in which the maximum number of previous
searches to be kept on the History list can be set.
Set font...
Display a popup window for setting the font (typeface) and font
size to use for the Results Window. Choices for typeface are:
Courier, Helvetica, and Times (default). Font size can be
small, medium (default), or large.
Save current options as default
Save the currently set options. Next time the browser is
started, these options will be used as the user defaults.
Help
Help on using the WordNet browser
Display this manual page.
Help on WordNet terminology
Display the wngloss(7WN) manual page.
Display the WordNet license
Display the WordNet copyright notice and license agreement.
About the WordNet browser
Information about this application.
SHORTCUTS
Clicking on any word in the Results Window while holding down the SHIFT
key on the keyboard causes the browser to replace Search Word with the
word and display its Overview and available searches. Clicking on any
word in the Results Window with the middle mouse button does the same
thing.
Pressing the CONTROL-S keys causes the browser to do as above on the
text that is currently highlighted. Under Unix, this will work even if
the highlighted text is in another window. This works on hyphenated
strings and collocations, as well as individual words.
Pressing the CONTROL-G keys displays the Substring Search Window.
SEARCH RESULTS
The results of a search of the WordNet database are displayed in the
Results Window. Horizontal and vertical scroll bars are present for
scrolling through the search results.
All searches other than the Overview list all senses matching the search
results in the following general format. Items enclosed in italicized
square brackets ([ ... ] ) may not be present.
If a search cannot be performed on some senses of searchstr , the search
results are headed by a string of the form: X of Y senses of searchstr
One line listing the number of senses matching the search selected.
Each sense matching the search selected displayed as follows:
Sense n
[{synset_offset}]
[<lex_filename>] word1[#sense_number][, word2...]
Where n is the sense number of the search word, synset_offset is the
byte offset of the synset in the data.pos file corresponding to the
syntactic category, lex_filename is the name of the lexicographer
file that the synset comes from, word1 is the first word in the
synset (note that this is not necessarily the search word) and
sense_number is the WordNet sense number assigned to the preceding
word. synset_offset , lex_filename , and sense_number are generated
if the appropriate Options are specified.
The synsets matching the search selected are printed below each
sense's synset output described above. Each line of output is
preceded by a marker (usually => ), then a synset, formatted as
described above. If a search traverses more than one level of the tree,
then successive lines are indented by spaces corresponding to its
level in the hierarchy. Glosses are displayed in parentheses at the
end of each synset if the appropriate Option is set. Each synset is
printed on one line.
Senses are ordered from most to least frequently used, with the most
common sense numbered 1 . Frequency of use is determined by the
number of times a sense is tagged in the various semantic
concordance texts. Senses that are not semantically tagged follow
the ordered senses. Note that this ordering is only an estimate
based on usage in a small corpus.
Verb senses can be grouped by similarity of meaning, rather than
ordered by frequency of use. When the "Synonyms, grouped by
similarity of meaning" search is selected, senses that are close in
meaning are printed together, with a line of dashes indicating the
end of a group. See wngroups(7WN) for a discussion of how senses
are grouped.
The output of the "Derivationally Related Forms" search shows word
forms that are morphologically related to searchstr . Each word form
pointed to from searchstr is displayed, preceded by RELATED TO-> and
the syntactic category of the link, followed, on the next line, by
its synset. Printed after the word form is # n where n indicates the
WordNet sense number of the term pointed to.
The "Domain" and "Domain Terms" searches show the domain that a
synset has been classified in and, conversely, all of the terms that
have been assigned to a specific domain. A domain is either a TOPIC,
REGION or USAGE, as reflected in the specific pointer character
stored in the database, and displayed in the output. A Domain search
on a term shows the domain, if any, that each synset containing
searchstr has been classified in. The output display shows the
domain type (TOPIC, REGION or USAGE ), followed by the syntactic
category of the domain synset and the terms in the synset. Each term
is followed by # n where n indicates the WordNet sense number of the
term. The converse search, Domain Terms, shows all of the synsets
that have been placed into the domain searchstr , with analogous
markers.
When the "Sentence Frames" search is specified, sample illustrative
sentences and generic sentence frames are displayed. If a sample
sentence is found, the base form of the search word is substituted
into the sentence, and it is printed below the synset, preceded with
the EX: marker. When no sample sentences are found, the generic
sentence frames are displayed. Sentence frames that are acceptable
for all words in a synset are preceded by the marker *> . If a frame
is acceptable for the search word only, it is preceded by the marker
=> .
Search results for adjectives are slightly different from those for
other parts of speech. When an adjective is printed, its direct
antonym, if it has one, is also printed in parentheses. When the
search word is in a head synset, all of the head synset's satellites
are also displayed. The position of an adjective in relation to the
noun may be restricted to the prenominal , postnominal or
predicative position. Where present, these restrictions are noted in
parentheses.
When an adjective is a participle of a verb, the output indicates
the verb and displays its synset.
When an adverb is derived from an adjective, the specific adjectival
sense on which it is based is indicated.
The morphological transformations performed by the search code may
result in more than one word to search for. For example, saw is both
a verb in its own right and the past tense of the verb see. When
there is more than one word to search for, wnb() automatically
performs the requested search on all of the strings and returns the
results grouped by word.
DIAGNOSTICS
If the WordNet database files cannot be opened, error messages are
displayed. This is usually corrected by setting the environment
variables described below to the proper location of the WordNet database
for your installation.
ENVIRONMENT VARIABLES (UNIX)
WNHOME
Base directory for WordNet. Default is /usr/local/WordNet-2.1.
WNSEARCHDIR
Directory in which the WordNet database has been installed. Default
is WNHOME/dict.
REGISTRY (WINDOWS)
HKEY_LOCAL_MACHINE\SOFTWARE\WordNet\2.1\WNHome
Base directory for WordNet. Default is C:\Program Files\WordNet\2.1 .
HKEY_CURRENT_USER\SOFTWARE\WordNet\2.1\wnres
User's default browser options.
FILES
index.pos
database index files
data.pos
database data files
*.vrb
files of sentences illustrating the use of verbs
pos.exc
morphology exception lists
SEE ALSO
wnintro(1WN) <wnintro.1WN.html>, wn(1WN) <wn.1WN.html>,
wnintro(3WN) <wnintro.3WN.html>, lexnames(5WN) <lexnames.5WN.html>,
senseidx(5WN) <senseidx.5WN.html>, wndb(5WN) <wndb.5WN.html>,
wninput(5WN) <wninput.5WN.html>, morphy(7WN) <morphy.7WN.html>,
wngloss(7WN) <wngloss.7WN.html>, wngroups(7WN) <wngroups.7WN.html>.
BUGS
Please report bugs to wordnet@princeton.edu.

View File

@ -1,197 +0,0 @@
Table of Contents
NAME
wngloss - glossary of terms used in WordNet system
DESCRIPTION
The WordNet Reference Manual consists of Unix-style manual pages divided
into sections as follows:
Section Description
1 WordNet User Commands
3 WordNet Library Functions
5 WordNet File Formats
7 Miscellaneous Information about WordNet
System Description
The WordNet system consists of lexicographer files, code to convert
these files into a database, and search routines and interfaces that
display information from the database. The lexicographer files organize
nouns, verbs, adjectives and adverbs into groups of synonyms, and
describe relations between synonym groups. grind(1WN) <grind.1WN.html>
converts the lexicographer files into a database that encodes the
relations between the synonym groups. The different interfaces to the
WordNet database utilize a common library of search routines to
display these relations. Note that the lexicographer files and the
grind(1WN) <grind.1WN.html> program are not generally distributed.
Database Organization
Information in WordNet is organized around logical groupings called
synsets. Each synset consists of a list of synonymous words or
collocations (e.g. "fountain pen", "take in"), and pointers that
describe the relations between this synset and other synsets. A word or
collocation may appear in more than one synset, and in more than one
part of speech. The words in a synset are grouped such that they are
interchangeable in some context.
Two kinds of relations are represented by pointers: lexical and
semantic. Lexical relations hold between semantically related word
forms; semantic relations hold between word meanings. These relations
include (but are not limited to) hypernymy/hyponymy
(superordinate/subordinate), antonymy, entailment, and meronymy/holonymy.
Nouns and verbs are organized into hierarchies based on the
hypernymy/hyponymy relation between synsets. Additional pointers are
used to indicate other relations.
Adjectives are arranged in clusters containing head synsets and
satellite synsets. Each cluster is organized around antonymous pairs
(and occasionally antonymous triplets). The antonymous pairs (or
triplets) are indicated in the head synsets of a cluster. Most head
synsets have one or more satellite synsets, each of which represents a
concept that is similar in meaning to the concept represented by the
head synset. One way to think of the adjective cluster organization is
to visualize a wheel, with a head synset as the hub and satellite
synsets as the spokes. Two or more wheels are logically connected via
antonymy, which can be thought of as an axle between the wheels.
Pertainyms are relational adjectives and do not follow the structure
just described. Pertainyms do not have antonyms; the synset for a
pertainym most often contains only one word or collocation and a lexical
pointer to the noun that the adjective is "pertaining to". Participial
adjectives have lexical pointers to the verbs that they are derived from.
Adverbs are often derived from adjectives, and sometimes have antonyms;
therefore the synset for an adverb usually contains a lexical pointer to
the adjective from which it is derived.
See wndb(5WN) <wndb.5WN.html> for a detailed description of the
database files and how the data are represented.
GLOSSARY OF TERMS
Many terms used in the WordNet Reference Manual are unique to the
WordNet system. Other general terms have specific meanings when used in
the WordNet documentation. Definitions for many of these terms are given
to help with the interpretation and understanding of the reference
manual, and in the use of the WordNet system.
In the following definitions, word is used in place of word or collocation.
adjective cluster
A group of adjective synsets that are organized around antonymous
pairs or triplets. An adjective cluster contains two or more head
synsets which represent antonymous concepts. Each head synset has
one or more satellite synsets.
attribute
A noun for which adjectives express values. The noun weight is an
attribute, for which the adjectives light and heavy express values.
base form
The base form of a word or collocation is the form to which
inflections are added.
basic synset
Syntactically, same as synset. The term is used in wninput(5WN)
<wninput.5WN.html> to help explain differences in entering synsets
in lexicographer files.
collocation
A collocation in WordNet is a string of two or more words, connected
by spaces or hyphens. Examples are: man-eating shark, blue-collar,
depend on, line of products. In the database files spaces are
represented as underscore (_) characters.
coordinate
Coordinate terms are nouns or verbs that have the same hypernym.
cross-cluster pointer
A semantic pointer from one adjective cluster to another.
derivationally related forms
Terms in different syntactic categories that have the same root form
and are semantically related.
direct antonyms
A pair of words between which there is an associative bond resulting
from their frequent co-occurrence. In adjective clusters, direct
antonyms appear only in head synsets.
domain
A topical classification to which a synset has been linked with a
CATEGORY, REGION or USAGE pointer.
domain term
A synset belonging to a topical class. A domain term is further
identified as being a CATEGORY_TERM, REGION_TERM or USAGE_TERM.
entailment
A verb X entails Y if X cannot be done unless Y is, or has been, done.
exception list
Morphological transformations for words that are not regular and
therefore cannot be processed in an algorithmic manner.
group
Verb senses that are similar in meaning and have been manually
grouped together.
gloss
Each synset contains a gloss consisting of a definition and,
optionally, example sentences.
head synset
Synset in an adjective cluster containing at least one word that has
a direct antonym.
holonym
The name of the whole of which the meronym names a part. Y is a
holonym of X if X is a part of Y.
hypernym
The generic term used to designate a whole class of specific
instances. Y is a hypernym of X if X is a (kind of) Y.
hyponym
The specific term used to designate a member of a class. X is a
hyponym of Y if X is a (kind of) Y.
indirect antonym
An adjective in a satellite synset that does not have a direct
antonym has an indirect antonym via the direct antonym of the head
synset.
instance
A proper noun that refers to a particular, unique referent (as
distinguished from nouns that refer to classes). This is a specific
form of hyponym.
lemma
Lower case ASCII text of a word as found in the WordNet database
index files. Usually the base form for a word or collocation.
lexical pointer
A lexical pointer indicates a relation between words in synsets
(word forms).
lexicographer file
Files containing the raw data for WordNet synsets, edited by
lexicographers, that are input to the grind program to generate a
WordNet database.
lexicographer id (lex id)
A decimal integer that, when appended onto lemma, uniquely
identifies a sense within a lexicographer file.
monosemous
Having only one sense in a syntactic category.
meronym
The name of a constituent part of, the substance of, or a member of
something. X is a meronym of Y if X is a part of Y.
part of speech
WordNet defines "part of speech" as either noun, verb, adjective, or
adverb. Same as syntactic category.
participial adjective
An adjective that is derived from a verb.
pertainym
A relational adjective. Adjectives that are pertainyms are usually
defined by such phrases as "of or pertaining to" and do not have
antonyms. A pertainym can point to a noun or another pertainym.
polysemous
Having more than one sense in a syntactic category.
polysemy count
Number of senses of a word in a syntactic category, in WordNet.
postnominal
A postnominal adjective occurs only immediately following the noun
that it modifies.
predicative
An adjective that can be used only in predicate positions. If X is a
predicate adjective, it can only be used in such phrases as "it is X"
and never prenominally.
prenominal
An adjective that can occur only before the noun that it modifies:
it cannot be used predicatively.
satellite synset
Synset in an adjective cluster representing a concept that is
similar in meaning to the concept represented by its head synset.
semantic concordance
A textual corpus (e.g. the Brown Corpus) and a lexicon (e.g.
WordNet) so combined that every substantive word in the text is
linked to its appropriate sense in the lexicon via a semantic tag.
semantic tag
A pointer from a word in a text file to a specific sense of that
word in the WordNet database. A semantic tag in a semantic
concordance is represented by a sense key.
semantic pointer
A semantic pointer indicates a relation between synsets (concepts).
sense
A meaning of a word in WordNet. Each sense of a word is in a
different synset.
sense key
Information necessary to find a sense in the WordNet database. A
sense key combines a lemma field and codes for the synset type,
lexicographer id, lexicographer file number, and information about a
satellite's head synset, if required. See senseidx(5WN)
<senseidx.5WN.html> for a description of the format of a sense key.
subordinate
Same as hyponym.
superordinate
Same as hypernym.
synset
A synonym set; a set of words that are interchangeable in some
context without changing the truth value of the proposition in which
they are embedded.
troponym
A verb expressing a specific manner elaboration of another verb. X
is a troponym of Y if to X is to Y in some manner.
unique beginner
A noun synset with no superordinate.

View File

@ -1,77 +0,0 @@
/*
wnrtl.c - global variables used by WordNet Run Time Library
*/
#include <stdio.h>
#include "wn.h"
static char *Id = "$Id: wnrtl.c,v 1.8 2005/01/27 17:33:54 wn Rel $";
/* Search code variables and flags */
SearchResults wnresults; /* structure containing results of search */
int fnflag = 0; /* if set, print lex filename after sense */
int dflag = 1; /* if set, print definitional glosses */
int saflag = 1; /* if set, print SEE ALSO pointers */
int fileinfoflag = 0; /* if set, print lex file info on synsets */
int frflag = 0; /* if set, print verb frames */
int abortsearch = 0; /* if set, stop search algorithm */
int offsetflag = 0; /* if set, print byte offset of each synset */
int wnsnsflag = 0; /* if set, print WN sense # for each word */
/* File pointers for database files */
int OpenDB = 0; /* if non-zero, database files are open */
FILE *datafps[NUMPARTS + 1] = { NULL, NULL, NULL, NULL, NULL } ,
*indexfps[NUMPARTS + 1] = { NULL, NULL, NULL, NULL, NULL } ,
*sensefp = NULL,
*cntlistfp = NULL,
*keyindexfp = NULL,
*revkeyindexfp = NULL,
*vsentfilefp = NULL, *vidxfilefp = NULL;
/* Method for interface to check for events while search is running */
void (*interface_doevents_func)(void) = NULL;
/* callback function for interruptable searches */
/* in single-threaded interfaces */
/* General error message handler - can be defined by interface.
Default function provided in library returns -1 */
int default_display_message(char *);
int (*display_message)(char *) = default_display_message;
/*
Revision log:
$Log: wnrtl.c,v $
Revision 1.8 2005/01/27 17:33:54 wn
cleaned up includes
Revision 1.7 2005/01/27 16:31:17 wn
removed cousinfp for 1.6
Revision 1.6 2002/03/22 20:29:58 wn
added revkeyindexfp
Revision 1.5 2001/10/11 18:03:02 wn
initialize keyindexfp
Revision 1.4 2001/03/27 18:48:15 wn
added cntlistfp
Revision 1.3 2001/03/13 17:45:48 wn
*** empty log message ***
Revision 1.2 2000/08/14 16:05:06 wn
added tcflag
Revision 1.1 1997/09/02 16:31:18 wn
Initial revision
*/

View File

@ -1,732 +0,0 @@
/*
wnutil.c - utility functions used by WordNet code
*/
#ifdef _WINDOWS
#include <windows.h>
#include <windowsx.h>
#endif
#include <stdio.h>
#include <ctype.h>
#ifdef __unix__
#ifndef __MACH__
#include <malloc.h>
#endif
#endif
#include <assert.h>
#include <string.h>
#include <stdlib.h>
#ifdef HAVE_CONFIG_H
#include "config.h"
#endif
#include "wn.h"
static int do_init();
static char msgbuf[256]; /* buffer for constructing error messages */
/* used by the strstr wrapper functions */
static char *strstr_word;
static char *strstr_stringstart;
static char *strstr_stringcurrent;
/* Initialization functions */
static void closefps();
int wninit(void)
{
static int done = 0;
static int openerr = 0;
char *env;
if (!done) {
if (env = getenv("WNDBVERSION")) {
wnrelease = strdup(env); /* set release */
assert(wnrelease);
}
openerr = do_init();
if (!openerr) {
done = 1;
OpenDB = 1;
openerr = morphinit();
}
}
return(openerr);
}
int re_wninit(void)
{
int openerr;
char *env;
closefps();
if (env = getenv("WNDBVERSION")) {
wnrelease = strdup(env); /* set release */
assert(wnrelease);
}
openerr = do_init();
if (!openerr) {
OpenDB = 1;
openerr = re_morphinit();
}
return(openerr);
}
static void closefps(void)
{
int i;
if (OpenDB) {
for (i = 1; i < NUMPARTS + 1; i++) {
if (datafps[i] != NULL)
fclose(datafps[i]); datafps[i] = NULL;
if (indexfps[i] != NULL)
fclose(indexfps[i]); indexfps[i] = NULL;
}
if (sensefp != NULL) {
fclose(sensefp); sensefp = NULL;
}
if (cntlistfp != NULL) {
fclose(cntlistfp); cntlistfp = NULL;
}
if (keyindexfp != NULL) {
fclose(keyindexfp); keyindexfp = NULL;
}
if (vsentfilefp != NULL) {
fclose(vsentfilefp); vsentfilefp = NULL;
}
if (vidxfilefp != NULL) {
fclose(vidxfilefp); vidxfilefp = NULL;
}
OpenDB = 0;
}
}
static int do_init(void)
{
int i, openerr;
char searchdir[256], tmpbuf[256];
#ifdef _WINDOWS
HKEY hkey;
DWORD dwType, dwSize;
#else
char *env;
#endif
openerr = 0;
/* Find base directory for database. If set, use WNSEARCHDIR.
If not set, check for WNHOME/dict, otherwise use DEFAULTPATH. */
#ifdef _WINDOWS
if (RegOpenKeyEx(HKEY_LOCAL_MACHINE, TEXT("Software\\WordNet\\3.0"),
0, KEY_READ, &hkey) == ERROR_SUCCESS) {
dwSize = sizeof(searchdir);
RegQueryValueEx(hkey, TEXT("WNHome"),
NULL, &dwType, searchdir, &dwSize);
RegCloseKey(hkey);
strcat(searchdir, DICTDIR);
} else if (RegOpenKeyEx(HKEY_CURRENT_USER, TEXT("Software\\WordNet\\3.0"),
0, KEY_READ, &hkey) == ERROR_SUCCESS) {
dwSize = sizeof(searchdir);
RegQueryValueEx(hkey, TEXT("WNHome"),
NULL, &dwType, searchdir, &dwSize);
RegCloseKey(hkey);
strcat(searchdir, DICTDIR);
} else
sprintf(searchdir, DEFAULTPATH);
#else
if ((env = getenv("WNSEARCHDIR")) != NULL)
strcpy(searchdir, env);
else if ((env = getenv("WNHOME")) != NULL)
sprintf(searchdir, "%s%s", env, DICTDIR);
else
strcpy(searchdir, DEFAULTPATH);
#endif
for (i = 1; i < NUMPARTS + 1; i++) {
sprintf(tmpbuf, DATAFILE, searchdir, partnames[i]);
if((datafps[i] = fopen(tmpbuf, "r")) == NULL) {
sprintf(msgbuf,
"WordNet library error: Can't open datafile(%s)\n",
tmpbuf);
display_message(msgbuf);
openerr = -1;
}
sprintf(tmpbuf, INDEXFILE, searchdir, partnames[i]);
if((indexfps[i] = fopen(tmpbuf, "r")) == NULL) {
sprintf(msgbuf,
"WordNet library error: Can't open indexfile(%s)\n",
tmpbuf);
display_message(msgbuf);
openerr = -1;
}
}
/* This file isn't used by the library and doesn't have to
be present. No error is reported if the open fails. */
sprintf(tmpbuf, SENSEIDXFILE, searchdir);
sensefp = fopen(tmpbuf, "r");
/* If this file isn't present, the runtime code will skip printing out
the number of times each sense was tagged. */
sprintf(tmpbuf, CNTLISTFILE, searchdir);
cntlistfp = fopen(tmpbuf, "r");
/* This file doesn't have to be present. No error is reported if the
open fails. */
sprintf(tmpbuf, KEYIDXFILE, searchdir);
keyindexfp = fopen(tmpbuf, "r");
sprintf(tmpbuf, REVKEYIDXFILE, searchdir);
revkeyindexfp = fopen(tmpbuf, "r");
sprintf(tmpbuf, VRBSENTFILE, searchdir);
if ((vsentfilefp = fopen(tmpbuf, "r")) == NULL) {
sprintf(msgbuf,
"WordNet library warning: Can't open verb example sentence file(%s)\n",
tmpbuf);
display_message(msgbuf);
}
sprintf(tmpbuf, VRBIDXFILE, searchdir);
if ((vidxfilefp = fopen(tmpbuf, "r")) == NULL) {
sprintf(msgbuf,
"WordNet library warning: Can't open verb example sentence index file(%s)\n",
tmpbuf);
display_message(msgbuf);
}
return(openerr);
}
/* Count the number of underscore or space separated words in a string. */
int cntwords(char *s, char separator)
{
register int wdcnt = 0;
while (*s) {
if (*s == separator || *s == ' ' || *s == '_') {
wdcnt++;
while (*s && (*s == separator || *s == ' ' || *s == '_'))
s++;
} else
s++;
}
return(++wdcnt);
}
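/* Illustrative example (not part of the original source): for a
 * collocation as stored in the database files, cntwords("line_of_products", '_')
 * returns 3, since the string contains three underscore-separated words. */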
/* Convert string to lower case; remove trailing adjective marker if found */
char *strtolower(char *str)
{
register char *s = str;
while(*s != '\0') {
if(*s >= 'A' && *s <= 'Z')
*s += 32;
else if(*s == '(') {
*s='\0';
break;
}
s++;
}
return(str);
}
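/* Illustrative example (not part of the original source): the conversion
 * happens in place, so the argument must be writable. Given a buffer
 * containing "Heavy(a)", strtolower() lowercases the letters and truncates
 * at the adjective marker, leaving "heavy". */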
/* Convert string passed to lower case */
char *ToLowerCase(char *str)
{
register char *s = str;
while(*s != '\0') {
if(*s >= 'A' && *s <= 'Z')
*s += 32;
s++;
}
return(str);
}
/* Replace all occurrences of 'from' with 'to' in 'str' */
char *strsubst(char *str, char from, char to)
{
register char *p;
for (p = str; *p != 0; ++p)
if (*p == from)
*p = to;
return str;
}
/* Return pointer code for pointer type character passed. */
int getptrtype(char *ptrstr)
{
register int i;
for(i = 1; i <= MAXPTR; i++) {
if(!strcmp(ptrstr, ptrtyp[i]))
return(i);
}
return(0);
}
/* Return part of speech code for string passed */
int getpos(char *s)
{
switch (*s) {
case 'n':
return(NOUN);
case 'a':
case 's':
return(ADJ);
case 'v':
return(VERB);
case 'r':
return(ADV);
default:
sprintf(msgbuf,
"WordNet library error: unknown part of speech %s\n", s);
display_message(msgbuf);
exit(-1);
}
}
/* Return synset type code for string passed. */
int getsstype(char *s)
{
switch (*s) {
case 'n':
return(NOUN);
case 'a':
return(ADJ);
case 'v':
return(VERB);
case 's':
return(SATELLITE);
case 'r':
return(ADV);
default:
sprintf(msgbuf, "WordNet library error: Unknown synset type %s\n", s);
display_message(msgbuf);
exit(-1);
}
}
/* Pass in string for POS, return corresponding integer value */
int StrToPos(char *str)
{
if (!strcmp(str, "noun"))
return(NOUN);
else if (!strcmp(str, "verb"))
return(VERB);
else if (!strcmp(str, "adj"))
return(ADJ);
else if (!strcmp(str, "adv"))
return(ADV);
else {
return(-1);
}
}
#define MAX_TRIES 5
/* Find string for 'searchstr' as it is in index file */
char *GetWNStr(char *searchstr, int dbase)
{
register int i, j, k, offset = 0;
register char c;
char *underscore = NULL, *hyphen = NULL, *period = NULL;
static char strings[MAX_TRIES][WORDBUF];
ToLowerCase(searchstr);
if (!(underscore = strchr(searchstr, '_')) &&
!(hyphen = strchr(searchstr, '-')) &&
!(period = strchr(searchstr, '.')))
return (strcpy(strings[0],searchstr));
for(i = 0; i < 3; i++)
strcpy(strings[i], searchstr);
if (underscore != NULL) strsubst(strings[1], '_', '-');
if (hyphen != NULL) strsubst(strings[2], '-', '_');
for(i = j = k = 0; (c = searchstr[i]) != '\0'; i++){
if(c != '_' && c != '-') strings[3][j++] = c;
if(c != '.') strings[4][k++] = c;
}
strings[3][j] = '\0';
strings[4][k] = '\0';
for(i = 1; i < MAX_TRIES; i++)
if(strcmp(strings[0], strings[i]) == 0) strings[i][0] = '\0';
for (i = (MAX_TRIES - 1); i >= 0; i--)
if (strings[i][0] != '\0')
if (bin_search(strings[i], indexfps[dbase]) != NULL)
offset = i;
return(strings[offset]);
}
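/* Illustrative note (not part of the original source): the variants built
 * above let a search string typed with hyphens, e.g. "blue-collar", match
 * the underscore form "blue_collar" stored in the index file, and vice
 * versa, without the caller having to normalize the input. */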
/* Return synset for sense key passed. */
SynsetPtr GetSynsetForSense(char *sensekey)
{
long offset;
/* Pass in sense key and return parsed sysnet structure */
if ((offset = GetDataOffset(sensekey)))
return(read_synset(GetPOS(sensekey),
offset,
GetWORD(sensekey)));
else
return(NULL);
}
/* Find offset of sense key in data file */
long GetDataOffset(char *sensekey)
{
char *line;
/* Pass in encoded sense string, return byte offset of corresponding
synset in data file. */
if (sensefp == NULL) {
display_message("WordNet library error: Sense index file not open\n");
return(0L);
}
line = bin_search(sensekey, sensefp);
if (line) {
while (*line++ != ' ');
return(atol(line));
} else
return(0L);
}
/* Find polysemy count for sense key passed. */
int GetPolyCount(char *sensekey)
{
IndexPtr idx;
int sense_cnt = 0;
/* Pass in encoded sense string and return polysemy count
for word in corresponding POS */
idx = index_lookup(GetWORD(sensekey), GetPOS(sensekey));
if (idx) {
sense_cnt = idx->sense_cnt;
free_index(idx);
}
return(sense_cnt);
}
/* Return word part of sense key */
char *GetWORD(char *sensekey)
{
static char word[100];
int i = 0;
/* Pass in encoded sense string and return WORD */
while ((word[i++] = *sensekey++) != '%');
word[i - 1] = '\0';
return(word);
}
/* Return POS code for sense key passed. */
int GetPOS(char *sensekey)
{
int pos;
/* Pass in encoded sense string and return POS */
while (*sensekey++ != '%'); /* skip over WORD */
sscanf(sensekey, "%1d", &pos);
return(pos == SATELLITE ? ADJ : pos);
}
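/* Illustrative sketch (not part of the original source): parsing a
 * hypothetical sense key of the shape produced by WNSnsToStr() below,
 * e.g. "dog%1:05:00::". GetWORD() returns the lemma before '%' and
 * GetPOS() reads the one-digit synset type that follows it. */
static void example_parse_sensekey(void)
{
    char key[] = "dog%1:05:00::";           /* hypothetical sense key */
    printf("lemma: %s\n", GetWORD(key));    /* "dog" */
    printf("pos:   %d\n", GetPOS(key));     /* 1 == NOUN */
}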
/* Reconstruct synset from synset pointer and return ptr to buffer */
char *FmtSynset(SynsetPtr synptr, int defn)
{
int i;
static char synset[SMLINEBUF];
synset[0] = '\0';
if (fileinfoflag)
sprintf(synset, "<%s> ", lexfiles[synptr->fnum]);
strcat(synset, "{ ");
for (i = 0; i < (synptr->wcount - 1); i++)
sprintf(synset + strlen(synset), "%s, ", synptr->words[i]);
strcat(synset, synptr->words[i]);
if (defn && synptr->defn)
sprintf(synset + strlen(synset), " (%s) ", synptr->defn);
strcat(synset, " }");
return(synset);
}
/* Convert WordNet sense number passed of IndexPtr entry to sense key. */
char *WNSnsToStr(IndexPtr idx, int sense)
{
SynsetPtr sptr, adjss;
char sensekey[512], lowerword[256];
int j, sstype, pos;
pos = getpos(idx->pos);
sptr = read_synset(pos, idx->offset[sense - 1], "");
if ((sstype = getsstype(sptr->pos)) == SATELLITE) {
for (j = 0; j < sptr->ptrcount; j++) {
if (sptr->ptrtyp[j] == SIMPTR) {
adjss = read_synset(sptr->ppos[j],sptr->ptroff[j],"");
sptr->headword = malloc (strlen(adjss->words[0]) + 1);
assert(sptr->headword);
strcpy(sptr->headword, adjss->words[0]);
strtolower(sptr->headword);
sptr->headsense = adjss->lexid[0];
free_synset(adjss);
break;
}
}
}
for (j = 0; j < sptr->wcount; j++) {
strcpy(lowerword, sptr->words[j]);
strtolower(lowerword);
if(!strcmp(lowerword, idx->wd))
break;
}
if (j == sptr->wcount) {
free_synset(sptr);
return(NULL);
}
if (sstype == SATELLITE)
sprintf(sensekey,"%s%%%-1.1d:%-2.2d:%-2.2d:%s:%-2.2d",
idx->wd, SATELLITE, sptr->fnum,
sptr->lexid[j], sptr->headword,sptr->headsense);
else
sprintf(sensekey,"%s%%%-1.1d:%-2.2d:%-2.2d::",
idx->wd, pos, sptr->fnum, sptr->lexid[j]);
free_synset(sptr);
return(strdup(sensekey));
}
/* Search for string and/or baseform of word in database and return
index structure for word if found in database. */
IndexPtr GetValidIndexPointer(char *word, int pos)
{
IndexPtr idx;
char *morphword;
idx = getindex(word, pos);
if (idx == NULL) {
if ((morphword = morphstr(word, pos)) != NULL)
while (morphword) {
if ((idx = getindex(morphword, pos)) != NULL) break;
morphword = morphstr(NULL, pos);
}
}
return (idx);
}
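/* Illustrative usage sketch (not part of the original source): look up a
 * word, relying on the morphstr() fallback above for inflected forms. For
 * an input such as "running" with no verb index entry of its own, the
 * fallback should resolve it to the base form "run". */
static void example_lookup(void)
{
    char word[] = "running";
    IndexPtr idx = GetValidIndexPointer(word, VERB);
    if (idx != NULL) {
        printf("found index entry for %s\n", idx->wd);
        free_index(idx);
    }
}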
/* Return sense number in database for word and lexsn passed. */
int GetWNSense(char *word, char *lexsn)
{
SnsIndexPtr snsidx;
char buf[256];
sprintf(buf, "%s%%%s", word, lexsn); /* create sensekey */
if ((snsidx = GetSenseIndex(buf)) != NULL)
return(snsidx->wnsense);
else
return(0);
}
/* Return parsed sense index entry for sense key passed. */
SnsIndexPtr GetSenseIndex(char *sensekey)
{
char *line;
char buf[256], loc[9];
SnsIndexPtr snsidx = NULL;
if ((line = bin_search(sensekey, sensefp)) != NULL) {
snsidx = (SnsIndexPtr)malloc(sizeof(SnsIndex));
assert(snsidx);
sscanf(line, "%s %s %d %d\n",
buf,
loc,
&snsidx->wnsense,
&snsidx->tag_cnt);
snsidx->sensekey = malloc(strlen(buf) + 1);
assert(snsidx->sensekey);
strcpy(snsidx->sensekey, buf);
snsidx->loc = atol(loc);
/* Parse out word from sensekey to make things easier for caller */
snsidx->word = strdup(GetWORD(snsidx->sensekey));
assert(snsidx->word);
snsidx->nextsi = NULL;
}
return(snsidx);
}
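/* Illustrative note (not part of the original source): each line of the
 * sense index file parsed above has the form
 *     sense_key  synset_offset  wn_sense_number  tag_count
 * so a hypothetical entry "dog%1:05:00:: 02084071 1 42" would give
 * snsidx->wnsense == 1, snsidx->tag_cnt == 42 and snsidx->loc == 2084071. */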
/* Return number of times sense is tagged */
int GetTagcnt(IndexPtr idx, int sense)
{
char *sensekey, *line;
char buf[256];
int snum, cnt = 0;
if (cntlistfp) {
sensekey = WNSnsToStr(idx, sense);
if ((line = bin_search(sensekey, cntlistfp)) != NULL) {
sscanf(line, "%s %d %d", buf, &snum, &cnt);
}
free(sensekey);
}
return(cnt);
}
void FreeSenseIndex(SnsIndexPtr snsidx)
{
if (snsidx) {
free(snsidx->word);
free(snsidx);
}
}
char *GetOffsetForKey(unsigned int key)
{
unsigned int rkey;
char ckey[7];
static char loc[11] = "";
char *line;
char searchdir[256], tmpbuf[256];
/* Try to open file in case wn_init wasn't called */
if (!keyindexfp) {
strcpy(searchdir, SetSearchdir());
sprintf(tmpbuf, KEYIDXFILE, searchdir);
keyindexfp = fopen(tmpbuf, "r");
}
if (keyindexfp) {
sprintf(ckey, "%6.6d", key);
if ((line = bin_search(ckey, keyindexfp)) != NULL) {
sscanf(line, "%d %s", &rkey, loc);
return(loc);
}
}
return(NULL);
}
unsigned int GetKeyForOffset(char *loc)
{
unsigned int key;
char rloc[11] = "";
char *line;
char searchdir[256], tmpbuf[256];
/* Try to open file in case wn_init wasn't called */
if (!revkeyindexfp) {
strcpy(searchdir, SetSearchdir());
sprintf(tmpbuf, REVKEYIDXFILE, searchdir);
revkeyindexfp = fopen(tmpbuf, "r");
}
if (revkeyindexfp) {
if ((line = bin_search(loc, revkeyindexfp)) != NULL) {
sscanf(line, "%s %d", rloc, &key );
return(key);
}
}
return(0);
}
char *SetSearchdir()
{
static char searchdir[256];
char *env;
/* Find base directory for database. If set, use WNSEARCHDIR.
If not set, check for WNHOME/dict, otherwise use DEFAULTPATH. */
if ((env = getenv("WNSEARCHDIR")) != NULL)
strcpy(searchdir, env);
else if ((env = getenv("WNHOME")) != NULL)
sprintf(searchdir, "%s%s", env, DICTDIR);
else
strcpy(searchdir, DEFAULTPATH);
return(searchdir);
}
int default_display_message(char *msg)
{
return(-1);
}
/*
** Wrapper functions for strstr that allow you to retrieve each
** occurrence of a word within a longer string, not just the first.
**
** strstr_init is called with the same arguments as normal strstr,
** but does not return any value.
**
** strstr_getnext returns the position offset (not a pointer, as does
** normal strstr) of the next occurrence, or -1 if none remain.
*/
void strstr_init (char *string, char *word) {
strstr_word = word;
strstr_stringstart = string;
strstr_stringcurrent = string;
}
int strstr_getnext (void) {
char *loc = strstr (strstr_stringcurrent, strstr_word);
if (loc == NULL) return -1;
strstr_stringcurrent = loc + 1;
return (loc - strstr_stringstart);
}
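/* Illustrative usage sketch (not part of the original source): report the
 * offset of every occurrence of a word using the wrappers above. */
static void example_strstr_usage(void)
{
    int off;
    strstr_init("concatenate category", "cat");
    while ((off = strstr_getnext()) != -1)
        printf("\"cat\" found at offset %d\n", off);
}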

View File

@ -1,360 +0,0 @@
#! /bin/sh
# Common stub for a few missing GNU programs while installing.
scriptversion=2003-09-02.23
# Copyright (C) 1996, 1997, 1999, 2000, 2002, 2003
# Free Software Foundation, Inc.
# Originally by Fran,cois Pinard <pinard@iro.umontreal.ca>, 1996.
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2, or (at your option)
# any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA
# 02111-1307, USA.
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program that contains a
# configuration script generated by Autoconf, you may include it under
# the same distribution terms that you use for the rest of that program.
if test $# -eq 0; then
echo 1>&2 "Try \`$0 --help' for more information"
exit 1
fi
run=:
# In the cases where this matters, `missing' is being run in the
# srcdir already.
if test -f configure.ac; then
configure_ac=configure.ac
else
configure_ac=configure.in
fi
msg="missing on your system"
case "$1" in
--run)
# Try to run requested program, and just exit if it succeeds.
run=
shift
"$@" && exit 0
# Exit code 63 means version mismatch. This often happens
# when the user tries to use an ancient version of a tool on
# a file that requires a minimum version. In this case
# we should proceed as if the program had been absent, or
# if --run hadn't been passed.
if test $? = 63; then
run=:
msg="probably too old"
fi
;;
esac
# If it does not exist, or fails to run (possibly an outdated version),
# try to emulate it.
case "$1" in
-h|--h|--he|--hel|--help)
echo "\
$0 [OPTION]... PROGRAM [ARGUMENT]...
Handle \`PROGRAM [ARGUMENT]...' for when PROGRAM is missing, or return an
error status if there is no known handling for PROGRAM.
Options:
-h, --help display this help and exit
-v, --version output version information and exit
--run try to run the given command, and emulate it if it fails
Supported PROGRAM values:
aclocal touch file \`aclocal.m4'
autoconf touch file \`configure'
autoheader touch file \`config.h.in'
automake touch all \`Makefile.in' files
bison create \`y.tab.[ch]', if possible, from existing .[ch]
flex create \`lex.yy.c', if possible, from existing .c
help2man touch the output file
lex create \`lex.yy.c', if possible, from existing .c
makeinfo touch the output file
tar try tar, gnutar, gtar, then tar without non-portable flags
yacc create \`y.tab.[ch]', if possible, from existing .[ch]
Send bug reports to <bug-automake@gnu.org>."
;;
-v|--v|--ve|--ver|--vers|--versi|--versio|--version)
echo "missing $scriptversion (GNU Automake)"
;;
-*)
echo 1>&2 "$0: Unknown \`$1' option"
echo 1>&2 "Try \`$0 --help' for more information"
exit 1
;;
aclocal*)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`acinclude.m4' or \`${configure_ac}'. You might want
to install the \`Automake' and \`Perl' packages. Grab them from
any GNU archive site."
touch aclocal.m4
;;
autoconf)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`${configure_ac}'. You might want to install the
\`Autoconf' and \`GNU m4' packages. Grab them from any GNU
archive site."
touch configure
;;
autoheader)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`acconfig.h' or \`${configure_ac}'. You might want
to install the \`Autoconf' and \`GNU m4' packages. Grab them
from any GNU archive site."
files=`sed -n 's/^[ ]*A[CM]_CONFIG_HEADER(\([^)]*\)).*/\1/p' ${configure_ac}`
test -z "$files" && files="config.h"
touch_files=
for f in $files; do
case "$f" in
*:*) touch_files="$touch_files "`echo "$f" |
sed -e 's/^[^:]*://' -e 's/:.*//'`;;
*) touch_files="$touch_files $f.in";;
esac
done
touch $touch_files
;;
automake*)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified \`Makefile.am', \`acinclude.m4' or \`${configure_ac}'.
You might want to install the \`Automake' and \`Perl' packages.
Grab them from any GNU archive site."
find . -type f -name Makefile.am -print |
sed 's/\.am$/.in/' |
while read f; do touch "$f"; done
;;
autom4te)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is needed, but is $msg.
You might have modified some files without having the
proper tools for further handling them.
You can get \`$1' as part of \`Autoconf' from any GNU
archive site."
file=`echo "$*" | sed -n 's/.*--output[ =]*\([^ ]*\).*/\1/p'`
test -z "$file" && file=`echo "$*" | sed -n 's/.*-o[ ]*\([^ ]*\).*/\1/p'`
if test -f "$file"; then
touch $file
else
test -z "$file" || exec >$file
echo "#! /bin/sh"
echo "# Created by GNU Automake missing as a replacement of"
echo "# $ $@"
echo "exit 0"
chmod +x $file
exit 1
fi
;;
bison|yacc)
echo 1>&2 "\
WARNING: \`$1' $msg. You should only need it if
you modified a \`.y' file. You may need the \`Bison' package
in order for those modifications to take effect. You can get
\`Bison' from any GNU archive site."
rm -f y.tab.c y.tab.h
if [ $# -ne 1 ]; then
eval LASTARG="\${$#}"
case "$LASTARG" in
*.y)
SRCFILE=`echo "$LASTARG" | sed 's/y$/c/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" y.tab.c
fi
SRCFILE=`echo "$LASTARG" | sed 's/y$/h/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" y.tab.h
fi
;;
esac
fi
if [ ! -f y.tab.h ]; then
echo >y.tab.h
fi
if [ ! -f y.tab.c ]; then
echo 'main() { return 0; }' >y.tab.c
fi
;;
lex|flex)
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a \`.l' file. You may need the \`Flex' package
in order for those modifications to take effect. You can get
\`Flex' from any GNU archive site."
rm -f lex.yy.c
if [ $# -ne 1 ]; then
eval LASTARG="\${$#}"
case "$LASTARG" in
*.l)
SRCFILE=`echo "$LASTARG" | sed 's/l$/c/'`
if [ -f "$SRCFILE" ]; then
cp "$SRCFILE" lex.yy.c
fi
;;
esac
fi
if [ ! -f lex.yy.c ]; then
echo 'main() { return 0; }' >lex.yy.c
fi
;;
help2man)
if test -z "$run" && ($1 --version) > /dev/null 2>&1; then
# We have it, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a dependency of a manual page. You may need the
\`Help2man' package in order for those modifications to take
effect. You can get \`Help2man' from any GNU archive site."
file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
if test -z "$file"; then
file=`echo "$*" | sed -n 's/.*--output=\([^ ]*\).*/\1/p'`
fi
if [ -f "$file" ]; then
touch $file
else
test -z "$file" || exec >$file
echo ".ab help2man is required to generate this page"
exit 1
fi
;;
makeinfo)
if test -z "$run" && (makeinfo --version) > /dev/null 2>&1; then
# We have makeinfo, but it failed.
exit 1
fi
echo 1>&2 "\
WARNING: \`$1' is $msg. You should only need it if
you modified a \`.texi' or \`.texinfo' file, or any other file
indirectly affecting the aspect of the manual. The spurious
call might also be the consequence of using a buggy \`make' (AIX,
DU, IRIX). You might want to install the \`Texinfo' package or
the \`GNU make' package. Grab either from any GNU archive site."
file=`echo "$*" | sed -n 's/.*-o \([^ ]*\).*/\1/p'`
if test -z "$file"; then
file=`echo "$*" | sed 's/.* \([^ ]*\) *$/\1/'`
file=`sed -n '/^@setfilename/ { s/.* \([^ ]*\) *$/\1/; p; q; }' $file`
fi
touch $file
;;
tar)
shift
if test -n "$run"; then
echo 1>&2 "ERROR: \`tar' requires --run"
exit 1
fi
# We have already tried tar in the generic part.
# Look for gnutar/gtar before invocation to avoid ugly error
# messages.
if (gnutar --version > /dev/null 2>&1); then
gnutar "$@" && exit 0
fi
if (gtar --version > /dev/null 2>&1); then
gtar "$@" && exit 0
fi
firstarg="$1"
if shift; then
case "$firstarg" in
*o*)
firstarg=`echo "$firstarg" | sed s/o//`
tar "$firstarg" "$@" && exit 0
;;
esac
case "$firstarg" in
*h*)
firstarg=`echo "$firstarg" | sed s/h//`
tar "$firstarg" "$@" && exit 0
;;
esac
fi
echo 1>&2 "\
WARNING: I can't seem to be able to run \`tar' with the given arguments.
You may want to install GNU tar or Free paxutils, or check the
command line arguments."
exit 1
;;
*)
echo 1>&2 "\
WARNING: \`$1' is needed, and is $msg.
You might have modified some files without having the
proper tools for further handling them. Check the \`README' file,
it often tells you about the needed prerequisites for installing
this package. You may also peek at any GNU archive site, in case
some other package would contain this missing \`$1' program."
exit 1
;;
esac
exit 0
# Local variables:
# eval: (add-hook 'write-file-hooks 'time-stamp)
# time-stamp-start: "scriptversion="
# time-stamp-format: "%:y-%02m-%02d.%02H"
# time-stamp-end: "$"
# End:

View File

@ -1,12 +0,0 @@
EXTRA_DIST = wnb
bin_PROGRAMS = wn wishwn
bin_SCRIPTS = wnb
wishwn_SOURCES = tkAppInit.c stubs.c
wishwn_CPPFLAGS = $(INCLUDES)
wishwn_LDADD = $(LDADD) $(TK_LIB_SPEC) $(TCL_LIB_SPEC) $(TK_LIBS)
wn_SOURCES = wn.c
wn_CPPFLAGS = $(INCLUDES)
wn_LDADD = $(LDADD)
LDADD = -L$(top_srcdir)/lib -lWN
INCLUDES = -I$(top_srcdir) -I$(top_srcdir)/include $(TCL_INCLUDE_SPEC) $(TK_XINCLUDES) -I$(TK_PREFIX)/include

View File

@ -1,485 +0,0 @@
# Makefile.in generated by automake 1.9 from Makefile.am.
# @configure_input@
# Copyright (C) 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002,
# 2003, 2004 Free Software Foundation, Inc.
# This Makefile.in is free software; the Free Software Foundation
# gives unlimited permission to copy and/or distribute it,
# with or without modifications, as long as this notice is preserved.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY, to the extent permitted by law; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A
# PARTICULAR PURPOSE.
@SET_MAKE@
SOURCES = $(wishwn_SOURCES) $(wn_SOURCES)
srcdir = @srcdir@
top_srcdir = @top_srcdir@
VPATH = @srcdir@
pkgdatadir = $(datadir)/@PACKAGE@
pkglibdir = $(libdir)/@PACKAGE@
pkgincludedir = $(includedir)/@PACKAGE@
top_builddir = ..
am__cd = CDPATH="$${ZSH_VERSION+.}$(PATH_SEPARATOR)" && cd
INSTALL = @INSTALL@
install_sh_DATA = $(install_sh) -c -m 644
install_sh_PROGRAM = $(install_sh) -c
install_sh_SCRIPT = $(install_sh) -c
INSTALL_HEADER = $(INSTALL_DATA)
transform = $(program_transform_name)
NORMAL_INSTALL = :
PRE_INSTALL = :
POST_INSTALL = :
NORMAL_UNINSTALL = :
PRE_UNINSTALL = :
POST_UNINSTALL = :
bin_PROGRAMS = wn$(EXEEXT) wishwn$(EXEEXT)
subdir = src
DIST_COMMON = $(srcdir)/Makefile.am $(srcdir)/Makefile.in
ACLOCAL_M4 = $(top_srcdir)/aclocal.m4
am__aclocal_m4_deps = $(top_srcdir)/acinclude.m4 \
$(top_srcdir)/configure.ac
am__configure_deps = $(am__aclocal_m4_deps) $(CONFIGURE_DEPENDENCIES) \
$(ACLOCAL_M4)
mkinstalldirs = $(install_sh) -d
CONFIG_HEADER = $(top_builddir)/config.h
CONFIG_CLEAN_FILES =
am__installdirs = "$(DESTDIR)$(bindir)" "$(DESTDIR)$(bindir)"
binPROGRAMS_INSTALL = $(INSTALL_PROGRAM)
PROGRAMS = $(bin_PROGRAMS)
am_wishwn_OBJECTS = wishwn-tkAppInit.$(OBJEXT) wishwn-stubs.$(OBJEXT)
wishwn_OBJECTS = $(am_wishwn_OBJECTS)
am__DEPENDENCIES_1 =
wishwn_DEPENDENCIES = $(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1) \
$(am__DEPENDENCIES_1) $(am__DEPENDENCIES_1)
am_wn_OBJECTS = wn-wn.$(OBJEXT)
wn_OBJECTS = $(am_wn_OBJECTS)
wn_DEPENDENCIES = $(am__DEPENDENCIES_1)
binSCRIPT_INSTALL = $(INSTALL_SCRIPT)
SCRIPTS = $(bin_SCRIPTS)
DEFAULT_INCLUDES = -I. -I$(srcdir) -I$(top_builddir)
depcomp = $(SHELL) $(top_srcdir)/depcomp
am__depfiles_maybe = depfiles
COMPILE = $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(AM_CPPFLAGS) \
$(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS)
CCLD = $(CC)
LINK = $(CCLD) $(AM_CFLAGS) $(CFLAGS) $(AM_LDFLAGS) $(LDFLAGS) -o $@
SOURCES = $(wishwn_SOURCES) $(wn_SOURCES)
DIST_SOURCES = $(wishwn_SOURCES) $(wn_SOURCES)
ETAGS = etags
CTAGS = ctags
DISTFILES = $(DIST_COMMON) $(DIST_SOURCES) $(TEXINFOS) $(EXTRA_DIST)
ACLOCAL = @ACLOCAL@
AMDEP_FALSE = @AMDEP_FALSE@
AMDEP_TRUE = @AMDEP_TRUE@
AMTAR = @AMTAR@
AUTOCONF = @AUTOCONF@
AUTOHEADER = @AUTOHEADER@
AUTOMAKE = @AUTOMAKE@
AWK = @AWK@
CC = @CC@
CCDEPMODE = @CCDEPMODE@
CFLAGS = @CFLAGS@
CPP = @CPP@
CPPFLAGS = @CPPFLAGS@
CYGPATH_W = @CYGPATH_W@
DEFS = @DEFS@
DEPDIR = @DEPDIR@
ECHO_C = @ECHO_C@
ECHO_N = @ECHO_N@
ECHO_T = @ECHO_T@
EGREP = @EGREP@
EXEEXT = @EXEEXT@
INSTALL_DATA = @INSTALL_DATA@
INSTALL_PROGRAM = @INSTALL_PROGRAM@
INSTALL_SCRIPT = @INSTALL_SCRIPT@
INSTALL_STRIP_PROGRAM = @INSTALL_STRIP_PROGRAM@
LDFLAGS = @LDFLAGS@
LIBOBJS = @LIBOBJS@
LIBS = @LIBS@
LTLIBOBJS = @LTLIBOBJS@
MAKEINFO = @MAKEINFO@
OBJEXT = @OBJEXT@
PACKAGE = @PACKAGE@
PACKAGE_BUGREPORT = @PACKAGE_BUGREPORT@
PACKAGE_NAME = @PACKAGE_NAME@
PACKAGE_STRING = @PACKAGE_STRING@
PACKAGE_TARNAME = @PACKAGE_TARNAME@
PACKAGE_VERSION = @PACKAGE_VERSION@
PATH_SEPARATOR = @PATH_SEPARATOR@
RANLIB = @RANLIB@
SET_MAKE = @SET_MAKE@
SHELL = @SHELL@
STRIP = @STRIP@
TCL_INCLUDE_SPEC = @TCL_INCLUDE_SPEC@
TCL_LIB_SPEC = @TCL_LIB_SPEC@
TK_LIBS = @TK_LIBS@
TK_LIB_SPEC = @TK_LIB_SPEC@
TK_PREFIX = @TK_PREFIX@
TK_XINCLUDES = @TK_XINCLUDES@
VERSION = @VERSION@
ac_ct_CC = @ac_ct_CC@
ac_ct_RANLIB = @ac_ct_RANLIB@
ac_ct_STRIP = @ac_ct_STRIP@
ac_prefix = @ac_prefix@
am__fastdepCC_FALSE = @am__fastdepCC_FALSE@
am__fastdepCC_TRUE = @am__fastdepCC_TRUE@
am__include = @am__include@
am__leading_dot = @am__leading_dot@
am__quote = @am__quote@
am__tar = @am__tar@
am__untar = @am__untar@
bindir = @bindir@
build_alias = @build_alias@
datadir = @datadir@
exec_prefix = @exec_prefix@
host_alias = @host_alias@
includedir = @includedir@
infodir = @infodir@
install_sh = @install_sh@
libdir = @libdir@
libexecdir = @libexecdir@
localstatedir = @localstatedir@
mandir = @mandir@
mkdir_p = @mkdir_p@
oldincludedir = @oldincludedir@
prefix = @prefix@
program_transform_name = @program_transform_name@
sbindir = @sbindir@
sharedstatedir = @sharedstatedir@
sysconfdir = @sysconfdir@
target_alias = @target_alias@
EXTRA_DIST = wnb
bin_SCRIPTS = wnb
wishwn_SOURCES = tkAppInit.c stubs.c
wishwn_CPPFLAGS = $(INCLUDES)
wishwn_LDADD = $(LDADD) $(TK_LIB_SPEC) $(TCL_LIB_SPEC) $(TK_LIBS)
wn_SOURCES = wn.c
wn_CPPFLAGS = $(INCLUDES)
wn_LDADD = $(LDADD)
LDADD = -L$(top_srcdir)/lib -lWN
INCLUDES = -I$(top_srcdir) -I$(top_srcdir)/include $(TCL_INCLUDE_SPEC) $(TK_XINCLUDES) -I$(TK_PREFIX)/include
all: all-am
.SUFFIXES:
.SUFFIXES: .c .o .obj
$(srcdir)/Makefile.in: $(srcdir)/Makefile.am $(am__configure_deps)
@for dep in $?; do \
case '$(am__configure_deps)' in \
*$$dep*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh \
&& exit 0; \
exit 1;; \
esac; \
done; \
echo ' cd $(top_srcdir) && $(AUTOMAKE) --gnu src/Makefile'; \
cd $(top_srcdir) && \
$(AUTOMAKE) --gnu src/Makefile
.PRECIOUS: Makefile
Makefile: $(srcdir)/Makefile.in $(top_builddir)/config.status
@case '$?' in \
*config.status*) \
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh;; \
*) \
echo ' cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe)'; \
cd $(top_builddir) && $(SHELL) ./config.status $(subdir)/$@ $(am__depfiles_maybe);; \
esac;
$(top_builddir)/config.status: $(top_srcdir)/configure $(CONFIG_STATUS_DEPENDENCIES)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(top_srcdir)/configure: $(am__configure_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
$(ACLOCAL_M4): $(am__aclocal_m4_deps)
cd $(top_builddir) && $(MAKE) $(AM_MAKEFLAGS) am--refresh
install-binPROGRAMS: $(bin_PROGRAMS)
@$(NORMAL_INSTALL)
test -z "$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)"
@list='$(bin_PROGRAMS)'; for p in $$list; do \
p1=`echo $$p|sed 's/$(EXEEXT)$$//'`; \
if test -f $$p \
; then \
f=`echo "$$p1" | sed 's,^.*/,,;$(transform);s/$$/$(EXEEXT)/'`; \
echo " $(INSTALL_PROGRAM_ENV) $(binPROGRAMS_INSTALL) '$$p' '$(DESTDIR)$(bindir)/$$f'"; \
$(INSTALL_PROGRAM_ENV) $(binPROGRAMS_INSTALL) "$$p" "$(DESTDIR)$(bindir)/$$f" || exit 1; \
else :; fi; \
done
uninstall-binPROGRAMS:
@$(NORMAL_UNINSTALL)
@list='$(bin_PROGRAMS)'; for p in $$list; do \
f=`echo "$$p" | sed 's,^.*/,,;s/$(EXEEXT)$$//;$(transform);s/$$/$(EXEEXT)/'`; \
echo " rm -f '$(DESTDIR)$(bindir)/$$f'"; \
rm -f "$(DESTDIR)$(bindir)/$$f"; \
done
clean-binPROGRAMS:
-test -z "$(bin_PROGRAMS)" || rm -f $(bin_PROGRAMS)
wishwn$(EXEEXT): $(wishwn_OBJECTS) $(wishwn_DEPENDENCIES)
@rm -f wishwn$(EXEEXT)
$(LINK) $(wishwn_LDFLAGS) $(wishwn_OBJECTS) $(wishwn_LDADD) $(LIBS)
wn$(EXEEXT): $(wn_OBJECTS) $(wn_DEPENDENCIES)
@rm -f wn$(EXEEXT)
$(LINK) $(wn_LDFLAGS) $(wn_OBJECTS) $(wn_LDADD) $(LIBS)
install-binSCRIPTS: $(bin_SCRIPTS)
@$(NORMAL_INSTALL)
test -z "$(bindir)" || $(mkdir_p) "$(DESTDIR)$(bindir)"
@list='$(bin_SCRIPTS)'; for p in $$list; do \
if test -f "$$p"; then d=; else d="$(srcdir)/"; fi; \
if test -f $$d$$p; then \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " $(binSCRIPT_INSTALL) '$$d$$p' '$(DESTDIR)$(bindir)/$$f'"; \
$(binSCRIPT_INSTALL) "$$d$$p" "$(DESTDIR)$(bindir)/$$f"; \
else :; fi; \
done
uninstall-binSCRIPTS:
@$(NORMAL_UNINSTALL)
@list='$(bin_SCRIPTS)'; for p in $$list; do \
f=`echo "$$p" | sed 's|^.*/||;$(transform)'`; \
echo " rm -f '$(DESTDIR)$(bindir)/$$f'"; \
rm -f "$(DESTDIR)$(bindir)/$$f"; \
done
mostlyclean-compile:
-rm -f *.$(OBJEXT)
distclean-compile:
-rm -f *.tab.c
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/wishwn-stubs.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/wishwn-tkAppInit.Po@am__quote@
@AMDEP_TRUE@@am__include@ @am__quote@./$(DEPDIR)/wn-wn.Po@am__quote@
.c.o:
@am__fastdepCC_TRUE@ if $(COMPILE) -MT $@ -MD -MP -MF "$(DEPDIR)/$*.Tpo" -c -o $@ $<; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/$*.Tpo" "$(DEPDIR)/$*.Po"; else rm -f "$(DEPDIR)/$*.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c $<
.c.obj:
@am__fastdepCC_TRUE@ if $(COMPILE) -MT $@ -MD -MP -MF "$(DEPDIR)/$*.Tpo" -c -o $@ `$(CYGPATH_W) '$<'`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/$*.Tpo" "$(DEPDIR)/$*.Po"; else rm -f "$(DEPDIR)/$*.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='$<' object='$@' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(COMPILE) -c `$(CYGPATH_W) '$<'`
wishwn-tkAppInit.o: tkAppInit.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wishwn-tkAppInit.o -MD -MP -MF "$(DEPDIR)/wishwn-tkAppInit.Tpo" -c -o wishwn-tkAppInit.o `test -f 'tkAppInit.c' || echo '$(srcdir)/'`tkAppInit.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wishwn-tkAppInit.Tpo" "$(DEPDIR)/wishwn-tkAppInit.Po"; else rm -f "$(DEPDIR)/wishwn-tkAppInit.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='tkAppInit.c' object='wishwn-tkAppInit.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wishwn-tkAppInit.o `test -f 'tkAppInit.c' || echo '$(srcdir)/'`tkAppInit.c
wishwn-tkAppInit.obj: tkAppInit.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wishwn-tkAppInit.obj -MD -MP -MF "$(DEPDIR)/wishwn-tkAppInit.Tpo" -c -o wishwn-tkAppInit.obj `if test -f 'tkAppInit.c'; then $(CYGPATH_W) 'tkAppInit.c'; else $(CYGPATH_W) '$(srcdir)/tkAppInit.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wishwn-tkAppInit.Tpo" "$(DEPDIR)/wishwn-tkAppInit.Po"; else rm -f "$(DEPDIR)/wishwn-tkAppInit.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='tkAppInit.c' object='wishwn-tkAppInit.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wishwn-tkAppInit.obj `if test -f 'tkAppInit.c'; then $(CYGPATH_W) 'tkAppInit.c'; else $(CYGPATH_W) '$(srcdir)/tkAppInit.c'; fi`
wishwn-stubs.o: stubs.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wishwn-stubs.o -MD -MP -MF "$(DEPDIR)/wishwn-stubs.Tpo" -c -o wishwn-stubs.o `test -f 'stubs.c' || echo '$(srcdir)/'`stubs.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wishwn-stubs.Tpo" "$(DEPDIR)/wishwn-stubs.Po"; else rm -f "$(DEPDIR)/wishwn-stubs.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='stubs.c' object='wishwn-stubs.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wishwn-stubs.o `test -f 'stubs.c' || echo '$(srcdir)/'`stubs.c
wishwn-stubs.obj: stubs.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wishwn-stubs.obj -MD -MP -MF "$(DEPDIR)/wishwn-stubs.Tpo" -c -o wishwn-stubs.obj `if test -f 'stubs.c'; then $(CYGPATH_W) 'stubs.c'; else $(CYGPATH_W) '$(srcdir)/stubs.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wishwn-stubs.Tpo" "$(DEPDIR)/wishwn-stubs.Po"; else rm -f "$(DEPDIR)/wishwn-stubs.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='stubs.c' object='wishwn-stubs.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wishwn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wishwn-stubs.obj `if test -f 'stubs.c'; then $(CYGPATH_W) 'stubs.c'; else $(CYGPATH_W) '$(srcdir)/stubs.c'; fi`
wn-wn.o: wn.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wn-wn.o -MD -MP -MF "$(DEPDIR)/wn-wn.Tpo" -c -o wn-wn.o `test -f 'wn.c' || echo '$(srcdir)/'`wn.c; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wn-wn.Tpo" "$(DEPDIR)/wn-wn.Po"; else rm -f "$(DEPDIR)/wn-wn.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wn.c' object='wn-wn.o' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wn-wn.o `test -f 'wn.c' || echo '$(srcdir)/'`wn.c
wn-wn.obj: wn.c
@am__fastdepCC_TRUE@ if $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -MT wn-wn.obj -MD -MP -MF "$(DEPDIR)/wn-wn.Tpo" -c -o wn-wn.obj `if test -f 'wn.c'; then $(CYGPATH_W) 'wn.c'; else $(CYGPATH_W) '$(srcdir)/wn.c'; fi`; \
@am__fastdepCC_TRUE@ then mv -f "$(DEPDIR)/wn-wn.Tpo" "$(DEPDIR)/wn-wn.Po"; else rm -f "$(DEPDIR)/wn-wn.Tpo"; exit 1; fi
@AMDEP_TRUE@@am__fastdepCC_FALSE@ source='wn.c' object='wn-wn.obj' libtool=no @AMDEPBACKSLASH@
@AMDEP_TRUE@@am__fastdepCC_FALSE@ DEPDIR=$(DEPDIR) $(CCDEPMODE) $(depcomp) @AMDEPBACKSLASH@
@am__fastdepCC_FALSE@ $(CC) $(DEFS) $(DEFAULT_INCLUDES) $(INCLUDES) $(wn_CPPFLAGS) $(CPPFLAGS) $(AM_CFLAGS) $(CFLAGS) -c -o wn-wn.obj `if test -f 'wn.c'; then $(CYGPATH_W) 'wn.c'; else $(CYGPATH_W) '$(srcdir)/wn.c'; fi`
uninstall-info-am:
ID: $(HEADERS) $(SOURCES) $(LISP) $(TAGS_FILES)
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
mkid -fID $$unique
tags: TAGS
TAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
if test -z "$(ETAGS_ARGS)$$tags$$unique"; then :; else \
test -n "$$unique" || unique=$$empty_fix; \
$(ETAGS) $(ETAGSFLAGS) $(AM_ETAGSFLAGS) $(ETAGS_ARGS) \
$$tags $$unique; \
fi
ctags: CTAGS
CTAGS: $(HEADERS) $(SOURCES) $(TAGS_DEPENDENCIES) \
$(TAGS_FILES) $(LISP)
tags=; \
here=`pwd`; \
list='$(SOURCES) $(HEADERS) $(LISP) $(TAGS_FILES)'; \
unique=`for i in $$list; do \
if test -f "$$i"; then echo $$i; else echo $(srcdir)/$$i; fi; \
done | \
$(AWK) ' { files[$$0] = 1; } \
END { for (i in files) print i; }'`; \
test -z "$(CTAGS_ARGS)$$tags$$unique" \
|| $(CTAGS) $(CTAGSFLAGS) $(AM_CTAGSFLAGS) $(CTAGS_ARGS) \
$$tags $$unique
GTAGS:
here=`$(am__cd) $(top_builddir) && pwd` \
&& cd $(top_srcdir) \
&& gtags -i $(GTAGS_ARGS) $$here
distclean-tags:
-rm -f TAGS ID GTAGS GRTAGS GSYMS GPATH tags
distdir: $(DISTFILES)
@srcdirstrip=`echo "$(srcdir)" | sed 's|.|.|g'`; \
topsrcdirstrip=`echo "$(top_srcdir)" | sed 's|.|.|g'`; \
list='$(DISTFILES)'; for file in $$list; do \
case $$file in \
$(srcdir)/*) file=`echo "$$file" | sed "s|^$$srcdirstrip/||"`;; \
$(top_srcdir)/*) file=`echo "$$file" | sed "s|^$$topsrcdirstrip/|$(top_builddir)/|"`;; \
esac; \
if test -f $$file || test -d $$file; then d=.; else d=$(srcdir); fi; \
dir=`echo "$$file" | sed -e 's,/[^/]*$$,,'`; \
if test "$$dir" != "$$file" && test "$$dir" != "."; then \
dir="/$$dir"; \
$(mkdir_p) "$(distdir)$$dir"; \
else \
dir=''; \
fi; \
if test -d $$d/$$file; then \
if test -d $(srcdir)/$$file && test $$d != $(srcdir); then \
cp -pR $(srcdir)/$$file $(distdir)$$dir || exit 1; \
fi; \
cp -pR $$d/$$file $(distdir)$$dir || exit 1; \
else \
test -f $(distdir)/$$file \
|| cp -p $$d/$$file $(distdir)/$$file \
|| exit 1; \
fi; \
done
check-am: all-am
check: check-am
all-am: Makefile $(PROGRAMS) $(SCRIPTS)
installdirs:
for dir in "$(DESTDIR)$(bindir)" "$(DESTDIR)$(bindir)"; do \
test -z "$$dir" || $(mkdir_p) "$$dir"; \
done
install: install-am
install-exec: install-exec-am
install-data: install-data-am
uninstall: uninstall-am
install-am: all-am
@$(MAKE) $(AM_MAKEFLAGS) install-exec-am install-data-am
installcheck: installcheck-am
install-strip:
$(MAKE) $(AM_MAKEFLAGS) INSTALL_PROGRAM="$(INSTALL_STRIP_PROGRAM)" \
install_sh_PROGRAM="$(INSTALL_STRIP_PROGRAM)" INSTALL_STRIP_FLAG=-s \
`test -z '$(STRIP)' || \
echo "INSTALL_PROGRAM_ENV=STRIPPROG='$(STRIP)'"` install
mostlyclean-generic:
clean-generic:
distclean-generic:
-test -z "$(CONFIG_CLEAN_FILES)" || rm -f $(CONFIG_CLEAN_FILES)
maintainer-clean-generic:
@echo "This command is intended for maintainers to use"
@echo "it deletes files that may require special tools to rebuild."
clean: clean-am
clean-am: clean-binPROGRAMS clean-generic mostlyclean-am
distclean: distclean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
distclean-am: clean-am distclean-compile distclean-generic \
distclean-tags
dvi: dvi-am
dvi-am:
html: html-am
info: info-am
info-am:
install-data-am:
install-exec-am: install-binPROGRAMS install-binSCRIPTS
install-info: install-info-am
install-man:
installcheck-am:
maintainer-clean: maintainer-clean-am
-rm -rf ./$(DEPDIR)
-rm -f Makefile
maintainer-clean-am: distclean-am maintainer-clean-generic
mostlyclean: mostlyclean-am
mostlyclean-am: mostlyclean-compile mostlyclean-generic
pdf: pdf-am
pdf-am:
ps: ps-am
ps-am:
uninstall-am: uninstall-binPROGRAMS uninstall-binSCRIPTS \
uninstall-info-am
.PHONY: CTAGS GTAGS all all-am check check-am clean clean-binPROGRAMS \
clean-generic ctags distclean distclean-compile \
distclean-generic distclean-tags distdir dvi dvi-am html \
html-am info info-am install install-am install-binPROGRAMS \
install-binSCRIPTS install-data install-data-am install-exec \
install-exec-am install-info install-info-am install-man \
install-strip installcheck installcheck-am installdirs \
maintainer-clean maintainer-clean-generic mostlyclean \
mostlyclean-compile mostlyclean-generic pdf pdf-am ps ps-am \
tags uninstall uninstall-am uninstall-binPROGRAMS \
uninstall-binSCRIPTS uninstall-info-am
# Tell versions [3.59,3.63) of GNU make to not export all variables.
# Otherwise a system limit (for SysV at least) may be exceeded.
.NOEXPORT:

View File

@ -1,270 +0,0 @@
/* This file acts as a gateway between Tcl and the Wordnet C library. It
** contains stubs for all the commands added to the default Tcl and Tk set
** for this Wordnet application, as well as the routine that initializes them.
*/
#ifdef _WINDOWS
#include <windows.h>
#endif
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <tcl.h>
#include <tk.h>
#include <wn.h>
static char *Id = "$Id: stubs.c,v 1.7 2005/04/29 19:01:57 wn Exp $";
static char resultbuf[SEARCHBUF];
#ifndef HAVE_LANGINFO_CODESET
char *nl_langinfo(int item) {
static char val[4] = "Sun";
return(val);
}
#endif
/* This command (accessed by the name "findvalidsearches" in Tcl) returns
** a bitfield that describes all the available searches for a given word
** as the given part of speech. The calls to morphstr are used to extract
** the search word's base form.
*/
int wn_findvalidsearches (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
unsigned int bitfield;
static char bitfieldstr[32];
char *morph;
int pos;
if (argc != 3) {
interp -> result =
"usage: findvalidsearches searchword partofspeechnum";
return TCL_ERROR;
}
pos = atoi (argv[2]);
bitfield = is_defined (argv[1], pos);
if ((morph = morphstr (argv[1], pos)) != NULL) {
do {
bitfield |= is_defined (morph, pos);
} while ((morph = morphstr (NULL, pos)) != NULL);
}
sprintf (bitfieldstr, "%u", bitfield);
interp -> result = bitfieldstr;
return TCL_OK;
}
/* This command returns a bitfield of unsigned integer length with all bits
** zero except for the specified bit, which is one. This can be binary
** and-ed with another bitfield to check if the other bitfield has the
** specified bit set to one. This is particularly useful for interpreting
** the results of findvalidsearches. Invoked from Tcl as "bit".
*/
int wn_bit (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
unsigned int bitfield;
static char bitfieldstr[32];
int whichbit;
if (argc != 2) {
interp -> result = "usage: bit bitnum";
return TCL_ERROR;
}
whichbit = atoi (argv[1]);
bitfield = bit (whichbit);
sprintf (bitfieldstr, "%u", bitfield);
interp -> result = bitfieldstr;
return TCL_OK;
}
/* This command performs the requested search and returns the results in
** a string buffer. This is the primary purpose of the whole program.
** It is invoked from Tcl simply as "search".
*/
int wn_search (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
int pos, searchtype, sense;
char *morph;
if (argc != 5) {
interp -> result =
"usage: search searchword partofspeechnum searchtypenum sensenum";
return TCL_ERROR;
}
pos = atoi (argv[2]);
searchtype = atoi (argv[3]);
sense = atoi (argv[4]);
strcpy (resultbuf, findtheinfo (argv[1], pos, searchtype, sense));
if ((morph = morphstr (argv[1], pos)) != NULL) {
do {
strcat (resultbuf, findtheinfo (morph, pos, searchtype, sense));
} while ((morph = morphstr (NULL, pos)) != NULL);
}
interp -> result = resultbuf;
return TCL_OK;
}
/* This command, accessed in Tcl as "glosses" sets the flag that tells the
** search engine whether or not to include textual glosses in the search
** results.
*/
int wn_glosses (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 2) {
interp -> result = "usage: glosses [1 | 0]";
return TCL_ERROR;
}
dflag = atoi (argv[1]);
return TCL_OK;
}
/* This command, accessed in Tcl as "fileinfo" sets the flag that tells the
** search engine whether or not to include lex filenames in the search
** results.
*/
int wn_fileinfo (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 2) {
interp -> result = "usage: fileinfo [1 | 0]";
return TCL_ERROR;
}
fileinfoflag = atoi (argv[1]);
return TCL_OK;
}
/* This command, accessed in Tcl as "byteoffset" sets the flag that tells the
** search engine whether or not to include byte offsets into the lex files
** in the search results.
*/
int wn_byteoffset (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 2) {
interp -> result = "usage: byteoffset [1 | 0]";
return TCL_ERROR;
}
offsetflag = atoi (argv[1]);
return TCL_OK;
}
/* This command, accessed in Tcl as "senseflag" sets the flag that tells the
** search engine whether or not to report the WordNet sense for each word
** returned.
*/
int wn_senseflag (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 2) {
interp -> result = "usage: senseflag [1 | 0]";
return TCL_ERROR;
}
wnsnsflag = atoi (argv[1]);
return TCL_OK;
}
/* This command, accessed in Tcl as "contextualhelp" returns a string of
** text which describes, to the less-experienced user, exactly what each
** type of search does.
*/
int wn_contextualhelp (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
int pos, searchtype;
if (argc != 3) {
interp -> result = "usage: contextualhelp partofspeechnum searchtypenum";
return TCL_ERROR;
}
pos = atoi (argv[1]);
searchtype = atoi (argv[2]);
interp -> result = helptext[pos][searchtype];
return TCL_OK;
}
/* This command, accessed in Tcl as "reopendb" reopens the WordNet database.
*/
int wn_reopendb (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 1) {
interp -> result = "usage: reopendb";
return TCL_ERROR;
}
re_wninit ();
return TCL_OK;
}
/* This command, accessed in Tcl as "abortsearch" causes the library to
** stop whatever search it is currently in the middle of performing.
*/
int wn_abortsearch (ClientData clientData, Tcl_Interp *interp,
int argc, char *argv[]) {
if (argc != 1) {
interp -> result = "usage: abortsearch";
return TCL_ERROR;
}
abortsearch = 1;
return TCL_OK;
}
/* This is a callback function invoked by the WordNet search engine every so
** often, to allow the interface to respond to events (especially the pressing
** of a stop button) during the search.
*/
void tkwn_doevents (void) {
while (Tcl_DoOneEvent (TCL_WINDOW_EVENTS | TCL_DONT_WAIT) != 0) {}
}
/* This is a callback function invoked by the WordNet search engine whenever
** it needs to display an error message. Its implementation is platform
** specific, since it uses the native error reporting mechanism.
*/
int tkwn_displayerror (char *msg) {
#ifdef _WINDOWS
MessageBeep (MB_ICONEXCLAMATION);
MessageBox (NULL, msg, "WordNet Library Error",
MB_ICONEXCLAMATION | MB_OK | MB_TASKMODAL | MB_SETFOREGROUND);
#else
fprintf (stderr, "%s", msg);
#endif
return -1;
}
/* This is the initialization routine, which is called from tkAppInit.c
** when the program starts. It registers each new command with the Tcl
** interpreter.
*/
int Wordnet_Init (Tcl_Interp *interp) {
interface_doevents_func = tkwn_doevents;
display_message = tkwn_displayerror;
wninit ();
Tcl_CreateCommand (interp, "findvalidsearches", (void *)
wn_findvalidsearches, (ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "bit", (void *) wn_bit, (ClientData) NULL,
(Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "search", (void *) wn_search, (ClientData)
NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "glosses", (void *) wn_glosses, (ClientData)
NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "fileinfo", (void *) wn_fileinfo, (ClientData)
NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "byteoffset", (void *) wn_byteoffset,
(ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "senseflag", (void *) wn_senseflag,
(ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "contextualhelp", (void *) wn_contextualhelp,
(ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "reopendb", (void *) wn_reopendb, (ClientData)
NULL, (Tcl_CmdDeleteProc *) NULL);
Tcl_CreateCommand (interp, "abortsearch", (void *) wn_abortsearch,
(ClientData) NULL, (Tcl_CmdDeleteProc *) NULL);
return TCL_OK;
}

View File

@ -1,164 +0,0 @@
/*
* tkAppInit.c --
*
* Provides a default version of the Tcl_AppInit procedure for
* use in wish and similar Tk-based applications.
*
* Copyright (c) 1993 The Regents of the University of California.
* Copyright (c) 1994-1997 Sun Microsystems, Inc.
*
* See the file "license.terms" for information on usage and redistribution
* of this file, and for a DISCLAIMER OF ALL WARRANTIES.
*
* RCS: @(#) $Id: tkAppInit.c,v 1.5 2005/02/01 17:35:34 wn Rel $
*/
#include <tk.h>
#include <locale.h>
#include <math.h>
/*
* The following variable is a special hack that is needed in order for
* Sun shared libraries to be used for Tcl.
*/
#ifdef __sun__
extern int matherr();
int *tclDummyMathPtr = (int *) matherr;
#endif
#ifdef TK_TEST
extern int Tcltest_Init _ANSI_ARGS_((Tcl_Interp *interp));
extern int Tktest_Init _ANSI_ARGS_((Tcl_Interp *interp));
#endif /* TK_TEST */
/*
*----------------------------------------------------------------------
*
* main --
*
* This is the main program for the application.
*
* Results:
* None: Tk_Main never returns here, so this procedure never
* returns either.
*
* Side effects:
* Whatever the application does.
*
*----------------------------------------------------------------------
*/
int
main(argc, argv)
int argc; /* Number of command-line arguments. */
char **argv; /* Values of command-line arguments. */
{
/*
* The following #if block allows you to change the AppInit
* function by using a #define of TCL_LOCAL_APPINIT instead
* of rewriting this entire file. The #if checks for that
* #define and uses Tcl_AppInit if it doesn't exist.
*/
#ifndef TK_LOCAL_APPINIT
#define TK_LOCAL_APPINIT Tcl_AppInit
#endif
extern int TK_LOCAL_APPINIT _ANSI_ARGS_((Tcl_Interp *interp));
/*
* The following #if block allows you to change how Tcl finds the startup
* script, prime the library or encoding paths, fiddle with the argv,
* etc., without needing to rewrite Tk_Main()
*/
#ifdef TK_LOCAL_MAIN_HOOK
extern int TK_LOCAL_MAIN_HOOK _ANSI_ARGS_((int *argc, char ***argv));
TK_LOCAL_MAIN_HOOK(&argc, &argv);
#endif
#ifdef _WINDOWS
argv[argc++] = "wnb.tcl";
#endif
Tk_Main(argc, argv, TK_LOCAL_APPINIT);
return 0; /* Needed only to prevent compiler warning. */
}
/*
*----------------------------------------------------------------------
*
* Tcl_AppInit --
*
* This procedure performs application-specific initialization.
* Most applications, especially those that incorporate additional
* packages, will have their own version of this procedure.
*
* Results:
* Returns a standard Tcl completion code, and leaves an error
* message in the interp's result if an error occurs.
*
* Side effects:
* Depends on the startup script.
*
*----------------------------------------------------------------------
*/
extern int Wordnet_Init(Tcl_Interp *interp);
int
Tcl_AppInit(interp)
Tcl_Interp *interp; /* Interpreter for application. */
{
if (Tcl_Init(interp) == TCL_ERROR) {
return TCL_ERROR;
}
if (Tk_Init(interp) == TCL_ERROR) {
return TCL_ERROR;
}
Tcl_StaticPackage(interp, "Tk", Tk_Init, Tk_SafeInit);
#ifdef TK_TEST
if (Tcltest_Init(interp) == TCL_ERROR) {
return TCL_ERROR;
}
Tcl_StaticPackage(interp, "Tcltest", Tcltest_Init,
(Tcl_PackageInitProc *) NULL);
if (Tktest_Init(interp) == TCL_ERROR) {
return TCL_ERROR;
}
Tcl_StaticPackage(interp, "Tktest", Tktest_Init,
(Tcl_PackageInitProc *) NULL);
#endif /* TK_TEST */
/*
* Call the init procedures for included packages. Each call should
* look like this:
*
* if (Mod_Init(interp) == TCL_ERROR) {
* return TCL_ERROR;
* }
*
* where "Mod" is the name of the module.
*/
if (Wordnet_Init(interp) == TCL_ERROR) {
return TCL_ERROR;
}
/*
* Call Tcl_CreateCommand for application-specific commands, if
* they weren't already created by the init procedures called above.
*/
/*
* Specify a user-specific startup file to invoke if the application
* is run interactively. Typically the startup file is "~/.apprc"
* where "app" is the name of the application. If this line is deleted
* then no user-specific startup file will be run under any conditions.
*/
Tcl_SetVar(interp, "tcl_rcFileName", "~/.wishrc", TCL_GLOBAL_ONLY);
return TCL_OK;
}

View File

@ -1,423 +0,0 @@
/*
wn.c - Command line interface to WordNet
*/
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include "wn.h"
static char *Id = "$Id: wn.c,v 1.13 2005/01/31 19:19:09 wn Rel $";
static struct {
char *option; /* user's search request */
int search; /* search to pass findtheinfo() */
int pos; /* part-of-speech to pass findtheinfo() */
int helpmsgidx; /* index into help message table */
char *label; /* text for search header message */
} *optptr, optlist[] = {
{ "-synsa", SIMPTR, ADJ, 0, "Similarity" },
{ "-antsa", ANTPTR, ADJ, 1, "Antonyms" },
{ "-perta", PERTPTR, ADJ, 0, "Pertainyms" },
{ "-attra", ATTRIBUTE, ADJ, 2, "Attributes" },
{ "-domna", CLASSIFICATION, ADJ, 3, "Domain" },
{ "-domta", CLASS, ADJ, 4, "Domain Terms" },
{ "-famla", FREQ, ADJ, 5, "Familiarity" },
{ "-grepa", WNGREP, ADJ, 6, "Grep" },
{ "-synsn", HYPERPTR, NOUN, 0, "Synonyms/Hypernyms (Ordered by Estimated Frequency)" },
{ "-antsn", ANTPTR, NOUN, 2, "Antonyms" },
{ "-coorn", COORDS, NOUN, 3, "Coordinate Terms (sisters)" },
{ "-hypen", -HYPERPTR, NOUN, 4, "Synonyms/Hypernyms (Ordered by Estimated Frequency)" },
{ "-hypon", HYPOPTR, NOUN, 5, "Hyponyms" },
{ "-treen", -HYPOPTR, NOUN, 6, "Hyponyms" },
{ "-holon", HOLONYM, NOUN, 7, "Holonyms" },
{ "-sprtn", ISPARTPTR, NOUN, 7, "Part Holonyms" },
{ "-smemn", ISMEMBERPTR, NOUN, 7, "Member Holonyms" },
{ "-ssubn", ISSTUFFPTR, NOUN, 7, "Substance Holonyms" },
{ "-hholn", -HHOLONYM, NOUN, 8, "Holonyms" },
{ "-meron", MERONYM, NOUN, 9, "Meronyms" },
{ "-subsn", HASSTUFFPTR, NOUN, 9, "Substance Meronyms" },
{ "-partn", HASPARTPTR, NOUN, 9, "Part Meronyms" },
{ "-membn", HASMEMBERPTR, NOUN, 9, "Member Meronyms" },
{ "-hmern", -HMERONYM, NOUN, 10, "Meronyms" },
{ "-nomnn", DERIVATION, NOUN, 11, "Derived Forms" },
{ "-derin", DERIVATION, NOUN, 11, "Derived Forms" },
{ "-domnn", CLASSIFICATION, NOUN, 13, "Domain" },
{ "-domtn", CLASS, NOUN, 14, "Domain Terms" },
{ "-attrn", ATTRIBUTE, NOUN, 12, "Attributes" },
{ "-famln", FREQ, NOUN, 15, "Familiarity" },
{ "-grepn", WNGREP, NOUN, 16, "Grep" },
{ "-synsv", HYPERPTR, VERB, 0, "Synonyms/Hypernyms (Ordered by Estimated Frequency)" },
{ "-simsv", RELATIVES, VERB, 1, "Synonyms (Grouped by Similarity of Meaning)" },
{ "-antsv", ANTPTR, VERB, 2, "Antonyms" },
{ "-coorv", COORDS, VERB, 3, "Coordinate Terms (sisters)" },
{ "-hypev", -HYPERPTR, VERB, 4, "Synonyms/Hypernyms (Ordered by Estimated Frequency)" },
{ "-hypov", HYPOPTR, VERB, 5, "Troponyms (hyponyms)" },
{ "-treev", -HYPOPTR, VERB, 5, "Troponyms (hyponyms)" },
{ "-tropv", -HYPOPTR, VERB, 5, "Troponyms (hyponyms)" },
{ "-entav", ENTAILPTR, VERB, 6, "Entailment" },
{ "-causv", CAUSETO, VERB, 7, "\'Cause To\'" },
{ "-nomnv", DERIVATION, VERB, 8, "Derived Forms" },
{ "-deriv", DERIVATION, VERB, 8, "Derived Forms" },
{ "-domnv", CLASSIFICATION, VERB, 10, "Domain" },
{ "-domtv", CLASS, VERB, 11, "Domain Terms" },
{ "-framv", FRAMES, VERB, 9, "Sample Sentences" },
{ "-famlv", FREQ, VERB, 12, "Familiarity" },
{ "-grepv", WNGREP, VERB, 13, "Grep" },
{ "-synsr", SYNS, ADV, 0, "Synonyms" },
{ "-antsr", ANTPTR, ADV, 1, "Antonyms" },
{ "-pertr", PERTPTR, ADV, 0, "Pertainyms" },
{ "-domnr", CLASSIFICATION, ADV, 2, "Domain" },
{ "-domtr", CLASS, ADV, 3, "Domain Terms" },
{ "-famlr", FREQ, ADV, 4, "Familiarity" },
{ "-grepr", WNGREP, ADV, 5, "Grep" },
{ "-over", OVERVIEW, ALL_POS, -1, "Overview" },
{ NULL, 0, 0, 0, NULL }
};
struct {
char *template; /* template for generic search message */
char *option; /* text for help message */
char *helpstr;
} searchstr[] = { /* index by search type type */
{ NULL, NULL, NULL },
{ "-ants%c", "-ants{n|v|a|r}", "\t\tAntonyms", },
{ "-hype%c", "-hype{n|v}", "\t\tHypernyms", },
{ "-hypo%c, -tree%c", "-hypo{n|v}, -tree{n|v}",
"\tHyponyms & Hyponym Tree", },
{ "-enta%c", "-entav\t", "\t\tVerb Entailment", },
{ "-syns%c", "-syns{n|v|a|r}", "\t\tSynonyms (ordered by estimated frequency)", },
{ "-smem%c", "-smemn\t", "\t\tMember of Holonyms", },
{ "-ssub%c", "-ssubn\t", "\t\tSubstance of Holonyms", },
{ "-sprt%c", "-sprtn\t", "\t\tPart of Holonyms", },
{ "-memb%c", "-membn\t", "\t\tHas Member Meronyms", },
{ "-subs%c", "-subsn\t", "\t\tHas Substance Meronyms", },
{ "-part%c", "-partn\t", "\t\tHas Part Meronyms", },
{ "-mero%c", "-meron\t", "\t\tAll Meronyms", },
{ "-holo%c", "-holon\t", "\t\tAll Holonyms", },
{ "-caus%c", "-causv\t", "\t\tCause to", },
{ NULL, NULL, NULL }, /* PPLPTR - no specific search */
{ NULL, NULL, NULL }, /* SEEALSOPTR - no specific search */
{ "-pert%c", "-pert{a|r}", "\t\tPertainyms", },
{ "-attr%c", "-attr{n|a}", "\t\tAttributes", },
{ NULL, NULL, NULL }, /* verb groups - no specific pointer */
{ "-deri%c", "-deri{n|v}", "\t\tDerived Forms",},
{ "-domn%c", "-domn{n|v|a|r}", "\t\tDomain" },
{ "-domt%c", "-domt{n|v|a|r}", "\t\tDomain Terms" },
{ NULL, NULL, NULL }, /* SYNS - taken care of with SIMPTR */
{ "-faml%c", "-faml{n|v|a|r}", "\t\tFamiliarity & Polysemy Count", },
{ "-fram%c", "-framv\t", "\t\tVerb Frames", },
{ "-coor%c", "-coor{n|v}", "\t\tCoordinate Terms (sisters)", },
{ "-sims%c", "-simsv\t", "\t\tSynonyms (grouped by similarity of meaning)", },
{ "-hmer%c", "-hmern\t", "\t\tHierarchical Meronyms", },
{ "-hhol%c", "-hholn\t", "\t\tHierarchical Holonyms" },
{ "-grep%c", "-grep{n|v|a|r}", "\t\tList of Compound Words" },
{ "-over", "-over\t", "\t\tOverview of Senses" },
};
static int getoptidx(char *), cmdopt(char *);
static int searchwn(int, char *[]);
static int do_search(char *, int, int, int, char *);
static int do_is_defined(char *);
static void printusage(), printlicense(),
printsearches(char *, int, unsigned long);
static int error_message(char *);
main(int argc,char *argv[])
{
display_message = error_message;
if (argc < 2) {
printusage();
exit(-1);
} else if (argc == 2 && !strcmp("-l", argv[1])) {
printlicense();
exit(-1);
}
if (wninit()) { /* open database */
display_message("wn: Fatal error - cannot open WordNet database\n");
exit (-1);
}
exit(searchwn(argc, argv));
}
static int searchwn(int ac, char *av[])
{
int i, j = 1, pos;
int whichsense = ALLSENSES, help = 0;
int errcount = 0, outsenses = 0;
char tmpbuf[256]; /* buffer for constructing error messages */
if (ac == 2) /* print available searches for word */
exit(do_is_defined(av[1]));
/* Parse command line options once and set flags */
dflag = fileinfoflag = offsetflag = wnsnsflag = 0;
for(i = 1; i < ac; i++) {
if(!strcmp("-g",av[i]))
dflag++;
else if (!strcmp("-h",av[i]))
help++;
else if (!strcmp("-l", av[i]))
printlicense();
else if (!strncmp("-n", av[i], 2) && strncmp("-nomn", av[i] ,5))
whichsense = atoi(av[i] + 2);
else if (!strcmp("-a", av[i]))
fileinfoflag = 1;
else if (!strcmp("-o", av[i]))
offsetflag = 1;
else if (!strcmp("-s", av[i]))
wnsnsflag = 1;
}
/* Replace spaces with underscores before looking in database */
strtolower(strsubst(av[1], ' ', '_'));
/* Look at each option in turn. If it's not a command line option
(which was processed earlier), perform the search requested. */
while(av[++j]) {
if (!cmdopt(av[j])) { /* not a command line option */
if ((i = getoptidx(av[j])) != -1) {
optptr = &optlist[i];
/* print help text before search output */
if (help && optptr->helpmsgidx >= 0)
printf("%s\n", helptext[optptr->pos][optptr->helpmsgidx]);
if (optptr->pos == ALL_POS)
for (pos = 1; pos <= NUMPARTS; pos++)
outsenses += do_search(av[1], pos, optptr->search,
whichsense, optptr->label);
else
outsenses += do_search(av[1], optptr->pos, optptr->search,
whichsense, optptr->label);
} else {
sprintf(tmpbuf, "wn: invalid search option: %s\n", av[j]);
display_message(tmpbuf);
errcount++;
}
}
}
return(errcount ? -errcount : outsenses);
}
static int do_search(char *searchword, int pos, int search, int whichsense,
char *label)
{
int totsenses = 0;
char *morphword, *outbuf;
outbuf = findtheinfo(searchword, pos, search, whichsense);
totsenses += wnresults.printcnt;
if (strlen(outbuf) > 0)
printf("\n%s of %s %s\n%s",
label, partnames[pos], searchword, outbuf);
if (morphword = morphstr(searchword, pos))
do {
outbuf = findtheinfo(morphword, pos, search, whichsense);
totsenses += wnresults.printcnt;
if (strlen(outbuf) > 0)
printf("\n%s of %s %s\n%s",
label, partnames[pos], morphword, outbuf);
} while (morphword = morphstr(NULL, pos));
return(totsenses);
}
static int do_is_defined(char *searchword)
{
int i, found = 0;
unsigned int search;
char *morphword;
if (searchword[0] == '-') {
display_message("wn: invalid search word\n");
return(-1);
}
/* Print all valid searches for word in all parts of speech */
strtolower(strsubst(searchword, ' ', '_'));
for (i = 1; i <= NUMPARTS; i++) {
if ((search = is_defined(searchword, i)) != 0) {
printsearches(searchword, i, search);
found = 1;
} else
printf("\nNo information available for %s %s\n",
partnames[i], searchword);
if ((morphword = morphstr(searchword, i)) != NULL)
do {
if ((search = is_defined(morphword, i)) != 0) {
printsearches(morphword, i, search);
found = 1;
} else
printf("\nNo information available for %s %s\n",
partnames[i], morphword);
} while ((morphword = morphstr(NULL, i)) != NULL );
}
return(found);
}
static void printsearches(char *word, int dbase, unsigned long search)
{
int j;
printf("\nInformation available for %s %s\n", partnames[dbase], word);
for (j = 1; j <= MAXSEARCH; j++)
if ((search & bit(j)) && searchstr[j].option) {
printf("\t");
printf(searchstr[j].template,
partchars[dbase], partchars[dbase]);
printf(searchstr[j].helpstr);
printf("\n");
}
}
static void printusage()
{
int i;
fprintf(stdout,
"\nusage: wn word [-hgla] [-n#] -searchtype [-searchtype...]\n");
fprintf(stdout, " wn [-l]\n\n");
fprintf(stdout, "\t-h\t\tDisplay help text before search output\n");
fprintf(stdout, "\t-g\t\tDisplay gloss\n");
fprintf(stdout, "\t-l\t\tDisplay license and copyright notice\n");
fprintf(stdout, "\t-a\t\tDisplay lexicographer file information\n");
fprintf(stdout, "\t-o\t\tDisplay synset offset\n");
fprintf(stdout, "\t-s\t\tDisplay sense numbers in synsets\n");
fprintf(stdout, "\t-n#\t\tSearch only sense number #\n");
fprintf(stdout,"\nsearchtype is at least one of the following:\n");
for (i = 1; i <= OVERVIEW; i++)
if (searchstr[i].option)
fprintf(stdout, "\t%s%s\n",
searchstr[i].option, searchstr[i].helpstr);
}
static void printlicense()
{
printf("WordNet Release %s\n\n%s", wnrelease, license);
}
static int cmdopt(char *str)
{
if (!strcmp("-g", str) ||
!strcmp("-h", str) ||
!strcmp("-o", str) ||
!strcmp("-l", str) ||
!strcmp("-a", str) ||
!strcmp("-s", str) ||
(!strncmp("-n", str, 2) && strncmp("-nomn", str,5)))
return (1);
else
return(0);
}
static int getoptidx(char *searchtype)
{
int i;
for (i = 0; optlist[i].option; i++)
if (!strcmp(optlist[i].option, searchtype))
return(i);
return(-1);
}
static int error_message(char *msg)
{
fprintf(stderr, msg);
return(0);
}
/*
Revision log: (since version 1.5)
$Log: wn.c,v $
Revision 1.13 2005/01/31 19:19:09 wn
removed include for license.h
Revision 1.12 2005/01/27 17:32:37 wn
removed wnhelp.h
Revision 1.11 2004/10/25 16:34:43 wn
removed 1.6 references
Revision 1.10 2003/07/15 16:50:53 wn
added domain and domain term searches
Revision 1.9 2003/07/15 15:53:05 wn
updated search numbers
Revision 1.8 2002/03/07 17:52:49 wn
fixes for 1.7.1
Revision 1.7 2001/11/06 18:51:59 wn
added CLASSIFICATION placeholders
Revision 1.6 2001/10/25 16:56:11 wn
changed text on synsnym searches to say 'estimated frequency'
Revision 1.5 2001/07/20 18:15:10 wn
changed Nominalizations to Derived Forms
Revision 1.4 2001/06/19 15:06:17 wn
changed search from long to int
Revision 1.3 2001/03/30 17:16:05 wn
cleanups for 1.7
Revision 1.2 2000/10/30 19:06:49 wn
added code to handle nominalizations
Revision 1.1 1998/05/06 18:22:57 wn
Initial revision
* Revision 1.58 1997/11/21 19:01:17 wn
* added simsv search
*
* Revision 1.57 1997/09/02 17:10:32 wn
* changed includes
*
* Revision 1.56 1997/08/29 18:43:37 wn
* rearranged functions
*
* Revision 1.55 1997/08/29 16:44:24 wn
* added code to exit with total senses printed
*
* Revision 1.54 1997/08/26 20:29:38 wn
* added -s option, reorganized code
*
* Revision 1.53 1997/08/08 19:19:03 wn
* major cleanup
*
* Revision 1.52 1997/08/05 20:15:31 wn
* removed WNDEBUG, cleanups
*
* Revision 1.51 1995/06/30 19:25:02 wn
* access first element of OutSenseCount array
*
*
* Revision 1.1 91/09/17 15:51:09 wn
* Initial revision
*
*/

File diff suppressed because it is too large

View File

@ -0,0 +1,235 @@
'''Example of training a named entity recognition system from scratch using spaCy
This example is written to be self-contained and reasonably transparent.
To achieve that, it duplicates some of spaCy's internal functionality.
Specifically, in this example, we don't use spaCy's built-in Language class to
wire together the Vocab, Tokenizer and EntityRecognizer. Instead, we write
our own simple Pipeline class, so that it's easier to see how the pieces
interact.
Input data:
https://www.lt.informatik.tu-darmstadt.de/fileadmin/user_upload/Group_LangTech/data/GermEval2014_complete_data.zip
Developed for: spaCy 1.7.1
Last tested for: spaCy 1.7.1
'''
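# A hypothetical invocation, assuming this example is saved as train_ner_standalone.py
# and the GermEval 2014 data has been unpacked locally (all paths and file names are
# illustrative):
#
#     python train_ner_standalone.py /tmp/ner_model NER-de-train.tsv NER-de-dev.tsv
#
# The positional arguments map onto main(model_dir, train_loc, dev_loc) at the bottom.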
from __future__ import unicode_literals, print_function
import plac
from pathlib import Path
import random
import json
import spacy.orth as orth_funcs
from spacy.vocab import Vocab
from spacy.pipeline import BeamEntityRecognizer
from spacy.pipeline import EntityRecognizer
from spacy.tokenizer import Tokenizer
from spacy.tokens import Doc
from spacy.attrs import *
from spacy.gold import GoldParse
from spacy.gold import _iob_to_biluo as iob_to_biluo
from spacy.scorer import Scorer
try:
unicode
except NameError:
unicode = str
def init_vocab():
return Vocab(
lex_attr_getters={
LOWER: lambda string: string.lower(),
SHAPE: orth_funcs.word_shape,
PREFIX: lambda string: string[0],
SUFFIX: lambda string: string[-3:],
CLUSTER: lambda string: 0,
IS_ALPHA: orth_funcs.is_alpha,
IS_ASCII: orth_funcs.is_ascii,
IS_DIGIT: lambda string: string.isdigit(),
IS_LOWER: orth_funcs.is_lower,
IS_PUNCT: orth_funcs.is_punct,
IS_SPACE: lambda string: string.isspace(),
IS_TITLE: orth_funcs.is_title,
IS_UPPER: orth_funcs.is_upper,
IS_STOP: lambda string: False,
IS_OOV: lambda string: True
})
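# Illustration only: with the getters above, the lexeme "Berlin" would receive
# LOWER "berlin", PREFIX "B", SUFFIX "lin" and, via orth_funcs.word_shape, a
# shape along the lines of "Xxxxx".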
def save_vocab(vocab, path):
path = Path(path)
if not path.exists():
path.mkdir()
elif not path.is_dir():
raise IOError("Can't save vocab to %s\nNot a directory" % path)
with (path / 'strings.json').open('w') as file_:
vocab.strings.dump(file_)
vocab.dump((path / 'lexemes.bin').as_posix())
def load_vocab(path):
path = Path(path)
if not path.exists():
raise IOError("Cannot load vocab from %s\nDoes not exist" % path)
if not path.is_dir():
raise IOError("Cannot load vocab from %s\nNot a directory" % path)
return Vocab.load(path)
def init_ner_model(vocab, features=None):
if features is None:
features = tuple(EntityRecognizer.feature_templates)
return BeamEntityRecognizer(vocab, features=features)
def save_ner_model(model, path):
path = Path(path)
if not path.exists():
path.mkdir()
if not path.is_dir():
raise IOError("Can't save model to %s\nNot a directory" % path)
model.model.dump((path / 'model').as_posix())
with (path / 'config.json').open('w') as file_:
data = json.dumps(model.cfg)
if not isinstance(data, unicode):
data = data.decode('utf8')
file_.write(data)
def load_ner_model(vocab, path):
return BeamEntityRecognizer.load(path, vocab)
class Pipeline(object):
@classmethod
def load(cls, path):
path = Path(path)
if not path.exists():
raise IOError("Cannot load pipeline from %s\nDoes not exist" % path)
if not path.is_dir():
raise IOError("Cannot load pipeline from %s\nNot a directory" % path)
vocab = load_vocab(path / 'vocab')
tokenizer = Tokenizer(vocab, {}, None, None, None)
ner_model = load_ner_model(vocab, path / 'ner')
return cls(vocab, tokenizer, ner_model)
def __init__(self, vocab=None, tokenizer=None, ner_model=None):
if vocab is None:
vocab = init_vocab()
if tokenizer is None:
tokenizer = Tokenizer(vocab, {}, None, None, None)
if ner_model is None:
ner_model = init_ner_model(vocab)
self.vocab = vocab
self.tokenizer = tokenizer
self.entity = ner_model
self.pipeline = [self.entity]
def __call__(self, input_):
doc = self.make_doc(input_)
for process in self.pipeline:
process(doc)
return doc
def make_doc(self, input_):
if isinstance(input_, bytes):
input_ = input_.decode('utf8')
if isinstance(input_, unicode):
return self.tokenizer(input_)
else:
return Doc(self.vocab, words=input_)
def make_gold(self, input_, annotations):
doc = self.make_doc(input_)
gold = GoldParse(doc, entities=annotations)
return gold
def update(self, input_, annot):
doc = self.make_doc(input_)
gold = self.make_gold(input_, annot)
for ner in gold.ner:
if ner not in (None, '-', 'O'):
action, label = ner.split('-', 1)
self.entity.add_label(label)
return self.entity.update(doc, gold)
def evaluate(self, examples):
scorer = Scorer()
for input_, annot in examples:
gold = self.make_gold(input_, annot)
doc = self(input_)
scorer.score(doc, gold)
return scorer.scores
def average_weights(self):
self.entity.model.end_training()
def save(self, path):
path = Path(path)
if not path.exists():
path.mkdir()
elif not path.is_dir():
raise IOError("Can't save pipeline to %s\nNot a directory" % path)
save_vocab(self.vocab, path / 'vocab')
save_ner_model(self.entity, path / 'ner')
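# A minimal usage sketch for the Pipeline class above, assuming a model has
# already been trained and saved to the illustrative path /tmp/ner_model:
#
#     nlp = Pipeline.load('/tmp/ner_model')
#     doc = nlp(['Angela', 'Merkel', 'besucht', 'Berlin'])  # pre-tokenized input
#     print([(ent.label_, ent.text) for ent in doc.ents])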
def train(nlp, train_examples, dev_examples, nr_epoch=5):
next_epoch = train_examples
print("Iter", "Loss", "P", "R", "F")
for i in range(nr_epoch):
this_epoch = next_epoch
next_epoch = []
loss = 0
for input_, annot in this_epoch:
loss += nlp.update(input_, annot)
if (i+1) < nr_epoch:
next_epoch.append((input_, annot))
random.shuffle(next_epoch)
scores = nlp.evaluate(dev_examples)
precision = '%.2f' % scores['ents_p']
recall = '%.2f' % scores['ents_r']
f_measure = '%.2f' % scores['ents_f']
print(i, int(loss), precision, recall, f_measure)
nlp.average_weights()
scores = nlp.evaluate(dev_examples)
print("After averaging")
print(scores['ents_p'], scores['ents_r'], scores['ents_f'])
def read_examples(path):
path = Path(path)
with path.open() as file_:
sents = file_.read().strip().split('\n\n')
for sent in sents:
if not sent.strip():
continue
tokens = sent.split('\n')
while tokens and tokens[0].startswith('#'):
tokens.pop(0)
words = []
iob = []
for token in tokens:
if token.strip():
pieces = token.split()
words.append(pieces[1])
iob.append(pieces[2])
yield words, iob_to_biluo(iob)
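# For orientation, read_examples() expects blank-line-separated blocks of
# whitespace-separated columns (index, token, IOB tag), GermEval-style; the
# tokens and tags below are made up:
#
#     # comment lines at the start of a block are skipped
#     1 Angela  B-PER
#     2 Merkel  I-PER
#     3 besucht O
#     4 Berlin  B-LOC
#
# Such a block yields (['Angela', 'Merkel', 'besucht', 'Berlin'],
# ['B-PER', 'L-PER', 'O', 'U-LOC']) after the IOB-to-BILUO conversion.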
@plac.annotations(
model_dir=("Path to save the model", "positional", None, Path),
train_loc=("Path to your training data", "positional", None, Path),
dev_loc=("Path to your development data", "positional", None, Path),
)
def main(model_dir, train_loc, dev_loc, nr_epoch=10):
train_examples = read_examples(train_loc)
dev_examples = read_examples(dev_loc)
nlp = Pipeline()
train(nlp, train_examples, list(dev_examples), nr_epoch)
nlp.save(model_dir)
if __name__ == '__main__':
plac.call(main)

View File

@ -5,8 +5,8 @@ cymem>=1.30,<1.32
preshed>=1.0.0,<2.0.0
thinc>=6.5.0,<6.6.0
murmurhash>=0.26,<0.27
plac<0.9.3
plac<1.0.0,>=0.9.6
six
ujson>=1.35
sputnik>=0.9.2,<0.10.0
dill>=0.2,<0.3
requests>=2.13.0,<3.0.0

View File

@ -20,6 +20,8 @@ PACKAGE_DATA = {'': ['*.pyx', '*.pxd', '*.txt', '*.tokens']}
PACKAGES = [
'spacy',
'spacy.data',
'spacy.cli',
'spacy.tokens',
'spacy.en',
'spacy.de',
@ -33,6 +35,7 @@ PACKAGES = [
'spacy.sv',
'spacy.fi',
'spacy.bn',
'spacy.en.lemmatizer',
'spacy.language_data',
'spacy.serialize',
'spacy.syntax',
@ -237,12 +240,12 @@ def setup_package():
'cymem>=1.30,<1.32',
'preshed>=1.0.0,<2.0.0',
'thinc>=6.5.0,<6.6.0',
'plac<0.9.3',
'plac<1.0.0,>=0.9.6',
'six',
'pathlib',
'sputnik>=0.9.2,<0.10.0',
'ujson>=1.35',
'dill>=0.2,<0.3'],
'dill>=0.2,<0.3',
'requests>=2.13.0,<3.0.0'],
classifiers=[
'Development Status :: 5 - Production/Stable',
'Environment :: Console',
@ -258,6 +261,7 @@ def setup_package():
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Topic :: Scientific/Engineering'],
cmdclass = {
'build_ext': build_ext_subclass},

View File

@ -1,7 +1,11 @@
import pathlib
# coding: utf8
from __future__ import unicode_literals, print_function
from .util import set_lang_class, get_lang_class
from .about import __version__
import json
from pathlib import Path
from .util import set_lang_class, get_lang_class, parse_package_meta
from .deprecated import resolve_model_name
from .cli.info import info as cli_info
from . import en
from . import de
@ -16,10 +20,7 @@ from . import sv
from . import fi
from . import bn
try:
basestring
except NameError:
basestring = str
from .about import *
set_lang_class(en.English.lang, en.English)
@ -36,11 +37,19 @@ set_lang_class(fi.Finnish.lang, fi.Finnish)
set_lang_class(bn.Bengali.lang, bn.Bengali)
def load(name, **overrides):
target_name, target_version = util.split_data_name(name)
data_path = overrides.get('path', util.get_data_path())
path = util.match_best_version(target_name, target_version, data_path)
cls = get_lang_class(target_name)
overrides['path'] = path
model_name = resolve_model_name(name)
meta = parse_package_meta(data_path, model_name, require=False)
lang = meta['lang'] if meta and 'lang' in meta else name
cls = get_lang_class(lang)
overrides['meta'] = meta
model_path = Path(data_path / model_name)
if model_path.exists():
overrides['path'] = model_path
return cls(**overrides)
def info(name=None, markdown=False):
cli_info(name, markdown)
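# A sketch of the intended call path with the reworked loader, assuming the 'en'
# shortcut from about.__shortcuts__ has been downloaded and linked:
#
#     import spacy
#     nlp = spacy.load('en')
#     doc = nlp(u'London is a big city in the United Kingdom.')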

71
spacy/__main__.py Normal file
View File

@ -0,0 +1,71 @@
# coding: utf8
#
from __future__ import print_function
# NB! This breaks in plac on Python 2!!
#from __future__ import unicode_literals,
import plac
from spacy.cli import download as cli_download
from spacy.cli import link as cli_link
from spacy.cli import info as cli_info
class CLI(object):
"""Command-line interface for spaCy"""
commands = ('download', 'link', 'info')
@plac.annotations(
model=("model to download (shortcut or model name)", "positional", None, str),
direct=("force direct download. Needs model name with version and won't "
"perform compatibility check", "flag", "d", bool)
)
def download(self, model=None, direct=False):
"""
Download compatible model from default download path using pip. Model
can be shortcut, model name or, if --direct flag is set, full model name
with version.
"""
cli_download(model, direct)
@plac.annotations(
origin=("package name or local path to model", "positional", None, str),
link_name=("Name of shortuct link to create", "positional", None, str),
force=("Force overwriting of existing link", "flag", "f", bool)
)
def link(self, origin, link_name, force=False):
"""
Create a symlink for models within the spacy/data directory. Accepts
either the name of a pip package, or the local path to the model data
directory. Linking models allows loading them via spacy.load(link_name).
"""
cli_link(origin, link_name, force)
@plac.annotations(
model=("optional: shortcut link of model", "positional", None, str),
markdown=("generate Markdown for GitHub issues", "flag", "md", str)
)
def info(self, model=None, markdown=False):
"""
Print info about spaCy installation. If a model shortcut link is
specified as an argument, print model information. Flag --markdown
prints details in Markdown for easy copy-pasting to GitHub issues.
"""
cli_info(model, markdown)
def __missing__(self, name):
print("\n Command %r does not exist\n" % name)
if __name__ == '__main__':
import plac
import sys
cli = CLI()
sys.argv[0] = 'spacy'
plac.Interpreter.call(CLI)
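# Hypothetical command-line sessions this entry point enables (model names and
# shortcuts are illustrative):
#
#     python -m spacy download en
#     python -m spacy link en_core_web_sm en
#     python -m spacy info --markdown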

View File

@ -1,16 +1,16 @@
# inspired from:
# https://python-packaging-user-guide.readthedocs.org/en/latest/single_source_version/
# https://github.com/pypa/warehouse/blob/master/warehouse/__about__.py
__title__ = 'spacy'
__version__ = '1.6.0'
__version__ = '1.7.2'
__summary__ = 'Industrial-strength Natural Language Processing (NLP) with Python and Cython'
__uri__ = 'https://spacy.io'
__author__ = 'Matthew Honnibal'
__email__ = 'matt@explosion.ai'
__license__ = 'MIT'
__models__ = {
'en': 'en>=1.1.0,<1.2.0',
'de': 'de>=1.0.0,<1.1.0',
}
__docs__ = 'https://spacy.io/docs/usage'
__download_url__ = 'https://github.com/explosion/spacy-models/releases/download'
__compatibility__ = 'https://raw.githubusercontent.com/explosion/spacy-models/master/compatibility.json'
__shortcuts__ = {'en': 'en_core_web_sm', 'de': 'de_core_news_md', 'vectors': 'en_vectors_glove_md'}

View File

@ -124,7 +124,10 @@ def intify_attrs(stringy_attrs, strings_map=None, _do_deprecated=False):
'PunctType', 'PunctSide', 'Other', 'Degree', 'AdvType', 'Number',
'VerbForm', 'PronType', 'Aspect', 'Tense', 'PartType', 'Poss',
'Hyph', 'ConjType', 'NumType', 'Foreign', 'VerbType', 'NounType',
'Number', 'PronType', 'AdjType', 'Person', 'Variant', 'AdpType',
'Gender', 'Mood', 'Negative', 'Tense', 'Voice', 'Abbr',
'Derivation', 'Echo', 'Foreign', 'NameType', 'NounType', 'NumForm',
'NumValue', 'PartType', 'Polite', 'StyleVariant',
'PronType', 'AdjType', 'Person', 'Variant', 'AdpType',
'Reflex', 'Negative', 'Mood', 'Aspect', 'Case',
'Polarity', # U20
]

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals, print_function
from ..language import Language

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
from spacy.language_data import strings_to_exc, update_exc

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
# Source: উচ্চতর বাংলা ব্যাকরণ ও রচনা - অধ্যাপক নিরঞ্জন অধিকারী ও অধ্যাপক ড. সফিউদ্দিন আহমদ

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
from ..language_data import PRON_LEMMA

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
from ..language_data.punctuation import ALPHA_LOWER, LIST_ELLIPSES, QUOTES, ALPHA_UPPER, LIST_QUOTES, UNITS, \

View File

@ -1,32 +1,44 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
STOP_WORDS = set("""
অতএব অথচ অথব অন অন অন অন অনতত অন অবধি অবশ অর
আই আগ আগ আগ আছ আজ আদযভ আপন আপনি আব আমর আম আম আম আমি আর আরও
ইতি ইহ উচি উনি উপর উপর
এই এক একই একজন একট একটি একব এক এখন এখনও এখ এখ এট এট এটি এত এতট এত এদ এব এব এমন এমনক এর এর এল এস এস
অতএব অথচ অথব অন অন অন অন অনতত অবধি অবশ অর অন অন অরধভ
আগ আগ আগ আছ আজ আদযভ আপন আপনি আব আমর আম আম আম আমি আর আরও
ইতি ইহ
উচি উনি উপর উপর উততর
এই এক একই একজন একট একটি একব এক এখন এখনও এখ এখ এট এস
এট এটি এত এতট এত এদ এব এব এমন এমনি এমনকি এর এর এল এস এস
ওই ওক ওখ ওদ ওর ওর
কখনও কত কথ কব কয কযকটি করছ করছ করত করব করব করল করল কর কর কর কর করি করি করি করি কর কর করি কর কর কর উক রও রণ ি ি ি ি ি উই নও
ি ি ি
চল
ি ি
জন জনক জন জন জনযওজ নত ি ি
ি ি তখন তত তথ তব তব রপর হল িনঐ িি িি ি মন
কব কব
ি ি ি ি ি ি ি ি ওয ওয খত
কখনও কত কথ কব কয কযকটি করছ করছ করত করব করব করল কয় কয়কটি করি করি কর
করল কর কর কর কর করি করি করি করি কর কর করি কর কর কর উক
রও রণ ি ি ি ি ি উই নও মন ি
ি ি ি ি ি
চল
ি ি
জন জনক জন জন জন নত ি ি ি ি
ি
ি
তখন তত তথ তব তব রপর রই হল িনই
িি িি ি মন
কব কব
ি ি ি ি ি ি ি ি ওয ওয খত
ি ি ওয় ওয় ি
ধর ধর
নয ি ি ি ি ি ি ি ি ওয ওয নয়
পক পর পর পর পর পরযন ওয ি রতি রভি
নয ি ি ি ি ি ি ি ি ওয নয় নত
পক পর পর পর পর পরযন ওয ি রতি রভি ওয় রথম থমি
ফল ি
বছর বদল বর বলত বলল বলল বল বল বল বল বস বহ ি িি ি িষযি যবহ
বছর বদল বর বলত বলল বলল বল বল বল বল বস বহ ি িি ি িষযি যবহ বকতব বন ি
মত মত মত মধযভ মধ মধ মধ মন যম
যখন যত যতট যথ যদি যদি ওয ওয িি মন
রকম রয
সঙ সঙ সব সব সমস সমরতি সময় সহ সহি তর ি পষ বয
হইত হইব হইয হওয হওয হওয হচ হত হত হত হন হব হব হয হয হযি হয হয হযি হয হয হল হল হল হল হল ি ি হয় হয় হয়
যখন যত যতট যথ যদি যদি ওয ওয িি
মন
রকম রয রয়
লক
রণ মন সঙ সঙ সব সব সমস সমরতি সময় সহ সহি তর ি পষ বয
হইত হইব হইয হওয হওয হওয হচ হত হত হত হন হব হব হয হয হযি হয হয হযি হয
হয় হল হল হল হল হল ি ি হয় হয় হয় হইয় হয়ি হয় হয়নি হয় হয়ত হওয় হওয় হওয়
""".split())

View File

@ -1,4 +1,4 @@
# encoding: utf8
# coding: utf8
from __future__ import unicode_literals
from ..symbols import *

3
spacy/cli/__init__.py Normal file
View File

@ -0,0 +1,3 @@
from .download import download
from .info import info
from .link import link

75
spacy/cli/download.py Normal file
View File

@ -0,0 +1,75 @@
# coding: utf8
from __future__ import unicode_literals
import pip
import requests
import os
import subprocess
import sys
from .link import link_package
from .. import about
from .. import util
def download(model=None, direct=False):
check_error_depr(model)
if direct:
download_model('{m}/{m}.tar.gz'.format(m=model))
else:
model_name = about.__shortcuts__[model] if model in about.__shortcuts__ else model
compatibility = get_compatibility()
version = get_version(model_name, compatibility)
download_model('{m}-{v}/{m}-{v}.tar.gz'.format(m=model_name, v=version))
link_package(model_name, model, force=True)
def get_compatibility():
version = about.__version__
r = requests.get(about.__compatibility__)
if r.status_code != 200:
util.sys_exit(
"Couldn't fetch compatibility table. Please find the right model for "
"your spaCy installation (v{v}), and download it manually:".format(v=version),
"python -m spacy.download [full model name + version] --direct",
title="Server error ({c})".format(c=r.status_code))
comp = r.json()['spacy']
if version not in comp:
util.sys_exit(
"No compatible models found for v{v} of spaCy.".format(v=version),
title="Compatibility error")
else:
return comp[version]
def get_version(model, comp):
if model not in comp:
util.sys_exit(
"No compatible model found for "
"{m} (spaCy v{v}).".format(m=model, v=about.__version__),
title="Compatibility error")
return comp[model][0]
def download_model(filename):
util.print_msg("Downloading {f}".format(f=filename))
download_url = about.__download_url__ + '/' + filename
subprocess.call([sys.executable, '-m', 'pip', 'install', download_url],
env=os.environ.copy())
def check_error_depr(model):
if not model:
util.sys_exit(
"python -m spacy.download [name or shortcut]",
title="Missing model name or shortcut")
if model == 'all':
util.sys_exit(
"As of v1.7.0, the download all command is deprecated. Please "
"download the models individually via spacy.download [model name] "
"or pip install. For more info on this, see the documentation: "
"{d}".format(d=about.__docs__),
title="Deprecated command")

53
spacy/cli/info.py Normal file
View File

@ -0,0 +1,53 @@
# coding: utf8
from __future__ import unicode_literals
import platform
from pathlib import Path
from .. import about
from .. import util
def info(model=None, markdown=False):
if model:
data = util.parse_package_meta(util.get_data_path(), model, require=True)
model_path = Path(__file__).parent / util.get_data_path() / model
if model_path.resolve() != model_path:
data['link'] = str(model_path)
data['source'] = str(model_path.resolve())
else:
data['source'] = str(model_path)
print_info(data, "model " + model, markdown)
else:
data = get_spacy_data()
print_info(data, "spaCy", markdown)
def print_info(data, title, markdown):
title = "Info about {title}".format(title=title)
if markdown:
util.print_markdown(data, title=title)
else:
util.print_table(data, title=title)
def get_spacy_data():
return {
'spaCy version': about.__version__,
'Location': str(Path(__file__).parent.parent),
'Platform': platform.platform(),
'Python version': platform.python_version(),
'Installed models': ', '.join(list_models())
}
def list_models():
# exclude common cache directories. This means models called "cache" etc.
# won't show up in the list, but it seems worth it
exclude = ['cache', 'pycache', '__pycache__']
data_path = util.get_data_path()
models = [f.parts[-1] for f in data_path.iterdir() if f.is_dir()]
return [m for m in models if m not in exclude]
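# A rough sketch of programmatic use, mirroring what the command line does
# ('en' is only an example of a linked model shortcut):
#
#     from spacy.cli import info
#     info()                      # table describing the spaCy installation
#     info('en', markdown=True)   # Markdown details for the linked model 'en'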

Some files were not shown because too many files have changed in this diff