Merge branch 'devel'

This commit is contained in:
Federico Di Gregorio 2011-02-27 13:03:48 +01:00
commit 29f83f05c4
57 changed files with 1296 additions and 623 deletions

View File

@ -42,7 +42,7 @@ The included Makefile allows to run all the tests included in the
distribution. Just use:
make
make runtests
make check
The tests are run against a database called psycopg2_test on unix socket
and standard port. You can configure a different database to run the test

View File

@ -1,4 +1,4 @@
recursive-include psycopg *.c *.h
recursive-include psycopg *.c *.h *.manifest
recursive-include lib *.py
recursive-include tests *.py
recursive-include ZPsycopgDA *.py *.gif *.dtml
@ -12,5 +12,5 @@ recursive-include doc/html *
prune doc/src/_build
recursive-include scripts *.py *.sh
include scripts/maketypes.sh scripts/buildtypes.py
include AUTHORS README INSTALL LICENSE NEWS-2.0 NEWS-2.3 ChangeLog
include AUTHORS README INSTALL LICENSE NEWS ChangeLog
include PKG-INFO MANIFEST.in MANIFEST setup.py setup.cfg Makefile

NEWS
View File

@ -3,33 +3,56 @@ What's new in psycopg 2.4
* New features and changes:
- Added `register_composite()` function to cast PostgreSQL composite types
into Python tuples/namedtuples.
- More efficient iteration on named cursors.
- The build script refuses to guess values if pg_config is not found.
- Connections and cursors are weakly referenceable.
- Added 'b' and 't' mode to large objects: write can deal with both bytes
strings and unicode; read can return either bytes strings or decoded
unicode.
- COPY sends Unicode data to files implementing io.TextIOBase.
- The build script refuses to guess values if pg_config is not found.
- Improved PostgreSQL-Python encodings mapping. Added a few
missing encodings: EUC_CN, EUC_JIS_2004, ISO885910, ISO885916,
LATIN10, SHIFT_JIS_2004.
- Dropped repeated dictionary lookups with unicode query/parameters.
- Empty lists correctly roundtrip Python -> PostgreSQL -> Python.
- Added support for Python 3.1 and 3.2. The conversion has also
brought several improvements:
- Added 'b' and 't' mode to large objects: write can deal with both
bytes strings and unicode; read can return either bytes strings
or decoded unicode.
- COPY sends Unicode data to files implementing 'io.TextIOBase'.
- Improved PostgreSQL-Python encodings mapping.
- Added a few missing encodings: EUC_CN, EUC_JIS_2004, ISO885910,
ISO885916, LATIN10, SHIFT_JIS_2004.
- Dropped repeated dictionary lookups with unicode query/parameters.
- Improvements to the named cursors:
- More efficient iteration on named cursors, fetching 'itersize'
records at a time from the backend.
- The name of a named cursor can be an invalid identifier.
- Improvements in data handling:
- Added 'register_composite()' function to cast PostgreSQL
composite types into Python tuples/namedtuples.
- Adapt types 'bytearray' (from Python 2.6), 'memoryview' (from
Python 2.7) and other objects implementing the "Revised Buffer
Protocol" to 'bytea' data type.
- The 'hstore' adapter can work even when the data type is not
installed in the 'public' namespace.
- Raise a clean exception instead of returning bad data when
receiving bytea in 'hex' format and the client libpq can't parse
them.
- Empty lists correctly roundtrip Python -> PostgreSQL -> Python.
- Other changes:
- 'cursor.description' is provided as named tuples if available.
- The build script refuses to guess values if 'pg_config' is not
found.
- Connections and cursors are weakly referenceable.
* Bug fixes:
- Fixed adaptation of None in composite types (ticket #26). Bug report by
Karsten Hilbert.
- Fixed adaptation of None in composite types (ticket #26). Bug
report by Karsten Hilbert.
- Fixed several reference leaks in less common code paths.
- Fixed segfault when a large object is closed and its connection no more
available.
- Added missing icon to ZPsycopgDA package, not available in Zope 2.12.9
(ticket #30). Bug report and patch by Pumukel.
- Fixed conversion of negative infinity (ticket #40). Bug report and patch
by Marti Raudsepp.
- Fixed segfault when a large object is closed and its connection is no
longer available.
- Added missing icon to ZPsycopgDA package, not available in Zope
2.12.9 (ticket #30). Bug report and patch by Pumukel.
- Fixed conversion of negative infinity (ticket #40). Bug report and
patch by Marti Raudsepp.
What's new in psycopg 2.3.2

View File

@ -16,7 +16,7 @@
# their work without bothering about the module dependencies.
ALLOWED_PSYCOPG_VERSIONS = ('2.4-beta1', '2.4-beta2')
ALLOWED_PSYCOPG_VERSIONS = ('2.4-beta1', '2.4-beta2', '2.4')
import sys
import time

View File

@ -103,14 +103,15 @@ There are two basic ways to have a Python object adapted to SQL:
viable if you are the author of the object and if the object is specifically
designed for the database (i.e. having Psycopg as a dependency and polluting
its interface with the required methods doesn't bother you). For a simple
example you can take a look to the source code for the
example you can take a look at the source code for the
`psycopg2.extras.Inet` object.
- If implementing the `!ISQLQuote` interface directly in the object is not an
option, you can use an adaptation function, taking the object to be adapted
as argument and returning a conforming object. The adapter must be
option (maybe because the object to adapt comes from a third party library),
you can use an *adaptation function*, taking the object to be adapted as
argument and returning a conforming object. The adapter must be
registered via the `~psycopg2.extensions.register_adapter()` function. A
simple example wrapper is the `!psycopg2.extras.UUID_adapter` used by the
simple example wrapper is `!psycopg2.extras.UUID_adapter` used by the
`~psycopg2.extras.register_uuid()` function.
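For example, a minimal sketch of such an adaptation function (the `!Point`
class and its textual representation are hypothetical, not part of Psycopg)::

    from psycopg2.extensions import adapt, register_adapter, AsIs

    class Point(object):
        def __init__(self, x, y):
            self.x = x
            self.y = y

    def adapt_point(point):
        # build the SQL literal reusing the adapters of the wrapped values
        return AsIs("'(%s, %s)'" % (adapt(point.x), adapt(point.y)))

    register_adapter(Point, adapt_point)

After registration, `!Point` instances passed as query parameters are
converted by `!adapt_point()`.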
A convenient object to write adapters is the `~psycopg2.extensions.AsIs`
@ -254,7 +255,7 @@ wasting resources.
A simple application could poll the connection from time to time to check if
something new has arrived. A better strategy is to use some I/O completion
function such as |select()|_ to sleep until awaken from the kernel when there is
function such as :py:func:`~select.select` to sleep until awakened by the kernel when there is
some data to read on the connection, thereby using no CPU unless there is
something to read::
@ -288,9 +289,9 @@ in a separate :program:`psql` shell, the output may look similar to::
Timeout
...
Notice that the payload is only available from PostgreSQL 9.0: notifications
received from a previous version server will have the `!payload` attribute set
to the empty string.
Note that the payload is only available from PostgreSQL 9.0: notifications
received from a previous version server will have the
`~psycopg2.extensions.Notify.payload` attribute set to the empty string.
.. versionchanged:: 2.3
Added `~psycopg2.extensions.Notify` object and handling notification
@ -321,7 +322,7 @@ descriptor and `~connection.poll()` to make communication proceed according to
the current connection state.
The following is an example loop using methods `!fileno()` and `!poll()`
together with the Python |select()|_ function in order to carry on
together with the Python :py:func:`~select.select` function in order to carry on
asynchronous operations with Psycopg::
def wait(conn):
@ -336,9 +337,6 @@ asynchronous operations with Psycopg::
else:
raise psycopg2.OperationalError("poll() returned %s" % state)
.. |select()| replace:: `!select()`
.. _select(): http://docs.python.org/library/select.html#select.select
The above loop of course would block an entire application: in a real
asynchronous framework, `!select()` would be called on many file descriptors
waiting for any of them to be ready. Nonetheless the function can be used to
@ -371,7 +369,7 @@ client and available using the regular cursor methods:
42
When an asynchronous query is being executed, `connection.isexecuting()` returns
`True`. Two cursors can't execute concurrent queries on the same asynchronous
`!True`. Two cursors can't execute concurrent queries on the same asynchronous
connection.
There are several limitations in using asynchronous connections: the

View File

@ -23,7 +23,7 @@ sys.path.append(os.path.abspath('tools/lib'))
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.todo', 'sphinx.ext.ifconfig',
'sphinx.ext.doctest']
'sphinx.ext.doctest', 'sphinx.ext.intersphinx' ]
# Specific extensions for Psycopg documentation.
extensions += [ 'dbapi_extension', 'sql_role' ]
@ -42,7 +42,7 @@ master_doc = 'index'
# General information about the project.
project = u'Psycopg'
copyright = u'2001-2010, Federico Di Gregorio. Documentation by Daniele Varrazzo'
copyright = u'2001-2011, Federico Di Gregorio. Documentation by Daniele Varrazzo'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
@ -50,14 +50,21 @@ copyright = u'2001-2010, Federico Di Gregorio. Documentation by Daniele Varrazzo
#
# The short X.Y version.
version = '2.0'
# The full version, including alpha/beta/rc tags.
try:
import psycopg2
release = psycopg2.__version__.split()[0]
version = '.'.join(release.split('.')[:2])
except ImportError:
print "WARNING: couldn't import psycopg to read version."
release = version
intersphinx_mapping = {
'py': ('http://docs.python.org/', None),
'py3': ('http://docs.python.org/3.2', None),
}
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None

View File

@ -25,11 +25,24 @@ The ``connection`` class
Return a new `cursor` object using the connection.
If `name` is specified, the returned cursor will be a *server
side* (or *named*) cursor. Otherwise the cursor will be *client side*.
See :ref:`server-side-cursors` for further details.
If *name* is specified, the returned cursor will be a :ref:`server
side cursor <server-side-cursors>` (also known as *named cursor*).
Otherwise it will be a regular *client side* cursor.
The `cursor_factory` argument can be used to create non-standard
The name can be a string not valid as a PostgreSQL identifier: for
example it may start with a digit and contain non-alphanumeric
characters and quotes.
.. versionchanged:: 2.4
previously only valid PostgreSQL identifiers were accepted as
cursor name.
.. warning::
It is unsafe to expose the *name* to an untrusted source; for
instance, you shouldn't allow *name* to be read from an HTML form.
Consider it as part of the query, not as a query parameter.
The *cursor_factory* argument can be used to create non-standard
cursors. The class returned should be a subclass of
`psycopg2.extensions.cursor`. See :ref:`subclassing-cursor` for
details.
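A quick sketch of the different ways to create a cursor (the connection
string and the cursor name are placeholders)::

    import psycopg2
    import psycopg2.extras

    conn = psycopg2.connect("dbname=test")

    cur = conn.cursor()                              # regular client side cursor
    named = conn.cursor(name='my_cursor')            # server side (named) cursor
    dict_cur = conn.cursor(
        cursor_factory=psycopg2.extras.DictCursor)   # non-standard cursor class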
@ -62,8 +75,8 @@ The ``connection`` class
.. method:: close()
Close the connection now (rather than whenever `__del__()` is
called). The connection will be unusable from this point forward; an
Close the connection now (rather than whenever `del` is executed).
The connection will be unusable from this point forward; an
`~psycopg2.InterfaceError` will be raised if any operation is
attempted with the connection. The same applies to all cursor objects
trying to use the connection. Note that closing a connection without
@ -124,9 +137,10 @@ The ``connection`` class
constraints are explained in :ref:`tpc`.
The values passed to the method will be available on the returned
object as the members `!format_id`, `!gtrid`, `!bqual`. The object
also allows accessing to these members and unpacking as a 3-items
tuple.
object as the members `~psycopg2.extensions.Xid.format_id`,
`~psycopg2.extensions.Xid.gtrid`, `~psycopg2.extensions.Xid.bqual`.
The object also allows accessing these members and unpacking as a
3-item tuple.
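For example, a minimal sketch (the transaction identifiers below are
arbitrary placeholders)::

    xid = conn.xid(42, 'my-transaction-id', 'my-branch-qualifier')

    xid.format_id, xid.gtrid, xid.bqual   # accessible as attributes
    format_id, gtrid, bqual = xid         # or unpacked as a 3-item tuple

    conn.tpc_begin(xid)                   # start the two-phase transaction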
.. method:: tpc_begin(xid)
@ -230,7 +244,7 @@ The ``connection`` class
If a transaction was not initiated by Psycopg, the returned Xids will
have attributes `~psycopg2.extensions.Xid.format_id` and
`~psycopg2.extensions.Xid.bqual` set to `None` and the
`~psycopg2.extensions.Xid.bqual` set to `!None` and the
`~psycopg2.extensions.Xid.gtrid` set to the PostgreSQL transaction ID: such Xids are still
usable for recovery. Psycopg uses the same algorithm of the
`PostgreSQL JDBC driver`__ to encode a XA triple in a string, so
@ -418,7 +432,7 @@ The ``connection`` class
``session_authorization``, ``DateStyle``, ``TimeZone``,
``integer_datetimes``, and ``standard_conforming_strings``.
If server did not report requested parameter, return ``None``.
If the server did not report the requested parameter, return `!None`.
.. seealso:: libpq docs for `PQparameterStatus()`__ for details.
@ -499,8 +513,8 @@ The ``connection`` class
a new large object and have its OID assigned automatically.
:param mode: Access mode to the object, see below.
:param new_oid: Create a new object using the specified OID. The
function raises `OperationalError` if the OID is already in
use. Default is 0, meaning assign a new one automatically.
function raises `~psycopg2.OperationalError` if the OID is already
in use. Default is 0, meaning assign a new one automatically.
:param new_file: The name of a file to be imported into the database
(using the |lo_import|_ function)
:param lobject_factory: Subclass of
@ -518,8 +532,8 @@ The ``connection`` class
``w`` Open for write only
``rw`` Open for read/write
``n`` Don't open the file
``b`` Don't decode read data (return data as `str` in Python 2 or `bytes` in Python 3)
``t`` Decode read data according to `connection.encoding` (return data as `unicode` in Python 2 or `str` in Python 3)
``b`` Don't decode read data (return data as `!str` in Python 2 or `!bytes` in Python 3)
``t`` Decode read data according to `connection.encoding` (return data as `!unicode` in Python 2 or `!str` in Python 3)
======= =========
``b`` and ``t`` can be specified together with a read/write mode. If
@ -528,7 +542,7 @@ The ``connection`` class
.. versionadded:: 2.0.8
.. versionchanged:: 2.3.3 added ``b`` and ``t`` mode and unicode
.. versionchanged:: 2.4 added ``b`` and ``t`` mode and unicode
support.
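A minimal usage sketch, assuming `!conn` is an open connection (the data
written is just a placeholder)::

    lo = conn.lobject(0, 'wt')          # create a new large object, text mode
    lo.write(u'some unicode text')      # encoded using connection.encoding
    oid = lo.oid
    lo.close()

    lo = conn.lobject(oid, 'rb')        # reopen the same object in binary mode
    data = lo.read()                    # returned undecoded (str/bytes)
    lo.close()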
@ -571,7 +585,7 @@ The ``connection`` class
.. method:: isexecuting()
Return `True` if the connection is executing an asynchronous operation.
Return `!True` if the connection is executing an asynchronous operation.
.. testcode::

View File

@ -39,42 +39,55 @@ The ``cursor`` class
This read-only attribute is a sequence of 7-item sequences.
Each of these sequences contains information describing one result
column:
Each of these sequences is a named tuple (a regular tuple if
:func:`collections.namedtuple` is not available) containing information
describing one result column:
- ``name``
- ``type_code``
- ``display_size``
- ``internal_size``
- ``precision``
- ``scale``
- ``null_ok``
0. `!name`: the name of the column returned.
1. `!type_code`: the PostgreSQL OID of the column. You can use the
|pg_type|_ system table to get more information about the type.
This is the value used by Psycopg to decide what Python type to
use to represent the value. See also
:ref:`type-casting-from-sql-to-python`.
2. `!display_size`: the actual length of the column in bytes.
Obtaining this value is computationally intensive, so it is
always `!None` unless the :envvar:`PSYCOPG_DISPLAY_SIZE` parameter
is set at compile time. See also PQgetlength_.
3. `!internal_size`: the size in bytes of the column on the
server. Set to a negative value for
variable-size types. See also PQfsize_.
4. `!precision`: total number of significant digits in columns of
type |NUMERIC|_. `!None` for other types.
5. `!scale`: count of decimal digits in the fractional part in
columns of type |NUMERIC|. `!None` for other types.
6. `!null_ok`: always `!None` as it is not easy to retrieve from the libpq.
The first two items (``name`` and ``type_code``) are always specified,
the other five are optional and are set to ``None`` if no meaningful
values can be provided.
This attribute will be ``None`` for operations that do not return rows
This attribute will be `!None` for operations that do not return rows
or if the cursor has not had an operation invoked via the
|execute*|_ methods yet.
The ``type_code`` can be interpreted by comparing it to the Type
Objects specified in the section :ref:`type-objects-and-constructors`.
It is also used to register typecasters to convert PostgreSQL types to
Python objects: see :ref:`type-casting-from-sql-to-python`.
.. |pg_type| replace:: :sql:`pg_type`
.. _pg_type: http://www.postgresql.org/docs/9.0/static/catalog-pg-type.html
.. _PQgetlength: http://www.postgresql.org/docs/9.0/static/libpq-exec.html#LIBPQ-PQGETLENGTH
.. _PQfsize: http://www.postgresql.org/docs/9.0/static/libpq-exec.html#LIBPQ-PQFSIZE
.. _NUMERIC: http://www.postgresql.org/docs/9.0/static/datatype-numeric.html#DATATYPE-NUMERIC-DECIMAL
.. |NUMERIC| replace:: :sql:`NUMERIC`
.. versionchanged:: 2.4
if possible, column descriptions are named tuples instead of
regular tuples.
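A quick interactive sketch, assuming `!cur` is a cursor on an open
connection (23 is the OID of the PostgreSQL :sql:`integer` type)::

    >>> cur.execute("SELECT 1 AS answer")
    >>> cur.description[0].name
    'answer'
    >>> cur.description[0].type_code
    23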
.. method:: close()
Close the cursor now (rather than whenever `!__del__()` is
called). The cursor will be unusable from this point forward; an
Close the cursor now (rather than whenever `del` is executed).
The cursor will be unusable from this point forward; an
`~psycopg2.InterfaceError` will be raised if any operation is
attempted with the cursor.
.. attribute:: closed
Read-only boolean attribute: specifies if the cursor is closed
(``True``) or not (``False``).
(`!True`) or not (`!False`).
.. extension::
@ -93,7 +106,7 @@ The ``cursor`` class
.. attribute:: name
Read-only attribute containing the name of the cursor if it was
creates as named cursor by `connection.cursor()`, or ``None`` if
created as a named cursor by `connection.cursor()`, or `!None` if
it is a client side cursor. See :ref:`server-side-cursors`.
.. extension::
@ -118,7 +131,7 @@ The ``cursor`` class
positional (``%s``) or named (:samp:`%({name})s`) placeholders. See
:ref:`query-parameters`.
The method returns `None`. If a query was executed, the returned
The method returns `!None`. If a query was executed, the returned
values can be retrieved using |fetch*|_ methods.
@ -147,12 +160,6 @@ The ``cursor`` class
be made available through the standard |fetch*|_ methods.
.. method:: setinputsizes(sizes)
This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it.
.. method:: mogrify(operation [, parameters])
Return a query string after arguments binding. The string returned is
@ -166,19 +173,10 @@ The ``cursor`` class
The `mogrify()` method is a Psycopg extension to the |DBAPI|.
.. method:: cast(oid, s)
Convert a value from the PostgreSQL string representation to a Python
object.
Use the most specific of the typecasters registered by
`~psycopg2.extensions.register_type()`.
.. versionadded:: 2.3.3
.. extension::
The `cast()` method is a Psycopg extension to the |DBAPI|.
.. method:: setinputsizes(sizes)
This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it.
@ -208,16 +206,16 @@ The ``cursor`` class
(2, None, 'dada')
(3, 42, 'bar')
.. versionchanged:: 2.3.3
.. versionchanged:: 2.4
iterating over a :ref:`named cursor <server-side-cursors>`
fetches `~cursor.arraysize` records at time from the backend.
fetches `~cursor.itersize` records at a time from the backend.
Previously only one record was fetched per roundtrip, resulting
in unefficient iteration.
in a large overhead.
.. method:: fetchone()
Fetch the next row of a query result set, returning a single tuple,
or ``None`` when no more data is available:
or `!None` when no more data is available:
>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone()
@ -306,18 +304,20 @@ The ``cursor`` class
time with `~cursor.fetchmany()`. It defaults to 1 meaning to fetch
a single row at a time.
The attribute is also used when iterating a :ref:`named cursor
<server-side-cursors>`: when syntax such as ``for i in cursor:`` is
used, in order to avoid an excessive number of network roundtrips, the
cursor will actually fetch `!arraysize` records at time from the
backend. For this task the default value of 1 is a poor value: if
`!arraysize` is 1, a default value of 2000 will be used instead. If
you really want to retrieve one record at time from the backend use
`fetchone()` in a loop.
.. versionchanged:: 2.3.3
`!arraysize` used in named cursor iteration.
.. attribute:: itersize
Read/write attribute specifying the number of rows to fetch from the
backend at each network roundtrip during :ref:`iteration
<cursor-iterable>` on a :ref:`named cursor <server-side-cursors>`. The
default is 2000.
.. versionadded:: 2.4
.. extension::
The `itersize` attribute is a Psycopg extension to the |DBAPI|.
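A short sketch of tuning the iteration of a :ref:`named cursor
<server-side-cursors>` (the table name and the processing function are
placeholders)::

    cur = conn.cursor(name='my_cursor')   # server side cursor
    cur.itersize = 10000                  # rows fetched at each roundtrip
    cur.execute("SELECT * FROM big_table")
    for record in cur:
        handle(record)                    # hypothetical per-record processing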
.. attribute:: rowcount
@ -333,14 +333,14 @@ The ``cursor`` class
.. note::
The |DBAPI|_ interface reserves the right to redefine the latter case to
have the object return ``None`` instead of -1 in future versions
have the object return `!None` instead of -1 in future versions
of the specification.
.. attribute:: rownumber
This read-only attribute provides the current 0-based index of the
cursor in the result set or ``None`` if the index cannot be
cursor in the result set or `!None` if the index cannot be
determined.
The index can be seen as index of the cursor in a sequence (the result
@ -355,12 +355,14 @@ The ``cursor`` class
This read-only attribute provides the OID of the last row inserted
by the cursor. If the table wasn't created with OID support or the
last operation is not a single record insert, the attribute is set to
``None``.
`!None`.
PostgreSQL currently advices to not create OIDs on the tables and the
default for |CREATE-TABLE|__ is to not support them. The
|INSERT-RETURNING|__ syntax available from PostgreSQL 8.3 allows more
flexibility.
.. note::
PostgreSQL currently advises not to create OIDs on the tables and
the default for |CREATE-TABLE|__ is to not support them. The
|INSERT-RETURNING|__ syntax available from PostgreSQL 8.3 allows
more flexibility.
.. |CREATE-TABLE| replace:: :sql:`CREATE TABLE`
.. __: http://www.postgresql.org/docs/9.0/static/sql-createtable.html
@ -369,22 +371,10 @@ The ``cursor`` class
.. __: http://www.postgresql.org/docs/9.0/static/sql-insert.html
.. method:: nextset()
This method is not supported (PostgreSQL does not have multiple data
sets) and will raise a `~psycopg2.NotSupportedError` exception.
.. method:: setoutputsize(size [, column])
This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it.
.. attribute:: query
Read-only attribute containing the body of the last query sent to the
backend (including bound arguments). ``None`` if no query has been
backend (including bound arguments). `!None` if no query has been
executed yet:
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
@ -411,14 +401,40 @@ The ``cursor`` class
|DBAPI|.
.. method:: cast(oid, s)
Convert a value from the PostgreSQL string representation to a Python
object.
Use the most specific of the typecasters registered by
`~psycopg2.extensions.register_type()`.
.. versionadded:: 2.4
.. extension::
The `cast()` method is a Psycopg extension to the |DBAPI|.
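A small interactive sketch, assuming the default typecasters are registered
(23 and 1700 are the OIDs of the PostgreSQL :sql:`integer` and :sql:`numeric`
types)::

    >>> cur.cast(23, '42')
    42
    >>> cur.cast(1700, '3.14')
    Decimal('3.14')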
.. attribute:: tzinfo_factory
The time zone factory used to handle data types such as
:sql:`TIMESTAMP WITH TIME ZONE`. It should be a |tzinfo|_ object.
See also the `psycopg2.tz` module.
:sql:`TIMESTAMP WITH TIME ZONE`. It should be a `~datetime.tzinfo`
object. A few implementations are available in the `psycopg2.tz`
module.
.. method:: nextset()
This method is not supported (PostgreSQL does not have multiple data
sets) and will raise a `~psycopg2.NotSupportedError` exception.
.. method:: setoutputsize(size [, column])
This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it.
.. |tzinfo| replace:: `!tzinfo`
.. _tzinfo: http://docs.python.org/library/datetime.html#tzinfo-objects
.. rubric:: COPY-related methods
@ -430,15 +446,15 @@ The ``cursor`` class
.. method:: copy_from(file, table, sep='\\t', null='\\N', columns=None)
Read data *from* the file-like object `file` appending them to
the table named `table`. `file` must have both
Read data *from* the file-like object *file* appending them to
the table named *table*. *file* must have both
`!read()` and `!readline()` methods. See :ref:`copy` for an
overview.
The optional argument `sep` is the columns separator and
`null` represents :sql:`NULL` values in the file.
The optional argument *sep* is the columns separator and
*null* represents :sql:`NULL` values in the file.
The `columns` argument is a sequence containing the name of the
The *columns* argument is a sequence containing the names of the
fields where the read data will be entered. Its length and column
type should match the content of the read file. If not specified, it
is assumed that the entire table matches the file structure.
@ -450,20 +466,24 @@ The ``cursor`` class
[(6, 42, 'foo'), (7, 74, 'bar')]
.. versionchanged:: 2.0.6
added the `columns` parameter.
added the *columns* parameter.
.. versionchanged:: 2.4
data read from files implementing the `io.TextIOBase` interface
are encoded in the connection `~connection.encoding` when sent to
the backend.
.. method:: copy_to(file, table, sep='\\t', null='\\N', columns=None)
Write the content of the table named `table` *to* the file-like
object `file`. `file` must have a `!write()` method.
Write the content of the table named *table* *to* the file-like
object *file*. *file* must have a `!write()` method.
See :ref:`copy` for an overview.
The optional argument `sep` is the columns separator and
`null` represents :sql:`NULL` values in the file.
The optional argument *sep* is the columns separator and
*null* represents :sql:`NULL` values in the file.
The `columns` argument is a sequence of field names: if not
``None`` only the specified fields will be included in the dump.
The *columns* argument is a sequence of field names: if not
`!None` only the specified fields will be included in the dump.
>>> cur.copy_to(sys.stdout, 'test', sep="|")
1|100|abc'def
@ -471,7 +491,12 @@ The ``cursor`` class
...
.. versionchanged:: 2.0.6
added the `columns` parameter.
added the *columns* parameter.
.. versionchanged:: 2.4
data sent to files implementing the `io.TextIOBase` interface
are decoded in the connection `~connection.encoding` when read
from the backend.
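A minimal sketch using in-memory files (the ``test`` table is the one used
in the examples above; use `!io.StringIO` on Python 3)::

    from StringIO import StringIO

    f = StringIO("8\t74\tbaz\n9\t75\tqux\n")
    cur.copy_from(f, 'test', columns=('id', 'num', 'data'))

    out = StringIO()
    cur.copy_to(out, 'test', sep='|')
    print(out.getvalue())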
.. method:: copy_expert(sql, file [, size])
@ -480,10 +505,10 @@ The ``cursor`` class
handle all the parameters that PostgreSQL makes available (see
|COPY|__ command documentation).
`file` must be an open, readable file for :sql:`COPY FROM` or an
open, writeable file for :sql:`COPY TO`. The optional `size`
*file* must be an open, readable file for :sql:`COPY FROM` or an
open, writeable file for :sql:`COPY TO`. The optional *size*
argument, when specified for a :sql:`COPY FROM` statement, will be
passed to `file`\ 's read method to control the read buffer
passed to *file*\ 's read method to control the read buffer
size.
>>> cur.copy_expert("COPY test TO STDOUT WITH CSV HEADER", sys.stdout)
@ -497,6 +522,10 @@ The ``cursor`` class
.. versionadded:: 2.0.6
.. versionchanged:: 2.4
files implementing the `io.TextIOBase` interface are dealt with
using Unicode data instead of bytes.
.. testcode::
:hide:

View File

@ -63,7 +63,7 @@ functionalities defined by the |DBAPI|_.
`connection.encoding`) if the file was open in ``t`` mode, a bytes
string for ``b`` mode.
.. versionchanged:: 2.3.3
.. versionchanged:: 2.4
added Unicode support.
.. method:: write(str)
@ -72,7 +72,7 @@ functionalities defined by the |DBAPI|_.
written. Unicode strings are encoded in the `connection.encoding`
before writing.
.. versionchanged:: 2.3.3
.. versionchanged:: 2.4
added Unicode support.
.. method:: export(file_name)
@ -201,10 +201,10 @@ deal with Python objects adaptation:
A conforming object can implement this method if the SQL
representation depends on any server parameter, such as the server
version or the ``standard_conforming_string`` setting. Container
version or the :envvar:`standard_conforming_strings` setting. Container
objects may store the connection and use it to recursively prepare
contained objects: see the implementation for
``psycopg2.extensions.SQL_IN`` for a simple example.
`psycopg2.extensions.SQL_IN` for a simple example.
.. class:: AsIs(object)
@ -303,7 +303,7 @@ details.
*adapter* should have signature :samp:`fun({value}, {cur})` where
*value* is the string representation returned by PostgreSQL and
*cur* is the cursor from which data are read. In case of
:sql:`NULL`, *value* will be ``None``. The adapter should return the
:sql:`NULL`, *value* will be `!None`. The adapter should return the
converted object.
See :ref:`type-casting-from-sql-to-python` for a usage example.
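For instance, a sketch of a custom typecaster for the PostgreSQL :sql:`point`
type (OID 600), returning it as a pair of floats::

    from psycopg2.extensions import new_type, register_type

    def cast_point(value, cur):
        # value is the textual representation, or None for SQL NULL
        if value is None:
            return None
        x, y = value.strip('()').split(',')
        return (float(x), float(y))

    POINT = new_type((600,), 'POINT', cast_point)
    register_type(POINT, cur)    # omit the cursor to register it globally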

View File

@ -83,7 +83,7 @@ Real dictionary cursor
.. versionadded:: 2.3
These objects require `!collection.namedtuple()` to be found, so it is
These objects require :py:func:`collections.namedtuple` to be found, so it is
available out-of-the-box only from Python 2.6. However, the namedtuple
implementation is compatible with previous Python versions, so all you
have to do is to `download it`__ and make it available where we
@ -143,11 +143,11 @@ been greatly improved in capacity and usefulness with the addiction of many
functions. It supports GiST or GIN indexes allowing search by keys or
key/value pairs as well as regular BTree indexes for equality, uniqueness etc.
Psycopg can convert Python `dict` objects to and from |hstore| structures.
Only dictionaries with string/unicode keys and values are supported. `None`
is also allowed as value. Psycopg uses a more efficient |hstore|
Psycopg can convert Python `!dict` objects to and from |hstore| structures.
Only dictionaries with string/unicode keys and values are supported. `!None`
is also allowed as a value but not as a key. Psycopg uses a more efficient |hstore|
representation when dealing with PostgreSQL 9.0 but previous server versions
are supportes as well. By default the adapter/typecaster are disabled: they
are supported as well. By default the adapter/typecaster are disabled: they
can be enabled using the `register_hstore()` function.
.. autofunction:: register_hstore
@ -165,11 +165,11 @@ can be enabled using the `register_hstore()` function.
Composite types casting
^^^^^^^^^^^^^^^^^^^^^^^
.. versionadded:: 2.3.3
.. versionadded:: 2.4
Using `register_composite()` it is possible to cast a PostgreSQL composite
type (e.g. created with the |CREATE TYPE|_ command) into a Python named tuple, or
into a regular tuple if `!collections.namedtuple()` is not found.
into a regular tuple if :py:func:`collections.namedtuple` is not found.
.. |CREATE TYPE| replace:: :sql:`CREATE TYPE`
.. _CREATE TYPE: http://www.postgresql.org/docs/9.0/static/sql-createtype.html
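A minimal interactive sketch, assuming `!psycopg2.extras` is imported, `!cur`
is a cursor on an open connection and `!namedtuple` is available (the ``card``
type is created just for the example)::

    >>> cur.execute("CREATE TYPE card AS (value int, suit text)")
    >>> psycopg2.extras.register_composite('card', cur)
    >>> cur.execute("SELECT (8, 'hearts')::card")
    >>> cur.fetchone()[0]
    card(value=8, suit='hearts')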

View File

@ -73,7 +73,7 @@ I try to execute a query but it fails with the error *not all arguments converte
>>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar",)) # correct
>>> cur.execute("INSERT INTO foo VALUES (%s)", ["bar"]) # correct
My database is Unicode, but I receive all the strings as UTF-8 `str`. Can I receive `unicode` objects instead?
My database is Unicode, but I receive all the strings as UTF-8 `!str`. Can I receive `!unicode` objects instead?
The following magic formula will do the trick::
psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
@ -100,13 +100,14 @@ Transferring binary data from PostgreSQL 9.0 doesn't work.
earlier. Three options to solve the problem are:
- set the bytea_output__ parameter to ``escape`` in the server;
- use ``SET bytea_output TO escape`` in the client before reading binary
data;
- execute the database command ``SET bytea_output TO escape;`` in the
session before reading binary data;
- upgrade the libpq library on the client to at least 9.0.
.. __: http://www.postgresql.org/docs/9.0/static/datatype-binary.html
.. __: http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#GUC-BYTEA-OUTPUT
Best practices
--------------
@ -138,8 +139,8 @@ What are the advantages or disadvantages of using named cursors?
little memory on the client and to skip or discard parts of the result set.
Problems compiling Psycopg from source
--------------------------------------
Problems compiling and deploying psycopg2
-----------------------------------------
.. cssclass:: faq
@ -151,3 +152,14 @@ I can't compile `!psycopg2`: the compiler says *error: libpq-fe.h: No such file
You need to install the development version of the libpq: the package is
usually called ``libpq-dev``.
Psycopg raises *ImportError: cannot import name tz* on import in mod_wsgi / ASP, but it works fine otherwise.
If `!psycopg2` is installed in an egg_ (e.g. because it was installed with
:program:`easy_install`), the user running the program may be unable to
write to the `eggs cache`__. Set the env variable
:envvar:`PYTHON_EGG_CACHE` to a writable directory. With mod_wsgi you can
use the WSGIPythonEggs__ directive.
.. _egg: http://peak.telecommunity.com/DevCenter/PythonEggs
.. __: http://stackoverflow.com/questions/2192323/what-is-the-python-egg-cache-python-egg-cache
.. __: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs

View File

@ -30,13 +30,13 @@ The module interface respects the standard defined in the |DBAPI|_.
The full list of available parameters is:
- `dbname` -- the database name (only in dsn string)
- `database` -- the database name (only as keyword argument)
- `user` -- user name used to authenticate
- `password` -- password used to authenticate
- `host` -- database host address (defaults to UNIX socket if not provided)
- `port` -- connection port number (defaults to 5432 if not provided)
- `sslmode` -- `SSL TCP/IP negotiation`__ mode
- `!dbname` -- the database name (only in dsn string)
- `!database` -- the database name (only as keyword argument)
- `!user` -- user name used to authenticate
- `!password` -- password used to authenticate
- `!host` -- database host address (defaults to UNIX socket if not provided)
- `!port` -- connection port number (defaults to 5432 if not provided)
- `!sslmode` -- `SSL TCP/IP negotiation`__ mode
.. __: http://www.postgresql.org/docs/9.0/static/libpq-ssl.html#LIBPQ-SSL-SSLMODE-STATEMENTS
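Both styles are shown in this short sketch (the credentials are placeholders)::

    import psycopg2

    # keyword arguments (note "database", not "dbname", in this form)
    conn = psycopg2.connect(database="test", user="postgres",
                            password="secret", host="localhost", port=5432)

    # equivalent libpq connection string
    conn = psycopg2.connect("dbname=test user=postgres password=secret "
                            "host=localhost port=5432")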
@ -87,23 +87,23 @@ available through the following exceptions:
.. exception:: Warning
Exception raised for important warnings like data truncations while
inserting, etc. It is a subclass of the Python |StandardError|_.
inserting, etc. It is a subclass of the Python `~exceptions.StandardError`.
.. exception:: Error
Exception that is the base class of all other error exceptions. You can
use this to catch all errors with one single ``except`` statement. Warnings
use this to catch all errors with one single `!except` statement. Warnings
are not considered errors and thus do not use this class as base. It
is a subclass of the Python |StandardError|_.
is a subclass of the Python `!StandardError`.
.. attribute:: pgerror
String representing the error message returned by the backend,
``None`` if not available.
`!None` if not available.
.. attribute:: pgcode
String representing the error code returned by the backend, ``None``
String representing the error code returned by the backend, `!None`
if not available. The `~psycopg2.errorcodes` module contains
symbolic constants representing PostgreSQL error codes.
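For example, a minimal error-handling sketch, assuming `!conn` and `!cur` are
an open connection and cursor (the failing statement and the resulting code
are only illustrative)::

    try:
        cur.execute("INSERT INTO test (num) VALUES ('not a number')")
    except psycopg2.Error as e:
        print(e.pgcode)     # e.g. '22P02'
        print(e.pgerror)    # full message from the backend
        conn.rollback()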
@ -197,7 +197,7 @@ This is the exception inheritance layout:
.. parsed-literal::
|StandardError|
`!StandardError`
\|__ `Warning`
\|__ `Error`
\|__ `InterfaceError`
@ -212,9 +212,6 @@ This is the exception inheritance layout:
\|__ `NotSupportedError`
.. |StandardError| replace:: `!StandardError`
.. _StandardError: http://docs.python.org/library/exceptions.html#exceptions.StandardError
.. _type-objects-and-constructors:

View File

@ -24,7 +24,7 @@ directly into the client application.
.. method:: getconn(key=None)
Get a free connection and assign it to *key* if not ``None``.
Get a free connection and assign it to *key* if not `!None`.
.. method:: putconn(conn, key=None)
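A sketch of typical `!getconn()`/`!putconn()` usage (the pool size and the
connection string are placeholders)::

    from psycopg2 import pool

    connpool = pool.ThreadedConnectionPool(1, 10, "dbname=test")

    conn = connpool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
    finally:
        connpool.putconn(conn)     # return the connection to the pool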

View File

@ -6,8 +6,8 @@
.. module:: psycopg2.tz
This module holds two different tzinfo implementations that can be used as the
`tzinfo` argument to datetime constructors, directly passed to Psycopg
functions or used to set the `cursor.tzinfo_factory` attribute in
`tzinfo` argument to `~datetime.datetime` constructors, directly passed to
Psycopg functions or used to set the `cursor.tzinfo_factory` attribute in
cursors.
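A minimal sketch building a timezone-aware datetime to pass as a query
parameter, using the `!FixedOffsetTimezone` class documented below (assuming
`!cur` is a cursor on an open connection)::

    from datetime import datetime
    from psycopg2.tz import FixedOffsetTimezone

    CET = FixedOffsetTimezone(offset=60, name="CET")   # UTC + 1 hour, in minutes
    dt = datetime(2011, 2, 27, 13, 3, 48, tzinfo=CET)
    cur.execute("SELECT %s::timestamptz", (dt,))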
.. autoclass:: psycopg2.tz.FixedOffsetTimezone

View File

@ -207,49 +207,78 @@ module.
In the following examples the method `~cursor.mogrify()` is used to show
the SQL string that would be sent to the database.
.. _adapt-consts:
.. index::
pair: None; Adaptation
single: NULL; Adaptation
pair: Boolean; Adaptation
- Python ``None`` and boolean values are converted into the proper SQL
literals::
- Python `None` and boolean values `True` and `False` are converted into the
proper SQL literals::
>>> cur.mogrify("SELECT %s, %s, %s;", (None, True, False))
'SELECT NULL, true, false;'
.. _adapt-numbers:
.. index::
single: Adaptation; numbers
single: Integer; Adaptation
single: Float; Adaptation
single: Decimal; Adaptation
- Numeric objects: `!int`, `!long`, `!float`,
`!Decimal` are converted in the PostgreSQL numerical representation::
- Numeric objects: `int`, `long`, `float`, `~decimal.Decimal` are converted into
the PostgreSQL numerical representation::
>>> cur.mogrify("SELECT %s, %s, %s, %s;", (10, 10L, 10.0, Decimal("10.00")))
'SELECT 10, 10, 10.0, 10.00;'
.. _adapt-string:
.. index::
pair: Strings; Adaptation
single: Unicode; Adaptation
- String types: `!str`, `!unicode` are converted in SQL string syntax.
- String types: `str`, `unicode` are converted into SQL string syntax.
`!unicode` objects (`!str` in Python 3) are encoded in the connection
`~connection.encoding` to be sent to the backend: trying to send a character
not supported by the encoding will result in an error. Received data can be
converted either as `!str` or `!unicode`: see :ref:`unicode-handling` for
received, either `!str` or `!unicode`
converted either as `!str` or `!unicode`: see :ref:`unicode-handling`.
.. _adapt-binary:
.. index::
single: Buffer; Adaptation
single: bytea; Adaptation
single: bytes; Adaptation
single: bytearray; Adaptation
single: memoryview; Adaptation
single: Binary string
- Binary types: Python types such as `!bytes`, `!bytearray`, `!buffer`,
`!memoryview` are converted in PostgreSQL binary string syntax, suitable for
:sql:`bytea` fields. Received data is returned as `!buffer` (in Python 2) or
`!memoryview` (in Python 3).
- Binary types: Python types representing binary objects are converted into
PostgreSQL binary string syntax, suitable for :sql:`bytea` fields. Such
types are `buffer` (only available in Python 2), `memoryview` (available
from Python 2.7), `bytearray` (available from Python 2.6) and `bytes`
(only from Python 3: the name is available from Python 2.6 but it's only an
alias for the type `!str`). Any object implementing the `Revised Buffer
Protocol`__ should be usable as a binary type where the protocol is supported
(i.e. from Python 2.6). Received data is returned as `!buffer` (in Python 2)
or `!memoryview` (in Python 3).
.. __: http://www.python.org/dev/peps/pep-3118/
.. versionchanged:: 2.4
only strings were supported before.
.. note::
In Python 2, if you have binary data in a `!str` object, you can pass them
to a :sql:`bytea` field using the `psycopg2.Binary` wrapper::
mypic = open('picture.png', 'rb').read()
curs.execute("insert into blobs (file) values (%s)",
(psycopg2.Binary(mypic),))
.. warning::
@ -261,10 +290,15 @@ the SQL string that would be sent to the database.
`bytea_output`__ parameter to ``escape``, either in the server
configuration or in the client session using a query such as ``SET
bytea_output TO escape;`` before trying to receive binary data.
Starting from Psycopg 2.4 this condition is detected and signaled with a
`~psycopg2.InterfaceError`.
.. __: http://www.postgresql.org/docs/9.0/static/datatype-binary.html
.. __: http://www.postgresql.org/docs/9.0/static/runtime-config-client.html#GUC-BYTEA-OUTPUT
.. _adapt-date:
.. index::
single: Adaptation; Date/Time objects
single: Date objects; Adaptation
@ -272,8 +306,8 @@ the SQL string that would be sent to the database.
single: Interval objects; Adaptation
single: mx.DateTime; Adaptation
- Date and time objects: builtin `!datetime`, `!date`,
`!time`. `!timedelta` are converted into PostgreSQL's
- Date and time objects: builtin `~datetime.datetime`, `~datetime.date`,
`~datetime.time`, `~datetime.timedelta` are converted into PostgreSQL's
:sql:`timestamp`, :sql:`date`, :sql:`time`, :sql:`interval` data types.
Time zones are supported too. The Egenix `mx.DateTime`_ objects are adapted
the same way::
@ -288,6 +322,8 @@ the SQL string that would be sent to the database.
>>> cur.mogrify("SELECT %s;", (dt - datetime.datetime(2010,1,1),))
"SELECT '38 days 6027.425337 seconds';"
.. _adapt-list:
.. index::
single: Array; Adaptation
double: Lists; Adaptation
@ -297,6 +333,8 @@ the SQL string that would be sent to the database.
>>> cur.mogrify("SELECT %s;", ([10, 20, 30], ))
'SELECT ARRAY[10, 20, 30];'
.. _adapt-tuple:
.. index::
double: Tuple; Adaptation
single: IN operator
@ -325,11 +363,18 @@ the SQL string that would be sent to the database.
registered.
.. versionchanged:: 2.3
named tuples are adapted like regular tuples and can thus be used to
represent composite types.
`~collections.namedtuple` instances are adapted like regular tuples and
can thus be used to represent composite types.
- Python dictionaries are converted into the |hstore|_ data type. See
`~psycopg2.extras.register_hstore()` for further details.
.. _adapt-dict:
.. index::
single: dict; Adaptation
single: hstore; Adaptation
- Python dictionaries are converted into the |hstore|_ data type. By default
the adapter is not enabled: see `~psycopg2.extras.register_hstore()` for
further details.
.. |hstore| replace:: :sql:`hstore`
.. _hstore: http://www.postgresql.org/docs/9.0/static/hstore.html
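A minimal sketch, assuming the |hstore| extension is installed in the
database and `!conn`/`!cur` are an open connection and cursor::

    import psycopg2.extras

    psycopg2.extras.register_hstore(conn)           # enable dict adaptation
    cur.execute("SELECT %s::hstore", ({'a': 'foo', 'b': None},))
    cur.fetchone()[0]                               # {'a': 'foo', 'b': None}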
@ -419,8 +464,8 @@ Time zones handling
^^^^^^^^^^^^^^^^^^^
The PostgreSQL type :sql:`timestamp with time zone` is converted into Python
`!datetime` objects with a `!tzinfo` attribute set to a
`~psycopg2.tz.FixedOffsetTimezone` instance.
`~datetime.datetime` objects with a `~datetime.datetime.tzinfo` attribute set
to a `~psycopg2.tz.FixedOffsetTimezone` instance.
>>> cur.execute("SET TIME ZONE 'Europe/Rome';") # UTC + 1 hour
>>> cur.execute("SELECT '2010-01-01 10:30:45'::timestamptz;")
@ -428,7 +473,7 @@ The PostgreSQL type :sql:`timestamp with time zone` is converted into Python
psycopg2.tz.FixedOffsetTimezone(offset=60, name=None)
Notice that only time zones with an integer number of minutes are supported:
this is a limitation of the Python `!datetime` module. A few historical time
this is a limitation of the Python `datetime` module. A few historical time
zones had seconds in the UTC offset: these time zones will have the offset
rounded to the nearest minute, with an error of up to 30 seconds.
@ -440,7 +485,7 @@ rounded to the nearest minute, with an error of up to 30 seconds.
.. versionchanged:: 2.2.2
timezones with seconds are supported (with rounding). Previously such
timezones raised an error. In order to deal with them in previous
versions use `psycopg2.extras.register_tstz_w_secs`.
versions use `psycopg2.extras.register_tstz_w_secs()`.
.. index:: Transaction, Begin, Commit, Rollback, Autocommit
@ -463,7 +508,7 @@ The connection is responsible to terminate its transaction, calling either the
`~connection.commit()` or `~connection.rollback()` method. Committed
changes are immediately made persistent into the database. Closing the
connection using the `~connection.close()` method or destroying the
connection object (calling `!__del__()` or letting it fall out of scope)
connection object (using `!del` or letting it fall out of scope)
will result in an implicit `!rollback()` call.
It is possible to set the connection in *autocommit* mode: this way all the
@ -507,6 +552,14 @@ allowing the user to move in the dataset using the `~cursor.scroll()`
method and to read the data using `~cursor.fetchone()` and
`~cursor.fetchmany()` methods.
Named cursors are also :ref:`iterable <cursor-iterable>` like regular cursors.
Notice however that before Psycopg 2.4 iteration was performed fetching one
record at a time from the backend, resulting in a large overhead. The attribute
`~cursor.itersize` now controls how many records are fetched at a time
during the iteration: the default value of 2000 allows fetching about 100KB
per roundtrip, assuming records of 10-20 columns of mixed numbers and strings;
you may decrease this value if you are dealing with huge records.
.. |DECLARE| replace:: :sql:`DECLARE`
.. _DECLARE: http://www.postgresql.org/docs/9.0/static/sql-declare.html
@ -534,13 +587,11 @@ the same connection, all the commands will be executed in the same session
The above observations are only valid for regular threads: they don't apply to
forked processes nor to green threads. `libpq` connections `shouldn't be used by a
forked processes`__, so when using a module such as |multiprocessing|__ or a
forked process`__, so when using a module such as `multiprocessing` or a
forking web deployment method such as FastCGI make sure to create the
connections *after* the fork.
.. __: http://www.postgresql.org/docs/9.0/static/libpq-connect.html#LIBPQ-CONNECT
.. |multiprocessing| replace:: `!multiprocessing`
.. __: http://docs.python.org/library/multiprocessing.html
Connections shouldn't be shared either by different green threads: doing so
may result in a deadlock. See :ref:`green-support` for further details.
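Following the advice above, a sketch of creating the connections after the
fork with `multiprocessing` (the DSN is a placeholder)::

    import multiprocessing
    import psycopg2

    def worker(dsn):
        # the connection is created in the child process, after the fork
        conn = psycopg2.connect(dsn)
        cur = conn.cursor()
        cur.execute("SELECT pg_backend_pid()")
        print(cur.fetchone()[0])
        conn.close()

    if __name__ == '__main__':
        procs = [multiprocessing.Process(target=worker, args=("dbname=test",))
                 for i in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()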

View File

@ -62,7 +62,9 @@ if sys.version_info[0] >= 2 and sys.version_info[1] >= 4:
RuntimeWarning)
del sys, warnings
from psycopg2 import tz
# Note: the first internal import should be _psycopg, otherwise the real cause
# of a failed loading of the C module may get hidden, see
# http://archives.postgresql.org/psycopg/2011-02/msg00044.php
# Import the DBAPI-2.0 stuff into top-level module.
@ -78,6 +80,9 @@ from psycopg2._psycopg import NotSupportedError, OperationalError
from psycopg2._psycopg import connect, apilevel, threadsafety, paramstyle
from psycopg2._psycopg import __version__
from psycopg2 import tz
# Register default adapters.
import psycopg2.extensions as _ext

View File

@ -232,7 +232,7 @@ class RealDictCursor(DictCursorBase):
self._query_executed = 0
class RealDictRow(dict):
"""A ``dict`` subclass representing a data record."""
"""A `!dict` subclass representing a data record."""
__slots__ = ('_column_mapping')
@ -253,7 +253,7 @@ class NamedTupleConnection(_connection):
return _connection.cursor(self, *args, **kwargs)
class NamedTupleCursor(_cursor):
"""A cursor that generates results as |namedtuple|__.
"""A cursor that generates results as `~collections.namedtuple`.
`!fetch*()` methods will return named tuples instead of regular tuples, so
their elements can be accessed both as regular numeric items as well as
@ -267,9 +267,6 @@ class NamedTupleCursor(_cursor):
100
>>> rec.data
"abc'def"
.. |namedtuple| replace:: `!namedtuple`
.. __: http://docs.python.org/release/2.6/library/collections.html#collections.namedtuple
"""
Record = None
@ -327,9 +324,9 @@ class LoggingConnection(_connection):
"""
def initialize(self, logobj):
"""Initialize the connection to log to ``logobj``.
"""Initialize the connection to log to `!logobj`.
The ``logobj`` parameter can be an open file object or a Logger
The `!logobj` parameter can be an open file object or a Logger
instance from the standard logging module.
"""
self._logobj = logobj
@ -666,9 +663,7 @@ class HstoreAdapter(object):
@classmethod
def get_oids(self, conn_or_curs):
"""Return the oid of the hstore and hstore[] types.
Return None if hstore is not available.
"""Return the lists of OID of the hstore and hstore[] types.
"""
if hasattr(conn_or_curs, 'execute'):
conn = conn_or_curs.connection
@ -683,46 +678,69 @@ class HstoreAdapter(object):
# column typarray not available before PG 8.3
typarray = conn.server_version >= 80300 and "typarray" or "NULL"
rv0, rv1 = [], []
# get the oid for the hstore
curs.execute("""\
SELECT t.oid, %s
FROM pg_type t JOIN pg_namespace ns
ON typnamespace = ns.oid
WHERE typname = 'hstore' and nspname = 'public';
WHERE typname = 'hstore';
""" % typarray)
oids = curs.fetchone()
for oids in curs:
rv0.append(oids[0])
rv1.append(oids[1])
# revert the status of the connection as before the command
if (conn_status != _ext.STATUS_IN_TRANSACTION
and conn.isolation_level != _ext.ISOLATION_LEVEL_AUTOCOMMIT):
conn.rollback()
return oids
return tuple(rv0), tuple(rv1)
def register_hstore(conn_or_curs, globally=False, unicode=False):
"""Register adapter and typecaster for `dict`\-\ |hstore| conversions.
def register_hstore(conn_or_curs, globally=False, unicode=False, oid=None):
"""Register adapter and typecaster for `!dict`\-\ |hstore| conversions.
The function must receive a connection or cursor as the |hstore| oid is
different in each database. The typecaster will normally be registered
only on the connection or cursor passed as argument. If your application
uses a single database you can pass *globally*\=True to have the typecaster
registered on all the connections.
:param conn_or_curs: a connection or cursor: the typecaster will be
registered only on this object unless *globally* is set to `!True`
:param globally: register the adapter globally, not only on *conn_or_curs*
:param unicode: if `!True`, keys and values returned from the database
will be `!unicode` instead of `!str`. The option is not available on
Python 3
:param oid: the OID of the |hstore| type if known. If not, it will be
queried on *conn_or_curs*
On Python 2, by default the returned dicts will have `str` objects as keys and values:
use *unicode*\=True to return `unicode` objects instead. When adapting a
dictionary both `str` and `unicode` keys and values are handled (the
`unicode` values will be converted according to the current
`~connection.encoding`). The option is not available on Python 3.
The connection or cursor passed to the function will be used to query the
database and look for the OID of the |hstore| type (which may be different
across databases). If querying is not desirable (e.g. with
:ref:`asynchronous connections <async-support>`) you may specify it in the
*oid* parameter (it can be found using a query such as :sql:`SELECT
'hstore'::regtype::oid;`).
Note that, when passing a dictionary from Python to the database, both
strings and unicode keys and values are supported. Dictionaries returned
from the database have keys/values according to the *unicode* parameter.
The |hstore| contrib module must be already installed in the database
(executing the ``hstore.sql`` script in your ``contrib`` directory).
Raise `~psycopg2.ProgrammingError` if the type is not found.
.. versionchanged:: 2.4
added the *oid* parameter. If not specified, the typecaster is
installed also if |hstore| is not installed in the :sql:`public`
schema.
"""
oids = HstoreAdapter.get_oids(conn_or_curs)
if oids is None:
raise psycopg2.ProgrammingError(
"hstore type not found in the database. "
"please install it from your 'contrib/hstore.sql' file")
if oid is None:
oid = HstoreAdapter.get_oids(conn_or_curs)
if oid is None or not oid[0]:
raise psycopg2.ProgrammingError(
"hstore type not found in the database. "
"please install it from your 'contrib/hstore.sql' file")
else:
oid = oid[0] # for the moment we don't have a HSTOREARRAY
if isinstance(oid, int):
oid = (oid,)
# create and register the typecaster
if sys.version_info[0] < 3 and unicode:
@ -730,7 +748,7 @@ def register_hstore(conn_or_curs, globally=False, unicode=False):
else:
cast = HstoreAdapter.parse
HSTORE = _ext.new_type((oids[0],), "HSTORE", cast)
HSTORE = _ext.new_type(oid, "HSTORE", cast)
_ext.register_type(HSTORE, not globally and conn_or_curs or None)
_ext.register_adapter(dict, HstoreAdapter)
@ -750,9 +768,9 @@ class CompositeCaster(object):
.. attribute:: type
The type of the Python objects returned. If `!collections.namedtuple()`
The type of the Python objects returned. If :py:func:`collections.namedtuple()`
is available, it is a named tuple with attributes equal to the type
components. Otherwise it is just the `tuple` object.
components. Otherwise it is just the `!tuple` object.
.. attribute:: attnames
@ -875,8 +893,8 @@ def register_composite(name, conn_or_curs, globally=False):
the |CREATE TYPE|_ command
:param conn_or_curs: a connection or cursor used to find the type oid and
components; the typecaster is registered in a scope limited to this
object, unless *globally* is set to `True`
:param globally: if `False` (default) register the typecaster only on
object, unless *globally* is set to `!True`
:param globally: if `!False` (default) register the typecaster only on
*conn_or_curs*, otherwise register it globally
:return: the registered `CompositeCaster` instance responsible for the
conversion

View File

@ -39,7 +39,8 @@ asis_getquoted(asisObject *self, PyObject *args)
{
PyObject *rv;
if (self->wrapped == Py_None) {
rv = Bytes_FromString("NULL");
Py_INCREF(psyco_null);
rv = psyco_null;
}
else {
rv = PyObject_Str(self->wrapped);

View File

@ -47,54 +47,85 @@ binary_escape(unsigned char *from, size_t from_length,
return PQescapeBytea(from, from_length, to_length);
}
#define HAS_BUFFER (PY_MAJOR_VERSION < 3)
#define HAS_MEMORYVIEW (PY_MAJOR_VERSION > 2 || PY_MINOR_VERSION >= 6)
/* binary_quote - do the quote process on plain and unicode strings */
static PyObject *
binary_quote(binaryObject *self)
{
char *to;
const char *buffer;
char *to = NULL;
const char *buffer = NULL;
Py_ssize_t buffer_len;
size_t len = 0;
PyObject *rv = NULL;
#if HAS_MEMORYVIEW
Py_buffer view;
int got_view = 0;
#endif
/* if we got a plain string or a buffer we escape it and save the buffer */
if (Bytes_Check(self->wrapped)
#if PY_MAJOR_VERSION < 3
|| PyBuffer_Check(self->wrapped)
#else
|| PyByteArray_Check(self->wrapped)
|| PyMemoryView_Check(self->wrapped)
#endif
) {
/* escape and build quoted buffer */
if (PyObject_AsReadBuffer(self->wrapped, (const void **)&buffer,
&buffer_len) < 0)
return NULL;
to = (char *)binary_escape((unsigned char*)buffer, (size_t) buffer_len,
&len, self->conn ? ((connectionObject*)self->conn)->pgconn : NULL);
if (to == NULL) {
PyErr_NoMemory();
return NULL;
#if HAS_MEMORYVIEW
if (PyObject_CheckBuffer(self->wrapped)) {
if (0 > PyObject_GetBuffer(self->wrapped, &view, PyBUF_CONTIG_RO)) {
goto exit;
}
got_view = 1;
buffer = (const char *)(view.buf);
buffer_len = view.len;
}
#endif
if (len > 0)
self->buffer = Bytes_FromFormat(
(self->conn && ((connectionObject*)self->conn)->equote)
? "E'%s'::bytea" : "'%s'::bytea" , to);
else
self->buffer = Bytes_FromString("''::bytea");
#if HAS_BUFFER
if (!buffer && (Bytes_Check(self->wrapped) || PyBuffer_Check(self->wrapped))) {
if (PyObject_AsReadBuffer(self->wrapped, (const void **)&buffer,
&buffer_len) < 0) {
goto exit;
}
}
#endif
PQfreemem(to);
if (!buffer) {
goto exit;
}
/* if the wrapped object is not a string or a buffer, this is an error */
else {
PyErr_SetString(PyExc_TypeError, "can't escape non-string object");
return NULL;
/* escape and build quoted buffer */
to = (char *)binary_escape((unsigned char*)buffer, (size_t) buffer_len,
&len, self->conn ? ((connectionObject*)self->conn)->pgconn : NULL);
if (to == NULL) {
PyErr_NoMemory();
goto exit;
}
return self->buffer;
if (len > 0)
rv = Bytes_FromFormat(
(self->conn && ((connectionObject*)self->conn)->equote)
? "E'%s'::bytea" : "'%s'::bytea" , to);
else
rv = Bytes_FromString("''::bytea");
exit:
if (to) { PQfreemem(to); }
#if HAS_MEMORYVIEW
if (got_view) { PyBuffer_Release(&view); }
#endif
/* Allow Binary(None) to work */
if (self->wrapped == Py_None) {
Py_INCREF(psyco_null);
rv = psyco_null;
}
/* if the wrapped object is not bytes or a buffer, this is an error */
if (!rv && !PyErr_Occurred()) {
PyErr_Format(PyExc_TypeError, "can't escape %s to binary",
Py_TYPE(self->wrapped)->tp_name);
}
return rv;
}
/* binary_str, binary_getquoted - return result of quoting */
@ -103,11 +134,9 @@ static PyObject *
binary_getquoted(binaryObject *self, PyObject *args)
{
if (self->buffer == NULL) {
if (!(binary_quote(self))) {
return NULL;
}
self->buffer = binary_quote(self);
}
Py_INCREF(self->buffer);
Py_XINCREF(self->buffer);
return self->buffer;
}

View File

@ -45,19 +45,22 @@ list_quote(listObject *self)
/* empty arrays are converted to NULLs (still searching for a way to
insert an empty array in postgresql */
if (len == 0) return Bytes_FromString("'{}'::text[]");
if (len == 0) return Bytes_FromString("'{}'");
tmp = PyTuple_New(len);
for (i=0; i<len; i++) {
PyObject *quoted;
PyObject *wrapped = PyList_GET_ITEM(self->wrapped, i);
if (wrapped == Py_None)
quoted = Bytes_FromString("NULL");
else
quoted = microprotocol_getquoted(wrapped,
(connectionObject*)self->connection);
if (quoted == NULL) goto error;
PyObject *wrapped = PyList_GET_ITEM(self->wrapped, i);
if (wrapped == Py_None) {
Py_INCREF(psyco_null);
quoted = psyco_null;
}
else {
quoted = microprotocol_getquoted(wrapped,
(connectionObject*)self->connection);
if (quoted == NULL) goto error;
}
/* here we don't lose a refcnt: SET_ITEM does not change the
reference count and we are just transferring ownership of the tmp
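
A short sketch of the list adaptation behaviour touched above, assuming an open connection (the DSN below is a placeholder):

    import psycopg2

    conn = psycopg2.connect("dbname=test")            # placeholder DSN
    cur = conn.cursor()
    cur.execute("SELECT %s::text[]", ([],))           # empty list adapts to '{}'
    print(cur.fetchone()[0])                          # []
    cur.execute("SELECT %s::int[]", ([1, None, 3],))
    print(cur.fetchone()[0])                          # [1, None, 3] -- None becomes NULL
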

View File

@ -51,6 +51,10 @@ extern HIDDEN int psycopg_debug_enabled;
#else /* !__GNUC__ or __APPLE__ */
#ifdef PSYCOPG_DEBUG
#include <stdarg.h>
#ifdef _WIN32
#include <process.h>
#define getpid _getpid
#endif
static void Dprintf(const char *fmt, ...)
{
va_list ap;

View File

@ -124,6 +124,8 @@ HIDDEN PyObject *conn_text_from_chars(connectionObject *pgconn, const char *str)
HIDDEN int conn_get_standard_conforming_strings(PGconn *pgconn);
HIDDEN int conn_get_isolation_level(PGresult *pgres);
HIDDEN int conn_get_protocol_version(PGconn *pgconn);
HIDDEN int conn_get_server_version(PGconn *pgconn);
HIDDEN PGcancel *conn_get_cancel(PGconn *pgconn);
HIDDEN void conn_notice_process(connectionObject *self);
HIDDEN void conn_notice_clean(connectionObject *self);
HIDDEN void conn_notifies_process(connectionObject *self);

View File

@ -69,7 +69,15 @@ conn_notice_callback(void *args, const char *message)
*/
notice = (struct connectionObject_notice *)
malloc(sizeof(struct connectionObject_notice));
if (NULL == notice) {
/* Discard the notice in case of failed allocation. */
return;
}
notice->message = strdup(message);
if (NULL == notice->message) {
free(notice);
return;
}
notice->next = self->notice_pending;
self->notice_pending = notice;
}
@ -984,14 +992,22 @@ conn_set_client_encoding(connectionObject *self, const char *enc)
}
/* no error, we can proceed and store the new encoding */
PyMem_Free(self->encoding);
{
char *tmp = self->encoding;
self->encoding = NULL;
PyMem_Free(tmp);
}
if (!(self->encoding = psycopg_strdup(enc, 0))) {
res = 1; /* don't call pq_complete_error below */
goto endlock;
}
/* Store the python codec too. */
PyMem_Free(self->codec);
{
char *tmp = self->codec;
self->codec = NULL;
PyMem_Free(tmp);
}
self->codec = codec;
Dprintf("conn_set_client_encoding: set encoding to %s (codec: %s)",

View File

@ -502,7 +502,7 @@ psyco_conn_get_parameter_status(connectionObject *self, PyObject *args)
/* lobject method - allocate a new lobject */
#define psyco_conn_lobject_doc \
"cursor(oid=0, mode=0, new_oid=0, new_file=None,\n" \
"lobject(oid=0, mode=0, new_oid=0, new_file=None,\n" \
" lobject_factory=extensions.lobject) -- new lobject\n\n" \
"Return a new lobject.\n\nThe ``lobject_factory`` argument can be used\n" \
"to create non-standard lobjects by passing a class different from the\n" \
@ -820,28 +820,31 @@ static int
connection_setup(connectionObject *self, const char *dsn, long int async)
{
char *pos;
int res;
int res = -1;
Dprintf("connection_setup: init connection object at %p, "
"async %ld, refcnt = " FORMAT_CODE_PY_SSIZE_T,
self, async, Py_REFCNT(self)
);
self->dsn = strdup(dsn);
self->notice_list = PyList_New(0);
self->notifies = PyList_New(0);
if (!(self->dsn = strdup(dsn))) {
PyErr_NoMemory();
goto exit;
}
if (!(self->notice_list = PyList_New(0))) { goto exit; }
if (!(self->notifies = PyList_New(0))) { goto exit; }
self->async = async;
self->status = CONN_STATUS_SETUP;
self->async_status = ASYNC_DONE;
self->string_types = PyDict_New();
self->binary_types = PyDict_New();
if (!(self->string_types = PyDict_New())) { goto exit; }
if (!(self->binary_types = PyDict_New())) { goto exit; }
/* other fields have been zeroed by tp_alloc */
pthread_mutex_init(&(self->lock), NULL);
if (conn_connect(self, async) != 0) {
Dprintf("connection_init: FAILED");
res = -1;
goto exit;
}
else {
Dprintf("connection_setup: good connection object at %p, refcnt = "
@ -858,6 +861,7 @@ connection_setup(connectionObject *self, const char *dsn, long int async)
*pos = 'x';
}
exit:
return res;
}
@ -935,7 +939,7 @@ connection_repr(connectionObject *self)
static int
connection_traverse(connectionObject *self, visitproc visit, void *arg)
{
Py_VISIT(self->tpc_xid);
Py_VISIT((PyObject *)(self->tpc_xid));
Py_VISIT(self->async_cursor);
Py_VISIT(self->notice_list);
Py_VISIT(self->notice_filter);

View File

@ -34,7 +34,8 @@ extern "C" {
extern HIDDEN PyTypeObject cursorType;
typedef struct {
/* the typedef is forward-declared in psycopg.h */
struct cursorObject {
PyObject_HEAD
connectionObject *conn; /* connection owning the cursor */
@ -45,6 +46,7 @@ typedef struct {
long int rowcount; /* number of rows affected by last execute */
long int columns; /* number of columns fetched from the db */
long int arraysize; /* how many rows should fetchmany() return */
long int itersize; /* how many rows should iter(cur) fetch in named cursors */
long int row; /* the row counter for fetch*() operations */
long int mark; /* transaction marker, copied from conn */
@ -78,7 +80,8 @@ typedef struct {
PyObject *weakreflist; /* list of weak references */
} cursorObject;
};
/* C-callable functions in cursor_int.c and cursor_ext.c */
HIDDEN PyObject *curs_get_cast(cursorObject *self, PyObject *oid);

View File

@ -59,7 +59,7 @@ psyco_curs_close(cursorObject *self, PyObject *args)
char buffer[128];
EXC_IF_NO_MARK(self);
PyOS_snprintf(buffer, 127, "CLOSE %s", self->name);
PyOS_snprintf(buffer, 127, "CLOSE \"%s\"", self->name);
if (pq_execute(self, buffer, 0) == -1) return NULL;
}
@ -76,10 +76,10 @@ psyco_curs_close(cursorObject *self, PyObject *args)
/* mogrify a query string and build argument array or dict */
static int
_mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
_mogrify(PyObject *var, PyObject *fmt, cursorObject *curs, PyObject **new)
{
PyObject *key, *value, *n, *item;
char *d, *c;
PyObject *key, *value, *n;
const char *d, *c;
Py_ssize_t index = 0;
int force = 0, kind = 0;
@ -90,33 +90,40 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
c = Bytes_AsString(fmt);
while(*c) {
/* handle plain percent symbol in format string */
if (c[0] == '%' && c[1] == '%') {
c+=2; force = 1;
if (*c++ != '%') {
/* a regular character */
continue;
}
switch (*c) {
/* handle plain percent symbol in format string */
case '%':
++c;
force = 1;
break;
/* if we find '%(' then this is a dictionary, we:
1/ find the matching ')' and extract the key name
2/ locate the value in the dictionary (or return an error)
3/ mogrify the value into something useful (quoting)...
4/ ...and add it to the new dictionary to be used as argument
*/
else if (c[0] == '%' && c[1] == '(') {
case '(':
/* check if some crazy guy mixed formats */
if (kind == 2) {
Py_XDECREF(n);
psyco_set_error(ProgrammingError, (PyObject*)conn,
psyco_set_error(ProgrammingError, curs,
"argument formats can't be mixed", NULL, NULL);
return -1;
}
kind = 1;
/* let's have d point the end of the argument */
for (d = c + 2; *d && *d != ')'; d++);
for (d = c + 1; *d && *d != ')' && *d != '%'; d++);
if (*d == ')') {
key = Text_FromUTF8AndSize(c+2, (Py_ssize_t) (d-c-2));
key = Text_FromUTF8AndSize(c+1, (Py_ssize_t) (d-c-1));
value = PyObject_GetItem(var, key);
/* key has refcnt 1, value the original value + 1 */
@ -135,28 +142,20 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
n = PyDict_New();
}
if ((item = PyObject_GetItem(n, key)) == NULL) {
if (0 == PyDict_Contains(n, key)) {
PyObject *t = NULL;
PyErr_Clear();
/* None is always converted to NULL; this is an
optimization over the adapting code and can go away in
the future if somebody finds a None adapter usefull. */
the future if somebody finds a None adapter useful. */
if (value == Py_None) {
t = Bytes_FromString("NULL");
Py_INCREF(psyco_null);
t = psyco_null;
PyDict_SetItem(n, key, t);
/* t is a new object, refcnt = 1, key is at 2 */
/* if the value is None we need to substitute the
formatting char with 's' (FIXME: this should not be
necessary if we drop support for formats other than
%s!) */
while (*d && !isalpha(*d)) d++;
if (*d) *d = 's';
}
else {
t = microprotocol_getquoted(value, conn);
t = microprotocol_getquoted(value, curs->conn);
if (t != NULL) {
PyDict_SetItem(n, key, t);
@ -175,20 +174,21 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
if it was added to the dictionary directly; good */
Py_XDECREF(value);
}
else {
/* we have an item with one extra refcnt here, zap! */
Py_DECREF(item);
}
Py_DECREF(key); /* key has the original refcnt now */
Dprintf("_mogrify: after value refcnt: "
FORMAT_CODE_PY_SSIZE_T,
Py_REFCNT(value)
);
FORMAT_CODE_PY_SSIZE_T, Py_REFCNT(value));
}
c = d;
}
else {
/* we found %( but not a ) */
Py_XDECREF(n);
psyco_set_error(ProgrammingError, curs,
"incomplete placeholder: '%(' without ')'", NULL, NULL);
return -1;
}
c = d + 1; /* after the ) */
break;
else if (c[0] == '%' && c[1] != '(') {
default:
/* this is a format that expects a tuple; it is much easier,
because we don't need to check the old/new dictionary for
keys */
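
A sketch of the new error behaviour (placeholder DSN): an unterminated '%(' placeholder now raises ProgrammingError.

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()
    try:
        cur.mogrify("select %(foo", {'foo': 1})
    except psycopg2.ProgrammingError as e:
        print(e)   # incomplete placeholder: '%(' without ')'
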
@ -196,7 +196,7 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
/* check if some crazy guy mixed formats */
if (kind == 1) {
Py_XDECREF(n);
psyco_set_error(ProgrammingError, (PyObject*)conn,
psyco_set_error(ProgrammingError, curs,
"argument formats can't be mixed", NULL, NULL);
return -1;
}
@ -217,16 +217,13 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
}
/* let's have d point just after the '%' */
d = c+1;
if (value == Py_None) {
PyTuple_SET_ITEM(n, index, Bytes_FromString("NULL"));
while (*d && !isalpha(*d)) d++;
if (*d) *d = 's';
Py_INCREF(psyco_null);
PyTuple_SET_ITEM(n, index, psyco_null);
Py_DECREF(value);
}
else {
PyObject *t = microprotocol_getquoted(value, conn);
PyObject *t = microprotocol_getquoted(value, curs->conn);
if (t != NULL) {
PyTuple_SET_ITEM(n, index, t);
@ -238,12 +235,8 @@ _mogrify(PyObject *var, PyObject *fmt, connectionObject *conn, PyObject **new)
return -1;
}
}
c = d;
index += 1;
}
else {
c++;
}
}
if (force && n == NULL)
@ -262,7 +255,7 @@ static PyObject *_psyco_curs_validate_sql_basic(
after having set an exception. */
if (!sql || !PyObject_IsTrue(sql)) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"can't execute an empty query", NULL, NULL);
goto fail;
}
@ -334,7 +327,7 @@ _psyco_curs_merge_query_args(cursorObject *self,
if (!strcmp(s, "not enough arguments for format string")
|| !strcmp(s, "not all arguments converted")) {
Dprintf("psyco_curs_execute: -> got a match");
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
s, NULL, NULL);
pe = 1;
}
@ -388,7 +381,7 @@ _psyco_curs_execute(cursorObject *self,
if (vars && vars != Py_None)
{
if(_mogrify(vars, operation, self->conn, &cvt) == -1) { goto fail; }
if(_mogrify(vars, operation, self, &cvt) == -1) { goto fail; }
}
if (vars && cvt) {
@ -398,7 +391,7 @@ _psyco_curs_execute(cursorObject *self,
if (self->name != NULL) {
self->query = Bytes_FromFormat(
"DECLARE %s CURSOR WITHOUT HOLD FOR %s",
"DECLARE \"%s\" CURSOR WITHOUT HOLD FOR %s",
self->name, Bytes_AS_STRING(fquery));
Py_DECREF(fquery);
}
@ -409,7 +402,7 @@ _psyco_curs_execute(cursorObject *self,
else {
if (self->name != NULL) {
self->query = Bytes_FromFormat(
"DECLARE %s CURSOR WITHOUT HOLD FOR %s",
"DECLARE \"%s\" CURSOR WITHOUT HOLD FOR %s",
self->name, Bytes_AS_STRING(operation));
}
else {
@ -458,18 +451,18 @@ psyco_curs_execute(cursorObject *self, PyObject *args, PyObject *kwargs)
if (self->name != NULL) {
if (self->query != Py_None) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"can't call .execute() on named cursors more than once",
NULL, NULL);
return NULL;
}
if (self->conn->isolation_level == ISOLATION_LEVEL_AUTOCOMMIT) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"can't use a named cursor outside of transactions", NULL, NULL);
return NULL;
}
if (self->conn->mark != self->mark) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"named cursor isn't valid anymore", NULL, NULL);
return NULL;
}
@ -513,7 +506,7 @@ psyco_curs_executemany(cursorObject *self, PyObject *args, PyObject *kwargs)
EXC_IF_TPC_PREPARED(self->conn, executemany);
if (self->name != NULL) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"can't call .executemany() on named cursors", NULL, NULL);
return NULL;
}
@ -571,7 +564,7 @@ _psyco_curs_mogrify(cursorObject *self,
if (vars && vars != Py_None)
{
if (_mogrify(vars, operation, self->conn, &cvt) == -1) {
if (_mogrify(vars, operation, self, &cvt) == -1) {
goto cleanup;
}
}
@ -645,7 +638,7 @@ psyco_curs_cast(cursorObject *self, PyObject *args)
"fetchone() -> tuple or None\n\n" \
"Return the next row of a query result set in the form of a tuple (by\n" \
"default) or using the sequence factory previously set in the\n" \
"`row_factory` attribute. Return `None` when no more data is available.\n"
"`row_factory` attribute. Return `!None` when no more data is available.\n"
static int
_psyco_curs_prefetch(cursorObject *self)
@ -755,7 +748,7 @@ psyco_curs_fetchone(cursorObject *self, PyObject *args)
EXC_IF_NO_MARK(self);
EXC_IF_TPC_PREPARED(self->conn, fetchone);
PyOS_snprintf(buffer, 127, "FETCH FORWARD 1 FROM %s", self->name);
PyOS_snprintf(buffer, 127, "FETCH FORWARD 1 FROM \"%s\"", self->name);
if (pq_execute(self, buffer, 0) == -1) return NULL;
if (_psyco_curs_prefetch(self) < 0) return NULL;
}
@ -809,12 +802,8 @@ psyco_curs_next_named(cursorObject *self)
if (self->row >= self->rowcount) {
char buffer[128];
/* fetch 'arraysize' records, but shun the default value of 1 */
long int size = self->arraysize;
if (size == 1) { size = 2000L; }
PyOS_snprintf(buffer, 127, "FETCH FORWARD %ld FROM %s",
size, self->name);
PyOS_snprintf(buffer, 127, "FETCH FORWARD %ld FROM \"%s\"",
self->itersize, self->name);
if (pq_execute(self, buffer, 0) == -1) return NULL;
if (_psyco_curs_prefetch(self) < 0) return NULL;
}
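
A sketch of how 'itersize' is meant to be used on a named cursor (placeholder DSN; the default of 2000 rows per round trip is set further down in cursor_setup):

    import psycopg2

    conn = psycopg2.connect("dbname=test")           # placeholder DSN
    cur = conn.cursor('iter_example')                # server-side (named) cursor
    cur.itersize = 500                               # rows fetched per FETCH FORWARD
    cur.execute("SELECT generate_series(1, 10000)")
    print(sum(1 for _ in cur))                       # 10000, fetched 500 at a time
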
@ -848,7 +837,7 @@ psyco_curs_next_named(cursorObject *self)
"fetchmany(size=self.arraysize) -> list of tuple\n\n" \
"Return the next `size` rows of a query result set in the form of a list\n" \
"of tuples (by default) or using the sequence factory previously set in\n" \
"the `row_factory` attribute. Return `None` when no more data is available.\n"
"the `row_factory` attribute. Return `!None` when no more data is available.\n"
static PyObject *
psyco_curs_fetchmany(cursorObject *self, PyObject *args, PyObject *kwords)
@ -873,7 +862,7 @@ psyco_curs_fetchmany(cursorObject *self, PyObject *args, PyObject *kwords)
EXC_IF_NO_MARK(self);
EXC_IF_TPC_PREPARED(self->conn, fetchone);
PyOS_snprintf(buffer, 127, "FETCH FORWARD %d FROM %s",
PyOS_snprintf(buffer, 127, "FETCH FORWARD %d FROM \"%s\"",
(int)size, self->name);
if (pq_execute(self, buffer, 0) == -1) return NULL;
if (_psyco_curs_prefetch(self) < 0) return NULL;
@ -926,7 +915,7 @@ psyco_curs_fetchmany(cursorObject *self, PyObject *args, PyObject *kwords)
"Return all the remaining rows of a query result set.\n\n" \
"Rows are returned in the form of a list of tuples (by default) or using\n" \
"the sequence factory previously set in the `row_factory` attribute.\n" \
"Return `None` when no more data is available.\n"
"Return `!None` when no more data is available.\n"
static PyObject *
psyco_curs_fetchall(cursorObject *self, PyObject *args)
@ -944,7 +933,7 @@ psyco_curs_fetchall(cursorObject *self, PyObject *args)
EXC_IF_NO_MARK(self);
EXC_IF_TPC_PREPARED(self->conn, fetchall);
PyOS_snprintf(buffer, 127, "FETCH FORWARD ALL FROM %s", self->name);
PyOS_snprintf(buffer, 127, "FETCH FORWARD ALL FROM \"%s\"", self->name);
if (pq_execute(self, buffer, 0) == -1) return NULL;
if (_psyco_curs_prefetch(self) < 0) return NULL;
}
@ -1009,7 +998,7 @@ psyco_curs_callproc(cursorObject *self, PyObject *args, PyObject *kwargs)
EXC_IF_TPC_PREPARED(self->conn, callproc);
if (self->name != NULL) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"can't call .callproc() on named cursors", NULL, NULL);
return NULL;
}
@ -1022,7 +1011,9 @@ psyco_curs_callproc(cursorObject *self, PyObject *args, PyObject *kwargs)
/* allocate some memory, build the SQL and create a PyString from it */
sl = procname_len + 17 + nparameters*3 - (nparameters ? 1 : 0);
sql = (char*)PyMem_Malloc(sl);
if (sql == NULL) return NULL;
if (sql == NULL) {
return PyErr_NoMemory();
}
sprintf(sql, "SELECT * FROM %s(", procname);
for(i=0; i<nparameters; i++) {
@ -1132,13 +1123,13 @@ psyco_curs_scroll(cursorObject *self, PyObject *args, PyObject *kwargs)
} else if (strcmp( mode, "absolute") == 0) {
newpos = value;
} else {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"scroll mode must be 'relative' or 'absolute'", NULL, NULL);
return NULL;
}
if (newpos < 0 || newpos >= self->rowcount ) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, self,
"scroll destination out of bounds", NULL, NULL);
return NULL;
}
@ -1153,11 +1144,11 @@ psyco_curs_scroll(cursorObject *self, PyObject *args, PyObject *kwargs)
EXC_IF_TPC_PREPARED(self->conn, scroll);
if (strcmp(mode, "absolute") == 0) {
PyOS_snprintf(buffer, 127, "MOVE ABSOLUTE %d FROM %s",
PyOS_snprintf(buffer, 127, "MOVE ABSOLUTE %d FROM \"%s\"",
value, self->name);
}
else {
PyOS_snprintf(buffer, 127, "MOVE %d FROM %s", value, self->name);
PyOS_snprintf(buffer, 127, "MOVE %d FROM \"%s\"", value, self->name);
}
if (pq_execute(self, buffer, 0) == -1) return NULL;
if (_psyco_curs_prefetch(self) < 0) return NULL;
@ -1244,15 +1235,16 @@ _psyco_curs_has_read_check(PyObject* o, void* var)
static PyObject *
psyco_curs_copy_from(cursorObject *self, PyObject *args, PyObject *kwargs)
{
char *query = NULL;
char query_buffer[DEFAULT_COPYBUFF];
Py_ssize_t query_size;
char *query;
const char *table_name;
const char *sep = "\t", *null = NULL;
Py_ssize_t bufsize = DEFAULT_COPYBUFF;
PyObject *file, *columns = NULL, *res = NULL;
char columnlist[DEFAULT_COPYBUFF];
char *quoted_delimiter;
char *quoted_delimiter = NULL;
char *quoted_null = NULL;
static char *kwlist[] = {
"file", "table", "sep", "null", "size", "columns", NULL};
@ -1273,32 +1265,32 @@ psyco_curs_copy_from(cursorObject *self, PyObject *args, PyObject *kwargs)
EXC_IF_GREEN(copy_from);
EXC_IF_TPC_PREPARED(self->conn, copy_from);
quoted_delimiter = psycopg_escape_string((PyObject*)self->conn, sep, 0, NULL, NULL);
if (quoted_delimiter == NULL) {
if (!(quoted_delimiter = psycopg_escape_string(
(PyObject*)self->conn, sep, 0, NULL, NULL))) {
PyErr_NoMemory();
return NULL;
goto exit;
}
query = query_buffer;
if (null) {
char *quoted_null = psycopg_escape_string((PyObject*)self->conn, null, 0, NULL, NULL);
if (quoted_null == NULL) {
PyMem_Free(quoted_delimiter);
if (!(quoted_null = psycopg_escape_string(
(PyObject*)self->conn, null, 0, NULL, NULL))) {
PyErr_NoMemory();
return NULL;
goto exit;
}
query_size = PyOS_snprintf(query, DEFAULT_COPYBUFF,
"COPY %s%s FROM stdin WITH DELIMITER AS %s NULL AS %s",
table_name, columnlist, quoted_delimiter, quoted_null);
if (query_size >= DEFAULT_COPYBUFF) {
/* Got truncated, allocate dynamically */
query = (char *)PyMem_Malloc((query_size + 1) * sizeof(char));
if (!(query = PyMem_New(char, query_size + 1))) {
PyErr_NoMemory();
goto exit;
}
PyOS_snprintf(query, query_size + 1,
"COPY %s%s FROM stdin WITH DELIMITER AS %s NULL AS %s",
table_name, columnlist, quoted_delimiter, quoted_null);
}
PyMem_Free(quoted_null);
}
else {
query_size = PyOS_snprintf(query, DEFAULT_COPYBUFF,
@ -1306,14 +1298,16 @@ psyco_curs_copy_from(cursorObject *self, PyObject *args, PyObject *kwargs)
table_name, columnlist, quoted_delimiter);
if (query_size >= DEFAULT_COPYBUFF) {
/* Got truncated, allocate dynamically */
query = (char *)PyMem_Malloc((query_size + 1) * sizeof(char));
if (!(query = PyMem_New(char, query_size + 1))) {
PyErr_NoMemory();
goto exit;
}
PyOS_snprintf(query, query_size + 1,
"COPY %s%s FROM stdin WITH DELIMITER AS %s",
table_name, columnlist, quoted_delimiter);
}
}
PyMem_Free(quoted_delimiter);
}
Dprintf("psyco_curs_copy_from: query = %s", query);
self->copysize = bufsize;
@ -1324,11 +1318,13 @@ psyco_curs_copy_from(cursorObject *self, PyObject *args, PyObject *kwargs)
Py_INCREF(Py_None);
}
if (query && (query != query_buffer)) {
PyMem_Free(query);
}
self->copyfile = NULL;
exit:
PyMem_Free(quoted_delimiter);
PyMem_Free(quoted_null);
if (query != query_buffer) { PyMem_Free(query); }
return res;
}
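
A usage sketch of copy_from() with the arguments handled above (placeholder DSN and a throwaway table):

    import io
    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()
    cur.execute("CREATE TEMPORARY TABLE tcopy (id int PRIMARY KEY, data text)")
    f = io.StringIO(u"1|foo\n2|\\N\n")
    cur.copy_from(f, 'tcopy', sep='|', null='\\N', columns=('id', 'data'))
    cur.execute("SELECT * FROM tcopy ORDER BY id")
    print(cur.fetchall())                    # [(1, 'foo'), (2, None)]
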
@ -1363,7 +1359,8 @@ psyco_curs_copy_to(cursorObject *self, PyObject *args, PyObject *kwargs)
const char *table_name;
const char *sep = "\t", *null = NULL;
PyObject *file, *columns = NULL, *res = NULL;
char *quoted_delimiter;
char *quoted_delimiter = NULL;
char *quoted_null = NULL;
static char *kwlist[] = {"file", "table", "sep", "null", "columns", NULL};
@ -1381,31 +1378,32 @@ psyco_curs_copy_to(cursorObject *self, PyObject *args, PyObject *kwargs)
EXC_IF_GREEN(copy_to);
EXC_IF_TPC_PREPARED(self->conn, copy_to);
quoted_delimiter = psycopg_escape_string((PyObject*)self->conn, sep, 0, NULL, NULL);
if (quoted_delimiter == NULL) {
if (!(quoted_delimiter = psycopg_escape_string(
(PyObject*)self->conn, sep, 0, NULL, NULL))) {
PyErr_NoMemory();
return NULL;
goto exit;
}
query = query_buffer;
if (null) {
char *quoted_null = psycopg_escape_string((PyObject*)self->conn, null, 0, NULL, NULL);
if (NULL == quoted_null) {
PyMem_Free(quoted_delimiter);
if (!(quoted_null = psycopg_escape_string(
(PyObject*)self->conn, null, 0, NULL, NULL))) {
PyErr_NoMemory();
return NULL;
goto exit;
}
query_size = PyOS_snprintf(query, DEFAULT_COPYBUFF,
"COPY %s%s TO stdout WITH DELIMITER AS %s"
" NULL AS %s", table_name, columnlist, quoted_delimiter, quoted_null);
if (query_size >= DEFAULT_COPYBUFF) {
/* Got truncated, allocate dynamically */
query = (char *)PyMem_Malloc((query_size + 1) * sizeof(char));
if (!(query = PyMem_New(char, query_size + 1))) {
PyErr_NoMemory();
goto exit;
}
PyOS_snprintf(query, query_size + 1,
"COPY %s%s TO stdout WITH DELIMITER AS %s"
" NULL AS %s", table_name, columnlist, quoted_delimiter, quoted_null);
}
PyMem_Free(quoted_null);
}
else {
query_size = PyOS_snprintf(query, DEFAULT_COPYBUFF,
@ -1413,14 +1411,16 @@ psyco_curs_copy_to(cursorObject *self, PyObject *args, PyObject *kwargs)
table_name, columnlist, quoted_delimiter);
if (query_size >= DEFAULT_COPYBUFF) {
/* Got truncated, allocate dynamically */
query = (char *)PyMem_Malloc((query_size + 1) * sizeof(char));
if (!(query = PyMem_New(char, query_size + 1))) {
PyErr_NoMemory();
goto exit;
}
PyOS_snprintf(query, query_size + 1,
"COPY %s%s TO stdout WITH DELIMITER AS %s",
table_name, columnlist, quoted_delimiter);
}
}
PyMem_Free(quoted_delimiter);
Dprintf("psyco_curs_copy_to: query = %s", query);
self->copysize = 0;
@ -1430,11 +1430,13 @@ psyco_curs_copy_to(cursorObject *self, PyObject *args, PyObject *kwargs)
res = Py_None;
Py_INCREF(Py_None);
}
if (query && (query != query_buffer)) {
PyMem_Free(query);
}
self->copyfile = NULL;
exit:
PyMem_Free(quoted_delimiter);
PyMem_Free(quoted_null);
if (query != query_buffer) { PyMem_Free(query); }
return res;
}
@ -1620,6 +1622,8 @@ static struct PyMemberDef cursorObject_members[] = {
{"arraysize", T_LONG, OFFSETOF(arraysize), 0,
"Number of records `fetchmany()` must fetch if not explicitly " \
"specified."},
{"itersize", T_LONG, OFFSETOF(itersize), 0,
"Number of records ``iter(cur)`` must fetch per network roundtrip."},
{"description", T_OBJECT, OFFSETOF(description), READONLY,
"Cursor description as defined in DBAPI-2.0."},
{"lastrowid", T_LONG, OFFSETOF(lastoid), READONLY,
@ -1662,9 +1666,9 @@ cursor_setup(cursorObject *self, connectionObject *conn, const char *name)
Dprintf("cursor_setup: parameters: name = %s, conn = %p", name, conn);
if (name) {
self->name = PyMem_Malloc(strlen(name)+1);
if (self->name == NULL) return 1;
strncpy(self->name, name, strlen(name)+1);
if (!(self->name = psycopg_escape_identifier_easy(name, 0))) {
return 1;
}
}
/* FIXME: why does this raise an exception on the _next_ line of code?
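
With the name escaped and quoted, a named cursor can use an otherwise invalid identifier; a quick sketch (placeholder DSN):

    import psycopg2

    conn = psycopg2.connect("dbname=test")     # placeholder DSN
    cur = conn.cursor('1-2-3 "weird" name')    # would be invalid SQL if left unquoted
    cur.execute("SELECT generate_series(1, 3)")
    print(cur.fetchall())                      # [(1,), (2,), (3,)]
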
@ -1682,6 +1686,7 @@ cursor_setup(cursorObject *self, connectionObject *conn, const char *name)
self->pgres = NULL;
self->notuples = 1;
self->arraysize = 1;
self->itersize = 2000;
self->rowcount = -1;
self->lastoid = InvalidOid;
@ -1723,7 +1728,7 @@ cursor_dealloc(PyObject* obj)
PyObject_GC_UnTrack(self);
if (self->name) PyMem_Free(self->name);
PyMem_Free(self->name);
Py_CLEAR(self->conn);
Py_CLEAR(self->casts);

View File

@ -55,7 +55,7 @@ HIDDEN PyObject *psyco_set_wait_callback(PyObject *self, PyObject *obj);
#define psyco_get_wait_callback_doc \
"Return the currently registered wait callback.\n" \
"\n" \
"Return `None` if no callback is currently registered.\n"
"Return `!None` if no callback is currently registered.\n"
HIDDEN PyObject *psyco_get_wait_callback(PyObject *self, PyObject *obj);
HIDDEN int psyco_green(void);

View File

@ -77,13 +77,13 @@ HIDDEN int lobject_close(lobjectObject *self);
#define EXC_IF_LOBJ_LEVEL0(self) \
if (self->conn->isolation_level == 0) { \
psyco_set_error(ProgrammingError, (PyObject*)self, \
psyco_set_error(ProgrammingError, NULL, \
"can't use a lobject outside of transactions", NULL, NULL); \
return NULL; \
}
#define EXC_IF_LOBJ_UNMARKED(self) \
if (self->conn->mark != self->mark) { \
psyco_set_error(ProgrammingError, (PyObject*)self, \
psyco_set_error(ProgrammingError, NULL, \
"lobject isn't valid anymore", NULL, NULL); \
return NULL; \
}

View File

@ -110,6 +110,8 @@ _lobject_parse_mode(const char *mode)
/* Return a string representing the lobject mode.
*
* The return value is a new string allocated on the Python heap.
*
* The function must be called holding the GIL.
*/
static char *
_lobject_unparse_mode(int mode)
@ -118,7 +120,10 @@ _lobject_unparse_mode(int mode)
char *c;
/* the longest is 'rwt' */
c = buf = PyMem_Malloc(4);
if (!(c = buf = PyMem_Malloc(4))) {
PyErr_NoMemory();
return NULL;
}
if (mode & LOBJECT_READ) { *c++ = 'r'; }
if (mode & LOBJECT_WRITE) { *c++ = 'w'; }
@ -204,7 +209,14 @@ lobject_open(lobjectObject *self, connectionObject *conn,
/* set the mode for future reference */
self->mode = mode;
Py_BLOCK_THREADS;
self->smode = _lobject_unparse_mode(mode);
Py_UNBLOCK_THREADS;
if (NULL == self->smode) {
retvalue = 1; /* exception already set */
goto end;
}
retvalue = 0;
end:
@ -213,6 +225,8 @@ lobject_open(lobjectObject *self, connectionObject *conn,
if (retvalue < 0)
pq_complete_error(self->conn, &pgres, &error);
/* if retvalue > 0, an exception is already set */
return retvalue;
}

View File

@ -124,10 +124,11 @@ static PyObject *
psyco_lobj_read(lobjectObject *self, PyObject *args)
{
PyObject *res;
int where, end, size = -1;
int where, end;
Py_ssize_t size = -1;
char *buffer;
if (!PyArg_ParseTuple(args, "|i", &size)) return NULL;
if (!PyArg_ParseTuple(args, "|" CONV_CODE_PY_SSIZE_T, &size)) return NULL;
EXC_IF_LOBJ_CLOSED(self);
EXC_IF_LOBJ_LEVEL0(self);
@ -331,7 +332,7 @@ lobject_setup(lobjectObject *self, connectionObject *conn,
Dprintf("lobject_setup: init lobject object at %p", self);
if (conn->isolation_level == ISOLATION_LEVEL_AUTOCOMMIT) {
psyco_set_error(ProgrammingError, (PyObject*)self,
psyco_set_error(ProgrammingError, NULL,
"can't use a lobject outside of transactions", NULL, NULL);
return -1;
}
@ -344,7 +345,7 @@ lobject_setup(lobjectObject *self, connectionObject *conn,
self->fd = -1;
self->oid = InvalidOid;
if (lobject_open(self, conn, oid, smode, new_oid, new_file) == -1)
if (0 != lobject_open(self, conn, oid, smode, new_oid, new_file))
return -1;
Dprintf("lobject_setup: good lobject object at %p, refcnt = "

View File

@ -189,10 +189,10 @@ exit:
}
long
static Py_hash_t
notify_hash(NotifyObject *self)
{
long rv = -1L;
Py_hash_t rv = -1L;
PyObject *tself = NULL;
/* if self == a tuple, then their hashes are the same. */

View File

@ -42,6 +42,9 @@
#include <string.h>
extern HIDDEN PyObject *psyco_DescriptionType;
/* Strip off the severity from a Postgres error message. */
static const char *
strip_severity(const char *msg)
@ -148,7 +151,6 @@ exception_from_sqlstate(const char *sqlstate)
static void
pq_raise(connectionObject *conn, cursorObject *curs, PGresult *pgres)
{
PyObject *pgc = (PyObject*)curs;
PyObject *exc = NULL;
const char *err = NULL;
const char *err2 = NULL;
@ -193,7 +195,7 @@ pq_raise(connectionObject *conn, cursorObject *curs, PGresult *pgres)
/* try to remove the initial "ERROR: " part from the postgresql error */
err2 = strip_severity(err);
psyco_set_error(exc, pgc, err2, err, code);
psyco_set_error(exc, curs, err2, err, code);
}
/* pq_set_critical, pq_resolve_critical - manage critical errors
@ -608,6 +610,8 @@ pq_tpc_command_locked(connectionObject *conn, const char *cmd, const char *tid,
conn->mark += 1;
PyEval_RestoreThread(*tstate);
/* convert the xid into the postgres transaction_id and quote it. */
if (!(etid = psycopg_escape_string((PyObject *)conn, tid, 0, NULL, NULL)))
{ goto exit; }
@ -621,12 +625,15 @@ pq_tpc_command_locked(connectionObject *conn, const char *cmd, const char *tid,
if (0 > PyOS_snprintf(buf, buflen, "%s %s;", cmd, etid)) { goto exit; }
/* run the command and let it handle the error cases */
*tstate = PyEval_SaveThread();
rv = pq_execute_command_locked(conn, buf, pgres, error, tstate);
PyEval_RestoreThread(*tstate);
exit:
PyMem_Free(buf);
PyMem_Free(etid);
*tstate = PyEval_SaveThread();
return rv;
}
@ -891,15 +898,19 @@ pq_get_last_result(connectionObject *conn)
1 - result from backend (possibly data is ready)
*/
static void
static int
_pq_fetch_tuples(cursorObject *curs)
{
int i, *dsize = NULL;
int pgnfields;
int pgbintuples;
int rv = -1;
PyObject *description = NULL;
PyObject *casts = NULL;
Py_BEGIN_ALLOW_THREADS;
pthread_mutex_lock(&(curs->conn->lock));
Py_END_ALLOW_THREADS;
pgnfields = PQnfields(curs->pgres);
pgbintuples = PQbinaryTuples(curs->pgres);
@ -907,20 +918,20 @@ _pq_fetch_tuples(cursorObject *curs)
curs->notuples = 0;
/* create the tuple for description and typecasting */
Py_BLOCK_THREADS;
Py_XDECREF(curs->description);
Py_XDECREF(curs->casts);
curs->description = PyTuple_New(pgnfields);
curs->casts = PyTuple_New(pgnfields);
Py_CLEAR(curs->description);
Py_CLEAR(curs->casts);
if (!(description = PyTuple_New(pgnfields))) { goto exit; }
if (!(casts = PyTuple_New(pgnfields))) { goto exit; }
curs->columns = pgnfields;
Py_UNBLOCK_THREADS;
/* calculate the display size for each column (cpu intensive, can be
switched off at configuration time) */
#ifdef PSYCOPG_DISPLAY_SIZE
Py_BLOCK_THREADS;
dsize = (int *)PyMem_Malloc(pgnfields * sizeof(int));
Py_UNBLOCK_THREADS;
if (!(dsize = PyMem_New(int, pgnfields))) {
PyErr_NoMemory();
goto exit;
}
Py_BEGIN_ALLOW_THREADS;
if (dsize != NULL) {
int j, len;
for (i=0; i < pgnfields; i++) {
@ -933,6 +944,7 @@ _pq_fetch_tuples(cursorObject *curs)
}
}
}
Py_END_ALLOW_THREADS;
#endif
/* calculate various parameters and typecasters */
@ -941,14 +953,11 @@ _pq_fetch_tuples(cursorObject *curs)
int fsize = PQfsize(curs->pgres, i);
int fmod = PQfmod(curs->pgres, i);
PyObject *dtitem;
PyObject *type;
PyObject *dtitem = NULL;
PyObject *type = NULL;
PyObject *cast = NULL;
Py_BLOCK_THREADS;
dtitem = PyTuple_New(7);
PyTuple_SET_ITEM(curs->description, i, dtitem);
if (!(dtitem = PyTuple_New(7))) { goto exit; }
/* fill the right cast function by accessing three different dictionaries:
- the per-cursor dictionary, if available (can be NULL or None)
@ -956,7 +965,9 @@ _pq_fetch_tuples(cursorObject *curs)
- the global dictionary (at module level)
if we get no defined cast use the default one */
type = PyInt_FromLong(ftype);
if (!(type = PyInt_FromLong(ftype))) {
goto err_for;
}
Dprintf("_pq_fetch_tuples: looking for cast %d:", ftype);
cast = curs_get_cast(curs, type);
@ -975,16 +986,25 @@ _pq_fetch_tuples(cursorObject *curs)
cast, Bytes_AS_STRING(((typecastObject*)cast)->name),
PQftype(curs->pgres,i));
Py_INCREF(cast);
PyTuple_SET_ITEM(curs->casts, i, cast);
PyTuple_SET_ITEM(casts, i, cast);
/* 1/ fill the other fields */
PyTuple_SET_ITEM(dtitem, 0,
conn_text_from_chars(curs->conn, PQfname(curs->pgres, i)));
{
PyObject *tmp;
if (!(tmp = conn_text_from_chars(
curs->conn, PQfname(curs->pgres, i)))) {
goto err_for;
}
PyTuple_SET_ITEM(dtitem, 0, tmp);
}
PyTuple_SET_ITEM(dtitem, 1, type);
type = NULL;
/* 2/ display size is the maximum size of this field result tuples. */
if (dsize && dsize[i] >= 0) {
PyTuple_SET_ITEM(dtitem, 2, PyInt_FromLong(dsize[i]));
PyObject *tmp;
if (!(tmp = PyInt_FromLong(dsize[i]))) { goto err_for; }
PyTuple_SET_ITEM(dtitem, 2, tmp);
}
else {
Py_INCREF(Py_None);
@ -995,21 +1015,35 @@ _pq_fetch_tuples(cursorObject *curs)
if (fmod > 0) fmod = fmod - sizeof(int);
if (fsize == -1) {
if (ftype == NUMERICOID) {
PyTuple_SET_ITEM(dtitem, 3,
PyInt_FromLong((fmod >> 16) & 0xFFFF));
PyObject *tmp;
if (!(tmp = PyInt_FromLong((fmod >> 16)))) { goto err_for; }
PyTuple_SET_ITEM(dtitem, 3, tmp);
}
else { /* If variable length record, return maximum size */
PyTuple_SET_ITEM(dtitem, 3, PyInt_FromLong(fmod));
PyObject *tmp;
if (!(tmp = PyInt_FromLong(fmod))) { goto err_for; }
PyTuple_SET_ITEM(dtitem, 3, tmp);
}
}
else {
PyTuple_SET_ITEM(dtitem, 3, PyInt_FromLong(fsize));
PyObject *tmp;
if (!(tmp = PyInt_FromLong(fsize))) { goto err_for; }
PyTuple_SET_ITEM(dtitem, 3, tmp);
}
/* 4,5/ scale and precision */
if (ftype == NUMERICOID) {
PyTuple_SET_ITEM(dtitem, 4, PyInt_FromLong((fmod >> 16) & 0xFFFF));
PyTuple_SET_ITEM(dtitem, 5, PyInt_FromLong(fmod & 0xFFFF));
PyObject *tmp;
if (!(tmp = PyInt_FromLong((fmod >> 16) & 0xFFFF))) {
goto err_for;
}
PyTuple_SET_ITEM(dtitem, 4, tmp);
if (!(tmp = PyInt_FromLong(fmod & 0xFFFF))) {
goto err_for;
}
PyTuple_SET_ITEM(dtitem, 5, tmp);
}
else {
Py_INCREF(Py_None);
@ -1021,18 +1055,40 @@ _pq_fetch_tuples(cursorObject *curs)
/* 6/ FIXME: null_ok??? */
Py_INCREF(Py_None);
PyTuple_SET_ITEM(dtitem, 6, Py_None);
Py_UNBLOCK_THREADS;
/* Convert into a namedtuple if available */
if (Py_None != psyco_DescriptionType) {
PyObject *tmp = dtitem;
dtitem = PyObject_CallObject(psyco_DescriptionType, tmp);
Py_DECREF(tmp);
if (NULL == dtitem) { goto err_for; }
}
PyTuple_SET_ITEM(description, i, dtitem);
dtitem = NULL;
continue;
err_for:
Py_XDECREF(type);
Py_XDECREF(dtitem);
goto exit;
}
if (dsize) {
Py_BLOCK_THREADS;
PyMem_Free(dsize);
Py_UNBLOCK_THREADS;
}
curs->description = description; description = NULL;
curs->casts = casts; casts = NULL;
rv = 0;
exit:
PyMem_Free(dsize);
Py_XDECREF(description);
Py_XDECREF(casts);
Py_BEGIN_ALLOW_THREADS;
pthread_mutex_unlock(&(curs->conn->lock));
Py_END_ALLOW_THREADS;
return rv;
}
static int
@ -1063,10 +1119,10 @@ _pq_copy_in_v3(cursorObject *curs)
break;
}
/* a file may return unicode in Py3: encode in client encoding. */
#if PY_MAJOR_VERSION > 2
/* a file may return unicode if implements io.TextIOBase */
if (PyUnicode_Check(o)) {
PyObject *tmp;
Dprintf("_pq_copy_in_v3: encoding in %s", curs->conn->codec);
if (!(tmp = PyUnicode_AsEncodedString(o, curs->conn->codec, NULL))) {
Dprintf("_pq_copy_in_v3: encoding() failed");
error = 1;
@ -1075,7 +1131,6 @@ _pq_copy_in_v3(cursorObject *curs)
Py_DECREF(o);
o = tmp;
}
#endif
if (!Bytes_Check(o)) {
Dprintf("_pq_copy_in_v3: got %s instead of bytes",
@ -1296,7 +1351,7 @@ pq_fetch(cursorObject *curs)
case PGRES_TUPLES_OK:
Dprintf("pq_fetch: data from a SELECT (got tuples)");
curs->rowcount = PQntuples(curs->pgres);
_pq_fetch_tuples(curs); ex = 0;
if (0 == _pq_fetch_tuples(curs)) { ex = 0; }
/* don't clear curs->pgres, because it contains the results! */
break;

View File

@ -44,6 +44,8 @@ HIDDEN int pq_commit(connectionObject *conn);
HIDDEN int pq_abort_locked(connectionObject *conn, PGresult **pgres,
char **error, PyThreadState **tstate);
HIDDEN int pq_abort(connectionObject *conn);
HIDDEN int pq_reset_locked(connectionObject *conn, PGresult **pgres,
char **error, PyThreadState **tstate);
HIDDEN int pq_reset(connectionObject *conn);
HIDDEN int pq_tpc_command_locked(connectionObject *conn,
const char *cmd, const char *tid,

View File

@ -105,6 +105,9 @@ import_psycopg(void)
/* postgresql<->python encoding map */
extern HIDDEN PyObject *psycoEncodings;
/* SQL NULL */
extern HIDDEN PyObject *psyco_null;
typedef struct {
char *pgenc;
char *pyenc;
@ -113,13 +116,16 @@ typedef struct {
/* the Decimal type, used by the DECIMAL typecaster */
HIDDEN PyObject *psyco_GetDecimalType(void);
/* forward declaration */
typedef struct cursorObject cursorObject;
/* some utility functions */
HIDDEN void psyco_set_error(PyObject *exc, PyObject *curs, const char *msg,
HIDDEN void psyco_set_error(PyObject *exc, cursorObject *curs, const char *msg,
const char *pgerror, const char *pgcode);
HIDDEN char *psycopg_escape_string(PyObject *conn,
const char *from, Py_ssize_t len, char *to, Py_ssize_t *tolen);
HIDDEN char *psycopg_escape_identifier_easy(const char *from, Py_ssize_t len);
HIDDEN char *psycopg_strdup(const char *from, Py_ssize_t len);
HIDDEN PyObject * psycopg_ensure_bytes(PyObject *obj);
HIDDEN PyObject * psycopg_ensure_text(PyObject *obj);

View File

@ -66,6 +66,12 @@ HIDDEN PyObject *psycoEncodings = NULL;
HIDDEN int psycopg_debug_enabled = 0;
#endif
/* Python representation of SQL NULL */
HIDDEN PyObject *psyco_null = NULL;
/* The type of the cursor.description items */
HIDDEN PyObject *psyco_DescriptionType = NULL;
/** connect module-level function **/
#define psyco_connect_doc \
"connect(dsn, ...) -- Create a new database connection.\n\n" \
@ -258,7 +264,7 @@ psyco_connect(PyObject *self, PyObject *args, PyObject *keywds)
" * `name`: Name for the new type\n" \
" * `adapter`: Callable to perform type conversion.\n" \
" It must have the signature ``fun(value, cur)`` where ``value`` is\n" \
" the string representation returned by PostgreSQL (`None` if ``NULL``)\n" \
" the string representation returned by PostgreSQL (`!None` if ``NULL``)\n" \
" and ``cur`` is the cursor from which data are read."
static void
@ -315,17 +321,26 @@ psyco_adapters_init(PyObject *mod)
microprotocols_add(&PyLong_Type, NULL, (PyObject*)&asisType);
microprotocols_add(&PyBool_Type, NULL, (PyObject*)&pbooleanType);
/* strings */
#if PY_MAJOR_VERSION < 3
microprotocols_add(&PyString_Type, NULL, (PyObject*)&qstringType);
#endif
microprotocols_add(&PyUnicode_Type, NULL, (PyObject*)&qstringType);
/* binary */
#if PY_MAJOR_VERSION < 3
microprotocols_add(&PyBuffer_Type, NULL, (PyObject*)&binaryType);
#else
microprotocols_add(&PyBytes_Type, NULL, (PyObject*)&binaryType);
#endif
#if PY_MAJOR_VERSION >= 3 || PY_MINOR_VERSION >= 6
microprotocols_add(&PyByteArray_Type, NULL, (PyObject*)&binaryType);
#endif
#if PY_MAJOR_VERSION >= 3 || PY_MINOR_VERSION >= 7
microprotocols_add(&PyMemoryView_Type, NULL, (PyObject*)&binaryType);
#endif
microprotocols_add(&PyList_Type, NULL, (PyObject*)&listType);
if ((type = (PyTypeObject*)psyco_GetDecimalType()) != NULL)
@ -574,18 +589,31 @@ psyco_errors_set(PyObject *type)
Create a new error of the given type with extra attributes. */
void
psyco_set_error(PyObject *exc, PyObject *curs, const char *msg,
psyco_set_error(PyObject *exc, cursorObject *curs, const char *msg,
const char *pgerror, const char *pgcode)
{
PyObject *t;
PyObject *pymsg;
PyObject *err = NULL;
connectionObject *conn = NULL;
PyObject *err = PyObject_CallFunction(exc, "s", msg);
if (curs) {
conn = ((cursorObject *)curs)->conn;
}
if ((pymsg = conn_text_from_chars(conn, msg))) {
err = PyObject_CallFunctionObjArgs(exc, pymsg, NULL);
Py_DECREF(pymsg);
}
else {
/* what's better than an error in an error handler in the morning?
* Anyway, some error was set, refcount is ok... get outta here. */
return;
}
if (err) {
connectionObject *conn = NULL;
if (curs) {
PyObject_SetAttrString(err, "cursor", curs);
conn = ((cursorObject *)curs)->conn;
PyObject_SetAttrString(err, "cursor", (PyObject *)curs);
}
if (pgerror) {
@ -673,6 +701,44 @@ psyco_GetDecimalType(void)
}
/* Create a namedtuple for cursor.description items
*
* Return None in case of expected errors (e.g. namedtuples not available)
* NULL in case of errors to propagate.
*/
static PyObject *
psyco_make_description_type(void)
{
PyObject *nt = NULL;
PyObject *coll = NULL;
PyObject *rv = NULL;
/* Try to import collections.namedtuple */
if (!(coll = PyImport_ImportModule("collections"))) {
Dprintf("psyco_make_description_type: collections import failed");
PyErr_Clear();
rv = Py_None;
goto exit;
}
if (!(nt = PyObject_GetAttrString(coll, "namedtuple"))) {
Dprintf("psyco_make_description_type: no collections.namedtuple");
PyErr_Clear();
rv = Py_None;
goto exit;
}
/* Build the namedtuple */
rv = PyObject_CallFunction(nt, "ss", "Column",
"name type_code display_size internal_size precision scale null_ok");
exit:
Py_XDECREF(coll);
Py_XDECREF(nt);
return rv;
}
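
A sketch of what the Column namedtuple provides when collections.namedtuple is importable (placeholder DSN):

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()
    cur.execute("SELECT 1 AS foo")
    col = cur.description[0]
    print(col.name, col.type_code)           # foo 23 (the int4 oid)
    print(col[0] == col.name)                # True: still indexable as a tuple
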
/** method table and module initialization **/
static PyMethodDef psycopgMethods[] = {
@ -873,6 +939,8 @@ INIT_MODULE(_psycopg)(void)
/* other mixed initializations of module-level variables */
psycoEncodings = PyDict_New();
psyco_encodings_fill(psycoEncodings);
psyco_null = Bytes_FromString("NULL");
psyco_DescriptionType = psyco_make_description_type();
/* set some module's parameters */
PyModule_AddStringConstant(module, "__version__", PSYCOPG_VERSION);

View File

@ -54,6 +54,15 @@
#define CONV_CODE_PY_SSIZE_T "n"
#endif
/* hash() return size changed around version 3.2a4 on 64bit platforms. Before
* this, the return size was always a long, regardless of arch. ~3.2
* introduced the Py_hash_t & Py_uhash_t typedefs with the resulting sizes
* based upon arch. */
#if PY_VERSION_HEX < 0x030200A4
typedef long Py_hash_t;
typedef unsigned long Py_uhash_t;
#endif
/* Macros defined in Python 2.6 */
#ifndef Py_REFCNT
#define Py_REFCNT(ob) (((PyObject*)(ob))->ob_refcnt)

View File

@ -177,6 +177,28 @@ typecast_parse_time(const char* s, const char** t, Py_ssize_t* len,
#endif
#include "psycopg/typecast_array.c"
static long int typecast_default_DEFAULT[] = {0};
static typecastObject_initlist typecast_default = {
"DEFAULT", typecast_default_DEFAULT, typecast_STRING_cast};
static PyObject *
typecast_UNKNOWN_cast(const char *str, Py_ssize_t len, PyObject *curs)
{
Dprintf("typecast_UNKNOWN_cast: str = '%s',"
" len = " FORMAT_CODE_PY_SSIZE_T, str, len);
// PostgreSQL returns {} for empty array without explicit type. We convert
// that to list in order to handle empty lists.
if (len == 2 && str[0] == '{' && str[1] == '}') {
return PyList_New(0);
}
Dprintf("typecast_UNKNOWN_cast: fallback to default cast");
return typecast_default.cast(str, len, curs);
}
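
A sketch of the effect, assuming an open connection: an untyped '{}' literal (oid 705, "unknown") now comes back as an empty list, while other unknown values still fall back to strings.

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder DSN
    cur = conn.cursor()
    cur.execute("SELECT '{}'")
    print(cur.fetchone()[0])                 # []
    cur.execute("SELECT 'hello'")
    print(cur.fetchone()[0])                 # 'hello'
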
#include "psycopg/typecast_builtins.c"
#define typecast_PYDATETIMEARRAY_cast typecast_GENERIC_ARRAY_cast
@ -225,10 +247,6 @@ PyObject *psyco_default_cast;
PyObject *psyco_binary_types;
PyObject *psyco_default_binary_cast;
static long int typecast_default_DEFAULT[] = {0};
static typecastObject_initlist typecast_default = {
"DEFAULT", typecast_default_DEFAULT, typecast_STRING_cast};
/* typecast_init - initialize the dictionary and create default types */

View File

@ -135,7 +135,10 @@ typecast_array_tokenize(const char *str, Py_ssize_t strlength,
if (res == ASCAN_QUOTED) {
Py_ssize_t j;
char *buffer = PyMem_Malloc(l+1);
if (buffer == NULL) return ASCAN_ERROR;
if (buffer == NULL) {
PyErr_NoMemory();
return ASCAN_ERROR;
}
*token = buffer;

View File

@ -166,6 +166,19 @@ typecast_BINARY_cast(const char *s, Py_ssize_t l, PyObject *curs)
goto fail;
}
/* Check the escaping was successful */
if (s[0] == '\\' && s[1] == 'x' /* input encoded in hex format */
&& str[0] == 'x' /* output resulted in an 'x' */
&& s[2] != '7' && s[3] != '8') /* input wasn't really an x (0x78) */
{
PyErr_SetString(InterfaceError,
"can't receive bytea data from server >= 9.0 with the current "
"libpq client library: please update the libpq to at least 9.0 "
"or set bytea_output to 'escape' in the server config "
"or with a query");
goto fail;
}
chunk = (chunkObject *) PyObject_New(chunkObject, &chunkType);
if (chunk == NULL) goto fail;
@ -201,10 +214,8 @@ typecast_BINARY_cast(const char *s, Py_ssize_t l, PyObject *curs)
/* str's mem was allocated by PQunescapeBytea; must use PQfreemem: */
PQfreemem(str);
}
if (buffer != NULL) {
/* We allocated buffer with PyMem_Malloc; must use PyMem_Free: */
PyMem_Free(buffer);
}
/* We allocated buffer with PyMem_Malloc; must use PyMem_Free: */
PyMem_Free(buffer);
return res;
}

View File

@ -25,6 +25,7 @@ static long int typecast_DATEARRAY_types[] = {1182, 0};
static long int typecast_INTERVALARRAY_types[] = {1187, 0};
static long int typecast_BINARYARRAY_types[] = {1001, 0};
static long int typecast_ROWIDARRAY_types[] = {1028, 1013, 0};
static long int typecast_UNKNOWN_types[] = {705, 0};
static typecastObject_initlist typecast_builtins[] = {
@ -55,6 +56,7 @@ static typecastObject_initlist typecast_builtins[] = {
{"INTERVALARRAY", typecast_INTERVALARRAY_types, typecast_INTERVALARRAY_cast, "INTERVAL"},
{"BINARYARRAY", typecast_BINARYARRAY_types, typecast_BINARYARRAY_cast, "BINARY"},
{"ROWIDARRAY", typecast_ROWIDARRAY_types, typecast_ROWIDARRAY_cast, "ROWID"},
{"UNKNOWN", typecast_UNKNOWN_types, typecast_UNKNOWN_cast, NULL},
{NULL, NULL, NULL, NULL}
};

View File

@ -71,6 +71,43 @@ psycopg_escape_string(PyObject *obj, const char *from, Py_ssize_t len,
return to;
}
/* Escape a string to build a valid PostgreSQL identifier
*
* Allocate a new buffer on the Python heap containing the new string.
* 'len' is optional: if 0 the length is calculated.
*
* The returned string doesn't include quotes.
*
* WARNING: this function is not safe enough to handle untrusted input: it does
* not check for multibyte chars. Such a function should be built on
* PQescapeIdentifier, which is only available from PostgreSQL 9.0.
*/
char *
psycopg_escape_identifier_easy(const char *from, Py_ssize_t len)
{
char *rv;
const char *src;
char *dst;
if (!len) { len = strlen(from); }
if (!(rv = PyMem_New(char, 1 + 2 * len))) {
PyErr_NoMemory();
return NULL;
}
/* The only thing to do is double quotes */
for (src = from, dst = rv; *src; ++src, ++dst) {
*dst = *src;
if ('"' == *src) {
*++dst = '"';
}
}
*dst = '\0';
return rv;
}
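
The same doubling rule, sketched in Python for reference (the function above returns the escaped name without the surrounding quotes):

    name = 'my "weird" cursor'
    escaped = name.replace('"', '""')
    print('DECLARE "%s" CURSOR WITHOUT HOLD FOR ...' % escaped)
    # DECLARE "my ""weird"" cursor" CURSOR WITHOUT HOLD FOR ...
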
/* Duplicate a string.
*
* Allocate a new buffer on the Python heap containing the new string.

View File

@ -43,7 +43,7 @@ static const char xid_doc[] =
static const char format_id_doc[] =
"Format ID in a XA transaction.\n\n"
"A non-negative 32 bit integer.\n"
"`None` if the transaction doesn't follow the XA standard.";
"`!None` if the transaction doesn't follow the XA standard.";
static const char gtrid_doc[] =
"Global transaction ID in a XA transaction.\n\n"
@ -54,7 +54,7 @@ static const char bqual_doc[] =
"Branch qualifier of the transaction.\n\n"
"In a XA transaction every resource participating to a transaction\n"
"receives a distinct branch qualifier.\n"
"`None` if the transaction doesn't follow the XA standard.";
"`!None` if the transaction doesn't follow the XA standard.";
static const char prepared_doc[] =
"Timestamp (with timezone) in which a recovered transaction was prepared.";
@ -100,7 +100,8 @@ static int
xid_init(XidObject *self, PyObject *args, PyObject *kwargs)
{
static char *kwlist[] = {"format_id", "gtrid", "bqual", NULL};
int format_id, i, gtrid_len, bqual_len;
int format_id;
size_t i, gtrid_len, bqual_len;
const char *gtrid, *bqual;
PyObject *tmp;
@ -269,7 +270,7 @@ static const char xid_from_string_doc[] =
"the returned object will have `format_id`, `gtrid`, `bqual` set to\n"
"the values of the preparing XA id.\n"
"Otherwise only the `!gtrid` is populated with the unparsed string.\n"
"The operation is the inverse of the one performed by ``str(xid)``.";
"The operation is the inverse of the one performed by `!str(xid)`.";
static PyObject *
xid_from_string_method(PyObject *cls, PyObject *args)
@ -436,7 +437,6 @@ _xid_decode64(PyObject *s)
* in order to allow some form of interoperation.
*
* The function must be called while holding the GIL.
* Return a buffer allocated with PyMem_Malloc. Use PyMem_Free to free it.
*
* see also: the pgjdbc implementation
* http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/jdbc/pgjdbc/org/postgresql/xa/RecoveredXid.java?rev=1.2

View File

@ -89,7 +89,6 @@
<None Include="psycopg\microprotocols.h" />
<None Include="psycopg\microprotocols_proto.h" />
<None Include="psycopg\pgtypes.h" />
<None Include="psycopg\pgversion.h" />
<None Include="psycopg\pqpath.h" />
<None Include="psycopg\psycopg.h" />
<None Include="psycopg\python.h" />
@ -198,12 +197,11 @@
<None Include="psycopg\green.h" />
<None Include="doc\src\pool.rst" />
<None Include="sandbox\dec2float.py" />
<None Include="NEWS-2.0" />
<None Include="psycopg\notify.h" />
<None Include="psycopg\xid.h" />
<None Include="tests\dbapi20_tpc.py" />
<None Include="tests\test_cursor.py" />
<None Include="NEWS-2.3" />
<None Include="NEWS" />
</ItemGroup>
<ItemGroup>
<Compile Include="psycopg\adapter_asis.c" />

View File

@ -19,19 +19,22 @@ Global
Policies = $0
$0.TextStylePolicy = $1
$1.FileWidth = 120
$1.NoTabsAfterNonTabs = False
$1.TabWidth = 4
$1.inheritsSet = Mono
$1.inheritsScope = text/plain
$0.DotNetNamingPolicy = $2
$2.DirectoryNamespaceAssociation = None
$2.ResourceNamePolicy = FileName
$0.TextStylePolicy = $3
$3.NoTabsAfterNonTabs = False
$3.inheritsSet = Mono
$3.inheritsScope = text/x-python
$3.scope = text/plain
$0.StandardHeader = $4
$4.Text =
$4.IncludeInNewFiles = False
$4.inheritsSet = MITX11License
$0.StandardHeader = $3
$3.Text =
$3.IncludeInNewFiles = False
$0.TextStylePolicy = $4
$4.FileWidth = 72
$4.NoTabsAfterNonTabs = True
$4.RemoveTrailingWhitespace = True
$4.inheritsSet = VisualStudio
$4.inheritsScope = text/plain
$4.scope = text/x-readme
name = psycopg2
EndGlobalSection
EndGlobal

View File

@ -56,6 +56,10 @@ from distutils.sysconfig import get_python_inc
from distutils.ccompiler import get_default_compiler
from distutils.dep_util import newer_group
from distutils.util import get_platform
try:
from distutils.msvc9compiler import MSVCCompiler
except ImportError:
MSVCCompiler = None
try:
from distutils.command.build_py import build_py_2to3 as build_py
except ImportError:
@ -75,7 +79,7 @@ except ImportError:
# Take a look at http://www.python.org/dev/peps/pep-0386/
# for a consistent versioning pattern.
PSYCOPG_VERSION = '2.4-beta2'
PSYCOPG_VERSION = '2.4'
version_flags = ['dt', 'dec']
@ -151,14 +155,19 @@ class psycopg_build_ext(build_ext):
def get_pg_config(self, kind):
return get_pg_config(kind, self.pg_config)
def get_export_symbols(self, ext):
# Fix MSVC seeing two of the same export symbols.
if self.get_compiler().lower().startswith('msvc'):
return []
else:
return build_ext.get_export_symbols(self, ext)
def build_extension(self, ext):
build_ext.build_extension(self, ext)
# For MSVC compiler and Python 2.6/2.7 (aka VS 2008), re-insert the
# Manifest into the resulting .pyd file.
sysVer = sys.version_info[:2]
if self.get_compiler().lower().startswith('msvc') and \
sysVer in ((2,6), (2,7)):
# For Python versions that use MSVC compiler 2008, re-insert the
# manifest into the resulting .pyd file.
if MSVCCompiler and isinstance(self.compiler, MSVCCompiler):
platform = get_platform()
# Default to the x86 manifest
manifest = '_psycopg.vc9.x86.manifest'
@ -179,13 +188,7 @@ class psycopg_build_ext(build_ext):
compiler_name = self.get_compiler().lower()
compiler_is_msvc = compiler_name.startswith('msvc')
compiler_is_mingw = compiler_name.startswith('mingw')
if compiler_is_msvc:
# If we're using MSVC 7.1 or later on a 32-bit platform, add the
# /Wp64 option to generate warnings about Win64 portability
# problems.
if sysVer >= (2,4) and struct.calcsize('P') == 4:
extra_compiler_args.append('/Wp64')
elif compiler_is_mingw:
if compiler_is_mingw:
# Default MinGW compilation of Python extensions on Windows uses
# only -O:
extra_compiler_args.append('-O3')
@ -504,6 +507,15 @@ ext.append(Extension("psycopg2._psycopg", sources,
include_dirs=include_dirs,
depends=depends,
undef_macros=[]))
# Compute the direct download url.
# Note that the current package installation programs are stupidly intelligent
# and will try to install a beta if they find a link in the homepage instead of
# using these pretty metadata. But that's their problem, not ours.
download_url = (
"http://initd.org/psycopg/tarballs/PSYCOPG-%s/psycopg2-%s.tar.gz"
% ('-'.join(PSYCOPG_VERSION.split('.')[:2]), PSYCOPG_VERSION))
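
A worked example of the expression above with PSYCOPG_VERSION = '2.4':

    print('-'.join('2.4'.split('.')[:2]))
    # '2-4' -> http://initd.org/psycopg/tarballs/PSYCOPG-2-4/psycopg2-2.4.tar.gz
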
setup(name="psycopg2",
version=PSYCOPG_VERSION,
maintainer="Federico Di Gregorio",
@ -511,7 +523,7 @@ setup(name="psycopg2",
author="Federico Di Gregorio",
author_email="fog@initd.org",
url="http://initd.org/psycopg/",
download_url = "http://initd.org/psycopg/download/",
download_url = download_url,
license="GPL with exceptions or ZPL",
platforms = ["any"],
description=__doc__.split("\n")[0],

View File

@ -16,7 +16,7 @@
import psycopg2
import psycopg2.extras
from testutils import unittest
from testutils import unittest, skip_if_no_namedtuple
from testconfig import dsn
@ -112,18 +112,6 @@ class ExtrasDictCursorTests(unittest.TestCase):
self.failUnless(row[0] == 'qux')
def if_has_namedtuple(f):
def if_has_namedtuple_(self):
try:
from collections import namedtuple
except ImportError:
return self.skipTest("collections.namedtuple not available")
else:
return f(self)
if_has_namedtuple_.__name__ = f.__name__
return if_has_namedtuple_
class NamedTupleCursorTest(unittest.TestCase):
def setUp(self):
from psycopg2.extras import NamedTupleConnection
@ -147,7 +135,7 @@ class NamedTupleCursorTest(unittest.TestCase):
if self.conn is not None:
self.conn.close()
@if_has_namedtuple
@skip_if_no_namedtuple
def test_fetchone(self):
curs = self.conn.cursor()
curs.execute("select * from nttest where i = 1")
@ -157,7 +145,7 @@ class NamedTupleCursorTest(unittest.TestCase):
self.assertEqual(t[1], 'foo')
self.assertEqual(t.s, 'foo')
@if_has_namedtuple
@skip_if_no_namedtuple
def test_fetchmany(self):
curs = self.conn.cursor()
curs.execute("select * from nttest order by 1")
@ -168,7 +156,7 @@ class NamedTupleCursorTest(unittest.TestCase):
self.assertEqual(res[1].i, 2)
self.assertEqual(res[1].s, 'bar')
@if_has_namedtuple
@skip_if_no_namedtuple
def test_fetchall(self):
curs = self.conn.cursor()
curs.execute("select * from nttest order by 1")
@ -181,7 +169,7 @@ class NamedTupleCursorTest(unittest.TestCase):
self.assertEqual(res[2].i, 3)
self.assertEqual(res[2].s, 'baz')
@if_has_namedtuple
@skip_if_no_namedtuple
def test_iter(self):
curs = self.conn.cursor()
curs.execute("select * from nttest order by 1")
@ -219,7 +207,7 @@ class NamedTupleCursorTest(unittest.TestCase):
# skip the test
pass
@if_has_namedtuple
@skip_if_no_namedtuple
def test_record_updated(self):
curs = self.conn.cursor()
curs.execute("select 1 as foo;")
@ -231,7 +219,7 @@ class NamedTupleCursorTest(unittest.TestCase):
self.assertEqual(r.bar, 2)
self.assertRaises(AttributeError, getattr, r, 'foo')
@if_has_namedtuple
@skip_if_no_namedtuple
def test_no_result_no_surprise(self):
curs = self.conn.cursor()
curs.execute("update nttest set s = s")
@ -240,7 +228,7 @@ class NamedTupleCursorTest(unittest.TestCase):
curs.execute("update nttest set s = s")
self.assertRaises(psycopg2.ProgrammingError, curs.fetchall)
@if_has_namedtuple
@skip_if_no_namedtuple
def test_minimal_generation(self):
# Instrument the class to verify it gets called the minimum number of times.
from psycopg2.extras import NamedTupleCursor

View File

@ -23,7 +23,7 @@
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU Lesser General Public
# License for more details.
from testutils import unittest, skip_if_no_pg_sleep
from testutils import unittest, skip_before_postgres
import psycopg2
from psycopg2 import extensions
@ -113,7 +113,7 @@ class AsyncTests(unittest.TestCase):
self.assertFalse(self.conn.isexecuting())
self.assertEquals(cur.fetchone()[0], "a")
@skip_if_no_pg_sleep('conn')
@skip_before_postgres(8, 2)
def test_async_callproc(self):
cur = self.conn.cursor()
cur.callproc("pg_sleep", (0.1, ))

View File

@ -31,7 +31,7 @@ import psycopg2.extensions
from psycopg2 import extras
from testconfig import dsn
from testutils import unittest, skip_if_no_pg_sleep
from testutils import unittest, skip_before_postgres
class CancelTests(unittest.TestCase):
@ -50,7 +50,7 @@ class CancelTests(unittest.TestCase):
def test_empty_cancel(self):
self.conn.cancel()
@skip_if_no_pg_sleep('conn')
@skip_before_postgres(8, 2)
def test_cancel(self):
errors = []
@ -86,7 +86,7 @@ class CancelTests(unittest.TestCase):
self.assertEqual(errors, [])
@skip_if_no_pg_sleep('conn')
@skip_before_postgres(8, 2)
def test_async_cancel(self):
async_conn = psycopg2.connect(dsn, async=True)
self.assertRaises(psycopg2.OperationalError, async_conn.cancel)

View File

@ -24,7 +24,7 @@
import time
import threading
from testutils import unittest, decorate_all_tests, skip_if_no_pg_sleep
from testutils import unittest, decorate_all_tests, skip_before_postgres
from operator import attrgetter
import psycopg2
@ -114,12 +114,12 @@ class ConnectionTests(unittest.TestCase):
self.assertRaises(psycopg2.NotSupportedError,
cnn.xid, 42, "foo", "bar")
@skip_if_no_pg_sleep('conn')
@skip_before_postgres(8, 2)
def test_concurrent_execution(self):
def slave():
cnn = psycopg2.connect(dsn)
cur = cnn.cursor()
cur.execute("select pg_sleep(2)")
cur.execute("select pg_sleep(4)")
cur.close()
cnn.close()
@ -130,7 +130,7 @@ class ConnectionTests(unittest.TestCase):
t2.start()
t1.join()
t2.join()
self.assert_(time.time() - t0 < 3,
self.assert_(time.time() - t0 < 7,
"something broken in concurrency")
def test_encoding_name(self):

View File

@ -78,7 +78,7 @@ class CopyTests(unittest.TestCase):
curs = self.conn.cursor()
curs.execute('''
CREATE TEMPORARY TABLE tcopy (
id int PRIMARY KEY,
id serial PRIMARY KEY,
data text
)''')
@ -180,6 +180,39 @@ class CopyTests(unittest.TestCase):
f.seek(0)
self.assertEqual(f.readline().rstrip(), about)
@skip_if_no_iobase
def test_copy_expert_textiobase(self):
self.conn.set_client_encoding('latin1')
self._create_temp_table() # the above call closed the xn
if sys.version_info[0] < 3:
abin = ''.join(map(chr, range(32, 127) + range(160, 256)))
abin = abin.decode('latin1')
about = abin.replace('\\', '\\\\')
else:
abin = bytes(list(range(32, 127)) + list(range(160, 256))).decode('latin1')
about = abin.replace('\\', '\\\\')
import io
f = io.StringIO()
f.write(about)
f.seek(0)
curs = self.conn.cursor()
psycopg2.extensions.register_type(
psycopg2.extensions.UNICODE, curs)
curs.copy_expert('COPY tcopy (data) FROM STDIN', f)
curs.execute("select data from tcopy;")
self.assertEqual(curs.fetchone()[0], abin)
f = io.StringIO()
curs.copy_expert('COPY tcopy (data) TO STDOUT', f)
f.seek(0)
self.assertEqual(f.readline().rstrip(), about)
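For reference, a minimal sketch of the behaviour exercised above, not part of the diff: 'copy_expert()' can exchange decoded unicode with any file implementing 'io.TextIOBase', such as 'io.StringIO'. The dsn and the 'tcopy' table below are assumptions.
import io
import psycopg2
conn = psycopg2.connect("dbname=psycopg2_test")   # assumed test database
curs = conn.cursor()
buf = io.StringIO()                               # text-mode file object
curs.copy_expert("COPY tcopy (data) TO STDOUT", buf)
buf.seek(0)
print(buf.read())                                 # unicode data, already decoded
conn.close()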
def _copy_from(self, curs, nrecs, srec, copykw):
f = StringIO()
for i, c in izip(xrange(nrecs), cycle(string.ascii_letters)):

View File

@ -23,11 +23,11 @@
# License for more details.
import time
import unittest
import psycopg2
import psycopg2.extensions
from psycopg2.extensions import b
from testconfig import dsn
from testutils import unittest, skip_before_postgres, skip_if_no_namedtuple
class CursorTests(unittest.TestCase):
@ -91,6 +91,17 @@ class CursorTests(unittest.TestCase):
self.assertEqual(b('SELECT 10.3;'),
cur.mogrify("SELECT %s;", (Decimal("10.3"),)))
def test_bad_placeholder(self):
cur = self.conn.cursor()
self.assertRaises(psycopg2.ProgrammingError,
cur.mogrify, "select %(foo", {})
self.assertRaises(psycopg2.ProgrammingError,
cur.mogrify, "select %(foo", {'foo': 1})
self.assertRaises(psycopg2.ProgrammingError,
cur.mogrify, "select %(foo, %(bar)", {'foo': 1})
self.assertRaises(psycopg2.ProgrammingError,
cur.mogrify, "select %(foo, %(bar)", {'foo': 1, 'bar': 2})
def test_cast(self):
curs = self.conn.cursor()
@ -130,6 +141,18 @@ class CursorTests(unittest.TestCase):
del curs
self.assert_(w() is None)
def test_invalid_name(self):
curs = self.conn.cursor()
curs.execute("create temp table invname (data int);")
for i in (10,20,30):
curs.execute("insert into invname values (%s)", (i,))
curs.close()
curs = self.conn.cursor(r'1-2-3 \ "test"')
curs.execute("select data from invname order by data")
self.assertEqual(curs.fetchall(), [(10,), (20,), (30,)])
@skip_before_postgres(8, 2)
def test_iter_named_cursor_efficient(self):
curs = self.conn.cursor('tmp')
# if these records are fetched in the same roundtrip their
@ -143,21 +166,59 @@ class CursorTests(unittest.TestCase):
"named cursor records fetched in 2 roundtrips (delta: %s)"
% (t2 - t1))
def test_iter_named_cursor_default_arraysize(self):
@skip_before_postgres(8, 0)
def test_iter_named_cursor_default_itersize(self):
curs = self.conn.cursor('tmp')
curs.execute('select generate_series(1,50)')
rv = [ (r[0], curs.rownumber) for r in curs ]
# everything swallowed in one gulp
self.assertEqual(rv, [(i,i) for i in range(1,51)])
def test_iter_named_cursor_arraysize(self):
@skip_before_postgres(8, 0)
def test_iter_named_cursor_itersize(self):
curs = self.conn.cursor('tmp')
curs.arraysize = 30
curs.itersize = 30
curs.execute('select generate_series(1,50)')
rv = [ (r[0], curs.rownumber) for r in curs ]
# everything swallowed in two gulps
self.assertEqual(rv, [(i,((i - 1) % 30) + 1) for i in range(1,51)])
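A minimal usage sketch of what the two tests above verify, not part of the diff: iterating a named cursor now fetches 'itersize' records per network round trip instead of one. The dsn is an assumption.
import psycopg2
conn = psycopg2.connect("dbname=psycopg2_test")   # assumed test database
curs = conn.cursor('big_scan')                    # named (server-side) cursor
curs.itersize = 30                                # records fetched per round trip
curs.execute("select generate_series(1, 50)")
for (n,) in curs:
    pass                                          # 50 rows arrive in gulps of 30 and 20
conn.close()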
@skip_if_no_namedtuple
def test_namedtuple_description(self):
curs = self.conn.cursor()
curs.execute("""select
3.14::decimal(10,2) as pi,
'hello'::text as hi,
'2010-02-18'::date as now;
""")
self.assertEqual(len(curs.description), 3)
for c in curs.description:
self.assertEqual(len(c), 7) # DBAPI happy
for a in ('name', 'type_code', 'display_size', 'internal_size',
'precision', 'scale', 'null_ok'):
self.assert_(hasattr(c, a), a)
c = curs.description[0]
self.assertEqual(c.name, 'pi')
self.assert_(c.type_code in psycopg2.extensions.DECIMAL.values)
self.assert_(c.internal_size > 0)
self.assertEqual(c.precision, 10)
self.assertEqual(c.scale, 2)
c = curs.description[1]
self.assertEqual(c.name, 'hi')
self.assert_(c.type_code in psycopg2.STRING.values)
self.assert_(c.internal_size < 0)
self.assertEqual(c.precision, None)
self.assertEqual(c.scale, None)
c = curs.description[2]
self.assertEqual(c.name, 'now')
self.assert_(c.type_code in psycopg2.extensions.DATE.values)
self.assert_(c.internal_size > 0)
self.assertEqual(c.precision, None)
self.assertEqual(c.scale, None)
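A sketch of what the test above relies on, not part of the diff: when 'collections.namedtuple' is available, each entry of 'cursor.description' is a named tuple, so columns can be inspected by attribute as well as by index. The dsn is an assumption.
import psycopg2
conn = psycopg2.connect("dbname=psycopg2_test")   # assumed test database
curs = conn.cursor()
curs.execute("select 3.14::decimal(10,2) as pi")
col = curs.description[0]
print(col[0], col[1])                             # still a 7-item sequence (DBAPI)
print(col.name, col.precision, col.scale)         # 'pi', 10, 2
conn.close()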
def test_suite():
return unittest.TestLoader().loadTestsFromName(__name__)

View File

@ -129,7 +129,7 @@ conn.close()
self.autocommit(self.conn)
self.listen('foo')
self.notify('foo').communicate()
time.sleep(0.1)
time.sleep(0.5)
self.conn.poll()
notify = self.conn.notifies[0]
self.assert_(isinstance(notify, psycopg2.extensions.Notify))
@ -138,7 +138,7 @@ conn.close()
self.autocommit(self.conn)
self.listen('foo')
pid = int(self.notify('foo').communicate()[0])
time.sleep(0.1)
time.sleep(0.5)
self.conn.poll()
self.assertEqual(1, len(self.conn.notifies))
notify = self.conn.notifies[0]
@ -153,7 +153,7 @@ conn.close()
self.autocommit(self.conn)
self.listen('foo')
pid = int(self.notify('foo', payload="Hello, world!").communicate()[0])
time.sleep(0.1)
time.sleep(0.5)
self.conn.poll()
self.assertEqual(1, len(self.conn.notifies))
notify = self.conn.notifies[0]

View File

@ -83,6 +83,10 @@ class QuotingTestCase(unittest.TestCase):
else:
res = curs.fetchone()[0].tobytes()
if res[0] in (b('x'), ord(b('x'))) and self.conn.server_version >= 90000:
return self.skipTest(
"bytea broken with server >= 9.0, libpq < 9")
self.assertEqual(res, data)
self.assert_(not self.conn.notices)

View File

@ -23,7 +23,7 @@
# License for more details.
import threading
from testutils import unittest, skip_if_no_pg_sleep
from testutils import unittest, skip_before_postgres
import psycopg2
from psycopg2.extensions import (
@ -236,7 +236,7 @@ class QueryCancellationTests(unittest.TestCase):
def tearDown(self):
self.conn.close()
@skip_if_no_pg_sleep('conn')
@skip_before_postgres(8, 2)
def test_statement_timeout(self):
curs = self.conn.cursor()
# Set a low statement timeout, then sleep for a longer period.

View File

@ -102,31 +102,6 @@ def skip_if_no_uuid(f):
return skip_if_no_uuid_
def skip_if_no_pg_sleep(name):
"""Decorator to skip a test if pg_sleep is not supported by the server.
Pass it the name of an attribute containing a connection or of a method
returning a connection.
"""
def skip_if_no_pg_sleep_(f):
def skip_if_no_pg_sleep__(self):
cnn = getattr(self, name)
if callable(cnn):
cnn = cnn()
if cnn.server_version < 80200:
return self.skipTest(
"server version %s doesn't support pg_sleep"
% cnn.server_version)
return f(self)
skip_if_no_pg_sleep__.__name__ = f.__name__
return skip_if_no_pg_sleep__
return skip_if_no_pg_sleep_
def skip_if_tpc_disabled(f):
"""Skip a test if the server has tpc support disabled."""
def skip_if_tpc_disabled_(self):
@ -152,6 +127,37 @@ def skip_if_tpc_disabled(f):
return skip_if_tpc_disabled_
def skip_if_no_namedtuple(f):
def skip_if_no_namedtuple_(self):
try:
from collections import namedtuple
except ImportError:
return self.skipTest("collections.namedtuple not available")
else:
return f(self)
skip_if_no_namedtuple_.__name__ = f.__name__
return skip_if_no_namedtuple_
def skip_if_broken_hex_binary(f):
"""Decorator to detect libpq < 9.0 unable to parse bytea in hex format"""
def cope_with_hex_binary_(self):
from psycopg2 import InterfaceError
try:
return f(self)
except InterfaceError, e:
if '9.0' in str(e) and self.conn.server_version >= 90000:
return self.skipTest(
# FIXME: we are only assuming the libpq is older here, as we
# don't have a reliable way to detect the libpq version, at
# least not whether it is pre-9.
"bytea broken with server >= 9.0, libpq < 9")
else:
raise
return cope_with_hex_binary_
def skip_if_no_iobase(f):
"""Skip a test if io.TextIOBase is not available."""
def skip_if_no_iobase_(self):
@ -165,25 +171,60 @@ def skip_if_no_iobase(f):
return skip_if_no_iobase_
def skip_on_python2(f):
"""Skip a test on Python 3 and following."""
def skip_on_python2_(self):
if sys.version_info[0] < 3:
return self.skipTest("skipped because Python 2")
else:
return f(self)
def skip_before_postgres(*ver):
"""Skip a test on PostgreSQL before a certain version."""
ver = ver + (0,) * (3 - len(ver))
def skip_before_postgres_(f):
def skip_before_postgres__(self):
if self.conn.server_version < int("%d%02d%02d" % ver):
return self.skipTest("skipped because PostgreSQL %s"
% self.conn.server_version)
else:
return f(self)
return skip_on_python2_
return skip_before_postgres__
return skip_before_postgres_
def skip_on_python3(f):
"""Skip a test on Python 3 and following."""
def skip_on_python3_(self):
if sys.version_info[0] >= 3:
return self.skipTest("skipped because Python 3")
else:
return f(self)
def skip_after_postgres(*ver):
"""Skip a test on PostgreSQL after (including) a certain version."""
ver = ver + (0,) * (3 - len(ver))
def skip_after_postgres_(f):
def skip_after_postgres__(self):
if self.conn.server_version >= int("%d%02d%02d" % ver):
return self.skipTest("skipped because PostgreSQL %s"
% self.conn.server_version)
else:
return f(self)
return skip_after_postgres__
return skip_after_postgres_
def skip_before_python(*ver):
"""Skip a test on Python before a certain version."""
def skip_before_python_(f):
def skip_before_python__(self):
if sys.version_info[:len(ver)] < ver:
return self.skipTest("skipped because Python %s"
% ".".join(map(str, sys.version_info[:len(ver)])))
else:
return f(self)
return skip_before_python__
return skip_before_python_
def skip_from_python(*ver):
"""Skip a test on Python after (including) a certain version."""
def skip_from_python_(f):
def skip_from_python__(self):
if sys.version_info[:len(ver)] >= ver:
return self.skipTest("skipped because Python %s"
% ".".join(map(str, sys.version_info[:len(ver)])))
else:
return f(self)
return skip_from_python__
return skip_from_python_
return skip_on_python3_
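For reference, a minimal sketch of how the new version-based decorators are meant to be used in a test module, not part of the diff: the class, test names and dsn below are made up, and a 'self.conn' attribute is required because 'skip_before_postgres()' reads 'conn.server_version'.
import psycopg2
from testutils import unittest, skip_before_postgres, skip_before_python

class ExampleTests(unittest.TestCase):
    def setUp(self):
        self.conn = psycopg2.connect("dbname=psycopg2_test")   # assumed dsn

    def tearDown(self):
        self.conn.close()

    @skip_before_postgres(8, 2)       # pg_sleep() appeared in PostgreSQL 8.2
    def test_needs_pg_sleep(self):
        self.conn.cursor().execute("select pg_sleep(0)")

    @skip_before_python(2, 6)         # bytearray appeared in Python 2.6
    def test_needs_bytearray(self):
        self.assertEqual(len(bytearray(3)), 3)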
def script_to_py3(script):
"""Convert a script to Python3 syntax if required."""

View File

@ -28,7 +28,7 @@ except:
pass
import sys
import testutils
from testutils import unittest
from testutils import unittest, skip_if_broken_hex_binary
from testconfig import dsn
import psycopg2
@ -116,6 +116,7 @@ class TypesBasicTests(unittest.TestCase):
s = self.execute("SELECT %s AS foo", (float("-inf"),))
self.failUnless(str(s) == "-inf", "wrong float quoting: " + str(s))
@skip_if_broken_hex_binary
def testBinary(self):
if sys.version_info[0] < 3:
s = ''.join([chr(x) for x in range(256)])
@ -128,6 +129,11 @@ class TypesBasicTests(unittest.TestCase):
buf = self.execute("SELECT %s::bytea AS foo", (b,))
self.assertEqual(s, buf)
def testBinaryNone(self):
b = psycopg2.Binary(None)
buf = self.execute("SELECT %s::bytea AS foo", (b,))
self.assertEqual(buf, None)
def testBinaryEmptyString(self):
# test to make sure an empty Binary is converted to an empty string
if sys.version_info[0] < 3:
@ -137,6 +143,7 @@ class TypesBasicTests(unittest.TestCase):
b = psycopg2.Binary(bytes([]))
self.assertEqual(str(b), "''::bytea")
@skip_if_broken_hex_binary
def testBinaryRoundTrip(self):
# test to make sure buffers returned by psycopg2 are
# understood by execute:
@ -152,14 +159,40 @@ class TypesBasicTests(unittest.TestCase):
self.assertEqual(s, buf2)
def testArray(self):
s = self.execute("SELECT %s AS foo", ([],))
self.failUnlessEqual(s, [])
s = self.execute("SELECT %s AS foo", ([[1,2],[3,4]],))
self.failUnlessEqual(s, [[1,2],[3,4]])
s = self.execute("SELECT %s AS foo", (['one', 'two', 'three'],))
self.failUnlessEqual(s, ['one', 'two', 'three'])
@testutils.skip_on_python3
def testEmptyArrayRegression(self):
# ticket #42
import datetime
curs = self.conn.cursor()
curs.execute("create table array_test (id integer, col timestamp without time zone[])")
curs.execute("insert into array_test values (%s, %s)", (1, [datetime.date(2011,2,14)]))
curs.execute("select col from array_test where id = 1")
self.assertEqual(curs.fetchone()[0], [datetime.datetime(2011, 2, 14, 0, 0)])
curs.execute("insert into array_test values (%s, %s)", (2, []))
curs.execute("select col from array_test where id = 2")
self.assertEqual(curs.fetchone()[0], [])
def testEmptyArray(self):
s = self.execute("SELECT '{}' AS foo")
self.failUnlessEqual(s, [])
s = self.execute("SELECT '{}'::text[] AS foo")
self.failUnlessEqual(s, [])
s = self.execute("SELECT %s AS foo", ([],))
self.failUnlessEqual(s, [])
s = self.execute("SELECT 1 != ALL(%s)", ([],))
self.failUnlessEqual(s, True)
# but don't break the strings :)
s = self.execute("SELECT '{}'::text AS foo")
self.failUnlessEqual(s, "{}")
@skip_if_broken_hex_binary
@testutils.skip_from_python(3)
def testTypeRoundtripBuffer(self):
o1 = buffer("".join(map(chr, range(256))))
o2 = self.execute("select %s;", (o1,))
@ -169,15 +202,19 @@ class TypesBasicTests(unittest.TestCase):
o1 = buffer("")
o2 = self.execute("select %s;", (o1,))
self.assertEqual(type(o1), type(o2))
self.assertEqual(str(o1), str(o2))
@testutils.skip_on_python3
@skip_if_broken_hex_binary
@testutils.skip_from_python(3)
def testTypeRoundtripBufferArray(self):
o1 = buffer("".join(map(chr, range(256))))
o1 = [o1]
o2 = self.execute("select %s;", (o1,))
self.assertEqual(type(o1[0]), type(o2[0]))
self.assertEqual(str(o1[0]), str(o2[0]))
@testutils.skip_on_python2
@skip_if_broken_hex_binary
@testutils.skip_before_python(3)
def testTypeRoundtripBytes(self):
o1 = bytes(range(256))
o2 = self.execute("select %s;", (o1,))
@ -188,34 +225,63 @@ class TypesBasicTests(unittest.TestCase):
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2))
@testutils.skip_on_python2
@skip_if_broken_hex_binary
@testutils.skip_before_python(3)
def testTypeRoundtripBytesArray(self):
o1 = bytes(range(256))
o1 = [o1]
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2[0]))
@testutils.skip_on_python2
@skip_if_broken_hex_binary
@testutils.skip_before_python(2, 6)
def testAdaptBytearray(self):
o1 = bytearray(range(256))
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2))
if sys.version_info[0] < 3:
self.assertEqual(buffer, type(o2))
else:
self.assertEqual(memoryview, type(o2))
self.assertEqual(len(o1), len(o2))
for c1, c2 in zip(o1, o2):
self.assertEqual(c1, ord(c2))
# Test with an empty buffer
o1 = bytearray([])
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2))
@testutils.skip_on_python2
self.assertEqual(len(o2), 0)
if sys.version_info[0] < 3:
self.assertEqual(buffer, type(o2))
else:
self.assertEqual(memoryview, type(o2))
@skip_if_broken_hex_binary
@testutils.skip_before_python(2, 7)
def testAdaptMemoryview(self):
o1 = memoryview(bytes(range(256)))
o1 = memoryview(bytearray(range(256)))
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2))
if sys.version_info[0] < 3:
self.assertEqual(buffer, type(o2))
else:
self.assertEqual(memoryview, type(o2))
# Test with an empty buffer
o1 = memoryview(bytes([]))
o1 = memoryview(bytearray([]))
o2 = self.execute("select %s;", (o1,))
self.assertEqual(memoryview, type(o2))
if sys.version_info[0] < 3:
self.assertEqual(buffer, type(o2))
else:
self.assertEqual(memoryview, type(o2))
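A sketch of the adaptation covered by the tests above, not part of the diff: objects exposing the buffer protocol, such as 'bytearray', are sent as 'bytea' and come back as 'buffer' on Python 2 or 'memoryview' on Python 3. The dsn is an assumption.
import psycopg2
conn = psycopg2.connect("dbname=psycopg2_test")   # assumed test database
curs = conn.cursor()
curs.execute("select %s::bytea", (bytearray(range(256)),))
blob = curs.fetchone()[0]                         # buffer on Python 2, memoryview on Python 3
print(len(blob))                                  # 256
conn.close()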
def testByteaHexCheckFalsePositive(self):
# the check \x -> x to detect bad bytea decode
# may be fooled if the first char is really an 'x'
o1 = psycopg2.Binary(b('x'))
o2 = self.execute("SELECT %s::bytea AS foo", (o1,))
self.assertEqual(b('x'), o2[0])
class AdaptSubclassTest(unittest.TestCase):
@ -241,7 +307,7 @@ class AdaptSubclassTest(unittest.TestCase):
del psycopg2.extensions.adapters[A, psycopg2.extensions.ISQLQuote]
del psycopg2.extensions.adapters[B, psycopg2.extensions.ISQLQuote]
@testutils.skip_on_python3
@testutils.skip_from_python(3)
def test_no_mro_no_joy(self):
from psycopg2.extensions import adapt, register_adapter, AsIs
@ -255,7 +321,7 @@ class AdaptSubclassTest(unittest.TestCase):
del psycopg2.extensions.adapters[A, psycopg2.extensions.ISQLQuote]
@testutils.skip_on_python2
@testutils.skip_before_python(3)
def test_adapt_subtype_3(self):
from psycopg2.extensions import adapt, register_adapter, AsIs

View File

@ -120,7 +120,7 @@ def skip_if_no_hstore(f):
def skip_if_no_hstore_(self):
from psycopg2.extras import HstoreAdapter
oids = HstoreAdapter.get_oids(self.conn)
if oids is None:
if oids is None or not oids[0]:
return self.skipTest("hstore not available in test database")
return f(self)
@ -276,7 +276,7 @@ class HstoreTestCase(unittest.TestCase):
finally:
conn2.close()
finally:
psycopg2.extensions.string_types.pop(oids[0])
psycopg2.extensions.string_types.pop(oids[0][0])
# verify the caster is not around anymore
cur = self.conn.cursor()
@ -337,6 +337,26 @@ class HstoreTestCase(unittest.TestCase):
ok({u''.join(ab): u''.join(ab)})
ok(dict(zip(ab, ab)))
@skip_if_no_hstore
def test_oid(self):
cur = self.conn.cursor()
cur.execute("select 'hstore'::regtype::oid")
oid = cur.fetchone()[0]
# Note: passing None as conn_or_cursor is just for testing: it is not a
# public interface and it may break in the future.
from psycopg2.extras import register_hstore
register_hstore(None, globally=True, oid=oid)
try:
cur.execute("select null::hstore, ''::hstore, 'a => b'::hstore")
t = cur.fetchone()
self.assert_(t[0] is None)
self.assertEqual(t[1], {})
self.assertEqual(t[2], {'a': 'b'})
finally:
psycopg2.extensions.string_types.pop(oid)
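For comparison, a sketch of the public way to enable the adapter, not part of the diff: pass a connection (or cursor) and let 'register_hstore()' look up the type's oid itself. The dsn is an assumption and the hstore type must be installed in the target database.
import psycopg2
from psycopg2.extras import register_hstore
conn = psycopg2.connect("dbname=psycopg2_test")   # assumed test database
register_hstore(conn)                             # finds the hstore oid on this connection
curs = conn.cursor()
curs.execute("select 'a => b'::hstore")
print(curs.fetchone()[0])                         # {'a': 'b'}
curs.execute("select %s", ({'x': '1'},))          # dicts are adapted to hstore too
print(curs.fetchone()[0])                         # {'x': '1'}
conn.close()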
def skip_if_no_composite(f):
def skip_if_no_composite_(self):