Frequently Asked Questions
==========================
.. sectionauthor:: Daniele Varrazzo <daniele.varrazzo@gmail.com>
Here are a few gotchas you may encounter using `psycopg2`. Feel free to
suggest new entries!
Problems with transactions handling
-----------------------------------
.. cssclass:: faq
Why does `!psycopg2` leave database sessions "idle in transaction"?
Psycopg normally starts a new transaction the first time a query is
executed, e.g. calling `cursor.execute()`, even if the command is a
:sql:`SELECT`. The transaction is not closed until an explicit
`~connection.commit()` or `~connection.rollback()`.
If you are writing a long-lived program, you should probably make sure to
call one of the transaction closing methods before leaving the connection
unused for a long time (which may be as little as a few seconds, depending
on the concurrency level in your database). Alternatively you can use a
connection in `~connection.autocommit` mode to avoid starting a new
transaction at the first command.
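A minimal sketch of both approaches (the connection parameters are just placeholders)::

    import psycopg2

    conn = psycopg2.connect("dbname=test")

    # Close the transaction as soon as the data is no longer needed...
    cur = conn.cursor()
    cur.execute("SELECT 42")
    print(cur.fetchone())
    conn.commit()   # or conn.rollback(): the session is no longer "idle in transaction"

    # ...or use autocommit mode, so no transaction is implicitly started.
    conn.autocommit = True
    cur.execute("SELECT 42")
    print(cur.fetchone())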
I receive the error *current transaction is aborted, commands ignored until end of transaction block* and can't do anything else!
There was a problem *in the previous* command to the database, which
resulted in an error. The database will not recover automatically from
this condition: you must run a `~connection.rollback()` before sending
new commands to the session (if this seems too harsh, remember that
PostgreSQL supports nested transactions using the |SAVEPOINT|_ command).
.. |SAVEPOINT| replace:: :sql:`SAVEPOINT`
.. _SAVEPOINT: http://www.postgresql.org/docs/current/static/sql-savepoint.html
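As an illustration, a hedged sketch of both recovery strategies, assuming `conn` and `cur` are an open connection and cursor and ``items`` is a made-up table with an integer ``id`` column::

    import psycopg2

    try:
        cur.execute("INSERT INTO items (id) VALUES ('not a number')")  # fails
    except psycopg2.Error:
        conn.rollback()   # the session accepts commands again
    cur.execute("SELECT 1")   # works

    # With a savepoint, only the failed part of the transaction is discarded:
    cur.execute("SAVEPOINT before_insert")
    try:
        cur.execute("INSERT INTO items (id) VALUES ('not a number')")
    except psycopg2.Error:
        cur.execute("ROLLBACK TO SAVEPOINT before_insert")
    conn.commit()   # work done before the savepoint is preserved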
Why do I get the error *current transaction is aborted, commands ignored until end of transaction block* when I use `!multiprocessing` (or any other forking system) and not when I use `!threading`?
Psycopg's connections can't be shared across processes (but are thread
safe). If you are forking the Python process make sure to create a new
connection in each forked child. See :ref:`thread-safety` for further
information.
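A hedged sketch of the pattern (database name and query are placeholders): each forked worker opens its own connection instead of reusing the parent's one. ::

    import multiprocessing
    import psycopg2

    def worker(n):
        # Each child creates its own connection: never reuse the parent's.
        conn = psycopg2.connect("dbname=test")
        cur = conn.cursor()
        cur.execute("SELECT %s + 0", (n,))
        result = cur.fetchone()[0]
        conn.close()
        return result

    if __name__ == '__main__':
        with multiprocessing.Pool(4) as pool:
            print(pool.map(worker, range(10)))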
Problems with type conversions
------------------------------
.. cssclass:: faq
Why does `!cursor.execute()` raise the exception *can't adapt*?
Psycopg converts Python objects into a SQL string representation by looking
at the object class. The exception is raised when you are trying to pass
as query parameter an object for which there is no adapter registered for
its class. See :ref:`adapting-new-types` for more information.
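For example, a hedged sketch of registering an adapter for a made-up `!Point` class (the class and its textual SQL representation are illustrative only)::

    from psycopg2.extensions import adapt, register_adapter, AsIs

    class Point:
        def __init__(self, x, y):
            self.x, self.y = x, y

    def adapt_point(point):
        # Reuse the adapters of the components and build a literal string.
        x = adapt(point.x).getquoted().decode()
        y = adapt(point.y).getquoted().decode()
        return AsIs("'(%s, %s)'" % (x, y))

    register_adapter(Point, adapt_point)
    # Now Point instances can be passed as query parameters:
    # cur.execute("INSERT INTO atable (apoint) VALUES (%s)", (Point(1.23, 4.56),))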
I can't pass an integer or a float parameter to my query: it says *a number is required*, but *it is* a number!
In your query string, you always have to use ``%s`` placeholders,
even when passing a number. All Python objects are converted by Psycopg
into their SQL representation, so they get passed to the query as strings.
See :ref:`query-parameters`. ::

    >>> cur.execute("INSERT INTO numbers VALUES (%d)", (42,)) # WRONG
    >>> cur.execute("INSERT INTO numbers VALUES (%s)", (42,)) # correct
I try to execute a query but it fails with the error *not all arguments converted during string formatting* (or *object does not support indexing*). Why?
Psycopg always requires positional arguments to be passed as a sequence, even
when the query takes a single parameter. And remember that to make a
single item tuple in Python you need a comma! See :ref:`query-parameters`. ::

    >>> cur.execute("INSERT INTO foo VALUES (%s)", "bar") # WRONG
    >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar")) # WRONG
    >>> cur.execute("INSERT INTO foo VALUES (%s)", ("bar",)) # correct
    >>> cur.execute("INSERT INTO foo VALUES (%s)", ["bar"]) # correct
My database is Unicode, but I receive all the strings as UTF-8 `!str`. Can I receive `!unicode` objects instead?
The following magic formula will do the trick::

    psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
    psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
See :ref:`unicode-handling` for the gory details.
Psycopg converts :sql:`decimal`\/\ :sql:`numeric` database types into Python `!Decimal` objects. Can I have `!float` instead?
You can register a customized adapter for PostgreSQL decimal type::

    DEC2FLOAT = psycopg2.extensions.new_type(
        psycopg2.extensions.DECIMAL.values,
        'DEC2FLOAT',
        lambda value, curs: float(value) if value is not None else None)
    psycopg2.extensions.register_type(DEC2FLOAT)

See :ref:`type-casting-from-sql-to-python` to read the relevant
documentation. If you find `!psycopg2.extensions.DECIMAL` not available, use
`!psycopg2._psycopg.DECIMAL` instead.
Transferring binary data from PostgreSQL 9.0 doesn't work.
PostgreSQL 9.0 uses by default `the "hex" format`__ to transfer
:sql:`bytea` data: the format can't be parsed by libpq 8.4 and
earlier. The problem is solved in Psycopg 2.4.1, which uses its own parser
for the :sql:`bytea` format. For previous Psycopg releases, three options
to solve the problem are:

- set the bytea_output__ parameter to ``escape`` in the server;
- execute the database command ``SET bytea_output TO escape;`` in the
  session before reading binary data (see the sketch below);
- upgrade the libpq library on the client to at least 9.0.
.. __: http://www.postgresql.org/docs/current/static/datatype-binary.html
.. __: http://www.postgresql.org/docs/current/static/runtime-config-client.html#GUC-BYTEA-OUTPUT
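A minimal sketch of the second option, assuming `conn` is an open connection and the table is a placeholder::

    cur = conn.cursor()
    # Ask the server to use the old "escape" format for this session only.
    cur.execute("SET bytea_output TO escape;")
    cur.execute("SELECT data FROM blobs WHERE id = %s", (1,))
    binary_data = cur.fetchone()[0]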
Arrays of *TYPE* are not cast to lists.
Arrays are only cast to a list when their oid is known, and an array
typecaster is registered for them. If there is no typecaster, the array is
returned unparsed from PostgreSQL (e.g. ``{a,b,c}``). It is easy to create
a generic array typecaster returning the items as a list of strings: an example is
provided in the `~psycopg2.extensions.new_array_type()` documentation.
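A hedged sketch for a hypothetical ``mytype`` (the type name is a placeholder; `conn` and `cur` are assumed to be an open connection and cursor)::

    import psycopg2
    import psycopg2.extensions

    # Find the oid of the array type in the system catalog.
    cur.execute("SELECT typarray FROM pg_type WHERE typname = %s", ('mytype',))
    array_oid = cur.fetchone()[0]

    # Cast each item with the STRING typecaster: arrays come back as lists of strings.
    MYTYPEARRAY = psycopg2.extensions.new_array_type(
        (array_oid,), 'MYTYPEARRAY', psycopg2.STRING)
    psycopg2.extensions.register_type(MYTYPEARRAY)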
Best practices
--------------
.. cssclass:: faq
When should I save and re-use a cursor as opposed to creating a new one as needed?
Cursors are lightweight objects and creating lots of them should not pose
any kind of problem. But note that cursors used to fetch result sets will
cache the data and use memory in proportion to the result set size. Our
suggestion is to almost always create a new cursor and dispose of old ones as
soon as the data is not required anymore (call `~cursor.close()` on
them). The only exception is tight loops, where one usually uses the same
cursor for a whole bunch of :sql:`INSERT`\s or :sql:`UPDATE`\s.
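A brief sketch of both patterns, assuming `conn` is an open connection and the tables are placeholders::

    # One throwaway cursor per logical operation:
    cur = conn.cursor()
    cur.execute("SELECT id, name FROM small_table")
    rows = cur.fetchall()
    cur.close()   # releases the cached result set

    # ...but a single cursor is fine for a tight loop of INSERTs:
    cur = conn.cursor()
    for row in rows:
        cur.execute("INSERT INTO other_table (id, name) VALUES (%s, %s)", row)
    conn.commit()
    cur.close()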
When should I save and re-use a connection as opposed to creating a new one as needed?
Creating a connection can be slow (think of SSL over TCP) so the best
practice is to create a single connection and keep it open as long as
required. It is also good practice to rollback or commit frequently (even
after a single :sql:`SELECT` statement) to make sure the backend is never
left "idle in transaction". See also `psycopg2.pool` for lightweight
connection pooling.
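A hedged sketch using the built-in pool (connection string and pool size are placeholders)::

    from psycopg2 import pool

    connpool = pool.ThreadedConnectionPool(1, 5, "dbname=test")

    conn = connpool.getconn()
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        cur.fetchone()
        conn.commit()   # don't leave the session "idle in transaction"
    finally:
        connpool.putconn(conn)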
What are the advantages or disadvantages of using named cursors?
The only disadvantages are that they use up resources on the server and
that there is a little overhead because at least two queries (one to
create the cursor and one to fetch the initial result set) are issued to
the backend. The advantage is that data is fetched one chunk at a time:
using small `~cursor.fetchmany()` values it is possible to use very
little memory on the client and to skip or discard parts of the result set.
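A small sketch, assuming `conn` is an open connection and ``big_table`` and `!process()` are placeholders::

    cur = conn.cursor(name='my_named_cursor')   # server-side (named) cursor
    cur.execute("SELECT * FROM big_table")
    while True:
        rows = cur.fetchmany(size=1000)   # only one chunk is kept in client memory
        if not rows:
            break
        for row in rows:
            process(row)
    cur.close()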
Problems compiling and deploying psycopg2
-----------------------------------------
.. cssclass:: faq
I can't compile `!psycopg2`: the compiler says *error: Python.h: No such file or directory*. What am I missing?
You need to install a Python development package: it is usually called
``python-dev``.
I can't compile `!psycopg2`: the compiler says *error: libpq-fe.h: No such file or directory*. What am I missing?
You need to install the development version of the libpq: the package is
usually called ``libpq-dev``.
Psycopg raises *ImportError: cannot import name tz* on import in mod_wsgi / ASP, but it works fine otherwise.
If `!psycopg2` is installed in an egg_ (e.g. because it was installed by
:program:`easy_install`), the user running the program may be unable to
write in the `eggs cache`__. Set the environment variable
:envvar:`PYTHON_EGG_CACHE` to a writable directory. With mod_wsgi you can
use the WSGIPythonEggs__ directive.
.. _egg: http://peak.telecommunity.com/DevCenter/PythonEggs
.. __: http://stackoverflow.com/questions/2192323/what-is-the-python-egg-cache-python-egg-cache
.. __: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs