Docs cleanup by Josh Kupershmidt

Daniele Varrazzo 2011-11-01 07:09:51 +00:00
parent 5728649944
commit 00b52c78b3
4 changed files with 17 additions and 17 deletions


@@ -18,7 +18,7 @@ The ``connection`` class
Connections are created using the factory function
`~psycopg2.connect()`.
Connections are thread safe and can be shared among many threads. See
:ref:`thread-safety` for details.
.. method:: cursor([name] [, cursor_factory] [, withhold])


@@ -18,7 +18,7 @@ Why does `!psycopg2` leave database sessions "idle in transaction"?
:sql:`SELECT`. The transaction is not closed until an explicit
`~connection.commit()` or `~connection.rollback()`.
If you are writing a long-lived program, you should probably make sure to
call one of the transaction closing methods before leaving the connection
unused for a long time (which may also be a few seconds, depending on the
concurrency level in your database). Alternatively you can use a
@@ -37,7 +37,7 @@ I receive the error *current transaction is aborted, commands ignored until end
Why do I get the error *current transaction is aborted, commands ignored until end of transaction block* when I use `!multiprocessing` (or any other forking system) and not when I use `!threading`?
Psycopg's connections can't be shared across processes (but are thread
safe). If you are forking the Python process make sure to create a new
connection in each forked child. See :ref:`thread-safety` for further
information.


@@ -10,7 +10,7 @@
Creating new PostgreSQL connections can be an expensive operation. This
module offers a few pure Python classes implementing simple connection pooling
directly in the client application.
.. class:: AbstractConnectionPool(minconn, maxconn, \*args, \*\*kwargs)


@@ -39,7 +39,7 @@ basic commands::
>>> conn.close()
The main entry points of Psycopg are:
- The function `~psycopg2.connect()` creates a new database session and
  returns a new `connection` instance.
@@ -90,7 +90,7 @@ is converted into the SQL command::
Named arguments are also supported, using :samp:`%({name})s` placeholders.
Using named arguments the values can be passed to the query in any order and
many placeholders can use the same values::
>>> cur.execute(
... """INSERT INTO some_table (an_int, a_date, another_date, a_string)
@@ -165,9 +165,9 @@ hang it onto your desk.
.. _SQL injection: http://en.wikipedia.org/wiki/SQL_injection
.. __: http://xkcd.com/327/
Psycopg can `automatically convert Python objects to and from SQL
literals`__: using this feature your code will be more robust and
reliable. We must stress this point:
.. __: python-types-adaptation_
@@ -290,8 +290,8 @@ the SQL string that would be sent to the database.
emit :sql:`bytea` fields. Starting from Psycopg 2.4.1 the format is
correctly supported. If you use a previous version you will need some
extra care when receiving bytea from PostgreSQL: you must have at least
libpq 9.0 installed on the client, or alternatively you can set the
`bytea_output`__ configuration parameter to ``escape``, either in the
server configuration file or in the client session (using a query such as
``SET bytea_output TO escape;``) before receiving binary data.
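For the server-wide variant, the workaround is a single line in the server configuration file (a sketch; the location of the file depends on the installation):

```ini
# postgresql.conf: emit bytea in the pre-9.0 "escape" format,
# so client libraries older than libpq 9.0 can still parse it
bytea_output = 'escape'
```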
@@ -444,7 +444,7 @@ the connection or globally: see the function
.. note::
    In Python 2, if you want to uniformly receive all your database input in
    Unicode, you can register the related typecasters globally as soon as
    Psycopg is imported::
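Such a registration is a couple of calls to the `psycopg2.extensions.register_type()` API; the import guard below is only there so the sketch is harmless where `!psycopg2` is not installed:

```python
try:
    import psycopg2.extensions

    # From now on all text coming from the database is decoded to
    # unicode (Python 2) instead of being returned as byte strings.
    psycopg2.extensions.register_type(psycopg2.extensions.UNICODE)
    psycopg2.extensions.register_type(psycopg2.extensions.UNICODEARRAY)
    registered = True
except ImportError:
    registered = False  # psycopg2 not available; nothing to register
```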
@@ -526,7 +526,7 @@ older versions).
long-running programs, if no further action is taken, the session will
remain "idle in transaction", a condition undesirable for several
reasons (locks are held by the session, tables bloat...). For long-lived
scripts, either make sure to terminate a transaction as soon as possible or
use an autocommit connection.
A few other transaction properties can be set session-wide by the
@@ -634,7 +634,7 @@ Thread and process safety
The Psycopg module and the `connection` objects are *thread-safe*: many
threads can access the same database either using separate sessions and
creating a `!connection` per thread or using the same
connection and creating separate `cursor`\ s. In |DBAPI|_ parlance, Psycopg is
*level 2 thread safe*.
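The "one `!connection` per thread" arrangement can be sketched with thread-local storage; `make_conn()` below is a hypothetical stand-in for `psycopg2.connect()` so the example runs without a server:

```python
import threading

_tls = threading.local()

def make_conn():
    # Hypothetical stand-in for psycopg2.connect(dsn).
    return object()

def get_conn():
    # Lazily open one connection per thread and cache it in TLS.
    if not hasattr(_tls, "conn"):
        _tls.conn = make_conn()
    return _tls.conn

conns = []  # keep the objects alive so their ids stay distinct
def worker():
    conns.append(get_conn())

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four threads -> four independent connections.
distinct = len({id(c) for c in conns})
```

Repeated `get_conn()` calls inside the same thread return the cached connection, while each new thread transparently gets its own.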
@@ -648,7 +648,7 @@ the same connection, all the commands will be executed in the same session
The above observations are only valid for regular threads: they apply neither to
forked processes nor to green threads. `libpq` connections `shouldn't be used by a
forked process`__, so when using a module such as `multiprocessing` or a
forking web deploy method such as FastCGI make sure to create the connections
*after* the fork.
.. __: http://www.postgresql.org/docs/9.0/static/libpq-connect.html#LIBPQ-CONNECT
@@ -699,7 +699,7 @@ examples.
Access to PostgreSQL large objects
----------------------------------
PostgreSQL offers support for `large objects`__, which provide stream-style
access to user data that is stored in a special large-object structure. They
are useful with data values too large to be manipulated conveniently as a
whole.
@@ -734,7 +734,7 @@ Two-Phase Commit protocol support
Psycopg exposes the two-phase commit features available since PostgreSQL 8.1,
implementing the *two-phase commit extensions* proposed by the |DBAPI|.
The |DBAPI| model of two-phase commit is inspired by the `XA specification`__,
according to which transaction IDs are formed from three components:
- a format ID (non-negative 32 bit integer)