diff --git a/doc/src/faq.rst b/doc/src/faq.rst
index 00501d7f..9041eadc 100644
--- a/doc/src/faq.rst
+++ b/doc/src/faq.rst
@@ -85,14 +85,15 @@ When should I save and re-use a cursor as opposed to creating a new one as neede
     suggestion is to almost always create a new cursor and dispose old ones as
     soon as the data is not required anymore (call :meth:`~cursor.close` on
     them.) The only exception are tight loops where one usually use the same
-    cursor for a whole bunch of INSERTs or UPDATEs.
+    cursor for a whole bunch of :sql:`INSERT`\s or :sql:`UPDATE`\s.
 
 When should I save and re-use a connection as opposed to creating a new one as needed?
     Creating a connection can be slow (think of SSL over TCP) so the best
     practice is to create a single connection and keep it open as long as
     required. It is also good practice to rollback or commit frequently (even
-    after a single SELECT statement) to make sure the backend is never left
-    "idle in transaction".
+    after a single :sql:`SELECT` statement) to make sure the backend is never
+    left "idle in transaction". See also :mod:`psycopg2.pool` for lightweight
+    connection pooling.
 
 What are the advantages or disadvantages of using named cursors?
     The only disadvantages is that they use up resources on the server and
diff --git a/doc/src/index.rst b/doc/src/index.rst
index 247b5013..064d13b8 100644
--- a/doc/src/index.rst
+++ b/doc/src/index.rst
@@ -38,6 +38,7 @@ PostgreSQL arrays.
    advanced
    extensions
    tz
+   pool
    extras
    errorcodes
    faq
diff --git a/doc/src/pool.rst b/doc/src/pool.rst
new file mode 100644
index 00000000..1be2c6a3
--- /dev/null
+++ b/doc/src/pool.rst
@@ -0,0 +1,62 @@
+`psycopg2.pool` -- Connections pooling
+======================================
+
+.. sectionauthor:: Daniele Varrazzo
+
+.. index::
+    pair: Connection; Pooling
+
+.. module:: psycopg2.pool
+
+Creating new PostgreSQL connections can be an expensive operation. This
+module offers a few pure Python classes implementing simple connection pooling
+directly into the client application.
+
+.. class:: AbstractConnectionPool(minconn, maxconn, \*args, \*\*kwargs)
+
+    Base class implementing generic key-based pooling code.
+
+    New *minconn* connections are created automatically. The pool will support
+    a maximum of about *maxconn* connections. *\*args* and *\*\*kwargs* are
+    passed to the :func:`~psycopg2.connect` function.
+
+    The following methods are expected to be implemented by subclasses:
+
+    .. method:: getconn(key=None)
+
+        Get a free connection and assign it to *key* if not ``None``.
+
+    .. method:: putconn(conn, key=None)
+
+        Put away a connection.
+
+    .. method:: closeall
+
+        Close all the connections handled by the pool.
+
+        Notice that all the connections are closed, including ones
+        eventually in use by the application.
+
+
+The following classes are :class:`AbstractConnectionPool` subclasses ready to
+be used.
+
+.. autoclass:: SimpleConnectionPool
+
+    .. note:: This pool class is useful only for single-threaded applications.
+
+
+.. index:: Multithread; Connection pooling
+
+.. autoclass:: ThreadedConnectionPool
+
+    .. note:: This pool class can be safely used in multi-threaded applications.
+
+
+.. autoclass:: PersistentConnectionPool
+
+    .. note::
+
+        This pool class is mostly designed to interact with Zope and probably
+        not useful in generic applications.
+
diff --git a/lib/pool.py b/lib/pool.py
index 7ebeff3b..030cb686 100644
--- a/lib/pool.py
+++ b/lib/pool.py
@@ -197,9 +197,9 @@ class PersistentConnectionPool(AbstractConnectionPool):
     """A pool that assigns persistent connections to different threads.
 
     Note that this connection pool generates by itself the required keys
-    using the current thread id. This means that untill a thread put away
+    using the current thread id. This means that until a thread puts away
     a connection it will always get the same connection object by successive
-    .getconn() calls. This also means that a thread can't use more than one
+    :meth:`!getconn` calls. This also means that a thread can't use more than one
     single connection from the pool.
     """
 
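For quick reference, a minimal usage sketch of the pool API documented in the new page; the DSN string and the pool sizes below are placeholders, not values taken from the patch::

    # Sketch: borrow a connection from a ThreadedConnectionPool, use it,
    # and put it back.  The DSN "dbname=test user=postgres" is a placeholder.
    import psycopg2.pool

    connpool = psycopg2.pool.ThreadedConnectionPool(
        1, 5, "dbname=test user=postgres")   # minconn=1, maxconn=5

    conn = connpool.getconn()                # get a free connection
    try:
        cur = conn.cursor()
        cur.execute("SELECT 1")
        print(cur.fetchone())
        cur.close()
        conn.commit()                        # avoid "idle in transaction"
    finally:
        connpool.putconn(conn)               # put away the connection for reuse

    connpool.closeall()                      # at shutdown, close all pooled connections

The same calls work with :class:`SimpleConnectionPool`, but, as noted in the docs above, only from a single thread.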