Trim trailing whitespace from all files throughout project

Many editors automatically trim trailing whitespace on save. Trimming all
files in one go keeps future diffs free of extraneous whitespace-only
changes.
Jon Dufresne 2017-12-01 21:37:49 -08:00
parent a51160317c
commit e335d6d223
52 changed files with 286 additions and 324 deletions
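For reference, a bulk trim like this is easy to script. The commit does not record which tool was used, so the following Python sketch is only an assumed equivalent of the cleanup, not the actual command that was run:

    #!/usr/bin/env python3
    """Strip trailing whitespace from every tracked text file (illustrative sketch)."""
    import re
    import subprocess

    # Ask git for the tracked files so ignored and generated files are left alone.
    files = subprocess.check_output(
        ["git", "ls-files"], universal_newlines=True
    ).splitlines()

    for name in files:
        try:
            with open(name, encoding="utf-8") as f:
                text = f.read()
        except (UnicodeDecodeError, IsADirectoryError):
            continue  # skip binary files and submodule entries

        # Remove spaces and tabs at the end of each line.
        cleaned = re.sub(r"[ \t]+$", "", text, flags=re.MULTILINE)
        if cleaned != text:
            with open(name, "w", encoding="utf-8") as f:
                f.write(cleaned)

Limiting the walk to `git ls-files` keeps the pass away from untracked and build-generated files.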

View File

@@ -6,7 +6,7 @@ For the win32 port:
Jason Erickson <jerickso@indian.com>
Additional Help:
Peter Fein contributed a logging connection/cursor class that even if it
was not used directly heavily influenced the implementation currently in
psycopg2.extras.

View File

@@ -47,8 +47,8 @@ psycopg/microprotocol*.{h,c}:
claim that you wrote the original software. If you use this
software in a product, an acknowledgment in the product documentation
would be appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not
be misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.

View File

@@ -10,7 +10,7 @@
the terms and conditions of version 3 of the GNU General Public
License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser
General Public License, and the "GNU GPL" refers to version 3 of the GNU
@@ -111,7 +111,7 @@ the following:
a copy of the Library already present on the user's computer
system, and (b) will operate properly with a modified version
of the Library that is interface-compatible with the Linked
Version.
e) Provide Installation Information, but only if you would otherwise
be required to provide such information under section 6 of the

View File

@@ -1,10 +1,10 @@
From: Jack Moffitt <jack@xiph.org>
To: Psycopg Mailing List <psycopg@lists.initd.org>
Subject: Re: [Psycopg] preparing for 1.0
Date: 22 Oct 2001 11:16:21 -0600
www.vorbis.com is serving from 5-10k pages per day with psycopg serving
data for most of that.
I plan to use it for several of our other sites, so that number will
increase.
@@ -19,7 +19,7 @@ jack.
From: Yury Don <gercon@vpcit.ru>
To: Psycopg Mailing List <psycopg@lists.initd.org>
Subject: Re: [Psycopg] preparing for 1.0
Date: 23 Oct 2001 09:53:11 +0600
We use psycopg and psycopg zope adapter since fisrt public
release (it seems version 0.4). Now it works on 3 our sites and in intranet
@@ -32,7 +32,7 @@ to solve the problem, even thouth my knowledge of c were poor.
BTW, segfault with dictfetchall on particular data set (see [Psycopg]
dictfetchXXX() problems) disappeared in 0.99.8pre2.
--
Best regards,
Yury Don
@@ -42,7 +42,7 @@ To: Federico Di Gregorio <fog@debian.org>
Cc: Psycopg Mailing List <psycopg@lists.initd.org>
Subject: Re: [Psycopg] preparing for 1.0
Date: 23 Oct 2001 08:25:52 -0400
The US Govt Department of Labor's Office of Disability Employment
Policy's DisabilityDirect website is run on zope and zpsycopg.
@@ -50,7 +50,7 @@ Policy's DisabilityDirect website is run on zope and zpsycopg.
From: Scott Leerssen <sleerssen@racemi.com>
To: Federico Di Gregorio <fog@debian.org>
Subject: Re: [Psycopg] preparing for 1.0
Date: 23 Oct 2001 09:56:10 -0400
Racemi's load management software infrastructure uses psycopg to handle
complex server allocation decisions, plus storage and access of
@@ -66,10 +66,10 @@ From: Andre Schubert <andre.schubert@geyer.kabeljournal.de>
To: Federico Di Gregorio <fog@debian.org>
Cc: Psycopg Mailing List <psycopg@lists.initd.org>
Subject: Re: [Psycopg] preparing for 1.0
Date: 23 Oct 2001 11:46:07 +0200
i have changed the psycopg version to 0.99.8pre2 on all devel-machines
and all segfaults are gone. after my holiday i wil change to 0.99.8pre2
or 1.0 on our production-server.
this server contains several web-sites which are all connected to
postgres over ZPsycopgDA.
@@ -81,7 +81,7 @@ From: Fred Wilson Horch <fhorch@ecoaccess.org>
To: <psycopg@lists.initd.org>
Subject: [Psycopg] Success story for psycopg
Date: 23 Oct 2001 10:59:17 -0400
Due to various quirks of PyGreSQL and PoPy, EcoAccess has been looking for
a reliable, fast and relatively bug-free Python-PostgreSQL interface for
our project.
@@ -98,7 +98,7 @@ reports and feature requests, and we're looking forward to using psycopg
as the Python interface for additional database-backed web applications.
Keep up the good work!
--
Fred Wilson Horch mailto:fhorch@ecoaccess.org
Executive Director, EcoAccess http://ecoaccess.org/

View File

@@ -9,15 +9,15 @@ Replaces: 248
Release-Date: 07 Apr 1999
Introduction
This API has been defined to encourage similarity between the
Python modules that are used to access databases. By doing this,
we hope to achieve a consistency leading to more easily understood
modules, code that is generally more portable across databases,
and a broader reach of database connectivity from Python.
The interface specification consists of several sections:
* Module Interface
* Connection Objects
* Cursor Objects
@@ -25,7 +25,7 @@ Introduction
* Type Objects and Constructors
* Implementation Hints
* Major Changes from 1.0 to 2.0
Comments and questions about this specification may be directed
to the SIG for Database Interfacing with Python
(db-sig@python.org).
@@ -41,7 +41,7 @@ Introduction
basis for new interfaces.
Module Interface
Access to the database is made available through connection
objects. The module must provide the following constructor for
these:
@@ -51,17 +51,17 @@ Module Interface
Constructor for creating a connection to the database.
Returns a Connection Object. It takes a number of
parameters which are database dependent. [1]
These module globals must be defined:
apilevel
String constant stating the supported DB API level.
Currently only the strings '1.0' and '2.0' are allowed.
If not given, a DB-API 1.0 level interface should be
assumed.
threadsafety
Integer constant stating the level of thread safety the
@@ -81,33 +81,33 @@ Module Interface
or other external sources that are beyond your control.
paramstyle
String constant stating the type of parameter marker
formatting expected by the interface. Possible values are
[2]:
'qmark' Question mark style,
e.g. '...WHERE name=?'
'numeric' Numeric, positional style,
e.g. '...WHERE name=:1'
'named' Named style,
e.g. '...WHERE name=:name'
'format' ANSI C printf format codes,
e.g. '...WHERE name=%s'
'pyformat' Python extended format codes,
e.g. '...WHERE name=%(name)s'
The module should make all error information available through
these exceptions or subclasses thereof:
Warning
Exception raised for important warnings like data
truncations while inserting, etc. It must be a subclass of
the Python StandardError (defined in the module
exceptions).
Error
Exception that is the base class of all other error
exceptions. You can use this to catch all errors with one
@@ -115,7 +115,7 @@ Module Interface
errors and thus should not use this class as base. It must
be a subclass of the Python StandardError (defined in the
module exceptions).
InterfaceError
Exception raised for errors that are related to the
@@ -126,50 +126,50 @@ Module Interface
Exception raised for errors that are related to the
database. It must be a subclass of Error.
DataError
Exception raised for errors that are due to problems with
the processed data like division by zero, numeric value
out of range, etc. It must be a subclass of DatabaseError.
OperationalError
Exception raised for errors that are related to the
database's operation and not necessarily under the control
of the programmer, e.g. an unexpected disconnect occurs,
the data source name is not found, a transaction could not
be processed, a memory allocation error occurred during
processing, etc. It must be a subclass of DatabaseError.
IntegrityError
Exception raised when the relational integrity of the
database is affected, e.g. a foreign key check fails. It
must be a subclass of DatabaseError.
InternalError
Exception raised when the database encounters an internal
error, e.g. the cursor is not valid anymore, the
transaction is out of sync, etc. It must be a subclass of
DatabaseError.
ProgrammingError
Exception raised for programming errors, e.g. table not
found or already exists, syntax error in the SQL
statement, wrong number of parameters specified, etc. It
must be a subclass of DatabaseError.
NotSupportedError
Exception raised in case a method or database API was used
which is not supported by the database, e.g. requesting a
.rollback() on a connection that does not support
transaction or has transactions turned off. It must be a
subclass of DatabaseError.
This is the exception inheritance layout:
StandardError
@@ -183,17 +183,17 @@ Module Interface
|__InternalError
|__ProgrammingError
|__NotSupportedError
Note: The values of these exceptions are not defined. They should
give the user a fairly good idea of what went wrong, though.
Connection Objects
Connection Objects should respond to the following methods:
.close()
Close the connection now (rather than whenever __del__ is
called). The connection will be unusable from this point
forward; an Error (or subclass) exception will be raised
@@ -203,52 +203,52 @@ Connection Objects
committing the changes first will cause an implicit
rollback to be performed.
.commit()
Commit any pending transaction to the database. Note that
if the database supports an auto-commit feature, this must
be initially off. An interface method may be provided to
turn it back on.
Database modules that do not support transactions should
implement this method with void functionality.
.rollback()
This method is optional since not all databases provide
transaction support. [3]
In case a database does provide transactions this method
causes the the database to roll back to the start of any
pending transaction. Closing a connection without
committing the changes first will cause an implicit
rollback to be performed.
.cursor()
Return a new Cursor Object using the connection. If the
database does not provide a direct cursor concept, the
module will have to emulate cursors using other means to
the extent needed by this specification. [4]
Cursor Objects
These objects represent a database cursor, which is used to
manage the context of a fetch operation. Cursors created from
the same connection are not isolated, i.e., any changes
done to the database by a cursor are immediately visible by the
other cursors. Cursors created from different connections can
or can not be isolated, depending on how the transaction support
is implemented (see also the connection's rollback() and commit()
methods.)
Cursor Objects should respond to the following methods and
attributes:
.description
This read-only attribute is a sequence of 7-item
sequences. Each of these sequences contains information
describing one result column: (name, type_code,
@@ -260,17 +260,17 @@ Cursor Objects
This attribute will be None for operations that
do not return rows or if the cursor has not had an
operation invoked via the executeXXX() method yet.
The type_code can be interpreted by comparing it to the
Type Objects specified in the section below.
.rowcount
This read-only attribute specifies the number of rows that
the last executeXXX() produced (for DQL statements like
'select') or affected (for DML statements like 'update' or
'insert').
The attribute is -1 in case no executeXXX() has been
performed on the cursor or the rowcount of the last
operation is not determinable by the interface. [7]
@@ -278,96 +278,96 @@ Cursor Objects
Note: Future versions of the DB API specification could
redefine the latter case to have the object return None
instead of -1.
.callproc(procname[,parameters])
(This method is optional since not all databases provide
stored procedures. [3])
Call a stored database procedure with the given name. The
sequence of parameters must contain one entry for each
argument that the procedure expects. The result of the
call is returned as modified copy of the input
sequence. Input parameters are left untouched, output and
input/output parameters replaced with possibly new values.
The procedure may also provide a result set as
output. This must then be made available through the
standard fetchXXX() methods.
.close()
Close the cursor now (rather than whenever __del__ is
called). The cursor will be unusable from this point
forward; an Error (or subclass) exception will be raised
if any operation is attempted with the cursor.
.execute(operation[,parameters])
Prepare and execute a database operation (query or
command). Parameters may be provided as sequence or
mapping and will be bound to variables in the operation.
Variables are specified in a database-specific notation
(see the module's paramstyle attribute for details). [5]
A reference to the operation will be retained by the
cursor. If the same operation object is passed in again,
then the cursor can optimize its behavior. This is most
effective for algorithms where the same operation is used,
but different parameters are bound to it (many times).
For maximum efficiency when reusing an operation, it is
best to use the setinputsizes() method to specify the
parameter types and sizes ahead of time. It is legal for
a parameter to not match the predefined information; the
implementation should compensate, possibly with a loss of
efficiency.
The parameters may also be specified as list of tuples to
e.g. insert multiple rows in a single operation, but this
kind of usage is depreciated: executemany() should be used
instead.
Return values are not defined.
.executemany(operation,seq_of_parameters)
Prepare a database operation (query or command) and then
execute it against all parameter sequences or mappings
found in the sequence seq_of_parameters.
Modules are free to implement this method using multiple
calls to the execute() method or by using array operations
to have the database process the sequence as a whole in
one call.
Use of this method for an operation which produces one or
more result sets constitutes undefined behavior, and the
implementation is permitted (but not required) to raise
an exception when it detects that a result set has been
created by an invocation of the operation.
The same comments as for execute() also apply accordingly
to this method.
Return values are not defined.
.fetchone()
Fetch the next row of a query result set, returning a
single sequence, or None when no more data is
available. [6]
An Error (or subclass) exception is raised if the previous
call to executeXXX() did not produce any result set or no
call was issued yet.
fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning a
sequence of sequences (e.g. a list of tuples). An empty
sequence is returned when no more rows are available.
The number of rows to fetch per call is specified by the
parameter. If it is not given, the cursor's arraysize
determines the number of rows to be fetched. The method
@@ -375,62 +375,62 @@ Cursor Objects
parameter. If this is not possible due to the specified
number of rows not being available, fewer rows may be
returned.
An Error (or subclass) exception is raised if the previous
call to executeXXX() did not produce any result set or no
call was issued yet.
Note there are performance considerations involved with
the size parameter. For optimal performance, it is
usually best to use the arraysize attribute. If the size
parameter is used, then it is best for it to retain the
same value from one fetchmany() call to the next.
.fetchall()
Fetch all (remaining) rows of a query result, returning
them as a sequence of sequences (e.g. a list of tuples).
Note that the cursor's arraysize attribute can affect the
performance of this operation.
An Error (or subclass) exception is raised if the previous
call to executeXXX() did not produce any result set or no
call was issued yet.
.nextset()
(This method is optional since not all databases support
multiple result sets. [3])
This method will make the cursor skip to the next
available set, discarding any remaining rows from the
current set.
If there are no more sets, the method returns
None. Otherwise, it returns a true value and subsequent
calls to the fetch methods will return rows from the next
result set.
An Error (or subclass) exception is raised if the previous
call to executeXXX() did not produce any result set or no
call was issued yet.
.arraysize
This read/write attribute specifies the number of rows to
fetch at a time with fetchmany(). It defaults to 1 meaning
to fetch a single row at a time.
Implementations must observe this value with respect to
the fetchmany() method, but are free to interact with the
database a single row at a time. It may also be used in
the implementation of executemany().
.setinputsizes(sizes)
This can be used before a call to executeXXX() to
predefine memory areas for the operation's parameters.
sizes is specified as a sequence -- one item for each
input parameter. The item should be a Type Object that
corresponds to the input that will be used, or it should
@@ -438,27 +438,27 @@ Cursor Objects
parameter. If the item is None, then no predefined memory
area will be reserved for that column (this is useful to
avoid predefined areas for large inputs).
This method would be used before the executeXXX() method
is invoked.
Implementations are free to have this method do nothing
and users are free to not use it.
.setoutputsize(size[,column])
Set a column buffer size for fetches of large columns
(e.g. LONGs, BLOBs, etc.). The column is specified as an
index into the result sequence. Not specifying the column
will set the default size for all large columns in the
cursor.
This method would be used before the executeXXX() method
is invoked.
Implementations are free to have this method do nothing
and users are free to not use it.
Type Objects and Constructors
@@ -485,15 +485,15 @@ Type Objects and Constructors
Implementation Hints below for details).
The module exports the following constructors and singletons:
Date(year,month,day)
This function constructs an object holding a date value.
Time(hour,minute,second)
This function constructs an object holding a time value.
Timestamp(year,month,day,hour,minute,second)
This function constructs an object holding a time stamp
@@ -507,12 +507,12 @@ Type Objects and Constructors
module for details).
TimeFromTicks(ticks)
This function constructs an object holding a time value
from the given ticks value (number of seconds since the
epoch; see the documentation of the standard Python time
module for details).
TimestampFromTicks(ticks)
This function constructs an object holding a time stamp
@@ -521,10 +521,10 @@ Type Objects and Constructors
time module for details).
Binary(string)
This function constructs an object capable of holding a
binary (long) string value.
STRING
@@ -535,22 +535,22 @@ Type Objects and Constructors
This type object is used to describe (long) binary columns
in a database (e.g. LONG, RAW, BLOBs).
NUMBER
This type object is used to describe numeric columns in a
database.
DATETIME
This type object is used to describe date/time columns in
a database.
ROWID
This type object is used to describe the "Row ID" column
in a database.
SQL NULL values are represented by the Python None singleton on
input and output.
@@ -563,7 +563,7 @@ Implementation Hints for Module Authors
* The preferred object types for the date/time objects are those
defined in the mxDateTime package. It provides all necessary
constructors and methods both at Python and C level.
* The preferred object type for Binary objects are the
buffer types available in standard Python starting with
version 1.5.2. Please see the Python documentation for
@@ -577,7 +577,7 @@ Implementation Hints for Module Authors
processing. However, it should be noted that this does not
expose a C API like mxDateTime does which means that integration
with C based database modules is more difficult.
* Here is a sample implementation of the Unix ticks based
constructors for date/time delegating work to the generic
constructors:
@@ -645,7 +645,7 @@ Implementation Hints for Module Authors
class NotSupportedError(DatabaseError):
pass
In C you can use the PyErr_NewException(fullname,
base, NULL) API to create the exception objects.
@@ -760,7 +760,7 @@ Optional DB API Extensions
Warning Message: "DB-API extension connection.messages used"
Cursor Method .next()
Return the next row from the currently executing SQL statement
using the same semantics as .fetchone(). A StopIteration
exception is raised when the result set is exhausted for Python
@@ -790,13 +790,13 @@ Optional DB API Extensions
Warning Message: "DB-API extension cursor.lastrowid used"
Optional Error Handling Extension
The core DB API specification only introduces a set of exceptions
which can be raised to report errors to the user. In some cases,
exceptions may be too disruptive for the flow of a program or even
render execution impossible.
For these cases and in order to simplify error handling when
dealing with databases, database module authors may choose to
@@ -806,7 +806,7 @@ Optional Error Handling Extension
Cursor/Connection Attribute .errorhandler
Read/write attribute which references an error handler to call
in case an error condition is met.
The handler must be a Python callable taking the following
arguments: errorhandler(connection, cursor, errorclass,
@@ -836,7 +836,7 @@ Frequently Asked Questions
specification. This section covers some of the issues people
sometimes have with the specification.
Question:
How can I construct a dictionary out of the tuples returned by
.fetchxxx():
@@ -855,7 +855,7 @@ Frequently Asked Questions
* Some databases don't support case-sensitive column names or
auto-convert them to all lowercase or all uppercase
characters.
* Columns in the result set which are generated by the query
(e.g. using SQL functions) don't map to table column names
and databases usually generate names for these columns in a
@@ -872,9 +872,9 @@ Major Changes from Version 1.0 to Version 2.0
compared to the 1.0 version. Because some of these changes will
cause existing DB API 1.0 based scripts to break, the major
version number was adjusted to reflect this change.
These are the most important changes from 1.0 to 2.0:
* The need for a separate dbi module was dropped and the
functionality merged into the module interface itself.
@@ -886,10 +886,10 @@ Major Changes from Version 1.0 to Version 2.0
* New constants (apilevel, threadlevel, paramstyle) and
methods (executemany, nextset) were added to provide better
database bindings.
* The semantics of .callproc() needed to call stored
procedures are now clearly defined.
* The definition of the .execute() return value changed.
Previously, the return value was based on the SQL statement
type (which was hard to implement right) -- it is undefined
@@ -898,7 +898,7 @@ Major Changes from Version 1.0 to Version 2.0
values, but these are no longer mandated by the
specification and should be considered database interface
dependent.
* Class based exceptions were incorporated into the
specification. Module implementors are free to extend the
exception layout defined in this specification by
@@ -916,10 +916,10 @@ Open Issues
questions that were left open in the 1.0 version, there are still
some remaining issues which should be addressed in future
versions:
* Define a useful return value for .nextset() for the case where
a new result set is available.
* Create a fixed point numeric type for use as loss-less
monetary and decimal interchange format.
@@ -929,17 +929,17 @@ Footnotes
[1] As a guideline the connection constructor parameters should be
implemented as keyword parameters for more intuitive use and
follow this order of parameters:
dsn       Data source name as string
user      User name as string (optional)
password  Password as string (optional)
host      Hostname (optional)
database  Database name (optional)
E.g. a connect could look like this:
connect(dsn='myhost:MYDB',user='guido',password='234$')
[2] Module implementors should prefer 'numeric', 'named' or
'pyformat' over the other formats because these offer more
clarity and flexibility.
@@ -947,41 +947,41 @@ Footnotes
[3] If the database does not support the functionality required
by the method, the interface should throw an exception in
case the method is used.
The preferred approach is to not implement the method and
thus have Python generate an AttributeError in
case the method is requested. This allows the programmer to
check for database capabilities using the standard
hasattr() function.
For some dynamically configured interfaces it may not be
appropriate to require dynamically making the method
available. These interfaces should then raise a
NotSupportedError to indicate the non-ability
to perform the roll back when the method is invoked.
[4] a database interface may choose to support named cursors by
allowing a string argument to the method. This feature is
not part of the specification, since it complicates
semantics of the .fetchXXX() methods.
[5] The module will use the __getitem__ method of the parameters
object to map either positions (integers) or names (strings)
to parameter values. This allows for both sequences and
mappings to be used as input.
The term "bound" refers to the process of binding an input
value to a database execution buffer. In practical terms,
this means that the input value is directly used as a value
in the operation. The client should not be required to
"escape" the value so that it can be used -- the value
should be equal to the actual database value.
[6] Note that the interface may implement row fetching using
arrays and other optimizations. It is not
guaranteed that a call to this method will only move the
associated cursor forward by one row.
[7] The rowcount attribute may be coded in a way that updates
its value dynamically. This can be useful for databases that
return usable rowcount values only after the first call to

View File

@@ -36,7 +36,7 @@ How to make a psycopg2 release
- Create a signed tag with the content of the relevant NEWS bit and push it.
E.g.::
$ git tag -a -s 2_7
Psycopg 2.7 released

View File

@@ -188,7 +188,7 @@ representation into the previously defined `!Point` class:
... return Point(float(m.group(1)), float(m.group(2)))
... else:
... raise InterfaceError("bad point representation: %r" % value)
In order to create a mapping from a PostgreSQL type (either standard or
user-defined), its OID must be known. It can be retrieved either by the second

View File

@@ -22,7 +22,7 @@ The ``connection`` class
:ref:`thread-safety` for details.
.. method:: cursor(name=None, cursor_factory=None, scrollable=None, withhold=False)
Return a new `cursor` object using the connection.
If *name* is specified, the returned cursor will be a :ref:`server
@@ -274,8 +274,8 @@ The ``connection`` class
.. __: http://jdbc.postgresql.org/
Xids returned by `!tpc_recover()` also have extra attributes
`~psycopg2.extensions.Xid.prepared`, `~psycopg2.extensions.Xid.owner`,
`~psycopg2.extensions.Xid.database` populated with the values read
from the server.
@@ -626,7 +626,7 @@ The ``connection`` class
pair: Server; Parameters
.. method:: get_parameter_status(parameter)
Look up a current parameter setting of the server.
Potential values for ``parameter`` are: ``server_version``,
@@ -708,7 +708,7 @@ The ``connection`` class
The number is formed by converting the major, minor, and revision
numbers into two-decimal-digit numbers and appending them together.
For example, version 8.1.5 will be returned as ``80105``.
.. seealso:: libpq docs for `PQserverVersion()`__ for details.
.. __: http://www.postgresql.org/docs/current/static/libpq-status.html#LIBPQ-PQSERVERVERSION
@@ -722,7 +722,7 @@ The ``connection`` class
.. attribute:: status
A read-only integer representing the status of the connection.
Symbolic constants for the values are defined in the module
`psycopg2.extensions`: see :ref:`connection-status-constants`
for the available values.

View File

@@ -34,10 +34,10 @@ The ``cursor`` class
many cursors from the same connection and should use each cursor from
a single thread. See :ref:`thread-safety` for details.
.. attribute:: description
This read-only attribute is a sequence of 7-item sequences.
Each of these sequences is a named tuple (a regular tuple if
:func:`collections.namedtuple` is not available) containing information
@@ -65,7 +65,7 @@ The ``cursor`` class
This attribute will be `!None` for operations that do not return rows
or if the cursor has not had an operation invoked via the
|execute*|_ methods yet.
.. |pg_type| replace:: :sql:`pg_type`
.. _pg_type: http://www.postgresql.org/docs/current/static/catalog-pg-type.html
.. _PQgetlength: http://www.postgresql.org/docs/current/static/libpq-exec.html#LIBPQ-PQGETLENGTH
@@ -78,7 +78,7 @@ The ``cursor`` class
regular tuples.
.. method:: close()
Close the cursor now (rather than whenever `del` is executed).
The cursor will be unusable from this point forward; an
`~psycopg2.InterfaceError` will be raised if any operation is
@@ -88,7 +88,7 @@ The ``cursor`` class
the method is automatically called at the end of the ``with``
block.
.. attribute:: closed
Read-only boolean attribute: specifies if the cursor is closed
@ -235,7 +235,7 @@ The ``cursor`` class
The `mogrify()` method is a Psycopg extension to the |DBAPI|. The `mogrify()` method is a Psycopg extension to the |DBAPI|.
.. method:: setinputsizes(sizes) .. method:: setinputsizes(sizes)
This method is exposed in compliance with the |DBAPI|. It currently This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it. does nothing but it is safe to call it.
@ -281,17 +281,17 @@ The ``cursor`` class
>>> cur.execute("SELECT * FROM test WHERE id = %s", (3,)) >>> cur.execute("SELECT * FROM test WHERE id = %s", (3,))
>>> cur.fetchone() >>> cur.fetchone()
(3, 42, 'bar') (3, 42, 'bar')
A `~psycopg2.ProgrammingError` is raised if the previous call A `~psycopg2.ProgrammingError` is raised if the previous call
to |execute*|_ did not produce any result set or no call was issued to |execute*|_ did not produce any result set or no call was issued
yet. yet.
.. method:: fetchmany([size=cursor.arraysize]) .. method:: fetchmany([size=cursor.arraysize])
Fetch the next set of rows of a query result, returning a list of Fetch the next set of rows of a query result, returning a list of
tuples. An empty list is returned when no more rows are available. tuples. An empty list is returned when no more rows are available.
The number of rows to fetch per call is specified by the parameter. The number of rows to fetch per call is specified by the parameter.
If it is not given, the cursor's `~cursor.arraysize` determines If it is not given, the cursor's `~cursor.arraysize` determines
the number of rows to be fetched. The method should try to fetch as the number of rows to be fetched. The method should try to fetch as
@ -309,7 +309,7 @@ The ``cursor`` class
A `~psycopg2.ProgrammingError` is raised if the previous call to A `~psycopg2.ProgrammingError` is raised if the previous call to
|execute*|_ did not produce any result set or no call was issued yet. |execute*|_ did not produce any result set or no call was issued yet.
Note there are performance considerations involved with the size Note there are performance considerations involved with the size
parameter. For optimal performance, it is usually best to use the parameter. For optimal performance, it is usually best to use the
`~cursor.arraysize` attribute. If the size parameter is used, `~cursor.arraysize` attribute. If the size parameter is used,
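A minimal sketch of consuming a result set in batches with `fetchmany()`, assuming the ``test`` table used in the examples above:

    >>> cur.execute("SELECT * FROM test")
    >>> while True:
    ...     rows = cur.fetchmany(100)    # explicit size instead of arraysize
    ...     if not rows:
    ...         break
    ...     for row in rows:
    ...         pass                     # process each row here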
@ -344,7 +344,7 @@ The ``cursor`` class
`~psycopg2.ProgrammingError` is raised and the cursor position is `~psycopg2.ProgrammingError` is raised and the cursor position is
not changed. not changed.
.. note:: .. note::
According to the |DBAPI|_, the exception raised for a cursor out According to the |DBAPI|_, the exception raised for a cursor out
of bounds should have been `!IndexError`. The best option is of bounds should have been `!IndexError`. The best option is
@ -364,7 +364,7 @@ The ``cursor`` class
.. attribute:: arraysize .. attribute:: arraysize
This read/write attribute specifies the number of rows to fetch at a This read/write attribute specifies the number of rows to fetch at a
time with `~cursor.fetchmany()`. It defaults to 1 meaning to fetch time with `~cursor.fetchmany()`. It defaults to 1 meaning to fetch
a single row at a time. a single row at a time.
@ -378,20 +378,20 @@ The ``cursor`` class
default is 2000. default is 2000.
.. versionadded:: 2.4 .. versionadded:: 2.4
.. extension:: .. extension::
The `itersize` attribute is a Psycopg extension to the |DBAPI|. The `itersize` attribute is a Psycopg extension to the |DBAPI|.
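A minimal sketch of how `itersize` is typically used, assuming a named (server-side) cursor, which is where the attribute matters:

    >>> named = conn.cursor('my_named_cursor')   # server-side cursor
    >>> named.itersize = 5000                    # rows fetched per network round trip
    >>> for row in named:
    ...     pass                                 # iterate without loading the whole result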
.. attribute:: rowcount .. attribute:: rowcount
This read-only attribute specifies the number of rows that the last This read-only attribute specifies the number of rows that the last
|execute*|_ produced (for :abbr:`DQL (Data Query Language)` statements |execute*|_ produced (for :abbr:`DQL (Data Query Language)` statements
like :sql:`SELECT`) or affected (for like :sql:`SELECT`) or affected (for
:abbr:`DML (Data Manipulation Language)` statements like :sql:`UPDATE` :abbr:`DML (Data Manipulation Language)` statements like :sql:`UPDATE`
or :sql:`INSERT`). or :sql:`INSERT`).
The attribute is -1 if no |execute*| has been performed on The attribute is -1 if no |execute*| has been performed on
the cursor, or if the row count of the last operation cannot be the cursor, or if the row count of the last operation cannot be
determined by the interface. determined by the interface.
@ -400,7 +400,7 @@ The ``cursor`` class
The |DBAPI|_ interface reserves the right to redefine the latter case to The |DBAPI|_ interface reserves the right to redefine the latter case to
have the object return `!None` instead of -1 in future versions have the object return `!None` instead of -1 in future versions
of the specification. of the specification.
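A minimal sketch, assuming the ``test`` table from the earlier examples:

    >>> cur.execute("UPDATE test SET num = num + 1 WHERE num < %s", (10,))
    >>> cur.rowcount       # number of rows the UPDATE affected (-1 if unknown)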
.. attribute:: rownumber .. attribute:: rownumber
@ -457,7 +457,7 @@ The ``cursor`` class
command: command:
>>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar')) >>> cur.execute("INSERT INTO test (num, data) VALUES (%s, %s)", (42, 'bar'))
>>> cur.statusmessage >>> cur.statusmessage
'INSERT 0 1' 'INSERT 0 1'
.. extension:: .. extension::
@ -490,13 +490,13 @@ The ``cursor`` class
.. method:: nextset() .. method:: nextset()
This method is not supported (PostgreSQL does not have multiple data This method is not supported (PostgreSQL does not have multiple data
sets) and will raise a `~psycopg2.NotSupportedError` exception. sets) and will raise a `~psycopg2.NotSupportedError` exception.
.. method:: setoutputsize(size [, column]) .. method:: setoutputsize(size [, column])
This method is exposed in compliance with the |DBAPI|. It currently This method is exposed in compliance with the |DBAPI|. It currently
does nothing but it is safe to call it. does nothing but it is safe to call it.

View File

@ -334,4 +334,3 @@ Psycopg raises *ImportError: cannot import name tz* on import in mod_wsgi / ASP,
.. _egg: http://peak.telecommunity.com/DevCenter/PythonEggs .. _egg: http://peak.telecommunity.com/DevCenter/PythonEggs
.. __: http://stackoverflow.com/questions/2192323/what-is-the-python-egg-cache-python-egg-cache .. __: http://stackoverflow.com/questions/2192323/what-is-the-python-egg-cache-python-egg-cache
.. __: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs .. __: http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIPythonEggs

View File

@ -65,4 +65,3 @@ Psycopg 2 is both Unicode and Python 3 friendly.
**To Do items in the documentation** **To Do items in the documentation**
.. todolist:: .. todolist::

View File

@ -57,8 +57,7 @@ be used.
.. autoclass:: PersistentConnectionPool .. autoclass:: PersistentConnectionPool
.. note:: .. note::
This pool class is mostly designed to interact with Zope and probably This pool class is mostly designed to interact with Zope and probably
not useful in generic applications. not useful in generic applications.

View File

@ -49,4 +49,3 @@ def setup(app):
text=(visit_extension_node, depart_extension_node)) text=(visit_extension_node, depart_extension_node))
app.add_directive('extension', Extension) app.add_directive('extension', Extension)

View File

@ -12,10 +12,9 @@ from docutils import nodes, utils
from docutils.parsers.rst import roles from docutils.parsers.rst import roles
def sql_role(name, rawtext, text, lineno, inliner, options={}, content=[]): def sql_role(name, rawtext, text, lineno, inliner, options={}, content=[]):
text = utils.unescape(text) text = utils.unescape(text)
options['classes'] = ['sql'] options['classes'] = ['sql']
return [nodes.literal(rawtext, text, **options)], [] return [nodes.literal(rawtext, text, **options)], []
def setup(app): def setup(app):
roles.register_local_role('sql', sql_role) roles.register_local_role('sql', sql_role)

View File

@ -56,4 +56,3 @@ def setup(app):
app.add_config_value('ticket_remap_offset', None, 'env') app.add_config_value('ticket_remap_offset', None, 'env')
app.add_role('ticket', ticket_role) app.add_role('ticket', ticket_role)
app.add_role('tickets', ticket_role) app.add_role('tickets', ticket_role)

View File

@ -57,7 +57,6 @@ def emit(basename, txt_dir):
# some space between sections # some space between sections
sys.stdout.write("\n\n") sys.stdout.write("\n\n")
if __name__ == '__main__': if __name__ == '__main__':
sys.exit(main()) sys.exit(main())

View File

@ -8,9 +8,8 @@
This module holds two different tzinfo implementations that can be used as the This module holds two different tzinfo implementations that can be used as the
`tzinfo` argument to `~datetime.datetime` constructors, directly passed to `tzinfo` argument to `~datetime.datetime` constructors, directly passed to
Psycopg functions or used to set the `cursor.tzinfo_factory` attribute in Psycopg functions or used to set the `cursor.tzinfo_factory` attribute in
cursors. cursors.
.. autoclass:: psycopg2.tz.FixedOffsetTimezone .. autoclass:: psycopg2.tz.FixedOffsetTimezone
.. autoclass:: psycopg2.tz.LocalTimezone .. autoclass:: psycopg2.tz.LocalTimezone
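A minimal sketch of using `FixedOffsetTimezone` directly, assuming only the standard library and psycopg2 (the offset is expressed in minutes east of UTC):

    from datetime import datetime
    from psycopg2.tz import FixedOffsetTimezone

    tz = FixedOffsetTimezone(offset=-480, name="PST")   # UTC-8, in minutes
    print datetime(2017, 12, 1, 21, 37, tzinfo=tz).isoformat()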

View File

@ -1017,4 +1017,3 @@ For further details see the documentation for the above methods.
.. __: http://www.opengroup.org/bookstore/catalog/c193.htm .. __: http://www.opengroup.org/bookstore/catalog/c193.htm
.. __: http://jdbc.postgresql.org/ .. __: http://jdbc.postgresql.org/

View File

@ -62,7 +62,7 @@ for row in curs.fetchall():
open(new_name, 'wb').write(row[2]) open(new_name, 'wb').write(row[2])
print "done" print "done"
print " python type of image data is", type(row[2]) print " python type of image data is", type(row[2])
# extract exactly the same data but using a binary cursor # extract exactly the same data but using a binary cursor
print "Extracting the images using a binary cursor:" print "Extracting the images using a binary cursor:"
@ -78,7 +78,7 @@ for row in curs.fetchall():
open(new_name, 'wb').write(row[0]) open(new_name, 'wb').write(row[0])
print "done" print "done"
print " python type of image data is", type(row[0]) print " python type of image data is", type(row[0])
# this rollback is required because we can't drop a table with a binary cursor # this rollback is required because we can't drop a table with a binary cursor
# declared and still open # declared and still open
conn.rollback() conn.rollback()

View File

@ -1,4 +1,4 @@
# copy_from.py -- example about copy_from # copy_from.py -- example about copy_from
# #
# Copyright (C) 2002 Tom Jenkins <tjenkins@devis.com> # Copyright (C) 2002 Tom Jenkins <tjenkins@devis.com>
# Copyright (C) 2005 Federico Di Gregorio <fog@initd.org> # Copyright (C) 2005 Federico Di Gregorio <fog@initd.org>
@ -172,6 +172,3 @@ conn.rollback()
curs.execute("DROP TABLE test_copy") curs.execute("DROP TABLE test_copy")
os.unlink('copy_from.txt') os.unlink('copy_from.txt')
conn.commit() conn.commit()

View File

@ -1,4 +1,4 @@
# copy_to.py -- example about copy_to # copy_to.py -- example about copy_to
# #
# Copyright (C) 2002 Tom Jenkins <tjenkins@devis.com> # Copyright (C) 2002 Tom Jenkins <tjenkins@devis.com>
# Copyright (C) 2005 Federico Di Gregorio <fog@initd.org> # Copyright (C) 2005 Federico Di Gregorio <fog@initd.org>

View File

@ -49,7 +49,7 @@ class Cursor(psycopg2.extensions.cursor):
if d is None: if d is None:
raise NoDataError("no more data") raise NoDataError("no more data")
return d return d
curs = conn.cursor(cursor_factory=Cursor) curs = conn.cursor(cursor_factory=Cursor)
curs.execute("SELECT 1 AS foo") curs.execute("SELECT 1 AS foo")
print("Result of fetchone():", curs.fetchone()) print("Result of fetchone():", curs.fetchone())

View File

@ -6,14 +6,14 @@ Mapping arbitrary objects to a PostgreSQL database with psycopg2
- Problem - Problem
You need to store arbitrary objects in a PostgreSQL database without being You need to store arbitrary objects in a PostgreSQL database without being
intrusive to your classes (you don't want to inherit from an 'Item' or intrusive to your classes (you don't want to inherit from an 'Item' or
'Persistent' object). 'Persistent' object).
- Solution - Solution
""" """
from datetime import datetime from datetime import datetime
import psycopg2 import psycopg2
from psycopg2.extensions import adapt, register_adapter from psycopg2.extensions import adapt, register_adapter
@ -24,7 +24,7 @@ except:
seq.sort() seq.sort()
return seq return seq
# Here is the adapter for every object that we may ever need to # Here is the adapter for every object that we may ever need to
# insert in the database. It receives the original object and does # insert in the database. It receives the original object and does
# its job on that instance # its job on that instance
@ -33,7 +33,7 @@ class ObjectMapper(object):
self.orig = orig self.orig = orig
self.tmp = {} self.tmp = {}
self.items, self.fields = self._gatherState() self.items, self.fields = self._gatherState()
def _gatherState(self): def _gatherState(self):
adaptee_name = self.orig.__class__.__name__ adaptee_name = self.orig.__class__.__name__
fields = sorted([(field, getattr(self.orig, field)) fields = sorted([(field, getattr(self.orig, field))
@ -42,19 +42,19 @@ class ObjectMapper(object):
for item, value in fields: for item, value in fields:
items.append(item) items.append(item)
return items, fields return items, fields
def getTableName(self): def getTableName(self):
return self.orig.__class__.__name__ return self.orig.__class__.__name__
def getMappedValues(self): def getMappedValues(self):
tmp = [] tmp = []
for i in self.items: for i in self.items:
tmp.append("%%(%s)s"%i) tmp.append("%%(%s)s"%i)
return ", ".join(tmp) return ", ".join(tmp)
def getValuesDict(self): def getValuesDict(self):
return dict(self.fields) return dict(self.fields)
def getFields(self): def getFields(self):
return self.items return self.items
@ -66,14 +66,14 @@ class ObjectMapper(object):
return qry, self.getValuesDict() return qry, self.getValuesDict()
# Here are the objects # Here are the objects
class Album(object): class Album(object):
id = 0 id = 0
def __init__(self): def __init__(self):
self.creation_time = datetime.now() self.creation_time = datetime.now()
self.album_id = self.id self.album_id = self.id
Album.id = Album.id + 1 Album.id = Album.id + 1
self.binary_data = buffer('12312312312121') self.binary_data = buffer('12312312312121')
class Order(object): class Order(object):
id = 0 id = 0
def __init__(self): def __init__(self):
@ -84,7 +84,7 @@ class Order(object):
register_adapter(Album, ObjectMapper) register_adapter(Album, ObjectMapper)
register_adapter(Order, ObjectMapper) register_adapter(Order, ObjectMapper)
# Describe what is needed to save on each object # Describe what is needed to save on each object
# This is actually just configuration, you can use xml with a parser if you # This is actually just configuration, you can use xml with a parser if you
# like to have plenty of wasted CPU cycles ;P. # like to have plenty of wasted CPU cycles ;P.
@ -92,7 +92,7 @@ register_adapter(Order, ObjectMapper)
persistent_fields = {'Album': ['album_id', 'creation_time', 'binary_data'], persistent_fields = {'Album': ['album_id', 'creation_time', 'binary_data'],
'Order': ['order_id', 'items', 'price'] 'Order': ['order_id', 'items', 'price']
} }
print adapt(Album()).generateInsert() print adapt(Album()).generateInsert()
print adapt(Album()).generateInsert() print adapt(Album()).generateInsert()
print adapt(Album()).generateInsert() print adapt(Album()).generateInsert()
@ -103,42 +103,42 @@ print adapt(Order()).generateInsert()
""" """
- Discussion - Discussion
Psycopg 2 has a great new feature: adaptation. The big thing about Psycopg 2 has a great new feature: adaptation. The big thing about
adaptation is that it enables the programmer to glue together most of the adaptation is that it enables the programmer to glue together most of the
existing code out there without much difficulty. existing code out there without much difficulty.
This recipe tries to focus attention on a way to generate SQL queries to This recipe tries to focus attention on a way to generate SQL queries to
insert completely new objects into a database. As you can see, the objects do insert completely new objects into a database. As you can see, the objects do
not know anything about the code that is handling them. We specify all the not know anything about the code that is handling them. We specify all the
fields that we need for each object through the persistent_fields dict. fields that we need for each object through the persistent_fields dict.
The most important lines of this recipe are: The most important lines of this recipe are:
register_adapter(Album, ObjectMapper) register_adapter(Album, ObjectMapper)
register_adapter(Order, ObjectMapper) register_adapter(Order, ObjectMapper)
In these lines we notify the system that when we call adapt with an Album instance In these lines we notify the system that when we call adapt with an Album instance
as an argument we want it to instantiate ObjectMapper, passing the Album instance as an argument we want it to instantiate ObjectMapper, passing the Album instance
as the argument (self.orig in the ObjectMapper class). as the argument (self.orig in the ObjectMapper class).
The output is something like this (for each call to generateInsert): The output is something like this (for each call to generateInsert):
('INSERT INTO Album (album_id, binary_data, creation_time) VALUES ('INSERT INTO Album (album_id, binary_data, creation_time) VALUES
(%(album_id)s, %(binary_data)s, %(creation_time)s)', (%(album_id)s, %(binary_data)s, %(creation_time)s)',
{'binary_data': <read-only buffer for 0x402de070, ...>, {'binary_data': <read-only buffer for 0x402de070, ...>,
'creation_time': datetime.datetime(2004, 9, 10, 20, 48, 29, 633728), 'creation_time': datetime.datetime(2004, 9, 10, 20, 48, 29, 633728),
'album_id': 1} 'album_id': 1}
) )
This is a tuple of {SQL_QUERY, FILLING_DICT}, and all the quoting/converting This is a tuple of {SQL_QUERY, FILLING_DICT}, and all the quoting/converting
stuff (from python's datetime to postgres' timestamp and from python's buffer to stuff (from python's datetime to postgres' timestamp and from python's buffer to
postgres' blob) is handled with the same adaptation process under the hood postgres' blob) is handled with the same adaptation process under the hood
by psycopg2. by psycopg2.
Finally, notice that ObjectMapper works for both Album and Order Finally, notice that ObjectMapper works for both Album and Order
instances without any glitches at all, and both classes could have easily been instances without any glitches at all, and both classes could have easily been
coming from closed-source libraries or C-coded ones (which are not easily coming from closed-source libraries or C-coded ones (which are not easily
modified), whereas a common pattern in today's ORMs or OODBs is to provide modified), whereas a common pattern in today's ORMs or OODBs is to provide
a basic 'Persistent' object that already knows how to store itself in the a basic 'Persistent' object that already knows how to store itself in the
database. database.
""" """

View File

@ -29,7 +29,7 @@ print "Opening connection using dsn:", DSN
conn = psycopg2.connect(DSN) conn = psycopg2.connect(DSN)
print "Encoding for this connection is", conn.encoding print "Encoding for this connection is", conn.encoding
curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor) curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
curs.execute("SELECT 1 AS foo, 'cip' AS bar, date(now()) as zot") curs.execute("SELECT 1 AS foo, 'cip' AS bar, date(now()) as zot")
print "Cursor's row factory is", curs.row_factory print "Cursor's row factory is", curs.row_factory

View File

@ -38,7 +38,7 @@ encs.sort()
for a, b in encs: for a, b in encs:
print " ", a, "<->", b print " ", a, "<->", b
print "Using STRING typecaster" print "Using STRING typecaster"
print "Setting backend encoding to LATIN1 and executing queries:" print "Setting backend encoding to LATIN1 and executing queries:"
conn.set_client_encoding('LATIN1') conn.set_client_encoding('LATIN1')
curs = conn.cursor() curs = conn.cursor()

View File

@ -34,7 +34,7 @@ a not-yet well defined protocol that we can call ISQLQuote:
def getbinary(self): def getbinary(self):
"Returns a binary quoted string representing the bound variable." "Returns a binary quoted string representing the bound variable."
def getbuffer(self): def getbuffer(self):
"Returns the wrapped object itself." "Returns the wrapped object itself."
@ -86,10 +86,10 @@ class AsIs(object):
self.__obj = obj self.__obj = obj
def getquoted(self): def getquoted(self):
return self.__obj return self.__obj
class SQL_IN(object): class SQL_IN(object):
"""Adapt a tuple to an SQL quotable object.""" """Adapt a tuple to an SQL quotable object."""
def __init__(self, seq): def __init__(self, seq):
self._seq = seq self._seq = seq
@ -103,10 +103,10 @@ class SQL_IN(object):
qobjs = [str(psycoadapt(o).getquoted()) for o in self._seq] qobjs = [str(psycoadapt(o).getquoted()) for o in self._seq]
return '(' + ', '.join(qobjs) + ')' return '(' + ', '.join(qobjs) + ')'
__str__ = getquoted __str__ = getquoted
# add our new adapter class to psycopg list of adapters # add our new adapter class to psycopg list of adapters
register_adapter(tuple, SQL_IN) register_adapter(tuple, SQL_IN)
register_adapter(float, AsIs) register_adapter(float, AsIs)
@ -117,7 +117,7 @@ register_adapter(int, AsIs)
# conn = psycopg.connect("...") # conn = psycopg.connect("...")
# curs = conn.cursor() # curs = conn.cursor()
# curs.execute("SELECT ...", (("this", "is", "the", "tuple"),)) # curs.execute("SELECT ...", (("this", "is", "the", "tuple"),))
# #
# but we have no connection to a database right now, so we just check # but we have no connection to a database right now, so we just check
# the SQL_IN class by calling psycopg's adapt() directly: # the SQL_IN class by calling psycopg's adapt() directly:

View File

@ -44,7 +44,7 @@ if len(sys.argv) > 1:
DSN = sys.argv[1] DSN = sys.argv[1]
if len(sys.argv) > 2: if len(sys.argv) > 2:
MODE = int(sys.argv[2]) MODE = int(sys.argv[2])
print "Opening connection using dsn:", DSN print "Opening connection using dsn:", DSN
conn = psycopg2.connect(DSN) conn = psycopg2.connect(DSN)
curs = conn.cursor() curs = conn.cursor()
@ -70,7 +70,7 @@ def insert_func(conn_or_pool, rows):
conn = conn_or_pool conn = conn_or_pool
else: else:
conn = conn_or_pool.getconn() conn = conn_or_pool.getconn()
for i in range(rows): for i in range(rows):
if divmod(i, COMMIT_STEP)[1] == 0: if divmod(i, COMMIT_STEP)[1] == 0:
conn.commit() conn.commit()
@ -91,14 +91,14 @@ def insert_func(conn_or_pool, rows):
## a nice select function that prints the current number of rows in the ## a nice select function that prints the current number of rows in the
## database (and transfer them, putting some pressure on the network) ## database (and transfer them, putting some pressure on the network)
def select_func(conn_or_pool, z): def select_func(conn_or_pool, z):
name = threading.currentThread().getName() name = threading.currentThread().getName()
if MODE == 0: if MODE == 0:
conn = conn_or_pool conn = conn_or_pool
conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT) conn.set_isolation_level(ISOLATION_LEVEL_AUTOCOMMIT)
for i in range(SELECT_SIZE): for i in range(SELECT_SIZE):
if divmod(i, SELECT_STEP)[1] == 0: if divmod(i, SELECT_STEP)[1] == 0:
try: try:
@ -125,7 +125,7 @@ else:
m = len(INSERT_THREADS) + len(SELECT_THREADS) m = len(INSERT_THREADS) + len(SELECT_THREADS)
n = m/2 n = m/2
conn_insert = conn_select = ThreadedConnectionPool(n, m, DSN) conn_insert = conn_select = ThreadedConnectionPool(n, m, DSN)
## create the threads ## create the threads
threads = [] threads = []

View File

@ -63,5 +63,3 @@ print "Some text from cursor with typecaster:", curs.fetchone()[0]
curs = conn.cursor() curs = conn.cursor()
curs.execute("SELECT 'some text.'::text AS foo") curs.execute("SELECT 'some text.'::text AS foo")
print "Some text from connection with typecaster again:", curs.fetchone()[0] print "Some text from connection with typecaster again:", curs.fetchone()[0]

View File

@ -60,10 +60,10 @@ print "Time zone:", d.tzinfo.tzname(d), "offset:", d.tzinfo.utcoffset(d)
curs.execute("SELECT * FROM test_tz") curs.execute("SELECT * FROM test_tz")
for d in curs: for d in curs:
u = d[0].utcoffset() or ZERO u = d[0].utcoffset() or ZERO
print "UTC time: ", d[0] - u print "UTC time: ", d[0] - u
print "Local time:", d[0] print "Local time:", d[0]
print "Time zone:", d[0].tzinfo.tzname(d[0]), d[0].tzinfo.utcoffset(d[0]) print "Time zone:", d[0].tzinfo.tzname(d[0]), d[0].tzinfo.utcoffset(d[0])
curs.execute("DROP TABLE test_tz") curs.execute("DROP TABLE test_tz")
conn.commit() conn.commit()

View File

@ -58,7 +58,7 @@ class Rect(object):
and, when needed, as a type-caster for the data extracted from the database and, when needed, as a type-caster for the data extracted from the database
(that's why __init__ takes the curs argument). (that's why __init__ takes the curs argument).
""" """
def __init__(self, s=None, curs=None): def __init__(self, s=None, curs=None):
"""Init the rectangle from the optional string s.""" """Init the rectangle from the optional string s."""
self.x = self.y = self.width = self.height = 0.0 self.x = self.y = self.width = self.height = 0.0
@ -68,7 +68,7 @@ class Rect(object):
"""This is a terrible hack, just ignore proto and return self.""" """This is a terrible hack, just ignore proto and return self."""
if proto == psycopg2.extensions.ISQLQuote: if proto == psycopg2.extensions.ISQLQuote:
return self return self
def from_points(self, x0, y0, x1, y1): def from_points(self, x0, y0, x1, y1):
"""Init the rectangle from points.""" """Init the rectangle from points."""
if x0 > x1: (x0, x1) = (x1, x0) if x0 > x1: (x0, x1) = (x1, x0)
@ -94,7 +94,7 @@ class Rect(object):
s = "X: %d\tY: %d\tWidth: %d\tHeight: %d" % ( s = "X: %d\tY: %d\tWidth: %d\tHeight: %d" % (
self.x, self.y, self.width, self.height) self.x, self.y, self.width, self.height)
return s return s
# here we select from the empty table, just to grab the description # here we select from the empty table, just to grab the description
curs.execute("SELECT b FROM test_cast WHERE 0=1") curs.execute("SELECT b FROM test_cast WHERE 0=1")
boxoid = curs.description[0][1] boxoid = curs.description[0][1]

View File

@ -295,4 +295,3 @@ Bytes_Format(PyObject *format, PyObject *args)
} }
return NULL; return NULL;
} }

View File

@ -295,5 +295,3 @@ PyTypeObject notifyType = {
0, /*tp_alloc*/ 0, /*tp_alloc*/
notify_new, /*tp_new*/ notify_new, /*tp_new*/
}; };

View File

@ -312,4 +312,3 @@ psycopg_parse_escape(const char *bufin, Py_ssize_t sizein, Py_ssize_t *sizeout)
exit: exit:
return ret; return ret;
} }

View File

@ -69,4 +69,3 @@ static typecastObject_initlist typecast_builtins[] = {
{"MACADDRARRAY", typecast_MACADDRARRAY_types, typecast_STRINGARRAY_cast, "STRING"}, {"MACADDRARRAY", typecast_MACADDRARRAY_types, typecast_STRINGARRAY_cast, "STRING"},
{NULL, NULL, NULL, NULL} {NULL, NULL, NULL, NULL}
}; };

View File

@ -250,4 +250,3 @@ typecast_MXINTERVAL_cast(const char *str, Py_ssize_t len, PyObject *curs)
#define typecast_DATETIME_cast typecast_MXDATE_cast #define typecast_DATETIME_cast typecast_MXDATE_cast
#define typecast_DATETIMETZ_cast typecast_MXDATE_cast #define typecast_DATETIMETZ_cast typecast_MXDATE_cast
#endif #endif

View File

@ -1,5 +1,4 @@
 Microsoft Visual Studio Solution File, Format Version 10.00
Microsoft Visual Studio Solution File, Format Version 10.00
# Visual Studio 2008 # Visual Studio 2008
Project("{2857B73E-F847-4B02-9238-064979017E93}") = "psycopg2", "psycopg2.cproj", "{CFD80D18-3EE5-49ED-992A-E6D433BC7641}" Project("{2857B73E-F847-4B02-9238-064979017E93}") = "psycopg2", "psycopg2.cproj", "{CFD80D18-3EE5-49ED-992A-E6D433BC7641}"
EndProject EndProject
@ -26,7 +25,7 @@ Global
$2.DirectoryNamespaceAssociation = None $2.DirectoryNamespaceAssociation = None
$2.ResourceNamePolicy = FileName $2.ResourceNamePolicy = FileName
$0.StandardHeader = $3 $0.StandardHeader = $3
$3.Text = $3.Text =
$3.IncludeInNewFiles = False $3.IncludeInNewFiles = False
$0.TextStylePolicy = $4 $0.TextStylePolicy = $4
$4.FileWidth = 72 $4.FileWidth = 72

View File

@ -28,4 +28,3 @@ curs = conn.cursor()
curs.execute("SELECT %s", ([1,2,None],)) curs.execute("SELECT %s", ([1,2,None],))
print curs.fetchone() print curs.fetchone()

View File

@ -17,7 +17,7 @@ def sleep(curs):
while not curs.isready(): while not curs.isready():
print "." print "."
time.sleep(.1) time.sleep(.1)
#curs.execute(""" #curs.execute("""
# DECLARE zz INSENSITIVE SCROLL CURSOR WITH HOLD FOR # DECLARE zz INSENSITIVE SCROLL CURSOR WITH HOLD FOR
# SELECT now(); # SELECT now();
@ -33,4 +33,3 @@ print curs.fetchall()
curs.execute("SELECT now() AS bar") curs.execute("SELECT now() AS bar")
sleep(curs) sleep(curs)

View File

@ -16,4 +16,3 @@ sql()
import gtk import gtk
print "AFTER" print "AFTER"
sql() sql()

View File

@ -7,8 +7,7 @@ curs = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)
curs.execute("SELECT '2005-2-12'::date AS foo, 'boo!' as bar") curs.execute("SELECT '2005-2-12'::date AS foo, 'boo!' as bar")
for x in curs.fetchall(): for x in curs.fetchall():
print type(x), x[0], x[1], x['foo'], x['bar'] print type(x), x[0], x[1], x['foo'], x['bar']
curs.execute("SELECT '2005-2-12'::date AS foo, 'boo!' as bar") curs.execute("SELECT '2005-2-12'::date AS foo, 'boo!' as bar")
for x in curs: for x in curs:
print type(x), x[0], x[1], x['foo'], x['bar'] print type(x), x[0], x[1], x['foo'], x['bar']

View File

@ -14,7 +14,7 @@ two functions:
# leak() will cause increasingly more RAM to be used by the script. # leak() will cause increasingly more RAM to be used by the script.
$ python <script_nam> leak $ python <script_nam> leak
# noleak() does not have the RAM usage problem. The only difference # noleak() does not have the RAM usage problem. The only difference
# between it and leak() is that 'rows' is created once, before the loop. # between it and leak() is that 'rows' is created once, before the loop.
$ python <script_name> noleak $ python <script_name> noleak
@ -80,4 +80,3 @@ except IndexError:
# Run leak() or noleak(), whichever was indicated on the command line # Run leak() or noleak(), whichever was indicated on the command line
run_function() run_function()

View File

@ -28,8 +28,8 @@ import psycopg2 as dbapi
conn = dbapi.connect(database='test') conn = dbapi.connect(database='test')
cursor = conn.cursor() cursor = conn.cursor()
cursor.execute(""" cursor.execute("""
@ -41,5 +41,3 @@ cursor.execute("""
for row in cursor: for row in cursor:
print row print row

View File

@ -5,7 +5,7 @@ class Portal(psycopg2.extensions.cursor):
def __init__(self, name, curs): def __init__(self, name, curs):
psycopg2.extensions.cursor.__init__( psycopg2.extensions.cursor.__init__(
self, curs.connection, '"'+name+'"') self, curs.connection, '"'+name+'"')
CURSOR = psycopg2.extensions.new_type((1790,), "CURSOR", Portal) CURSOR = psycopg2.extensions.new_type((1790,), "CURSOR", Portal)
psycopg2.extensions.register_type(CURSOR) psycopg2.extensions.register_type(CURSOR)

View File

@ -10,4 +10,3 @@ class B(object):
return 'It is True' return 'It is True'
else: else:
return 'It is False' return 'It is False'

View File

@ -31,7 +31,7 @@ def sleep(curs):
while not curs.isready(): while not curs.isready():
print "." print "."
time.sleep(.1) time.sleep(.1)
#curs.execute(""" #curs.execute("""
# DECLARE zz INSENSITIVE SCROLL CURSOR WITH HOLD FOR # DECLARE zz INSENSITIVE SCROLL CURSOR WITH HOLD FOR
# SELECT now(); # SELECT now();
@ -47,4 +47,3 @@ def sleep(curs):
#curs.execute("SELECT now() AS bar"); #curs.execute("SELECT now() AS bar");
#sleep(curs) #sleep(curs)

View File

@ -6,4 +6,3 @@ curs = conn.cursor()
curs.execute("SELECT true AS foo WHERE 'a' in %s", (("aa", "bb"),)) curs.execute("SELECT true AS foo WHERE 'a' in %s", (("aa", "bb"),))
print curs.fetchall() print curs.fetchall()
print curs.query print curs.query

View File

@ -40,4 +40,3 @@ dbconn.commit()
cursor.close() cursor.close()
dbconn.close() dbconn.close()

View File

@ -6,4 +6,3 @@ c = o.cursor()
c.execute("SELECT 1.23::float AS foo") c.execute("SELECT 1.23::float AS foo")
x = c.fetchone()[0] x = c.fetchone()[0]
print x, type(x) print x, type(x)

View File

@ -71,5 +71,3 @@ done = 1
cur.close() cur.close()
conn.close() conn.close()

View File

@ -335,7 +335,7 @@
{ {
Debian unstable with libc-i686 suppressions Debian unstable with libc-i686 suppressions
Memcheck:Cond Memcheck:Cond
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
obj:/lib/tls/i686/cmov/libc-2.3.5.so obj:/lib/tls/i686/cmov/libc-2.3.5.so
@ -348,10 +348,10 @@
fun:_PyImport_GetDynLoadFunc fun:_PyImport_GetDynLoadFunc
fun:_PyImport_LoadDynamicModule fun:_PyImport_LoadDynamicModule
} }
{ {
Debian unstable with libc-i686 suppressions Debian unstable with libc-i686 suppressions
Memcheck:Cond Memcheck:Cond
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
@ -365,7 +365,7 @@
fun:_PyImport_GetDynLoadFunc fun:_PyImport_GetDynLoadFunc
fun:_PyImport_LoadDynamicModule fun:_PyImport_LoadDynamicModule
} }
{ {
Debian unstable with libc-i686 suppressions Debian unstable with libc-i686 suppressions
Memcheck:Addr4 Memcheck:Addr4
@ -471,7 +471,7 @@
{ {
Debian unstable with libc-i686 suppressions Debian unstable with libc-i686 suppressions
Memcheck:Cond Memcheck:Cond
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
obj:/lib/tls/i686/cmov/libc-2.3.5.so obj:/lib/tls/i686/cmov/libc-2.3.5.so
obj:/lib/ld-2.3.5.so obj:/lib/ld-2.3.5.so
fun:_dl_open fun:_dl_open

View File

@ -28,7 +28,7 @@ PGMINOR="`echo $PGVERSION | cut -d. -f2`"
echo checking for postgresql major: $PGMAJOR echo checking for postgresql major: $PGMAJOR
echo checking for postgresql minor: $PGMINOR echo checking for postgresql minor: $PGMINOR
echo -n generating pgtypes.h ... echo -n generating pgtypes.h ...
awk '/#define .+OID/ {print "#define " $2 " " $3}' "$PGTYPE" \ awk '/#define .+OID/ {print "#define " $2 " " $3}' "$PGTYPE" \
> $SRCDIR/pgtypes.h > $SRCDIR/pgtypes.h
@ -37,5 +37,3 @@ echo -n generating typecast_builtins.c ...
awk '/#define .+OID/ {print $2 " " $3}' "$PGTYPE" | \ awk '/#define .+OID/ {print $2 " " $3}' "$PGTYPE" | \
python $SCRIPTSDIR/buildtypes.py >$SRCDIR/typecast_builtins.c python $SCRIPTSDIR/buildtypes.py >$SRCDIR/typecast_builtins.c
echo " done" echo " done"

View File

@ -1,6 +1,6 @@
#!/usr/bin/env python #!/usr/bin/env python
''' Python DB API 2.0 driver compliance unit test suite. ''' Python DB API 2.0 driver compliance unit test suite.
This software is Public Domain and may be used without restrictions. This software is Public Domain and may be used without restrictions.
"Now we have booze and barflies entering the discussion, plus rumours of "Now we have booze and barflies entering the discussion, plus rumours of
@ -79,8 +79,8 @@ def str2bytes(sval):
class DatabaseAPI20Test(unittest.TestCase): class DatabaseAPI20Test(unittest.TestCase):
''' Test a database self.driver for DB API 2.0 compatibility. ''' Test a database self.driver for DB API 2.0 compatibility.
This implementation tests Gadfly, but the TestCase This implementation tests Gadfly, but the TestCase
is structured so that other self.drivers can subclass this is structured so that other self.drivers can subclass this
test case to ensure compliance with the DB-API. It is test case to ensure compliance with the DB-API. It is
expected that this TestCase may be expanded in the future expected that this TestCase may be expanded in the future
if ambiguities or edge conditions are discovered. if ambiguities or edge conditions are discovered.
@ -90,9 +90,9 @@ class DatabaseAPI20Test(unittest.TestCase):
self.driver, connect_args and connect_kw_args. Class specification self.driver, connect_args and connect_kw_args. Class specification
should be as follows: should be as follows:
import dbapi20 import dbapi20
class mytest(dbapi20.DatabaseAPI20Test): class mytest(dbapi20.DatabaseAPI20Test):
[...] [...]
Don't 'import DatabaseAPI20Test from dbapi20', or you will Don't 'import DatabaseAPI20Test from dbapi20', or you will
confuse the unit tester - just 'import dbapi20'. confuse the unit tester - just 'import dbapi20'.
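A sketch of what such a subclass might look like for psycopg2, following the specification above (the dsn value is a placeholder, not part of the test suite):

    import dbapi20
    import psycopg2

    class Psycopg2DBAPI20Test(dbapi20.DatabaseAPI20Test):
        driver = psycopg2
        connect_args = ()
        connect_kw_args = {'dsn': 'dbname=dbapi20_test'}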
@ -111,7 +111,7 @@ class DatabaseAPI20Test(unittest.TestCase):
xddl2 = 'drop table %sbarflys' % table_prefix xddl2 = 'drop table %sbarflys' % table_prefix
lowerfunc = 'lower' # Name of stored procedure to convert string->lowercase lowerfunc = 'lower' # Name of stored procedure to convert string->lowercase
# Some drivers may need to override these helpers, for example adding # Some drivers may need to override these helpers, for example adding
# a 'commit' after the execute. # a 'commit' after the execute.
def executeDDL1(self,cursor): def executeDDL1(self,cursor):
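A sketch of the kind of override hinted at above, for a driver that needs an explicit commit after DDL (purely illustrative; it relies on the optional cursor.connection DB-API extension):

    import dbapi20

    class MyDriverTest(dbapi20.DatabaseAPI20Test):
        def executeDDL1(self, cursor):
            cursor.execute(self.ddl1)
            cursor.connection.commit()   # extra commit some drivers require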
@ -135,10 +135,10 @@ class DatabaseAPI20Test(unittest.TestCase):
try: try:
cur = con.cursor() cur = con.cursor()
for ddl in (self.xddl1,self.xddl2): for ddl in (self.xddl1,self.xddl2):
try: try:
cur.execute(ddl) cur.execute(ddl)
con.commit() con.commit()
except self.driver.Error: except self.driver.Error:
# Assume table didn't exist. Other tests will check if # Assume table didn't exist. Other tests will check if
# execute is busted. # execute is busted.
pass pass
@ -255,7 +255,7 @@ class DatabaseAPI20Test(unittest.TestCase):
con.rollback() con.rollback()
except self.driver.NotSupportedError: except self.driver.NotSupportedError:
pass pass
def test_cursor(self): def test_cursor(self):
con = self._connect() con = self._connect()
try: try:
@ -411,7 +411,7 @@ class DatabaseAPI20Test(unittest.TestCase):
) )
elif self.driver.paramstyle == 'named': elif self.driver.paramstyle == 'named':
cur.execute( cur.execute(
'insert into %sbooze values (:beer)' % self.table_prefix, 'insert into %sbooze values (:beer)' % self.table_prefix,
{'beer':"Cooper's"} {'beer':"Cooper's"}
) )
elif self.driver.paramstyle == 'format': elif self.driver.paramstyle == 'format':
@ -551,7 +551,7 @@ class DatabaseAPI20Test(unittest.TestCase):
tests. tests.
''' '''
populate = [ populate = [
"insert into %sbooze values ('%s')" % (self.table_prefix,s) "insert into %sbooze values ('%s')" % (self.table_prefix,s)
for s in self.samples for s in self.samples
] ]
return populate return populate
@ -612,7 +612,7 @@ class DatabaseAPI20Test(unittest.TestCase):
self.assertEqual(len(rows),6) self.assertEqual(len(rows),6)
rows = [r[0] for r in rows] rows = [r[0] for r in rows]
rows.sort() rows.sort()
# Make sure we get the right data back out # Make sure we get the right data back out
for i in range(0,6): for i in range(0,6):
self.assertEqual(rows[i],self.samples[i], self.assertEqual(rows[i],self.samples[i],
@ -683,10 +683,10 @@ class DatabaseAPI20Test(unittest.TestCase):
'cursor.fetchall should return an empty list if ' 'cursor.fetchall should return an empty list if '
'a select query returns no rows' 'a select query returns no rows'
) )
finally: finally:
con.close() con.close()
def test_mixedfetch(self): def test_mixedfetch(self):
con = self._connect() con = self._connect()
try: try:
@ -722,7 +722,7 @@ class DatabaseAPI20Test(unittest.TestCase):
def help_nextset_setUp(self,cur): def help_nextset_setUp(self,cur):
''' Should create a procedure called deleteme ''' Should create a procedure called deleteme
that returns two result sets, first the that returns two result sets, first the
number of rows in booze then "name from booze" number of rows in booze then "name from booze"
''' '''
raise NotImplementedError('Helper not implemented') raise NotImplementedError('Helper not implemented')
@ -869,4 +869,3 @@ class DatabaseAPI20Test(unittest.TestCase):
self.failUnless(hasattr(self.driver,'ROWID'), self.failUnless(hasattr(self.driver,'ROWID'),
'module.ROWID must be defined.' 'module.ROWID must be defined.'
) )