.c files only need to import psycopg.h: it will in turn import the
dependencies from Python, libpq and configure.h. psycopg.h should be the
first file imported, so the basic imports are not required in the .h's.
As a guideline I'm trying to import from the most specific to the most
generic, to detect missing imports in the .h's.
Note: the functions are private because typecast.c imports the .c's of
typecast_[mx]datetime, not the .h's.
Work around the warning about 'skip_until_space' being unused by guarding it with an #ifdef.
Furthermore, those functions are now static.
This cuts off servers whose version is older than 7.4, but it enables us
to remove large portions of rarely used and tested code (e.g. p2 copy)
and will allow us to drop the query we run at each connection to
establish the client encoding and the datestyle.
Explicit comparison with the tuple is required if we want to make
Notify() == (pid, channel) work: item access is not enough (and a test
in the suite fails if we get this wrong).
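For illustration only (this is not the psycopg2 implementation, just a sketch
of the idea with a hypothetical Notify-like class):

    from collections import namedtuple

    class FakeNotify(namedtuple('FakeNotify', 'pid channel payload')):
        # Hypothetical stand-in for Notify: a tuple-like object carrying an
        # extra payload field.
        def __eq__(self, other):
            # Compare explicitly against a plain (pid, channel) pair: plain
            # item access would drag the payload into the comparison and fail.
            if isinstance(other, tuple) and not isinstance(other, FakeNotify):
                return other == (self.pid, self.channel)
            return super(FakeNotify, self).__eq__(other)

    assert FakeNotify(6035, 'test_channel', 'some payload') == (6035, 'test_channel')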
We don't do somersaults to ensure people can use snowmen as transaction
ids anyway: it would require passing the connection to xid_ensure and
down below to use the correct encoding.
By James Henstridge on 2008-07-23.
Merged from lp:~jamesh/psycopg/two-phase-commit/revision/356
* psycopg/connection_type.c (psyco_conn_xid): add a
Connection.xid() method that instantiates Xid objects.
* psycopg/psycopgmodule.c (init_psycopg): initialise the Xid
object type.
* psycopg/xid.h:
* psycopg/xid_type.c: Implement a basic transaction ID object for
use in two phase commit.
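A sketch of the intended usage, together with the tpc_*() methods from the
same branch (connection parameters and table name are placeholders):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    # format_id, global transaction id, branch qualifier
    xid = conn.xid(42, 'my-global-transaction', 'my-branch')

    conn.tpc_begin(xid)
    cur = conn.cursor()
    cur.execute("INSERT INTO test_table VALUES (1)")
    conn.tpc_prepare()   # PREPARE TRANSACTION on the server
    conn.tpc_commit()    # COMMIT PREPARED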
It was deprecated in version 2.1. There are bugs to fix that are made more
complex by its presence, and it doesn't take into account the new
PostgreSQL 9.0 binary format.
Time to go!
Allow the objects to be recognized as the proper type by Postgres in
contexts where the type is not strongly implied: problem reported by Peter Eisentraut.
Added tests to check the types are respected in a complete Py -> PG ->
Py roundtrip without context.
Don't rely on Postgres casting the literal according to the context:
this doesn't work e.g. when passing the object as a function argument
where a function with the same name but taking text exists. It doesn't
work either when the object is in an ARRAY construct.
Added test to check the type is respected in a complete Py -> PG -> Py
roundtrip without context.
Bug and solution reported by Peter Eisentraut.
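As an illustration of the general idea (the Point class and its adapter are
hypothetical, not part of psycopg2), appending an explicit cast to the
literal removes the dependency on the context:

    from psycopg2.extensions import register_adapter, AsIs

    class Point(object):
        def __init__(self, x, y):
            self.x, self.y = x, y

    def adapt_point(p):
        # The trailing ::point cast makes the type explicit, so the value is
        # recognized correctly even as a function argument or inside ARRAY[...].
        return AsIs("'(%s,%s)'::point" % (float(p.x), float(p.y)))

    register_adapter(Point, adapt_point)
    # cur.execute("SELECT %s", (Point(1.2, 3.4),)) now sends '(1.2,3.4)'::point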
Dropped set/unset nonblocking mode for copy and lobject operations:
lobjects don't work in nonblocking mode so they will hardly be supported
in green/async branches. Support for copy is still feasible, but it
will be done in other code paths (called by poll).
Failing to do that broke notification reception.
The responsibility for changing the async_status has been moved to the
poll function: this is consistent with how the async branch is
implemented.
With this commit the whole test suite passes in "green" mode.
With the current implementation, at best they would silently block. They
actually hang everything.
Implementation postponed until after some refactoring of the polling system,
because it will probably be possible to provide an implementation of
'poll()' during COPY which is good for both async and green modes.
The function is called without holding the GIL. Because it is necessary
to execute the Python callback if one is set, we need to re-acquire the GIL
and then release it again. In order to correctly keep track of the thread
state, a pointer to the _save variable is passed to the function.
If the connection is sync, notices will be processed by pq_fetch()
downstream.
If the connection is async, here we have only sent the query: no result
is ready yet, nor have notices had a chance to arrive; they will be
retrieved later by pq_is_busy().
Added tests to check the above statements don't break.
Instead, the code should be using the fileno() and poll() methods of
the cursor's connection. Handle the case when poll() is called on an
already built connection as a request to poll the asynchronous query
(if there is one) and get NOTIFY events.
Update the tests to reflect that change, add a test for NOTIFY.
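A sketch of the resulting usage for notifications (dsn and channel name are
placeholders; autocommit keeps the LISTEN outside a transaction):

    import select
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("dbname=test")
    conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
    curs = conn.cursor()
    curs.execute("LISTEN test_channel")

    while True:
        # Wait until the backend has something for us, then let poll()
        # read it and fill conn.notifies.
        select.select([conn.fileno()], [], [])
        conn.poll()
        while conn.notifies:
            pid, channel = conn.notifies.pop()
            print("NOTIFY from backend %s on channel %s" % (pid, channel))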
Do it by keeping the reference to the last PGresult in the cursor and
calling pq_fetch() before ending the asynchronous execution. This
takes care of handling the possible error state of the PGresult and
also allows the removal of the needsfetch flag, since now after
execution ends the results are already fetched and parsed.
Without this, a query that did not get flushed completely to the server
would cause cursor.poll() to always go into the curs_poll_send()
branch even if it was returning ASYNC_READ.
Bug report by Daniele Varrazzo.
This hides from the user the libpq's implementation detail of
requiring the first select() to wait for the connection socket to
become writable and makes it possible to have a uniform select loop
for both cursors and connections, in which you always start by polling
the object and then acting according to the result from poll().
Idea and implementation by Daniele Varrazzo.
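The resulting loop can be written once for both kinds of objects, along
these lines (a sketch; the POLL_* constants are exported by
psycopg2.extensions):

    import select
    import psycopg2
    import psycopg2.extensions

    def wait(pollable):
        # 'pollable' is an async connection or, in this branch, an async
        # cursor: both expose poll() and fileno() with the same semantics.
        while True:
            state = pollable.poll()
            if state == psycopg2.extensions.POLL_OK:
                break
            elif state == psycopg2.extensions.POLL_READ:
                select.select([pollable.fileno()], [], [])
            elif state == psycopg2.extensions.POLL_WRITE:
                select.select([], [pollable.fileno()], [])
            else:
                raise psycopg2.OperationalError("bad poll() status: %r" % state)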
If there is an asynchronous query, polling a cursor that did not
initiate it will raise an exception. Polling while there is no
asynchronous query underway still works, because the user needs to
have a way to get asynchronous NOTIFYs.
POLL_OK has been changed from 3 to 0 to let the user specify a short loop
just as "if not curs.poll()" instead of having to check for write and read
separately. For an example of this, see examples/notify.py.
It was trying to get all pending results from the connection, and if
the client sent many and any one after the first was not immediately
available, the loop in curs_get_last_result would make a blocking call
to PQgetResult.
Avoid that by calling PQisBusy every time and telling the client to
wait for more data if it returns 1.
The CONN_STATUS_SENT_* statuses were not being handled at all, and
they indicate that a query has been sent, but not fully, so the client
should wait for the socket to become writable again and flush the output.
The methods changed are connection.commit(), rollback(), reset(),
set_isolation_level(), set_client_encoding(), lobject() and cursor(str),
as well as cursor.execute() and cursor.callproc() if another query is
in progress, and cursor.executemany() and cursor.copy_{from,to,expert}().
Drop the async kwarg from cursor.execute(): cursors created by
asynchronous connections will be asynchronous by default, and ones created
by synchronous connections will be synchronous.
Mind that this might break third party subclasses of
psycopg2.extensions.cursor, if they try to chain to the superclass in
their execute() implementation and are passing the async kwarg. The
example cursors in psycopg2.extras have been fixed not to do that.
Clients using async connections are expected to do their own
transaction management by sending (asynchronously) BEGIN and COMMIT
statements.
As a bonus, it allows us to drop one step from the async connection
building, namely getting the default isolation level from the server.
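On the client side that looks roughly like this (a sketch: wait() is a
poll/select loop like the one shown above, aconn is an established async
connection and the table name is a placeholder):

    curs = aconn.cursor()

    curs.execute("BEGIN")
    wait(aconn)
    curs.execute("INSERT INTO test_table VALUES (42)")
    wait(aconn)
    curs.execute("COMMIT")
    wait(aconn)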
The isread() API was not safe, because the query might not have been
fully sent to the server after calling execute(). To make the async
API complete, a mechanism similar to the one used for async connections
must be used.
The cursor now has a poll() method that you would use identically to
the poll() method of the connection class.
After calling psycopg2.connect(dsn, async=True) you can poll the
connection: poll() will tell you whether its file descriptor should be
waited on to become writable or readable, or that the connection
attempt has succeeded.
Edited commit by Jan to not expose internal state in extensions.py.
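The connection phase in a plain select() loop then looks roughly like this
(a sketch; the dsn is a placeholder and on Python versions where 'async' is
a reserved word the keyword is spelled 'async_'):

    import select
    import psycopg2
    import psycopg2.extensions

    aconn = psycopg2.connect("dbname=test", async=True)

    while True:
        state = aconn.poll()
        if state == psycopg2.extensions.POLL_OK:
            break                                     # connection established
        elif state == psycopg2.extensions.POLL_WRITE:
            select.select([], [aconn.fileno()], [])   # wait until writable
        elif state == psycopg2.extensions.POLL_READ:
            select.select([aconn.fileno()], [], [])   # wait until readable
        else:
            raise psycopg2.OperationalError("poll() returned %r" % state)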
Remove the big loop in pq_fetch with the select() call, among
others. Code paths that don't support asynchronous queries should now
be adequately guarded.
The lobject.truncate(len=0) method will be available if psycopg2 has
been built against libpq from 8.3 or later (which is when the lobject
truncating support has been introduced).
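Usage sketch (connection parameters are placeholders; the data is written
just to have something to truncate):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    lobj = conn.lobject()                  # create a new large object
    lobj.write("some data to be cut short")
    lobj.truncate(9)                       # keep only the first 9 bytes
    lobj.truncate()                        # len defaults to 0: empty the object
    conn.commit()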
* psycopg/adapter_binary.c (binary_escape): simplify PostgreSQL
version check.
* setup.py (psycopg_build_ext.finalize_options): use a single
define of the PostgreSQL version in a form that can easily be used
by #ifdefs.
coverage for datetime and time strings with and without time zone
information.
* psycopg/typecast_datetime.c (typecast_PYDATETIME_cast): adjust
to handle the changes in typecast_parse_time.
(typecast_PYTIME_cast): add support for time zone aware time
values.
* psycopg/typecast_mxdatetime.c (typecast_MXDATE_cast): make sure
that values with time zones are correctly processed (even though
that means ignoring the time zone value).
(typecast_MXTIME_cast): same here.
* psycopg/typecast.c (typecast_parse_time): Update method to parse
second resolution timezone offsets.
negative timezone offsets with a non-zero minutes field.
* tests/test_dates.py (DatetimeTests): Add tests for time zone
parsing. The test for HH:MM:SS time zones is disabled because we
don't currently support it.
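The intended behaviour, roughly (a sketch; connection parameters and the
literal are just examples):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    cur.execute("SELECT '13:45:26.123+02'::timetz")
    t = cur.fetchone()[0]
    # t is a timezone-aware datetime.time; its tzinfo is a fixed-offset
    # timezone at +120 minutes.
    print(t, t.tzinfo)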
* psycopg/cursor_type.c: add support for cyclic GC.
* psycopg/python.h: add definitions for Py_CLEAR() and Py_VISIT()
for compatibility with old versions of Python.
and change the Python level attribute to a getset.
* psycopg/lobject.h (lobjectObject): remove the closed member,
since "fd < 0" gives us the same information. Reorder the struct
members for better packing.
including behaviour on closed lobjects and stale lobjects.
* psycopg/lobject_type.c (psyco_lobj_close): don't mark the
connection closed here because it is done by
lobject_close_locked().
* psycopg/lobject_int.c (lobject_open): mark objects as not closed
if we successfully open them.
(lobject_close_locked): mark the lobject closed here.
(lobject_export): ensure we are in a transaction, since
lo_export() issues multiple queries.
* psycopg/lobject_type.c (lobject_setup): make lobjects start closed.
connection->encoding with free() instead of PyMem_Free().
* psycopg/connection_int.c (conn_connect): use malloc() to
allocate connection->encoding instead of PyMem_Malloc(), since it
is freed in other places with free() and assigned to with
strdup().
return value for partially parsed time values.
* psycopg/typecast_mxdatetime.c (typecast_MXDATE_cast): return
NULL after setting DataError. Also, don't treat it as an error if
typecast_parse_time() returns 0 (as might happen if the remainder
of the string is " BC").
* psycopg/typecast_datetime.c (typecast_PYDATE_cast): return NULL
after setting DataError.
(typecast_PYDATETIME_cast): same here.
(typecast_PYTIME_cast): same here.
* tests/test_dates.py
(CommonDatetimeTestsMixin.test_parse_incomplete_date): test that
parsing incomplete date values results in DataError.
(CommonDatetimeTestsMixin.test_parse_incomplete_time): same for
times.
(CommonDatetimeTestsMixin.test_parse_incomplete_datetime): same for
datetimes.
* psycopg/pqpath.c (pq_raise): if PSYCOPG_EXTENSIONS is not
defined, raise OperationalError rather than
TransactionRollbackError for deadlock or serialisation errors for
protocol versions less than 3.
(typecast_mxdatetime): same here.
* psycopg/typecast_builtins.c (typecast_builtins): make array
static.
* psycopg/psycopgmodule.c: add hidden visibility to a bunch of
global variables here.
* psycopg/psycopg.h: set QueryCanceledError and
TransactionRollbackError to hidden visibility.
that should not be exported from the module. This results in a 5%
reduction in code size and shortens the dynamic symbol table.
* psycopg/config.h: If GCC >= 4.0 is installed, define the HIDDEN
symbol to apply the "hidden" visibility attribute.
the Connection and Cursor "closed" attributes.
* psycopg/cursor_type.c (psyco_curs_get_closed): add a "closed"
attribute to cursors. It will be True if either the cursor or its
associated connection is closed. This fixes bug #164.
over some tests for serialisation and deadlock errors,
demonstrating that TransactionRollbackError is generated.
(QueryCancelationTests): add a test to show that
QueryCanceledError is raised on statement timeouts.
* psycopg2da/adapter.py (_handle_psycopg_exception): rather than
checking exception messages, check for TransactionRollbackError.
* psycopg/pqpath.c (exception_from_sqlstate): return
TransactionRollbackError for 40xxx errors, and QueryCanceledError
for 57014 errors.
(pq_raise): If we are using an old server, use
TransactionRollbackError if the error message contains "could not
serialize" or "deadlock detected".
* psycopg/psycopgmodule.c (_psyco_connect_fill_exc): remove
function, since we no longer need to store pointers to the
exceptions in the connection. This also fixes a reference leak.
(psyco_connect): remove _psyco_connect_fill_exc() function call.
* psycopg/connection.h (connectionObject): remove exception
members from struct.
* psycopg/connection_type.c (connectionObject_getsets): modify the
exception attributes on the connection object from members to
getsets. This reduces the size of the struct.
* lib/extensions.py: import the two new extensions.
* psycopg/psycopgmodule.c (exctable): add new QueryCanceledError
and TransactionRollbackError exceptions.
* tests/test_dates.py: add tests for date/time typecasting and
adaptation.
* psycopg/adapter_mxdatetime.c (mxdatetime_str): add support for
outputting BC dates (which involves switching them to one-based
dates). Also remove broken handling of microseconds.
* psycopg/typecast.c (typecast_parse_date): if the string ends
with "BC" adjust the year value to be a zero-based BC value as
used by mx.DateTime (datetime doesn't support BC dates).
(typecast_parse_time): ignore ' ', 'B' and 'C' in time strings
rather than treating them as part of the seconds part of the time.
(TransactionTestCase.test_failed_commit): Expect IntegrityError
instead of OperationalError.
* psycopg/pqpath.c (exception_from_sqlstate): new function that
converts an SQLSTATE error code to the corresponding exception
class.
(pq_raise): use exception_from_sqlstate() to pick which exception
to use when working with protocol version 3.
(pq_complete_error): Let pq_raise() pick an appropriate exception
rather than forcing OperationalError.
patch from ticket #209 to check return value from
PyObject_AsCharBuffer(). This fixes the segfault.
(binary_quote): switch from PyObject_AsCharBuffer() to
PyObject_AsReadBuffer() to support buffer objects that don't
implement the bf_getcharbuf protocol.
* tests/types_basic.py (TypesBasicTests.testBinary): Test round
tripping of bytea buffers. Currently segfaults.
pq_abort_locked() prototype.
(conn_switch_isolation_level): fix for new pq_abort_locked()
prototype, and use pq_complete_error() to show error message.
(conn_set_client_encoding): same here.
* psycopg/pqpath.c (pq_execute_command_locked): remove static
modifier.
(pq_complete_error): same here.
(pq_abort_locked): add pgres and error arguments.
(pq_abort): call pq_abort_locked() to reduce code duplication.
* psycopg/pqpath.c (pq_execute_command_locked): add an error
argument to hold an error when no PGresult is returned by PQexec,
rather than using pq_set_critical().
(pq_complete_error): new function that converts the error returned
by pq_execute_command_locked() to a Python exception.
(pq_begin_locked): add error argument.
(pq_commit): use pq_complete_error().
(pq_abort): use pq_complete_error().
(pq_abort_locked): always call pq_set_critical() on error, and
clear the error message from pq_execute_command_locked().
(pq_execute): use pq_complete_error() to handle the error from
pq_begin_locked().
* psycopg/pqpath.c (pq_begin): remove unused function.
* psycopg/connection_type.c (psyco_conn_commit): if conn_commit()
raises an error, just return NULL, since it is now setting an
exception itself.
(psyco_conn_rollback): same here.
* psycopg/connection_int.c (conn_commit): don't drop GIL and lock
connection before calling pq_commit().
(conn_rollback): same here.
(conn_close): use pq_abort_locked().
(conn_switch_isolation_level): same here.
(conn_set_client_encoding): same here.
* psycopg/pqpath.h: add prototype for pq_abort_locked().
* psycopg/pqpath.c (pq_commit): convert function to run with GIL
held, and handle errors appropriately.
(pq_abort): same here.
(pq_abort_locked): new function to abort a locked connection.
2007-12-22 James Henstridge <james@jamesh.id.au>
* psycopg/pqpath.c (pq_raise): add a "pgres" argument so we can
generate nice errors not related to a particular cursor.
(pq_execute): use pq_begin_locked() rather than pq_begin(). Use
pq_raise() to handle any errors from it.
* psycopg/pqpath.c (pq_execute_command_locked): helper function
used to execute a command-style query on a locked connection.
(pq_begin_locked): a variant of pq_begin() that uses
pq_execute_command_locked().
(pq_begin): rewrite to use pq_begin_locked().
* psycopg/config.h: only print debug messages if
psycopg_debug_enabled is true.
* psycopg/psycopgmodule.c (init_psycopg): set
psycopg_debug_enabled to true if the $PSYCOPG_DEBUG environment
variable is set.
NULL" error handler after the PQexec() call. This is needed to
catch database disconnects (and probably other errors). According
to Federico, it was commented out to avoid a spurious error, so we
should watch for problems.
of the exception message if it actually gives the severity.
* psycopg/pqpath.h (pq_resolve_critical): add prototype, since
this function is being used from connection_int.c.
* psycopg/psycopg.h: update psyco_set_error() prototype.
* psycopg/psycopgmodule.c (psyco_errors_init): set pgerror, pgcode
and cursor class attributes to None on psycopg2.Error so that the
attributes will always be available (simplifies error handling).
(psyco_set_error): add const qualifiers to msg, pgerror and pgcode
arguments.
Don't bother setting pgerror, pgcode or cursor to None if they are
not provided -- the class defaults take care of this.
requiring it.
Added a connection flag to store whether E''-style quoting is required: this
avoids repeated PQparameterStatus() calls.
Added a test case to verify correct behavior on strings, unicode and binary
data. Tested with PG versions from 7.4 to 8.3b2, with any server
'standard_conforming_strings' setting and with 'PSYCOPG_OWN_QUOTING' too.
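The effect can be observed by preparing an adapter against a connection (a
sketch; the exact output depends on the server's
standard_conforming_strings setting):

    import psycopg2
    from psycopg2.extensions import adapt

    conn = psycopg2.connect("dbname=test")

    a = adapt("back\\slash and quote'")
    a.prepare(conn)       # let the adapter inspect the connection's settings
    print(a.getquoted())  # prefixed with E only when the server requires it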
interpreters) as proposed by Graham Dumpleton.
If running in the main interpreter, use a cached version of the Decimal
object. Otherwise, repeat the object lookup.