Named cursors on old server versions have a different prefetch behaviour.
This had hidden from me the supported range of the 24:00 time format.
Let's have another go at full testing...
This is for people using dtuple.py; a dtuple.DatabaseTuple instance
keeps a reference to cursor.description, which is not picklable because
psycopg2 doesn't export the Column namedtuple it uses.
This commit exports the Column namedtuple, and includes a test to verify
the pickle/unpickle works after exporting Column.
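A minimal sketch of what the export enables (the DSN and query are
hypothetical):

    import pickle
    import psycopg2

    conn = psycopg2.connect("dbname=test")   # assumed test database
    cur = conn.cursor()
    cur.execute("select 1 as foo")
    # description is a sequence of Column namedtuples; with Column
    # exported, pickle can locate the class again on load
    data = pickle.dumps(cur.description)
    assert pickle.loads(data) == cur.description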
It is raised on 32 bits by PyArg_ParseTuple. We could work around it in
truncate() (maybe by parsing a Py_ssize_t) but we would have the same
problem in seek(), as the offset is signed.
`close()` is implicitly called by `__exit__()`, so an exit on error
would run a query on an inerr connection, causing another exception
that hides the original one. The fix is in `close()`, not in
`__exit__()`, because the semantics of the latter are simply to call
the former.
Closes #262.
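A hypothetical reproduction, assuming a named (server-side) cursor,
whose close() normally runs a CLOSE query:

    import psycopg2

    conn = psycopg2.connect("dbname=test")        # assumed test database
    with conn.cursor("mycur") as cur:             # __exit__() calls cur.close()
        cur.execute("select * from nonexistent")  # fails: connection now inerr
    # closing a named cursor runs "CLOSE mycur"; before this fix that
    # query raised a second exception from __exit__(), hiding the first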
The Windows server version of PostgreSQL uses a function called pgkill in the
file kill.c in place of the UNIX kill function. This pgkill function
simulates some of the SIGHUP-like commands by passing signals through a named
pipe. Because the signal passes through a pipe, the server doesn't receive
the kill signal immediately and therefore fails the
test_connection.ConnectionTests.test_cleanup_on_badconn_close test.
Ideally, the test should check whether the server is running on Windows, not
whether psycopg is.
On Windows, select.select() hangs forever in the
test_async_connection_error_message() test. Adding a 10 second timeout
allows the tests to continue.
This matches postgres server-side behaviour and helps client applications that need to sort based on the primary key of tables where the primary key is or contains a range.
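A minimal client-side sketch of the resulting sortability, using the
NumericRange wrapper (the values are illustrative):

    from psycopg2.extras import NumericRange

    rows = [NumericRange(2, 3), NumericRange(1, 5), NumericRange(1, 2)]
    rows.sort()   # orders as the server does: lower bound, then upper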
Dropped the encoding parameter from the constructor: it was used
nowhere and not documented. Use the connection encoding directly
if available, else the previous latin1 fallback.
Postgres 9.3 turns messages about implicit indexes and sequences from NOTICE
to DEBUG1 so the tests fail with a default 9.3 server configuration because
the client doesn't get any NOTICE. Fix it by also asking for DEBUG1 messages
from the server when testing against Postgres >= 9.3.
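A sketch of that workaround, assuming an open test connection; the
setting used is the standard client_min_messages:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # assumed test database
    cur = conn.cursor()
    if conn.server_version >= 90300:
        cur.execute("set client_min_messages = debug1")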
Unfortunately PQcancel blocks, so it's no better than PQgetResult.
It has been suggested to use PQreset in a non-blocking way, but this would
burden the Python program with handling, in an unexpected place, a
connection that is established but not yet configured.
TypeError is the standard Python error raised in this case:
$ python -c "(lambda a: None)(b=10)"
TypeError: <lambda>() got an unexpected keyword argument 'b'
We only used to raise InterfaceError when connect was used without
any parameter at all, so it's hard to imagine a program depending on
that design. Furthermore the function has always raised (and still
does) OperationalError too, if the bad argument is detected by the
libpq, and that cannot be changed because we can't tell the
difference from a normal connection error.
We don't need to look for stuff implicitly in pg_catalog, as all
the builtin ranges are already registered. So just search in
'public' if the schema is not specified.
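A usage sketch; the floatrange type, the 'myschema' schema and the
connection are assumptions:

    import psycopg2
    from psycopg2.extras import register_range

    conn = psycopg2.connect("dbname=test")  # assumed test database
    # assumes: create type floatrange as range (subtype = float8)
    r1 = register_range('floatrange', 'FloatRange', conn)           # found in 'public'
    r2 = register_range('myschema.floatrange', 'FloatRange', conn)  # explicit schema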
I was avoiding Numeric to avoid conflicting with the 'numeric'
Postgres type (for which 'decimal' is an alias). But now that there
is a single numeric range I can use the preferred name.
In Python 3.3 items are returned as ints instead of chars.
I'm not sure the way I did it is correct: worth asking some
hardcore Python dev.
Fixed tests after the stricter memview comparison rules in Py 3.3.
Pass a dumps function instead. Allow customizing by either arg passing or
subclassing.
The basic Json class now raises ImportError on getquoted() if json is not
available, thus allowing the use of a customized Json subclass even when
the json module is not available.
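A sketch of the two customization paths (the function and subclass
names are illustrative):

    import json
    from psycopg2.extras import Json

    # 1. pass a dumps function
    def compact(obj):
        return json.dumps(obj, separators=(',', ':'))

    adapter = Json({'a': 1}, dumps=compact)

    # 2. subclass and override dumps()
    class OrderedJson(Json):
        def dumps(self, obj):
            return json.dumps(obj, sort_keys=True)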
Makes invocation from subclasses and generic code easier.
Code simplified by using default values for keyword arguments
and avoiding needless conversions back and forth between Python and C
strings. Also added a connection type check to the cursor's init.
Failing to do so was causing the issue reported in ticket #103. The issue
as reported was fixed when SET ISOLATION LEVEL was dropped, but the real
problem wasn't fixed.
The correction is similar to the other one for the other subclasses.
Also added tests for rowcount and rownumber during different fetch styles.
Just in case.
Regression introduced by the fix for ticket #80. Don't use fetchmany to
get the chunks of values. I had done it that way because I was running
into infinite recursion calling __iter__ from __iter__: the solution has
been the "while 1: yield next()" idiom.
Actually *it doesn't*: once we iterate the first itersize records, rowcount
is reset to zero. If we want to fix it we need an extra member in the
cursor.
Avoid creating a new FixedOffsetTimezone instance if one with the
same offset and name has been created before. This will save memory
when returning many rows containing "timestamp with time zone" columns,
and also improves comparability.
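A Python-level sketch of the interning idea (the real cache lives with
the class itself; the function name is illustrative):

    from psycopg2.tz import FixedOffsetTimezone

    _cache = {}

    def fixed_offset_tz(offset=None, name=None):
        key = (offset, name)
        if key not in _cache:
            _cache[key] = FixedOffsetTimezone(offset, name)
        return _cache[key]   # same instance for the same (offset, name)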
The offset displayed was always positive and somewhat confusing. The
offset displayed now is the offset that the instance was created
with.
Also added some tests for initialisation.
This basically removes the READ UNCOMMITTED level (which internally
PostgreSQL maps to READ COMMITTED anyway) to keep the numeric values
compatible with old psycopg versions. For full details and discussion
see this thread:
http://archives.postgresql.org/psycopg/2011-12/msg00008.php
In fact it doesn't change "the transaction", as there must be no
transaction in progress when it is invoked. The effect instead is to
execute SET SESSION CHARACTERISTICS.
The encoding can be set by PGCLIENTENCODING, which may be an alternative
spelling. Bug reported by Peter Eisentraut.
At this point the idea of considering one of the random spellings such as
EUC_CN as somewhat "blessed" is debunked. So just store the cleaned-up
version of the encoding in the mapping table. Note that the cleaned-up
version was needed by the unicode adapter: this requirement has been
superseded, as the connection now contains a copy of the Python codec
name, set whenever the client encoding is set.
PG 9.0 uses the hex format by default, and clients < 9.0 can't parse that
format, requiring client updates and great care in what is linked at
runtime, and generally giving headaches to users and, transitively, to us.
Looks like there is a case for installing hstore somewhere else (see
ticket #45). And after all the typecaster can be registered on a list of
OIDs, so let's grab them all.
An empty array can be returned untyped by Postgres. To handle
this case, a special handler is added for the type UNKNOWNOID.
If the value returned by the database is strictly equal to "{}",
the value is converted. Otherwise, the conversion falls back on
the default handler.
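A Python sketch of the handler's logic (the real handler is C;
default_cast is a hypothetical stand-in for the fallback):

    def cast_unknown_array(value, cursor):
        if value == '{}':                    # untyped empty array
            return []
        return default_cast(value, cursor)   # hypothetical default handler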
- Raise an exception on incomplete placeholders.
- Minor speedups.
- Don't change the string in place (??!!) if the placeholder is not 's'
  and the value is null.
The latter point can be done because downstream we don't accept anything
different from 's' anyway (in the Bytes_Format function).
Notice that now the format string is constant regardless of the arguments.
This means that executemany is still more inefficient than it should be,
as mogrify could work on the parameters alone. However such an
implementation is only worthwhile if we start supporting real parameters.
Let's talk about that for the next release.
The value controls the number of records to fetch per network
roundtrip during named cursor iteration. It is used to avoid the
inefficient arraysize default of 1 without giving that value the magic
meaning of 2000.
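Usage sketch; the DSN and the table are hypothetical:

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # assumed test database
    cur = conn.cursor('fetch_demo')          # named, server-side cursor
    cur.itersize = 5000                      # records per network roundtrip
    cur.execute("select * from big_table")   # hypothetical table
    for row in cur:
        print(row)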
The feature in itself is not extremely useful, and on the other hand
PostgreSQL is not always able to cast away from text[], which is a
regression (see ticket #42).
The test_concurrent_execution test, which has two threads issue a pg_sleep
of 2 seconds and checks that they complete in under 3 seconds, occasionally
fails when run in a virtual machine on a VM server with other
virtual machines running. Increased the sleep to 4 and the check to 7,
giving a 3 second buffer instead of 1 second.
Windows is not able to create a tempfile with NamedTemporaryFile and then
open it with a second file handle without closing the first one. Added
code to close the handle, and keep the file around a little longer so it
can be reopened and rewritten again.
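The pattern, sketched with the standard library alone:

    import os
    import tempfile

    f = tempfile.NamedTemporaryFile(delete=False)
    f.write(b'some data')
    f.close()                        # Windows: release the first handle
    with open(f.name, 'rb') as f2:   # the second open now succeeds
        data = f2.read()
    os.unlink(f.name)                # clean up once we are done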
Using self.conn.dsn as the dsn connection string actually has the password
'x'ed out: the initial connection replaces the password with 'x' to
obfuscate it. Using tests.dsn instead of self.conn.dsn ensures that the
correct connection string is used.
If a connection is destroyed before an async operation is completed, the
`async_cursor` member creates a reference loop, leaving the connection and
the cursor alive. `async_cursor` is now a weak reference.
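A Python-level sketch of the change (the real member lives in the C
connection struct; the class and method names are illustrative):

    import weakref

    class Connection(object):                        # illustrative stand-in
        def start_async(self, cursor):
            self.async_cursor = weakref.ref(cursor)  # weak: no reference loop

        def cursor_for_result(self):
            return self.async_cursor()               # None once collected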
I don't know why I changed the defaultTest argument into a function when I
converted the test suite into a package: that argument should really be
a string.
Added an adapter for None: it is usually not invoked, as adaptation to
NULL is a fast path in mogrify, but it can be invoked by composite types.
Notice that composite adapters still have the option to fast-path None
(e.g. the list adapter does).
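A rough Python equivalent of the adapter:

    class NoneAdapter(object):
        def __init__(self, obj):
            self.obj = obj

        def getquoted(self, _null=b"NULL"):
            return _null    # None always adapts to the NULL literal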
Dropped cyclic import from modules to tests: they were only working
because a second copy of the package was found in the project dir.
Use relative import so that 2to3 can do a good conversion.
We mangle the encoding names a little bit before asking the backend for
them: be sure to be able to map back to the equivalent Python codec, or
decoding (unicode cast, or Py3) will barf.
Explicit comparison with the tuple is required if we want to make
Notify() == (pid, channel) work: item access is not enough (and a test
in the suite fails if we get this wrong).
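The behaviour the test pins down, sketched:

    from psycopg2.extensions import Notify

    n = Notify(6432, 'my_channel')
    assert n == (6432, 'my_channel')   # needs explicit tuple comparison
    assert n[0] == 6432                # item access alone isn't enough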
We don't do somersaults to ensure people can use snowmen as transaction
ids anyway: it would require passing the connection to xid_ensure and
down below to use the correct encoding.
Allow the objects to be recognized as the proper type by Postgres in
weakly typed contexts: problem reported by Peter Eisentraut.
Added tests to check the types are respected in a complete Py -> PG ->
Py roundtrip without context.
Don't rely on Postgres casting the literal according to the context:
this doesn't work e.g. when passing the object as a function argument
while a function with the same name but taking a text argument exists.
It doesn't work either when the object is in an ARRAY construct.
Added test to check the type is respected in a complete Py -> PG -> Py
roundtrip without context.
Bug and solution reported by Peter Eisentraut.
With the current implementation, at best they would silently block. They
actually hang everything.
Implementation postponed until after some refactoring of the polling
system, because it will probably be possible to provide an implementation
of 'poll()' during COPY which is good for both async and green modes.
If the connection is sync, notices will be processed by pq_fetch()
downstream.
If the connection is async, here we have only sent the query: no result
is ready yet, nor have notices had a chance to arrive; they will
be retrieved later by pq_is_busy().
Added tests to check the above statements don't break.
Instead, the code should be using the fileno() and poll() methods of
the cursor's connection. Handle the case when poll() is called on an
already built connection as a request to poll the asynchronous query
(if there is one) and get NOTIFY events.
Update the tests to reflect that change, add a test for NOTIFY.
Do it by keeping the reference to the last PGresult in the cursor and
calling pq_fetch() before ending the asynchronous execution. This
takes care of handling the possible error state of the PGresult and
also allows the removal of the needsfetch flag, since now after
execution ends the results are already fetched and parsed.
This hides from the user the libpq's implementation detail of
requiring the first select() to wait for the connection socket to
become writable and makes it possible to have a uniform select loop
for both cursors and connections, in which you always start by polling
the object and then acting according to the result from poll().
Idea and implementation by Daniele Varrazzo.
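A sketch of the uniform loop this design enables:

    import select
    from psycopg2 import extensions

    def wait(pollable):
        # pollable may be a connection, or a cursor's connection
        while True:
            state = pollable.poll()
            if state == extensions.POLL_OK:
                break
            elif state == extensions.POLL_READ:
                select.select([pollable.fileno()], [], [])
            elif state == extensions.POLL_WRITE:
                select.select([], [pollable.fileno()], [])
            else:
                raise Exception("unexpected poll state: %r" % state)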
If there is an asynchronous query, polling a cursor that did not
initiate it will raise an exception. Polling while there is no
asynchronous query underway still works, because the user needs to
have a way to get asynchronous NOTIFYs.
When a large query is sent to the backend (and probably in high
concurrency situations), writing the query could block. In
this case PQflush() should be called until it returns 0. The test checks
this is done correctly.
Some methods were forbidden in asynchronous mode, the isolation level
of an asynchronous connection is not always 0, and these changes
influenced the expected test results.
The lobject.truncate(len=0) method will be available if psycopg2 has
been built against libpq from 8.3 or later (which is when the lobject
truncating support has been introduced).
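Usage sketch, assuming psycopg2 was built against such a libpq and a
reachable test database:

    import psycopg2

    conn = psycopg2.connect("dbname=test")  # assumed test database
    lo = conn.lobject()
    lo.write('x' * 1000)
    lo.truncate(100)    # shrink to 100 bytes
    lo.truncate()       # len defaults to 0: empty the object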
coverage for datetime and time strings with and without time zone
information.
* psycopg/typecast_datetime.c (typecast_PYDATETIME_cast): adjust
to handle the changes in typecast_parse_time.
(typecast_PYTIME_cast): add support for time zone aware time
values.
* psycopg/typecast_mxdatetime.c (typecast_MXDATE_cast): make sure
that values with time zones are correctly processed (even though
that means ignoring the time zone value).
(typecast_MXTIME_cast): same here.
* psycopg/typecast.c (typecast_parse_time): Update method to parse
second resolution timezone offsets.
negative timezone offsets with a non-zero minutes field.
* tests/test_dates.py (DatetimeTests): Add tests for time zone
parsing. The test for HH:MM:SS time zones is disabled because we
don't currently support it.
Currently the second fails for negative offsets due to bugs in the
parser, and the third fails because it doesn't even try to parse second
offset values (as Python doesn't either).
including behaviour on closed lobjects and stale lobjects.
* psycopg/lobject_type.c (psyco_lobj_close): don't mark the
connection closed here because it is done by
lobject_close_locked().
* psycopg/lobject_int.c (lobject_open): mark objects as not closed
if we successfully open them.
(lobject_close_locked): mark the lobject closed here.
(lobject_export): ensure we are in a transaction, since
lo_export() issues multiple queries.
* psycopg/lobject_type.c (lobject_setup): make lobjects start closed.
* tests/*.py: use the DSN constructed in tests/__init__.py.
* tests/__init__.py: allow setting the host, port and user for the
DSN used by the tests through the environment.
return value for partially parsed time values.
* psycopg/typecast_mxdatetime.c (typecast_MXDATE_cast): return
NULL after setting DataError. Also, don't treat it as an error if
typecast_parse_time() returns 0 (as might happen if the remainder
of the string is " BC").
* psycopg/typecast_datetime.c (typecast_PYDATE_cast): return NULL
after setting DataError.
(typecast_PYDATETIME_cast): same here.
(typecast_PYTIME_cast): same here.
* tests/test_dates.py
(CommonDatetimeTestsMixin.test_parse_incomplete_date): test that
parsing incomplete date values results in DataError.
(CommonDatetimeTestsMixin.test_parse_incomplete_time): same for
times.
(CommonDatetimeTestsMixin.test_parse_incomplete_datetime): same for
datetimes.
the Connection and Cursor "closed" attributes.
* psycopg/cursor_type.c (psyco_curs_get_closed): add a "closed"
attribute to cursors. It will be True if either the cursor or its
associated connection is closed. This fixes bug #164.
over some tests for serialisation and deadlock errors,
demonstrating that TransactionRollbackError is generated.
(QueryCancelationTests): add a test to show that
QueryCanceledError is raised on statement timeouts.
* psycopg2da/adapter.py (_handle_psycopg_exception): rather than
checking exception messages, check for TransactionRollbackError.
* psycopg/pqpath.c (exception_from_sqlstate): return
TransactionRollbackError for 40xxx errors, and QueryCanceledError
for 57014 errors.
(pq_raise): If we are using an old server, use
TransactionRollbackError if the error message contains "could not
serialize" or "deadlock detected".
* psycopg/psycopgmodule.c (_psyco_connect_fill_exc): remove
function, since we no longer need to store pointers to the
exceptions in the connection. This also fixes a reference leak.
(psyco_connect): remove _psyco_connect_fill_exc() function call.
* psycopg/connection.h (connectionObject): remove exception
members from struct.
* psycopg/connection_type.c (connectionObject_getsets): modify the
exception attributes on the connection object from members to
getsets. This reduces the size of the struct.
* lib/extensions.py: import the two new extensions.
* psycopg/psycopgmodule.c (exctable): add new QueryCanceledError
and TransactionRollbackError exceptions.
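A Python sketch of the SQLSTATE dispatch described in the pqpath.c
entry above (the real function is C; the default branch is simplified):

    from psycopg2 import OperationalError
    from psycopg2.extensions import (
        TransactionRollbackError, QueryCanceledError)

    def exception_from_sqlstate(sqlstate):
        if sqlstate.startswith('40'):    # serialization failure, deadlock
            return TransactionRollbackError
        if sqlstate == '57014':          # query_canceled
            return QueryCanceledError
        return OperationalError          # simplified default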
* tests/test_dates.py: add tests for date/time typecasting and
adaptation.
* psycopg/adapter_mxdatetime.c (mxdatetime_str): add support for
outputting BC dates (which involves switching them to one-based
dates). Also remove broken handling of microseconds.
* psycopg/typecast.c (typecast_parse_date): if the string ends
with "BC" adjust the year value to be a zero-based BC value as
used by mx.DateTime (datetime doesn't support BC dates).
(typecast_parse_time): ignore ' ', 'B' and 'C' in time strings
rather than treating them as part of the seconds part of the time.
(TransactionTestCase.test_failed_commit): Expect IntegrityError
instead of OperationalError.
* psycopg/pqpath.c (exception_from_sqlstate): new function that
converts an SQLSTATE error code to the corresponding exception
class.
(pq_raise): use exception_from_sqlstate() to pick which exception
to use when working with protocol version 3.
(pq_complete_error): Let pq_raise() pick an appropriate exception
rather than forcing OperationalError.
patch from ticket #209 to check return value from
PyObject_AsCharBuffer(). This fixes the segfault.
(binary_quote): switch from PyObject_AsCharBuffer() to
PyObject_AsReadBuffer() to support buffer objects that don't
implement the bf_getcharbuf protocol.
* tests/types_basic.py (TypesBasicTests.testBinary): Test round
tripping of bytea buffers. Currently segfaults.
requiring it.
Added a connection flag to store whether E''-style quoting is required: this
avoids repeated PQparameterStatus() calls.
Added a test case to verify correct behavior on strings, unicode and binary
data. Tested with PG versions from 7.4 to 8.3b2, with any server
'standard_conforming_strings' setting and with 'PSYCOPG_OWN_QUOTING' too.