Unfortunately PQcancel blocks, so it's no better than PQgetResult.
It has been suggested to use PQreset in a non-blocking way, but this
would put on the Python program the burden of handling, in an unexpected
place, a connection that is established but not yet configured.
TypeError is the standard Python error raised in this case:
$ python -c "(lambda a: None)(b=10)"
TypeError: <lambda>() got an unexpected keyword argument 'b'
We only used to raise InterfaceError when connect was used without any
parameter at all, so it's hard to imagine a program depending on that
behaviour. Furthermore the function has always raised (and still does)
OperationalError too, if the bad argument is detected by the libpq, and
that cannot be changed because we can't tell the difference from a
normal connection error.
We don't need to look implicitly into pg_catalog, as all the builtin
ranges are already registered. So just search in 'public' if the schema
is not specified.
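
For illustration, a usage sketch (assuming an open connection conn and a
range type created as floatrange; register_range is in psycopg2.extras):

from psycopg2.extras import register_range

register_range('floatrange', 'FloatRange', conn)            # looked up in 'public'
register_range('myschema.floatrange', 'FloatRange', conn)   # qualified names still work
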
I was avoiding Numeric to avoid conflicting with the 'numeric' Postgres
type, which is an alias for 'decimal'. But now that there is a single
numeric range I can use the preferred name.
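
For illustration, a sketch of the renamed class (NumericRange lives in
psycopg2.extras and maps Postgres's int4range, int8range and numrange):

from psycopg2.extras import NumericRange

r = NumericRange(0, 10)                  # bounds default to '[)'
assert 0 in r and 10 not in r            # Range objects support 'in'
assert r.lower_inc and not r.upper_inc
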
In Python 3.3, items are returned as ints instead of chars.
I'm not sure the way I did it is correct: worth asking some
hardcore Python dev.
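
For reference, the behaviour change in question (memoryview indexing):

m = memoryview(b'abc')
# Python 3.2 and earlier: m[0] == b'a'  (a 1-byte object)
# Python 3.3 and later:   m[0] == 97    (an int)
assert m[0] == 97
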
Fixed tests after the stricter memoryview comparison rules in Py 3.3.
Pass a dumps function instead. Allow customizing it by either argument
passing or subclassing.
The basic Json class now raises ImportError on getquoted() if the json
module is not available, thus allowing the use of a customized Json
subclass even when the json module is missing.
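
A usage sketch of the two hooks (the cursor cur and table t are assumed
here):

import json
from psycopg2.extras import Json

# customize by argument passing...
value = Json({'a': 1}, dumps=lambda obj: json.dumps(obj, sort_keys=True))
cur.execute("insert into t (doc) values (%s)", [value])

# ...or by subclassing
class SortedJson(Json):
    def dumps(self, obj):
        return json.dumps(obj, sort_keys=True)
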
Makes invocation from subclasses and generic code easier.
Code simplified by using default values for keyword arguments and by
avoiding needless conversions back and forth between Python and C
strings. Also added a connection type check to the cursor's init.
Failing to do so was causing the issue reported in ticket #103. The issue
as reported was fixed when SET ISOLATION LEVEL was dropped, but the real
problem wasn't fixed.
The correction is similar to the other one for the other subclasses.
Also added tests for rowcount and rownumber during different fetch styles.
Just in case.
Regression introduced by the fix for ticket #80. Don't use fetchmany to
get the chunks of values. I did it that way because I was ending up in
infinite recursion calling __iter__ from __iter__: the solution has been
the "while 1: yield next()" idiom.
Actually *it doesn't*: once we iterate the first itersize records, rowcount
is reset to zero. If we want to fix it we need an extra member in the
cursor.
Avoid creating a new FixedOffsetTimezone instance if one with the same
offset and name has been created before. This will save memory when
returning many rows containing "timestamp with time zone" columns, and
also improves comparability.
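
A simplified sketch of the caching approach, interning instances on
(offset, name) in __new__ (the real class is psycopg2.tz.FixedOffsetTimezone):

import datetime

class FixedOffsetTimezone(datetime.tzinfo):
    _cache = {}

    def __new__(cls, offset=None, name=None):
        key = (offset, name)
        try:
            return cls._cache[key]      # reuse the interned instance
        except KeyError:
            tz = super(FixedOffsetTimezone, cls).__new__(cls)
            cls._cache[key] = tz
            return tz

    def __init__(self, offset=None, name=None):
        self._offset = datetime.timedelta(minutes=offset or 0)
        self._name = name

    def utcoffset(self, dt):
        return self._offset

assert FixedOffsetTimezone(60, "CET") is FixedOffsetTimezone(60, "CET")
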
The offset displayed was always positive and somewhat confusing. The
offset displayed now is the offset that the instance was created
with.
Also added some tests for initialisation.
This basically removes the READ UNCOMMITTED level (which PostgreSQL
internally maps to READ COMMITTED anyway) to keep the numeric values
compatible with old psycopg versions. For full details and discussion
see this thread:
http://archives.postgresql.org/psycopg/2011-12/msg00008.php
In fact it doesn't change "the transaction", as there must be no
transaction in progress when it is invoked. The effect instead is to
execute SET SESSION CHARACTERISTICS.
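
In API terms (conn is an assumed open connection; the SQL in the comment
is what the call amounts to):

from psycopg2 import extensions

conn.set_isolation_level(extensions.ISOLATION_LEVEL_SERIALIZABLE)
# roughly equivalent to executing, outside any transaction:
#   SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL SERIALIZABLE;
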
The encoding can be set by PGCLIENTENCODING, which may use an alternative
spelling. Bug reported by Peter Eisentraut.
At this point the idea of considering one of the random spellings such as
EUC_CN as somewhat "blessed" is debunked. So just store the cleaned-up
version of the encoding in the mapping table. Note that the cleaned-up
version used to be needed by the unicode adapter: this requirement has
been superseded, as the connection now contains a copy of the Python
codec name, set whenever the client encoding is set.
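
A sketch of the kind of clean-up meant here (hypothetical helper name;
the real normalization happens in the C extension):

import re

def normalize_encoding(enc):
    # strip separators and uppercase, so 'euc-cn', 'EUC_CN' and 'EucCn'
    # all end up as the same key in the mapping table
    return re.sub(r'[^A-Za-z0-9]', '', enc).upper()

assert normalize_encoding('euc-cn') == normalize_encoding('EUC_CN') == 'EUCCN'
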
PG 9.0 uses the hex format by default, and clients < 9.0 can't parse
that format, requiring client updates and great care about what is
linked at runtime, and generally giving headaches to users and,
transitively, to us.
Looks like there is a case for installing hstore somewhere other than
public (see ticket #45). And after all, the typecaster can be registered
on a list of OIDs, so let's grab them all.
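
Usage is unchanged (sketch, conn assumed open): register_hstore queries
the catalog for the OIDs and registers the typecaster on all of them.

import psycopg2.extras

psycopg2.extras.register_hstore(conn)                 # this connection only
psycopg2.extras.register_hstore(conn, globally=True)  # or for every connection
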
An empty array can be returned untyped by Postgres. To handle this
case, a special handler is added for the type UNKNOWNOID. If the value
returned by the database is strictly equal to "{}", the value is
converted to an empty list. Otherwise, the conversion falls back on the
default handler.
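
A pure-Python sketch of the behaviour (hypothetical names; the actual
handler is registered in the C extension; 705 is UNKNOWNOID):

import psycopg2.extensions as ext

def cast_unknown(value, cur):
    if value is None:          # NULL stays None
        return None
    if value == '{}':          # only the strictly-empty array literal
        return []
    return value               # anything else: fall back to the raw string

UNKNOWN = ext.new_type((705,), 'UNKNOWN', cast_unknown)
ext.register_type(UNKNOWN)
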
- Raise an exception on incomplete placeholders.
- Minor speedups.
- Don't change the string in place (??!!) if the placeholder is not 's'
  and the value is null.
The latter can be done because downstream we don't accept anything other
than 's' anyway (in the Bytes_Format function).
Notice that the format string is now constant whatever the arguments.
This means that executemany is still more inefficient than it should be,
as mogrify could work on the parameters alone. However such an
implementation is only worthwhile if we start supporting real
parameters. Let's talk about that for the next release.
The value is used to control the number of records to fetch per network
roundtrip in named cursor iteration. It avoids the inefficient arraysize
default of 1 without giving that attribute the magic meaning of 2000.
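
Usage sketch (conn and process() are assumed here):

cur = conn.cursor('my_cursor')   # a named, i.e. server-side, cursor
cur.itersize = 5000              # records per network roundtrip (default 2000)
cur.execute("select * from big_table")
for row in cur:
    process(row)
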
The feature in itself is not extremely useful, and PostgreSQL is not
always able to cast away from text[], which makes this a regression
(see ticket #42).
The test_concurrent_execution test, which has two threads each issue a
pg_sleep of 2 seconds and checks that they complete in under 3 seconds,
occasionally fails when run in a virtual machine on a VM server with
other virtual machines running. Increased the sleep to 4 and the check
to 7, giving a 3-second buffer instead of 1.
Windows is not able to create a tempfile with NamedTemporaryFile and then
open it with a second file handle without closing the first one. Added
code to close the handle, and to keep the file around a little longer so
it can be reopened and rewritten again.
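
A sketch of the workaround (assuming the test reopens the file by name):

import os
import tempfile

f = tempfile.NamedTemporaryFile(delete=False)   # keep the file after close
f.write(b'some data')
f.close()                        # Windows: close before reopening by name
with open(f.name, 'r+b') as g:   # a second handle now works
    g.write(b'rewritten')
os.unlink(f.name)                # clean up manually since delete=False
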
Using self.conn.dsn as the connection string actually has the password
'x'ed out: the initial connection replaces the password with 'x' to
obfuscate it. Using tests.dsn instead of self.conn.dsn ensures that the
correct connection string is used.