Allows removing many duplicate imports and better follows the PEP 8
guidelines:
https://www.python.org/dev/peps/pep-0008/#imports
> Imports are always put at the top of the file, just after any module
> comments and docstrings, and before module globals and constants.
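For illustration, the pattern being applied (the test function is hypothetical):

    # Before: the same import repeated inside individual test methods.
    def test_connect(self):
        import psycopg2.extensions as ext
        assert ext.ISOLATION_LEVEL_AUTOCOMMIT is not None

    # After: one module-level import at the top of the file, just after
    # the module docstring/comments and before module globals.
    import psycopg2.extensions as ext

    def test_connect(self):
        assert ext.ISOLATION_LEVEL_AUTOCOMMIT is not None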
For library end users, there is no need to install tests alongside the
package itself. This keeps the tests available for development without
adding extra packages to the user's site-packages directory, reduces the
size of the installed package, and avoids accidental execution of test
code from an installed package.
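One way to achieve this, sketched with setuptools (not necessarily the exact
change made here):

    from setuptools import setup, find_packages

    setup(
        name='psycopg2',
        # Ship the package but leave the test suite out of the install.
        packages=find_packages(exclude=['tests', 'tests.*']),
    )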
The tests relied on Python 2 implicit relative import semantics.
Python 3 changed import semantics to always search sys.path by default;
to import relative to the current package, the import must have a
leading dot.
Forward compatible with newer Pythons.
Works towards the goal of moving tests outside of the installed package.
For more information, see PEP-328:
https://www.python.org/dev/peps/pep-0328/
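For example, from within the tests package:

    # Python 2 only: implicit relative import of a sibling module.
    import testutils

    # Python 2.6+ and Python 3: explicit relative import (PEP 328).
    from . import testutils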
There is no need to import testutils.unittest instead of simply
unittest: they are simple aliases. Using the standard library unittest
directly is more regular, more consistent, and more idiomatic with the
wider Python community.
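That is, roughly:

    # Before
    from testutils import unittest

    # After: use the standard library module directly.
    import unittest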
The decimal module is available on all Python versions supported by
psycopg2. It has been available since Python 2.4. No need to catch an
ImportError.
https://docs.python.org/2/library/decimal.html
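The kind of guard being removed looks roughly like this:

    # Before: guarded import for Pythons older than 2.4.
    try:
        from decimal import Decimal
    except ImportError:
        Decimal = None

    # After: decimal is always available.
    from decimal import Decimal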
namedtuple is available on all Python versions supported by psycopg2. It
was first introduced in Python 2.6. All workarounds and special
documentation can now be removed.
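With namedtuple guaranteed to exist, a description row type can be defined
unconditionally; a simplified, hypothetical sketch:

    from collections import namedtuple

    # The seven fields of a DB-API cursor.description entry.
    Column = namedtuple('Column', [
        'name', 'type_code', 'display_size', 'internal_size',
        'precision', 'scale', 'null_ok'])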
From the DB-API (https://www.python.org/dev/peps/pep-0249/):
> OperationalError
>     Exception raised for errors that are related to the database's
>     operation and not necessarily under the control of the programmer,
>     e.g. an unexpected disconnect occurs, [...]
Additionally, psycopg2 was inconsistent, at least in the async case:
depending on how the "connection closed" error was reported from the
kernel to libpq, it would sometimes raise OperationalError and
sometimes DatabaseError. Now it always raises OperationalError.
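Client code handling dropped connections can therefore rely on a single
exception type; a minimal sketch:

    import psycopg2

    def run_query(conn, query):
        try:
            cur = conn.cursor()
            cur.execute(query)
            return cur.fetchall()
        except psycopg2.OperationalError:
            # Connection-level failure (e.g. the backend went away):
            # reconnect or propagate, as appropriate for the caller.
            raise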
There's a race condition that only seems to happen over Unix-domain
sockets. Sometimes, the closed socket is reported by the kernel to
libpq like this (captured with strace):
sendto(3, "Q\0\0\0\34select pg_backend_pid()\0", 29, MSG_NOSIGNAL, NULL, 0) = 29
recvfrom(3, "E\0\0\0mSFATAL\0C57P01\0Mterminating "..., 16384, 0, NULL, NULL) = 110
recvfrom(3, 0x12d0330, 16384, 0, 0, 0) = -1 ECONNRESET (Connection reset by peer)
That is, psycopg2/libpq sees no error when sending the first query
after the connection is closed, but gets an error reading the result.
In that case, everything worked fine.
But sometimes, the error manifests like this:
sendto(3, "Q\0\0\0\34select pg_backend_pid()\0", 29, MSG_NOSIGNAL, NULL, 0) = -1 EPIPE (Broken pipe)
recvfrom(3, "E\0\0\0mSFATAL\0C57P01\0Mterminating "..., 16384, 0, NULL, NULL) = 110
recvfrom(3, "", 16274, 0, NULL, NULL) = 0
recvfrom(3, "", 16274, 0, NULL, NULL) = 0
i.e. libpq received an error when sending the query. This manifests as
a slightly different exception from a slightly different place. More
importantly, in this case connection.closed is left at 0 rather than
being set to 2, and that is the bug I'm fixing here.
Note that we see almost identical behaviour for sync and async
connections, and the fixes are the same. So I added extremely similar
test cases.
Finally, there is still a bug here: for async connections, we
sometimes raise DatabaseError (incorrect) and sometimes raise
OperationalError (correct). Will fix that next.
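A rough sketch of the sync-connection scenario the new tests exercise
(assuming a role allowed to call pg_terminate_backend; connection parameters
come from the environment):

    import psycopg2

    conn = psycopg2.connect("")       # connection under test
    control = psycopg2.connect("")    # used to kill the other backend

    cur = conn.cursor()
    cur.execute("select pg_backend_pid()")
    pid = cur.fetchone()[0]

    ctl = control.cursor()
    ctl.execute("select pg_terminate_backend(%s)", (pid,))

    try:
        cur.execute("select pg_backend_pid()")
    except psycopg2.OperationalError:
        pass

    # Whether libpq saw EPIPE on send or ECONNRESET on receive, the
    # connection must end up flagged as broken.
    assert conn.closed == 2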
It's not so much that the tests are slow: some just get stuck and time
out the Travis build.
Skipped all tests taking more than about 0.2s to run on my laptop.
A fast test run now takes about 8s instead of 24.
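The skip is driven by a decorator along these lines (names are illustrative):

    import os
    import unittest

    def slow(f):
        """Skip the decorated test when a fast test run is requested."""
        return unittest.skipIf(
            os.environ.get('PSYCOPG2_TEST_FAST'), "slow test")(f)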
This is for people using dtuple.py; a dtuple.DatabaseTuple instance
keeps a reference to cursor.description, which is not picklable because
psycopg2 doesn't export the Column namedtuple it uses.
This commit exports the Column namedtuple and includes a test verifying
that pickling and unpickling cursor.description work once Column is
exported.
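The round trip being tested, roughly (assuming Column is exported as
psycopg2.extensions.Column):

    import pickle
    import psycopg2
    import psycopg2.extensions

    conn = psycopg2.connect("")
    cur = conn.cursor()
    cur.execute("select 1 as foo")

    d = pickle.loads(pickle.dumps(cur.description))
    assert d[0].name == 'foo'
    assert isinstance(d[0], psycopg2.extensions.Column)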
Makes invocation from subclasses and generic code easier.
Code simplified by using default values for keyword arguments and by
avoiding needless conversions back and forth between Python and C
strings. Also added a connection type check to the cursor's __init__.
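A sketch of the kind of subclass invocation this simplifies (the subclass
itself is hypothetical):

    import psycopg2
    import psycopg2.extensions

    class PrefixCursor(psycopg2.extensions.cursor):
        def __init__(self, conn, name=None):
            # Keyword defaults make chaining up from subclasses and
            # generic factories straightforward.
            super(PrefixCursor, self).__init__(conn, name)

    conn = psycopg2.connect("")
    cur = conn.cursor(cursor_factory=PrefixCursor)

    # The added type check makes passing a non-connection fail cleanly:
    #   psycopg2.extensions.cursor("not a connection")  ->  TypeError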
Actually *it doesn't*: once we iterate the first itersize records,
rowcount is reset to zero. If we want to fix it, we need an extra member
in the cursor.
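The behaviour in question, sketched with a named (server-side) cursor:

    import psycopg2

    conn = psycopg2.connect("")
    cur = conn.cursor("mycursor")     # named cursor, fetched in batches
    cur.itersize = 100
    cur.execute("select generate_series(1, 1000)")

    for row in cur:
        pass

    # Once the first itersize records have been iterated, the internal
    # refetch resets rowcount, so it no longer reports the full total.
    print(cur.rowcount)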