libpq's PQconsumeInput() returns 0 only in case of an error, but we
need to know whether it actually managed to read something. Work
around this by setting an internal flag before retrying.
This change exposes lower-level functions for operating the
(logical) replication protocol, while keeping the high-level
start_replication function that does all the work for you in
the case of a synchronous connection.
A number of other changes and fixes are included in this commit.
Move libpq-specific code for streaming replication support into a
separate file. Also provide gettimeofday() on Win32, with the
implementation copied from Postgres core.
Introduce the ReplicationConnection and ReplicationCursor classes, which
encapsulate the initiation of a special type of PostgreSQL connection and
the handling of the replication commands only available in this special
connection mode.
The handling of the stream of replication data from the server is modelled
largely after the existing support for the "COPY table TO file" command and
the pg_recvlogical tool supplied with PostgreSQL (though it can also be
used for physical replication).
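A minimal sketch of the high-level synchronous path, assuming the classes
end up exposed in psycopg2.extras as LogicalReplicationConnection and
ReplicationCursor and that a logical slot with the test_decoding plugin is
used (names and arguments are illustrative, not a reference):

    import psycopg2
    from psycopg2.extras import LogicalReplicationConnection

    conn = psycopg2.connect("dbname=test",
                            connection_factory=LogicalReplicationConnection)
    cur = conn.cursor()
    cur.create_replication_slot('test_slot', output_plugin='test_decoding')
    cur.start_replication(slot_name='test_slot')

    def consume(msg):
        print(msg.payload)
        msg.cursor.send_feedback(flush_lsn=msg.data_start)

    cur.consume_stream(consume)   # blocks, calling consume() for each message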
Calls PQconninfoParse to parse the dsn into a list of keyword and value
structs, then constructs a dictionary from that. Can be useful when one
needs to alter some part of the connection string reliably, but
doesn't want to get into all the details of parsing a dsn string:
quoting, URL format, etc.
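For illustration, assuming the wrapper ends up exposed as
psycopg2.extensions.parse_dsn, usage would look something like this
(the key order of the returned dict is not significant):

    >>> from psycopg2.extensions import parse_dsn
    >>> parse_dsn("dbname=test user=postgres password=secret")
    {'password': 'secret', 'user': 'postgres', 'dbname': 'test'}
    >>> parse_dsn("postgresql://someone@example.com/somedb?connect_timeout=10")
    {'host': 'example.com', 'user': 'someone', 'dbname': 'somedb', 'connect_timeout': '10'}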
Multithreaded programs using libcrypto (part of OpenSSL) need to set up
callbacks to ensure safe execution. Both Python and libpq set up those
callbacks, which might lead to a conflict.
To avoid leaving dangling function pointers when being unloaded, libpq sets up
and removes the callbacks every time an SSL connection is opened and closed. If
another Python thread is performing unrelated SSL operations (like connecting
to an HTTPS server), this might lead to deadlocks, as described in
http://www.postgresql.org/message-id/871tlzrlkq.fsf@wulczer.org
Even if the problem is eventually remedied in libpq, it's still useful to
have it fixed in psycopg2. The solution is to use Python's own libcrypto
callbacks and completely disable handling them in libpq.
This is for people using dtuple.py; a dtuple.DatabaseTuple instance
keeps a reference to cursor.description, which is not picklable because
psycopg2 doesn't export the Column namedtuple it uses.
This commit exports the Column namedtuple and includes a test to verify that
pickling and unpickling work after exporting Column.
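A short illustration, assuming the namedtuple is exported as
psycopg2.extensions.Column:

    import pickle
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()
    cur.execute("select 1 as x")
    # description is a sequence of Column namedtuples; now that the class is
    # importable, the round trip no longer fails with a PicklingError
    data = pickle.dumps(cur.description)
    assert pickle.loads(data) == cur.description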
If psycopg supports lo64 but the server doesn't, the user may pass values
that would overflow the API range, resulting in:
lo.seek((2<<30))
*** OperationalError: ERROR: invalid seek offset: -2147483648
Also improved the error messages and added a guard against INT_MIN for
negative seek offsets.
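For context, a hedged sketch of the large object call involved (mode and
offset are illustrative):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    lo = conn.lobject(0, 'w')   # create and open a new large object
    lo.seek(2 << 30)            # fine when both client and server speak lo64;
                                # otherwise psycopg now raises a clear error
                                # instead of sending a wrapped-around offset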
`close()` is implicitly called by `__exit__()`, so an exit on error
would run a query on an inerr connection, causing another exception
that hides the original one. The fix is in `close()`, not in `__exit__()`,
because the semantics of the latter are simply to call the former.
Closes #262.
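A hedged sketch of the failure mode (cursor name and query are invented):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    with conn.cursor("mycur") as cur:   # named (server-side) cursor
        cur.execute("select 1 / 0")
        cur.fetchone()                  # the error leaves the connection inerr
    # __exit__() calls close(); before the fix close() would still issue
    # "CLOSE mycur" on the broken connection, and the resulting exception
    # hid the original division-by-zero error.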
Deallocating closed large objects failed to decrement the connection
refcount; the fact that the lobject is closed doesn't matter for the refcount.
Issue detected by the always useful scripts/refcounter.py.
With an extra bit of unrequested whitespace love.
This makes it possible to import _psycopg directly, after adding the
package directory to the pythonpath. This enables hacks such as:
sys.path.insert(0, '/path/to/psycopg2')
import _psycopg
sys.modules['psycopg2._psycopg'] = _psycopg
sys.path.pop(0)
which can work around e.g. the problem of #201, namely that freeze cannot
freeze psycopg2. Well, freeze cannot freeze it because it's just not
designed to deal with C extensions. At least now the frozen application
can hack the pythonpath and work around the limitation by importing
_psycopg as above and then doing the rest of the imports normally.
Keeping long-lived references to python objects is bad anyway: the
tz module couldn't be reloaded before.
Introduced in 2.0 beta 8, 2006 A.D. Went absolutely untouched in 8 years
of refactoring, when Python 2.5 and PostgreSQL 8.1 roamed the earth.
I would say it has stood the test of time.
Building without extensions has long been broken, and nobody really cares
about a pure-DBAPI implementation (which could be created using a wrapper
instead).
Also, don't start an implicit transaction when fetching from a
named WITH HOLD cursor, since it already returns results
from a previously committed transaction.
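A brief sketch of the case in question (table and cursor names invented):

    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor("mycur", withhold=True)   # named WITH HOLD cursor
    cur.execute("select * from big_table")
    conn.commit()                               # the cursor survives the commit
    for row in cur:                             # fetching here no longer starts
        print(row)                              # a new implicit transaction
    cur.close()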
The default repr is enough: it prints <TypeName at 0xADDR> instead of
<TypeName object at 0xADDR>.
The only people being hurt by this change are the ones using doctests:
they deserve it.
This happens for Unix socket connections, not for TCP ones, where a result
containing an error is returned and correctly handled by pq_raise().
Closes ticket #196 but not #192: poll() still doesn't mark the
connection as closed.
The moment it is called shouldn't really have changed, but it is now more
explicit when it happens. Previously it was sort of obfuscated behind a
round trip through the green callback and poll.
Dropped the encoding parameter from the constructor: it is used
nowhere and is not documented. Use the connection encoding directly
if available, otherwise fall back to latin1 as before.
tp_clear should only be used to break reference cycles. Here tp_clear was
causing a segfault because it was called twice (by the gc and by _dealloc), so
self->codec was freed twice.
Amazingly, the double free only caused a segfault on Python 3.3 (released in
late 2012) talking to Postgres 8.1 (released in 2005) in async mode... no
other combination crashed. Thank you buildbot.
Unfortunately PQcancel blocks, so it's no better than PQgetResult.
It has been suggested to use PQreset in a non-blocking way, but this would
burden the Python program with handling, in an unexpected place, a connection
that has been reset but not yet configured.
If the network is down, a blocking read will hang the process hard
(ctrl-c not working). Send a cancel request instead (as suggested in
http://archives.postgresql.org/pgsql-hackers/2012-07/msg00903.php) and go
back into green polling: this should allow a further error (e.g. another
ctrl-c) to break the loop. In this case we cannot assume anything about
the state of the connection, so we close it.
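For reference, a minimal sketch of the kind of green wait callback this
polling goes through (roughly what psycopg2.extras.wait_select does; not the
code touched by this change):

    import select
    import psycopg2.extensions

    def wait_select(conn):
        # Poll the connection until libpq stops asking for I/O; a ctrl-c
        # here raises KeyboardInterrupt and breaks out of the loop.
        while True:
            state = conn.poll()
            if state == psycopg2.extensions.POLL_OK:
                break
            elif state == psycopg2.extensions.POLL_READ:
                select.select([conn.fileno()], [], [])
            elif state == psycopg2.extensions.POLL_WRITE:
                select.select([], [conn.fileno()], [])

    psycopg2.extensions.set_wait_callback(wait_select)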
In Python 3.3 items are returned as int instead of chars.
I'm not sure the way I did it is correct: worth asking some
hardcore Python dev.
Fixed tests after the stricter memview comparison rules in Py 3.3.
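Assuming the items in question are memoryview elements, the Python 3.3
behaviour change looks like this:

    >>> m = memoryview(b"abc")
    >>> m[0]              # Python 3.2 and earlier returned b'a'
    97
    >>> m[0:1].tobytes()  # slicing still gives back bytes
    b'a'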
Makes invocation from subclasses and generic code easier.
Code simplified by using default values for keyword arguments
and by avoiding needless conversions back and forth between Python and C
strings. Also added a connection type check to the cursor's init.
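As a hedged illustration of the kind of subclass invocation this makes
easier (class and argument names are invented):

    import psycopg2
    import psycopg2.extensions

    class LoggingCursor(psycopg2.extensions.cursor):
        def __init__(self, conn, name=None):
            # keyword arguments now have defaults, so a subclass or a
            # generic factory can forward only what it actually has
            super(LoggingCursor, self).__init__(conn, name)

        def execute(self, query, vars=None):
            print(query)
            return super(LoggingCursor, self).execute(query, vars)

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor(cursor_factory=LoggingCursor)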