Previously, CRDB's pg server version was reported as 9.5, which meant
this test wasn't run, as it is skipped for server versions earlier than
9.6. Now that CRDB reports server version 13, this check no longer
applies.
This patch explicitly skips test_9_6_diagnostics for CRDB. The reason
is the same as for test_9_3_diagnostics, which is currently skipped for
CRDB.
Features that don't seem to be supported:
- isolation level (always serializable)
- client encodings
- notices (maybe there is a way to generate them)
- 2 phase commit
- reset (because of the lack of transaction deferrable)
- backend pid
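For illustration, a hedged sketch of what such an explicit skip can look
like; the `is_crdb()` helper, the DSN and the test class below are
assumptions made for this example, not the test suite's actual utilities.

```python
import unittest

import psycopg2


def is_crdb(conn):
    # CockroachDB identifies itself in the output of version().
    with conn.cursor() as curs:
        curs.execute("SELECT version()")
        return "CockroachDB" in curs.fetchone()[0]


class DiagnosticsTests(unittest.TestCase):
    def setUp(self):
        self.conn = psycopg2.connect("dbname=test")  # assumed test DSN

    def tearDown(self):
        self.conn.close()

    def test_9_6_diagnostics(self):
        if is_crdb(self.conn):
            self.skipTest("diagnostics not fully supported by CockroachDB")
        # ... the 9.6-specific assertions would follow here
```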
If the CREATE TABLE statement failed, the setup would fail without
committing or rolling back the active transaction, so the transaction
would hold onto its resources indefinitely.
Normally, the transaction would be closed when the connection is closed
in the `tearDown` function. However, `tearDown` is not called if there
was an error during `setUp` ([as specified by the `unittest` docs](https://docs.python.org/3/library/unittest.html#unittest.TestCase.tearDown)), so
we need to handle this case specially.
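A minimal sketch of that special handling, using a hypothetical table
name and DSN: the rollback and close happen inside `setUp` itself,
precisely because `tearDown` will not run after a `setUp` failure.

```python
import unittest

import psycopg2


class CopyTests(unittest.TestCase):
    def setUp(self):
        self.conn = psycopg2.connect("dbname=test")  # assumed test DSN
        curs = self.conn.cursor()
        try:
            curs.execute("CREATE TABLE tcopy (id serial PRIMARY KEY, data text)")
        except psycopg2.Error:
            # tearDown won't be called after a setUp failure, so release the
            # transaction's resources here before re-raising.
            self.conn.rollback()
            self.conn.close()
            raise
        self.conn.commit()

    def tearDown(self):
        self.conn.close()
```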
Previously, this test had a bug: if the CREATE TABLE statement failed,
the setup would fail without committing or rolling back the active
transaction.
This commit makes psycopg2 responsible for sending the status update
(feedback) messages to the server regardless of whether a synchronous or
asynchronous connection is used.
Feedback is sent every *status_update* seconds (10 by default), which
can be configured by passing the corresponding parameter to the
`start_replication()` or `start_replication_expert()` methods.
The actual feedback message is sent by `pq_read_replication_message()`
when the *status_update* timeout is reached.
The default behavior of the `send_feedback()` method has changed: it no
longer sends a feedback message on every call but only updates internal
structures. It is still possible to *force* sending a message by setting
the *force* or *reply* parameters.
The new approach has certain advantages:
1. The client can simply call `send_feedback()` for every processed
message and the library will take care of not overwhelming the server
(see the sketch after this list). In fact, in synchronous mode it is
even mandatory to confirm every processed message.
2. The library internally tracks the position of the last received
non-keepalive message. If the client has confirmed the last message and
the server then sends only keepalives with an increasing *wal_end*, the
library can safely advance the *flush* position to *wal_end* and later
report it to the server automatically.
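For example, a minimal consumer loop under the behavior described above;
the DSN, slot name and `process()` callback are assumptions, and the
interval keyword is shown with the name used by the released psycopg2 API
(`status_interval`) rather than *status_update*.

```python
import psycopg2
import psycopg2.extras


def process(payload):
    # Placeholder for application-side handling of the decoded payload.
    print(payload)


conn = psycopg2.connect(
    "dbname=test",  # assumed DSN
    connection_factory=psycopg2.extras.LogicalReplicationConnection,
)
cur = conn.cursor()
# Assumes a pre-created logical replication slot named "test_slot".
cur.start_replication(slot_name="test_slot", decode=True, status_interval=10)


def consume(msg):
    process(msg.payload)
    # Confirm every message; the library only sends an actual feedback
    # message once the interval has elapsed (or when forced).
    msg.cursor.send_feedback(flush_lsn=msg.data_start)


cur.consume_stream(consume)
```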
Reporting the *wal_end* received from keepalive messages is very
important. Not doing so causes:
1. Excessive disk usage, because the replication slot prevents the WAL
from being cleaned up.
2. A smart or fast shutdown of the server could hang indefinitely,
because the walsender waits until the client reports a *flush* position
equal to *wal_end*.
This implementation only extends the existing API and therefore should
not break any existing code.
Now its state is unmodified, so, apart from special-casing creation and
initial population, it can work unmodified, and all the desired
properties just work (modifiability, picklability...).
Close #886.
I don't know why it returns 0 instead of the right value. At least it
doesn't segfault, so don't skip the test altogether.
The test is unrelated to this branch: will cherry-pick elsewhere (if I
remember it...)
It won't work on Windows if it's in the script, failing with errors
such as:
AttributeError: 'module' object has no attribute 'process'
or:
Can't get attribute 'process' on <module '__main__' (built-in)>
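A hedged illustration of the underlying Windows constraint (file and
function names are made up, not the actual test code): with the "spawn"
start method the child process re-imports the code and has to find the
callable by name, so it must live in an importable module rather than
only in the script body.

```python
# worker.py -- the callable lives in an importable module.
def process(item):
    return item * 2


# script.py -- on Windows, multiprocessing uses the "spawn" start method, so
# the child process re-imports the code and must find 'process' by name;
# defining it only inside the script would lead to errors like those above.
if __name__ == "__main__":
    from multiprocessing import Pool

    from worker import process

    with Pool(2) as pool:
        print(pool.map(process, [1, 2, 3]))
```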
The new function keeps PQconsumeInput() and PQisBusy() together, in
order to handle the condition in which not all the results of a sequence
of statements arrive in the same roundtrip.
Added a PGresult pointer to the connection to keep state across async
communication: it can probably be used to simplify other code paths
where a result is brought forward manually.
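The change itself is at the C level, but its effect is visible through
the standard Python-side wait loop from the psycopg2 documentation,
sketched below with an assumed DSN: each `poll()` call drives the
consume-input/busy-check cycle until all results of the statement
sequence have arrived.

```python
import select

import psycopg2
from psycopg2 import extensions


def wait(conn):
    # Standard wait loop for an asynchronous connection: poll() drives the
    # consume-input/busy-check cycle until the connection is ready again.
    while True:
        state = conn.poll()
        if state == extensions.POLL_OK:
            return
        elif state == extensions.POLL_READ:
            select.select([conn.fileno()], [], [])
        elif state == extensions.POLL_WRITE:
            select.select([], [conn.fileno()], [])
        else:
            raise psycopg2.OperationalError("bad poll() state: %r" % state)


conn = psycopg2.connect("dbname=test", async_=1)  # assumed DSN
wait(conn)
curs = conn.cursor()
curs.execute("SELECT 1; SELECT 2")  # results may not arrive in one roundtrip
wait(conn)
print(curs.fetchall())
```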
Close #802, #855, #856.
This allows removing many duplicate imports and better follows the
PEP 8 guidelines:
https://www.python.org/dev/peps/pep-0008/#imports
> Imports are always put at the top of the file, just after any module
> comments and docstrings, and before module globals and constants.
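A small before/after illustration (the module and function are arbitrary
examples, not code from this change) of what this enables:

```python
# Before: the same import repeated inside each function body.
def load(data):
    import json
    return json.loads(data)
```

```python
# After: a single import at the top of the module, per PEP 8.
import json


def load(data):
    return json.loads(data)
```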