For some reason, the server seems to kick us after 1024 items are
received from the network. Tested with the following code: 1022
updates were received after the BadServerSalt, NewSessionCreated and
MsgsAck items:
    client = TelegramClient(..., spawn_read_thread=False)
    client.connect(_sync_updates=False)
    sender = client._sender
    client = None  # drop the client; only the raw sender is used
    while True:
        try:
            sender.receive(None)  # block until an item arrives
        except TimeoutError:
            pass
        except ConnectionResetError:
            sender.connect()
If one runs this code after being kicked, no further items will be
retrieved and it will always time out. Invoking a ping has no effect
either, but invoking some "high level" request like getState seems
to do the trick.
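
A rough sketch of that workaround, assuming a live TelegramClient
instance rather than the bare sender above (api_id and api_hash are
placeholder credentials; older versions spell the call .invoke()):

    from telethon import TelegramClient
    from telethon.tl.functions.updates import GetStateRequest

    client = TelegramClient('session', api_id, api_hash)  # placeholders
    client.connect()

    # Pings have no effect, but a "high level" request such as
    # getState makes the server resume sending us items:
    state = client(GetStateRequest())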
This was causing some strange behaviour with the synchronized Queue
used by the UpdateState class: calling .get() with any timeout would
block forever. Perhaps something else got released when the script
ended, after which any call would block forever, so the thread never
joined.
TLObject's __init__ used to call utils.get_input_* methods and
similar to auto-cast things like User into InputPeerUser as
required. Now there's a custom .resolve() method for this purpose,
with several advantages:
- The old behaviour still works; autocasts happen as usual.
- A request can be constructed and later modified: before, the
  autocast only occurred in the constructor, but now it happens
  while invoking.
- This allows us to use not only the utils module but also the
  client, so it's even possible to use usernames or phone numbers
  for things that require an InputPeer (see the sketch after this
  list). This assumes the TelegramClient subclass is being used and
  not the bare version, which would fail when calling
  .get_input_peer().
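
A minimal sketch of the idea (names are illustrative, not the exact
Telethon signatures):

    # Hypothetical sketch: the request resolves its own arguments
    # right before being invoked, rather than inside __init__.
    class SendMessageRequest:
        def __init__(self, peer, message):
            self.peer = peer        # may be a User, a username, a phone...
            self.message = message

        def resolve(self, client, utils):
            # Autocast happens at invoke time, so the request can be
            # built first and modified later. Having the client here
            # means usernames and phone numbers can be resolved too.
            self.peer = utils.get_input_peer(client.get_entity(self.peer))

    # The client would call request.resolve(self, utils) on every
    # request right before sending it over the network.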
Caching the inputFile values would not persist across several
days, so the cache was of little use. Saving the id/hash of
the actual inputMedia sent is a much better, persistent idea.
This caused some multithreading bugs: for instance, when there was
no read thread, the main thread was idling and there were some
update workers, several threads would try to read from the socket
at the same time (since there was no lock for reading), corrupting
the reads and yielding "invalid packet lengths" from the network.
Closes #538.
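
A sketch of the kind of guard that prevents this, assuming a shared
lock around the socket read (illustrative, not the exact Telethon
code):

    import threading

    class Connection:
        def __init__(self, sock):
            self._socket = sock
            self._read_lock = threading.Lock()

        def read(self, length):
            # Only one thread may read from the socket at a time;
            # otherwise two threads interleave partial reads and the
            # packet lengths that follow are garbage.
            with self._read_lock:
                data = b''
                while len(data) < length:
                    chunk = self._socket.recv(length - len(data))
                    if not chunk:
                        raise ConnectionResetError('socket closed')
                    data += chunk
                return data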
This removes the need for a .clear_cache() method, as files are
now identified by their MD5 (which needs to be calculated always
anyway) and their file size (to make collisions even more
unlikely) instead of by the file path (which can now change); see
the sketch below.
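
A sketch of the resulting cache layout, tying this together with the
id/hash note above (helper names are hypothetical; the real session
storage differs):

    import hashlib
    import os

    def file_cache_key(path):
        # The MD5 has to be computed anyway for the upload, and the
        # file size makes accidental collisions even more unlikely.
        # The path itself is no longer part of the key, so moving or
        # renaming the file does not invalidate the cache.
        md5 = hashlib.md5()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                md5.update(chunk)
        return md5.digest(), os.path.getsize(path)

    # cache[file_cache_key(path)] = (media_id, access_hash)
    # i.e. the id/hash of the inputMedia actually sent, which stays
    # valid across days, unlike a cached inputFile.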
Server salts change every 30 minutes after all, so keeping them
in the long-term storage session file doesn't make much sense.
Saving the layer doesn't make sense either: it was only used to
know whether to init the connection or not, but that should always
be done.
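
For reference, "initting" the connection means wrapping the first
request roughly like this (a sketch using the generated TL classes;
the exact InitConnectionRequest fields depend on the layer and
api_id is a placeholder):

    from telethon.tl.all_tlobjects import LAYER
    from telethon.tl.functions import InvokeWithLayerRequest, InitConnectionRequest
    from telethon.tl.functions.help import GetConfigRequest

    # Sent on every connect now, rather than only when the stored
    # layer differed from the current one:
    request = InvokeWithLayerRequest(LAYER, InitConnectionRequest(
        api_id=api_id,  # placeholder
        device_model='PC',
        system_version='Linux',
        app_version='1.0',
        system_lang_code='en',
        lang_pack='',
        lang_code='en',
        query=GetConfigRequest()
    ))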