For some reason, the server seems to kick us after 1024 items have been
received from the network. Testing with the following code, 1022
updates were received after the initial BadServerSalt, NewSessionCreated and
MsgsAck:
client = TelegramClient(..., spawn_read_thread=False)
client.connect(_sync_updates=False)
sender = client._sender
client = None
while True:
    try:
        sender.receive(None)
    except TimeoutError:
        pass
    except ConnectionResetError:
        sender.connect()
If this code is run again after being kicked, no further items
are retrieved and every receive times out. Invoking a ping has
no effect either; invoking some "high level" request like getState
seems to do the trick.
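For reference, this is roughly the kind of request that wakes the
connection up again (the import path follows the generated TL layout and
assumes a live TelegramClient instance; it may differ between versions):

from telethon.tl.functions.updates import GetStateRequest

# a "real" API call, unlike a bare ping, makes the server talk to us again
state = client(GetStateRequest())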
It was causing some strange behaviour with the synchronized Queue
used by the UpdateState class: calling .get() with any timeout
would block forever. Perhaps something else was released when
the script ended, after which any call would block forever, and thus
the thread never joined.
Telegram would refuse to reply any further unless the message ID
had the correct time, causing behaviour like .connect() never
connecting because the first request sent would always fail.
The fix was to use time_offset when calculating the message ID;
the offset was being computed correctly, it just wasn't being used.
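For context, a minimal sketch of what applying the offset looks like
(the real generator also enforces strictly increasing IDs, and the
function name here is made up):

import time

def generate_msg_id(time_offset):
    # MTProto message IDs encode the sender's unix time in the upper 32
    # bits; adding the measured server/client delta (time_offset) keeps
    # them valid even when the local clock is skewed.
    now = time.time() + time_offset
    return int(now * (1 << 32)) & ~3  # low bits cleared for client messages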
- Made the documentation even more friendly towards newbies.
- Eased the usage of methods like get history, which now set
  a default empty message for message actions and vice versa.
- Fixed some docstrings too.
- Updated the old normal docs/ to link back and forth with RTD.
- Fixed the version shown in the documentation, which is now auto-loaded.
Rationale: if the user is doing things right, the penalty for
being friendly (i.e. autocasting to the right version, like
User -> InputPeerUser) should be as small as possible.
Removing the redundant type() call to access .SUBCLASS_OF_ID,
and assuming the user provided a TLObject (by simply catching
the exception when the attribute is not available), are 2x and
4x faster, respectively.
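For illustration, the two lookups being compared (the helper names are
made up):

def subclass_of_id_via_type(obj):
    # redundant: class attributes are reachable from the instance anyway
    return type(obj).SUBCLASS_OF_ID

def subclass_of_id_direct(obj):
    try:
        # assume a TLObject was provided and read the attribute directly
        return obj.SUBCLASS_OF_ID
    except AttributeError:
        raise TypeError('a TLObject was expected')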
Of course, this is a micro-optimization, but I still think it's
good to benefit users who are doing things right and to avoid
redundant calls.
TLObject's __init__ used to call utils.get_input_* methods and
similar to auto-cast things like User into InputPeerUser as
required. Now there's a custom .resolve() method for this purpose
(sketched below), with several advantages:
- Old behaviour still works; autocasts work as usual.
- A request can be constructed and later modified: previously the
  autocast only occurred in the constructor, but now it happens
  while invoking.
- This allows us to use not only the utils module but also the
  client, so it's even possible to use usernames or phone numbers
  for things that require an InputPeer. This does assume the
  TelegramClient subclass is being used and not the bare version,
  which would fail when calling .get_input_peer().
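A simplified sketch of the new flow, assuming a request shaped roughly
like the generated ones (real requests take more arguments, and the
exact .resolve() signature may differ):

class SendMessageRequest(TLObject):
    def __init__(self, peer, message):
        # no conversion here: peer may still be a User, a username,
        # a phone number...
        self.peer = peer
        self.message = message

    def resolve(self, client, utils):
        # called right before invoking, when the full client is available,
        # so usernames and phone numbers can be looked up as well
        self.peer = utils.get_input_peer(client.get_input_entity(self.peer))

With the old approach this conversion happened in __init__, so modifying
.peer afterwards meant re-casting it by hand.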
Since uploading a file is done on the TelegramClient, and
InputFiles are only valid for a short period of time, it makes
more sense to cache the sent media instead (which should not
expire). The catch is that the MD5 is only needed when actually
uploading the file.
The solution is to let this method check for the desired cache
and, if available, return an instance of it instead of
re-uploading, preserving the flexibility of both options (always
an InputFile, or the cached InputPhoto/InputDocument).
Caching the inputFile values would not persist across several
days, so that cache was nearly pointless. Saving the id/hash of
the actual inputMedia sent is a much better and more persistent idea.
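A hedged sketch of that check; the names here (upload_file, _media_cache
and the helpers) are illustrative assumptions, not the library's exact API:

def upload_file(self, path, use_cache=None):
    # use_cache would be InputPhoto or InputDocument when the caller is
    # fine with getting previously sent media back instead of a new InputFile
    md5, size = self._file_md5_and_size(path)  # hypothetical helper
    if use_cache:
        cached = self._media_cache.get((md5, size))
        if isinstance(cached, use_cache):
            return cached  # reuse the sent media, no re-upload needed
    return self._upload_as_input_file(path, md5, size)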
This caused some multithreading bugs, for instance when there was
no read thread, the main thread was idling and there were some
update workers: several threads would try to read from the socket
at the same time (since there was no lock for reading), corrupting
the reads and producing "invalid packet lengths" from the network.
Closes #538.
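For illustration, the kind of guard the lack of which is described above;
the class and method names are assumptions, not the library's actual ones:

import threading

class Sender:
    def __init__(self, connection):
        self._connection = connection
        self._read_lock = threading.Lock()

    def receive(self, timeout=None):
        # Only one thread may pull bytes off the socket at a time;
        # with no lock, two threads can each grab part of a packet and
        # both end up decoding bogus lengths.
        with self._read_lock:
            return self._connection.read_packet(timeout=timeout)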
This removes the need for a .clear_cache() method, as files are
now identified by their MD5 (which always needs to be calculated
anyway) and their file size (to make collisions even more
unlikely) instead of by their file path (which is now free to change).
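A minimal sketch of such a key (the helper name is made up):

import hashlib

def file_cache_key(path):
    # identify a file by (md5, size) rather than by its path, so that
    # moving or renaming it neither invalidates nor duplicates entries
    md5 = hashlib.md5()
    size = 0
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(32 * 1024), b''):
            md5.update(chunk)
            size += len(chunk)
    return md5.hexdigest(), size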