Compare commits


No commits in common. "v1" and "v1.16.0" have entirely different histories.
v1 ... v1.16.0

154 changed files with 5237 additions and 10056 deletions

.github/ISSUE_TEMPLATE/bug-report.md

@@ -0,0 +1,27 @@
---
name: Bug Report
about: Create a report about a bug inside the library or issues with the documentation
title: ''
labels: ''
assignees: ''
---
**Checklist**
* [ ] The error is in the library's code, and not in my own.
* [ ] I have searched for this issue before posting it and there isn't a duplicate.
* [ ] I ran `pip install -U https://github.com/LonamiWebs/Telethon/archive/master.zip` and triggered the bug in the latest version.
**Code that causes the issue**
```python
from telethon.sync import TelegramClient
...
```
**Traceback**
```
Traceback (most recent call last):
File "code.py", line 1, in <code>
```


@@ -1,96 +0,0 @@
name: Bug Report
description: Create a report about a bug inside the library.
body:
- type: textarea
id: reproducing-example
attributes:
label: Code that causes the issue
description: Provide a code example that reproduces the problem. Try to keep it short without other dependencies.
placeholder: |
```python
from telethon.sync import TelegramClient
...
```
validations:
required: true
- type: textarea
id: expected-behavior
attributes:
label: Expected behavior
description: Explain what you should expect to happen. Include reproduction steps.
placeholder: |
"I was doing... I was expecting the following to happen..."
validations:
required: true
- type: textarea
id: actual-behavior
attributes:
label: Actual behavior
description: Explain what actually happens.
placeholder: |
"This happened instead..."
validations:
required: true
- type: textarea
id: traceback
attributes:
label: Traceback
description: |
The traceback, if the problem is a crash.
placeholder: |
```
Traceback (most recent call last):
File "code.py", line 1, in <code>
```
- type: input
id: telethon-version
attributes:
label: Telethon version
description: The output of `python -c "import telethon; print(telethon.__version__)"`.
placeholder: "1.x"
validations:
required: true
- type: input
id: python-version
attributes:
label: Python version
description: The output of `python --version`.
placeholder: "3.x"
validations:
required: true
- type: input
id: os
attributes:
label: Operating system (including distribution name and version)
placeholder: Windows 11, macOS 13.4, Ubuntu 23.04...
validations:
required: true
- type: textarea
id: other-details
attributes:
label: Other details
placeholder: |
Additional details and attachments. Is it a server? Network condition?
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: Read this carefully, we will close and ignore your issue if you skimmed through this.
options:
- label: The error is in the library's code, and not in my own.
required: true
- label: I have searched for this issue before posting it and there isn't an open duplicate.
required: true
- label: I ran `pip install -U https://github.com/LonamiWebs/Telethon/archive/v1.zip` and triggered the bug in the latest version.
required: true


@@ -1,4 +1,3 @@
blank_issues_enabled: false
contact_links:
- name: Ask questions in StackOverflow
url: https://stackoverflow.com/questions/ask?tags=telethon


@@ -1,22 +0,0 @@
name: Documentation Issue
description: Report a problem with the documentation.
labels: [documentation]
body:
- type: textarea
id: description
attributes:
label: Description
description: Describe the problem in detail.
placeholder: This part is unclear...
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: Read this carefully, we will close and ignore your issue if you skimmed through this.
options:
- label: This is a documentation problem, not a question or a bug report.
required: true
- label: I have searched for this issue before posting it and there isn't a duplicate.
required: true


@@ -0,0 +1,10 @@
---
name: Feature Request
about: Suggest ideas, changes or other enhancements for the library
title: ''
labels: enhancement
assignees: ''
---
Please describe your idea. Would you like another friendly method? Renaming them to something more appropriated? Changing the way something works?


@@ -1,22 +0,0 @@
name: Feature Request
description: Suggest ideas, changes or other enhancements for the library.
labels: [enhancement]
body:
- type: textarea
id: feature-description
attributes:
label: Describe your suggested feature
description: Please describe your idea. Would you like another friendly method? Renaming them to something more appropriate? Changing the way something works?
placeholder: "It should work like this..."
validations:
required: true
- type: checkboxes
id: checklist
attributes:
label: Checklist
description: Read this carefully, we will close and ignore your issue if you skimmed through this.
options:
- label: I have searched for this issue before posting it and there isn't a duplicate.
required: true


@@ -1,5 +0,0 @@
<!--
Thanks for the PR! Please keep in mind that v1 is *feature frozen*.
New features very likely won't be merged, although fixes can be sent.
All new development should happen in v2. Thanks!
-->


@@ -1,6 +1,6 @@
name: Python Library
on: [push, pull_request]
on: [push]
jobs:
build:

.gitignore

@@ -1,23 +1,112 @@
# Docs
/_build/
/docs/
# Generated code
/telethon/tl/functions/
/telethon/tl/types/
/telethon/tl/patched/
/telethon/tl/alltlobjects.py
/telethon/errors/rpcerrorlist.py
# User session
*.session
/usermedia/
usermedia/
# Builds and testing
# Quick tests should live in this file
example.py
# Byte-compiled / optimized / DLL files
__pycache__/
/dist/
/build/
/*.egg-info/
/readthedocs/_build/
/.tox/
*.py[cod]
*$py.class
# API reference docs
/docs/
# C extensions
*.so
# File used to manually test new changes, contains sensitive data
/example.py
# Distribution / packaging
.Python
env/
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
*.egg-info/
.installed.cfg
*.egg
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*,cover
.hypothesis/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
/docs/_build/
# PyBuilder
target/
# IPython Notebook
.ipynb_checkpoints
# pyenv
.python-version
# celery beat schedule file
celerybeat-schedule
# dotenv
.env
# virtualenv
.venv/
venv/
ENV/
# Spyder project settings
.spyderproject
# Rope project settings
.ropeproject
# Nix build results
result
result-*


@@ -1,18 +0,0 @@
# https://docs.readthedocs.io/en/stable/config-file/v2.html
version: 2
build:
os: ubuntu-22.04
tools:
python: "3.11"
sphinx:
configuration: readthedocs/conf.py
formats:
- pdf
- epub
python:
install:
- requirements: readthedocs/requirements.txt


@@ -1,6 +1,6 @@
MIT License
Copyright (c) 2016-Present LonamiWebs
Copyright (c) 2016-2019 LonamiWebs
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

MANIFEST.in

@@ -0,0 +1,4 @@
include LICENSE
include README.rst
recursive-include telethon *


@@ -12,8 +12,6 @@ as a user or through a bot account (bot API alternative).
If you have code using Telethon before its 1.0 version, you must
read `Compatibility and Convenience`_ to learn how to migrate.
As with any third-party library for Telegram, be careful not to
break `Telegram's ToS`_ or `Telegram can ban the account`_.
What is this?
-------------
@@ -77,9 +75,7 @@ useful information.
.. _asyncio: https://docs.python.org/3/library/asyncio.html
.. _MTProto: https://core.telegram.org/mtproto
.. _Telegram: https://telegram.org
.. _Compatibility and Convenience: https://docs.telethon.dev/en/stable/misc/compatibility-and-convenience.html
.. _Telegram's ToS: https://core.telegram.org/api/terms
.. _Telegram can ban the account: https://docs.telethon.dev/en/stable/quick-references/faq.html#my-account-was-deleted-limited-when-using-the-library
.. _Compatibility and Convenience: https://docs.telethon.dev/en/latest/misc/compatibility-and-convenience.html
.. _Read The Docs: https://docs.telethon.dev
.. |logo| image:: logo.svg

default.nix

@@ -0,0 +1,126 @@
# A NUR-compatible package specification.
{ pkgs ? import <nixpkgs> {}, useRelease ? true }:
rec {
# The `lib`, `modules`, and `overlay` names are special
lib = ({ pkgs }: { }) { inherit pkgs; }; # functions
modules = { }; # NixOS modules
overlays = { }; # nixpkgs overlays
# # development
# ## development.python-modules
# use in a shell like
# ```nix
# ((pkgs.python3.override {
# packageOverrides = pythonPackageOverrides;
# }).withPackages (ps: [ ps.telethon ])).env
# ```
pythonPackageOverrides = self: super: let
defaultTelethonArgs = { inherit useRelease; };
telethonPkg = v: args: self.callPackage (./nix/telethon + "/${v}.nix")
(defaultTelethonArgs // args);
in rec {
telethon = telethon_1;
telethon-devel = self.callPackage ./nix/telethon/devel.nix { };
telethon_1 = telethon_1_10;
telethon_1_10 = telethon_1_10_1;
telethon_1_10_1 = telethonPkg "1.10" { version = "1.10.1"; };
telethon_1_10_0 = telethonPkg "1.10" { version = "1.10.0"; };
telethon_1_9 = telethon_1_9_0;
telethon_1_9_0 = telethonPkg "1.9" { version = "1.9.0"; };
telethon_1_8 = telethon_1_8_0;
telethon_1_8_0 = telethonPkg "1.8" { version = "1.8.0"; };
telethon_1_7 = telethon_1_7_7;
telethon_1_7_7 = telethonPkg "1.7" { version = "1.7.7"; };
telethon_1_7_6 = telethonPkg "1.7" { version = "1.7.6"; };
telethon_1_7_5 = telethonPkg "1.7" { version = "1.7.5"; };
telethon_1_7_4 = telethonPkg "1.7" { version = "1.7.4"; };
telethon_1_7_3 = telethonPkg "1.7" { version = "1.7.3"; };
telethon_1_7_2 = telethonPkg "1.7" { version = "1.7.2"; };
telethon_1_7_1 = telethonPkg "1.7" { version = "1.7.1"; };
telethon_1_7_0 = telethonPkg "1.7" { version = "1.7.0"; };
telethon_1_6 = telethon_1_6_2;
telethon_1_6_2 = telethonPkg "1.6" { version = "1.6.2"; };
# 1.6.1.post1: hotpatch that fixed Telethon.egg-info dir perms
telethon_1_6_1 = telethonPkg "1.6" { version = "1.6.1"; };
telethon_1_6_0 = telethonPkg "1.6" { version = "1.6.0"; };
telethon_1_5 = telethon_1_5_5;
telethon_1_5_5 = telethonPkg "1.5" { version = "1.5.5"; };
telethon_1_5_4 = telethonPkg "1.5" { version = "1.5.4"; };
telethon_1_5_3 = telethonPkg "1.5" { version = "1.5.3"; };
telethon_1_5_2 = telethonPkg "1.5" { version = "1.5.2"; };
telethon_1_5_1 = telethonPkg "1.5" { version = "1.5.1"; };
telethon_1_5_0 = telethonPkg "1.5" { version = "1.5.0"; };
telethon_1_4 = telethon_1_4_3;
telethon_1_4_3 = telethonPkg "1.4" { version = "1.4.3"; };
telethon_1_4_2 = telethonPkg "1.4" { version = "1.4.2"; };
telethon_1_4_1 = telethonPkg "1.4" { version = "1.4.1"; };
telethon_1_4_0 = telethonPkg "1.4" { version = "1.4.0"; };
#telethon_1_3_0
#telethon_1_2_0
#telethon_1_1_1
#telethon_1_1_0
#telethon_1_0_4
#telethon_1_0_3
#telethon_1_0_2
#telethon_1_0_1
#telethon_1_0_0-rc1
#telethon_1_0_0
#telethon_0_19_1
#telethon_0_19_0
#telethon_0_18_3
#telethon_0_18_2
#telethon_0_18_1
#telethon_0_18_0
#telethon_0_17_4
#telethon_0_17_3
#telethon_0_17_2
#telethon_0_17_1
#telethon_0_17_0
#telethon_0_16_2
#telethon_0_16_1
#telethon_0_16_0
#telethon_0_15_5
#telethon_0_15_4
#telethon_0_15_3
#telethon_0_15_2
#telethon_0_15_1
#telethon_0_15_0
#telethon_0_14_2
#telethon_0_14_1
#telethon_0_14_0
#telethon_0_13_6
#telethon_0_13_5
#telethon_0_13_4
#telethon_0_13_3
#telethon_0_13_2
#telethon_0_13_1
#telethon_0_13_0
#telethon_0_12_2
#telethon_0_12_1
#telethon_0_12_0
#telethon_0_11_5
#telethon_0_11_4
#telethon_0_11_3
#telethon_0_11_2
#telethon_0_11_1
#telethon_0_11_0
#telethon_0_10_1
#telethon_0_10_0
#telethon_0_9_1
#telethon_0_9_0
#telethon_0_8_0
#telethon_0_7_1
#telethon_0_7_0
#telethon_0_6_0
#telethon_0_5_0
#telethon_0_4_0
#telethon_0_3_0
#telethon_0_2_0
#telethon_0_1_0
};
}

nix/ci.nix

@@ -0,0 +1,59 @@
# This file provides all the buildable and cacheable packages and
# package outputs in you package set. These are what gets built by CI,
# so if you correctly mark packages as
#
# - broken (using `meta.broken`),
# - unfree (using `meta.license.free`), and
# - locally built (using `preferLocalBuild`)
#
# then your CI will be able to build and cache only those packages for
# which this is possible.
{ pkgs ? import <nixpkgs> {}, enableEnvs ? false }:
with builtins;
let
isReserved = n: n == "lib" || n == "overlays" || n == "modules";
isDerivation = p: isAttrs p && p ? type && p.type == "derivation";
isBuildable = p: !(p.meta.broken or false) && p.meta.license.free or true;
isCacheable = p: !(p.preferLocalBuild or false);
shouldRecurseForDerivations = p:
isAttrs p && p.recurseForDerivations or false;
nameValuePair = n: v: { name = n; value = v; };
concatMap = builtins.concatMap or (f: xs: concatLists (map f xs));
flattenPkgs = s:
let
f = p:
if shouldRecurseForDerivations p then flattenPkgs p
else if isDerivation p then [p]
else [];
in
concatMap f (attrValues s);
outputsOf = p: map (o: p.${o}) p.outputs;
# build & test packages across Python versions
# (withPackages "distributions" are also generated for testing)
nurAttrs = import ./extended.nix { inherit pkgs enableEnvs; };
nurPkgs =
flattenPkgs
(listToAttrs
(map (n: nameValuePair n nurAttrs.${n})
(filter (n: !isReserved n)
(attrNames nurAttrs))));
in
rec {
buildPkgs = filter isBuildable nurPkgs;
cachePkgs = filter isCacheable buildPkgs;
buildOutputs = concatMap outputsOf buildPkgs;
cacheOutputs = concatMap outputsOf cachePkgs;
}
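
The predicates above are easy to check in isolation. Below is a minimal sketch, not part of the repository, that applies the same `isBuildable`/`isCacheable` logic to a hypothetical package attribute set; it evaluates with `nix-instantiate --eval --strict`:

```nix
# Sketch: how the ci.nix predicates classify a package (hypothetical attrs).
let
  pkg = {
    type = "derivation";
    meta = { broken = false; license.free = true; };
    preferLocalBuild = true;
  };
  # Copied from ci.nix above.
  isBuildable = p: !(p.meta.broken or false) && p.meta.license.free or true;
  isCacheable = p: !(p.preferLocalBuild or false);
in {
  buildable = isBuildable pkg;  # true  -> CI builds this package
  cacheable = isCacheable pkg;  # false -> CI does not push it to the cache
}
```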

nix/extended.nix

@@ -0,0 +1,86 @@
{ pkgs ? import <nixpkgs> { }, enableEnvs ? true, useRelease ? true }:
# packages built against all Python versions (along with withPackages
# environments for testing)
# to use for testing, you'll probably want a variant of:
# ```sh
# nix-shell nix/extended.nix -A telethon-devel-python37 --run "python"
# ```
let
inherit (pkgs.lib) attrNames attrValues concatMap head listToAttrs
mapAttrsToList optional optionals tail;
nurAttrs = import ../default.nix { inherit pkgs useRelease; };
pyVersions = concatMap (n: optional (pkgs ? ${n}) n) [
"python3"
"python35"
"python36"
"python37"
# "pypy3"
# "pypy35"
# "pypy36"
# "pypy37"
];
pyPkgEnvs = [
[ "telethon" "telethon" ]
[ "telethon-devel" "telethon-devel" ]
[ "telethon_1" "telethon_1" ]
[ "telethon_1_10" "telethon_1_10" ]
[ "telethon_1_10_1" "telethon_1_10_1" ]
[ "telethon_1_10_0" "telethon_1_10_0" ]
[ "telethon_1_9" "telethon_1_9" ]
[ "telethon_1_9_0" "telethon_1_9_0" ]
[ "telethon_1_8" "telethon_1_8" ]
[ "telethon_1_8_0" "telethon_1_8_0" ]
[ "telethon_1_7" "telethon_1_7" ]
[ "telethon_1_7_7" "telethon_1_7_7" ]
[ "telethon_1_7_6" "telethon_1_7_6" ]
[ "telethon_1_7_5" "telethon_1_7_5" ]
[ "telethon_1_7_4" "telethon_1_7_4" ]
[ "telethon_1_7_3" "telethon_1_7_3" ]
[ "telethon_1_7_2" "telethon_1_7_2" ]
[ "telethon_1_7_1" "telethon_1_7_1" ]
[ "telethon_1_7_0" "telethon_1_7_0" ]
[ "telethon_1_6" "telethon_1_6" ]
[ "telethon_1_6_2" "telethon_1_6_2" ]
[ "telethon_1_6_1" "telethon_1_6_1" ]
[ "telethon_1_6_0" "telethon_1_6_0" ]
[ "telethon_1_5" "telethon_1_5" ]
[ "telethon_1_5_5" "telethon_1_5_5" ]
[ "telethon_1_5_4" "telethon_1_5_4" ]
[ "telethon_1_5_3" "telethon_1_5_3" ]
[ "telethon_1_5_2" "telethon_1_5_2" ]
[ "telethon_1_5_1" "telethon_1_5_1" ]
[ "telethon_1_5_0" "telethon_1_5_0" ]
[ "telethon_1_4" "telethon_1_4" ]
[ "telethon_1_4_3" "telethon_1_4_3" ]
# [ "telethon_1_4_2" "telethon_1_4_2" ]
# [ "telethon_1_4_1" "telethon_1_4_1" ]
# [ "telethon_1_4_0" "telethon_1_4_0" ]
];
getPkgPair = pkgs: n: let p = pkgs.${n}; in { name = n; value = p; };
getPkgPairs = pkgs: map (getPkgPair pkgs);
pyPkgPairs = py:
concatMap (d: map (getPkgPair py.pkgs) (tail d)) pyPkgEnvs;
pyPkgEnvPair = pyNm: py: envNm: env: {
name = "${envNm}-env-${pyNm}";
value = (py.withPackages (ps: map (pn: ps.${pn}) env)).overrideAttrs (o: {
name = "${envNm}-${py.name}-env";
preferLocalBuild = true;
});
};
pyNurPairs = pyNm: py:
map ({ name, value }: { name = "${name}-${pyNm}"; inherit value; })
(pyPkgPairs py) ++
optionals enableEnvs
(map (d: pyPkgEnvPair pyNm py (head d) (tail d)) pyPkgEnvs);
in nurAttrs // (listToAttrs (concatMap (py: let
python = pkgs.${py}.override {
packageOverrides = nurAttrs.pythonPackageOverrides;
}; in
pyNurPairs py python) pyVersions))

nix/overlay.nix

@@ -0,0 +1,18 @@
# You can use this file as a nixpkgs overlay. This is useful in the
# case where you don't want to add the whole NUR namespace to your
# configuration.
self: super:
let
isReserved = n: n == "lib" || n == "overlays" || n == "modules";
nameValuePair = n: v: { name = n; value = v; };
nurAttrs = import ./default.nix { pkgs = super; };
in
builtins.listToAttrs
(map (n: nameValuePair n nurAttrs.${n})
(builtins.filter (n: !isReserved n)
(builtins.attrNames nurAttrs)))
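
As a usage sketch (the checkout path is hypothetical), the overlay can be passed to a nixpkgs import; the non-reserved attributes from `default.nix`, such as `pythonPackageOverrides`, then become available on `pkgs` and can be combined with `withPackages` as shown in the `default.nix` comment above:

```nix
# Sketch: consuming nix/overlay.nix as a nixpkgs overlay (path is hypothetical).
let
  pkgs = import <nixpkgs> {
    overlays = [ (import /path/to/Telethon/nix/overlay.nix) ];
  };
  python = pkgs.python3.override {
    packageOverrides = pkgs.pythonPackageOverrides;
  };
in
  # A Python environment carrying the pinned Telethon package.
  (python.withPackages (ps: [ ps.telethon ])).env
```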

nix/telethon/1.10.nix

@@ -0,0 +1,39 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.10.1" = {
pypiSha256 = "1ql8ai01c6v3l13lh3csh37jjkrb33gj50jyvdfi3qjn60qs2rfl";
sourceSha256 = "1skckq4lai51p476r3shgld89x5yg5snrcrzjfxxxai00lm65cbv";
};
"1.10.0" = {
pypiSha256 = "1n2g2r5w44nlhn229r8kamhwjxggv16gl3jxq25bpg5y4qgrxzd8";
sourceSha256 = "1rvrc63j6i7yr887g2csciv4zyy407yhdn4n8q2q00dkildh64qw";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
propagatedBuildInputs = [ rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.4.nix

@@ -0,0 +1,56 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, async_generator, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null && fetchpatch != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.4.3" = {
pypiSha256 = "1igslvhd743qy9p4kfs7lg09s8d5vhn9jhzngpv12797569p4lcj";
sourceSha256 = "19vz0ppk7lq1dmqzf47n6h023i08pqvcwnixvm28vrijykq0z315";
};
"1.4.2" = {
pypiSha256 = "1f4ncyfzqj4b6zib0417r01pgnd0hb1p4aiinhlkxkmk7vy5fqfy";
sourceSha256 = "0rsbz5kqp0d10gasadir3mgalc9aqq4fcv8xa1p7fg263f43rjl4";
};
"1.4.1" = {
pypiSha256 = "1n0jhdqflinyamzy5krnww7hc0s7pw9yfck1p7816pdbgir74qsw";
sourceSha256 = "07q48gw4ry3wf9yzi6kf8lw3b23a0dvk9r8sabpxwrlqy7gnksxx";
};
"1.4.0" = {
version = "1.4";
pypiSha256 = "1g7rznwmj87n9k86zby9i75h570hm84izrv0srhsmxi52pjan1ml";
sourceSha256 = "14nv86yrj01wmlj5cfg6iq5w03ssl67av1arfy9mq1935mly5nly";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
patches = lib.optionals (!useRelease) [
(if (lib.versionOlder version "1.4.3") then
common.patches.generator-use-pathlib-to-1_4_3
else
common.patches.generator-use-pathlib-from-1_4_3-to-1_5_0)
common.patches.generator-use-pathlib-open-to-1_5_3
common.patches.sort-generated-tlobjects-to-1_7_1
];
propagatedBuildInputs = [ async_generator rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.5.nix

@@ -0,0 +1,60 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, async_generator, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null && fetchpatch != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.5.5" = {
pypiSha256 = "1qpc4vc3lidhlp1c7521nxizjr6y5c3l9x41knqv02x8n3l9knxa";
sourceSha256 = "1x5niscjbrg5a0cg261z6awln57v3nn8si5j58vhsnckws2c48a5";
};
"1.5.4" = {
pypiSha256 = "1kjqi3wy4hswsf3vmrjg7z5c3f9wpdfk4wz1yfsqmj9ppwllkjsj";
sourceSha256 = "0rmp9zk7a354nb39c01mjcrhi2j6v9im40xmdcvmizx990vlv476";
};
"1.5.3" = {
pypiSha256 = "11xd5ni0chzsfny0vwwqyh37mvmrwrk2bmkhwp1ipbxyis8jjjia";
sourceSha256 = "1l3i6wx3fgcy3vmr75qdbv5fvc5qnk0j47hv7jszsqq9rvqvz2xs";
};
"1.5.2" = {
pypiSha256 = "0ymv6l9xn41sgpkilqkivwbjna89m43i0a728lak2cppp7i1i1h7";
sourceSha256 = "0gnqvlhh3qyvibl7icn6774rshlx1nnhb5f78609da44743lyv17";
};
"1.5.1" = {
pypiSha256 = "1ypxpsfj814gzln4fl7z17l1l6q0bzd5p1ivas85yim3a992ixww";
sourceSha256 = "15w5nshvmj8hgqdcbpw0fjcf1cspaci8dldm9ml1pmijw7zgmpdg";
};
"1.5.0" = {
version = "1.5";
pypiSha256 = "1kzkzcxyz7adjzvm2ml9faz2c5yx469j211yvi5xfvjwp58ic2jc";
sourceSha256 = "12232d3xfv0bbykk9xaxpxsr3656ywjx4ra1q5q99rpp6wv438n1";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
patches = lib.optionals (!useRelease) ([
common.patches.sort-generated-tlobjects-to-1_7_1
] ++ lib.optional (lib.versionOlder version "1.5.3")
common.patches.generator-use-pathlib-open-to-1_5_3);
propagatedBuildInputs = [ async_generator rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.6.nix

@@ -0,0 +1,50 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null && fetchpatch != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.6.2" = {
pypiSha256 = "074h5gj0c330rb1nxzpqm31fp1vw7calh1cdkapbjx90j769iz18";
sourceSha256 = "1daqlb4sva5qkljzbjr8xvjfgp7bdcrl2li1i4434za6a0isgd3j";
};
"1.6.1" = {
# hotpatch with missing .pyc files and fixed Telethon.egg-info perms
pypiVersion = "1.6.1.post1";
pypiSha256 = "17s1qp69bbj6jniam9wbcpaj60ah56sjw0q3kr8ca28y17s88si7";
# pypiVersion = "1.6.1";
# pypiSha256 = "036lhr1jr79np74c6ih51c4pjy828r3lvwcq07q5wynyjprm1qbz";
sourceSha256 = "1hk1bpnk51rpsifb67s31c2qph5hmw28i2vgh97i4i56vynx2yxz";
};
"1.6.0" = {
version = "1.6";
pypiSha256 = "06prmld9068zcm9rfmq3rpq1szw72c6dkxl62b035i9w8wdpvg0m";
sourceSha256 = "0qk14mrnvv9a043ik0y2w6q97l83abvbvn441zn2jl00w4ykfqrh";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
patches = lib.optional (!useRelease)
common.patches.sort-generated-tlobjects-to-1_7_1;
propagatedBuildInputs = [ rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.7.nix

@@ -0,0 +1,66 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.7.7" = {
pypiSha256 = "0mgpihjc7g4gfrq57srripdavxbsgivn4qsjanv3yds5drskciv0";
sourceSha256 = "08c3iakd7fyacc79pg8hyzpa6zx3gbp7xivi10af34zj775lp2pi";
};
"1.7.6" = {
pypiSha256 = "192xda98685s3hmz7ircxpsn7yq913y0r1kmqrsav90m4g4djn4j";
sourceSha256 = "1ss2pfpd3hby25g9ighbr7ccp66awfzda4srsnvr9s6i28har6ag";
};
"1.7.5" = {
pypiSha256 = "0i5s7ahicw5k0s1i7pi26vc6rp6ppr1gr848sa61yh3qqa4c0qnr";
sourceSha256 = "1rssh0l466h9y6v0z095c9aa63nz9im7gg5771jjj5w70mkpm5w6";
};
"1.7.4" = {
pypiSha256 = "1qpc9f1y559zdwz59qqz4hbf1mrynjjbcg357nzaa2x5a2q4lz0s";
sourceSha256 = "1q43lwfp67q4skfcrb6sdlnjw4ajrpizf08fd9wjrw521kkd8g4y";
};
"1.7.3" = {
pypiSha256 = "0s8qmsarlfgpb0k3w50siv354hpa7b1dnrjjd0iqz7vc5bc7ni84";
sourceSha256 = "0c393smp1qm8kk39r0k31p74p89qzvjdjxq4bxq75h07a1yqbs8x";
};
"1.7.2" = {
pypiSha256 = "0465dwikhpbka2sj1g952rac03jkixq497gbmmyx2i9xb594db27";
sourceSha256 = "1gw09zbaqvn074skwjhmm4yp8p75rw9njwjbkcfvqb4gr6dg8wpq";
};
"1.7.1" = {
pypiSha256 = "186z6imf7zqy8vf4yv2w2kxpd7lxmfppa1qi8nxjdgq8rz7wbglf";
sourceSha256 = "05mpqfj4w5qxyl1ai5p0f31pkagz55xxh8060r8y9i3d44j9bn1c";
};
"1.7.0" = {
version = "1.7";
pypiSha256 = "06cqb121k2y0h3x7gvckyvbsn97wc1a25pghinxz2vb7vg8wwxvw";
sourceSha256 = "0myx32hqax71ijfw6ksxvk27cb6x06kbz8jb7ib9d1cayr2viir6";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
patches = lib.optional (!useRelease && lib.versionOlder version "1.7.1")
common.patches.sort-generated-tlobjects-to-1_7_1;
propagatedBuildInputs = [ rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.8.nix

@@ -0,0 +1,35 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.8.0" = {
pypiSha256 = "099br8ldjrfzwipv7g202lnjghmqj79j6gicgx11s0vawb5mb3vf";
sourceSha256 = "1q5mcijmjw2m2v3ilw28xnavmcdck5md0k98kwnz0kyx4iqckcv0";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
propagatedBuildInputs = [ rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/1.9.nix

@@ -0,0 +1,35 @@
{ lib, buildPythonPackage, pythonOlder
, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null
, pyaes, rsa
, version
, useRelease ? true
}:
assert useRelease -> fetchPypi != null;
assert !useRelease -> fetchFromGitHub != null;
let
common = import ./common.nix {
inherit lib fetchFromGitHub fetchPypi fetchpatch;
};
versions = {
"1.9.0" = {
pypiSha256 = "1p4y4qd1ndzi1lg4fhnvq1rqz7611yrwnwwvzh63aazfpzaplyd8";
sourceSha256 = "1g6khxc7mvm3q8rqksw9dwn4l2w8wzvr3zb74n2lb7g5ilpxsadd";
};
};
in buildPythonPackage rec {
pname = "telethon";
inherit version;
src = common.fetchTelethon {
inherit useRelease version;
versionData = versions.${version};
};
propagatedBuildInputs = [ rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}

nix/telethon/common.nix

@@ -0,0 +1,60 @@
{ lib, fetchFromGitHub ? null, fetchPypi ? null, fetchpatch ? null }:
rec {
fetchTelethon = { useRelease, version, versionData }:
if useRelease then assert versionData.pypiSha256 != null; fetchPypi {
pname = "Telethon";
version = versionData.pypiVersion or (versionData.version or version);
sha256 = versionData.pypiSha256;
} else assert versionData.sourceSha256 != null; fetchFromGitHub {
owner = "LonamiWebs";
repo = "Telethon";
rev = versionData.rev or "v${versionData.version or version}";
sha256 = versionData.sourceSha256;
};
fetchpatchTelethon = { rev, ... } @ args:
fetchpatch ({
url = "https://github.com/LonamiWebs/Telethon/commit/${rev}.patch";
} // (builtins.removeAttrs args [ "rev" ]));
# sorted by name, then by logical version range
patches = rec {
generator-use-pathlib-to-1_4_3 = ./generator-use-pathlib-to-1_4_3.patch;
generator-use-pathlib-from-1_4_3-to-1_5_0 = [
(fetchpatchTelethon {
rev = "e71c556ca71aec11166dc66f949a05e700aeb24f";
sha256 = "058phfaggf22j0cjpy9j17y63zgd9m8j4qf7ldsg0jqm1vrym76w";
})
(fetchpatchTelethon {
rev = "8224e5aabf18bb31c6af8c460c38ced11756f080";
sha256 = "0x3xfkld4d2kc0a1a8ldxy85pi57zaipq3b401b16r6rzbi4sh1j";
})
(fetchpatchTelethon {
rev = "aefa429236d28ae68bec4e4ef9f12d13f647dfe6";
sha256 = "043hks8hg5sli1amfv5453h831nwy4dgyw8xr4xxfaxh74754icx";
})
];
generator-use-pathlib-open-to-1_5_3 = fetchpatchTelethon {
rev = "b57e3e3e0a752903fe7d539fb87787ec6712a3d9";
sha256 = "1rl3lkwfi3h62ppzglrmz13zfai8i8cchzqgbjccr4l7nzh1n6nq";
};
sort-generated-tlobjects-to-1_7_1 = fetchpatchTelethon {
rev = "08f8aa3c526c043c107ec1b489b89c011555722f";
sha256 = "1lkvvjzhm9jfrxpm4hbvvysz5f3qi0v4f7vqnfmrzawl73s8qk80";
};
};
meta = let inherit (lib) licenses maintainers; in {
description = "Full-featured Telegram client library for Python 3";
fullDescription = ''
Telegram is a popular messaging application. This library is meant to
make it easy for you to write Python programs that can interact with
Telegram. Think of it as a wrapper that has already done the heavy job
for you, so you can focus on developing an application.
'';
homepage = https://github.com/LonamiWebs/Telethon;
license = licenses.mit;
maintainers = [ maintainers.bb010g maintainers.nyanloutre ];
};
}
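
To show how `fetchTelethon` plumbs `useRelease` and `versionData` together, here is a hedged sketch (a standalone expression placed next to `common.nix`; the hash is the 1.10.1 `sourceSha256` from `1.10.nix` above) that resolves to a `fetchFromGitHub` call for the `v1.10.1` tag:

```nix
# Sketch: calling common.fetchTelethon directly for a GitHub checkout of 1.10.1.
{ pkgs ? import <nixpkgs> {} }:
let
  common = import ./common.nix {
    inherit (pkgs) lib fetchFromGitHub fetchpatch;
    inherit (pkgs.python3Packages) fetchPypi;
  };
in common.fetchTelethon {
  useRelease = false;   # take the tagged source from GitHub instead of PyPI
  version = "1.10.1";
  versionData = { sourceSha256 = "1skckq4lai51p476r3shgld89x5yg5snrcrzjfxxxai00lm65cbv"; };
}
```

With `useRelease = true` the same call would instead assert `pypiSha256` and go through `fetchPypi`.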

nix/telethon/devel.nix

@@ -0,0 +1,27 @@
{ lib, buildPythonPackage, nix-gitignore, pythonOlder
, async_generator, pyaes, rsa
}:
let
common = import ./common.nix { inherit lib; };
in buildPythonPackage rec {
pname = "telethon";
# If pinning to a specific commit, use the following output instead:
# ```sh
# TZ=UTC git show -s --format=format:%cd --date=short-local
# ```
version = "HEAD";
src = nix-gitignore.gitignoreSource ''
/.git
/default.nix
/nix
'' ../..;
propagatedBuildInputs = [ async_generator rsa pyaes ];
doCheck = false; # No tests available
disabled = pythonOlder "3.5";
meta = common.meta;
}


@@ -0,0 +1,819 @@
--- a/setup.py
+++ b/setup.py
@@ -12,10 +12,11 @@
import itertools
import json
-import os
import re
import shutil
-from codecs import open
+from os import chdir
+from pathlib import Path
+from subprocess import run
from sys import argv
from setuptools import find_packages, setup
@@ -29,30 +30,29 @@
self.original = None
def __enter__(self):
- self.original = os.path.abspath(os.path.curdir)
- os.chdir(os.path.abspath(os.path.dirname(__file__)))
+ self.original = Path('.')
+ chdir(str(Path(__file__).parent))
return self
def __exit__(self, *args):
- os.chdir(self.original)
+ chdir(str(self.original))
-GENERATOR_DIR = 'telethon_generator'
-LIBRARY_DIR = 'telethon'
+GENERATOR_DIR = Path('telethon_generator')
+LIBRARY_DIR = Path('telethon')
-ERRORS_IN_JSON = os.path.join(GENERATOR_DIR, 'data', 'errors.json')
-ERRORS_IN_DESC = os.path.join(GENERATOR_DIR, 'data', 'error_descriptions')
-ERRORS_OUT = os.path.join(LIBRARY_DIR, 'errors', 'rpcerrorlist.py')
+ERRORS_IN_JSON = GENERATOR_DIR / 'data/errors.json'
+ERRORS_IN_DESC = GENERATOR_DIR / 'data/error_descriptions'
+ERRORS_OUT = LIBRARY_DIR / 'errors/rpcerrorlist.py'
-INVALID_BM_IN = os.path.join(GENERATOR_DIR, 'data', 'invalid_bot_methods.json')
+INVALID_BM_IN = GENERATOR_DIR / 'data/invalid_bot_methods.json'
-TLOBJECT_IN_CORE_TL = os.path.join(GENERATOR_DIR, 'data', 'mtproto_api.tl')
-TLOBJECT_IN_TL = os.path.join(GENERATOR_DIR, 'data', 'telegram_api.tl')
-TLOBJECT_OUT = os.path.join(LIBRARY_DIR, 'tl')
+TLOBJECT_IN_TLS = [Path(x) for x in GENERATOR_DIR.glob('data/*.tl')]
+TLOBJECT_OUT = LIBRARY_DIR / 'tl'
IMPORT_DEPTH = 2
-DOCS_IN_RES = os.path.join(GENERATOR_DIR, 'data', 'html')
-DOCS_OUT = 'docs'
+DOCS_IN_RES = GENERATOR_DIR / 'data/html'
+DOCS_OUT = Path('docs')
def generate(which):
@@ -60,15 +60,12 @@
from telethon_generator.generators import\
generate_errors, generate_tlobjects, generate_docs, clean_tlobjects
- # Older Python versions open the file as bytes instead (3.4.2)
- with open(INVALID_BM_IN, 'r') as f:
+ with INVALID_BM_IN.open('r') as f:
invalid_bot_methods = set(json.load(f))
-
- layer = find_layer(TLOBJECT_IN_TL)
+ layer = next(filter(None, map(find_layer, TLOBJECT_IN_TLS)))
errors = list(parse_errors(ERRORS_IN_JSON, ERRORS_IN_DESC))
- tlobjects = list(itertools.chain(
- parse_tl(TLOBJECT_IN_CORE_TL, layer, invalid_bot_methods),
- parse_tl(TLOBJECT_IN_TL, layer, invalid_bot_methods)))
+ tlobjects = list(itertools.chain(*(
+ parse_tl(file, layer, invalid_bot_methods) for file in TLOBJECT_IN_TLS)))
if not which:
which.extend(('tl', 'errors'))
@@ -96,30 +93,29 @@
which.remove('errors')
print(action, 'RPCErrors...')
if clean:
- if os.path.isfile(ERRORS_OUT):
- os.remove(ERRORS_OUT)
+ if ERRORS_OUT.is_file():
+ ERRORS_OUT.unlink()
else:
- with open(ERRORS_OUT, 'w', encoding='utf-8') as file:
+ with ERRORS_OUT.open('w') as file:
generate_errors(errors, file)
if 'docs' in which:
which.remove('docs')
print(action, 'documentation...')
if clean:
- if os.path.isdir(DOCS_OUT):
- shutil.rmtree(DOCS_OUT)
+ if DOCS_OUT.is_dir():
+ shutil.rmtree(str(DOCS_OUT))
else:
generate_docs(tlobjects, methods, layer, DOCS_IN_RES, DOCS_OUT)
if 'json' in which:
which.remove('json')
print(action, 'JSON schema...')
- mtproto = 'mtproto_api.json'
- telegram = 'telegram_api.json'
+ json_files = [x.with_suffix('.json') for x in TLOBJECT_IN_TLS]
if clean:
- for x in (mtproto, telegram):
- if os.path.isfile(x):
- os.remove(x)
+ for file in json_files:
+ if file.is_file():
+ file.unlink()
else:
def gen_json(fin, fout):
methods = []
@@ -131,8 +130,8 @@
with open(fout, 'w') as f:
json.dump(what, f, indent=2)
- gen_json(TLOBJECT_IN_CORE_TL, mtproto)
- gen_json(TLOBJECT_IN_TL, telegram)
+ for fin, fout in zip(TLOBJECT_IN_TLS, json_files):
+ gen_json(fin, fout)
if which:
print('The following items were not understood:', which)
@@ -156,22 +155,17 @@
print('Packaging for PyPi aborted, importing the module failed.')
return
- # Need python3.5 or higher, but Telethon is supposed to support 3.x
- # Place it here since noone should be running ./setup.py pypi anyway
- from subprocess import run
- from shutil import rmtree
-
for x in ('build', 'dist', 'Telethon.egg-info'):
- rmtree(x, ignore_errors=True)
+ shutil.rmtree(x, ignore_errors=True)
run('python3 setup.py sdist', shell=True)
run('python3 setup.py bdist_wheel', shell=True)
run('twine upload dist/*', shell=True)
for x in ('build', 'dist', 'Telethon.egg-info'):
- rmtree(x, ignore_errors=True)
+ shutil.rmtree(x, ignore_errors=True)
else:
# e.g. install from GitHub
- if os.path.isdir(GENERATOR_DIR):
+ if GENERATOR_DIR.is_dir():
generate(['tl', 'errors'])
# Get the long description from the README file
--- a/telethon_generator/docswriter.py
+++ b/telethon_generator/docswriter.py
@@ -2,0 +2,0 @@
class DocsWriter:
- """Utility class used to write the HTML files used on the documentation"""
- def __init__(self, filename, type_to_path):
- """Initializes the writer to the specified output file,
- creating the parent directories when used if required.
-
- 'type_to_path_function' should be a function which, given a type
- name and a named argument relative_to, returns the file path for
- the specified type, relative to the given filename
+ """
+ Utility class used to write the HTML files used on the documentation.
+ """
+ def __init__(self, root, filename, type_to_path):
"""
+ Initializes the writer to the specified output file,
+ creating the parent directories when used if required.
+ """
+ self.root = root
self.filename = filename
+ self._parent = str(self.filename.parent)
self.handle = None
+ self.title = ''
# Should be set before calling adding items to the menu
self.menu_separator_tag = None
- # Utility functions TODO There must be a better way
- self.type_to_path = lambda t: type_to_path(
- t, relative_to=self.filename
- )
+ # Utility functions
+ self.type_to_path = lambda t: self._rel(type_to_path(t))
# Control signals
self.menu_began = False
@@ -30,11 +30,20 @@
self.write_copy_script = False
self._script = ''
+ def _rel(self, path):
+ """
+ Get the relative path for the given path from the current
+ file by working around https://bugs.python.org/issue20012.
+ """
+ return os.path.relpath(str(path), self._parent)
+
# High level writing
- def write_head(self, title, relative_css_path, default_css):
+ def write_head(self, title, css_path, default_css):
"""Writes the head part for the generated document,
with the given title and CSS
"""
+ #
+ self.title = title
self.write(
'''<!DOCTYPE html>
<html>
@@ -54,17 +63,17 @@
<body>
<div id="main_div">''',
title=title,
- rel_css=relative_css_path.rstrip('/'),
+ rel_css=self._rel(css_path),
def_css=default_css
)
- def set_menu_separator(self, relative_image_path):
+ def set_menu_separator(self, img):
"""Sets the menu separator.
Must be called before adding entries to the menu
"""
- if relative_image_path:
- self.menu_separator_tag = \
- '<img src="{}" alt="/" />'.format(relative_image_path)
+ if img:
+ self.menu_separator_tag = '<img src="{}" alt="/" />'.format(
+ self._rel(img))
else:
self.menu_separator_tag = None
@@ -80,7 +89,7 @@
self.write('<li>')
if link:
- self.write('<a href="{}">', link)
+ self.write('<a href="{}">', self._rel(link))
# Write the real menu entry text
self.write(name)
@@ -210,7 +219,7 @@
if bold:
self.write('<b>')
if link:
- self.write('<a href="{}">', link)
+ self.write('<a href="{}">', self._rel(link))
# Finally write the real table data, the given text
self.write(text)
@@ -278,10 +287,7 @@
# With block
def __enter__(self):
# Sanity check
- parent = os.path.dirname(self.filename)
- if parent:
- os.makedirs(parent, exist_ok=True)
-
+ self.filename.parent.mkdir(parents=True, exist_ok=True)
self.handle = open(self.filename, 'w', encoding='utf-8')
return self
--- a/telethon_generator/generators/docs.py
+++ b/telethon_generator/generators/docs.py
@@ -1,7 +1,6 @@
#!/usr/bin/env python3
-import csv
import functools
-import os
import re
import shutil
from collections import defaultdict
+from pathlib import Path
from ..docswriter import DocsWriter
from ..parsers import TLObject, Usability
@@ -35,41 +34,33 @@
def _get_create_path_for(root, tlobject, make=True):
"""Creates and returns the path for the given TLObject at root."""
- out_dir = 'methods' if tlobject.is_function else 'constructors'
+ # TODO Can we pre-create all required directories?
+ out_dir = root / ('methods' if tlobject.is_function else 'constructors')
if tlobject.namespace:
- out_dir = os.path.join(out_dir, tlobject.namespace)
+ out_dir /= tlobject.namespace
- out_dir = os.path.join(root, out_dir)
if make:
- os.makedirs(out_dir, exist_ok=True)
- return os.path.join(out_dir, _get_file_name(tlobject))
+ out_dir.mkdir(parents=True, exist_ok=True)
+ return out_dir / _get_file_name(tlobject)
-def _get_path_for_type(root, type_, relative_to='.'):
+
+def _get_path_for_type(type_):
"""Similar to `_get_create_path_for` but for only type names."""
if type_.lower() in CORE_TYPES:
- path = 'index.html#%s' % type_.lower()
+ return Path('index.html#%s' % type_.lower())
elif '.' in type_:
namespace, name = type_.split('.')
- path = 'types/%s/%s' % (namespace, _get_file_name(name))
+ return Path('types', namespace, _get_file_name(name))
else:
- path = 'types/%s' % _get_file_name(type_)
-
- return _get_relative_path(os.path.join(root, path), relative_to)
-
-
-def _get_relative_path(destination, relative_to, folder=False):
- """Return the relative path to destination from relative_to."""
- if not folder:
- relative_to = os.path.dirname(relative_to)
-
- return os.path.relpath(destination, start=relative_to)
+ return Path('types', _get_file_name(type_))
def _find_title(html_file):
"""Finds the <title> for the given HTML file, or (Unknown)."""
- with open(html_file, 'r') as fp:
- for line in fp:
+ # TODO Is it necessary to read files like this?
+ with html_file.open() as f:
+ for line in f:
if '<title>' in line:
# + 7 to skip len('<title>')
return line[line.index('<title>') + 7:line.index('</title>')]
@@ -77,25 +68,27 @@
return '(Unknown)'
-def _build_menu(docs, filename, root, relative_main_index):
- """Builds the menu using the given DocumentWriter up to 'filename',
- which must be a file (it cannot be a directory)"""
- filename = _get_relative_path(filename, root)
- docs.add_menu('API', relative_main_index)
-
- items = filename.split('/')
- for i in range(len(items) - 1):
- item = items[i]
- link = '../' * (len(items) - (i + 2))
- link += 'index.html'
- docs.add_menu(item.title(), link=link)
+def _build_menu(docs):
+ """
+ Builds the menu used for the current ``DocumentWriter``.
+ """
+
+ paths = []
+ current = docs.filename
+ while current != docs.root:
+ current = current.parent
+ paths.append(current)
+
+ for path in reversed(paths):
+ docs.add_menu(path.stem.title(), link=path / 'index.html')
+
+ if docs.filename.stem != 'index':
+ docs.add_menu(docs.title, link=docs.filename)
- if items[-1] != 'index.html':
- docs.add_menu(os.path.splitext(items[-1])[0])
docs.end_menu()
-def _generate_index(folder, original_paths, root,
+def _generate_index(root, folder, paths,
bots_index=False, bots_index_paths=()):
"""Generates the index file for the specified folder"""
# Determine the namespaces listed here (as sub folders)
@@ -105,38 +98,24 @@
INDEX = 'index.html'
BOT_INDEX = 'botindex.html'
- if not bots_index:
- for item in os.listdir(folder):
- if os.path.isdir(os.path.join(folder, item)):
- namespaces.append(item)
- elif item not in (INDEX, BOT_INDEX):
- files.append(item)
- else:
- # bots_index_paths should be a list of "namespace/method.html"
- # or "method.html"
- for item in bots_index_paths:
- dirname = os.path.dirname(item)
- if dirname and dirname not in namespaces:
- namespaces.append(dirname)
- elif not dirname and item not in (INDEX, BOT_INDEX):
- files.append(item)
-
- paths = {k: _get_relative_path(v, folder, folder=True)
- for k, v in original_paths.items()}
+ for item in (bots_index_paths or folder.iterdir()):
+ if item.is_dir():
+ namespaces.append(item)
+ elif item.name not in (INDEX, BOT_INDEX):
+ files.append(item)
# Now that everything is setup, write the index.html file
- filename = os.path.join(folder, BOT_INDEX if bots_index else INDEX)
- with DocsWriter(filename, type_to_path=_get_path_for_type) as docs:
+ filename = folder / (BOT_INDEX if bots_index else INDEX)
+ with DocsWriter(root, filename, _get_path_for_type) as docs:
# Title should be the current folder name
- docs.write_head(folder.title(),
- relative_css_path=paths['css'],
- default_css=original_paths['default_css'])
+ docs.write_head(str(folder).title(),
+ css_path=paths['css'],
+ default_css=paths['default_css'])
docs.set_menu_separator(paths['arrow'])
- _build_menu(docs, filename, root,
- relative_main_index=paths['index_all'])
+ _build_menu(docs)
+ docs.write_title(str(filename.parent.relative_to(root)).title())
- docs.write_title(_get_relative_path(folder, root, folder=True).title())
if bots_index:
docs.write_text('These are the methods that you may be able to '
'use as a bot. Click <a href="{}">here</a> to '
@@ -153,24 +132,22 @@
namespace_paths = []
if bots_index:
for item in bots_index_paths:
- if os.path.dirname(item) == namespace:
- namespace_paths.append(os.path.basename(item))
- _generate_index(os.path.join(folder, namespace),
- original_paths, root,
+ if item.parent == namespace:
+ namespace_paths.append(item)
+
+ _generate_index(root, namespace, paths,
bots_index, namespace_paths)
- if bots_index:
- docs.add_row(namespace.title(),
- link=os.path.join(namespace, BOT_INDEX))
- else:
- docs.add_row(namespace.title(),
- link=os.path.join(namespace, INDEX))
+
+ docs.add_row(
+ namespace.stem.title(),
+ link=namespace / (BOT_INDEX if bots_index else INDEX))
docs.end_table()
docs.write_title('Available items')
docs.begin_table(2)
- files = [(f, _find_title(os.path.join(folder, f))) for f in files]
+ files = [(f, _find_title(f)) for f in files]
files.sort(key=lambda t: t[1])
for file, title in files:
@@ -231,7 +208,7 @@
))
-def _write_html_pages(tlobjects, methods, layer, input_res, output_dir):
+def _write_html_pages(root, tlobjects, methods, layer, input_res):
"""
Generates the documentation HTML files from from ``scheme.tl``
to ``/methods`` and ``/constructors``, etc.
@@ -239,21 +216,18 @@
# Save 'Type: [Constructors]' for use in both:
# * Seeing the return type or constructors belonging to the same type.
# * Generating the types documentation, showing available constructors.
- original_paths = {
- 'css': 'css',
- 'arrow': 'img/arrow.svg',
- 'search.js': 'js/search.js',
- '404': '404.html',
- 'index_all': 'index.html',
- 'bot_index': 'botindex.html',
- 'index_types': 'types/index.html',
- 'index_methods': 'methods/index.html',
- 'index_constructors': 'constructors/index.html'
- }
- original_paths = {k: os.path.join(output_dir, v)
- for k, v in original_paths.items()}
-
- original_paths['default_css'] = 'light' # docs.<name>.css, local path
+ paths = {k: root / v for k, v in (
+ ('css', 'css'),
+ ('arrow', 'img/arrow.svg'),
+ ('search.js', 'js/search.js'),
+ ('404', '404.html'),
+ ('index_all', 'index.html'),
+ ('bot_index', 'botindex.html'),
+ ('index_types', 'types/index.html'),
+ ('index_methods', 'methods/index.html'),
+ ('index_constructors', 'constructors/index.html')
+ )}
+ paths['default_css'] = 'light' # docs.<name>.css, local path
type_to_constructors = defaultdict(list)
type_to_functions = defaultdict(list)
for tlobject in tlobjects:
@@ -266,24 +240,20 @@
methods = {m.name: m for m in methods}
# Since the output directory is needed everywhere partially apply it now
- create_path_for = functools.partial(_get_create_path_for, output_dir)
- path_for_type = functools.partial(_get_path_for_type, output_dir)
+ create_path_for = functools.partial(_get_create_path_for, root)
+ path_for_type = lambda t: root / _get_path_for_type(t)
bot_docs_paths = []
for tlobject in tlobjects:
filename = create_path_for(tlobject)
- paths = {k: _get_relative_path(v, filename)
- for k, v in original_paths.items()}
-
- with DocsWriter(filename, type_to_path=path_for_type) as docs:
+ with DocsWriter(root, filename, path_for_type) as docs:
docs.write_head(title=tlobject.class_name,
- relative_css_path=paths['css'],
- default_css=original_paths['default_css'])
+ css_path=paths['css'],
+ default_css=paths['default_css'])
# Create the menu (path to the current TLObject)
docs.set_menu_separator(paths['arrow'])
- _build_menu(docs, filename, output_dir,
- relative_main_index=paths['index_all'])
+ _build_menu(docs)
# Create the page title
docs.write_title(tlobject.class_name)
@@ -333,9 +303,7 @@
inner = tlobject.result
docs.begin_table(column_count=1)
- docs.add_row(inner, link=path_for_type(
- inner, relative_to=filename
- ))
+ docs.add_row(inner, link=path_for_type(inner))
docs.end_table()
cs = type_to_constructors.get(inner, [])
@@ -349,7 +317,6 @@
docs.begin_table(column_count=2)
for constructor in cs:
link = create_path_for(constructor)
- link = _get_relative_path(link, relative_to=filename)
docs.add_row(constructor.class_name, link=link)
docs.end_table()
@@ -380,8 +347,8 @@
docs.add_row('!' + friendly_type, align='center')
else:
docs.add_row(
- friendly_type, align='center', link=
- path_for_type(arg.type, relative_to=filename)
+ friendly_type, align='center',
+ link=path_for_type(arg.type)
)
# Add a description for this argument
@@ -441,18 +408,13 @@
docs.add_script(relative_src=paths['search.js'])
docs.end_body()
- temp = []
- for item in bot_docs_paths:
- temp.append(os.path.sep.join(item.split(os.path.sep)[2:]))
- bot_docs_paths = temp
-
# Find all the available types (which are not the same as the constructors)
# Each type has a list of constructors associated to it, hence is a map
for t, cs in type_to_constructors.items():
filename = path_for_type(t)
- out_dir = os.path.dirname(filename)
+ out_dir = filename.parent
if out_dir:
- os.makedirs(out_dir, exist_ok=True)
+ out_dir.mkdir(parents=True, exist_ok=True)
# Since we don't have access to the full TLObject, split the type
if '.' in t:
@@ -460,17 +422,13 @@
else:
namespace, name = None, t
- paths = {k: _get_relative_path(v, out_dir, folder=True)
- for k, v in original_paths.items()}
-
- with DocsWriter(filename, type_to_path=path_for_type) as docs:
+ with DocsWriter(root, filename, path_for_type) as docs:
docs.write_head(title=snake_to_camel_case(name),
- relative_css_path=paths['css'],
- default_css=original_paths['default_css'])
+ css_path=paths['css'],
+ default_css=paths['default_css'])
docs.set_menu_separator(paths['arrow'])
- _build_menu(docs, filename, output_dir,
- relative_main_index=paths['index_all'])
+ _build_menu(docs)
# Main file title
docs.write_title(snake_to_camel_case(name))
@@ -489,7 +447,6 @@
for constructor in cs:
# Constructor full name
link = create_path_for(constructor)
- link = _get_relative_path(link, relative_to=filename)
docs.add_row(constructor.class_name, link=link)
docs.end_table()
@@ -509,7 +466,6 @@
docs.begin_table(2)
for func in functions:
link = create_path_for(func)
- link = _get_relative_path(link, relative_to=filename)
docs.add_row(func.class_name, link=link)
docs.end_table()
@@ -534,7 +490,6 @@
docs.begin_table(2)
for ot in other_methods:
link = create_path_for(ot)
- link = _get_relative_path(link, relative_to=filename)
docs.add_row(ot.class_name, link=link)
docs.end_table()
@@ -560,7 +515,6 @@
docs.begin_table(2)
for ot in other_types:
link = create_path_for(ot)
- link = _get_relative_path(link, relative_to=filename)
docs.add_row(ot.class_name, link=link)
docs.end_table()
docs.end_body()
@@ -570,11 +524,10 @@
# information that we have available, simply a file listing all the others
# accessible by clicking on their title
for folder in ['types', 'methods', 'constructors']:
- _generate_index(os.path.join(output_dir, folder), original_paths,
- output_dir)
+ _generate_index(root, root / folder, paths)
- _generate_index(os.path.join(output_dir, 'methods'), original_paths,
- output_dir, True, bot_docs_paths)
+ _generate_index(root, root / 'methods', paths, True,
+ bot_docs_paths)
# Write the final core index, the main index for the rest of files
types = set()
@@ -596,9 +549,8 @@
methods = sorted(methods, key=lambda m: m.name)
cs = sorted(cs, key=lambda c: c.name)
- shutil.copy(os.path.join(input_res, '404.html'), original_paths['404'])
- _copy_replace(os.path.join(input_res, 'core.html'),
- original_paths['index_all'], {
+ shutil.copy(str(input_res / '404.html'), str(paths['404']))
+ _copy_replace(input_res / 'core.html', paths['index_all'], {
'{type_count}': len(types),
'{method_count}': len(methods),
'{constructor_count}': len(tlobjects) - len(methods),
@@ -624,17 +576,15 @@
type_names = fmt(types, formatter=lambda x: x)
# Local URLs shouldn't rely on the output's root, so set empty root
- create_path_for = functools.partial(_get_create_path_for, '', make=False)
- path_for_type = functools.partial(_get_path_for_type, '')
+ create_path_for = functools.partial(
+ _get_create_path_for, Path(), make=False)
+
request_urls = fmt(methods, create_path_for)
- type_urls = fmt(types, path_for_type)
+ type_urls = fmt(types, _get_path_for_type)
constructor_urls = fmt(cs, create_path_for)
- os.makedirs(os.path.abspath(os.path.join(
- original_paths['search.js'], os.path.pardir
- )), exist_ok=True)
- _copy_replace(os.path.join(input_res, 'js', 'search.js'),
- original_paths['search.js'], {
+ paths['search.js'].parent.mkdir(parents=True, exist_ok=True)
+ _copy_replace(input_res / 'js/search.js', paths['search.js'], {
'{request_names}': request_names,
'{type_names}': type_names,
'{constructor_names}': constructor_names,
@@ -649,11 +599,11 @@
('img', ['arrow.svg'])]:
- dirpath = os.path.join(out_dir, dirname)
- os.makedirs(dirpath, exist_ok=True)
+ dirpath = out_dir / dirname
+ dirpath.mkdir(parents=True, exist_ok=True)
for file in files:
- shutil.copy(os.path.join(res_dir, dirname, file), dirpath)
+ shutil.copy(str(res_dir / dirname / file), str(dirpath))
def generate_docs(tlobjects, methods, layer, input_res, output_dir):
- os.makedirs(output_dir, exist_ok=True)
- _write_html_pages(tlobjects, methods, layer, input_res, output_dir)
+ output_dir.mkdir(parents=True, exist_ok=True)
+ _write_html_pages(output_dir, tlobjects, methods, layer, input_res)
_copy_resources(input_res, output_dir)
--- a/telethon_generator/generators/tlobject.py
+++ b/telethon_generator/generators/tlobject.py
@@ -48,9 +48,8 @@
def _write_modules(
out_dir, depth, kind, namespace_tlobjects, type_constructors):
# namespace_tlobjects: {'namespace', [TLObject]}
- os.makedirs(out_dir, exist_ok=True)
+ out_dir.mkdir(parents=True, exist_ok=True)
for ns, tlobjects in namespace_tlobjects.items():
- file = os.path.join(out_dir, '{}.py'.format(ns or '__init__'))
- with open(file, 'w', encoding='utf-8') as f,\
- SourceBuilder(f) as builder:
+ file = out_dir / '{}.py'.format(ns or '__init__')
+ with file.open('w') as f, SourceBuilder(f) as builder:
builder.writeln(AUTO_GEN_NOTICE)
builder.writeln('from {}.tl.tlobject import TLObject', '.' * depth)
@@ -635,11 +634,10 @@
def _write_patched(out_dir, namespace_tlobjects):
- os.makedirs(out_dir, exist_ok=True)
+ out_dir.mkdir(parents=True, exist_ok=True)
for ns, tlobjects in namespace_tlobjects.items():
- file = os.path.join(out_dir, '{}.py'.format(ns or '__init__'))
- with open(file, 'w', encoding='utf-8') as f,\
- SourceBuilder(f) as builder:
+ file = out_dir / '{}.py'.format(ns or '__init__')
+ with file.open('w') as f, SourceBuilder(f) as builder:
builder.writeln(AUTO_GEN_NOTICE)
builder.writeln('import struct')
@@ -715,26 +713,24 @@
if tlobject.fullname in PATCHED_TYPES:
namespace_patched[tlobject.namespace].append(tlobject)
- get_file = functools.partial(os.path.join, output_dir)
- _write_modules(get_file('functions'), import_depth, 'TLRequest',
+ _write_modules(output_dir / 'functions', import_depth, 'TLRequest',
namespace_functions, type_constructors)
- _write_modules(get_file('types'), import_depth, 'TLObject',
+ _write_modules(output_dir / 'types', import_depth, 'TLObject',
namespace_types, type_constructors)
- _write_patched(get_file('patched'), namespace_patched)
+ _write_patched(output_dir / 'patched', namespace_patched)
- filename = os.path.join(get_file('alltlobjects.py'))
- with open(filename, 'w', encoding='utf-8') as file:
+ filename = output_dir / 'alltlobjects.py'
+ with filename.open('w') as file:
with SourceBuilder(file) as builder:
_write_all_tlobjects(tlobjects, layer, builder)
def clean_tlobjects(output_dir):
- get_file = functools.partial(os.path.join, output_dir)
for d in ('functions', 'types'):
- d = get_file(d)
- if os.path.isdir(d):
- shutil.rmtree(d)
+ d = output_dir / d
+ if d.is_dir():
+ shutil.rmtree(str(d))
- tl = get_file('alltlobjects.py')
- if os.path.isfile(tl):
- os.remove(tl)
+ tl = output_dir / 'alltlobjects.py'
+ if tl.is_file():
+ tl.unlink()
--- a/telethon_generator/parsers/errors.py
+++ b/telethon_generator/parsers/errors.py
@@ -57,7 +57,7 @@
Parses the input CSV file with columns (name, error codes, description)
and yields `Error` instances as a result.
"""
- with open(csv_file, newline='') as f:
+ with csv_file.open(newline='') as f:
f = csv.reader(f)
next(f, None) # header
for line, (name, codes, description) in enumerate(f, start=2):
--- a/telethon_generator/parsers/methods.py
+++ b/telethon_generator/parsers/methods.py
@@ -30,7 +30,7 @@
Parses the input CSV file with columns (method, usability, errors)
and yields `MethodInfo` instances as a result.
"""
- with open(csv_file, newline='') as f:
+ with csv_file.open(newline='') as f:
f = csv.reader(f)
next(f, None) # header
for line, (method, usability, errors) in enumerate(f, start=2):
--- a/telethon_generator/parsers/tlobject/parser.py
+++ b/telethon_generator/parsers/tlobject/parser.py
@@ -86,7 +86,7 @@
obj_all = []
obj_by_name = {}
obj_by_type = collections.defaultdict(list)
- with open(file_path, 'r', encoding='utf-8') as file:
+ with file_path.open() as file:
is_function = False
for line in file:
comment_index = line.find('//')


@@ -1,6 +1,4 @@
cryptg
pysocks
python-socks[asyncio]
hachoir
pillow
isal


@@ -6,16 +6,14 @@ Installation
Telethon is a Python library, which means you need to download and install
Python from https://www.python.org/downloads/ if you haven't already. Once
you have Python installed, `upgrade pip`__ and run:
you have Python installed, run:
.. code-block:: sh
python3 -m pip install --upgrade pip
python3 -m pip install --upgrade telethon
pip3 install -U telethon --user
…to install or upgrade the library to the latest version.
To install or upgrade the library to the latest version.
.. __: https://pythonspeed.com/articles/upgrade-pip/
Installing Development Versions
===============================
@@ -25,7 +23,7 @@ you can run the following command instead:
.. code-block:: sh
python3 -m pip install --upgrade https://github.com/LonamiWebs/Telethon/archive/v1.zip
pip3 install -U https://github.com/LonamiWebs/Telethon/archive/master.zip --user
.. note::
@ -76,7 +74,7 @@ manually.
Some of the modules may require additional dependencies before being
installed through ``pip``. If you have an ``apt``-based system, consider
installing the most commonly missing dependencies (with the right ``pip``):
installing the most commonly missing dependencies:
.. code-block:: sh
@ -87,7 +85,6 @@ manually.
Thanks to `@bb010g`_ for writing down this nice list.
.. _cryptg: https://github.com/cher-nov/cryptg
.. _pyaes: https://github.com/ricmoo/pyaes
.. _pillow: https://python-pillow.org

View File

@ -20,27 +20,3 @@ that are worth learning and understanding.
From now on, you can keep pressing the "Next" button if you want,
or use the menu on the left, since some pages are quite lengthy.
A note on developing applications
=================================
If you're using the library to make an actual application (and not just
automate things), you should make sure to `comply with the ToS`__:
[…] when logging in as an existing user, apps are supposed to call
[:tl:`GetTermsOfServiceUpdate`] to check for any updates to the Terms of
Service; this call should be repeated after ``expires`` seconds have
elapsed. If an update to the Terms Of Service is available, clients are
supposed to show a consent popup; if accepted, clients should call
[:tl:`AcceptTermsOfService`], providing the ``termsOfService id`` JSON
object; in case of denial, clients are to delete the account using
[:tl:`DeleteAccount`], providing Decline ToS update as deletion reason.
.. __: https://core.telegram.org/api/config#terms-of-service
However, if you use the library to automate or enhance your Telegram
experience, it's very likely that you are using other applications doing this
check for you (so you wouldn't run the risk of violating the ToS).
The library itself will not automatically perform this check or accept the ToS
because it should require user action (the only exception is during sign-up).
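For reference, a rough sketch of that flow using Telethon's raw API is shown below. This is only an illustration: the request and field names come from the current layer and may change, and the consent prompt is a placeholder.

.. code-block:: python

    from telethon.tl import functions, types

    async def check_tos(client):
        tos = await client(functions.help.GetTermsOfServiceUpdateRequest())
        if isinstance(tos, types.help.TermsOfServiceUpdate):
            # Show tos.terms_of_service.text to the user and ask for consent.
            accepted = True  # placeholder for a real prompt
            if accepted:
                await client(functions.help.AcceptTermsOfServiceRequest(
                    tos.terms_of_service.id))
            else:
                # Declining means deleting the account, as the quote above says.
                await client(functions.account.DeleteAccountRequest(
                    'Decline ToS update'))
        # In either case, repeat the check once tos.expires is reached.
        return tos.expires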

View File

@ -19,7 +19,7 @@ use these if possible.
# Getting information about yourself
me = await client.get_me()
# "me" is a user object. You can pretty-print
# "me" is an User object. You can pretty-print
# any Telegram object with the "stringify" method:
print(me.stringify())
@ -41,7 +41,7 @@ use these if possible.
# ...to your contacts
await client.send_message('+34600123123', 'Hello, friend!')
# ...or even to any username
await client.send_message('username', 'Testing Telethon!')
await client.send_message('TelethonChat', 'Hello, Telethon!')
# You can, of course, use markdown in your messages:
message = await client.send_message(

View File

@ -117,12 +117,7 @@ Signing In behind a Proxy
=========================
If you need to use a proxy to access Telegram,
you will need to either:
* For Python >= 3.6 : `install python-socks[asyncio]`__
* For Python <= 3.5 : `install PySocks`__
and then change
you will need to `install PySocks`__ and then change:
.. code-block:: python
@ -132,48 +127,13 @@ with
.. code-block:: python
TelegramClient('anon', api_id, api_hash, proxy=("socks5", '127.0.0.1', 4444))
TelegramClient('anon', api_id, api_hash, proxy=(socks.SOCKS5, '127.0.0.1', 4444))
(of course, replacing the protocol, IP and port with the protocol, IP and port of the proxy).
(of course, replacing the IP and port with the IP and port of the proxy).
The ``proxy=`` argument should be a dict (or tuple, for backwards compatibility),
The ``proxy=`` argument should be a tuple, a list or a dict,
consisting of parameters described `in PySocks usage`__.
The allowed values for the argument ``proxy_type`` are:
* For Python <= 3.5:
* ``socks.SOCKS5`` or ``'socks5'``
* ``socks.SOCKS4`` or ``'socks4'``
* ``socks.HTTP`` or ``'http'``
* For Python >= 3.6:
* All of the above
* ``python_socks.ProxyType.SOCKS5``
* ``python_socks.ProxyType.SOCKS4``
* ``python_socks.ProxyType.HTTP``
Example:
.. code-block:: python
proxy = {
'proxy_type': 'socks5', # (mandatory) protocol to use (see above)
'addr': '1.1.1.1', # (mandatory) proxy IP address
'port': 5555, # (mandatory) proxy port number
'username': 'foo', # (optional) username if the proxy requires auth
'password': 'bar', # (optional) password if the proxy requires auth
'rdns': True # (optional) whether to use remote or local resolve, default remote
}
For backwards compatibility with ``PySocks`` the following format
is possible (but discouraged):
.. code-block:: python
proxy = (socks.SOCKS5, '1.1.1.1', 5555, True, 'foo', 'bar')
.. __: https://github.com/romis2012/python-socks#installation
.. __: https://github.com/Anorov/PySocks#installation
.. __: https://github.com/Anorov/PySocks#usage-1
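As a quick, self-contained sketch (assuming PySocks is installed, a SOCKS5 proxy is actually listening on 127.0.0.1:4444, and the ``api_id``/``api_hash`` placeholders are replaced with your own values):

.. code-block:: python

    import asyncio
    import socks
    from telethon import TelegramClient

    api_id = 12345                                   # placeholder
    api_hash = '0123456789abcdef0123456789abcdef'    # placeholder

    # protocol, host and port of the proxy; credentials may follow as extra items
    proxy = (socks.SOCKS5, '127.0.0.1', 4444)

    client = TelegramClient('anon', api_id, api_hash, proxy=proxy)

    async def main():
        async with client:
            print((await client.get_me()).username)

    asyncio.run(main())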

View File

@ -16,7 +16,7 @@ For that, you can use **events**.
.. code-block:: python
import logging
logging.basicConfig(format='[%(levelname) %(asctime)s] %(name)s: %(message)s',
logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s',
level=logging.WARNING)

View File

@ -40,22 +40,22 @@ because tasks are smaller than threads, which are smaller than processes.
What are asyncio basics?
========================
The code samples below assume that you have Python 3.7 or greater installed.
.. code-block:: python
# First we need the asyncio library
import asyncio
# Then we need a loop to work with
loop = asyncio.get_event_loop()
# We also need something to run
async def main():
for char in 'Hello, world!\n':
print(char, end='', flush=True)
await asyncio.sleep(0.2)
# Then, we can create a new asyncio loop and use it to run our coroutine.
# The creation and tear-down of the loop is hidden away from us.
asyncio.run(main())
# Then, we need to run the loop with a task
loop.run_until_complete(main())
What does telethon.sync do?
@ -101,7 +101,7 @@ Instead of this:
# or, using asyncio's default loop (it's the same)
import asyncio
loop = asyncio.get_running_loop() # == client.loop
loop = asyncio.get_event_loop() # == client.loop
me = loop.run_until_complete(client.get_me())
print(me.username)
@ -158,10 +158,13 @@ loops or use ``async with``:
print(message.sender.username)
asyncio.run(main())
# ^ this will create a new asyncio loop behind the scenes and tear it down
# once the function returns. It will run the loop until main finishes.
# You should only use this function if there is no other loop running.
loop = asyncio.get_event_loop()
# ^ this assigns the default event loop from the main thread to a variable
loop.run_until_complete(main())
# ^ this runs the *entire* loop until the main() function finishes.
# While the main() function does not finish, the loop will be running.
# While the loop is running, you can't run it again.
The ``await`` keyword blocks the *current* task, and the loop can run
@ -181,14 +184,14 @@ concurrently:
await asyncio.sleep(delay) # await tells the loop this task is "busy"
print('world') # eventually the loop finishes all tasks
async def main():
asyncio.create_task(world(2)) # create the world task, passing 2 as delay
asyncio.create_task(hello(delay=1)) # another task, but with delay 1
await asyncio.sleep(3) # wait for three seconds before exiting
loop = asyncio.get_event_loop() # get the default loop for the main thread
loop.create_task(world(2)) # create the world task, passing 2 as delay
loop.create_task(hello(delay=1)) # another task, but with delay 1
try:
# create a new temporary asyncio loop and use it to run main
asyncio.run(main())
# run the event loop forever; ctrl+c to stop it
# we could also run the loop for three seconds:
# loop.run_until_complete(asyncio.sleep(3))
loop.run_forever()
except KeyboardInterrupt:
pass
@ -206,15 +209,10 @@ The same example, but without the comment noise:
await asyncio.sleep(delay)
print('world')
async def main():
asyncio.create_task(world(2))
asyncio.create_task(hello(delay=1))
await asyncio.sleep(3)
try:
asyncio.run(main())
except KeyboardInterrupt:
pass
loop = asyncio.get_event_loop()
loop.create_task(world(2))
loop.create_task(hello(1))
loop.run_until_complete(asyncio.sleep(3))
Can I use threads?
@ -252,9 +250,9 @@ You may have seen this error:
RuntimeError: There is no current event loop in thread 'Thread-1'.
It just means you didn't create a loop for that thread. Please refer to
the ``asyncio`` documentation to correctly learn how to set the event loop
for non-main threads.
It just means you didn't create a loop for that thread, and if you don't
pass a loop when creating the client, it uses ``asyncio.get_event_loop()``,
which only works in the main thread.
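As an illustration, here is a small ``asyncio``-only sketch (no Telethon involved) of giving a non-main thread its own event loop:

.. code-block:: python

    import asyncio
    import threading

    async def work():
        await asyncio.sleep(0.1)
        return 'done in ' + threading.current_thread().name

    def worker():
        # Non-main threads have no event loop by default: create and set one.
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        try:
            print(loop.run_until_complete(work()))
        finally:
            loop.close()

    t = threading.Thread(target=worker, name='Thread-1')
    t.start()
    t.join()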
client.run_until_disconnected() blocks!
@ -312,26 +310,27 @@ you can run requests in parallel:
async def main():
last, sent, download_path = await asyncio.gather(
client.get_messages('telegram', 10),
client.send_message('me', 'Using asyncio!'),
client.download_profile_photo('telegram')
client.get_messages('TelethonChat', 10),
client.send_message('TelethonOfftopic', 'Hey guys!'),
client.download_profile_photo('TelethonChat')
)
loop.run_until_complete(main())
This code will get the 10 last messages from `@telegram
<https://t.me/telegram>`_, send one to the chat with yourself, and also
download the profile photo of the channel. `asyncio` will run all these
three tasks at the same time. You can run all the tasks you want this way.
This code will get the 10 last messages from `@TelethonChat
<https://t.me/TelethonChat>`_, send one to `@TelethonOfftopic
<https://t.me/TelethonOfftopic>`_, and also download the profile
photo of the main group. `asyncio` will run all these three tasks
at the same time. You can run all the tasks you want this way.
A different way would be:
.. code-block:: python
loop.create_task(client.get_messages('telegram', 10))
loop.create_task(client.send_message('me', 'Using asyncio!'))
loop.create_task(client.download_profile_photo('telegram'))
loop.create_task(client.get_messages('TelethonChat', 10))
loop.create_task(client.send_message('TelethonOfftopic', 'Hey guys!'))
loop.create_task(client.download_profile_photo('TelethonChat'))
They will run in the background as long as the loop is running too.
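If you later need their results (or want exceptions from them to surface), keep references to the tasks and await them. A sketch of that pattern, reusing the ``client`` and ``loop`` from the snippets above:

.. code-block:: python

    async def main():
        tasks = [
            asyncio.ensure_future(client.get_messages('TelethonChat', 10)),
            asyncio.ensure_future(client.download_profile_photo('TelethonChat')),
        ]
        # ...do other work while they run in the background...
        messages, photo_path = await asyncio.gather(*tasks)

    loop.run_until_complete(main())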

View File

@ -28,9 +28,6 @@ their own Telegram bots. Quoting their main page:
Bot API is simply an HTTP endpoint which translates your requests to it into
MTProto calls through tdlib_, their bot backend.
Configuration of your bot, such as its available commands and auto-completion,
is configured through `@BotFather <https://t.me/BotFather>`_.
What is MTProto?
================
@ -91,7 +88,7 @@ Next, we will see some examples from the most popular libraries.
Migrating from python-telegram-bot
----------------------------------
Let's take their `echobot.py`_ example and shorten it a bit:
Let's take their `echobot2.py`_ example and shorten it a bit:
.. code-block:: python
@ -110,7 +107,7 @@ Let's take their `echobot.py`_ example and shorten it a bit:
updater = Updater("TOKEN")
dp = updater.dispatcher
dp.add_handler(CommandHandler("start", start))
dp.add_handler(MessageHandler(Filters.text & ~Filters.command, echo))
dp.add_handler(MessageHandler(Filters.text, echo))
updater.start_polling()
@ -148,7 +145,7 @@ After using Telethon:
Key differences:
* The recommended way to do it imports fewer things.
* The recommended way to do it imports less things.
* All handlers trigger by default, so we need ``events.StopPropagation``.
* Adding handlers, responding and running is a lot less verbose.
* Telethon needs ``async def`` and ``await``.
@ -299,7 +296,7 @@ After rewriting:
class Subbot(TelegramClient):
def __init__(self, *a, **kw):
super().__init__(*a, **kw)
await super().__init__(*a, **kw)
self.add_event_handler(self.on_update, events.NewMessage)
async def connect():
@ -333,4 +330,4 @@ Key differences:
.. _aiohttp: https://docs.aiohttp.org/en/stable
.. _aiogram: https://aiogram.readthedocs.io
.. _dumbot: https://github.com/Lonami/dumbot
.. _echobot.py: https://github.com/python-telegram-bot/python-telegram-bot/blob/master/examples/echobot.py
.. _echobot2.py: https://github.com/python-telegram-bot/python-telegram-bot/blob/master/examples/echobot2.py

View File

@ -1,169 +0,0 @@
.. _chats-channels:
=================
Chats vs Channels
=================
Telegram's raw API can get very confusing sometimes, in particular when it
comes to talking about "chats", "channels", "groups", "megagroups", and all
those concepts.
This section will try to explain what each of these concepts are.
Chats
=====
A ``Chat`` can be used to talk about either the common "subclass" that both
chats and channels share, or the concrete :tl:`Chat` type.
Technically, both :tl:`Chat` and :tl:`Channel` are a form of the `Chat type`_.
**Most of the time**, the term :tl:`Chat` is used to talk about *small group
chats*. When you create a group through an official application, this is the
type that you get. Official applications refer to these as "Group".
Both the bot API and Telethon will add a minus sign to (negate) the real chat ID
so that you can tell at a glance, with just a number, the entity type.
For example, if you create a chat with :tl:`CreateChatRequest`, the real chat
ID might be something like `123`. If you try printing it from
`message.chat_id` you will see `-123`. This ID helps Telethon know you're
talking about a :tl:`Chat`.
Channels
========
Official applications create a *broadcast* channel when you create a new
channel (used to broadcast messages, only administrators can post messages).
Official applications implicitly *migrate* an *existing* :tl:`Chat` to a
*megagroup* :tl:`Channel` when you perform certain actions (exceed user limit,
add a public username, set certain permissions, etc.).
A ``Channel`` can be created directly with :tl:`CreateChannelRequest`, as
either a ``megagroup`` or ``broadcast``.
Official applications use the term "channel" **only** for broadcast channels.
The API refers to the different types of :tl:`Channel` with certain attributes:
* A **broadcast channel** is a :tl:`Channel` with the ``channel.broadcast``
attribute set to `True`.
* A **megagroup channel** is a :tl:`Channel` with the ``channel.megagroup``
attribute set to `True`. Official applications refer to this as "supergroup".
* A **gigagroup channel** is a :tl:`Channel` with the ``channel.gigagroup``
attribute set to `True`. Official applications refer to this as "broadcast
groups", and is used when a megagroup becomes very large and administrators
want to transform it into something where only they can post messages.
Both the bot API and Telethon will "concatenate" ``-100`` to the real chat ID
so that you can tell at a glance, with just a number, the entity type.
For example, if you create a new broadcast channel, the real channel ID might
be something like `456`. If you try printing it from `message.chat_id` you
will see `-1000000000456`. This ID helps Telethon know you're talking about a
:tl:`Channel`.
Converting IDs
==============
You can convert between the "marked" identifiers (prefixed with a minus sign)
and the real ones with ``utils.resolve_id``. It will return a tuple with the
real ID, and the peer type (the class):
.. code-block:: python
from telethon import utils
real_id, peer_type = utils.resolve_id(-1000000000456)
print(real_id) # 456
print(peer_type) # <class 'telethon.tl.types.PeerChannel'>
peer = peer_type(real_id)
print(peer) # PeerChannel(channel_id=456)
The reverse operation can be done with ``utils.get_peer_id``:
.. code-block:: python
print(utils.get_peer_id(types.PeerChannel(456))) # -1000000000456
Note that this function can also work with other types, like :tl:`Chat` or
:tl:`Channel` instances.
If you need to convert other types like usernames which might need to perform
API calls to find out the identifier, you can use ``client.get_peer_id``:
.. code-block:: python
print(await client.get_peer_id('me')) # your id
If there is no "mark" (no minus sign), Telethon will assume your identifier
refers to a :tl:`User`. If this is **not** the case, you can manually fix it:
.. code-block:: python
from telethon import types
await client.send_message(types.PeerChannel(456), 'hello')
# ^^^^^^^^^^^^^^^^^ explicit peer type
A note on raw API
=================
Certain methods only work on a :tl:`Chat`, and some others only work on a
:tl:`Channel` (and these may only work in broadcast, or megagroup). Your code
likely knows what it's working with, so it shouldn't be too much of an issue.
If you need to find the :tl:`Channel` from a :tl:`Chat` that migrated to it,
access the `migrated_to` property:
.. code-block:: python
# chat is a Chat
channel = await client.get_entity(chat.migrated_to)
# channel is now a Channel
Channels do not have a "migrated_from", but a :tl:`ChannelFull` does. You can
use :tl:`GetFullChannelRequest` to obtain this:
.. code-block:: python
from telethon import functions
full = await client(functions.channels.GetFullChannelRequest(your_channel))
full_channel = full.full_chat
# full_channel is a ChannelFull
print(full_channel.migrated_from_chat_id)
This way, you can also access the linked discussion megagroup of a broadcast channel:
.. code-block:: python
print(full_channel.linked_chat_id) # prints ID of linked discussion group or None
You do not need to use ``client.get_entity`` to access the
``migrated_from_chat_id`` :tl:`Chat` or the ``linked_chat_id`` :tl:`Channel`.
They are in the ``full.chats`` attribute:
.. code-block:: python
if full_channel.migrated_from_chat_id:
migrated_from_chat = next(c for c in full.chats if c.id == full_channel.migrated_from_chat_id)
print(migrated_from_chat.title)
if full_channel.linked_chat_id:
linked_group = next(c for c in full.chats if c.id == full_channel.linked_chat_id)
print(linked_group.username)
.. _Chat type: https://tl.telethon.dev/types/chat.html

View File

@ -268,7 +268,7 @@ That means you can do this:
.. code-block:: python
message.user_id
await message.get_input_sender()
await message.get_input_user()
message.user
# ...etc
@ -289,17 +289,17 @@ applications"? Now do the same with the library. Use what applies:
# (These examples assume you are inside an "async def")
async with client:
# Does it have a username? Use it!
# Does it have an username? Use it!
entity = await client.get_entity(username)
# Do you have a conversation open with them? Get dialogs.
await client.get_dialogs()
# Are they participant of some group? Get them.
await client.get_participants('username')
await client.get_participants('TelethonChat')
# Is the entity the original sender of a forwarded message? Get it.
await client.get_messages('username', 100)
await client.get_messages('TelethonChat', 100)
# NOW you can use the ID, anywhere!
await client.send_message(123456, 'Hi!')

View File

@ -150,6 +150,6 @@ You can also except it and act as you prefer:
VoIP numbers are very limited, and some countries are more limited too.
.. _list of known errors: https://github.com/LonamiWebs/Telethon/blob/v1/telethon_generator/data/errors.csv
.. _list of known errors: https://github.com/LonamiWebs/Telethon/blob/master/telethon_generator/data/errors.csv
.. _raw API page: https://tl.telethon.dev/
.. _messages.sendMessage: https://tl.telethon.dev/methods/messages/send_message.html

View File

@ -10,20 +10,13 @@ The Full API
methods listed on :ref:`client-ref` unless you have a better reason
not to, like a method not existing or you wanting more control.
.. contents::
Introduction
============
The :ref:`telethon-client` doesn't offer a method for every single request
the Telegram API supports. However, it's very simple to *call* or *invoke*
any request defined in Telegram's API.
any request. Whenever you need something, don't forget to `check the documentation`_
and look for the `method you need`_. There you can go through a sorted list
of everything you can do.
This section will teach you how to use what Telethon calls the `TL reference`_.
The linked page contains a list and a way to search through *all* types
generated from the definition of Telegram's API (in ``.tl`` file format,
hence the name). These types include requests and constructors.
.. note::
@ -32,193 +25,19 @@ hence the name). These types include requests and constructors.
as you type, and a "Copy import" button. If you like namespaces, you
can also do ``from telethon.tl import types, functions``. Both work.
Telegram makes these ``.tl`` files public, which other implementations, such
as Telethon, can also use to generate code. These files are versioned under
what's called "layers". ``.tl`` files consist of thousands of definitions,
and newer layers often add, change, or remove them. Each definition refers
to either a Remote Procedure Call (RPC) function, or a type (which the
`TL reference`_ calls "constructors", as they construct particular type
instances).
As such, the `TL reference`_ is a good place to go to learn about all possible
requests, types, and what they look like. If you're curious about what's been
changed between layers, you can refer to the `TL diff`_ site.
.. important::
All the examples in this documentation assume that you have
``from telethon import sync`` or ``import telethon.sync`` for the
sake of simplicity and that you understand what it does (see
:ref:`compatibility-and-convenience` for more). Simply add
either line at the beginning of your project and it will work.
Navigating the TL reference
===========================
Functions
---------
"Functions" is the term used for the Remote Procedure Calls (RPC) that can be
sent to Telegram to ask it to perform something (e.g. "send message"). These
requests have an associated return type. These can be invoked ("called"):
.. code-block:: python
client = TelegramClient(...)
function_instance = SomeRequest(...)
# Invoke the request
returned_type = await client(function_instance)
Whenever you find the type for a function in the `TL reference`_, the page
will contain the following information:
* What type of account can use the method. This information is regenerated
from time to time (by attempting to invoke the function under both account
types and finding out where it fails). Some requests can only be used by
bot accounts, others by user accounts, and others by both.
* The TL definition. This helps you get a feel for what the function
looks like. This is not Python code. It just contains the definition in
a concise manner.
* "Copy import" button. Does what it says: it will copy the necessary Python
code to import the function to your system's clipboard for easy access.
* Returns. The returned type. When you invoke the function, this is what the
result will be. It also includes which of the constructors can be returned
inline, to save you a click.
* Parameters. The parameters accepted by the function, including their type,
whether they expect a list, and whether they're optional.
* Known RPC errors. A best-effort list of known errors the request may cause.
This list is not complete and may be out of date, but should provide an
overview of what could go wrong.
* Example. Autogenerated example, showcasing how you may want to call it.
Bear in mind that this is *autogenerated*. It may be spitting out nonsense.
The goal of this example is not to show you everything you can do with the
request, only to give you a feel for what it looks like to use it.
It is very important to click through the links and navigate to get the full
picture. A specific page will show you what the specific function returns and
needs as input parameters. But it may reference other types, so you need to
navigate to those to learn what those contain or need.
Types
-----
"Types" as understood by TL are not actually generated in Telethon.
They would be the "abstract base class" of the constructors, but since Python
is duck-typed, there is hardly any need to generate mostly unnecessary code.
The page for a type contains:
* Constructors. Every type will have one or more constructors. These
constructors *are* generated and can be imported and used.
* Requests returning this type. A helpful way to find out "what requests can
return this?". This is how you may learn what request you need to use to
obtain a particular instance of a type.
* Requests accepting this type as input. A helpful way to find out "what
requests can use this type as one of their input parameters?". This is how
you may learn where a type is used.
* Other types containing this type. A helpful way to find out "where else
does this type appear?". This is how you can walk back through nested
objects.
Constructors
------------
Constructors are used to create instances of a particular type, and are also
returned when invoking requests. You will have to create instances yourself
when invoking requests that need a particular type as input.
The page for a constructor contains:
* Belongs to. The parent type. This is a link back to the types page for the
specific constructor. It also contains the sibling constructors inline, to
save you a click.
* Members. Both the input parameters *and* fields the constructor contains.
Using the TL reference
======================
After you've found a request you want to send, a good start would be to simply
copy and paste the autogenerated example into your script. Then you can simply
tweak it to your needs.
If you want to do it from scratch, first, make sure to import the request into
your code (either using the "Copy import" button near the top, or by manually
spelling out the package under ``telethon.tl.functions.*``).
Then, start reading the parameters one by one. If the parameter cannot be
omitted, you **will** need to specify it, so make sure to spell it out as
an input parameter when constructing the request instance. Let's look at
`PingRequest`_ for example. First, we copy the import:
.. code-block:: python
from telethon.tl.functions import PingRequest
Then, we look at the parameters:
ping_id - long
A single parameter, and it's a long (an integer number with a large range of
values). It doesn't say it can be omitted, so we must provide it, like so:
.. code-block:: python
PingRequest(
ping_id=48641868471
)
(In this case, the ping ID is a random number. You often have to guess what
the parameter needs just by looking at the name.)
Now that we have our request, we can invoke it:
.. code-block:: python
response = await client(PingRequest(
ping_id=48641868471
))
To find out what ``response`` looks like, we can do as the autogenerated
example suggests and "stringify" the result as a pretty-printed string:
.. code-block:: python
print(response.stringify())
This will print out the following:
.. code-block:: python
Pong(
msg_id=781875678118,
ping_id=48641868471
)
Which is a very easy way to get a feel for a response. You should nearly
always print the stringified result, at least once, when trying out requests,
to get a feel for what the response may look like.
But of course, you don't need to do that. Without writing any code, you could
have navigated through the "Returns" link to learn ``PingRequest`` returns a
``Pong``, which only has one constructor, and the constructor has two members,
``msg_id`` and ``ping_id``.
If you wanted to create your own ``Pong``, you would use both members as input
parameters:
.. code-block:: python
my_pong = Pong(
msg_id=781875678118,
ping_id=48641868471
)
(Yes, constructing object instances can use the same code that ``.stringify``
would return!)
And if you wanted to access the ``msg_id`` member, you would simply access it
like any other attribute access in Python:
.. code-block:: python
print(response.msg_id)
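Putting the whole walkthrough together, a complete (if not very useful) script could look like the sketch below; ``api_id`` and ``api_hash`` are placeholders for your own values:

.. code-block:: python

    import asyncio
    import random

    from telethon import TelegramClient
    from telethon.tl.functions import PingRequest

    api_id = 12345                                   # placeholder
    api_hash = '0123456789abcdef0123456789abcdef'    # placeholder

    async def main():
        async with TelegramClient('anon', api_id, api_hash) as client:
            response = await client(PingRequest(ping_id=random.randrange(2 ** 63)))
            print(response.stringify())   # Pong(msg_id=..., ping_id=...)
            print(response.msg_id)

    asyncio.run(main())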
Example walkthrough
===================
You should also refer to the documentation to see what the objects
(constructors) Telegram returns look like. Every constructor inherits
from a common type, and that's the reason for this distinction.
Say `client.send_message()
<telethon.client.messages.MessageMethods.send_message>` didn't exist,
@ -414,7 +233,6 @@ and still access the successful results:
# The second request failed.
second = e.exceptions[1]
.. _TL reference: https://tl.telethon.dev
.. _TL diff: https://diff.telethon.dev
.. _PingRequest: https://tl.telethon.dev/methods/ping.html
.. _check the documentation: https://tl.telethon.dev
.. _method you need: https://tl.telethon.dev/methods/index.html
.. _use the search: https://tl.telethon.dev/?q=message&redirect=no

View File

@ -143,7 +143,7 @@ output (likely your terminal).
.. warning::
**Keep this string safe!** Anyone with this string can use it
to login into your account and do anything they want to.
to login into your account and do anything they want to to do.
This is similar to leaking your ``*.session`` files online,
but it is easier to leak a string than it is to leak a file.

View File

@ -3,7 +3,7 @@ String-based Debugging
======================
Debugging is *really* important. Telegram's API is really big and there
are a lot of things that you should know. Such as, what attributes or fields
is a lot of things that you should know. Such as, what attributes or fields
does a result have? Well, the easiest thing to do is printing it:
.. code-block:: python
@ -32,7 +32,7 @@ Can we get better than the shown string, though? Yes!
print(entity.stringify())
Will show a much better representation:
Will show a much better:
.. code-block:: python

View File

@ -191,7 +191,8 @@ so the code above and the following are equivalent:
async def main():
await client.disconnected
asyncio.run(main())
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
You could also run `client.disconnected
@ -206,7 +207,7 @@ Notice that unlike `client.disconnected
<telethon.client.telegrambaseclient.TelegramBaseClient.disconnected>`,
`client.run_until_disconnected
<telethon.client.updates.UpdateMethods.run_until_disconnected>` will
handle ``KeyboardInterrupt`` for you. This method is special and can
handle ``KeyboardInterrupt`` with you. This method is special and can
also be ran while the loop is running, so you can do this:
.. code-block:: python

View File

@ -85,7 +85,7 @@ release = version
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = 'en'
language = None
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.

View File

@ -2,12 +2,81 @@
Telegram API in Other Languages
===============================
Telethon was made for **Python**, and it has inspired other libraries such as
`gramjs <https://github.com/gram-js/gramjs>`__ (JavaScript) and `grammers
<https://github.com/Lonami/grammers>`__ (Rust). But there is a lot more beyond
those, made independently by different developers.
If you're looking for something like Telethon but in a different programming
language, head over to `Telegram API in Other Languages in the official wiki
<https://github.com/LonamiWebs/Telethon/wiki/Telegram-API-in-Other-Languages>`__
for a (mostly) up-to-date list.
Telethon was made for **Python**, and as far as I know, there is no
*exact* port to other languages. However, there *are* other
implementations made by awesome people (one needs to be awesome to
understand the official Telegram documentation) in several languages
(including more Python ones), listed below:
C
=
Possibly the most well-known unofficial open source implementation out
there by `@vysheng <https://github.com/vysheng>`__,
`tgl <https://github.com/vysheng/tgl>`__, and its console client
`telegram-cli <https://github.com/vysheng/tg>`__. Latest development
has been moved to `BitBucket <https://bitbucket.org/vysheng/tdcli>`__.
C++
===
The newest (and official) library, written from scratch, is called
`tdlib <https://github.com/tdlib/td>`__ and is what the Telegram X
uses. You can find more information in the official documentation,
published `here <https://core.telegram.org/tdlib/docs/>`__.
JavaScript
==========
`Ali Gasymov <https://github.com/alik0211>`__ made the `@mtproto/core <https://github.com/alik0211/mtproto-core>`__ library for the browser and nodejs installable via `npm <https://www.npmjs.com/package/@mtproto/core>`__.
`painor <https://github.com/painor>`__ is the primary author of `gramjs <https://github.com/gram-js/gramjs>`__,
a Telegram client implementation in JavaScript.
Kotlin
======
`Kotlogram <https://github.com/badoualy/kotlogram>`__ is a Telegram
implementation written in Kotlin (one of the
`official <https://blog.jetbrains.com/kotlin/2017/05/kotlin-on-android-now-official/>`__
languages for
`Android <https://developer.android.com/kotlin/index.html>`__) by
`@badoualy <https://github.com/badoualy>`__; it is currently in beta but working.
Language-Agnostic
=================
`Taas <https://www.t-a-a-s.ru/>`__ is a service that lets you use the Telegram API from any HTTP client. It uses tdlib under the hood; Taas is a commercial service, but allows free access if you make fewer than 5000 requests per month.
PHP
===
A PHP implementation is also available thanks to
`@danog <https://github.com/danog>`__ and his
`MadelineProto <https://github.com/danog/MadelineProto>`__ project, with
a very nice `online
documentation <https://daniil.it/MadelineProto/API_docs/>`__ too.
Python
======
A fairly new (as of the end of 2017) Telegram library written from the
ground up in Python by
`@delivrance <https://github.com/delivrance>`__ and his
`Pyrogram <https://github.com/pyrogram/pyrogram>`__ library.
There isn't really a reason to pick it over Telethon and it'd be kinda
sad to see you go, but it would be nice to know what you miss from
either library so that both can improve.
Rust
====
The `grammers <https://github.com/Lonami/grammers>`__ library is made by
the `same author as Telethon's <https://github.com/Lonami>`__! If you are
looking for a Telethon alternative written in Rust, this is a valid option!
Another older, work-in-progress implementation, on Rust is made by
`@JuanPotato <https://github.com/JuanPotato>`__ under the fancy
name of `Vail <https://github.com/JuanPotato/Vail>`__.

View File

@ -35,7 +35,3 @@ times, in this case, ``22222`` so we can hardcode that:
client.start(
phone='9996621234', code_callback=lambda: '22222'
)
Note that Telegram has changed the length of login codes multiple times in the
past, so if ``dc_id`` repeated five times does not work, try repeating it six
times.

View File

@ -84,10 +84,6 @@ use is very straightforward, or :tl:`InviteToChannelRequest` for channels:
[users_to_add]
))
Note that this method will only really work for friends or bot accounts.
Trying to mass-add users with this approach will not work, and can put both
your account and group to risk, possibly being flagged as spam and limited.
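Since the snippet above is only partially visible in this diff, here is a hedged, more complete sketch; ``your_channel`` and the usernames are placeholders, and the call is meant to run inside an ``async def``:

.. code-block:: python

    from telethon.tl.functions.channels import InviteToChannelRequest

    users_to_add = ['username1', 'username2']   # placeholder usernames

    await client(InviteToChannelRequest(
        your_channel,   # e.g. obtained earlier via client.get_entity(...)
        users_to_add
    ))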
Checking a link without joining
===============================

View File

@ -25,7 +25,7 @@ you should use :tl:`GetFullUser`:
# or even
full = await client(GetFullUserRequest('username'))
bio = full.full_user.about
bio = full.about
See :tl:`UserFull` to know what other fields you can access.
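For context, the surrounding example in full could look like the sketch below; the ``getattr`` fallback covers both spellings shown in this diff (newer layers nest the data under ``full_user``, older ones do not):

.. code-block:: python

    from telethon.tl.functions.users import GetFullUserRequest

    full = await client(GetFullUserRequest('username'))
    bio = getattr(full, 'full_user', full).about
    print(bio)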
@ -71,4 +71,4 @@ through :tl:`UploadProfilePhoto`:
await client(UploadProfilePhotoRequest(
await client.upload_file('/path/to/some/file')
))
)))

View File

@ -2,12 +2,44 @@
Working with messages
=====================
.. note::
These examples assume you have read :ref:`full-api`.
This section has been `moved to the wiki`_, where it can be easily edited as new
features arrive and the API changes. Please refer to the linked page to learn how
to send spoilers, custom emoji, stickers, react to messages, and more things.
.. contents::
.. _moved to the wiki: https://github.com/LonamiWebs/Telethon/wiki/Sending-more-than-just-messages
Sending stickers
================
Stickers are nothing else than ``files``, and when you successfully retrieve
the stickers for a certain sticker set, all you will have are ``handles`` to
these files. Remember, the files Telegram holds on their servers can be
referenced through this pair of ID/hash (unique per user), and you need to
use this handle when sending a "document" message. This working example will
send yourself the very first sticker you have:
.. code-block:: python
# Get all the sticker sets this user has
from telethon.tl.functions.messages import GetAllStickersRequest
sticker_sets = await client(GetAllStickersRequest(0))
# Choose a sticker set
from telethon.tl.functions.messages import GetStickerSetRequest
from telethon.tl.types import InputStickerSetID
sticker_set = sticker_sets.sets[0]
# Get the stickers for this sticker set
stickers = await client(GetStickerSetRequest(
stickerset=InputStickerSetID(
id=sticker_set.id, access_hash=sticker_set.access_hash
)
))
# Stickers are nothing more than files, so send that
await client.send_file('me', stickers.documents[0])
.. _issues: https://github.com/LonamiWebs/Telethon/issues/215

View File

@ -68,7 +68,6 @@ You can also use the menu on the left to quickly skip over sections.
concepts/strings
concepts/entities
concepts/chats-vs-channels
concepts/updates
concepts/sessions
concepts/full-api
@ -103,6 +102,7 @@ You can also use the menu on the left to quickly skip over sections.
:caption: Miscellaneous
misc/changelog
misc/wall-of-shame.rst
misc/compatibility-and-convenience
.. toctree::

File diff suppressed because it is too large

View File

@ -161,17 +161,19 @@ just get rid of ``telethon.sync`` and work inside an ``async def``:
await client.run_until_disconnected()
asyncio.run(main())
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
The ``telethon.sync`` magic module essentially wraps every method behind:
The ``telethon.sync`` magic module simply wraps every method behind:
.. code-block:: python
asyncio.run(main())
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
With some other tricks, so that you don't have to write it yourself every time.
That's the overhead you pay if you import it, and what you save if you don't.
So that you don't have to write it yourself every time. That's the
overhead you pay if you import it, and what you save if you don't.
Learning
========

View File

@ -0,0 +1,65 @@
=============
Wall of Shame
=============
This project has an
`issues <https://github.com/LonamiWebs/Telethon/issues>`__ section for
you to file **issues** whenever you encounter any when working with the
library. Said section is **not** for issues on *your* program but rather
issues with Telethon itself.
If you have not made the effort to 1. read through the docs and 2.
`look for the method you need <https://tl.telethon.dev/>`__,
you will end up on the `Wall of
Shame <https://github.com/LonamiWebs/Telethon/issues?q=is%3Aissue+label%3ARTFM+is%3Aclosed>`__,
i.e. all issues labeled
`"RTFM" <http://www.urbandictionary.com/define.php?term=RTFM>`__:
**rtfm**
Literally "Read The F--king Manual"; a term showing the
frustration of being bothered with questions so trivial that the asker
could have quickly figured out the answer on their own with minimal
effort, usually by reading readily-available documents. People who
say"RTFM!" might be considered rude, but the true rude ones are the
annoying people who take absolutely no self-responibility and expect to
have all the answers handed to them personally.
*"Damn, that's the twelveth time that somebody posted this question
to the messageboard today! RTFM, already!"*
*by Bill M. July 27, 2004*
If you have indeed read the docs, and have tried looking for the method,
and yet you didn't find what you need, **that's fine**. Telegram's API
can have some obscure names at times, and for this reason, there is a
`"question"
label <https://github.com/LonamiWebs/Telethon/issues?utf8=%E2%9C%93&q=is%3Aissue%20is%3Aclosed%20label%3Aquestion%20>`__
with questions that are okay to ask. Just state what you've tried so
that we know you've made an effort, or you'll go to the Wall of Shame.
Of course, if the issue you're going to open is not even a question but
a real issue with the library (thankfully, most of the issues have been
that!), you won't end up here. Don't worry.
Current winner
--------------
The current winner is `issue
213 <https://github.com/LonamiWebs/Telethon/issues/213>`__:
**Issue:**
.. figure:: https://user-images.githubusercontent.com/6297805/29822978-9a9a6ef0-8ccd-11e7-9ec5-934ea0f57681.jpg
:alt: Winner issue
Winner issue
**Answer:**
.. figure:: https://user-images.githubusercontent.com/6297805/29822983-9d523402-8ccd-11e7-9fb1-5783740ee366.jpg
:alt: Winner issue answer
Winner issue answer

View File

@ -136,19 +136,10 @@ MessageButton
:show-inheritance:
ParticipantPermissions
======================
.. automodule:: telethon.tl.custom.participantpermissions
:members:
:undoc-members:
:show-inheritance:
QRLogin
=======
.. automodule:: telethon.tl.custom.qrlogin
.. automodule:: telethon.qrlogin
:members:
:undoc-members:
:show-inheritance:

View File

@ -9,7 +9,7 @@ you may need when using Telethon. They are sorted by relevance and are not in
alphabetical order.
You should use this page to learn about which methods are available, and
if you need a usage example or further description of the arguments, be
if you need an usage example or further description of the arguments, be
sure to follow the links.
.. contents::
@ -32,6 +32,7 @@ Auth
send_code_request
sign_in
qr_login
sign_up
log_out
edit_2fa
@ -48,7 +49,6 @@ Base
is_connected
disconnected
loop
set_proxy
Messages
--------
@ -65,7 +65,6 @@ Messages
iter_messages
get_messages
pin_message
unpin_message
send_read_acknowledge
Uploads
@ -140,7 +139,6 @@ Chats
get_profile_photos
edit_admin
edit_permissions
get_permissions
get_stats
action
@ -168,7 +166,6 @@ Updates
remove_event_handler
list_event_handlers
catch_up
set_receive_updates
Bots
----

View File

@ -20,7 +20,7 @@ To enable logging, at the following code to the top of your main file:
.. code-block:: python
import logging
logging.basicConfig(format='[%(levelname) %(asctime)s] %(name)s: %(message)s',
logging.basicConfig(format='[%(levelname) 5s/%(asctime)s] %(name)s: %(message)s',
level=logging.WARNING)
You can change the logging level to be something different, from less to more information:
@ -60,16 +60,6 @@ And except them as such:
My account was deleted/limited when using the library
=====================================================
First and foremost, **this is not a problem exclusive to Telethon.
Any third-party library is prone to cause the accounts to appear banned.**
Even official applications can make Telegram ban an account under certain
circumstances. Third-party libraries such as Telethon are a lot easier to
use, and as such, they are misused to spam, which causes Telegram to learn
certain patterns and ban suspicious activity.
There is no point in Telethon trying to circumvent this. Even if it succeeded,
spammers would then abuse the library again, and the cycle would repeat.
The library will only do things that you tell it to do. If you use
the library with bad intentions, Telegram will hopefully ban you.
@ -77,7 +67,8 @@ However, you may also be part of a limited country, such as Iran or Russia.
In that case, we have bad news for you. Telegram is much more likely to ban
these numbers, as they are often used to spam other accounts, likely through
the use of libraries like this one. The best advice we can give you is to not
abuse the API, like calling many requests really quickly.
abuse the API, like calling many requests really quickly, and to sign up with
these phones through an official application.
We have also had reports from Kazakhstan and China, where connecting
would fail. To solve these connection problems, you should use a proxy.
@ -85,16 +76,6 @@ would fail. To solve these connection problems, you should use a proxy.
Telegram may also ban virtual (VoIP) phone numbers,
as again, they're likely to be used for spam.
More recently (year 2023 onwards), Telegram has started putting a lot more
measures to prevent spam (with even additions such as anonymous participants
in groups or the inability to fetch group members at all). This means some
of the anti-spam measures have gotten more aggressive.
The recommendation has usually been to use the library only on well-established
accounts (and not an account you just created), and to not perform actions that
could be seen as abuse. Telegram decides what those actions are, and they're
free to change how they operate at any time.
If you want to check if your account has been limited,
simply send a private message to `@SpamBot`_ through Telegram itself.
You should notice this by getting errors like ``PeerFloodError``,
@ -198,137 +179,6 @@ won't do unnecessary work unless you need to:
sender = await event.get_sender()
File download is slow or sending files takes too long
=====================================================
The communication with Telegram is encrypted. Encryption requires a lot of
math, and doing it in pure Python is very slow. ``cryptg`` is a library which
contains the encryption functions used by Telethon. If it is installed (via
``pip install cryptg``), it will automatically be used and should provide
a considerable speed boost. You can know whether it's used by configuring
``logging`` (at ``INFO`` level or lower) *before* importing ``telethon``.
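A minimal sketch of that check (the logging call must run before anything imports ``telethon``):

.. code-block:: python

    import logging
    logging.basicConfig(level=logging.INFO)

    import telethon  # the import logs whether cryptg was detected and will be used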
Note that the library does *not* download or upload files in parallel; doing
so could further speed up transferring a single file, and there are snippets
online implementing that. The reason why this is not built-in
is because the limiting factor in the long run is ``FloodWaitError``, and
using parallel downloads or uploads only makes them occur sooner.
What does "Server sent a very new message with ID" mean?
========================================================
You may also see this error as "Server sent a very old message with ID".
This is a security feature from Telethon that cannot be disabled and is
meant to protect you against replay attacks.
When this message is incorrectly reported as a "bug",
the most common patterns seem to be:
* Your system time is incorrect.
* The proxy you're using may be interfering somehow.
* The Telethon session is being used or has been used from somewhere else.
Make sure that you created the session from Telethon, and are not using the
same session anywhere else. If you need to use the same account from
multiple places, login and use a different session for each place you need.
What does "Server replied with a wrong session ID" mean?
========================================================
This is a security feature from Telethon that cannot be disabled and is
meant to protect you against unwanted session reuse.
When this message is reported as a "bug", the most common patterns seem to be:
* The proxy you're using may be interfering somehow.
* The Telethon session is being used or has been used from somewhere else.
Make sure that you created the session from Telethon, and are not using the
same session anywhere else. If you need to use the same account from
multiple places, login and use a different session for each place you need.
* You may be using multiple connections to the Telegram server, which seems
to confuse Telegram.
Most of the time it should be safe to ignore this warning. If the library
still doesn't behave correctly, make sure to check if any of the above bullet
points applies in your case and try to work around it.
If the issue persists and there is a way to reliably reproduce this error,
please add a comment with any additional details you can provide to
`issue 3759`_, and perhaps some additional investigation can be done
(but it's unlikely, as Telegram *is* sending unexpected data).
What does "Could not find a matching Constructor ID for the TLObject" mean?
===========================================================================
Telegram uses "layers", which you can think of as "versions" of the API they
offer. When Telethon reads responses that the Telegram servers send, these
need to be deserialized (into what Telethon calls "TLObjects").
Every Telethon version understands a single Telegram layer. When Telethon
connects to Telegram, both agree on the layer to use. If the layers don't
match, Telegram may send certain objects which Telethon no longer understands.
When this message is reported as a "bug", the most common patterns seem to be
that the Telethon session is being used or has been used from somewhere else.
Make sure that you created the session from Telethon, and are not using the
same session anywhere else. If you need to use the same account from
multiple places, login and use a different session for each place you need.
What does "Task was destroyed but it is pending" mean?
======================================================
Your script likely finished abruptly, the ``asyncio`` event loop got
destroyed, and the library did not get a chance to properly close the
connection and close the session.
Make sure you're either using the context manager for the client or always
call ``await client.disconnect()`` (by e.g. using a ``try/finally``).
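For example, a small sketch of the ``try``/``finally`` variant (``api_id`` and ``api_hash`` are placeholders):

.. code-block:: python

    import asyncio
    from telethon import TelegramClient

    api_id = 12345                                   # placeholder
    api_hash = '0123456789abcdef0123456789abcdef'    # placeholder

    client = TelegramClient('anon', api_id, api_hash)

    async def main():
        await client.connect()
        try:
            ...  # your actual code
        finally:
            await client.disconnect()

    asyncio.run(main())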
What does "The asyncio event loop must not change after connection" mean?
=========================================================================
Telethon uses ``asyncio``, and makes use of things like tasks and queues
internally to manage the connection to the server and match responses to the
requests you make. Most of them are initialized after the client is connected.
For example, if the library expects a result to a request made in loop A, but
you attempt to get that result in loop B, you will very likely find a deadlock.
To avoid a deadlock, the library checks to make sure the loop in use is the
same as the one used to initialize everything, and if not, it throws an error.
The most common cause is ``asyncio.run``, since it creates a new event loop.
If you ``asyncio.run`` a function to create the client and set it up, and then
you ``asyncio.run`` another function to do work, things won't work, so the
library throws an error early to let you know something is wrong.
Instead, it's often a good idea to have a single ``async def main`` and simply
``asyncio.run()`` it and do all the work there. From it, you're also able to
call other ``async def`` without having to touch ``asyncio.run`` again:
.. code-block:: python
# It's fine to create the client outside as long as you don't connect
client = TelegramClient(...)
async def main():
# Now the client will connect, so the loop must not change from now on.
# But as long as you do all the work inside main, including calling
# other async functions, things will work.
async with client:
...
if __name__ == '__main__':
asyncio.run(main())
Be sure to read the ``asyncio`` documentation if you want a better
understanding of event loop, tasks, and what functions you can use.
What does "bases ChatGetter" mean?
==================================
@ -354,36 +204,6 @@ Telegram has a lot to offer, and inheritance helps the library reduce
boilerplate, so it's important to know this concept. For newcomers,
this may be a problem, so we explain what it means here in the FAQ.
Can I send files by ID?
=======================
When people talk about IDs, they often refer to one of two things:
the integer ID inside media, and a random-looking long string.
You cannot use the integer ID to send media. Generally speaking, sending media
requires a combination of ID, ``access_hash`` and ``file_reference``.
The first two are integers, while the last one is a random ``bytes`` sequence.
* The integer ``id`` will always be the same for every account, so every user
or bot looking at a particular media file, will see a consistent ID.
* The ``access_hash`` will always be the same for a given account, but
different accounts will each see their own, different ``access_hash``.
This makes it impossible to get media object from one account and use it in
another. The other account must fetch the media object itself.
* The ``file_reference`` is random for everyone and will only work for a few
hours before it expires. It must be refetched before the media can be used
(to either resend the media or download it).
The second type of "`file ID <https://core.telegram.org/bots/api#inputfile>`_"
people refer to is a concept from the HTTP Bot API. It's a custom format which
encodes enough information to use the media.
Telethon provides an old version of these HTTP Bot API-style file IDs via
``message.file.id``; however, this feature is no longer maintained, so it may
not work. It will be removed in future versions. Nonetheless, it is possible
to find a different Python package (or write your own) to parse these file IDs
and construct the necessary input file objects to send or download the media.
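To illustrate those three pieces, here is a hedged sketch of rebuilding an input file from a message this same account fetched recently; ``msg`` is assumed to be a message whose media is a document, and the file reference must still be fresh:

.. code-block:: python

    from telethon.tl.types import InputDocument

    doc = msg.media.document
    handle = InputDocument(
        id=doc.id,                          # same integer for every account
        access_hash=doc.access_hash,        # specific to this account
        file_reference=doc.file_reference,  # random and short-lived
    )
    await client.send_file('me', handle)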
Can I use Flask with the library?
=================================
@ -419,5 +239,4 @@ file and run that, or use the normal ``python`` interpreter.
.. _logging: https://docs.python.org/3/library/logging.html
.. _@SpamBot: https://t.me/SpamBot
.. _issue 297: https://github.com/LonamiWebs/Telethon/issues/297
.. _issue 3759: https://github.com/LonamiWebs/Telethon/issues/3759
.. _quart_login.py: https://github.com/LonamiWebs/Telethon/tree/v1/telethon_examples#quart_loginpy
.. _quart_login.py: https://github.com/LonamiWebs/Telethon/tree/master/telethon_examples#quart_loginpy

View File

@ -1,2 +1 @@
./
sphinx-rtd-theme~=1.3.0
telethon

View File

@ -15,15 +15,12 @@ import json
import os
import re
import shutil
import sys
import urllib.request
from pathlib import Path
from subprocess import run
from sys import argv
from setuptools import find_packages, setup
# Needed since we're importing local files
sys.path.insert(0, os.path.dirname(__file__))
class TempWorkDir:
"""Switches the working directory to be the one on which this file lives,
@ -44,8 +41,6 @@ class TempWorkDir:
os.chdir(self.original)
API_REF_URL = 'https://tl.telethon.dev/'
GENERATOR_DIR = Path('telethon_generator')
LIBRARY_DIR = Path('telethon')
@ -153,46 +148,23 @@ def generate(which, action='gen'):
)
def main(argv):
def main():
if len(argv) >= 2 and argv[1] in ('gen', 'clean'):
generate(argv[2:], argv[1])
elif len(argv) >= 2 and argv[1] == 'pypi':
# Make sure tl.telethon.dev is up-to-date first
with urllib.request.urlopen(API_REF_URL) as resp:
html = resp.read()
m = re.search(br'layer\s+(\d+)', html)
if not m:
print('Failed to check that the API reference is up to date:', API_REF_URL)
return
from telethon_generator.parsers import find_layer
layer = next(filter(None, map(find_layer, TLOBJECT_IN_TLS)))
published_layer = int(m[1])
if published_layer != layer:
print('Published layer', published_layer, 'does not match current layer', layer, '.')
print('Make sure to update the API reference site first:', API_REF_URL)
return
# (Re)generate the code to make sure we don't push without it
generate(['tl', 'errors'])
# Try importing the telethon module to assert it has no errors
try:
import telethon
except Exception as e:
except:
print('Packaging for PyPi aborted, importing the module failed.')
print(e)
return
remove_dirs = ['__pycache__', 'build', 'dist', 'Telethon.egg-info']
for root, _dirs, _files in os.walk(LIBRARY_DIR, topdown=False):
# setuptools is including __pycache__ for some reason (#1605)
if root.endswith('/__pycache__'):
remove_dirs.append(root)
for x in remove_dirs:
for x in ('build', 'dist', 'Telethon.egg-info'):
shutil.rmtree(x, ignore_errors=True)
run('python3 setup.py sdist', shell=True)
run('python3 setup.py bdist_wheel', shell=True)
run('twine upload dist/*', shell=True)
@ -244,13 +216,11 @@ def main(argv):
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: 3.8',
'Programming Language :: Python :: 3.6'
],
keywords='telegram api chat client library messaging mtproto',
packages=find_packages(exclude=[
'telethon_*', 'tests*'
'telethon_*', 'run_tests.py', 'try_telethon.py'
]),
install_requires=['pyaes', 'rsa'],
extras_require={
@ -261,4 +231,4 @@ def main(argv):
if __name__ == '__main__':
with TempWorkDir():
main(sys.argv)
main()

View File

@ -1,8 +1,8 @@
from .client.telegramclient import TelegramClient
from .network import connection
from .tl import types, functions, custom
from .tl.custom import Button
from .tl import patched as _ # import for its side-effects
from . import version, events, utils, errors, types, functions, custom
from . import version, events, utils, errors
__version__ = version.__version__

View File

@ -1,3 +0,0 @@
from .entitycache import EntityCache
from .messagebox import MessageBox, GapError, PrematureEndReason
from .session import SessionState, ChannelState, Entity, EntityType

View File

@ -1,59 +0,0 @@
from .session import EntityType, Entity
_sentinel = object()
class EntityCache:
def __init__(
self,
hash_map: dict = _sentinel,
self_id: int = None,
self_bot: bool = None
):
self.hash_map = {} if hash_map is _sentinel else hash_map
self.self_id = self_id
self.self_bot = self_bot
def set_self_user(self, id, bot, hash):
self.self_id = id
self.self_bot = bot
if hash:
self.hash_map[id] = (hash, EntityType.BOT if bot else EntityType.USER)
def get(self, id):
try:
hash, ty = self.hash_map[id]
return Entity(ty, id, hash)
except KeyError:
return None
def extend(self, users, chats):
# See https://core.telegram.org/api/min for "issues" with "min constructors".
self.hash_map.update(
(u.id, (
u.access_hash,
EntityType.BOT if u.bot else EntityType.USER,
))
for u in users
if getattr(u, 'access_hash', None) and not u.min
)
self.hash_map.update(
(c.id, (
c.access_hash,
EntityType.MEGAGROUP if c.megagroup else (
EntityType.GIGAGROUP if getattr(c, 'gigagroup', None) else EntityType.CHANNEL
),
))
for c in chats
if getattr(c, 'access_hash', None) and not getattr(c, 'min', None)
)
def put(self, entity):
self.hash_map[entity.id] = (entity.hash, entity.ty)
def retain(self, filter):
self.hash_map = {k: v for k, v in self.hash_map.items() if filter(k)}
def __len__(self):
return len(self.hash_map)

View File

@ -1,825 +0,0 @@
"""
This module deals with correct handling of updates, including gaps, and knowing when the code
should "get difference" (the set of updates that the client should know by now minus the set
of updates that it actually knows).
Each chat has its own [`Entry`] in the [`MessageBox`] (this `struct` is the "entry point").
At any given time, the message box may be either getting difference for them (entry is in
[`MessageBox::getting_diff_for`]) or not. If not getting difference, a possible gap may be
found for the updates (entry is in [`MessageBox::possible_gaps`]). Otherwise, the entry is
on its happy path.
Gaps are cleared either when they resolve on their own (after waiting for a short time)
or when the difference for the corresponding entry has been fetched.
While there are entries for which their difference must be fetched,
[`MessageBox::check_deadlines`] will always return [`Instant::now`], since "now" is the time
to get the difference.
"""
import asyncio
import datetime
import time
import logging
from enum import Enum
from .session import SessionState, ChannelState
from ..tl import types as tl, functions as fn
from ..helpers import get_running_loop
# Telegram sends `seq` equal to `0` when "it doesn't matter", so we use that value too.
NO_SEQ = 0
# See https://core.telegram.org/method/updates.getChannelDifference.
BOT_CHANNEL_DIFF_LIMIT = 100000
USER_CHANNEL_DIFF_LIMIT = 100
# > It may be useful to wait up to 0.5 seconds
POSSIBLE_GAP_TIMEOUT = 0.5
# After how long without updates the client will "timeout".
#
# When this timeout occurs, the client will attempt to fetch updates by itself, ignoring all the
# updates that arrive in the meantime. Once all of those updates have been fetched, the
# client will resume normal operation, and the timeout will reset.
#
# Documentation recommends 15 minutes without updates (https://core.telegram.org/api/updates).
NO_UPDATES_TIMEOUT = 15 * 60
# object() but with a tag to make it easier to debug
class Sentinel:
__slots__ = ('tag',)
def __init__(self, tag=None):
self.tag = tag or '_'
def __repr__(self):
return self.tag
# Entry "enum".
# Account-wide `pts` includes private conversations (one-to-one) and small group chats.
ENTRY_ACCOUNT = Sentinel('ACCOUNT')
# Account-wide `qts` includes only "secret" one-to-one chats.
ENTRY_SECRET = Sentinel('SECRET')
# Integers will be Channel-specific `pts`, and include "megagroup", "broadcast" and "supergroup" channels.
# Python's logging doesn't define a TRACE level. Pick halfway between DEBUG and NOTSET.
# We don't define a name for this as libraries shouldn't do that though.
LOG_LEVEL_TRACE = (logging.DEBUG - logging.NOTSET) // 2
_sentinel = Sentinel()
def next_updates_deadline():
return get_running_loop().time() + NO_UPDATES_TIMEOUT
def epoch():
return datetime.datetime(*time.gmtime(0)[:6]).replace(tzinfo=datetime.timezone.utc)
class GapError(ValueError):
def __repr__(self):
return 'GapError()'
class PrematureEndReason(Enum):
TEMPORARY_SERVER_ISSUES = 'tmp'
BANNED = 'ban'
# Represents the information needed to correctly handle a specific `tl::enums::Update`.
class PtsInfo:
__slots__ = ('pts', 'pts_count', 'entry')
def __init__(
self,
pts: int,
pts_count: int,
entry: object
):
self.pts = pts
self.pts_count = pts_count
self.entry = entry
@classmethod
def from_update(cls, update):
pts = getattr(update, 'pts', None)
if pts:
pts_count = getattr(update, 'pts_count', None) or 0
try:
entry = update.message.peer_id.channel_id
except AttributeError:
entry = getattr(update, 'channel_id', None) or ENTRY_ACCOUNT
return cls(pts=pts, pts_count=pts_count, entry=entry)
qts = getattr(update, 'qts', None)
if qts:
return cls(pts=qts, pts_count=1, entry=ENTRY_SECRET)
return None
def __repr__(self):
return f'PtsInfo(pts={self.pts}, pts_count={self.pts_count}, entry={self.entry})'
# The state of a particular entry in the message box.
class State:
__slots__ = ('pts', 'deadline')
def __init__(
self,
# Current local persistent timestamp.
pts: int,
# Next instant when we would get the update difference if no updates arrived before then.
deadline: float
):
self.pts = pts
self.deadline = deadline
def __repr__(self):
return f'State(pts={self.pts}, deadline={self.deadline})'
# > ### Recovering gaps
# > […] Manually obtaining updates is also required in the following situations:
# > • Loss of sync: a gap was found in `seq` / `pts` / `qts` (as described above).
# > It may be useful to wait up to 0.5 seconds in this situation and abort the sync in case a new update
# > arrives, that fills the gap.
#
# This is really easy to trigger by spamming messages in a channel (as few as 3 members is enough), because
# the updates produced by the RPC request take a while to arrive (whereas the read update comes faster alone).
class PossibleGap:
__slots__ = ('deadline', 'updates')
def __init__(
self,
deadline: float,
# Pending updates (those with a larger PTS, producing the gap which may later be filled).
updates: list # of updates
):
self.deadline = deadline
self.updates = updates
def __repr__(self):
return f'PossibleGap(deadline={self.deadline}, update_count={len(self.updates)})'
# Represents a "message box" (event `pts` for a specific entry).
#
# See https://core.telegram.org/api/updates#message-related-event-sequences.
class MessageBox:
__slots__ = ('_log', 'map', 'date', 'seq', 'next_deadline', 'possible_gaps', 'getting_diff_for')
def __init__(
self,
log,
# Map each entry to their current state.
map: dict = _sentinel, # entry -> state
# Additional fields beyond PTS needed by `ENTRY_ACCOUNT`.
date: datetime.datetime = epoch() + datetime.timedelta(seconds=1),
seq: int = NO_SEQ,
# Holds the entry with the closest deadline (optimization to avoid recalculating the minimum deadline).
next_deadline: object = None, # entry
# Which entries have a gap and may soon trigger a need to get difference.
#
# If a gap is found, stores the required information to resolve it (when should it timeout and what updates
# should be held in case the gap is resolved on its own).
#
# Not stored directly in `map` as an optimization (else we would need another way of knowing which entries have
# a gap in them).
possible_gaps: dict = _sentinel, # entry -> possiblegap
# For which entries are we currently getting difference.
getting_diff_for: set = _sentinel, # entry
):
self._log = log
self.map = {} if map is _sentinel else map
self.date = date
self.seq = seq
self.next_deadline = next_deadline
self.possible_gaps = {} if possible_gaps is _sentinel else possible_gaps
self.getting_diff_for = set() if getting_diff_for is _sentinel else getting_diff_for
if __debug__:
self._trace('MessageBox initialized')
def _trace(self, msg, *args, **kwargs):
# Calls to trace can't really be removed beforehand without some dark magic.
# So every call to trace is prefixed with `if __debug__`` instead, to remove
# it when using `python -O`. Probably unnecessary, but it's nice to avoid
# paying the cost for something that is not used.
self._log.log(LOG_LEVEL_TRACE, 'Current MessageBox state: seq = %r, date = %s, map = %r',
self.seq, self.date.isoformat(), self.map)
self._log.log(LOG_LEVEL_TRACE, msg, *args, **kwargs)
# region Creation, querying, and setting base state.
def load(self, session_state, channel_states):
"""
Create a [`MessageBox`] from a previously known update state.
"""
if __debug__:
self._trace('Loading MessageBox with session_state = %r, channel_states = %r', session_state, channel_states)
deadline = next_updates_deadline()
self.map.clear()
if session_state.pts != NO_SEQ:
self.map[ENTRY_ACCOUNT] = State(pts=session_state.pts, deadline=deadline)
if session_state.qts != NO_SEQ:
self.map[ENTRY_SECRET] = State(pts=session_state.qts, deadline=deadline)
self.map.update((s.channel_id, State(pts=s.pts, deadline=deadline)) for s in channel_states)
self.date = datetime.datetime.fromtimestamp(session_state.date, tz=datetime.timezone.utc)
self.seq = session_state.seq
self.next_deadline = ENTRY_ACCOUNT
def session_state(self):
"""
Return the current state.
This should be used for persisting the state.
"""
return dict(
pts=self.map[ENTRY_ACCOUNT].pts if ENTRY_ACCOUNT in self.map else NO_SEQ,
qts=self.map[ENTRY_SECRET].pts if ENTRY_SECRET in self.map else NO_SEQ,
date=self.date,
seq=self.seq,
), {id: state.pts for id, state in self.map.items() if isinstance(id, int)}
def is_empty(self) -> bool:
"""
Return true if the message box is empty and has no state yet.
"""
return ENTRY_ACCOUNT not in self.map
def check_deadlines(self):
"""
Return the next deadline when receiving updates should timeout.
If a deadline expired, the corresponding entries will be marked as needing to get its difference.
While there are entries whose difference is still pending, this method returns the current instant.
"""
now = get_running_loop().time()
if self.getting_diff_for:
return now
deadline = next_updates_deadline()
# Most of the time there will be zero or one gap in flight so finding the minimum is cheap.
if self.possible_gaps:
deadline = min(deadline, *(gap.deadline for gap in self.possible_gaps.values()))
elif self.next_deadline in self.map:
deadline = min(deadline, self.map[self.next_deadline].deadline)
# asyncio's loop time precision only seems to be about 3 decimal places, so it's possible that
# we find the same number again on repeated calls. Without the "or equal" part we would log the
# timeout for updates several times (it also makes sense to get difference if now is the deadline).
if now >= deadline:
# Check all expired entries and add them to the list that needs getting difference.
self.getting_diff_for.update(entry for entry, gap in self.possible_gaps.items() if now >= gap.deadline)
self.getting_diff_for.update(entry for entry, state in self.map.items() if now >= state.deadline)
if __debug__:
self._trace('Deadlines met, now getting diff for %r', self.getting_diff_for)
# When extending `getting_diff_for`, it's important to have the moral equivalent of
# `begin_get_diff` (that is, clear possible gaps if we're now getting difference).
for entry in self.getting_diff_for:
self.possible_gaps.pop(entry, None)
return deadline
# Reset the deadline for the periods without updates for the given entries.
#
# It also updates the next deadline time to reflect the new closest deadline.
def reset_deadlines(self, entries, deadline):
if not entries:
return
for entry in entries:
if entry not in self.map:
raise RuntimeError('Called reset_deadline on an entry for which we do not have state')
self.map[entry].deadline = deadline
if self.next_deadline in entries:
# If the updated deadline was the closest one, recalculate the new minimum.
self.next_deadline = min(self.map.items(), key=lambda entry_state: entry_state[1].deadline)[0]
elif self.next_deadline in self.map and deadline < self.map[self.next_deadline].deadline:
# If the updated deadline is smaller than the next deadline, change the next deadline to be the new one.
# Any entry will do, so the one from the last iteration is fine.
self.next_deadline = entry
# else an unrelated deadline was updated, so the closest one remains unchanged.
# Convenience to reset a channel's deadline, with optional timeout.
def reset_channel_deadline(self, channel_id, timeout):
self.reset_deadlines({channel_id}, get_running_loop().time() + (timeout or NO_UPDATES_TIMEOUT))
# Sets the update state.
#
# Should be called right after login if [`MessageBox::new`] was used, otherwise undesirable
# updates will be fetched.
def set_state(self, state, reset=True):
if __debug__:
self._trace('Setting state %s', state)
deadline = next_updates_deadline()
if state.pts != NO_SEQ or not reset:
self.map[ENTRY_ACCOUNT] = State(pts=state.pts, deadline=deadline)
else:
self.map.pop(ENTRY_ACCOUNT, None)
# Telegram seems to use the `qts` for bot accounts, but while applying difference,
# it might be reset back to 0. See issue #3873 for more details.
#
# During login, a value of zero would mean the `pts` is unknown,
# so the map shouldn't contain that entry.
# But while applying difference, if the value is zero, it (probably)
# truly means that's what should be used (hence the `reset` flag).
if state.qts != NO_SEQ or not reset:
self.map[ENTRY_SECRET] = State(pts=state.qts, deadline=deadline)
else:
self.map.pop(ENTRY_SECRET, None)
self.date = state.date
self.seq = state.seq
# Like [`MessageBox::set_state`], but for channels. Useful when getting dialogs.
#
# The update state will only be updated if no entry was known previously.
def try_set_channel_state(self, id, pts):
if __debug__:
self._trace('Trying to set channel state for %r: %r', id, pts)
if id not in self.map:
self.map[id] = State(pts=pts, deadline=next_updates_deadline())
# Try to begin getting difference for the given entry.
# Fails if the entry does not have a previously-known state that can be used to get its difference.
#
# Clears any previous gaps.
def try_begin_get_diff(self, entry, reason):
if entry not in self.map:
# Won't actually be able to get difference for this entry if we don't have a pts to start off from.
if entry in self.possible_gaps:
raise RuntimeError('Should not have a possible_gap for an entry not in the state map')
if __debug__:
self._trace('Should get difference for %r because %s but cannot due to missing hash', entry, reason)
return
if __debug__:
self._trace('Marking %r as needing difference because %s', entry, reason)
self.getting_diff_for.add(entry)
self.possible_gaps.pop(entry, None)
# Finish getting difference for the given entry.
#
# It also resets the deadline.
def end_get_diff(self, entry):
try:
self.getting_diff_for.remove(entry)
except KeyError:
raise RuntimeError('Called end_get_diff on an entry which was not getting diff for')
self.reset_deadlines({entry}, next_updates_deadline())
assert entry not in self.possible_gaps, "gaps shouldn't be created while getting difference"
# endregion Creation, querying, and setting base state.
# region "Normal" updates flow (processing and detection of gaps).
# Process an update and return what should be done with it.
#
# Updates corresponding to entries for which their difference is currently being fetched
# will be ignored. While according to the [updates' documentation]:
#
# > Implementations [have] to postpone updates received via the socket while
# > filling gaps in the event and `Update` sequences, as well as avoid filling
# > gaps in the same sequence.
#
# In practice, these updates should have also been retrieved through getting difference.
#
# [updates documentation] https://core.telegram.org/api/updates
def process_updates(
self,
updates,
chat_hashes,
result, # out list of updates; returns list of user, chat, or raise if gap
):
# v1 has never sent updates produced by the client itself to the handlers.
# However proper update handling requires those to be processed.
# This is an ugly workaround for that.
self_outgoing = getattr(updates, '_self_outgoing', False)
real_result = result
result = []
date = getattr(updates, 'date', None)
seq = getattr(updates, 'seq', None)
seq_start = getattr(updates, 'seq_start', None)
users = getattr(updates, 'users', None) or []
chats = getattr(updates, 'chats', None) or []
if __debug__:
self._trace('Processing updates with seq = %r, seq_start = %r, date = %s: %s',
seq, seq_start, date.isoformat() if date else None, updates)
if date is None:
# updatesTooLong is the only one with no date (we treat it as a gap)
self.try_begin_get_diff(ENTRY_ACCOUNT, 'received updatesTooLong')
raise GapError
if seq is None:
seq = NO_SEQ
if seq_start is None:
seq_start = seq
# updateShort is the only update which cannot be dispatched directly but doesn't have 'updates' field
updates = getattr(updates, 'updates', None) or [updates.update if isinstance(updates, tl.UpdateShort) else updates]
for u in updates:
u._self_outgoing = self_outgoing
# > For all the other [not `updates` or `updatesCombined`] `Updates` type constructors
# > there is no need to check `seq` or change a local state.
if seq_start != NO_SEQ:
if self.seq + 1 > seq_start:
# Skipping updates that were already handled
if __debug__:
self._trace('Skipping updates as they should have already been handled')
return (users, chats)
elif self.seq + 1 < seq_start:
# Gap detected
self.try_begin_get_diff(ENTRY_ACCOUNT, 'detected gap')
raise GapError
# else apply
def _sort_gaps(update):
pts = PtsInfo.from_update(update)
return pts.pts - pts.pts_count if pts else 0
reset_deadlines = set() # temporary buffer
result.extend(filter(None, (
self.apply_pts_info(u, reset_deadlines=reset_deadlines)
# Telegram can send updates out of order (e.g. ReadChannelInbox first
# and then NewChannelMessage, both with the same pts, but the count is
# 0 and 1 respectively), so we sort them first.
for u in sorted(updates, key=_sort_gaps))))
self.reset_deadlines(reset_deadlines, next_updates_deadline())
if self.possible_gaps:
if __debug__:
self._trace('Trying to re-apply %r possible gaps', len(self.possible_gaps))
# For each update in possible gaps, see if the gap has been resolved already.
for key in list(self.possible_gaps.keys()):
self.possible_gaps[key].updates.sort(key=_sort_gaps)
for _ in range(len(self.possible_gaps[key].updates)):
update = self.possible_gaps[key].updates.pop(0)
# If this fails to apply, it will get re-inserted at the end.
# All should fail, so the order will be preserved (it would've cycled once).
update = self.apply_pts_info(update, reset_deadlines=None)
if update:
result.append(update)
if __debug__:
self._trace('Resolved gap with %r: %s', PtsInfo.from_update(update), update)
# Clear now-empty gaps.
self.possible_gaps = {entry: gap for entry, gap in self.possible_gaps.items() if gap.updates}
real_result.extend(u for u in result if not u._self_outgoing)
if result and not self.possible_gaps:
# > If the updates were applied, local *Updates* state must be updated
# > with `seq` (unless it's 0) and `date` from the constructor.
if __debug__:
self._trace('Updating seq as all updates were applied')
if date != epoch():
self.date = date
if seq != NO_SEQ:
self.seq = seq
return (users, chats)
# Tries to apply the input update if its `PtsInfo` follows the correct order.
#
# If the update can be applied, it is returned; otherwise, the update is stored in a
# possible gap (unless it was already handled or would be handled through getting
# difference) and `None` is returned.
def apply_pts_info(
self,
update,
*,
reset_deadlines,
):
# This update means we need to call getChannelDifference to get the updates from the channel
if isinstance(update, tl.UpdateChannelTooLong):
self.try_begin_get_diff(update.channel_id, 'received updateChannelTooLong')
return None
pts = PtsInfo.from_update(update)
if not pts:
# No pts means that the update can be applied in any order.
if __debug__:
self._trace('No pts in update, so it can be applied in any order: %s', update)
return update
# As soon as we receive an update of any form related to messages (has `PtsInfo`),
# the "no updates" period for that entry is reset.
#
# Build the `HashSet` to avoid calling `reset_deadline` more than once for the same entry.
#
# By the time this method returns, self.map will have an entry for which we can reset its deadline.
if reset_deadlines:
reset_deadlines.add(pts.entry)
if pts.entry in self.getting_diff_for:
# Note: early returning here also prevents gap from being inserted (which they should
# not be while getting difference).
if __debug__:
self._trace('Skipping update with %r as its difference is being fetched', pts)
return None
if pts.entry in self.map:
local_pts = self.map[pts.entry].pts
if local_pts + pts.pts_count > pts.pts:
# Ignore
if __debug__:
self._trace('Skipping update since local pts %r > %r: %s', local_pts, pts, update)
return None
elif local_pts + pts.pts_count < pts.pts:
# Possible gap
# TODO store chats too?
if __debug__:
self._trace('Possible gap since local pts %r < %r: %s', local_pts, pts, update)
if pts.entry not in self.possible_gaps:
self.possible_gaps[pts.entry] = PossibleGap(
deadline=get_running_loop().time() + POSSIBLE_GAP_TIMEOUT,
updates=[]
)
self.possible_gaps[pts.entry].updates.append(update)
return None
else:
# Apply
if __debug__:
self._trace('Applying update pts since local pts %r = %r: %s', local_pts, pts, update)
# In a channel, we may immediately receive:
# * ReadChannelInbox (pts = X, pts_count = 0)
# * NewChannelMessage (pts = X, pts_count = 1)
#
# Notice how both `pts` are the same. If they were to be applied out of order, the first
# one however would've triggered a gap because `local_pts` + `pts_count` of 0 would be
# less than `remote_pts`. So there is no risk by setting the `local_pts` to match the
# `remote_pts` here of missing the new message.
#
# The message would however be lost if we initialized the pts with the first one, since
# the second one would appear "already handled". To prevent this we set the pts to be
# one less when the count is 0 (which might be wrong and trigger a gap later on, but is
# unlikely). This will prevent us from losing updates in the unlikely scenario where these
# two updates arrive in different packets (and therefore couldn't be sorted beforehand).
if pts.entry in self.map:
self.map[pts.entry].pts = pts.pts
else:
# When a chat is migrated to a megagroup, the first update can be a `ReadChannelInbox`
# with `pts = 1, pts_count = 0` followed by a `NewChannelMessage` with `pts = 2, pts_count=1`.
# Note how the `pts` for the message is 2 and not 1 unlike the case described before!
# This is likely because the `pts` cannot be 0 (or it would fail with PERSISTENT_TIMESTAMP_EMPTY),
# which forces the first update to be 1. But if we got difference with 1 and the second update
# also used 1, we would miss it, so Telegram probably uses 2 to work around that.
self.map[pts.entry] = State(
pts=(pts.pts - (0 if pts.pts_count else 1)) or 1,
deadline=next_updates_deadline()
)
return update
# endregion "Normal" updates flow (processing and detection of gaps).
# region Getting and applying account difference.
# Return the request that needs to be made to get the difference, if any.
def get_difference(self):
for entry in (ENTRY_ACCOUNT, ENTRY_SECRET):
if entry in self.getting_diff_for:
if entry not in self.map:
raise RuntimeError('Should not try to get difference for an entry without known state')
gd = fn.updates.GetDifferenceRequest(
pts=self.map[ENTRY_ACCOUNT].pts,
pts_total_limit=None,
date=self.date,
qts=self.map[ENTRY_SECRET].pts if ENTRY_SECRET in self.map else NO_SEQ,
)
if __debug__:
self._trace('Requesting account difference %s', gd)
return gd
return None
# Similar to [`MessageBox::process_updates`], but using the result from getting difference.
def apply_difference(
self,
diff,
chat_hashes,
):
if __debug__:
self._trace('Applying account difference %s', diff)
finish = None
result = None
if isinstance(diff, tl.updates.DifferenceEmpty):
finish = True
self.date = diff.date
self.seq = diff.seq
result = [], [], []
elif isinstance(diff, tl.updates.Difference):
finish = True
chat_hashes.extend(diff.users, diff.chats)
result = self.apply_difference_type(diff, chat_hashes)
elif isinstance(diff, tl.updates.DifferenceSlice):
finish = False
chat_hashes.extend(diff.users, diff.chats)
result = self.apply_difference_type(diff, chat_hashes)
elif isinstance(diff, tl.updates.DifferenceTooLong):
finish = True
self.map[ENTRY_ACCOUNT].pts = diff.pts # the deadline will be reset once the diff ends
result = [], [], []
if finish:
account = ENTRY_ACCOUNT in self.getting_diff_for
secret = ENTRY_SECRET in self.getting_diff_for
if not account and not secret:
raise RuntimeError('Should not be applying the difference when neither account nor secret diff was active')
# Both may be active if both expired at the same time.
if account:
self.end_get_diff(ENTRY_ACCOUNT)
if secret:
self.end_get_diff(ENTRY_SECRET)
return result
def apply_difference_type(
self,
diff,
chat_hashes,
):
state = getattr(diff, 'intermediate_state', None) or diff.state
self.set_state(state, reset=False)
# diff.other_updates can contain things like UpdateChannelTooLong and UpdateNewChannelMessage.
# We need to process those as if they were socket updates to discard any we have already handled.
updates = []
self.process_updates(tl.Updates(
updates=diff.other_updates,
users=diff.users,
chats=diff.chats,
date=epoch(),
seq=NO_SEQ, # this way date is not used
), chat_hashes, updates)
updates.extend(tl.UpdateNewMessage(
message=m,
pts=NO_SEQ,
pts_count=NO_SEQ,
) for m in diff.new_messages)
updates.extend(tl.UpdateNewEncryptedMessage(
message=m,
qts=NO_SEQ,
) for m in diff.new_encrypted_messages)
return updates, diff.users, diff.chats
def end_difference(self):
if __debug__:
self._trace('Ending account difference')
account = ENTRY_ACCOUNT in self.getting_diff_for
secret = ENTRY_SECRET in self.getting_diff_for
if not account and not secret:
raise RuntimeError('Should not be ending get difference when neither account nor secret diff was active')
# Both may be active if both expired at the same time.
if account:
self.end_get_diff(ENTRY_ACCOUNT)
if secret:
self.end_get_diff(ENTRY_SECRET)
# endregion Getting and applying account difference.
# region Getting and applying channel difference.
# Return the request that needs to be made to get a channel's difference, if any.
def get_channel_difference(
self,
chat_hashes,
):
entry = next((id for id in self.getting_diff_for if isinstance(id, int)), None)
if not entry:
return None
packed = chat_hashes.get(entry)
if not packed:
# Cannot get channel difference as we're missing its hash
# TODO we should probably log this
self.end_get_diff(entry)
# Remove the outdated `pts` entry from the map so that the next update can correct
# it. Otherwise, it will spam that the access hash is missing.
self.map.pop(entry, None)
return None
state = self.map.get(entry)
if not state:
raise RuntimeError('Should not try to get difference for an entry without known state')
gd = fn.updates.GetChannelDifferenceRequest(
force=False,
channel=tl.InputChannel(packed.id, packed.hash),
filter=tl.ChannelMessagesFilterEmpty(),
pts=state.pts,
limit=BOT_CHANNEL_DIFF_LIMIT if chat_hashes.self_bot else USER_CHANNEL_DIFF_LIMIT
)
if __debug__:
self._trace('Requesting channel difference %s', gd)
return gd
# Similar to [`MessageBox::process_updates`], but using the result from getting difference.
def apply_channel_difference(
self,
request,
diff,
chat_hashes,
):
entry = request.channel.channel_id
if __debug__:
self._trace('Applying channel difference for %r: %s', entry, diff)
self.possible_gaps.pop(entry, None)
if isinstance(diff, tl.updates.ChannelDifferenceEmpty):
assert diff.final
self.end_get_diff(entry)
self.map[entry].pts = diff.pts
return [], [], []
elif isinstance(diff, tl.updates.ChannelDifferenceTooLong):
assert diff.final
self.map[entry].pts = diff.dialog.pts
chat_hashes.extend(diff.users, diff.chats)
self.reset_channel_deadline(entry, diff.timeout)
# This `diff` has the "latest messages and corresponding chats", but it would
# be strange to give the user only partial changes of these when they would
# expect all updates to be fetched. Instead, nothing is returned.
return [], [], []
elif isinstance(diff, tl.updates.ChannelDifference):
if diff.final:
self.end_get_diff(entry)
self.map[entry].pts = diff.pts
chat_hashes.extend(diff.users, diff.chats)
updates = []
self.process_updates(tl.Updates(
updates=diff.other_updates,
users=diff.users,
chats=diff.chats,
date=epoch(),
seq=NO_SEQ, # this way date is not used
), chat_hashes, updates)
updates.extend(tl.UpdateNewChannelMessage(
message=m,
pts=NO_SEQ,
pts_count=NO_SEQ,
) for m in diff.new_messages)
self.reset_channel_deadline(entry, None)
return updates, diff.users, diff.chats
def end_channel_difference(self, request, reason: PrematureEndReason, chat_hashes):
entry = request.channel.channel_id
if __debug__:
self._trace('Ending channel difference for %r because %s', entry, reason)
if reason == PrematureEndReason.TEMPORARY_SERVER_ISSUES:
# Temporary issues. End getting difference without updating the pts so we can retry later.
self.possible_gaps.pop(entry, None)
self.end_get_diff(entry)
elif reason == PrematureEndReason.BANNED:
# Banned in the channel. Forget its state since we can no longer fetch updates from it.
self.possible_gaps.pop(entry, None)
self.end_get_diff(entry)
del self.map[entry]
else:
raise RuntimeError('Unknown reason to end channel difference')
# endregion Getting and applying channel difference.

View File

@ -1,195 +0,0 @@
from typing import Optional, Tuple
from enum import IntEnum
from ..tl.types import InputPeerUser, InputPeerChat, InputPeerChannel
import struct
class SessionState:
"""
Stores the information needed to fetch updates, as well as information about the current user.
* user_id: 64-bit number representing the user identifier.
* dc_id: 32-bit number relating to the datacenter identifier where the user is.
* bot: is the logged-in user a bot?
* pts: 64-bit number holding the state needed to fetch updates.
* qts: alternative 64-bit number holding the state needed to fetch updates.
* date: 64-bit number holding the date needed to fetch updates.
* seq: 64-bit number holding the sequence number needed to fetch updates.
* takeout_id: 64-bit number holding the identifier of the current takeout session.
Note that some of the numbers will only use 32 out of the 64 available bits.
However, for future-proofing reasons, we recommend you pretend they are 64-bit long.
"""
__slots__ = ('user_id', 'dc_id', 'bot', 'pts', 'qts', 'date', 'seq', 'takeout_id')
def __init__(
self,
user_id: int,
dc_id: int,
bot: bool,
pts: int,
qts: int,
date: int,
seq: int,
takeout_id: Optional[int]
):
self.user_id = user_id
self.dc_id = dc_id
self.bot = bot
self.pts = pts
self.qts = qts
self.date = date
self.seq = seq
self.takeout_id = takeout_id
def __repr__(self):
return repr({k: getattr(self, k) for k in self.__slots__})
class ChannelState:
"""
Stores the information needed to fetch updates from a channel.
* channel_id: 64-bit number representing the channel identifier.
* pts: 64-bit number holding the state needed to fetch updates.
"""
__slots__ = ('channel_id', 'pts')
def __init__(
self,
channel_id: int,
pts: int,
):
self.channel_id = channel_id
self.pts = pts
def __repr__(self):
return repr({k: getattr(self, k) for k in self.__slots__})
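# Illustrative construction (made-up values; not part of the original file): this is the
# shape of the data a session backend has to persist and later hand back to MessageBox.load().
_example_session = SessionState(
    user_id=123456789, dc_id=2, bot=False,
    pts=10000, qts=0, date=1600000000, seq=42,
    takeout_id=None,
)
_example_channels = [ChannelState(channel_id=1234567, pts=500)]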
class EntityType(IntEnum):
"""
You can rely on the type value being equal to one of the following ASCII characters:
* 'U' (85): this entity belongs to a :tl:`User` who is not a ``bot``.
* 'B' (66): this entity belongs to a :tl:`User` who is a ``bot``.
* 'G' (71): this entity belongs to a small group :tl:`Chat`.
* 'C' (67): this entity belongs to a standard broadcast :tl:`Channel`.
* 'M' (77): this entity belongs to a megagroup :tl:`Channel`.
* 'E' (69): this entity belongs to an "enormous" "gigagroup" :tl:`Channel`.
"""
USER = ord('U')
BOT = ord('B')
GROUP = ord('G')
CHANNEL = ord('C')
MEGAGROUP = ord('M')
GIGAGROUP = ord('E')
def canonical(self):
"""
Return the canonical version of this type.
"""
return _canon_entity_types[self]
_canon_entity_types = {
EntityType.USER: EntityType.USER,
EntityType.BOT: EntityType.USER,
EntityType.GROUP: EntityType.GROUP,
EntityType.CHANNEL: EntityType.CHANNEL,
EntityType.MEGAGROUP: EntityType.CHANNEL,
EntityType.GIGAGROUP: EntityType.CHANNEL,
}
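# Small illustration (not part of the original file): the enum values are ASCII codes,
# and canonical() folds the finer-grained types into USER, GROUP or CHANNEL.
assert EntityType.BOT == ord('B') and chr(EntityType.MEGAGROUP) == 'M'
assert EntityType.BOT.canonical() is EntityType.USER
assert EntityType.GIGAGROUP.canonical() is EntityType.CHANNEL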
class Entity:
"""
Stores the information needed to use a certain user, chat or channel with the API.
* ty: 8-bit number indicating the type of the entity (of type `EntityType`).
* id: 64-bit number uniquely identifying the entity among those of the same type.
* hash: 64-bit signed number needed to use this entity with the API.
The string representation of this class is considered to be stable, for as long as
Telegram doesn't need to add more fields to the entities. It can also be converted
to bytes with ``bytes(entity)``, for a more compact representation.
"""
__slots__ = ('ty', 'id', 'hash')
def __init__(
self,
ty: EntityType,
id: int,
hash: int
):
self.ty = ty
self.id = id
self.hash = hash
@property
def is_user(self):
"""
``True`` if the entity is either a user or a bot.
"""
return self.ty in (EntityType.USER, EntityType.BOT)
@property
def is_group(self):
"""
``True`` if the entity is a small group chat or `megagroup`_.
.. _megagroup: https://telegram.org/blog/supergroups5k
"""
return self.ty in (EntityType.GROUP, EntityType.MEGAGROUP)
@property
def is_broadcast(self):
"""
``True`` if the entity is a broadcast channel or `broadcast group`_.
.. _broadcast group: https://telegram.org/blog/autodelete-inv2#groups-with-unlimited-members
"""
return self.ty in (EntityType.CHANNEL, EntityType.GIGAGROUP)
@classmethod
def from_str(cls, string: str):
"""
Convert the string into an `Entity`.
"""
try:
ty, id, hash = string.split('.')
ty, id, hash = ord(ty), int(id), int(hash)
except AttributeError:
raise TypeError(f'expected str, got {string!r}') from None
except (TypeError, ValueError):
raise ValueError(f'malformed entity str (must be T.id.hash), got {string!r}') from None
return cls(EntityType(ty), id, hash)
@classmethod
def from_bytes(cls, blob):
"""
Convert the bytes into an `Entity`.
"""
try:
ty, id, hash = struct.unpack('<Bqq', blob)
except struct.error:
raise ValueError(f'malformed entity data, got {blob!r}') from None
return cls(EntityType(ty), id, hash)
def __str__(self):
return f'{chr(self.ty)}.{self.id}.{self.hash}'
def __bytes__(self):
return struct.pack('<Bqq', self.ty, self.id, self.hash)
def _as_input_peer(self):
if self.is_user:
return InputPeerUser(self.id, self.hash)
elif self.ty == EntityType.GROUP:
return InputPeerChat(self.id)
else:
return InputPeerChannel(self.id, self.hash)
def __repr__(self):
return repr({k: getattr(self, k) for k in self.__slots__})
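# Round-trip illustration (made-up values; not part of the original file): the stable
# string form is "T.id.hash" and bytes(entity) packs the same data with struct '<Bqq'.
_example_entity = Entity(EntityType.USER, 123456789, -456789123456789)
assert str(_example_entity) == 'U.123456789.-456789123456789'
assert Entity.from_str(str(_example_entity)).hash == _example_entity.hash
assert Entity.from_bytes(bytes(_example_entity)).id == _example_entity.id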

View File

@ -3,11 +3,9 @@ import inspect
import os
import sys
import typing
import warnings
from .. import utils, helpers, errors, password as pwd_mod
from ..tl import types, functions, custom
from .._updates import SessionState
if typing.TYPE_CHECKING:
from .telegramclient import TelegramClient
@ -19,8 +17,8 @@ class AuthMethods:
def start(
self: 'TelegramClient',
phone: typing.Union[typing.Callable[[], str], str] = lambda: input('Please enter your phone (or bot token): '),
password: typing.Union[typing.Callable[[], str], str] = lambda: getpass.getpass('Please enter your password: '),
phone: typing.Callable[[], str] = lambda: input('Please enter your phone (or bot token): '),
password: typing.Callable[[], str] = lambda: getpass.getpass('Please enter your password: '),
*,
bot_token: str = None,
force_sms: bool = False,
@ -34,6 +32,12 @@ class AuthMethods:
By default, this method will be interactive (asking for
user input if needed), and will handle 2FA if enabled too.
If the phone doesn't belong to an existing account (and will hence
`sign_up` for a new one), **you are agreeing to Telegram's
Terms of Service. This is required and your account
will be banned otherwise.** See https://telegram.org/tos
and https://core.telegram.org/api/terms.
If the event loop is already running, this method returns a
coroutine that you should await on your own code; otherwise
the loop is run until said coroutine completes.
@ -87,7 +91,7 @@ class AuthMethods:
# Starting as a bot account
await client.start(bot_token=bot_token)
# Starting as a user account
# Starting as an user account
await client.start(phone)
# Please enter the code you received: 12345
# Please enter your password: *******
@ -134,31 +138,7 @@ class AuthMethods:
if not self.is_connected():
await self.connect()
# Rather than using `is_user_authorized`, use `get_me`. While this is
# more expensive and needs to retrieve more data from the server, it
# enables the library to warn users trying to log in to a different
# account. See #1172.
me = await self.get_me()
if me is not None:
# The warnings here are on a best-effort and may fail.
if bot_token:
# bot_token's first part has the bot ID, but it may be invalid
# so don't try to parse as int (instead cast our ID to string).
if bot_token[:bot_token.find(':')] != str(me.id):
warnings.warn(
'the session already had an authorized user so it did '
'not login to the bot account using the provided bot_token; '
'if you were expecting a different user, check whether '
'you are accidentally reusing an existing session'
)
elif phone and not callable(phone) and utils.parse_phone(phone) != me.phone:
warnings.warn(
'the session already had an authorized user so it did '
'not login to the user account using the provided phone; '
'if you were expecting a different user, check whether '
'you are accidentally reusing an existing session'
)
if await self.is_user_authorized():
return self
if not bot_token:
@ -184,6 +164,7 @@ class AuthMethods:
two_step_detected = False
await self.send_code_request(phone, force_sms=force_sms)
sign_up = False # assume login
while attempts < max_attempts:
try:
value = code_callback()
@ -196,12 +177,19 @@ class AuthMethods:
if not value:
raise errors.PhoneCodeEmptyError(request=None)
# Raises SessionPasswordNeededError if 2FA enabled
me = await self.sign_in(phone, code=value)
if sign_up:
me = await self.sign_up(value, first_name, last_name)
else:
# Raises SessionPasswordNeededError if 2FA enabled
me = await self.sign_in(phone, code=value)
break
except errors.SessionPasswordNeededError:
two_step_detected = True
break
except errors.PhoneNumberOccupiedError:
sign_up = False
except errors.PhoneNumberUnoccupiedError:
sign_up = True
except (errors.PhoneCodeEmptyError,
errors.PhoneCodeExpiredError,
errors.PhoneCodeHashEmptyError,
@ -240,14 +228,13 @@ class AuthMethods:
me = await self.sign_in(phone=phone, password=password)
# We won't reach here if any step failed (exit by exception)
signed, name = 'Signed in successfully as ', utils.get_display_name(me)
tos = '; remember to not break the ToS or you will risk an account ban!'
signed, name = 'Signed in successfully as', utils.get_display_name(me)
try:
print(signed, name, tos, sep='')
print(signed, name)
except UnicodeEncodeError:
# Some terminals don't support certain characters
print(signed, name.encode('utf-8', errors='ignore')
.decode('ascii', errors='ignore'), tos, sep='')
.decode('ascii', errors='ignore'))
return self
@ -355,18 +342,13 @@ class AuthMethods:
'and a password only if an RPCError was raised before.'
)
try:
result = await self(request)
except errors.PhoneCodeExpiredError:
self._phone_code_hash.pop(phone, None)
raise
result = await self(request)
if isinstance(result, types.auth.AuthorizationSignUpRequired):
# Emulate pre-layer 104 behaviour
self._tos = result.terms_of_service
raise errors.PhoneNumberUnoccupiedError(request=request)
return await self._on_login(result.user)
return self._on_login(result.user)
async def sign_up(
self: 'TelegramClient',
@ -377,41 +359,92 @@ class AuthMethods:
phone: str = None,
phone_code_hash: str = None) -> 'types.User':
"""
This method can no longer be used, and will immediately raise a ``ValueError``.
See `issue #4050 <https://github.com/LonamiWebs/Telethon/issues/4050>`_ for context.
"""
raise ValueError('Third-party applications cannot sign up for Telegram. See https://github.com/LonamiWebs/Telethon/issues/4050 for details')
Signs up to Telegram as a new user account.
async def _on_login(self, user):
Use this if you don't have an account yet.
You must call `send_code_request` first.
**By using this method you're agreeing to Telegram's
Terms of Service. This is required and your account
will be banned otherwise.** See https://telegram.org/tos
and https://core.telegram.org/api/terms.
Arguments
code (`str` | `int`):
The code sent by Telegram
first_name (`str`):
The first name to be used by the new account.
last_name (`str`, optional)
Optional last name.
phone (`str` | `int`, optional):
The phone to sign up. This will be the last phone used by
default (you normally don't need to set this).
phone_code_hash (`str`, optional):
The hash returned by `send_code_request`. This can be left as
`None` to use the last hash known for the phone to be used.
Returns
The new created :tl:`User`.
Example
.. code-block:: python
phone = '+34 123 123 123'
await client.send_code_request(phone)
code = input('enter code: ')
await client.sign_up(code, first_name='Anna', last_name='Banana')
"""
me = await self.get_me()
if me:
return me
if self._tos and self._tos.text:
if self.parse_mode:
t = self.parse_mode.unparse(self._tos.text, self._tos.entities)
else:
t = self._tos.text
sys.stderr.write("{}\n".format(t))
sys.stderr.flush()
phone, phone_code_hash = \
self._parse_phone_and_hash(phone, phone_code_hash)
result = await self(functions.auth.SignUpRequest(
phone_number=phone,
phone_code_hash=phone_code_hash,
first_name=first_name,
last_name=last_name
))
if self._tos:
await self(
functions.help.AcceptTermsOfServiceRequest(self._tos.id))
return self._on_login(result.user)
def _on_login(self, user):
"""
Callback called whenever the login or sign up process completes.
Returns the input user parameter.
"""
self._mb_entity_cache.set_self_user(user.id, user.bot, user.access_hash)
self._bot = bool(user.bot)
self._self_input_peer = utils.get_input_peer(user, allow_self=False)
self._authorized = True
state = await self(functions.updates.GetStateRequest())
# the server may send an old qts in getState
difference = await self(functions.updates.GetDifferenceRequest(pts=state.pts, date=state.date, qts=state.qts))
if isinstance(difference, types.updates.Difference):
state = difference.state
elif isinstance(difference, types.updates.DifferenceSlice):
state = difference.intermediate_state
elif isinstance(difference, types.updates.DifferenceTooLong):
state.pts = difference.pts
self._message_box.load(SessionState(0, 0, 0, state.pts, state.qts, int(state.date.timestamp()), state.seq, 0), [])
return user
async def send_code_request(
self: 'TelegramClient',
phone: str,
*,
force_sms: bool = False,
_retry_count: int = 0) -> 'types.auth.SentCode':
force_sms: bool = False) -> 'types.auth.SentCode':
"""
Sends the Telegram code needed to login to the given phone number.
@ -420,8 +453,7 @@ class AuthMethods:
The phone to which the code will be sent.
force_sms (`bool`, optional):
Whether to force sending as SMS. This has been deprecated.
See `issue #4050 <https://github.com/LonamiWebs/Telethon/issues/4050>`_ for context.
Whether to force sending as SMS.
Returns
An instance of :tl:`SentCode`.
@ -433,10 +465,6 @@ class AuthMethods:
sent = await client.send_code_request(phone)
print(sent)
"""
if force_sms:
warnings.warn('force_sms has been deprecated and no longer works')
force_sms = False
result = None
phone = utils.parse_phone(phone) or self._phone
phone_hash = self._phone_code_hash.get(phone)
@ -446,14 +474,7 @@ class AuthMethods:
result = await self(functions.auth.SendCodeRequest(
phone, self.api_id, self.api_hash, types.CodeSettings()))
except errors.AuthRestartError:
if _retry_count > 2:
raise
return await self.send_code_request(
phone, force_sms=force_sms, _retry_count=_retry_count+1)
# TODO figure out when/if/how this can happen
if isinstance(result, types.auth.SentCodeSuccess):
raise RuntimeError('logged in right after sending the code')
return await self.send_code_request(phone, force_sms=force_sms)
# If we already sent a SMS, do not resend the code (hash may be empty)
if isinstance(result.type, types.auth.SentCodeTypeSms):
@ -468,21 +489,8 @@ class AuthMethods:
self._phone = phone
if force_sms:
try:
result = await self(
functions.auth.ResendCodeRequest(phone, phone_hash))
except errors.PhoneCodeExpiredError:
if _retry_count > 2:
raise
self._phone_code_hash.pop(phone, None)
self._log[__name__].info(
"Phone code expired in ResendCodeRequest, requesting a new code"
)
return await self.send_code_request(
phone, force_sms=False, _retry_count=_retry_count+1)
if isinstance(result, types.auth.SentCodeSuccess):
raise RuntimeError('logged in right after resending the code')
result = await self(
functions.auth.ResendCodeRequest(phone, phone_hash))
self._phone_code_hash[phone] = result.phone_code_hash
@ -520,9 +528,6 @@ class AuthMethods:
# Important! You need to wait for the login to complete!
await qr_login.wait()
# If you have 2FA enabled, `wait` will raise `telethon.errors.SessionPasswordNeededError`.
# You should except that error and call `sign_in` with the password if this happens.
"""
qr_login = custom.QRLogin(self, ignored_ids or [])
await qr_login.recreate()
@ -532,8 +537,6 @@ class AuthMethods:
"""
Logs out Telegram and deletes the current ``*.session`` file.
The client is unusable after logging out and a new instance should be created.
Returns
`True` if the operation was successful.
@ -548,12 +551,13 @@ class AuthMethods:
except errors.RPCError:
return False
self._mb_entity_cache.set_self_user(None, None, None)
self._bot = None
self._self_input_peer = None
self._authorized = False
self._state_cache.reset()
await self.disconnect()
await utils.maybe_async(self.session.delete())
self.session = None
self.session.delete()
return True
async def edit_2fa(

View File

@ -13,7 +13,6 @@ class BotMethods:
bot: 'hints.EntityLike',
query: str,
*,
entity: 'hints.EntityLike' = None,
offset: str = None,
geo_point: 'types.GeoPoint' = None) -> custom.InlineResults:
"""
@ -26,15 +25,6 @@ class BotMethods:
query (`str`):
The query that should be made to the bot.
entity (`entity`, optional):
The entity where the inline query is being made from. Certain
bots use this to display different results depending on where
it's used, such as private chats, groups or channels.
If specified, it will also be the default entity where the
message will be sent after clicked. Otherwise, the "empty
peer" will be used, which some bots may not handle correctly.
offset (`str`, optional):
The string offset to use for the bot.
@ -56,17 +46,12 @@ class BotMethods:
message = await results[0].click('TelethonOffTopic')
"""
bot = await self.get_input_entity(bot)
if entity:
peer = await self.get_input_entity(entity)
else:
peer = types.InputPeerEmpty()
result = await self(functions.messages.GetInlineBotResultsRequest(
bot=bot,
peer=peer,
peer=types.InputPeerEmpty(),
query=query,
offset=offset or '',
geo_point=geo_point
))
return custom.InlineResults(self, result, entity=peer if entity else None)
return custom.InlineResults(self, result)

View File

@ -7,8 +7,8 @@ from ..tl import types, custom
class ButtonMethods:
@staticmethod
def build_reply_markup(
buttons: 'typing.Optional[hints.MarkupLike]'
) -> 'typing.Optional[types.TypeReplyMarkup]':
buttons: 'typing.Optional[hints.MarkupLike]',
inline_only: bool = False) -> 'typing.Optional[types.TypeReplyMarkup]':
"""
Builds a :tl:`ReplyInlineMarkup` or :tl:`ReplyKeyboardMarkup` for
the given buttons.
@ -26,6 +26,9 @@ class ButtonMethods:
The button, list of buttons, array of buttons or markup
to convert into a markup.
inline_only (`bool`, optional):
Whether the buttons **must** be inline buttons only or not.
Example
.. code-block:: python
@ -39,8 +42,8 @@ class ButtonMethods:
return None
try:
if buttons.SUBCLASS_OF_ID == 0xe2e10ef2: # crc32(b'ReplyMarkup'):
return buttons
if buttons.SUBCLASS_OF_ID == 0xe2e10ef2:
return buttons # crc32(b'ReplyMarkup'):
except AttributeError:
pass
@ -54,8 +57,6 @@ class ButtonMethods:
resize = None
single_use = None
selective = None
persistent = None
placeholder = None
rows = []
for row in buttons:
@ -68,10 +69,6 @@ class ButtonMethods:
single_use = button.single_use
if button.selective is not None:
selective = button.selective
if button.persistent is not None:
persistent = button.persistent
if button.placeholder is not None:
placeholder = button.placeholder
button = button.button
elif isinstance(button, custom.MessageButton):
@ -81,21 +78,19 @@ class ButtonMethods:
is_inline |= inline
is_normal |= not inline
if button.SUBCLASS_OF_ID == 0xbad74a3: # crc32(b'KeyboardButton')
if button.SUBCLASS_OF_ID == 0xbad74a3:
# 0xbad74a3 == crc32(b'KeyboardButton')
current.append(button)
if current:
rows.append(types.KeyboardButtonRow(current))
if is_inline and is_normal:
if inline_only and is_normal:
raise ValueError('You cannot use non-inline buttons here')
elif is_inline == is_normal and is_normal:
raise ValueError('You cannot mix inline with normal buttons')
elif is_inline:
return types.ReplyInlineMarkup(rows)
# elif is_normal:
return types.ReplyKeyboardMarkup(
rows=rows,
resize=resize,
single_use=single_use,
selective=selective,
persistent=persistent,
placeholder=placeholder
)
rows, resize=resize, single_use=single_use, selective=selective)
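# Hedged usage example (not part of the original file): `Button` is telethon's custom
# button helper, imported here for illustration; rows of inline buttons are converted
# into a ReplyInlineMarkup. The helper function and its name are assumptions.
def _example_markup(client):
    from telethon.tl.custom import Button
    return client.build_reply_markup([
        [Button.inline('Yes', b'yes'), Button.inline('No', b'no')],
        [Button.url('Docs', 'https://docs.telethon.dev')],
    ])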

View File

@ -22,7 +22,6 @@ class _ChatAction:
'contact': types.SendMessageChooseContactAction(),
'game': types.SendMessageGamePlayAction(),
'location': types.SendMessageGeoLocationAction(),
'sticker': types.SendMessageChooseStickerAction(),
'record-audio': types.SendMessageRecordAudioAction(),
'record-voice': types.SendMessageRecordAudioAction(), # alias
@ -97,7 +96,7 @@ class _ChatAction:
class _ParticipantsIter(RequestIter):
async def _init(self, entity, filter, search):
async def _init(self, entity, filter, search, aggressive):
if isinstance(filter, type):
if filter in (types.ChannelParticipantsBanned,
types.ChannelParticipantsKicked,
@ -122,25 +121,33 @@ class _ParticipantsIter(RequestIter):
self.filter_entity = lambda ent: True
# Only used for channels, but we should always set the attribute
# Called `requests` even though it's just one for legacy purposes.
self.requests = None
self.requests = []
if ty == helpers._EntityType.CHANNEL:
self.total = (await self.client(
functions.channels.GetFullChannelRequest(entity)
)).full_chat.participants_count
if self.limit <= 0:
# May not have access to the channel, but getFull can get the .total.
self.total = (await self.client(
functions.channels.GetFullChannelRequest(entity)
)).full_chat.participants_count
raise StopAsyncIteration
self.seen = set()
self.requests = functions.channels.GetParticipantsRequest(
channel=entity,
filter=filter or types.ChannelParticipantsSearch(search),
offset=0,
limit=_MAX_PARTICIPANTS_CHUNK_SIZE,
hash=0
)
if aggressive and not filter:
self.requests.extend(functions.channels.GetParticipantsRequest(
channel=entity,
filter=types.ChannelParticipantsSearch(x),
offset=0,
limit=_MAX_PARTICIPANTS_CHUNK_SIZE,
hash=0
) for x in (search or string.ascii_lowercase))
else:
self.requests.append(functions.channels.GetParticipantsRequest(
channel=entity,
filter=filter or types.ChannelParticipantsSearch(search),
offset=0,
limit=_MAX_PARTICIPANTS_CHUNK_SIZE,
hash=0
))
elif ty == helpers._EntityType.CHAT:
full = await self.client(
@ -155,18 +162,11 @@ class _ParticipantsIter(RequestIter):
users = {user.id: user for user in full.users}
for participant in full.full_chat.participants.participants:
if isinstance(participant, types.ChannelParticipantLeft):
# See issue #3231 to learn why this is ignored.
continue
elif isinstance(participant, types.ChannelParticipantBanned):
user_id = participant.peer.user_id
else:
user_id = participant.user_id
user = users[user_id]
user = users[participant.user_id]
if not self.filter_entity(user):
continue
user = users[user_id]
user = users[participant.user_id]
user.participant = participant
self.buffer.append(user)
@ -185,74 +185,51 @@ class _ParticipantsIter(RequestIter):
if not self.requests:
return True
self.requests.limit = min(self.limit - self.requests.offset, _MAX_PARTICIPANTS_CHUNK_SIZE)
# Only care about the limit for the first request
# (small amount of people, won't be aggressive).
#
# Most people won't care about getting exactly 12,345
# members, so being slightly imprecise with the offset/limit
# here doesn't really matter.
self.requests[0].limit = min(
self.limit - self.requests[0].offset, _MAX_PARTICIPANTS_CHUNK_SIZE)
if self.requests.offset > self.limit:
if self.requests[0].offset > self.limit:
return True
if self.total is None:
f = self.requests.filter
if (
not isinstance(f, types.ChannelParticipantsRecent)
and (not isinstance(f, types.ChannelParticipantsSearch) or f.q)
):
# Only do an additional getParticipants here to get the total
# if there's a filter which would reduce the real total number.
# getParticipants is cheaper than getFull.
self.total = (await self.client(functions.channels.GetParticipantsRequest(
channel=self.requests.channel,
filter=types.ChannelParticipantsRecent(),
offset=0,
limit=1,
hash=0
))).count
participants = await self.client(self.requests)
if self.total is None:
# Will only get here if there was one request with a filter that matched all users.
self.total = participants.count
if not participants.users:
self.requests = None
return
self.requests.offset += len(participants.participants)
users = {user.id: user for user in participants.users}
for participant in participants.participants:
if isinstance(participant, types.ChannelParticipantLeft):
# See issue #3231 to learn why this is ignored.
results = await self.client(self.requests)
for i in reversed(range(len(self.requests))):
participants = results[i]
if not participants.users:
self.requests.pop(i)
continue
elif isinstance(participant, types.ChannelParticipantBanned):
if not isinstance(participant.peer, types.PeerUser):
# May have the entire channel banned. See #3105.
self.requests[i].offset += len(participants.participants)
users = {user.id: user for user in participants.users}
for participant in participants.participants:
user = users[participant.user_id]
if not self.filter_entity(user) or user.id in self.seen:
continue
user_id = participant.peer.user_id
else:
user_id = participant.user_id
user = users[user_id]
if not self.filter_entity(user) or user.id in self.seen:
continue
self.seen.add(user_id)
user = users[user_id]
user.participant = participant
self.buffer.append(user)
self.seen.add(participant.user_id)
user = users[participant.user_id]
user.participant = participant
self.buffer.append(user)
class _AdminLogIter(RequestIter):
async def _init(
self, entity, admins, search, min_id, max_id,
join, leave, invite, restrict, unrestrict, ban, unban,
promote, demote, info, settings, pinned, edit, delete,
group_call
promote, demote, info, settings, pinned, edit, delete
):
if any((join, leave, invite, restrict, unrestrict, ban, unban,
promote, demote, info, settings, pinned, edit, delete,
group_call)):
promote, demote, info, settings, pinned, edit, delete)):
events_filter = types.ChannelAdminLogEventsFilter(
join=join, leave=leave, invite=invite, ban=restrict,
unban=unrestrict, kick=ban, unkick=unban, promote=promote,
demote=demote, info=info, settings=settings, pinned=pinned,
edit=edit, delete=delete, group_call=group_call
edit=edit, delete=delete
)
else:
events_filter = None
@ -360,31 +337,9 @@ class _ProfilePhotoIter(RequestIter):
else:
self.request.offset += len(result.photos)
else:
# Some broadcast channels have a photo that this request doesn't
# retrieve for whatever random reason the Telegram server feels.
#
# This means the `total` count may be wrong but there's not much
# that can be done around it (perhaps there are too many photos
# and this is only a partial result so it's not possible to just
# use the len of the result).
self.buffer = [x.action.photo for x in result.messages
if isinstance(x.action, types.MessageActionChatEditPhoto)]
self.total = getattr(result, 'count', None)
# Unconditionally fetch the full channel to obtain this photo and
# yield it with the rest (unless it's a duplicate).
seen_id = None
if isinstance(result, types.messages.ChannelMessages):
channel = await self.client(functions.channels.GetFullChannelRequest(self.request.peer))
photo = channel.full_chat.chat_photo
if isinstance(photo, types.Photo):
self.buffer.append(photo)
seen_id = photo.id
self.buffer.extend(
x.action.photo for x in result.messages
if isinstance(x.action, types.MessageActionChatEditPhoto)
and x.action.photo.id != seen_id
)
if len(result.messages) < self.request.limit:
self.left = len(self.buffer)
elif result.messages:
@ -419,6 +374,9 @@ class ChatMethods:
search (`str`, optional):
Look for participants with this string in name/username.
If ``aggressive is True``, the symbols from this string will
be used.
filter (:tl:`ChannelParticipantsFilter`, optional):
The filter to be used, if you want e.g. only admins
Note that you might not have permissions for some filter.
@ -431,11 +389,14 @@ class ChatMethods:
use :tl:`ChannelParticipantsKicked` instead.
aggressive (`bool`, optional):
Does nothing. This is kept for backwards-compatibility.
Aggressively looks for all participants in the chat.
There have been several changes to Telegram's API that limit
the number of members that can be retrieved, and this was a
hack that no longer works.
This is useful for channels, since on 20 July 2018
Telegram added a server-side limit where only the
first 200 members can be retrieved. With this flag
set, more than 200 will often be retrieved.
This has no effect if a ``filter`` is given.
Yields
The :tl:`User` objects returned by :tl:`GetParticipantsRequest`
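For reference, a short sketch of how the ``filter`` and ``search`` arguments described above are typically passed; ``chat`` and the search string are placeholders:

```python
from telethon.tl.types import ChannelParticipantsAdmins

# Sketch assuming an already-connected client; `chat` is a placeholder entity.
async def list_people(client, chat):
    # Only administrators, via a ChannelParticipantsFilter.
    async for admin in client.iter_participants(chat, filter=ChannelParticipantsAdmins):
        print('admin:', admin.id, admin.first_name)

    # Participants whose name/username contains the given string.
    async for user in client.iter_participants(chat, search='maria'):
        print('match:', user.id, user.first_name)
```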
@ -464,7 +425,8 @@ class ChatMethods:
limit,
entity=entity,
filter=filter,
search=search
search=search,
aggressive=aggressive
)
async def get_participants(
@ -489,7 +451,6 @@ class ChatMethods:
get_participants.__signature__ = inspect.signature(iter_participants)
def iter_admin_log(
self: 'TelegramClient',
entity: 'hints.EntityLike',
@ -512,8 +473,7 @@ class ChatMethods:
settings: bool = None,
pinned: bool = None,
edit: bool = None,
delete: bool = None,
group_call: bool = None) -> _AdminLogIter:
delete: bool = None) -> _AdminLogIter:
"""
Iterator over the admin log for the specified channel.
@ -600,9 +560,6 @@ class ChatMethods:
delete (`bool`):
If `True`, events of message deletions will be returned.
group_call (`bool`):
If `True`, events related to group calls will be returned.
Yields
Instances of `AdminLogEvent <telethon.tl.custom.adminlogevent.AdminLogEvent>`.
@ -634,8 +591,7 @@ class ChatMethods:
settings=settings,
pinned=pinned,
edit=edit,
delete=delete,
group_call=group_call
delete=delete
)
async def get_admin_log(
@ -755,7 +711,6 @@ class ChatMethods:
* ``'contact'``: choosing a contact.
* ``'game'``: playing a game.
* ``'location'``: choosing a geo location.
* ``'sticker'``: choosing a sticker.
* ``'record-audio'``: recording a voice note.
You may use ``'record-voice'`` as alias.
* ``'record-round'``: recording a round video.
@ -805,8 +760,7 @@ class ChatMethods:
try:
action = _ChatAction._str_mapping[action.lower()]
except KeyError:
raise ValueError(
'No such action "{}"'.format(action)) from None
raise ValueError('No such action "{}"'.format(action)) from None
elif not isinstance(action, types.TLObject) or action.SUBCLASS_OF_ID != 0x20b2cc21:
# 0x20b2cc21 = crc32(b'SendMessageAction')
if isinstance(action, type):
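The string table above is what ``client.action`` resolves through ``_ChatAction._str_mapping``; a small usage sketch (``chat`` is a placeholder):

```python
import asyncio

# Sketch assuming an already-connected client; `chat` is a placeholder.
async def type_then_send(client, chat):
    # 'typing' is one of the string aliases listed above.
    async with client.action(chat, 'typing'):
        await asyncio.sleep(2)          # pretend to compose the message
        await client.send_message(chat, 'Hello!')
```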
@ -835,8 +789,6 @@ class ChatMethods:
invite_users: bool = None,
pin_messages: bool = None,
add_admins: bool = None,
manage_call: bool = None,
anonymous: bool = None,
is_admin: bool = None,
title: str = None) -> types.Updates:
"""
@ -880,21 +832,6 @@ class ChatMethods:
add_admins (`bool`, optional):
Whether the user will be able to add admins.
manage_call (`bool`, optional):
Whether the user will be able to manage group calls.
anonymous (`bool`, optional):
Whether the user will remain anonymous when sending messages.
The sender of the anonymous messages becomes the group itself.
.. note::
Users may be able to identify the anonymous admin by its
custom title, so additional care is needed when using both
``anonymous`` and custom titles. For example, if multiple
anonymous admins share the same title, users won't be able
to distinguish them.
is_admin (`bool`, optional):
Whether the user will be an admin in the chat.
This will only work in small group chats.
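A hedged sketch of how the permission flags documented above are combined in a call; ``channel``, ``user`` and the rank string are placeholders:

```python
# Sketch assuming an already-connected client with enough rights of its own.
async def promote_moderator(client, channel, user):
    # Grant a narrow set of rights plus a custom rank; unspecified flags stay unset.
    await client.edit_admin(
        channel, user,
        delete_messages=True,
        ban_users=True,
        invite_users=True,
        title='moderator',
    )
```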
@ -928,11 +865,13 @@ class ChatMethods:
"""
entity = await self.get_input_entity(entity)
user = await self.get_input_entity(user)
ty = helpers._entity_type(user)
if ty != helpers._EntityType.USER:
raise ValueError('You must pass a user entity')
perm_names = (
'change_info', 'post_messages', 'edit_messages', 'delete_messages',
'ban_users', 'invite_users', 'pin_messages', 'add_admins',
'anonymous', 'manage_call',
'ban_users', 'invite_users', 'pin_messages', 'add_admins'
)
ty = helpers._entity_type(entity)
@ -965,11 +904,10 @@ class ChatMethods:
is_admin = any(locals()[x] for x in perm_names)
return await self(functions.messages.EditChatAdminRequest(
entity.chat_id, user, is_admin=is_admin))
entity, user, is_admin=is_admin))
else:
raise ValueError(
'You can only edit permissions in groups and channels')
raise ValueError('You can only edit permissions in groups and channels')
async def edit_permissions(
self: 'TelegramClient',
@ -1114,10 +1052,16 @@ class ChatMethods:
))
user = await self.get_input_entity(user)
ty = helpers._entity_type(user)
if ty != helpers._EntityType.USER:
raise ValueError('You must pass a user entity')
if isinstance(user, types.InputPeerSelf):
raise ValueError('You cannot restrict yourself')
return await self(functions.channels.EditBannedRequest(
channel=entity,
participant=user,
user_id=user,
banned_rights=rights
))
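The request built above is what ``edit_permissions`` issues; a typical restriction call looks roughly like this (``chat`` and ``user`` are placeholders):

```python
from datetime import timedelta

# Sketch assuming an already-connected client; `chat` and `user` are placeholders.
async def mute_for_a_day(client, chat, user):
    # send_messages=False restricts the user; until_date lifts the restriction later.
    await client.edit_permissions(
        chat, user,
        until_date=timedelta(days=1),
        send_messages=False,
    )
```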
@ -1144,121 +1088,44 @@ class ChatMethods:
user (`entity`, optional):
The user to kick.
Returns
Returns the service `Message <telethon.tl.custom.message.Message>`
produced about a user being kicked, if any.
Example
.. code-block:: python
# Kick some user from some chat, and deleting the service message
msg = await client.kick_participant(chat, user)
await msg.delete()
# Kick some user from some chat
await client.kick_participant(chat, user)
# Leaving chat
await client.kick_participant(chat, 'me')
"""
entity = await self.get_input_entity(entity)
user = await self.get_input_entity(user)
if helpers._entity_type(user) != helpers._EntityType.USER:
raise ValueError('You must pass a user entity')
ty = helpers._entity_type(entity)
if ty == helpers._EntityType.CHAT:
resp = await self(functions.messages.DeleteChatUserRequest(entity.chat_id, user))
await self(functions.messages.DeleteChatUserRequest(entity.chat_id, user))
elif ty == helpers._EntityType.CHANNEL:
if isinstance(user, types.InputPeerSelf):
# Despite no longer being in the channel, the account still
# seems to get the service message.
resp = await self(functions.channels.LeaveChannelRequest(entity))
await self(functions.channels.LeaveChannelRequest(entity))
else:
resp = await self(functions.channels.EditBannedRequest(
await self(functions.channels.EditBannedRequest(
channel=entity,
participant=user,
banned_rights=types.ChatBannedRights(
until_date=None, view_messages=True)
user_id=user,
banned_rights=types.ChatBannedRights(until_date=None, view_messages=True)
))
await asyncio.sleep(0.5)
await self(functions.channels.EditBannedRequest(
channel=entity,
participant=user,
user_id=user,
banned_rights=types.ChatBannedRights(until_date=None)
))
else:
raise ValueError('You must pass either a channel or a chat')
return self._get_response_message(None, resp, entity)
async def get_permissions(
self: 'TelegramClient',
entity: 'hints.EntityLike',
user: 'hints.EntityLike' = None
) -> 'typing.Optional[custom.ParticipantPermissions]':
"""
Fetches the permissions of a user in a specific chat or channel, or
the default banned rights of the chat or channel itself if no user is given.
.. note::
This request has to fetch the entire chat for small group chats,
which can get somewhat expensive, so use of a cache is advised.
Arguments
entity (`entity`):
The channel or chat the user is participant of.
user (`entity`, optional):
Target user.
Returns
A `ParticipantPermissions <telethon.tl.custom.participantpermissions.ParticipantPermissions>`
instance. Refer to its documentation to see what properties are
available.
Example
.. code-block:: python
permissions = await client.get_permissions(chat, user)
if permissions.is_admin:
# do something
# Get the default banned rights of the chat
await client.get_permissions(chat)
"""
entity = await self.get_entity(entity)
if not user:
if isinstance(entity, types.Channel):
FullChat = await self(functions.channels.GetFullChannelRequest(entity))
elif isinstance(entity, types.Chat):
FullChat = await self(functions.messages.GetFullChatRequest(entity.id))
else:
return
return FullChat.chats[0].default_banned_rights
entity = await self.get_input_entity(entity)
user = await self.get_input_entity(user)
if helpers._entity_type(entity) == helpers._EntityType.CHANNEL:
participant = await self(functions.channels.GetParticipantRequest(
entity,
user
))
return custom.ParticipantPermissions(participant.participant, False)
elif helpers._entity_type(entity) == helpers._EntityType.CHAT:
chat = await self(functions.messages.GetFullChatRequest(
entity.chat_id
))
if isinstance(user, types.InputPeerSelf):
user = await self.get_me(input_peer=True)
for participant in chat.full_chat.participants.participants:
if participant.user_id == user.user_id:
return custom.ParticipantPermissions(participant, True)
raise errors.UserNotParticipantError(None)
raise ValueError('You must pass either a channel or a chat')
async def get_stats(
self: 'TelegramClient',
entity: 'hints.EntityLike',
message: 'typing.Union[int, types.Message]' = None,
):
"""
Retrieves statistics from the given megagroup or broadcast channel.
@ -1271,10 +1138,6 @@ class ChatMethods:
entity (`entity`):
The channel from which to get statistics.
message (`int` | ``Message``, optional):
The message ID from which to get statistics, if your goal is
to obtain the statistics of a single message.
Raises
If the given entity is not a channel (broadcast or megagroup),
a `TypeError` is raised.
@ -1283,10 +1146,8 @@ class ChatMethods:
``telethon.errors.ChatAdminRequiredError`` will appear.
Returns
If both ``entity`` and ``message`` were provided, returns
:tl:`MessageStats`. Otherwise, either :tl:`BroadcastStats` or
:tl:`MegagroupStats`, depending on whether the input belonged to a
broadcast channel or megagroup.
Either :tl:`BroadcastStats` or :tl:`MegagroupStats`, depending on
whether the input belonged to a broadcast channel or megagroup.
Example
.. code-block:: python
@ -1301,30 +1162,22 @@ class ChatMethods:
"""
entity = await self.get_input_entity(entity)
if helpers._entity_type(entity) != helpers._EntityType.CHANNEL:
raise TypeError('You must pass a channel entity')
raise TypeError('You must pass a user entity')
message = utils.get_message_id(message)
if message is not None:
# Don't bother fetching the Channel entity (costs a request), instead
# try to guess and if it fails we know it's the other one (best case
# no extra request, worst just one).
try:
req = functions.stats.GetBroadcastStatsRequest(entity)
return await self(req)
except errors.StatsMigrateError as e:
dc = e.dc
except errors.BroadcastRequiredError:
req = functions.stats.GetMegagroupStatsRequest(entity)
try:
req = functions.stats.GetMessageStatsRequest(entity, message)
return await self(req)
except errors.StatsMigrateError as e:
dc = e.dc
else:
# Don't bother fetching the Channel entity (costs a request), instead
# try to guess and if it fails we know it's the other one (best case
# no extra request, worst just one).
try:
req = functions.stats.GetBroadcastStatsRequest(entity)
return await self(req)
except errors.StatsMigrateError as e:
dc = e.dc
except errors.BroadcastRequiredError:
req = functions.stats.GetMegagroupStatsRequest(entity)
try:
return await self(req)
except errors.StatsMigrateError as e:
dc = e.dc
sender = await self._borrow_exported_sender(dc)
try:
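A usage sketch of ``get_stats``, whose DC-migration fallback appears above; the client, channel, and the message ID are placeholders, and the per-message form assumes the newer signature that accepts ``message``:

```python
# Hedged sketch; `client`, `channel` and the message ID 123 are placeholders.
async def show_stats(client, channel):
    stats = await client.get_stats(channel)  # BroadcastStats or MegagroupStats
    print(stats.stringify())

    # Statistics of a single post (newer signature with the `message` argument).
    msg_stats = await client.get_stats(channel, message=123)
    print(msg_stats.stringify())
```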


@ -55,15 +55,12 @@ class _DialogsIter(RequestIter):
self.total = getattr(r, 'count', len(r.dialogs))
entities = {utils.get_peer_id(x): x
for x in itertools.chain(r.users, r.chats)
if not isinstance(x, (types.UserEmpty, types.ChatEmpty))}
self.client._mb_entity_cache.extend(r.users, r.chats)
for x in itertools.chain(r.users, r.chats)}
messages = {}
for m in r.messages:
m._finish_init(self.client, entities, None)
messages[_dialog_message_key(m.peer_id, m.id)] = m
messages[_dialog_message_key(m.to_id, m.id)] = m
for d in r.dialogs:
# We check the offset date here because Telegram may ignore it
@ -76,24 +73,16 @@ class _DialogsIter(RequestIter):
peer_id = utils.get_peer_id(d.peer)
if peer_id not in self.seen:
self.seen.add(peer_id)
if peer_id not in entities:
# > In which case can a UserEmpty appear in the list of banned members?
# > In a very rare cases. This is possible but isn't an expected behavior.
# Real world example: https://t.me/TelethonChat/271471
continue
cd = custom.Dialog(self.client, d, entities, message)
if cd.dialog.pts:
self.client._message_box.try_set_channel_state(
utils.get_peer_id(d.peer, add_mark=False), cd.dialog.pts)
self.client._channel_pts[cd.id] = cd.dialog.pts
if not self.ignore_migrated or getattr(
cd.entity, 'migrated_to', None) is None:
self.buffer.append(cd)
if not self.buffer or len(r.dialogs) < self.request.limit\
if len(r.dialogs) < self.request.limit\
or not isinstance(r, types.messages.DialogsSlice):
# Buffer being empty means all returned dialogs were skipped (due to offsets).
# Less than we requested means we reached the end, or
# we didn't get a DialogsSlice which means we got all.
return True
@ -110,7 +99,8 @@ class _DialogsIter(RequestIter):
self.request.exclude_pinned = True
self.request.offset_id = last_message.id if last_message else 0
self.request.offset_date = last_message.date if last_message else None
self.request.offset_peer = self.buffer[-1].input_entity
self.request.offset_peer =\
entities[utils.get_peer_id(r.dialogs[-1].peer)]
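The pagination handled above is hidden behind ``iter_dialogs``; a short sketch of the public side:

```python
# Sketch assuming an already-connected client.
async def show_unread(client):
    # offset_date/offset_id/offset_peer bookkeeping is done by the iterator above.
    async for dialog in client.iter_dialogs():
        if dialog.unread_count:
            print(dialog.id, dialog.name, dialog.unread_count)
```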
class _DraftsIter(RequestIter):
@ -374,7 +364,7 @@ class DialogMethods:
await client.edit_folder(dialogs, [0, 1])
# Un-archiving all dialogs
await client.edit_folder(unpack=1)
await client.archive(unpack=1)
"""
if (entity is None) == (unpack is None):
raise ValueError('You can only set either entities or unpack, not both')
@ -460,8 +450,7 @@ class DialogMethods:
if ty == helpers._EntityType.CHAT and not deactivated:
try:
result = await self(functions.messages.DeleteChatUserRequest(
entity.chat_id, types.InputUserSelf(), revoke_history=revoke
))
entity.chat_id, types.InputUserSelf()))
except errors.PeerIdInvalidError:
# Happens if we didn't have the deactivated information
result = None
@ -486,16 +475,6 @@ class DialogMethods:
Creates a `Conversation <telethon.tl.custom.conversation.Conversation>`
with the given entity.
.. note::
This Conversation API has certain shortcomings, such as lacking
persistence, poor interaction with other event handlers, and
overcomplicated usage for anything beyond the simplest case.
If you plan to interact with a bot without handlers, this works
fine, but when running a bot yourself, you may instead prefer
to follow the advice from https://stackoverflow.com/a/62246569/.
This is not the same as just sending a message to create a "dialog"
with them, but rather a way to easily send messages and await for
responses or other reactions. Refer to its documentation for more.
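For the simple bot-interaction case the note above considers acceptable, a minimal sketch ('@SomeBot' is a placeholder):

```python
# Sketch assuming an already-connected client; '@SomeBot' is a placeholder.
async def ask_bot(client):
    async with client.conversation('@SomeBot', timeout=30) as conv:
        await conv.send_message('/start')
        reply = await conv.get_response()  # waits for the bot's next message
        print(reply.text)
```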


@ -4,7 +4,6 @@ import os
import pathlib
import typing
import inspect
import asyncio
from ..crypto import AES
@ -24,34 +23,20 @@ if typing.TYPE_CHECKING:
MIN_CHUNK_SIZE = 4096
MAX_CHUNK_SIZE = 512 * 1024
# 2021-01-15, users reported that `errors.TimeoutError` can occur while downloading files.
TIMED_OUT_SLEEP = 1
class _CdnRedirect(Exception):
def __init__(self, cdn_redirect=None):
self.cdn_redirect = cdn_redirect
class _DirectDownloadIter(RequestIter):
async def _init(
self, file, dc_id, offset, stride, chunk_size, request_size, file_size, msg_data, cdn_redirect=None):
self, file, dc_id, offset, stride, chunk_size, request_size, file_size
):
self.request = functions.upload.GetFileRequest(
file, offset=offset, limit=request_size)
self._client = self.client
self._cdn_redirect = cdn_redirect
if cdn_redirect is not None:
self.request = functions.upload.GetCdnFileRequest(cdn_redirect.file_token, offset=offset, limit=request_size)
self._client = await self.client._get_cdn_client(cdn_redirect)
file, offset=offset, limit=request_size)
self.total = file_size
self._stride = stride
self._chunk_size = chunk_size
self._last_part = None
self._msg_data = msg_data
self._timed_out = False
self._exported = dc_id and self._client.session.dc_id != dc_id
self._exported = dc_id and self.client.session.dc_id != dc_id
if not self._exported:
# The used sender will also change if ``FileMigrateError`` occurs
self._sender = self.client._sender
@ -63,12 +48,9 @@ class _DirectDownloadIter(RequestIter):
config = await self.client(functions.help.GetConfigRequest())
for option in config.dc_options:
if option.ip_address == self.client.session.server_address:
await utils.maybe_async(
self.client.session.set_dc(
option.id, option.ip_address, option.port
)
)
await utils.maybe_async(self.client.session.save())
self.client.session.set_dc(
option.id, option.ip_address, option.port)
self.client.session.save()
break
# TODO Figure out why the session may have the wrong DC ID
@ -86,58 +68,18 @@ class _DirectDownloadIter(RequestIter):
async def _request(self):
try:
result = await self._client._call(self._sender, self.request)
self._timed_out = False
result = await self.client._call(self._sender, self.request)
if isinstance(result, types.upload.FileCdnRedirect):
if self.client._mb_entity_cache.self_bot:
raise ValueError('FileCdnRedirect but the GetCdnFileRequest API access for bot users is restricted. Try to change api_id to avoid FileCdnRedirect')
raise _CdnRedirect(result)
if isinstance(result, types.upload.CdnFileReuploadNeeded):
await self.client._call(self.client._sender, functions.upload.ReuploadCdnFileRequest(file_token=self._cdn_redirect.file_token, request_token=result.request_token))
result = await self._client._call(self._sender, self.request)
return result.bytes
raise NotImplementedError # TODO Implement
else:
return result.bytes
except errors.TimedOutError as e:
if self._timed_out:
self.client._log[__name__].warning('Got two timeouts in a row while downloading file')
raise
self._timed_out = True
self.client._log[__name__].info('Got timeout while downloading file, retrying once')
await asyncio.sleep(TIMED_OUT_SLEEP)
return await self._request()
except errors.FileMigrateError as e:
self.client._log[__name__].info('File lives in another DC')
self._sender = await self.client._borrow_exported_sender(e.new_dc)
self._exported = True
return await self._request()
except (errors.FilerefUpgradeNeededError, errors.FileReferenceExpiredError) as e:
# Only implemented for documents which are the ones that may take that long to download
if not self._msg_data \
or not isinstance(self.request.location, types.InputDocumentFileLocation) \
or self.request.location.thumb_size != '':
raise
self.client._log[__name__].info('File ref expired during download; refetching message')
chat, msg_id = self._msg_data
msg = await self.client.get_messages(chat, ids=msg_id)
if not isinstance(msg.media, types.MessageMediaDocument):
raise
document = msg.media.document
# Message media may have been edited for something else
if document.id != self.request.location.id:
raise
self.request.location.file_reference = document.file_reference
return await self._request()
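The branch above refreshes an expired file reference mid-download automatically; a user-level sketch of the same refresh idea, for code calling the public API directly (``chat`` and ``msg`` are placeholders):

```python
from telethon import errors

# Hedged user-level sketch; `chat` and `msg` are placeholders.
async def download_with_refresh(client, chat, msg):
    try:
        return await client.download_media(msg, bytes)
    except errors.FileReferenceExpiredError:
        # The cached file reference went stale; refetch the message for a fresh one.
        msg = await client.get_messages(chat, ids=msg.id)
        return await client.download_media(msg, bytes)
```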
async def close(self):
if not self._sender:
return
@ -161,12 +103,12 @@ class _DirectDownloadIter(RequestIter):
class _GenericDownloadIter(_DirectDownloadIter):
async def _load_next_chunk(self):
async def _load_next_chunk(self, mask=MIN_CHUNK_SIZE - 1):
# 1. Fetch enough for one chunk
data = b''
# 1.1. ``bad`` is how much into the data we have we need to offset
bad = self.request.offset % self.request.limit
bad = self.request.offset & mask
before = self.request.offset
# 1.2. We have to fetch from a valid offset, so remove that bad part
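Both variants above align the requested offset down to a boundary Telegram accepts; a small worked example of the bitmask form using the module's ``MIN_CHUNK_SIZE`` (the offset value is illustrative):

```python
MIN_CHUNK_SIZE = 4096           # as defined near the top of this module

offset = 10_000                 # illustrative requested offset
mask = MIN_CHUNK_SIZE - 1       # 4095; valid because MIN_CHUNK_SIZE is a power of two

bad = offset & mask             # 10_000 & 4095 == 1808 bytes past the aligned boundary
aligned = offset - bad          # 8192, an offset the API will accept
assert aligned % MIN_CHUNK_SIZE == 0
# After fetching from `aligned`, the first `bad` bytes are sliced off the result.
```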
@ -242,8 +184,7 @@ class DownloadMethods:
The output file path, directory, or stream-like object.
If the path exists and is a file, it will be overwritten.
If file is the type `bytes`, it will be downloaded in-memory
and returned as a bytestring (i.e. ``file=bytes``, without
parentheses or quotes).
as a bytestring (e.g. ``file=bytes``).
download_big (`bool`, optional):
Whether to use the big version of the available photos.
@ -291,11 +232,11 @@ class DownloadMethods:
if isinstance(photo, (types.UserProfilePhoto, types.ChatPhoto)):
dc_id = photo.dc_id
which = photo.photo_big if download_big else photo.photo_small
loc = types.InputPeerPhotoFileLocation(
# min users can be used to download profile photos
# self.get_input_entity would otherwise not accept those
peer=utils.get_input_peer(entity, check_hash=False),
photo_id=photo.photo_id,
peer=await self.get_input_entity(entity),
local_id=which.local_id,
volume_id=which.volume_id,
big=download_big
)
else:
@ -353,8 +294,7 @@ class DownloadMethods:
The output file path, directory, or stream-like object.
If the path exists and is a file, it will be overwritten.
If file is the type `bytes`, it will be downloaded in-memory
and returned as a bytestring (i.e. ``file=bytes``, without
parentheses or quotes).
as a bytestring (e.g. ``file=bytes``).
progress_callback (`callable`, optional):
A callback function accepting two parameters:
@ -370,20 +310,13 @@ class DownloadMethods:
The parameter should be an integer index between ``0`` and
``len(sizes)``. ``0`` will download the smallest thumbnail,
and ``len(sizes) - 1`` will download the largest thumbnail.
You can also use negative indices, which work the same as
they do in Python's `list`.
You can also use negative indices.
You can also pass the :tl:`PhotoSize` instance to use.
Alternatively, the thumb size type `str` may be used.
In short, use ``thumb=0`` if you want the smallest thumbnail
and ``thumb=-1`` if you want the largest thumbnail.
.. note::
The largest thumbnail may be a video instead of a photo,
as they are available since layer 116 and are bigger than
any of the photos.
Returns
`None` if no media was provided, or if it was Empty. On success
the file path is returned since it may differ from the one given.
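A short sketch of the ``thumb`` argument described above (``message`` is a placeholder carrying a photo or document):

```python
# Sketch assuming `message` is a Message with media; values follow the doc above.
async def grab_thumbnails(client, message):
    small = await client.download_media(message, thumb=0)           # smallest
    large = await client.download_media(message, thumb=-1)          # largest
    blob = await client.download_media(message, bytes, thumb=-1)    # in-memory bytes
    return small, large, blob
```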
@ -397,9 +330,6 @@ class DownloadMethods:
path = await message.download_media()
await message.download_media(filename)
# Downloading to memory
blob = await client.download_media(message, bytes)
# Printing download progress
def callback(current, total):
print('Downloaded', current, 'out of', total,
@ -407,16 +337,10 @@ class DownloadMethods:
await client.download_media(message, progress_callback=callback)
"""
# Downloading large documents may be slow enough to require a new file reference
# to be obtained mid-download. Store (input chat, message id) so that the message
# can be re-fetched.
msg_data = None
# TODO This won't work for messageService
if isinstance(message, types.Message):
date = message.date
media = message.media
msg_data = (message.input_chat, message.id) if message.input_chat else None
else:
date = datetime.datetime.now()
media = message
@ -424,11 +348,6 @@ class DownloadMethods:
if isinstance(media, str):
media = utils.resolve_bot_file_id(media)
if isinstance(media, types.MessageService):
if isinstance(message.action,
types.MessageActionChatEditPhoto):
media = media.photo
if isinstance(media, types.MessageMediaWebPage):
if isinstance(media.webpage, types.WebPage):
media = media.webpage.document or media.webpage.photo
@ -439,7 +358,7 @@ class DownloadMethods:
)
elif isinstance(media, (types.MessageMediaDocument, types.Document)):
return await self._download_document(
media, file, date, thumb, progress_callback, msg_data
media, file, date, thumb, progress_callback
)
elif isinstance(media, types.MessageMediaContact) and thumb is None:
return self._download_contact(
@ -513,31 +432,6 @@ class DownloadMethods:
data = await client.download_file(input_file, bytes)
print(data[:16])
"""
return await self._download_file(
input_location,
file,
part_size_kb=part_size_kb,
file_size=file_size,
progress_callback=progress_callback,
dc_id=dc_id,
key=key,
iv=iv,
)
async def _download_file(
self: 'TelegramClient',
input_location: 'hints.FileLike',
file: 'hints.OutFileLike' = None,
*,
part_size_kb: float = None,
file_size: int = None,
progress_callback: 'hints.ProgressCallback' = None,
dc_id: int = None,
key: bytes = None,
iv: bytes = None,
msg_data: tuple = None,
cdn_redirect: types.upload.FileCdnRedirect = None
) -> typing.Optional[bytes]:
if not part_size_kb:
if not file_size:
part_size_kb = 64 # Reasonable default
@ -563,8 +457,8 @@ class DownloadMethods:
f = file
try:
async for chunk in self._iter_download(
input_location, request_size=part_size, dc_id=dc_id, msg_data=msg_data, cdn_redirect=cdn_redirect):
async for chunk in self.iter_download(
input_location, request_size=part_size, dc_id=dc_id):
if iv and key:
chunk = AES.decrypt_ige(chunk, key, iv)
r = f.write(chunk)
@ -582,20 +476,6 @@ class DownloadMethods:
if in_memory:
return f.getvalue()
except _CdnRedirect as e:
self._log[__name__].info('FileCdnRedirect to CDN data center %s', e.cdn_redirect.dc_id)
return await self._download_file(
input_location=input_location,
file=file,
part_size_kb=part_size_kb,
file_size=file_size,
progress_callback=progress_callback,
dc_id=e.cdn_redirect.dc_id,
key=e.cdn_redirect.encryption_key,
iv=e.cdn_redirect.encryption_iv,
msg_data=msg_data,
cdn_redirect=e.cdn_redirect
)
finally:
if isinstance(file, str) or in_memory:
f.close()
@ -682,7 +562,7 @@ class DownloadMethods:
# Streaming `media` to an output file
# After the iteration ends, the sender is cleaned up
with open('photo.jpg', 'wb') as fd:
async for chunk in client.iter_download(media):
async for chunk client.iter_download(media):
fd.write(chunk)
# Fetching only the header of a file (32 bytes)
@ -695,31 +575,6 @@ class DownloadMethods:
await stream.close()
assert len(header) == 32
"""
return self._iter_download(
file,
offset=offset,
stride=stride,
limit=limit,
chunk_size=chunk_size,
request_size=request_size,
file_size=file_size,
dc_id=dc_id,
)
def _iter_download(
self: 'TelegramClient',
file: 'hints.FileLike',
*,
offset: int = 0,
stride: int = None,
limit: int = None,
chunk_size: int = None,
request_size: int = MAX_CHUNK_SIZE,
file_size: int = None,
dc_id: int = None,
msg_data: tuple = None,
cdn_redirect: types.upload.FileCdnRedirect = None
):
info = utils._get_file_info(file)
if info.dc_id is not None:
dc_id = info.dc_id
@ -748,8 +603,7 @@ class DownloadMethods:
if chunk_size == request_size \
and offset % MIN_CHUNK_SIZE == 0 \
and stride % MIN_CHUNK_SIZE == 0 \
and (limit is None or offset % limit == 0):
and stride % MIN_CHUNK_SIZE == 0:
cls = _DirectDownloadIter
self._log[__name__].info('Starting direct file download in chunks of '
'%d at %d, stride %d', request_size, offset, stride)
@ -767,9 +621,7 @@ class DownloadMethods:
stride=stride,
chunk_size=chunk_size,
request_size=request_size,
file_size=file_size,
msg_data=msg_data,
cdn_redirect=cdn_redirect
file_size=file_size
)
# endregion
@ -778,44 +630,12 @@ class DownloadMethods:
@staticmethod
def _get_thumb(thumbs, thumb):
if not thumbs:
return None
# Seems Telegram has changed the order and put `PhotoStrippedSize`
# last while this is the smallest (layer 116). Ensure we have the
# sizes sorted correctly with a custom function.
def sort_thumbs(thumb):
if isinstance(thumb, types.PhotoStrippedSize):
return 1, len(thumb.bytes)
if isinstance(thumb, types.PhotoCachedSize):
return 1, len(thumb.bytes)
if isinstance(thumb, types.PhotoSize):
return 1, thumb.size
if isinstance(thumb, types.PhotoSizeProgressive):
return 1, max(thumb.sizes)
if isinstance(thumb, types.VideoSize):
return 2, thumb.size
# Empty size or invalid should go last
return 0, 0
thumbs = list(sorted(thumbs, key=sort_thumbs))
for i in reversed(range(len(thumbs))):
# :tl:`PhotoPathSize` is used for animated stickers preview, and the thumb is actually
# a SVG path of the outline. Users expect thumbnails to be JPEG files, so pretend this
# thumb size doesn't actually exist (#1655).
if isinstance(thumbs[i], types.PhotoPathSize):
thumbs.pop(i)
if thumb is None:
return thumbs[-1]
elif isinstance(thumb, int):
return thumbs[thumb]
elif isinstance(thumb, str):
return next((t for t in thumbs if t.type == thumb), None)
elif isinstance(thumb, (types.PhotoSize, types.PhotoCachedSize,
types.PhotoStrippedSize, types.VideoSize)):
types.PhotoStrippedSize)):
return thumb
else:
return None
@ -850,24 +670,14 @@ class DownloadMethods:
if not isinstance(photo, types.Photo):
return
# Include video sizes here (but they may be None so provide an empty list)
size = self._get_thumb(photo.sizes + (photo.video_sizes or []), thumb)
size = self._get_thumb(photo.sizes, thumb)
if not size or isinstance(size, types.PhotoSizeEmpty):
return
if isinstance(size, types.VideoSize):
file = self._get_proper_filename(file, 'video', '.mp4', date=date)
else:
file = self._get_proper_filename(file, 'photo', '.jpg', date=date)
file = self._get_proper_filename(file, 'photo', '.jpg', date=date)
if isinstance(size, (types.PhotoCachedSize, types.PhotoStrippedSize)):
return self._download_cached_photo_size(size, file)
if isinstance(size, types.PhotoSizeProgressive):
file_size = max(size.sizes)
else:
file_size = size.size
result = await self.download_file(
types.InputPhotoFileLocation(
id=photo.id,
@ -876,7 +686,7 @@ class DownloadMethods:
thumb_size=size.type
),
file,
file_size=file_size,
file_size=size.size,
progress_callback=progress_callback
)
return result if file is bytes else file
@ -906,7 +716,7 @@ class DownloadMethods:
return kind, possible_names
async def _download_document(
self, document, file, date, thumb, progress_callback, msg_data):
self, document, file, date, thumb, progress_callback):
"""Specialized version of .download_media() for documents."""
if isinstance(document, types.MessageMediaDocument):
document = document.document
@ -923,13 +733,10 @@ class DownloadMethods:
else:
file = self._get_proper_filename(file, 'photo', '.jpg', date=date)
size = self._get_thumb(document.thumbs, thumb)
if not size or isinstance(size, types.PhotoSizeEmpty):
return
if isinstance(size, (types.PhotoCachedSize, types.PhotoStrippedSize)):
return self._download_cached_photo_size(size, file)
result = await self._download_file(
result = await self.download_file(
types.InputDocumentFileLocation(
id=document.id,
access_hash=document.access_hash,
@ -938,8 +745,7 @@ class DownloadMethods:
),
file,
file_size=size.size if size else document.size,
progress_callback=progress_callback,
msg_data=msg_data,
progress_callback=progress_callback
)
return result if file is bytes else file
@ -966,19 +772,22 @@ class DownloadMethods:
'END:VCARD\n'
).format(f=first_name, l=last_name, p=phone_number).encode('utf-8')
file = cls._get_proper_filename(
file, 'contact', '.vcard',
possible_names=[first_name, phone_number, last_name]
)
if file is bytes:
return result
f = file if hasattr(file, 'write') else open(file, 'wb')
elif isinstance(file, str):
file = cls._get_proper_filename(
file, 'contact', '.vcard',
possible_names=[first_name, phone_number, last_name]
)
f = open(file, 'wb')
else:
f = file
try:
f.write(result)
finally:
# Only close the stream if we opened it
if f != file:
if isinstance(file, str):
f.close()
return file
@ -995,20 +804,21 @@ class DownloadMethods:
)
# TODO Better way to get opened handles of files and auto-close
kind, possible_names = cls._get_kind_and_names(web.attributes)
file = cls._get_proper_filename(
file, kind, utils.get_extension(web),
possible_names=possible_names
)
if file is bytes:
in_memory = file is bytes
if in_memory:
f = io.BytesIO()
elif hasattr(file, 'write'):
f = file
else:
elif isinstance(file, str):
kind, possible_names = cls._get_kind_and_names(web.attributes)
file = cls._get_proper_filename(
file, kind, utils.get_extension(web),
possible_names=possible_names
)
f = open(file, 'wb')
else:
f = file
try:
async with aiohttp.ClientSession() as session:
with aiohttp.ClientSession() as session:
# TODO Use progress_callback; get content length from response
# https://github.com/telegramdesktop/tdesktop/blob/c7e773dd9aeba94e2be48c032edc9a78bb50234e/Telegram/SourceFiles/ui/images.cpp#L1318-L1319
async with session.get(web.url) as response:
@ -1018,10 +828,10 @@ class DownloadMethods:
break
f.write(chunk)
finally:
if f != file:
if isinstance(file, str) or file is bytes:
f.close()
return f.getvalue() if file is bytes else file
return f.getvalue() if in_memory else file
@staticmethod
def _get_proper_filename(file, kind, extension,


@ -67,7 +67,7 @@ class MessageParseMethods:
entities[i].offset, entities[i].length,
await self.get_input_entity(user)
)
return True
return True
except (ValueError, TypeError):
return False
@ -83,19 +83,10 @@ class MessageParseMethods:
if not parse_mode:
return message, []
original_message = message
message, msg_entities = parse_mode.parse(message)
if original_message and not message and not msg_entities:
raise ValueError("Failed to parse message")
for i in reversed(range(len(msg_entities))):
e = msg_entities[i]
if not e.length:
# 0-length MessageEntity is no longer valid #3884.
# Because the user can provide their own parser (with reasonable 0-length
# entities), strip them here rather than fixing the built-in parsers.
del msg_entities[i]
elif isinstance(e, types.MessageEntityTextUrl):
if isinstance(e, types.MessageEntityTextUrl):
m = re.match(r'^@|\+|tg://user\?id=(\d+)', e.url)
if m:
user = int(m.group(1)) if m.group(1) else e.url
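A standalone demonstration of the mention pattern used above; the inputs are illustrative:

```python
import re

pattern = r'^@|\+|tg://user\?id=(\d+)'

m = re.match(pattern, 'tg://user?id=12345')
assert m and m.group(1) == '12345'   # numeric ID extracted from the deep link

m = re.match(pattern, '@someusername')
assert m and m.group(1) is None      # matched, so the raw URL is resolved instead
```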
@ -132,6 +123,7 @@ class MessageParseMethods:
random_to_id = {}
id_to_message = {}
sched_to_message = {} # scheduled IDs may collide with normal IDs
for update in updates:
if isinstance(update, types.UpdateMessageID):
random_to_id[update.random_id] = update.id
@ -142,7 +134,7 @@ class MessageParseMethods:
# Pinning a message with `updatePinnedMessage` seems to
# always produce a service message we can't map so return
# it directly. The same happens for kicking users.
# it directly.
#
# It could also be a list (e.g. when sending albums).
#
@ -165,23 +157,20 @@ class MessageParseMethods:
elif (isinstance(update, types.UpdateEditChannelMessage)
and utils.get_peer_id(request.peer) ==
utils.get_peer_id(update.message.peer_id)):
utils.get_peer_id(update.message.to_id)):
if request.id == update.message.id:
update.message._finish_init(self, entities, input_chat)
return update.message
elif isinstance(update, types.UpdateNewScheduledMessage):
update.message._finish_init(self, entities, input_chat)
# Scheduled IDs may collide with normal IDs. However, for a
# single request there *shouldn't* be a mix between "some
# scheduled and some not".
id_to_message[update.message.id] = update.message
sched_to_message[update.message.id] = update.message
elif isinstance(update, types.UpdateMessagePoll):
if request.media.poll.id == update.poll_id:
m = types.Message(
id=request.id,
peer_id=utils.get_peer(request.peer),
to_id=utils.get_peer(request.peer),
media=types.MessageMediaPoll(
poll=update.poll,
results=update.results
@ -193,15 +182,22 @@ class MessageParseMethods:
if request is None:
return id_to_message
random_id = request if isinstance(request, (int, list)) else getattr(request, 'random_id', None)
if random_id is None:
# Can happen when pinning a message does not actually produce a service message.
self._log[__name__].warning(
'No random_id in %s to map to, returning None message for %s', request, result)
return None
# Use the scheduled mapping if we got a request with a scheduled message
#
# This breaks if the schedule date is too soon, however, since the message
# is then sent immediately, so have a fallback.
if getattr(request, 'schedule_date', None) is None:
mapping = id_to_message
opposite = {} # if there's no schedule it can never be scheduled
else:
mapping = sched_to_message
opposite = id_to_message # scheduled may be treated as normal, though
random_id = request if isinstance(request, (int, list)) else request.random_id
if not utils.is_list_like(random_id):
msg = id_to_message.get(random_to_id.get(random_id))
msg = mapping.get(random_to_id.get(random_id))
if not msg:
msg = opposite.get(random_to_id.get(random_id))
if not msg:
self._log[__name__].warning(
@ -210,21 +206,24 @@ class MessageParseMethods:
return msg
try:
return [id_to_message[random_to_id[rnd]] for rnd in random_id]
return [mapping[random_to_id[rnd]] for rnd in random_id]
except KeyError:
# Sometimes forwards fail (`MESSAGE_ID_INVALID` if a message gets
# deleted or `WORKER_BUSY_TOO_LONG_RETRY` if there are issues at
# Telegram), in which case we get some "missing" message mappings.
# Log them with the hope that we can better work around them.
#
# This also happens when trying to forward messages that can't
# be forwarded because they don't exist (0, service, deleted)
# among others which could be (like deleted or existing).
self._log[__name__].warning(
'Request %s had missing message mappings %s', request, result)
try:
return [opposite[random_to_id[rnd]] for rnd in random_id]
except KeyError:
# Sometimes forwards fail (`MESSAGE_ID_INVALID` if a message gets
# deleted or `WORKER_BUSY_TOO_LONG_RETRY` if there are issues at
# Telegram), in which case we get some "missing" message mappings.
# Log them with the hope that we can better work around them.
#
# This also happens when trying to forward messages that can't
# be forwarded because they don't exist (0, service, deleted)
# among others which could be (like deleted or existing).
self._log[__name__].warning(
'Request %s had missing message mappings %s', request, result)
return [
id_to_message.get(random_to_id[rnd])
(mapping.get(random_to_id[rnd]) or opposite.get(random_to_id[rnd]))
if rnd in random_to_id
else None
for rnd in random_id


@ -1,7 +1,6 @@
import inspect
import itertools
import typing
import warnings
from .. import helpers, utils, errors, hints
from ..requestiter import RequestIter
@ -19,8 +18,7 @@ class _MessagesIter(RequestIter):
"""
async def _init(
self, entity, offset_id, min_id, max_id,
from_user, offset_date, add_offset, filter, search, reply_to,
scheduled
from_user, offset_date, add_offset, filter, search
):
# Note that entity being `None` will perform a global search.
if entity:
@ -59,50 +57,34 @@ class _MessagesIter(RequestIter):
if from_user:
from_user = await self.client.get_input_entity(from_user)
ty = helpers._entity_type(from_user)
if ty != helpers._EntityType.USER:
from_user = None # Ignore from_user unless it's a user
if from_user:
self.from_id = await self.client.get_peer_id(from_user)
else:
self.from_id = None
# `messages.searchGlobal` only works with text `search` or `filter` queries.
# If we want to perform a global search with `from_user` we have to perform
# a normal `messages.search`, *but* we can make the entity be `inputPeerEmpty`.
if not self.entity and from_user:
# `messages.searchGlobal` only works with text `search` queries.
# If we want to perform a global search with `from_user` or `filter`,
# we have to perform a normal `messages.search`, *but* we can make the
# entity be `inputPeerEmpty`.
if not self.entity and (filter or from_user):
self.entity = types.InputPeerEmpty()
if filter is None:
filter = types.InputMessagesFilterEmpty()
else:
filter = filter() if isinstance(filter, type) else filter
if not self.entity:
self.request = functions.messages.SearchGlobalRequest(
q=search or '',
filter=filter,
min_date=None,
max_date=offset_date,
offset_rate=0,
offset_rate=offset_date,
offset_peer=types.InputPeerEmpty(),
offset_id=offset_id,
limit=1
)
elif scheduled:
self.request = functions.messages.GetScheduledHistoryRequest(
peer=entity,
hash=0
)
elif reply_to is not None:
self.request = functions.messages.GetRepliesRequest(
peer=self.entity,
msg_id=reply_to,
offset_id=offset_id,
offset_date=offset_date,
add_offset=add_offset,
limit=1,
max_id=0,
min_id=0,
hash=0
)
elif search is not None or not isinstance(filter, types.InputMessagesFilterEmpty) or from_user:
elif search is not None or filter or from_user:
if filter is None:
filter = types.InputMessagesFilterEmpty()
# Telegram completely ignores `from_id` in private chats
ty = helpers._entity_type(self.entity)
if ty == helpers._EntityType.USER:
@ -117,7 +99,7 @@ class _MessagesIter(RequestIter):
self.request = functions.messages.SearchRequest(
peer=self.entity,
q=search or '',
filter=filter,
filter=filter() if isinstance(filter, type) else filter,
min_date=None,
max_date=offset_date,
offset_id=offset_id,
@ -136,8 +118,7 @@ class _MessagesIter(RequestIter):
#
# Even better, using `filter` and `from_id` seems to always
# trigger `RPC_CALL_FAIL` which is "internal issues"...
if not isinstance(filter, types.InputMessagesFilterEmpty) \
and offset_date and not search and not offset_id:
if filter and offset_date and not search and not offset_id:
async for m in self.client.iter_messages(
self.entity, 1, offset_date=offset_date):
self.request.offset_id = m.id + 1
@ -190,7 +171,7 @@ class _MessagesIter(RequestIter):
messages = reversed(r.messages) if self.reverse else r.messages
for message in messages:
if (isinstance(message, types.MessageEmpty)
or self.from_id and message.sender_id != self.from_id):
or self.from_id and message.from_id != self.from_id):
continue
if not self._message_in_range(message):
@ -204,30 +185,13 @@ class _MessagesIter(RequestIter):
message._finish_init(self.client, entities, self.entity)
self.buffer.append(message)
# Not a slice (using offset would return the same, with e.g. SearchGlobal).
if isinstance(r, types.messages.Messages):
return True
# Some channels are "buggy" and may return less messages than
# requested (apparently, the messages excluded are, for example,
# "not displayable due to local laws").
#
# This means it's not safe to rely on `len(r.messages) < req.limit` as
# the stop condition. Unfortunately more requests must be made.
#
# However we can still check if the highest ID is equal to or lower
# than the limit, in which case there won't be any more messages
# because the lowest message ID is 1.
#
# We also assume the API will always return, at least, one message if
# there is more to fetch.
if not r.messages or (not self.reverse and r.messages[0].id <= self.request.limit):
if len(r.messages) < self.request.limit:
return True
# Get the last message that's not empty (in some rare cases
# it can happen that the last message is :tl:`MessageEmpty`)
if self.buffer:
self._update_offset(self.buffer[-1], r)
self._update_offset(self.buffer[-1])
else:
# There are some cases where all the messages we get start
# being empty. This can happen on migrated mega-groups if
@ -253,7 +217,7 @@ class _MessagesIter(RequestIter):
return True
def _update_offset(self, last_message, response):
def _update_offset(self, last_message):
"""
After making the request, update its offset with the last message.
"""
@ -269,16 +233,11 @@ class _MessagesIter(RequestIter):
# (only for the first request), it's safe to just clear it off.
self.request.max_date = None
else:
# getHistory, searchGlobal and getReplies call it offset_date
# getHistory and searchGlobal call it offset_date
self.request.offset_date = last_message.date
if isinstance(self.request, functions.messages.SearchGlobalRequest):
if last_message.input_chat:
self.request.offset_peer = last_message.input_chat
else:
self.request.offset_peer = types.InputPeerEmpty()
self.request.offset_rate = getattr(response, 'next_rate', 0)
self.request.offset_peer = last_message.input_chat
class _IDsIter(RequestIter):
@ -311,7 +270,7 @@ class _IDsIter(RequestIter):
else:
r = await self.client(functions.messages.GetMessagesRequest(ids))
if self._entity:
from_id = await self.client._get_peer(self._entity)
from_id = await self.client.get_peer_id(self._entity)
if isinstance(r, types.messages.MessagesNotModified):
self.buffer.extend(None for _ in ids)
@ -330,7 +289,7 @@ class _IDsIter(RequestIter):
# arbitrary chats. Validate these unless ``from_id is None``.
for message in r.messages:
if isinstance(message, types.MessageEmpty) or (
from_id and message.peer_id != from_id):
from_id and message.chat_id != from_id):
self.buffer.append(None)
else:
message._finish_init(self.client, entities, self._entity)
@ -358,9 +317,7 @@ class MessageMethods:
from_user: 'hints.EntityLike' = None,
wait_time: float = None,
ids: 'typing.Union[int, typing.Sequence[int]]' = None,
reverse: bool = False,
reply_to: int = None,
scheduled: bool = False
reverse: bool = False
) -> 'typing.Union[_MessagesIter, _IDsIter]':
"""
Iterator over the messages for the given chat.
@ -427,7 +384,8 @@ class MessageMethods:
containing photos.
from_user (`entity`):
Only messages from this entity will be returned.
Only messages from this user will be returned.
This parameter will be ignored if it is not a user.
wait_time (`int`):
Wait time (in seconds) between different
@ -467,30 +425,6 @@ class MessageMethods:
You cannot use this if both `entity` and `ids` are `None`.
reply_to (`int`, optional):
If set to a message ID, the messages that reply to this ID
will be returned. This feature is also known as comments in
posts of broadcast channels, or viewing threads in groups.
This feature can only be used in broadcast channels and their
linked megagroups. Using it in a chat or private conversation
will result in ``telethon.errors.PeerIdInvalidError`` to occur.
When using this parameter, the ``filter`` and ``search``
parameters have no effect, since Telegram's API doesn't
support searching messages in replies.
.. note::
This feature is used to get replies to a message in the
*discussion* group. If the same broadcast channel sends
a message and replies to it itself, that reply will not
be included in the results.
scheduled (`bool`, optional):
If set to `True`, messages which are scheduled will be returned.
All other parameters will be ignored for this, except `entity`.
Yields
Instances of `Message <telethon.tl.custom.message.Message>`.
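A sketch of the ``scheduled`` flag documented above, assuming the newer signature that accepts it (``chat`` is a placeholder):

```python
# Sketch assuming an already-connected client; `chat` is a placeholder.
async def show_scheduled(client, chat):
    # With scheduled=True only the scheduled queue is returned; other
    # filters are ignored in this mode, as noted above.
    async for message in client.iter_messages(chat, scheduled=True):
        print(message.id, message.date, message.text)
```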
@ -517,10 +451,6 @@ class MessageMethods:
from telethon.tl.types import InputMessagesFilterPhotos
async for message in client.iter_messages(chat, filter=InputMessagesFilterPhotos):
print(message.photo)
# Getting comments from a post in a channel:
async for message in client.iter_messages(channel, reply_to=123):
print(message.chat.title, message.text)
"""
if ids is not None:
if not utils.is_list_like(ids):
@ -548,14 +478,10 @@ class MessageMethods:
offset_date=offset_date,
add_offset=add_offset,
filter=filter,
search=search,
reply_to=reply_to,
scheduled=scheduled
search=search
)
async def get_messages(
self: 'TelegramClient', *args, **kwargs
) -> typing.Union['hints.TotalList', typing.Optional['types.Message']]:
async def get_messages(self: 'TelegramClient', *args, **kwargs) -> 'hints.TotalList':
"""
Same as `iter_messages()`, but returns a
`TotalList <telethon.helpers.TotalList>` instead.
@ -610,42 +536,20 @@ class MessageMethods:
# region Message sending/editing/deleting
async def _get_comment_data(
self: 'TelegramClient',
entity: 'hints.EntityLike',
message: 'typing.Union[int, types.Message]'
):
r = await self(functions.messages.GetDiscussionMessageRequest(
peer=entity,
msg_id=utils.get_message_id(message)
))
m = min(r.messages, key=lambda msg: msg.id)
chat = next(c for c in r.chats if c.id == m.peer_id.channel_id)
return utils.get_input_peer(chat), m.id
async def send_message(
self: 'TelegramClient',
entity: 'hints.EntityLike',
message: 'hints.MessageLike' = '',
*,
reply_to: 'typing.Union[int, types.Message]' = None,
attributes: 'typing.Sequence[types.TypeDocumentAttribute]' = None,
parse_mode: typing.Optional[str] = (),
formatting_entities: typing.Optional[typing.List[types.TypeMessageEntity]] = None,
link_preview: bool = True,
file: 'typing.Union[hints.FileLike, typing.Sequence[hints.FileLike]]' = None,
thumb: 'hints.FileLike' = None,
force_document: bool = False,
clear_draft: bool = False,
buttons: typing.Optional['hints.MarkupLike'] = None,
buttons: 'hints.MarkupLike' = None,
silent: bool = None,
background: bool = None,
supports_streaming: bool = False,
schedule: 'hints.DateLike' = None,
comment_to: 'typing.Union[int, types.Message]' = None,
nosound_video: bool = None,
send_as: typing.Optional['hints.EntityLike'] = None,
message_effect_id: typing.Optional[int] = None
schedule: 'hints.DateLike' = None
) -> 'types.Message':
"""
Sends a message to the specified user, chat or channel.
@ -680,19 +584,12 @@ class MessageMethods:
Whether to reply to a message or not. If an integer is provided,
it should be the ID of the message that it should reply to.
attributes (`list`, optional):
Optional attributes that override the inferred ones, like
:tl:`DocumentAttributeFilename` and so on.
parse_mode (`object`, optional):
See the `TelegramClient.parse_mode
<telethon.client.messageparse.MessageParseMethods.parse_mode>`
property for allowed values. Markdown parsing will be used by
default.
formatting_entities (`list`, optional):
A list of message formatting entities. When provided, the ``parse_mode`` is ignored.
link_preview (`bool`, optional):
Should the link preview be shown?
@ -700,17 +597,6 @@ class MessageMethods:
Sends a message with a file attached (e.g. a photo,
video, audio or document). The ``message`` may be empty.
thumb (`str` | `bytes` | `file`, optional):
Optional JPEG thumbnail (for documents). **Telegram will
ignore this parameter** unless you pass a ``.jpg`` file!
The file must also be small in dimensions and in disk size.
Successful thumbnails were files below 20kB and 320x320px.
Width/height and dimensions/size ratios may be important.
For Telegram to accept a thumbnail, you must provide the
dimensions of the underlying media through ``attributes=``
with :tl:`DocumentAttributesVideo` or by installing the
optional ``hachoir`` dependency.
force_document (`bool`, optional):
Whether to send the given file as a document or not.
@ -736,48 +622,11 @@ class MessageMethods:
channel or not. Defaults to `False`, which means it will
notify them. Set it to `True` to alter this behaviour.
background (`bool`, optional):
Whether the message should be sent in the background.
supports_streaming (`bool`, optional):
Whether the sent video supports streaming or not. Note that
Telegram only recognizes as streamable some formats like MP4,
and others like AVI or MKV will not work. You should convert
these to MP4 before sending if you want them to be streamable.
Unsupported formats will result in ``VideoContentTypeError``.
schedule (`hints.DateLike`, optional):
If set, the message won't send immediately, and instead
it will be scheduled to be automatically sent at a later
time.
comment_to (`int` | `Message <telethon.tl.custom.message.Message>`, optional):
Similar to ``reply_to``, but replies in the linked group of a
broadcast channel instead (effectively leaving a "comment to"
the specified message).
This parameter takes precedence over ``reply_to``. If there is
no linked chat, `telethon.errors.MsgIdInvalidError` is raised.
nosound_video (`bool`, optional):
Only applicable when sending a video file without an audio
track. If set to ``True``, the video will be displayed in
Telegram as a video. If set to ``False``, Telegram will attempt
to display the video as an animated gif. (It may still display
as a video due to other factors.) The value is ignored if set
on non-video files. This is set to ``True`` for albums, as gifs
cannot be sent in albums.
send_as (`entity`):
Unique identifier (int) or username (str) of the chat or channel to send the message as.
You can use this to send the message on behalf of a chat or channel where you have appropriate permissions.
Use the GetSendAs request to retrieve the list of message sender identifiers that can be used to send messages in the chat.
This setting applies to the current message and will remain effective for future messages unless explicitly changed.
To set this behavior permanently for all messages, use SaveDefaultSendAs.
message_effect_id (`int`, optional):
Unique identifier of the message effect to be added to the message; for private chats only
Returns
The sent `custom.Message <telethon.tl.custom.message.Message>`.
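A sketch of the ``comment_to`` and ``send_as`` arguments documented above, assuming the newer signature that accepts them; the channel, group and post ID are placeholders:

```python
# Hedged sketch; `channel`, '@some_group' and post ID 1234 are placeholders.
async def comment_and_send_as(client, channel):
    # Leave a comment under post 1234 in the channel's linked discussion group.
    await client.send_message(channel, 'Nice post!', comment_to=1234)

    # Send on behalf of the channel itself (requires appropriate permissions).
    await client.send_message('@some_group', 'Hello', send_as=channel)
```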
@ -838,27 +687,14 @@ class MessageMethods:
await client.send_message(chat, 'Hi, future!', schedule=timedelta(minutes=5))
"""
if file is not None:
if isinstance(message, types.Message):
formatting_entities = formatting_entities or message.entities
message = message.message
return await self.send_file(
entity, file, caption=message, reply_to=reply_to,
attributes=attributes, parse_mode=parse_mode,
force_document=force_document, thumb=thumb,
parse_mode=parse_mode, force_document=force_document,
buttons=buttons, clear_draft=clear_draft, silent=silent,
schedule=schedule, supports_streaming=supports_streaming,
formatting_entities=formatting_entities,
comment_to=comment_to, background=background,
nosound_video=nosound_video,
send_as=send_as, message_effect_id=message_effect_id
schedule=schedule
)
entity = await self.get_input_entity(entity)
if comment_to is not None:
entity, reply_to = await self._get_comment_data(entity, comment_to)
else:
reply_to = utils.get_message_id(reply_to)
if isinstance(message, types.Message):
if buttons is None:
markup = message.reply_markup
@ -875,34 +711,27 @@ class MessageMethods:
message.media,
caption=message.message,
silent=silent,
background=background,
reply_to=reply_to,
buttons=markup,
formatting_entities=message.entities,
parse_mode=None, # explicitly disable parse_mode to force using even empty formatting_entities
schedule=schedule,
send_as=send_as, message_effect_id=message_effect_id
entities=message.entities,
schedule=schedule
)
request = functions.messages.SendMessageRequest(
peer=entity,
message=message.message or '',
silent=silent,
background=background,
reply_to=None if reply_to is None else types.InputReplyToMessage(reply_to),
reply_to_msg_id=utils.get_message_id(reply_to),
reply_markup=markup,
entities=message.entities,
clear_draft=clear_draft,
no_webpage=not isinstance(
message.media, types.MessageMediaWebPage),
schedule_date=schedule,
send_as=await self.get_input_entity(send_as) if send_as else None,
effect=message_effect_id
schedule_date=schedule
)
message = message.message
else:
if formatting_entities is None:
message, formatting_entities = await self._parse_message_text(message, parse_mode)
message, msg_ent = await self._parse_message_text(message, parse_mode)
if not message:
raise ValueError(
'The message cannot be empty unless a file is provided'
@ -911,31 +740,26 @@ class MessageMethods:
request = functions.messages.SendMessageRequest(
peer=entity,
message=message,
entities=formatting_entities,
entities=msg_ent,
no_webpage=not link_preview,
reply_to=None if reply_to is None else types.InputReplyToMessage(reply_to),
reply_to_msg_id=utils.get_message_id(reply_to),
clear_draft=clear_draft,
silent=silent,
background=background,
reply_markup=self.build_reply_markup(buttons),
schedule_date=schedule,
send_as=await self.get_input_entity(send_as) if send_as else None,
effect=message_effect_id
schedule_date=schedule
)
result = await self(request)
if isinstance(result, types.UpdateShortSentMessage):
message = types.Message(
id=result.id,
peer_id=await self._get_peer(entity),
to_id=utils.get_peer(entity),
message=message,
date=result.date,
out=result.out,
media=result.media,
entities=result.entities,
reply_markup=request.reply_markup,
ttl_period=result.ttl_period,
reply_to=request.reply_to
reply_markup=request.reply_markup
)
message._finish_init(self, {}, entity)
return message
@ -948,13 +772,9 @@ class MessageMethods:
messages: 'typing.Union[hints.MessageIDLike, typing.Sequence[hints.MessageIDLike]]',
from_peer: 'hints.EntityLike' = None,
*,
background: bool = None,
with_my_score: bool = None,
silent: bool = None,
as_album: bool = None,
schedule: 'hints.DateLike' = None,
drop_author: bool = None,
drop_media_captions: bool = None,
schedule: 'hints.DateLike' = None
) -> 'typing.Sequence[types.Message]':
"""
Forwards the given messages to the specified entity.
@ -984,26 +804,22 @@ class MessageMethods:
the person has the chat muted). Set it to `True` to alter
this behaviour.
background (`bool`, optional):
Whether the message should be forwarded in the background.
with_my_score (`bool`, optional):
Whether forwarded should contain your game score.
as_album (`bool`, optional):
This flag no longer has any effect.
Whether several image messages should be forwarded as an
album (grouped) or not. The default behaviour is to treat
albums specially and send outgoing requests with
``as_album=True`` only for the albums if message objects
are used. If IDs are used it will group by default.
In short, the default should do what you expect,
`True` will group always (even converting separate
images into albums), and `False` will never group.
schedule (`hints.DateLike`, optional):
If set, the message(s) won't forward immediately, and
instead they will be scheduled to be automatically sent
at a later time.
drop_author (`bool`, optional):
Whether to forward messages without quoting the original author.
drop_media_captions (`bool`, optional):
Whether to strip captions from media. Setting this to `True` requires that `drop_author` also be set to `True`.
Returns
The list of forwarded `Message <telethon.tl.custom.message.Message>`,
or a single one if a list wasn't provided as input.
@ -1030,9 +846,6 @@ class MessageMethods:
# Forwarding as a copy
await client.send_message(chat, message)
"""
if as_album is not None:
warnings.warn('the as_album argument is deprecated and no longer has any effect')
single = not utils.is_list_like(messages)
if single:
messages = (messages,)
@ -1045,24 +858,44 @@ class MessageMethods:
else:
from_peer_id = None
def get_key(m):
def _get_key(m):
if isinstance(m, int):
if from_peer_id is not None:
return from_peer_id
return from_peer_id, None
raise ValueError('from_peer must be given if integer IDs are used')
elif isinstance(m, types.Message):
return m.chat_id
return m.chat_id, m.grouped_id
else:
raise TypeError('Cannot forward messages of type {}'.format(type(m)))
# We want to group outgoing chunks differently if we are "smart"
# about sending as album.
#
# Why? We need separate requests for ``as_album=True/False``, so
# if we want that behaviour, when we group messages to create the
# chunks, we need to consider the grouped ID too. But if we don't
# care about that, we don't need to consider it for creating the
# chunks, so we can make less requests.
if as_album is None:
get_key = _get_key
else:
def get_key(m):
return _get_key(m)[0] # Ignore grouped_id
sent = []
for _chat_id, chunk in itertools.groupby(messages, key=get_key):
for chat_id, chunk in itertools.groupby(messages, key=get_key):
chunk = list(chunk)
if isinstance(chunk[0], int):
chat = from_peer
grouped = True if as_album is None else as_album
else:
chat = from_peer or await self.get_input_entity(chunk[0].peer_id)
chat = await chunk[0].get_input_chat()
if as_album is None:
grouped = any(m.grouped_id is not None for m in chunk)
else:
grouped = as_album
chunk = [m.id for m in chunk]
req = functions.messages.ForwardMessagesRequest(
@ -1070,11 +903,11 @@ class MessageMethods:
id=chunk,
to_peer=entity,
silent=silent,
background=background,
with_my_score=with_my_score,
schedule_date=schedule,
drop_author=drop_author,
drop_media_captions=drop_media_captions
# Trying to send a single message as grouped will cause
# GROUPED_MEDIA_INVALID. If more than one message is forwarded
# (even without media...), this error goes away.
grouped=len(chunk) > 1 and grouped,
schedule_date=schedule
)
result = await self(req)
sent.extend(self._get_response_message(req, result, entity))
@ -1084,18 +917,14 @@ class MessageMethods:
async def edit_message(
self: 'TelegramClient',
entity: 'typing.Union[hints.EntityLike, types.Message]',
message: 'typing.Union[int, types.Message, types.InputMessageID, str]' = None,
message: 'hints.MessageLike' = None,
text: str = None,
*,
parse_mode: str = (),
attributes: 'typing.Sequence[types.TypeDocumentAttribute]' = None,
formatting_entities: typing.Optional[typing.List[types.TypeMessageEntity]] = None,
link_preview: bool = True,
file: 'hints.FileLike' = None,
thumb: 'hints.FileLike' = None,
force_document: bool = False,
buttons: typing.Optional['hints.MarkupLike'] = None,
supports_streaming: bool = False,
buttons: 'hints.MarkupLike' = None,
schedule: 'hints.DateLike' = None
) -> 'types.Message':
"""
@ -1110,11 +939,11 @@ class MessageMethods:
from it, so the next parameter will be assumed to be the
message text.
You may also pass a :tl:`InputBotInlineMessageID` or :tl:`InputBotInlineMessageID64`,
You may also pass a :tl:`InputBotInlineMessageID`,
which is the only way to edit messages that were sent
after the user selects an inline query result.
message (`int` | `Message <telethon.tl.custom.message.Message>` | :tl:`InputMessageID` | `str`):
message (`int` | `Message <telethon.tl.custom.message.Message>` | `str`):
The ID of the message (or `Message
<telethon.tl.custom.message.Message>` itself) to be edited.
If the `entity` was a `Message
@ -1131,13 +960,6 @@ class MessageMethods:
property for allowed values. Markdown parsing will be used by
default.
attributes (`list`, optional):
Optional attributes that override the inferred ones, like
:tl:`DocumentAttributeFilename` and so on.
formatting_entities (`list`, optional):
A list of message formatting entities. When provided, the ``parse_mode`` is ignored.
link_preview (`bool`, optional):
Should the link preview be shown?
@ -1145,17 +967,6 @@ class MessageMethods:
The file object that should replace the existing media
in the message.
thumb (`str` | `bytes` | `file`, optional):
Optional JPEG thumbnail (for documents). **Telegram will
ignore this parameter** unless you pass a ``.jpg`` file!
The file must also be small in dimensions and in disk size.
Successful thumbnails were files below 20kB and 320x320px.
Width/height and dimensions/size ratios may be important.
For Telegram to accept a thumbnail, you must provide the
dimensions of the underlying media through ``attributes=``
with :tl:`DocumentAttributesVideo` or by installing the
optional ``hachoir`` dependency.
force_document (`bool`, optional):
Whether to send the given file as a document or not.
@ -1165,13 +976,6 @@ class MessageMethods:
you have signed in as a bot. You can also pass your own
:tl:`ReplyMarkup` here.
supports_streaming (`bool`, optional):
Whether the sent video supports streaming or not. Note that
Telegram only recognizes as streamable some formats like MP4,
and others like AVI or MKV will not work. You should convert
these to MP4 before sending if you want them to be streamable.
Unsupported formats will result in ``VideoContentTypeError``.
schedule (`hints.DateLike`, optional):
If set, the message won't be edited immediately, and instead
it will be scheduled to be automatically edited at a later
@ -1182,7 +986,7 @@ class MessageMethods:
Returns
The edited `Message <telethon.tl.custom.message.Message>`,
unless `entity` was a :tl:`InputBotInlineMessageID` or :tl:`InputBotInlineMessageID64` in which
unless `entity` was a :tl:`InputBotInlineMessageID` in which
case this method returns a boolean.
Raises
@ -1208,28 +1012,24 @@ class MessageMethods:
# or
await client.edit_message(message, 'hello!!!')
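# A hedged sketch, not in the original examples: replace the media of
# the message while editing its caption (assumes 'photo.jpg' exists locally).
await client.edit_message(message, 'new **caption**', file='photo.jpg')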
"""
if isinstance(entity, (types.InputBotInlineMessageID, types.InputBotInlineMessageID64)):
text = text or message
if isinstance(entity, types.InputBotInlineMessageID):
text = message
message = entity
elif isinstance(entity, types.Message):
text = message # Shift the parameters to the right
message = entity
entity = entity.peer_id
entity = entity.to_id
if formatting_entities is None:
text, formatting_entities = await self._parse_message_text(text, parse_mode)
text, msg_entities = await self._parse_message_text(text, parse_mode)
file_handle, media, image = await self._file_to_media(file,
supports_streaming=supports_streaming,
thumb=thumb,
attributes=attributes,
force_document=force_document)
if isinstance(entity, (types.InputBotInlineMessageID, types.InputBotInlineMessageID64)):
if isinstance(entity, types.InputBotInlineMessageID):
request = functions.messages.EditInlineBotMessageRequest(
id=entity,
message=text,
no_webpage=not link_preview,
entities=formatting_entities,
entities=msg_entities,
media=media,
reply_markup=self.build_reply_markup(buttons)
)
@ -1251,7 +1051,7 @@ class MessageMethods:
id=utils.get_message_id(message),
message=text,
no_webpage=not link_preview,
entities=formatting_entities,
entities=msg_entities,
media=media,
reply_markup=self.build_reply_markup(buttons),
schedule_date=schedule
@ -1342,8 +1142,7 @@ class MessageMethods:
message: 'typing.Union[hints.MessageIDLike, typing.Sequence[hints.MessageIDLike]]' = None,
*,
max_id: int = None,
clear_mentions: bool = False,
clear_reactions: bool = False) -> bool:
clear_mentions: bool = False) -> bool:
"""
Marks messages as read and optionally clears mentions.
@ -1377,13 +1176,6 @@ class MessageMethods:
If no message is provided, this will be the only action
taken.
clear_reactions (`bool`):
Whether the reactions badge should be cleared (so that
there are no more reaction notifications) or not for the given entity.
If no message is provided, this will be the only action
taken.
Example
.. code-block:: python
@ -1406,10 +1198,6 @@ class MessageMethods:
entity = await self.get_input_entity(entity)
if clear_mentions:
await self(functions.messages.ReadMentionsRequest(entity))
if max_id is None and not clear_reactions:
return True
if clear_reactions:
await self(functions.messages.ReadReactionsRequest(entity))
if max_id is None:
return True
@ -1428,11 +1216,10 @@ class MessageMethods:
entity: 'hints.EntityLike',
message: 'typing.Optional[hints.MessageIDLike]',
*,
notify: bool = False,
pm_oneside: bool = False
notify: bool = False
):
"""
Pins a message in a chat.
Pins or unpins a message in a chat.
The default behaviour is to *not* notify members, unlike the
official applications.
@ -1445,16 +1232,11 @@ class MessageMethods:
message (`int` | `Message <telethon.tl.custom.message.Message>`):
The message or the message ID to pin. If it's
`None`, all messages will be unpinned instead.
`None`, the message will be unpinned instead.
notify (`bool`, optional):
Whether the pin should notify people or not.
pm_oneside (`bool`, optional):
Whether the message should be pinned for everyone or not.
By default it has the opposite behaviour of official clients,
and it will pin the message for both sides, in private chats.
Example
.. code-block:: python
@ -1462,59 +1244,22 @@ class MessageMethods:
message = await client.send_message(chat, 'Pinotifying is fun!')
await client.pin_message(chat, message, notify=True)
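# A hedged sketch, not in the original examples: pin only on your side
# of a private chat (assumes a Telethon version with pm_oneside).
await client.pin_message(chat, message, pm_oneside=True)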
"""
return await self._pin(entity, message, unpin=False, notify=notify, pm_oneside=pm_oneside)
async def unpin_message(
self: 'TelegramClient',
entity: 'hints.EntityLike',
message: 'typing.Optional[hints.MessageIDLike]' = None,
*,
notify: bool = False
):
"""
Unpins a message in a chat.
If no message ID is specified, all pinned messages will be unpinned.
See also `Message.unpin() <telethon.tl.custom.message.Message.unpin>`.
Arguments
entity (`entity`):
The chat where the message should be pinned.
message (`int` | `Message <telethon.tl.custom.message.Message>`):
The message or the message ID to unpin. If it's
`None`, all messages will be unpinned instead.
Example
.. code-block:: python
# Unpin all messages from a chat
await client.unpin_message(chat)
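# A hedged sketch, not in the original examples: unpin a single message,
# assuming `message` is a previously pinned Message or its ID.
await client.unpin_message(chat, message)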
"""
return await self._pin(entity, message, unpin=True, notify=notify)
async def _pin(self, entity, message, *, unpin, notify=False, pm_oneside=False):
message = utils.get_message_id(message) or 0
entity = await self.get_input_entity(entity)
if message <= 0: # old behaviour accepted negative IDs to unpin
await self(functions.messages.UnpinAllMessagesRequest(entity))
return
request = functions.messages.UpdatePinnedMessageRequest(
peer=entity,
id=message,
silent=not notify,
unpin=unpin,
pm_oneside=pm_oneside
silent=not notify
)
result = await self(request)
# Unpinning does not produce a service message.
# Pinning a message that was already pinned also produces no service message.
# Pinning a message in your own chat does not produce a service message,
# but pinning on a private conversation with someone else does.
if unpin or not result.updates:
# Unpinning does not produce a service message, and technically
# users can pass negative IDs which seem to behave as unpinning too.
if message <= 0:
return
# Pinning in User chats (just with yourself really) does not produce a service message
if helpers._entity_type(entity) == helpers._EntityType.USER:
return
# Pinning a message that doesn't exist would RPC-error earlier
View File
@ -1,33 +1,31 @@
import abc
import inspect
import re
import asyncio
import collections
import logging
import platform
import time
import typing
import datetime
import pathlib
from .. import utils, version, helpers, __name__ as __base_name__
from .. import version, helpers, __name__ as __base_name__
from ..crypto import rsa
from ..entitycache import EntityCache
from ..extensions import markdown
from ..network import MTProtoSender, Connection, ConnectionTcpFull, TcpMTProxy
from ..sessions import Session, SQLiteSession, MemorySession
from ..statecache import StateCache
from ..tl import functions, types
from ..tl.alltlobjects import LAYER
from .._updates import MessageBox, EntityCache as MbEntityCache, SessionState, ChannelState, Entity, EntityType
DEFAULT_DC_ID = 2
DEFAULT_IPV4_IP = '149.154.167.51'
DEFAULT_IPV6_IP = '2001:67c:4e8:f002::a'
DEFAULT_IPV6_IP = '[2001:67c:4e8:f002::a]'
DEFAULT_PORT = 443
if typing.TYPE_CHECKING:
from .telegramclient import TelegramClient
_base_log = logging.getLogger(__base_name__)
__default_log__ = logging.getLogger(__base_name__)
__default_log__.addHandler(logging.NullHandler())
# In seconds, how long to wait before disconnecting an exported sender.
@ -91,7 +89,7 @@ class TelegramBaseClient(abc.ABC):
The API ID you obtained from https://my.telegram.org.
api_hash (`str`):
The API hash you obtained from https://my.telegram.org.
The API ID you obtained from https://my.telegram.org.
connection (`telethon.network.connection.common.Connection`, optional):
The connection instance to be used when creating a new connection
@ -111,11 +109,6 @@ class TelegramBaseClient(abc.ABC):
function parameters for PySocks, like ``(type, 'hostname', port)``.
See https://github.com/Anorov/PySocks#usage-1 for more.
local_addr (`str` | `tuple`, optional):
Local host address (and port, optionally) used to bind the socket to locally.
You only need to use this if you have multiple network cards and
want to use a specific one.
timeout (`int` | `float`, optional):
The timeout in seconds to be used when connecting.
This is **not** the timeout to be used when ``await``'ing for
@ -166,19 +159,13 @@ class TelegramBaseClient(abc.ABC):
was for 21s, it would ``raise FloodWaitError`` instead. Values
larger than a day (like ``float('inf')``) will be changed to a day.
raise_last_call_error (`bool`, optional):
When API calls fail in a way that causes Telethon to retry
automatically, should the RPC error of the last attempt be raised
instead of a generic ValueError. This is mostly useful for
detecting when Telegram has internal issues.
device_model (`str`, optional):
"Device model" to be sent when creating the initial connection.
Defaults to 'PC (n)bit' derived from ``platform.uname().machine``, or its direct value if unknown.
Defaults to ``platform.node()``.
system_version (`str`, optional):
"System version" to be sent when creating the initial connection.
Defaults to ``platform.uname().release`` with the first ``-`` and everything after it stripped.
Defaults to ``platform.system()``.
app_version (`str`, optional):
"App version" to be sent when creating the initial connection.
@ -193,7 +180,7 @@ class TelegramBaseClient(abc.ABC):
Defaults to `lang_code`.
loop (`asyncio.AbstractEventLoop`, optional):
Asyncio event loop to use. Defaults to `asyncio.get_running_loop()`.
Asyncio event loop to use. Defaults to `asyncio.get_event_loop()`.
This argument is ignored.
base_logger (`str` | `logging.Logger`, optional):
@ -201,29 +188,6 @@ class TelegramBaseClient(abc.ABC):
If a `str` is given, it'll be passed to `logging.getLogger()`. If a
`logging.Logger` is given, it'll be used directly. If something
else or nothing is given, the default logger will be used.
receive_updates (`bool`, optional):
Whether the client will receive updates or not. By default, updates
will be received from Telegram as they occur.
Turning this off means that Telegram will not send updates at all
so event handlers, conversations, and QR login will not work.
However, certain scripts don't need updates, so this will reduce
the amount of bandwidth used.
entity_cache_limit (`int`, optional):
How many users, chats and channels to keep in the in-memory cache
at most. This limit is checked against when processing updates.
When this limit is reached or exceeded, all entities that are not
required for update handling will be flushed to the session file.
Note that this implies that there is a lower bound to the amount
of entities that must be kept in memory.
Setting this limit too low will cause the library to attempt to
flush entities to the session file even if no entities can be
removed from the in-memory cache, which will degrade performance.
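Example
A minimal sketch, not part of the original docstring, assuming
``api_id`` and ``api_hash`` were obtained from https://my.telegram.org
and a Telethon version that supports ``receive_updates``:
.. code-block:: python
client = TelegramClient('session_name', api_id, api_hash, receive_updates=False)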
"""
# Current TelegramClient version
@ -237,44 +201,39 @@ class TelegramBaseClient(abc.ABC):
def __init__(
self: 'TelegramClient',
session: 'typing.Union[str, pathlib.Path, Session]',
session: 'typing.Union[str, Session]',
api_id: int,
api_hash: str,
*,
connection: 'typing.Type[Connection]' = ConnectionTcpFull,
use_ipv6: bool = False,
proxy: typing.Union[tuple, dict] = None,
local_addr: typing.Union[str, tuple] = None,
timeout: int = 10,
request_retries: int = 5,
connection_retries: int = 5,
connection_retries: int =5,
retry_delay: int = 1,
auto_reconnect: bool = True,
sequential_updates: bool = False,
flood_sleep_threshold: int = 60,
raise_last_call_error: bool = False,
device_model: str = None,
system_version: str = None,
app_version: str = None,
lang_code: str = 'en',
system_lang_code: str = 'en',
loop: asyncio.AbstractEventLoop = None,
base_logger: typing.Union[str, logging.Logger] = None,
receive_updates: bool = True,
catch_up: bool = False,
entity_cache_limit: int = 5000
):
base_logger: typing.Union[str, logging.Logger] = None):
if not api_id or not api_hash:
raise ValueError(
"Your API ID or Hash cannot be empty or None. "
"Refer to telethon.rtfd.io for more information.")
self._use_ipv6 = use_ipv6
self._loop = asyncio.get_event_loop()
if isinstance(base_logger, str):
base_logger = logging.getLogger(base_logger)
elif not isinstance(base_logger, logging.Logger):
base_logger = _base_log
base_logger = __default_log__
class _Loggers(dict):
def __missing__(self, key):
@ -286,9 +245,9 @@ class TelegramBaseClient(abc.ABC):
self._log = _Loggers()
# Determine what session object we have
if isinstance(session, (str, pathlib.Path)):
if isinstance(session, str) or session is None:
try:
session = SQLiteSession(str(session))
session = SQLiteSession(session)
except ImportError:
import warnings
warnings.warn(
@ -299,17 +258,24 @@ class TelegramBaseClient(abc.ABC):
'you use another session storage'
)
session = MemorySession()
elif session is None:
session = MemorySession()
elif not isinstance(session, Session):
raise TypeError(
'The given session must be a str or a Session instance.'
)
# ':' in session.server_address is True if it's an IPv6 address
if (not session.server_address or
(':' in session.server_address) != use_ipv6):
session.set_dc(
DEFAULT_DC_ID,
DEFAULT_IPV6_IP if self._use_ipv6 else DEFAULT_IPV4_IP,
DEFAULT_PORT
)
self.flood_sleep_threshold = flood_sleep_threshold
# TODO Use AsyncClassWrapper(session)
# ChatGetter and SenderGetter can use the in-memory _mb_entity_cache
# ChatGetter and SenderGetter can use the in-memory _entity_cache
# to avoid network access and the need for await in session files.
#
# The session files only wants the entities to persist
@ -317,6 +283,7 @@ class TelegramBaseClient(abc.ABC):
# TODO Session should probably return all cached
# info of entities, not just the input versions
self.session = session
self._entity_cache = EntityCache()
self.api_id = int(api_id)
self.api_hash = api_hash
@ -327,34 +294,21 @@ class TelegramBaseClient(abc.ABC):
# TODO A better fix is obviously avoiding the use of `sock_connect`
#
# See https://github.com/LonamiWebs/Telethon/issues/1337 for details.
if not callable(getattr(self.loop, 'sock_connect', None)):
if not callable(getattr(self._loop, 'sock_connect', None)):
raise TypeError(
'Event loop of type {} lacks `sock_connect`, which is needed to use proxies.\n\n'
'Change the event loop in use to use proxies:\n'
'# https://github.com/LonamiWebs/Telethon/issues/1337\n'
'import asyncio\n'
'asyncio.set_event_loop(asyncio.SelectorEventLoop())'.format(
self.loop.__class__.__name__
self._loop.__class__.__name__
)
)
if local_addr is not None:
if use_ipv6 is False and ':' in local_addr:
raise TypeError(
'A local IPv6 address must only be used with `use_ipv6=True`.'
)
elif use_ipv6 is True and ':' not in local_addr:
raise TypeError(
'`use_ipv6=True` must only be used with a local IPv6 address.'
)
self._raise_last_call_error = raise_last_call_error
self._request_retries = request_retries
self._connection_retries = connection_retries
self._retry_delay = retry_delay or 0
self._proxy = proxy
self._local_addr = local_addr
self._timeout = timeout
self._auto_reconnect = auto_reconnect
@ -366,25 +320,30 @@ class TelegramBaseClient(abc.ABC):
# Used on connection. Capture the variables in a lambda since
# exporting clients need to create this InvokeWithLayerRequest.
system = platform.uname()
self._init_with = lambda x: functions.InvokeWithLayerRequest(
LAYER, functions.InitConnectionRequest(
api_id=self.api_id,
device_model=device_model or system.system or 'Unknown',
system_version=system_version or system.release or '1.0',
app_version=app_version or self.__version__,
lang_code=lang_code,
system_lang_code=system_lang_code,
lang_pack='', # "langPacks are for official apps only"
query=x,
proxy=init_proxy
)
)
if system.machine in ('x86_64', 'AMD64'):
default_device_model = 'PC 64bit'
elif system.machine in ('i386','i686','x86'):
default_device_model = 'PC 32bit'
else:
default_device_model = system.machine
default_system_version = re.sub(r'-.+','',system.release)
self._init_request = functions.InitConnectionRequest(
api_id=self.api_id,
device_model=device_model or default_device_model or 'Unknown',
system_version=system_version or default_system_version or '1.0',
app_version=app_version or self.__version__,
lang_code=lang_code,
system_lang_code=system_lang_code,
lang_pack='', # "langPacks are for official apps only"
query=None,
proxy=init_proxy
self._sender = MTProtoSender(
self.session.auth_key,
loggers=self._log,
retries=self._connection_retries,
delay=self._retry_delay,
auto_reconnect=self._auto_reconnect,
connect_timeout=self._timeout,
auth_key_callback=self._auth_key_callback,
update_callback=self._handle_update,
auto_reconnect_callback=self._handle_auto_reconnect
)
# Remember flood-waited requests to avoid making them again
@ -393,21 +352,27 @@ class TelegramBaseClient(abc.ABC):
# Cache ``{dc_id: (_ExportState, MTProtoSender)}`` for all borrowed senders
self._borrowed_senders = {}
self._borrow_sender_lock = asyncio.Lock()
self._exported_sessions = {}
self._loop = None # only used as a sanity check
self._updates_error = None
self._updates_handle = None
self._keepalive_handle = None
self._last_request = time.time()
self._no_updates = not receive_updates
self._channel_pts = {}
# Used for non-sequential updates, in order to terminate all pending tasks on disconnect.
self._sequential_updates = sequential_updates
self._event_handler_tasks = set()
if sequential_updates:
self._updates_queue = asyncio.Queue()
self._dispatching_updates_queue = asyncio.Event()
else:
# Use a set of pending instead of a queue so we can properly
# terminate all pending updates on disconnect.
self._updates_queue = set()
self._dispatching_updates_queue = None
self._authorized = None # None = unknown, False = no, True = yes
# Update state (for catching up after a disconnection)
# TODO Get state from channels too
self._state_cache = StateCache(
self.session.get_update_state(0), self._log)
# Some further state for subclasses
self._event_builders = []
@ -432,29 +397,13 @@ class TelegramBaseClient(abc.ABC):
self._phone = None
self._tos = None
# Sometimes we need to know who we are, cache the self peer
self._self_input_peer = None
self._bot = None
# A place to store if channels are a megagroup or not (see `edit_admin`)
self._megagroup_cache = {}
# This is backported from v2 in a very ad-hoc way just to get proper update handling
self._catch_up = catch_up
self._updates_queue = asyncio.Queue()
self._message_box = MessageBox(self._log['messagebox'])
self._mb_entity_cache = MbEntityCache() # required for proper update handling (to know when to getDifference)
self._entity_cache_limit = entity_cache_limit
self._sender = MTProtoSender(
self.session.auth_key,
loggers=self._log,
retries=self._connection_retries,
delay=self._retry_delay,
auto_reconnect=self._auto_reconnect,
connect_timeout=self._timeout,
auth_key_callback=self._auth_key_callback,
updates_queue=self._updates_queue,
auto_reconnect_callback=self._handle_auto_reconnect
)
# endregion
# region Properties
@ -476,7 +425,7 @@ class TelegramBaseClient(abc.ABC):
# Join the task (wait for it to complete)
await task
"""
return helpers.get_running_loop()
return self._loop
@property
def disconnected(self: 'TelegramClient') -> asyncio.Future:
@ -529,86 +478,23 @@ class TelegramBaseClient(abc.ABC):
except OSError:
print('Failed to connect')
"""
if self.session is None:
raise ValueError('TelegramClient instance cannot be reused after logging out')
if self._loop is None:
self._loop = helpers.get_running_loop()
elif self._loop != helpers.get_running_loop():
raise RuntimeError('The asyncio event loop must not change after connection (see the FAQ for details)')
# ':' in session.server_address is True if it's an IPv6 address
if (not self.session.server_address or
(':' in self.session.server_address) != self._use_ipv6):
await utils.maybe_async(
self.session.set_dc(
DEFAULT_DC_ID,
DEFAULT_IPV6_IP if self._use_ipv6 else DEFAULT_IPV4_IP,
DEFAULT_PORT
)
)
await utils.maybe_async(self.session.save())
if not await self._sender.connect(self._connection(
self.session.server_address,
self.session.port,
self.session.dc_id,
loggers=self._log,
proxy=self._proxy,
local_addr=self._local_addr
proxy=self._proxy
)):
# We don't want to init or modify anything if we were already connected
return
self.session.auth_key = self._sender.auth_key
await utils.maybe_async(self.session.save())
self.session.save()
try:
# See comment when saving entities to understand this hack
self_entity = await utils.maybe_async(self.session.get_input_entity(0))
self_id = self_entity.access_hash
self_user = await utils.maybe_async(self.session.get_input_entity(self_id))
self._mb_entity_cache.set_self_user(self_id, None, self_user.access_hash)
except ValueError:
pass
await self._sender.send(self._init_with(
functions.help.GetConfigRequest()))
if self._catch_up:
ss = SessionState(0, 0, False, 0, 0, 0, 0, None)
cs = []
update_states = await utils.maybe_async(self.session.get_update_states())
for entity_id, state in update_states:
if entity_id == 0:
# TODO current session doesn't store self-user info but adding that is breaking on downstream session impls
ss = SessionState(0, 0, False, state.pts, state.qts, int(state.date.timestamp()), state.seq, None)
else:
cs.append(ChannelState(entity_id, state.pts))
self._message_box.load(ss, cs)
for state in cs:
try:
entity = await utils.maybe_async(self.session.get_input_entity(state.channel_id))
except ValueError:
self._log[__name__].warning(
'No access_hash in cache for channel %s, will not catch up', state.channel_id)
else:
self._mb_entity_cache.put(Entity(EntityType.CHANNEL, entity.channel_id, entity.access_hash))
self._init_request.query = functions.help.GetConfigRequest()
req = self._init_request
if self._no_updates:
req = functions.InvokeWithoutUpdatesRequest(req)
await self._sender.send(functions.InvokeWithLayerRequest(LAYER, req))
if self._message_box.is_empty():
me = await self.get_me()
if me:
await self._on_login(me) # also calls GetState to initialize the MessageBox
self._updates_handle = self.loop.create_task(self._update_loop())
self._keepalive_handle = self.loop.create_task(self._keepalive_loop())
self._updates_handle = self._loop.create_task(self._update_loop())
def is_connected(self: 'TelegramClient') -> bool:
"""
@ -633,27 +519,17 @@ class TelegramBaseClient(abc.ABC):
coroutine that you should await in your own code; otherwise
the loop is run until said coroutine completes.
Event handlers which are currently running will be cancelled before
this function returns (in order to properly clean-up their tasks).
In particular, this means that using ``disconnect`` in a handler
will cause code after the ``disconnect`` to never run. If this is
needed, consider spawning a separate task to do the remaining work.
Example
.. code-block:: python
# You don't need to use this if you used "with client"
await client.disconnect()
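# A hedged sketch, not in the original examples: when disconnecting from
# inside an event handler, spawn the remaining work as a separate task
# first, since code after the disconnect will not run (save_state is a
# hypothetical coroutine).
import asyncio
asyncio.create_task(save_state())
await client.disconnect()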
"""
if self.loop.is_running():
# Disconnect may be called from an event handler, which would
# cancel itself during itself and never actually complete the
# disconnection. Shield the task to prevent disconnect itself
# from being cancelled. See issue #3942 for more details.
return asyncio.shield(self.loop.create_task(self._disconnect_coro()))
if self._loop.is_running():
return self._disconnect_coro()
else:
try:
self.loop.run_until_complete(self._disconnect_coro())
self._loop.run_until_complete(self._disconnect_coro())
except RuntimeError:
# Python 3.5.x complains when called from
# `__aexit__` and there were pending updates with:
@ -662,90 +538,38 @@ class TelegramBaseClient(abc.ABC):
# However, it doesn't really make a lot of sense.
pass
def set_proxy(self: 'TelegramClient', proxy: typing.Union[tuple, dict]):
"""
Changes the proxy which will be used on the next (re)connection.
This method has no immediate effect if the client is currently connected.
The new proxy takes effect on the next reconnection attempt:
- on a call to `await client.connect()` (after a complete disconnect)
- on an auto-reconnect attempt (e.g., after the previous connection was lost)
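Example
.. code-block:: python
# A hedged sketch, not in the original docstring: switch to a SOCKS5
# proxy; it is only applied on the next (re)connection (assumes the
# optional PySocks dependency is installed).
import socks
client.set_proxy((socks.SOCKS5, '127.0.0.1', 9050))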
"""
init_proxy = None if not issubclass(self._connection, TcpMTProxy) else \
types.InputClientProxy(*self._connection.address_info(proxy))
self._init_request.proxy = init_proxy
self._proxy = proxy
# While `await client.connect()` passes new proxy on each new call,
# auto-reconnect attempts use already set up `_connection` inside
# the `_sender`, so the only way to change proxy between those
# is to directly inject parameters.
connection = getattr(self._sender, "_connection", None)
if connection:
if isinstance(connection, TcpMTProxy):
connection._ip = proxy[0]
connection._port = proxy[1]
else:
connection._proxy = proxy
async def _save_states_and_entities(self: 'TelegramClient'):
# As a hack to not need to change the session files, save ourselves with ``id=0`` and ``access_hash`` of our ``id``.
# This way it is possible to determine our own ID by querying for 0. However, whether we're a bot is not saved.
# Piggy-back on an arbitrary TL type with users and chats so the session can understand to read the entities.
# It doesn't matter if we put users in the list of chats.
if self._mb_entity_cache.self_id:
await utils.maybe_async(
self.session.process_entities(
types.contacts.ResolvedPeer(None, [types.InputPeerUser(0, self._mb_entity_cache.self_id)], [])
)
)
ss, cs = self._message_box.session_state()
await utils.maybe_async(self.session.set_update_state(0, types.updates.State(**ss, unread_count=0)))
now = datetime.datetime.now() # any datetime works; channels don't need it
for channel_id, pts in cs.items():
await utils.maybe_async(
self.session.set_update_state(
channel_id, types.updates.State(pts, 0, now, 0, unread_count=0)
)
)
async def _disconnect_coro(self: 'TelegramClient'):
if self.session is None:
return # already logged out and disconnected
await self._disconnect()
# Also clean-up all exported senders because we're done with them
async with self._borrow_sender_lock:
for state, sender in self._borrowed_senders.values():
# Note that we're not checking for `state.should_disconnect()`.
# If the user wants to disconnect the client, ALL connections
# to Telegram (including exported senders) should be closed.
#
# Disconnect should never raise, so there's no try/except.
await sender.disconnect()
# Can't use `mark_disconnected` because it may be borrowed.
state._connected = False
if state.should_disconnect():
# disconnect should never raise
await sender.disconnect()
# If any was borrowed
self._borrowed_senders.clear()
# trio's nurseries would handle this for us, but this is asyncio.
# All tasks spawned in the background should properly be terminated.
if self._event_handler_tasks:
for task in self._event_handler_tasks:
if self._dispatching_updates_queue is None and self._updates_queue:
for task in self._updates_queue:
task.cancel()
await asyncio.wait(self._event_handler_tasks)
self._event_handler_tasks.clear()
await asyncio.wait(self._updates_queue)
self._updates_queue.clear()
await self._save_states_and_entities()
pts, date = self._state_cache[None]
if pts and date:
self.session.set_update_state(0, types.updates.State(
pts=pts,
qts=0,
date=date,
seq=0,
unread_count=0
))
await utils.maybe_async(self.session.close())
self.session.close()
async def _disconnect(self: 'TelegramClient'):
"""
@ -756,8 +580,7 @@ class TelegramBaseClient(abc.ABC):
"""
await self._sender.disconnect()
await helpers._cancel(self._log[__name__],
updates_handle=self._updates_handle,
keepalive_handle=self._keepalive_handle)
updates_handle=self._updates_handle)
async def _switch_dc(self: 'TelegramClient', new_dc):
"""
@ -766,22 +589,22 @@ class TelegramBaseClient(abc.ABC):
self._log[__name__].info('Reconnecting to new data center %s', new_dc)
dc = await self._get_dc(new_dc)
await utils.maybe_async(self.session.set_dc(dc.id, dc.ip_address, dc.port))
self.session.set_dc(dc.id, dc.ip_address, dc.port)
# auth_key's are associated with a server, which has now changed
# so it's not valid anymore. Set to None to force recreating it.
self._sender.auth_key.key = None
self.session.auth_key = None
await utils.maybe_async(self.session.save())
self.session.save()
await self._disconnect()
return await self.connect()
async def _auth_key_callback(self: 'TelegramClient', auth_key):
def _auth_key_callback(self: 'TelegramClient', auth_key):
"""
Callback from the sender whenever it needed to generate a
new authorization key. This means we are not authorized.
"""
self.session.auth_key = auth_key
await utils.maybe_async(self.session.save())
self.session.save()
# endregion
@ -796,27 +619,13 @@ class TelegramBaseClient(abc.ABC):
if cdn and not self._cdn_config:
cls._cdn_config = await self(functions.help.GetCdnConfigRequest())
for pk in cls._cdn_config.public_keys:
if pk.dc_id == dc_id:
rsa.add_key(pk.public_key, old=False)
rsa.add_key(pk.public_key)
try:
return next(
dc for dc in cls._config.dc_options
if dc.id == dc_id
and bool(dc.ipv6) == self._use_ipv6 and bool(dc.cdn) == cdn
)
except StopIteration:
self._log[__name__].warning(
'Failed to get DC %s (cdn = %s) with use_ipv6 = %s; retrying ignoring IPv6 check',
dc_id, cdn, self._use_ipv6
)
try:
return next(
dc for dc in cls._config.dc_options
if dc.id == dc_id and bool(dc.cdn) == cdn
)
except StopIteration:
raise ValueError(f'Failed to get DC {dc_id} (cdn = {cdn})')
return next(
dc for dc in cls._config.dc_options
if dc.id == dc_id
and bool(dc.ipv6) == self._use_ipv6 and bool(dc.cdn) == cdn
)
async def _create_exported_sender(self: 'TelegramClient', dc_id):
"""
@ -836,13 +645,13 @@ class TelegramBaseClient(abc.ABC):
dc.port,
dc.id,
loggers=self._log,
proxy=self._proxy,
local_addr=self._local_addr
proxy=self._proxy
))
self._log[__name__].info('Exporting auth for new borrowed sender in %s', dc)
auth = await self(functions.auth.ExportAuthorizationRequest(dc_id))
self._init_request.query = functions.auth.ImportAuthorizationRequest(id=auth.id, bytes=auth.bytes)
req = functions.InvokeWithLayerRequest(LAYER, self._init_request)
req = self._init_with(functions.auth.ImportAuthorizationRequest(
id=auth.id, bytes=auth.bytes
))
await sender.send(req)
return sender
@ -871,8 +680,7 @@ class TelegramBaseClient(abc.ABC):
dc.port,
dc.id,
loggers=self._log,
proxy=self._proxy,
local_addr=self._local_addr
proxy=self._proxy
))
state.add_borrow()
@ -904,30 +712,28 @@ class TelegramBaseClient(abc.ABC):
async def _get_cdn_client(self: 'TelegramClient', cdn_redirect):
"""Similar to ._borrow_exported_client, but for CDNs"""
# TODO Implement
raise NotImplementedError
session = self._exported_sessions.get(cdn_redirect.dc_id)
if not session:
dc = await self._get_dc(cdn_redirect.dc_id, cdn=True)
session = await utils.maybe_async(self.session.clone())
await utils.maybe_async(session.set_dc(dc.id, dc.ip_address, dc.port))
session = self.session.clone()
await session.set_dc(dc.id, dc.ip_address, dc.port)
self._exported_sessions[cdn_redirect.dc_id] = session
self._log[__name__].info('Creating new CDN client')
client = self.__class__(
client = TelegramBaseClient(
session, self.api_id, self.api_hash,
proxy=self._proxy,
timeout=self._timeout,
loop=self.loop
proxy=self._sender.connection.conn.proxy,
timeout=self._sender.connection.get_timeout()
)
session.auth_key = self._sender.auth_key
await client._sender.connect(self._connection(
session.server_address,
session.port,
session.dc_id,
loggers=self._log,
proxy=self._proxy,
local_addr=self._local_addr
))
# This will make use of the new RSA keys for this specific CDN.
#
# We won't be calling GetConfigRequest because it's only called
# when needed by ._get_dc, and also it's static so it's likely
# set already. Avoid invoking non-CDN methods by not syncing updates.
client.connect(_sync_updates=False)
return client
# endregion
@ -949,17 +755,16 @@ class TelegramBaseClient(abc.ABC):
executed sequentially on the server. They run in arbitrary
order by default.
flood_sleep_threshold (`int` | `None`, optional):
The flood sleep threshold to use for this request. This overrides
the default value stored in
`client.flood_sleep_threshold <telethon.client.telegrambaseclient.TelegramBaseClient.flood_sleep_threshold>`
Returns:
The result of the request (often a `TLObject`) or a list of
results if more than one request was given.
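Example
.. code-block:: python
# A hedged sketch, not in the original docstring: invoke a raw request
# and allow automatic sleeps of up to five minutes on flood waits
# (uses the flood_sleep_threshold argument documented above).
config = await client(functions.help.GetConfigRequest(), flood_sleep_threshold=300)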
"""
raise NotImplementedError
@abc.abstractmethod
def _handle_update(self: 'TelegramClient', update):
raise NotImplementedError
@abc.abstractmethod
def _update_loop(self: 'TelegramClient'):
raise NotImplementedError
View File
@ -2,29 +2,17 @@ import asyncio
import inspect
import itertools
import random
import sys
import time
import traceback
import typing
import logging
import warnings
from collections import deque
import sqlite3
from .. import events, utils, errors
from ..events.common import EventBuilder, EventCommon
from ..tl import types, functions
from .._updates import GapError, PrematureEndReason
from ..helpers import get_running_loop
from ..version import __version__
if typing.TYPE_CHECKING:
from .telegramclient import TelegramClient
Callback = typing.Callable[[typing.Any], typing.Any]
class UpdateMethods:
# region Public methods
@ -33,34 +21,18 @@ class UpdateMethods:
try:
# Make a high-level request to notify that we want updates
await self(functions.updates.GetStateRequest())
result = await self.disconnected
if self._updates_error is not None:
raise self._updates_error
return result
return await self.disconnected
except KeyboardInterrupt:
pass
finally:
await self.disconnect()
async def set_receive_updates(self: 'TelegramClient', receive_updates):
"""
Change the value of `receive_updates`.
This is an `async` method, because in order for Telegram to start
sending updates again, a request must be made.
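Example
.. code-block:: python
# A hedged sketch, not in the original docstring: pause updates during
# a heavy batch job, then resume them afterwards.
await client.set_receive_updates(False)
...  # work that does not need updates
await client.set_receive_updates(True)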
"""
self._no_updates = not receive_updates
if receive_updates:
await self(functions.updates.GetStateRequest())
def run_until_disconnected(self: 'TelegramClient'):
"""
Runs the event loop until the library is disconnected.
It also notifies Telegram that we want to receive updates
as described in https://core.telegram.org/api/updates.
If an unexpected error occurs during update handling,
the client will disconnect and said error will be raised.
Manual disconnections can be made by calling `disconnect()
<telethon.client.telegrambaseclient.TelegramBaseClient.disconnect>`
@ -129,7 +101,7 @@ class UpdateMethods:
def add_event_handler(
self: 'TelegramClient',
callback: Callback,
callback: callable,
event: EventBuilder = None):
"""
Registers a new event handler callback.
@ -178,7 +150,7 @@ class UpdateMethods:
def remove_event_handler(
self: 'TelegramClient',
callback: Callback,
callback: callable,
event: EventBuilder = None) -> int:
"""
Inverse operation of `add_event_handler()`.
@ -216,7 +188,7 @@ class UpdateMethods:
return found
def list_event_handlers(self: 'TelegramClient')\
-> 'typing.Sequence[typing.Tuple[Callback, EventBuilder]]':
-> 'typing.Sequence[typing.Tuple[callable, EventBuilder]]':
"""
Lists all registered event handlers.
@ -249,240 +221,106 @@ class UpdateMethods:
await client.catch_up()
"""
await self._updates_queue.put(types.UpdatesTooLong())
pts, date = self._state_cache[None]
if not pts:
return
self.session.catching_up = True
try:
while True:
d = await self(functions.updates.GetDifferenceRequest(
pts, date, 0
))
if isinstance(d, (types.updates.DifferenceSlice,
types.updates.Difference)):
if isinstance(d, types.updates.Difference):
state = d.state
else:
state = d.intermediate_state
pts, date = state.pts, state.date
self._handle_update(types.Updates(
users=d.users,
chats=d.chats,
date=state.date,
seq=state.seq,
updates=d.other_updates + [
types.UpdateNewMessage(m, 0, 0)
for m in d.new_messages
]
))
# TODO Implement upper limit (max_pts)
# We don't want to fetch updates we already know about.
#
# We may still get duplicates because the Difference
# contains a lot of updates and presumably only has
# the state for the last one, but at least we don't
# unnecessarily fetch too many.
#
# updates.getDifference's pts_total_limit seems to mean
# "how many pts is the request allowed to return", and
# if there is more than that, it returns "too long" (so
# there would be duplicate updates since we know about
# some). This can be used to detect collisions (i.e.
# it would return an update we have already seen).
else:
if isinstance(d, types.updates.DifferenceEmpty):
date = d.date
elif isinstance(d, types.updates.DifferenceTooLong):
pts = d.pts
break
except (ConnectionError, asyncio.CancelledError):
pass
finally:
# TODO Save new pts to session
self._state_cache._pts_date = (pts, date)
self.session.catching_up = False
# endregion
# region Private methods
# It is important to not make _handle_update async because we rely on
# the order that the updates arrive in to update the pts and date to
# be always-increasing. There is also no need to make this async.
def _handle_update(self: 'TelegramClient', update):
self.session.process_entities(update)
self._entity_cache.add(update)
if isinstance(update, (types.Updates, types.UpdatesCombined)):
entities = {utils.get_peer_id(x): x for x in
itertools.chain(update.users, update.chats)}
for u in update.updates:
self._process_update(u, update.updates, entities=entities)
elif isinstance(update, types.UpdateShort):
self._process_update(update.update, None)
else:
self._process_update(update, None)
self._state_cache.update(update)
def _process_update(self: 'TelegramClient', update, others, entities=None):
update._entities = entities or {}
# This part is somewhat hot so we don't bother patching
# update with channel ID/its state. Instead we just pass
# arguments which is faster.
channel_id = self._state_cache.get_channel_id(update)
args = (update, others, channel_id, self._state_cache[channel_id])
if self._dispatching_updates_queue is None:
task = self._loop.create_task(self._dispatch_update(*args))
self._updates_queue.add(task)
task.add_done_callback(lambda _: self._updates_queue.discard(task))
else:
self._updates_queue.put_nowait(args)
if not self._dispatching_updates_queue.is_set():
self._dispatching_updates_queue.set()
self._loop.create_task(self._dispatch_queue_updates())
self._state_cache.update(update)
async def _update_loop(self: 'TelegramClient'):
# If the MessageBox is not empty, the account had to be logged-in to fill in its state.
# This flag is used to propagate the "you got logged-out" error up (but getting logged-out
# can only happen if it was once logged-in).
was_once_logged_in = self._authorized is True or not self._message_box.is_empty()
self._updates_error = None
try:
if self._catch_up:
# User wants to catch up as soon as the client is up and running,
# so this is the best place to do it.
await self.catch_up()
updates_to_dispatch = deque()
while self.is_connected():
if updates_to_dispatch:
if self._sequential_updates:
await self._dispatch_update(updates_to_dispatch.popleft())
else:
while updates_to_dispatch:
# TODO if _dispatch_update fails for whatever reason, it's not logged! this should be fixed
task = self.loop.create_task(self._dispatch_update(updates_to_dispatch.popleft()))
self._event_handler_tasks.add(task)
task.add_done_callback(self._event_handler_tasks.discard)
continue
if len(self._mb_entity_cache) >= self._entity_cache_limit:
self._log[__name__].info(
'In-memory entity cache limit reached (%s/%s), flushing to session',
len(self._mb_entity_cache),
self._entity_cache_limit
)
await self._save_states_and_entities()
self._mb_entity_cache.retain(lambda id: id == self._mb_entity_cache.self_id or id in self._message_box.map)
if len(self._mb_entity_cache) >= self._entity_cache_limit:
warnings.warn('in-memory entities exceed entity_cache_limit after flushing; consider setting a larger limit')
self._log[__name__].info(
'In-memory entity cache at %s/%s after flushing to session',
len(self._mb_entity_cache),
self._entity_cache_limit
)
get_diff = self._message_box.get_difference()
if get_diff:
self._log[__name__].debug('Getting difference for account updates')
try:
diff = await self(get_diff)
except (
errors.ServerError,
errors.TimedOutError,
errors.FloodWaitError,
ValueError
) as e:
# Telegram is having issues
self._log[__name__].info('Cannot get difference since Telegram is having issues: %s', type(e).__name__)
self._message_box.end_difference()
continue
except (errors.UnauthorizedError, errors.AuthKeyError) as e:
# Not logged in or broken authorization key, can't get difference
self._log[__name__].info('Cannot get difference since the account is not logged in: %s', type(e).__name__)
self._message_box.end_difference()
if was_once_logged_in:
self._updates_error = e
await self.disconnect()
break
continue
except (errors.TypeNotFoundError, sqlite3.OperationalError) as e:
# User is likely doing weird things with their account or session and Telegram gets confused as to what layer they use
self._log[__name__].warning('Cannot get difference since the account is likely misusing the session: %s', e)
self._message_box.end_difference()
self._updates_error = e
await self.disconnect()
break
except OSError as e:
# Network is likely down, but it's unclear for how long.
# If disconnect is called this task will be cancelled along with the sleep.
# If disconnect is not called, getting difference should be retried after a few seconds.
self._log[__name__].info('Cannot get difference since the network is down: %s: %s', type(e).__name__, e)
await asyncio.sleep(5)
continue
updates, users, chats = self._message_box.apply_difference(diff, self._mb_entity_cache)
if updates:
self._log[__name__].info('Got difference for account updates')
_preprocess_updates = await self._preprocess_updates(updates, users, chats)
updates_to_dispatch.extend(_preprocess_updates)
continue
get_diff = self._message_box.get_channel_difference(self._mb_entity_cache)
if get_diff:
self._log[__name__].debug('Getting difference for channel %s updates', get_diff.channel.channel_id)
try:
diff = await self(get_diff)
except (errors.UnauthorizedError, errors.AuthKeyError) as e:
# Not logged in or broken authorization key, can't get difference
self._log[__name__].warning(
'Cannot get difference for channel %s since the account is not logged in: %s',
get_diff.channel.channel_id, type(e).__name__
)
self._message_box.end_channel_difference(
get_diff,
PrematureEndReason.TEMPORARY_SERVER_ISSUES,
self._mb_entity_cache
)
if was_once_logged_in:
self._updates_error = e
await self.disconnect()
break
continue
except (errors.TypeNotFoundError, sqlite3.OperationalError) as e:
self._log[__name__].warning(
'Cannot get difference for channel %s since the account is likely misusing the session: %s',
get_diff.channel.channel_id, e
)
self._message_box.end_channel_difference(
get_diff,
PrematureEndReason.TEMPORARY_SERVER_ISSUES,
self._mb_entity_cache
)
self._updates_error = e
await self.disconnect()
break
except (
errors.PersistentTimestampOutdatedError,
errors.PersistentTimestampInvalidError,
errors.ServerError,
errors.TimedOutError,
errors.FloodWaitError,
ValueError
) as e:
# According to Telegram's docs:
# "Channel internal replication issues, try again later (treat this like an RPC_CALL_FAIL)."
# We can treat this as "empty difference" and not update the local pts.
# Then this same call will be retried when another gap is detected or timeout expires.
#
# Another option would be to literally treat this like an RPC_CALL_FAIL and retry after a few
# seconds, but if Telegram is having issues it's probably best to wait for it to send another
# update (hinting it may be okay now) and retry then.
#
# This is a bit hacky because MessageBox doesn't really have a way to "not update" the pts.
# Instead we manually extract the previously-known pts and use that.
#
# For PersistentTimestampInvalidError:
# Somehow our pts is either too new or the server does not know about this.
# We treat this as PersistentTimestampOutdatedError for now.
# TODO investigate why/when this happens and if this is the proper solution
self._log[__name__].warning(
'Getting difference for channel updates %s caused %s;'
' ending getting difference prematurely until server issues are resolved',
get_diff.channel.channel_id, type(e).__name__
)
self._message_box.end_channel_difference(
get_diff,
PrematureEndReason.TEMPORARY_SERVER_ISSUES,
self._mb_entity_cache
)
continue
except (errors.ChannelPrivateError, errors.ChannelInvalidError):
# Timeout triggered a get difference, but we have been banned in the channel since then.
# Because we can no longer fetch updates from this channel, we should stop keeping track
# of it entirely.
self._log[__name__].info(
'Account is now banned in %d so we can no longer fetch updates from it',
get_diff.channel.channel_id
)
self._message_box.end_channel_difference(
get_diff,
PrematureEndReason.BANNED,
self._mb_entity_cache
)
continue
except OSError as e:
self._log[__name__].info(
'Cannot get difference for channel %d since the network is down: %s: %s',
get_diff.channel.channel_id, type(e).__name__, e
)
await asyncio.sleep(5)
continue
updates, users, chats = self._message_box.apply_channel_difference(get_diff, diff, self._mb_entity_cache)
if updates:
self._log[__name__].info('Got difference for channel %d updates', get_diff.channel.channel_id)
_preprocess_updates = await self._preprocess_updates(updates, users, chats)
updates_to_dispatch.extend(_preprocess_updates)
continue
deadline = self._message_box.check_deadlines()
deadline_delay = deadline - get_running_loop().time()
if deadline_delay > 0:
# Don't bother sleeping and timing out if the delay is already 0 (pollutes the logs).
try:
updates = await asyncio.wait_for(self._updates_queue.get(), deadline_delay)
except asyncio.TimeoutError:
self._log[__name__].debug('Timeout waiting for updates expired')
continue
else:
continue
processed = []
try:
users, chats = self._message_box.process_updates(updates, self._mb_entity_cache, processed)
except GapError:
continue # get(_channel)_difference will start returning requests
_preprocess_updates = await self._preprocess_updates(processed, users, chats)
updates_to_dispatch.extend(_preprocess_updates)
except asyncio.CancelledError:
pass
except Exception as e:
self._log[__name__].exception(f'Fatal error handling updates (this is a bug in Telethon v{__version__}, please report it)')
self._updates_error = e
await self.disconnect()
async def _preprocess_updates(self, updates, users, chats):
self._mb_entity_cache.extend(users, chats)
await utils.maybe_async(self.session.process_entities(types.contacts.ResolvedPeer(None, users, chats)))
entities = {utils.get_peer_id(x): x
for x in itertools.chain(users, chats)}
for u in updates:
u._entities = entities
return updates
async def _keepalive_loop(self: 'TelegramClient'):
# Pings' ID don't really need to be secure, just "random"
rnd = lambda: random.randrange(-2**63, 2**63)
while self.is_connected():
@ -510,7 +348,7 @@ class UpdateMethods:
# We also don't really care about their result.
# Just send them periodically.
try:
self._sender._keepalive_ping(rnd())
self._sender.send(functions.PingRequest(rnd()))
except (ConnectionError, asyncio.CancelledError):
return
@ -518,25 +356,56 @@ class UpdateMethods:
# inserted because this is a rather expensive operation
# (default's sqlite3 takes ~0.1s to commit changes). Do
# it every minute instead. No-op if there's nothing new.
await self._save_states_and_entities()
self.session.save()
await utils.maybe_async(self.session.save())
# We need to send some content-related request at least hourly
# for Telegram to keep delivering updates, otherwise they will
# just stop even if we're connected. Do so every 30 minutes.
#
# TODO Call getDifference instead since it's more relevant
if time.time() - self._last_request > 30 * 60:
if not await self.is_user_authorized():
# What can be the user doing for so
# long without being logged in...?
continue
async def _dispatch_update(self: 'TelegramClient', update):
# TODO only used for AlbumHack, and MessageBox is not really designed for this
others = None
try:
await self(functions.updates.GetStateRequest())
except (ConnectionError, asyncio.CancelledError):
return
if not self._mb_entity_cache.self_id:
async def _dispatch_queue_updates(self: 'TelegramClient'):
while not self._updates_queue.empty():
await self._dispatch_update(*self._updates_queue.get_nowait())
self._dispatching_updates_queue.clear()
async def _dispatch_update(self: 'TelegramClient', update, others, channel_id, pts_date):
if not self._entity_cache.ensure_cached(update):
# We could add a lock to not fetch the same pts twice if we are
# already fetching it. However this does not happen in practice,
# which makes sense, because different updates have different pts.
if self._state_cache.update(update, check_only=True):
# If the update doesn't have pts, fetching won't do anything.
# For example, UpdateUserStatus or UpdateChatUserTyping.
try:
await self._get_difference(update, channel_id, pts_date)
except OSError:
pass # We were disconnected, that's okay
except errors.RPCError:
# There's a high chance the request fails because we lack
# the channel. Because these "happen sporadically" (#1428)
# we should be okay (no flood waits) even if more occur.
pass
if not self._self_input_peer:
# Some updates require our own ID, so we must make sure
# that the event builder has offline access to it. Calling
# `get_me()` will cache it under `self._mb_entity_cache`.
# `get_me()` will cache it under `self._self_input_peer`.
#
# It will return `None` if we haven't logged in yet which is
# fine, we will just retry next time anyway.
try:
await self.get_me(input_peer=True)
except OSError:
pass # might not have connection
await self.get_me(input_peer=True)
built = EventBuilderDict(self, update, others)
for conv_set in self._conversations.values():
@ -587,7 +456,8 @@ class UpdateMethods:
except Exception as e:
if not isinstance(e, asyncio.CancelledError) or self.is_connected():
name = getattr(callback, '__name__', repr(callback))
self._log[__name__].exception('Unhandled exception on %s', name)
self._log[__name__].exception('Unhandled exception on %s',
name)
async def _dispatch_event(self: 'TelegramClient', event):
"""
@ -597,8 +467,6 @@ class UpdateMethods:
# the name of speed; we don't want to make it worse for all updates
# just because albums may need it.
for builder, callback in self._event_builders:
if isinstance(builder, events.Raw):
continue
if not isinstance(event, builder.Event):
continue
@ -628,7 +496,64 @@ class UpdateMethods:
except Exception as e:
if not isinstance(e, asyncio.CancelledError) or self.is_connected():
name = getattr(callback, '__name__', repr(callback))
self._log[__name__].exception('Unhandled exception on %s', name)
self._log[__name__].exception('Unhandled exception on %s',
name)
async def _get_difference(self: 'TelegramClient', update, channel_id, pts_date):
"""
Get the difference for this `channel_id` if any, then load entities.
Calls :tl:`updates.getDifference`, which fills the entities cache
(always done by `__call__`) and lets us know about the full entities.
"""
# Fetch since the last known pts/date before this update arrived,
# in order to fetch this update at full, including its entities.
self._log[__name__].debug('Getting difference for entities '
'for %r', update.__class__)
if channel_id:
try:
where = await self.get_input_entity(channel_id)
except ValueError:
# There's a high chance that this fails, since
# we are getting the difference to fetch entities.
return
if not pts_date:
# First-time, can't get difference. Get pts instead.
result = await self(functions.channels.GetFullChannelRequest(
utils.get_input_channel(where)
))
self._state_cache[channel_id] = result.full_chat.pts
return
result = await self(functions.updates.GetChannelDifferenceRequest(
channel=where,
filter=types.ChannelMessagesFilterEmpty(),
pts=pts_date, # just pts
limit=100,
force=True
))
else:
if not pts_date[0]:
# First-time, can't get difference. Get pts instead.
result = await self(functions.updates.GetStateRequest())
self._state_cache[None] = result.pts, result.date
return
result = await self(functions.updates.GetDifferenceRequest(
pts=pts_date[0],
date=pts_date[1],
qts=0
))
if isinstance(result, (types.updates.Difference,
types.updates.DifferenceSlice,
types.updates.ChannelDifference,
types.updates.ChannelDifferenceTooLong)):
update._entities.update({
utils.get_peer_id(x): x for x in
itertools.chain(result.users, result.chats)
})
async def _handle_auto_reconnect(self: 'TelegramClient'):
# TODO Catch-up
@ -670,8 +595,8 @@ class UpdateMethods:
self._log[__name__].warning('Failed to get missed updates after '
'reconnect: %r', e)
except Exception:
self._log[__name__].exception(
'Unhandled exception while getting update difference after reconnect')
self._log[__name__].exception('Unhandled exception while getting '
'update difference after reconnect')
# endregion
@ -689,8 +614,15 @@ class EventBuilderDict:
try:
return self.__dict__[builder]
except KeyError:
# Updates may arrive before login (like updateLoginToken) and we
# won't have our self ID yet (anyway only new messages need it).
self_id = (
self.client._self_input_peer.user_id
if self.client._self_input_peer
else None
)
event = self.__dict__[builder] = builder.build(
self.update, self.others, self.client._self_id)
self.update, self.others, self_id)
if isinstance(event, EventCommon):
event.original_update = self.update

View File

@ -18,6 +18,7 @@ try:
except ImportError:
PIL = None
if typing.TYPE_CHECKING:
from .telegramclient import TelegramClient
@ -35,7 +36,7 @@ class _CacheType:
def _resize_photo_if_needed(
file, is_image, width=2560, height=2560, background=(255, 255, 255)):
file, is_image, width=1280, height=1280, background=(255, 255, 255)):
# https://github.com/telegramdesktop/tdesktop/blob/12905f0dcb9d513378e7db11989455a1b764ef75/Telegram/SourceFiles/boxes/photo_crop_box.cpp#L254
if (not is_image
@ -46,64 +47,40 @@ def _resize_photo_if_needed(
if isinstance(file, bytes):
file = io.BytesIO(file)
if isinstance(file, io.IOBase):
# Pillow seeks to 0 unconditionally later anyway
old_pos = file.tell()
file.seek(0, io.SEEK_END)
before = file.tell()
elif isinstance(file, str) and os.path.exists(file):
# Check if file exists as a path and if so, get its size on disk
before = os.path.getsize(file)
else:
# Would be weird...
before = None
before = file.tell() if isinstance(file, io.IOBase) else None
try:
# Don't use a `with` block for `image`, or `file` would be closed.
# See https://github.com/LonamiWebs/Telethon/issues/1121 for more.
image = PIL.Image.open(file)
try:
kwargs = {'exif': image.info['exif']}
except KeyError:
kwargs = {}
if image.width <= width and image.height <= height:
return file
if image.mode == 'RGB':
# Check if the image is within the acceptable bounds; if so, also check that it is at or below 10 MB (assume it is not if the size is None or 0)
if image.width <= width and image.height <= height and (before <= 10000000 if before else False):
return file
image.thumbnail((width, height), PIL.Image.ANTIALIAS)
# If the image is already RGB, don't convert it
# certain modes such as 'P' have no alpha index but can't be saved as JPEG directly
image.thumbnail((width, height), PIL.Image.LANCZOS)
alpha_index = image.mode.find('A')
if alpha_index == -1:
# If the image mode doesn't have alpha
# channel then don't bother masking it away.
result = image
else:
# We could save the resized image with the original format, but
# JPEG often compresses better -> smaller size -> faster upload
# We need to mask away the alpha channel ([3]), since otherwise
# IOError is raised when trying to save alpha channels in JPEG.
image.thumbnail((width, height), PIL.Image.LANCZOS)
result = PIL.Image.new('RGB', image.size, background)
mask = None
if image.has_transparency_data:
if image.mode == 'RGBA':
mask = image.getchannel('A')
else:
mask = image.convert('RGBA').getchannel('A')
result.paste(image, mask=mask)
result.paste(image, mask=image.split()[alpha_index])
buffer = io.BytesIO()
result.save(buffer, 'JPEG', progressive=True, **kwargs)
result.save(buffer, 'JPEG')
buffer.seek(0)
buffer.name = 'a.jpg'
return buffer
except IOError:
return file
finally:
# The original position might matter
if isinstance(file, io.IOBase):
file.seek(old_pos)
if before is not None:
file.seek(before, io.SEEK_SET)
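
For reference, the resizing helper above boils down to roughly the following standalone Pillow snippet. This is a simplified sketch under the newer branch's 2560px bound; the input path is a placeholder and error handling is omitted.

```python
# Simplified sketch of the photo-resizing logic above (assumes Pillow is
# installed; the input path is a placeholder).
import io
from PIL import Image

def resize_for_telegram(path, bound=2560, background=(255, 255, 255)):
    image = Image.open(path)
    if image.width <= bound and image.height <= bound:
        return open(path, 'rb')                      # small enough, send as-is
    image.thumbnail((bound, bound), Image.LANCZOS)   # keep the aspect ratio
    if image.mode != 'RGB':
        # JPEG cannot store alpha, so paste onto an opaque background first
        rgba = image.convert('RGBA')
        flat = Image.new('RGB', image.size, background)
        flat.paste(rgba, mask=rgba.getchannel('A'))
        image = flat
    buffer = io.BytesIO()
    image.save(buffer, 'JPEG', progressive=True)
    buffer.seek(0)
    buffer.name = 'a.jpg'   # Telegram wants a name with a .jpg extension
    return buffer
```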
class UploadMethods:
@ -117,7 +94,6 @@ class UploadMethods:
*,
caption: typing.Union[str, typing.Sequence[str]] = None,
force_document: bool = False,
mime_type: str = None,
file_size: int = None,
clear_draft: bool = False,
progress_callback: 'hints.ProgressCallback' = None,
@ -126,24 +102,13 @@ class UploadMethods:
thumb: 'hints.FileLike' = None,
allow_cache: bool = True,
parse_mode: str = (),
formatting_entities: typing.Optional[
typing.Union[
typing.List[types.TypeMessageEntity], typing.List[typing.List[types.TypeMessageEntity]]
]
] = None,
voice_note: bool = False,
video_note: bool = False,
buttons: typing.Optional['hints.MarkupLike'] = None,
buttons: 'hints.MarkupLike' = None,
silent: bool = None,
background: bool = None,
supports_streaming: bool = False,
schedule: 'hints.DateLike' = None,
comment_to: 'typing.Union[int, types.Message]' = None,
ttl: int = None,
nosound_video: bool = None,
send_as: typing.Optional['hints.EntityLike'] = None,
message_effect_id: typing.Optional[int] = None,
**kwargs) -> typing.Union[typing.List[typing.Any], typing.Any]:
**kwargs) -> 'types.Message':
"""
Sends message with the given file to the specified entity.
@ -210,13 +175,6 @@ class UploadMethods:
the extension of an image file or a video file, it will be
sent as such. Otherwise always as a document.
mime_type (`str`, optional):
Custom mime type to use for the file to be sent (for example,
``audio/mpeg``, ``audio/x-vorbis+ogg``, etc.).
It can change the type of files displayed.
If not set to any value, the mime type will be determined
automatically based on the file's extension.
file_size (`int`, optional):
The size of the file to be uploaded if it needs to be uploaded,
which will be determined automatically if not specified.
@ -242,14 +200,9 @@ class UploadMethods:
Optional JPEG thumbnail (for documents). **Telegram will
ignore this parameter** unless you pass a ``.jpg`` file!
The file must also be small in dimensions and in disk size.
Successful thumbnails were files below 20kB and 320x320px.
The file must also be small in dimensions and in-disk size.
Successful thumbnails were files below 20kb and 200x200px.
Width/height and dimensions/size ratios may be important.
For Telegram to accept a thumbnail, you must provide the
dimensions of the underlying media through ``attributes=``
with :tl:`DocumentAttributesVideo` or by installing the
optional ``hachoir`` dependency.
allow_cache (`bool`, optional):
This parameter currently does nothing, but is kept for
@ -262,13 +215,6 @@ class UploadMethods:
property for allowed values. Markdown parsing will be used by
default.
formatting_entities (`list`, optional):
Optional formatting entities for the sent media message. When sending an album,
`formatting_entities` can be a list of lists, where each inner list contains
`types.TypeMessageEntity`. Each inner list will be assigned to the corresponding
file in a pairwise manner with the caption. If provided, the ``parse_mode``
parameter will be ignored.
voice_note (`bool`, optional):
If `True` the audio will be sent as a voice note.
@ -288,9 +234,6 @@ class UploadMethods:
the person has the chat muted). Set it to `True` to alter
this behaviour.
background (`bool`, optional):
Whether the message should be sent in the background.
supports_streaming (`bool`, optional):
Whether the sent video supports streaming or not. Note that
Telegram only recognizes as streamable some formats like MP4,
@ -303,45 +246,6 @@ class UploadMethods:
it will be scheduled to be automatically sent at a later
time.
comment_to (`int` | `Message <telethon.tl.custom.message.Message>`, optional):
Similar to ``reply_to``, but replies in the linked group of a
broadcast channel instead (effectively leaving a "comment to"
the specified message).
This parameter takes precedence over ``reply_to``. If there is
no linked chat, `telethon.errors.MsgIdInvalidError` is raised.
ttl (`int`, optional):
The Time-To-Live of the file (also known as "self-destruct timer"
or "self-destructing media"). If set, files can only be viewed for
a short period of time before they disappear from the message
history automatically.
The value must be at least 1 second, and at most 60 seconds,
otherwise Telegram will ignore this parameter.
Not all types of media can be used with this parameter, such
as text documents, which will fail with ``TtlMediaInvalidError``.
nosound_video (`bool`, optional):
Only applicable when sending a video file without an audio
track. If set to ``True``, the video will be displayed in
Telegram as a video. If set to ``False``, Telegram will attempt
to display the video as an animated gif. (It may still display
as a video due to other factors.) The value is ignored if set
on non-video files. This is set to ``True`` for albums, as gifs
cannot be sent in albums.
send_as (`entity`):
Unique identifier (int) or username (str) of the chat or channel to send the message as.
You can use this to send the message on behalf of a chat or channel where you have appropriate permissions.
Use the GetSendAs request to obtain the list of message sender identifiers that can be used to send messages in the chat.
This setting applies to the current message and will remain effective for future messages unless explicitly changed.
To set this behavior permanently for all messages, use SaveDefaultSendAs.
message_effect_id (`int`, optional):
Unique identifier of the message effect to be added to the message; for private chats only
Returns
The `Message <telethon.tl.custom.message.Message>` (or messages)
containing the sent file, or messages if a list of them was passed.
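
As a usage sketch of the behaviour described above (chat name, file paths and API credentials are placeholders; only keyword arguments present on both sides of this comparison are used):

```python
# Hedged usage sketch for send_file; chat name and file paths are placeholders.
from telethon.sync import TelegramClient

with TelegramClient('session', 12345, '0123456789abcdef0123456789abcdef') as client:
    # Single document with a caption
    client.send_file('me', '/tmp/report.pdf', caption='Monthly report',
                     force_document=True)

    # A list of images with per-file captions is sent as albums of up to 10
    client.send_file('me', ['/tmp/a.jpg', '/tmp/b.jpg'],
                     caption=['first photo', 'second photo'])
```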
@ -399,72 +303,73 @@ class UploadMethods:
if not caption:
caption = ''
if not formatting_entities:
formatting_entities = []
entity = await self.get_input_entity(entity)
if comment_to is not None:
entity, reply_to = await self._get_comment_data(entity, comment_to)
else:
reply_to = utils.get_message_id(reply_to)
# First check if the user passed an iterable, in which case
# we may want to send grouped.
# we may want to send as an album if all are photo files.
if utils.is_list_like(file):
sent_count = 0
used_callback = None if not progress_callback else (
lambda s, t: progress_callback(sent_count + s, len(file))
)
media_captions = []
document_captions = []
if utils.is_list_like(caption):
captions = caption
else:
captions = [caption]
# Check that formatting_entities list is valid
if all(utils.is_list_like(obj) for obj in formatting_entities):
formatting_entities = formatting_entities
elif utils.is_list_like(formatting_entities):
formatting_entities = [formatting_entities]
# TODO Fix progress_callback
media = []
if force_document:
documents = file
else:
raise TypeError('The formatting_entities argument must be a list or a sequence of lists')
# Check that all entities in all lists are of the correct type
if not all(isinstance(ent, types.TypeMessageEntity) for sublist in formatting_entities for ent in sublist):
raise TypeError('All entities must be instances of <types.TypeMessageEntity>')
documents = []
for doc, cap in itertools.zip_longest(file, captions):
if utils.is_image(doc) or utils.is_video(doc):
media.append(doc)
media_captions.append(cap)
else:
documents.append(doc)
document_captions.append(cap)
result = []
while file:
while media:
result += await self._send_album(
entity, file[:10], caption=captions[:10], formatting_entities=formatting_entities[:10],
progress_callback=used_callback, reply_to=reply_to,
entity, media[:10], caption=media_captions[:10],
progress_callback=progress_callback, reply_to=reply_to,
parse_mode=parse_mode, silent=silent, schedule=schedule,
supports_streaming=supports_streaming, clear_draft=clear_draft,
force_document=force_document, background=background,
send_as=send_as, message_effect_id=message_effect_id
supports_streaming=supports_streaming, clear_draft=clear_draft
)
file = file[10:]
captions = captions[10:]
formatting_entities = formatting_entities[10:]
sent_count += 10
media = media[10:]
media_captions = media_captions[10:]
for doc, cap in zip(documents, captions):
result.append(await self.send_file(
entity, doc, allow_cache=allow_cache,
caption=cap, force_document=force_document,
progress_callback=progress_callback, reply_to=reply_to,
attributes=attributes, thumb=thumb, voice_note=voice_note,
video_note=video_note, buttons=buttons, silent=silent,
supports_streaming=supports_streaming, schedule=schedule,
clear_draft=clear_draft,
**kwargs
))
return result
if formatting_entities:
msg_entities = formatting_entities
entity = await self.get_input_entity(entity)
reply_to = utils.get_message_id(reply_to)
# Not document since it's subject to change.
# Needed when a Message is passed to send_message and it has media.
if 'entities' in kwargs:
msg_entities = kwargs['entities']
else:
caption, msg_entities =\
await self._parse_message_text(caption, parse_mode)
file_handle, media, image = await self._file_to_media(
file, force_document=force_document,
mime_type=mime_type,
file_size=file_size,
progress_callback=progress_callback,
attributes=attributes, allow_cache=allow_cache, thumb=thumb,
attributes=attributes, allow_cache=allow_cache, thumb=thumb,
voice_note=voice_note, video_note=video_note,
supports_streaming=supports_streaming, ttl=ttl,
nosound_video=nosound_video,
supports_streaming=supports_streaming
)
# e.g. invalid cast from :tl:`MessageMediaWebPage`
@ -472,25 +377,17 @@ class UploadMethods:
raise TypeError('Cannot use {!r} as file'.format(file))
markup = self.build_reply_markup(buttons)
reply_to = None if reply_to is None else types.InputReplyToMessage(reply_to)
request = functions.messages.SendMediaRequest(
entity, media, reply_to=reply_to, message=caption,
entity, media, reply_to_msg_id=reply_to, message=caption,
entities=msg_entities, reply_markup=markup, silent=silent,
schedule_date=schedule, clear_draft=clear_draft,
background=background,
send_as=await self.get_input_entity(send_as) if send_as else None,
effect=message_effect_id
schedule_date=schedule, clear_draft=clear_draft
)
return self._get_response_message(request, await self(request), entity)
async def _send_album(self: 'TelegramClient', entity, files, caption='',
formatting_entities=None,
progress_callback=None, reply_to=None,
parse_mode=(), silent=None, schedule=None,
supports_streaming=None, clear_draft=None,
force_document=False, background=None, ttl=None,
send_as: typing.Optional['hints.EntityLike'] = None,
message_effect_id: typing.Optional[int] = None):
supports_streaming=None, clear_draft=None):
"""Specialized version of .send_file for albums"""
# We don't care if the user wants to avoid cache, we will use it
# anyway. Why? The cached version will be exactly the same thing
@ -498,51 +395,35 @@ class UploadMethods:
# cache only makes a difference for documents where the user may
# want the attributes used on them to change.
#
# In theory documents can be sent inside the albums, but they appear
# In theory documents can be sent inside the albums but they appear
# as different messages (not inside the album), and the logic to set
# the attributes/avoid cache is already written in .send_file().
entity = await self.get_input_entity(entity)
if not utils.is_list_like(caption):
caption = (caption,)
if not all(isinstance(obj, list) for obj in formatting_entities):
formatting_entities = (formatting_entities,)
captions = []
# If the formatting_entities argument is provided, we don't use parse_mode
if formatting_entities:
# Pop from the end (so reverse)
capt_with_ent = itertools.zip_longest(reversed(caption), reversed(formatting_entities), fillvalue=None)
for msg_caption, msg_entities in capt_with_ent:
captions.append((msg_caption, msg_entities))
else:
for c in reversed(caption): # Pop from the end (so reverse)
captions.append(await self._parse_message_text(c or '', parse_mode))
for c in reversed(caption): # Pop from the end (so reverse)
captions.append(await self._parse_message_text(c or '', parse_mode))
reply_to = utils.get_message_id(reply_to)
used_callback = None if not progress_callback else (
# use an integer when sent matches total, to easily determine a file has been fully sent
lambda s, t: progress_callback(sent_count + 1 if s == t else sent_count + s / t, len(files))
)
# Need to upload the media first, but only if they're not cached yet
media = []
for sent_count, file in enumerate(files):
for file in files:
# Albums want :tl:`InputMedia` which, in theory, includes
# :tl:`InputMediaUploadedPhoto`. However, using that will
# :tl:`InputMediaUploadedPhoto`. However using that will
# make it `raise MediaInvalidError`, so we need to upload
# it as media and then convert that to :tl:`InputMediaPhoto`.
fh, fm, _ = await self._file_to_media(
file, supports_streaming=supports_streaming,
force_document=force_document, ttl=ttl,
progress_callback=used_callback, nosound_video=True)
if isinstance(fm, (types.InputMediaUploadedPhoto, types.InputMediaPhotoExternal)):
file, supports_streaming=supports_streaming)
if isinstance(fm, types.InputMediaUploadedPhoto):
r = await self(functions.messages.UploadMediaRequest(
entity, media=fm
))
fm = utils.get_input_media(r.photo)
elif isinstance(fm, (types.InputMediaUploadedDocument, types.InputMediaDocumentExternal)):
elif isinstance(fm, types.InputMediaUploadedDocument):
r = await self(functions.messages.UploadMediaRequest(
entity, media=fm
))
@ -563,11 +444,8 @@ class UploadMethods:
# Now we can construct the multi-media request
request = functions.messages.SendMultiMediaRequest(
entity, reply_to=None if reply_to is None else types.InputReplyToMessage(reply_to), multi_media=media,
silent=silent, schedule_date=schedule, clear_draft=clear_draft,
background=background,
send_as=await self.get_input_entity(send_as) if send_as else None,
effect=message_effect_id
entity, reply_to_msg_id=reply_to, multi_media=media,
silent=silent, schedule_date=schedule, clear_draft=clear_draft
)
result = await self(request)
@ -638,13 +516,6 @@ class UploadMethods:
A callback function accepting two parameters:
``(sent bytes, total)``.
When sending an album, the callback will receive a number
between 0 and the amount of files as the "sent" parameter,
and the amount of files as the "total". Note that the first
parameter will be a floating point number to indicate progress
within a file (e.g. ``2.5`` means it has sent 50% of the third
file, because it's between 2 and 3).
Returns
:tl:`InputFileBig` if the file size is larger than 10MB,
`InputSizedFile <telethon.tl.custom.inputsizedfile.InputSizedFile>`
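
A small sketch of a progress callback matching the contract described above. Note that the per-file album semantics are documented only on one side of this comparison; for a plain upload the callback simply receives bytes sent and total bytes.

```python
# Sketch of a progress callback for upload_file/send_file. For plain uploads
# it receives (sent_bytes, total_bytes); per the newer docstring, album
# uploads instead report (files_sent_so_far, total_files).
def report_progress(sent, total):
    percentage = sent / total * 100 if total else 0
    print(f'\rUploaded {sent} / {total} ({percentage:.1f}%)', end='')

# e.g. await client.upload_file('/tmp/video.mp4', progress_callback=report_progress)
```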
@ -669,43 +540,83 @@ class UploadMethods:
if isinstance(file, (types.InputFile, types.InputFileBig)):
return file # Already uploaded
if not file_name and getattr(file, 'name', None):
file_name = file.name
if file_size is not None:
pass  # do nothing as it's already known
elif isinstance(file, str):
file_size = os.path.getsize(file)
stream = open(file, 'rb')
close_stream = True
elif isinstance(file, bytes):
file_size = len(file)
stream = io.BytesIO(file)
close_stream = True
else:
if not callable(getattr(file, 'read', None)):
raise TypeError('file description should have a `read` method')
if callable(getattr(file, 'seekable', None)):
seekable = await helpers._maybe_await(file.seekable())
else:
seekable = False
if seekable:
pos = await helpers._maybe_await(file.tell())
await helpers._maybe_await(file.seek(0, os.SEEK_END))
file_size = await helpers._maybe_await(file.tell())
await helpers._maybe_await(file.seek(pos, os.SEEK_SET))
stream = file
close_stream = False
else:
self._log[__name__].warning(
'Could not determine file size beforehand so the entire '
'file will be read in-memory')
data = await helpers._maybe_await(file.read())
stream = io.BytesIO(data)
close_stream = True
file_size = len(data)
# File will now either be a string or bytes
if not part_size_kb:
part_size_kb = utils.get_appropriated_part_size(file_size)
if part_size_kb > 512:
raise ValueError('The part size must be less or equal to 512KB')
part_size = int(part_size_kb * 1024)
if part_size % 1024 != 0:
raise ValueError(
'The part size must be evenly divisible by 1024')
# Set a default file name if None was specified
file_id = helpers.generate_random_long()
if not file_name:
if isinstance(file, str):
file_name = os.path.basename(file)
else:
file_name = str(file_id)
# If the file name lacks extension, add it if possible.
# Else Telegram complains with `PHOTO_EXT_INVALID_ERROR`
# even if the uploaded image is indeed a photo.
if not os.path.splitext(file_name)[-1]:
file_name += utils._get_extension(file)
# Determine whether the file is too big (over 10MB) or not
# Telegram does make a distinction between smaller and larger files
is_big = file_size > 10 * 1024 * 1024
hash_md5 = hashlib.md5()
part_count = (file_size + part_size - 1) // part_size
self._log[__name__].info('Uploading file of %d bytes in %d chunks of %d',
file_size, part_count, part_size)
pos = 0
async with helpers._FileStream(file, file_size=file_size) as stream:
# Opening the stream will determine the correct file size
file_size = stream.file_size
if not part_size_kb:
part_size_kb = utils.get_appropriated_part_size(file_size)
if part_size_kb > 512:
raise ValueError('The part size must be less or equal to 512KB')
part_size = int(part_size_kb * 1024)
if part_size % 1024 != 0:
raise ValueError(
'The part size must be evenly divisible by 1024')
# Set a default file name if None was specified
file_id = helpers.generate_random_long()
if not file_name:
file_name = stream.name or str(file_id)
# If the file name lacks extension, add it if possible.
# Else Telegram complains with `PHOTO_EXT_INVALID_ERROR`
# even if the uploaded image is indeed a photo.
if not os.path.splitext(file_name)[-1]:
file_name += utils._get_extension(stream)
# Determine whether the file is too big (over 10MB) or not
# Telegram does make a distinction between smaller and larger files
is_big = file_size > 10 * 1024 * 1024
hash_md5 = hashlib.md5()
part_count = (file_size + part_size - 1) // part_size
self._log[__name__].info('Uploading file of %d bytes in %d chunks of %d',
file_size, part_count, part_size)
pos = 0
try:
for part_index in range(part_count):
# Read the file in chunks of size part_size
part = await helpers._maybe_await(stream.read(part_size))
@ -724,16 +635,16 @@ class UploadMethods:
pos += len(part)
# Encryption part if needed
if key and iv:
part = AES.encrypt_ige(part, key, iv)
if not is_big:
# Bit odd that MD5 is only needed for small files and not
# big ones with more chance for corruption, but that's
# what Telegram wants.
hash_md5.update(part)
# Encryption part if needed
if key and iv:
part = AES.encrypt_ige(part, key, iv)
# The SavePartRequest is different depending on whether
# the file is too large or not (over or less than 10MB)
if is_big:
@ -752,6 +663,9 @@ class UploadMethods:
else:
raise RuntimeError(
'Failed to upload file part {}.'.format(part_index))
finally:
if close_stream:
await helpers._maybe_await(stream.close())
if is_big:
return types.InputFileBig(file_id, part_count, file_name)
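
The chunking arithmetic above can be illustrated in isolation; the values follow the constraints enforced by the code (at most 512 KB per part, a multiple of 1024, and the 10 MB "big file" threshold).

```python
# Standalone illustration of the part-size rules enforced above.
def split_into_parts(file_size, part_size_kb=512):
    if part_size_kb > 512:
        raise ValueError('The part size must be less or equal to 512KB')
    part_size = int(part_size_kb * 1024)
    if part_size % 1024 != 0:
        raise ValueError('The part size must be evenly divisible by 1024')
    part_count = (file_size + part_size - 1) // part_size   # ceiling division
    is_big = file_size > 10 * 1024 * 1024                    # "big file" API path
    return part_size, part_count, is_big

# A 25 MB file -> 50 parts of 512 KB, uploaded through the "big file" requests
print(split_into_parts(25 * 1024 * 1024))
```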
@ -766,21 +680,19 @@ class UploadMethods:
self, file, force_document=False, file_size=None,
progress_callback=None, attributes=None, thumb=None,
allow_cache=True, voice_note=False, video_note=False,
supports_streaming=False, mime_type=None, as_image=None,
ttl=None, nosound_video=None):
supports_streaming=False, mime_type=None, as_image=None):
if not file:
return None, None, None
if isinstance(file, pathlib.Path):
file = str(file.absolute())
is_image = utils.is_image(file)
if as_image is None:
as_image = is_image and not force_document
as_image = utils.is_image(file) and not force_document
# `aiofiles` objects are not based on `io.IOBase` but do have `read`, so we
# just check for the read attribute to see if it's file-like.
if not isinstance(file, (str, bytes, types.InputFile, types.InputFileBig)) \
if not isinstance(file, (str, bytes, types.InputFile, types.InputFileBig))\
and not hasattr(file, 'read'):
# The user may pass a Message containing media (or the media,
# or anything similar) that should be treated as a file. Try
@ -796,8 +708,7 @@ class UploadMethods:
force_document=force_document,
voice_note=voice_note,
video_note=video_note,
supports_streaming=supports_streaming,
ttl=ttl
supports_streaming=supports_streaming
), as_image)
except TypeError:
# Can't turn whatever was given into media
@ -816,13 +727,13 @@ class UploadMethods:
)
elif re.match('https?://', file):
if as_image:
media = types.InputMediaPhotoExternal(file, ttl_seconds=ttl)
media = types.InputMediaPhotoExternal(file)
else:
media = types.InputMediaDocumentExternal(file, ttl_seconds=ttl)
media = types.InputMediaDocumentExternal(file)
else:
bot_file = utils.resolve_bot_file_id(file)
if bot_file:
media = utils.get_input_media(bot_file, ttl=ttl)
media = utils.get_input_media(bot_file)
if media:
pass # Already have media, don't check the rest
@ -832,17 +743,16 @@ class UploadMethods:
'an HTTP URL or a valid bot-API-like file ID'.format(file)
)
elif as_image:
media = types.InputMediaUploadedPhoto(file_handle, ttl_seconds=ttl)
media = types.InputMediaUploadedPhoto(file_handle)
else:
attributes, mime_type = utils.get_attributes(
file,
mime_type=mime_type,
attributes=attributes,
force_document=force_document and not is_image,
force_document=force_document,
voice_note=voice_note,
video_note=video_note,
supports_streaming=supports_streaming,
thumb=thumb
supports_streaming=supports_streaming
)
if not thumb:
@ -852,18 +762,12 @@ class UploadMethods:
thumb = str(thumb.absolute())
thumb = await self.upload_file(thumb, file_size=file_size)
# setting `nosound_video` to `True` doesn't affect videos with sound
# instead it prevents sending silent videos as GIFs
nosound_video = nosound_video if mime_type.split("/")[0] == 'video' else None
media = types.InputMediaUploadedDocument(
file=file_handle,
mime_type=mime_type,
attributes=attributes,
thumb=thumb,
force_file=force_document and not is_image,
ttl_seconds=ttl,
nosound_video=nosound_video
force_file=force_document
)
return file_handle, media, as_image

View File

@ -26,19 +26,12 @@ def _fmt_flood(delay, request, *, early=False, td=datetime.timedelta):
class UserMethods:
async def __call__(self: 'TelegramClient', request, ordered=False, flood_sleep_threshold=None):
async def __call__(self: 'TelegramClient', request, ordered=False):
return await self._call(self._sender, request, ordered=ordered)
async def _call(self: 'TelegramClient', sender, request, ordered=False, flood_sleep_threshold=None):
if self._loop is not None and self._loop != helpers.get_running_loop():
raise RuntimeError('The asyncio event loop must not change after connection (see the FAQ for details)')
# if the loop is None it will fail with a connection error later on
if flood_sleep_threshold is None:
flood_sleep_threshold = self.flood_sleep_threshold
requests = list(request) if utils.is_list_like(request) else [request]
request = list(request) if utils.is_list_like(request) else request
for i, r in enumerate(requests):
async def _call(self: 'TelegramClient', sender, request, ordered=False):
requests = (request if utils.is_list_like(request) else (request,))
for r in requests:
if not isinstance(r, TLRequest):
raise _NOT_A_REQUEST()
await r.resolve(self, utils)
@ -49,24 +42,15 @@ class UserMethods:
diff = round(due - time.time())
if diff <= 3: # Flood waits below 3 seconds are "ignored"
self._flood_waited_requests.pop(r.CONSTRUCTOR_ID, None)
elif diff <= flood_sleep_threshold:
elif diff <= self.flood_sleep_threshold:
self._log[__name__].info(*_fmt_flood(diff, r, early=True))
await asyncio.sleep(diff)
self._flood_waited_requests.pop(r.CONSTRUCTOR_ID, None)
else:
raise errors.FloodWaitError(request=r, capture=diff)
if self._no_updates:
if utils.is_list_like(request):
request[i] = functions.InvokeWithoutUpdatesRequest(r)
else:
# This should only run once as requests should be a list of 1 item
request = functions.InvokeWithoutUpdatesRequest(r)
request_index = 0
last_error = None
self._last_request = time.time()
for attempt in retry_range(self._request_retries):
try:
future = sender.send(request, ordered=ordered)
@ -80,7 +64,8 @@ class UserMethods:
exceptions.append(e)
results.append(None)
continue
await utils.maybe_async(self.session.process_entities(result))
self.session.process_entities(result)
self._entity_cache.add(result)
exceptions.append(None)
results.append(result)
request_index += 1
@ -90,28 +75,22 @@ class UserMethods:
return results
else:
result = await future
await utils.maybe_async(self.session.process_entities(result))
self.session.process_entities(result)
self._entity_cache.add(result)
return result
except (errors.ServerError, errors.RpcCallFailError,
errors.RpcMcgetFailError, errors.InterdcCallErrorError,
errors.TimedOutError,
errors.InterdcCallRichErrorError) as e:
last_error = e
errors.RpcMcgetFailError) as e:
self._log[__name__].warning(
'Telegram is having internal issues %s: %s',
e.__class__.__name__, e)
await asyncio.sleep(2)
except (errors.FloodWaitError, errors.FloodPremiumWaitError,
errors.SlowModeWaitError, errors.FloodTestPhoneWaitError) as e:
last_error = e
except (errors.FloodWaitError, errors.SlowModeWaitError, errors.FloodTestPhoneWaitError) as e:
if utils.is_list_like(request):
request = request[request_index]
# SLOW_MODE_WAIT is chat-specific, not request-specific
if not isinstance(e, errors.SlowModeWaitError):
self._flood_waited_requests\
[request.CONSTRUCTOR_ID] = time.time() + e.seconds
self._flood_waited_requests\
[request.CONSTRUCTOR_ID] = time.time() + e.seconds
# In test servers, FLOOD_WAIT_0 has been observed, and sleeping for
# such a short amount will cause retries very fast leading to issues.
@ -125,7 +104,6 @@ class UserMethods:
raise
except (errors.PhoneMigrateError, errors.NetworkMigrateError,
errors.UserMigrateError) as e:
last_error = e
self._log[__name__].info('Phone migrated to %d', e.new_dc)
should_raise = isinstance(e, (
errors.PhoneMigrateError, errors.NetworkMigrateError
@ -134,8 +112,6 @@ class UserMethods:
raise
await self._switch_dc(e.new_dc)
if self._raise_last_call_error and last_error is not None:
raise last_error
raise ValueError('Request was unsuccessful {} time(s)'
.format(attempt))
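
A sketch of how the flood-wait threshold handled above is typically driven from user code; the session name, API credentials and the 60-second threshold are placeholder values, not recommendations.

```python
# Hedged sketch of the flood-wait behaviour described above.
from telethon import errors
from telethon.sync import TelegramClient

with TelegramClient('session', 12345, '0123456789abcdef0123456789abcdef') as client:
    client.flood_sleep_threshold = 60  # sleep automatically for waits <= 60s
    try:
        client.send_message('me', 'hello')
    except errors.FloodWaitError as e:
        # Waits longer than the threshold are raised instead of slept
        print('Flooded, must wait', e.seconds, 'seconds')
```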
@ -163,30 +139,23 @@ class UserMethods:
me = await client.get_me()
print(me.username)
"""
if input_peer and self._mb_entity_cache.self_id:
return self._mb_entity_cache.get(self._mb_entity_cache.self_id)._as_input_peer()
if input_peer and self._self_input_peer:
return self._self_input_peer
try:
me = (await self(
functions.users.GetUsersRequest([types.InputUserSelf()])))[0]
if not self._mb_entity_cache.self_id:
self._mb_entity_cache.set_self_user(me.id, me.bot, me.access_hash)
self._bot = me.bot
if not self._self_input_peer:
self._self_input_peer = utils.get_input_peer(
me, allow_self=False
)
return utils.get_input_peer(me, allow_self=False) if input_peer else me
return self._self_input_peer if input_peer else me
except errors.UnauthorizedError:
return None
@property
def _self_id(self: 'TelegramClient') -> typing.Optional[int]:
"""
Returns the ID of the logged-in user, if known.
This property is used in every update, and some like `updateLoginToken`
occur prior to login, so it gracefully handles when no ID is known yet.
"""
return self._mb_entity_cache.self_id
async def is_bot(self: 'TelegramClient') -> bool:
"""
Return `True` if the signed-in user is a bot, `False` otherwise.
@ -199,10 +168,10 @@ class UserMethods:
else:
print('Hello')
"""
if self._mb_entity_cache.self_bot is None:
await self.get_me(input_peer=True)
if self._bot is None:
self._bot = (await self.get_me()).bot
return self._mb_entity_cache.self_bot
return self._bot
async def is_user_authorized(self: 'TelegramClient') -> bool:
"""
@ -228,7 +197,7 @@ class UserMethods:
async def get_entity(
self: 'TelegramClient',
entity: 'hints.EntitiesLike') -> typing.Union['hints.Entity', typing.List['hints.Entity']]:
entity: 'hints.EntitiesLike') -> 'hints.Entity':
"""
Turns the given entity into a valid Telegram :tl:`User`, :tl:`Chat`
or :tl:`Channel`. You can also pass a list or iterable of entities,
@ -327,9 +296,7 @@ class UserMethods:
# Merge users, chats and channels into a single dictionary
id_entity = {
# `get_input_entity` might've guessed the type from a non-marked ID,
# so the only way to match that with the input is by not using marks here.
utils.get_peer_id(x, add_mark=False): x
utils.get_peer_id(x): x
for x in itertools.chain(users, chats, channels)
}
@ -342,7 +309,7 @@ class UserMethods:
if isinstance(x, str):
result.append(await self._get_entity_from_string(x))
elif not isinstance(x, types.InputPeerSelf):
result.append(id_entity[utils.get_peer_id(x, add_mark=False)])
result.append(id_entity[utils.get_peer_id(x)])
else:
result.append(next(
u for u in id_entity.values()
@ -425,8 +392,8 @@ class UserMethods:
try:
# 0x2d45687 == crc32(b'Peer')
if isinstance(peer, int) or peer.SUBCLASS_OF_ID == 0x2d45687:
return self._mb_entity_cache.get(utils.get_peer_id(peer, add_mark=False))._as_input_peer()
except AttributeError:
return self._entity_cache[peer]
except (AttributeError, KeyError):
pass
# Then come known strings that take precedence
@ -435,8 +402,7 @@ class UserMethods:
# No InputPeer, cached peer, or known string. Fetch from disk cache
try:
input_entity = await utils.maybe_async(self.session.get_input_entity(peer))
return input_entity
return self.session.get_input_entity(peer)
except ValueError:
pass
@ -473,16 +439,12 @@ class UserMethods:
pass
raise ValueError(
'Could not find the input entity for {} ({}). Please read https://'
'docs.telethon.dev/en/stable/concepts/entities.html to'
'Could not find the input entity for {!r}. Please read https://'
'docs.telethon.dev/en/latest/concepts/entities.html to'
' find out more details.'
.format(peer, type(peer).__name__)
.format(peer)
)
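
The error above usually means the session has never met the peer. A common workaround, sketched here with placeholder credentials, is to resolve the entity from something Telegram itself can look up (such as a username) before relying on bare numeric IDs:

```python
# Hedged sketch: populate the session cache before using a bare ID.
from telethon.sync import TelegramClient

with TelegramClient('session', 12345, '0123456789abcdef0123456789abcdef') as client:
    # Usernames can always be resolved through the API...
    peer = client.get_input_entity('telegram')
    # ...after which the numeric ID is known to the session cache too.
    print(peer)
```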
async def _get_peer(self: 'TelegramClient', peer: 'hints.EntityLike'):
i, cls = utils.resolve_id(await self.get_peer_id(peer))
return cls(i)
async def get_peer_id(
self: 'TelegramClient',
peer: 'hints.EntityLike',
@ -575,8 +537,8 @@ class UserMethods:
pass
try:
# Nobody with this username, maybe it's an exact name/title
input_entity = await utils.maybe_async(self.session.get_input_entity(string))
return await self.get_entity(input_entity)
return await self.get_entity(
self.session.get_input_entity(string))
except ValueError:
pass
@ -613,8 +575,6 @@ class UserMethods:
notify.peer = await self.get_input_entity(notify.peer)
return notify
except AttributeError:
pass
return types.InputNotifyPeer(await self.get_input_entity(notify))
return types.InputNotifyPeer(await self.get_input_entity(notify))
# endregion

View File

@ -23,11 +23,11 @@ def _find_ssl_lib():
# https://www.shh.sh/2020/01/04/python-abort-trap-6.html
if sys.platform == 'darwin':
release, _version_info, _machine = platform.mac_ver()
ver, major, *_ = release.split('.')
ten, major, *minor = release.split('.')
# macOS 10.14 "mojave" is the last known major release
# to support unversioned libssl.dylib. Anything above
# needs specific versions
if int(ver) > 10 or int(ver) == 10 and int(major) > 14:
if major and int(major) > 14:
lib = (
ctypes.util.find_library('libssl.46') or
ctypes.util.find_library('libssl.44') or

View File

@ -1 +0,0 @@
from .tl.custom import *

141
telethon/entitycache.py Normal file
View File

@ -0,0 +1,141 @@
import inspect
import itertools
from . import utils
from .tl import types
# Which updates have the following fields?
_has_field = {
('user_id', int): [],
('chat_id', int): [],
('channel_id', int): [],
('peer', 'TypePeer'): [],
('peer', 'TypeDialogPeer'): [],
('message', 'TypeMessage'): [],
}
# Note: We don't bother checking for some rare fields:
# * `UpdateChatParticipantAdd.inviter_id` integer.
# * `UpdateNotifySettings.peer` dialog peer.
# * `UpdatePinnedDialogs.order` list of dialog peers.
# * `UpdateReadMessagesContents.messages` list of messages.
# * `UpdateChatParticipants.participants` list of participants.
#
# There are also some uninteresting `update.message` of type string.
def _fill():
for name in dir(types):
update = getattr(types, name)
if getattr(update, 'SUBCLASS_OF_ID', None) == 0x9f89304e:
cid = update.CONSTRUCTOR_ID
sig = inspect.signature(update.__init__)
for param in sig.parameters.values():
vec = _has_field.get((param.name, param.annotation))
if vec is not None:
vec.append(cid)
# Future-proof check: if the ``__init__`` signatures ever change
# then we won't be able to pick the update types we are interested
# in, so we must make sure we have at least one update for each field,
# which likely means we are doing it right.
if not all(_has_field.values()):
raise RuntimeError('FIXME: Did the init signature or updates change?')
# We use a function to avoid cluttering the globals (with name/update/cid/doc)
_fill()
class EntityCache:
"""
In-memory input entity cache, defaultdict-like behaviour.
"""
def add(self, entities):
"""
Adds the given entities to the cache, if they weren't saved before.
"""
if not utils.is_list_like(entities):
# Invariant: all "chats" and "users" are always iterables,
# and "user" never is (so we wrap it inside a list).
entities = itertools.chain(
getattr(entities, 'chats', []),
getattr(entities, 'users', []),
(hasattr(entities, 'user') and [entities.user]) or []
)
for entity in entities:
try:
pid = utils.get_peer_id(entity)
if pid not in self.__dict__:
# Note: `get_input_peer` already checks for `access_hash`
self.__dict__[pid] = utils.get_input_peer(entity)
except TypeError:
pass
def __getitem__(self, item):
"""
Gets the corresponding :tl:`InputPeer` for the given ID or peer,
or raises ``KeyError`` on any error (i.e. cannot be found).
"""
if not isinstance(item, int) or item < 0:
try:
return self.__dict__[utils.get_peer_id(item)]
except TypeError:
raise KeyError('Invalid key will not have entity') from None
for cls in (types.PeerUser, types.PeerChat, types.PeerChannel):
result = self.__dict__.get(utils.get_peer_id(cls(item)))
if result:
return result
raise KeyError('No cached entity for the given key')
def ensure_cached(
self,
update,
has_user_id=frozenset(_has_field[('user_id', int)]),
has_chat_id=frozenset(_has_field[('chat_id', int)]),
has_channel_id=frozenset(_has_field[('channel_id', int)]),
has_peer=frozenset(_has_field[('peer', 'TypePeer')] + _has_field[('peer', 'TypeDialogPeer')]),
has_message=frozenset(_has_field[('message', 'TypeMessage')])
):
"""
Ensures that all the relevant entities in the given update are cached.
"""
# This method is called pretty often and we want it to have the lowest
# overhead possible. For that, we avoid `isinstance` and constantly
# getting attributes out of `types.` by "caching" the constructor IDs
# in sets inside the arguments, and using local variables.
dct = self.__dict__
cid = update.CONSTRUCTOR_ID
if cid in has_user_id and \
update.user_id not in dct:
return False
if cid in has_chat_id and \
utils.get_peer_id(types.PeerChat(update.chat_id)) not in dct:
return False
if cid in has_channel_id and \
utils.get_peer_id(types.PeerChannel(update.channel_id)) not in dct:
return False
if cid in has_peer and \
utils.get_peer_id(update.peer) not in dct:
return False
if cid in has_message:
x = update.message
y = getattr(x, 'to_id', None) # handle MessageEmpty
if y and utils.get_peer_id(y) not in dct:
return False
y = getattr(x, 'from_id', None)
if y and y not in dct:
return False
# We don't quite worry about entities anywhere else.
# This is enough.
return True
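
A minimal sketch of how this in-memory cache behaves. The entity below is hand-made with placeholder values; inside the client the cache is fed from API results instead, and the module itself only exists on the side of this comparison that adds the file.

```python
# Hedged sketch of EntityCache behaviour using a hand-made entity.
from telethon.entitycache import EntityCache
from telethon.tl import types

cache = EntityCache()
# add() expects an iterable (or an API result with .users/.chats)
cache.add([types.User(id=123, access_hash=456)])   # stored as InputPeerUser
print(cache[123])                                  # lookup by plain ID
print(cache[types.PeerUser(123)])                  # or by Peer object
```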

View File

@ -6,7 +6,7 @@ import re
from .common import (
ReadCancelledError, TypeNotFoundError, InvalidChecksumError,
InvalidBufferError, AuthKeyNotFound, SecurityError, CdnFileTamperedError,
InvalidBufferError, SecurityError, CdnFileTamperedError,
AlreadyInConversationError, BadMessageError, MultiError
)
@ -24,8 +24,7 @@ def rpc_message_to_error(rpc_error, request):
:return: the RPCError as a Python exception that represents this error.
"""
# Try to get the error by direct look-up, otherwise regex
# Case-insensitive, for things like "timeout" which don't conform.
cls = rpc_errors_dict.get(rpc_error.error_message.upper(), None)
cls = rpc_errors_dict.get(rpc_error.error_message, None)
if cls:
return cls(request=request)

View File

@ -1,6 +1,5 @@
"""Errors not related to the Telegram API itself"""
import struct
import textwrap
from ..tl import TLRequest
@ -19,8 +18,8 @@ class TypeNotFoundError(Exception):
def __init__(self, invalid_constructor_id, remaining):
super().__init__(
'Could not find a matching Constructor ID for the TLObject '
'that was supposed to be read with ID {:08x}. See the FAQ '
'for more details. '
'that was supposed to be read with ID {:08x}. Most likely, '
'a TLObject was trying to be read when it should not be read. '
'Remaining bytes: {!r}'.format(invalid_constructor_id, remaining))
self.invalid_constructor_id = invalid_constructor_id
@ -59,22 +58,6 @@ class InvalidBufferError(BufferError):
'Invalid response buffer (too short {})'.format(self.payload))
class AuthKeyNotFound(Exception):
"""
The server claims it doesn't know about the authorization key (session
file) currently being used. This might be because it either has never
seen this authorization key, or it used to know about the authorization
key but has forgotten it, either temporarily or permanently (possibly
due to server errors).
If the issue persists, you may need to recreate the session file and login
again. This is not done automatically because it is not possible to know
if the issue is temporary or permanent.
"""
def __init__(self):
super().__init__(textwrap.dedent(self.__class__.__doc__))
class SecurityError(Exception):
"""
Generic security error, mostly used when generating a new AuthKey.

View File

@ -1,15 +1,3 @@
from ..tl import functions
_NESTS_QUERY = (
functions.InvokeAfterMsgRequest,
functions.InvokeAfterMsgsRequest,
functions.InitConnectionRequest,
functions.InvokeWithLayerRequest,
functions.InvokeWithoutUpdatesRequest,
functions.InvokeWithMessagesRangeRequest,
functions.InvokeWithTakeoutRequest,
)
class RPCError(Exception):
"""Base class for all Remote Procedure Call errors."""
code = None
@ -25,15 +13,7 @@ class RPCError(Exception):
@staticmethod
def _fmt_request(request):
n = 0
reason = ''
while isinstance(request, _NESTS_QUERY):
n += 1
reason += request.__class__.__name__ + '('
request = request.query
reason += request.__class__.__name__ + ')' * n
return ' (caused by {})'.format(reason)
return ' (caused by {})'.format(request.__class__.__name__)
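
The newer branch above unwraps nested invoke-style containers when naming the offending request. A self-contained imitation of that loop, using stand-in classes since constructing real TL requests needs arguments:

```python
# Imitation of the nested-request formatting above with stand-in classes.
class InvokeWithLayerRequest:
    def __init__(self, query):
        self.query = query

class GetUsersRequest:
    pass

_NESTS_QUERY = (InvokeWithLayerRequest,)

def fmt_request(request):
    n, reason = 0, ''
    while isinstance(request, _NESTS_QUERY):
        n += 1
        reason += request.__class__.__name__ + '('
        request = request.query
    reason += request.__class__.__name__ + ')' * n
    return ' (caused by {})'.format(reason)

print(fmt_request(InvokeWithLayerRequest(GetUsersRequest())))
# -> ' (caused by InvokeWithLayerRequest(GetUsersRequest))'
```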
def __reduce__(self):
return type(self), (self.request, self.message, self.code)

View File

@ -97,10 +97,8 @@ class Album(EventBuilder):
@classmethod
def build(cls, update, others=None, self_id=None):
# TODO normally we'd only check updates if they come with other updates
# but MessageBox is not designed for this so others will always be None.
# In essence we always rely on AlbumHack rather than returning early if not others.
others = [update]
if not others:
return # We only care about albums which come inside the same Updates
if isinstance(update,
(types.UpdateNewMessage, types.UpdateNewChannelMessage)):
@ -152,15 +150,23 @@ class Album(EventBuilder):
"""
def __init__(self, messages):
message = messages[0]
super().__init__(chat_peer=message.peer_id,
if not message.out and isinstance(message.to_id, types.PeerUser):
# Incoming message (e.g. from a bot) has to_id=us, and
# from_id=bot (the actual "chat" from a user's perspective).
chat_peer = types.PeerUser(message.from_id)
else:
chat_peer = message.to_id
super().__init__(chat_peer=chat_peer,
msg_id=message.id, broadcast=bool(message.post))
SenderGetter.__init__(self, message.sender_id)
self.messages = messages
def _set_client(self, client):
super()._set_client(client)
self._sender, self._input_sender = utils._get_entity_pair(
self.sender_id, self._entities, client._mb_entity_cache)
self.sender_id, self._entities, client._entity_cache)
for msg in self.messages:
msg._finish_init(client, self._entities, None)
@ -254,6 +260,7 @@ class Album(EventBuilder):
"""
if self._client:
kwargs['messages'] = self.messages
kwargs['as_album'] = True
kwargs['from_peer'] = await self.get_input_chat()
return await self._client.forward_messages(*args, **kwargs)

View File

@ -135,7 +135,7 @@ class CallbackQuery(EventBuilder):
The object returned by the ``data=`` parameter
when creating the event builder, if any. Similar
to ``pattern_match`` for the new message event.
pattern_match (`obj`, optional):
Alias for ``data_match``.
"""
@ -151,7 +151,7 @@ class CallbackQuery(EventBuilder):
def _set_client(self, client):
super()._set_client(client)
self._sender, self._input_sender = utils._get_entity_pair(
self.sender_id, self._entities, client._mb_entity_cache)
self.sender_id, self._entities, client._entity_cache)
@property
def id(self):
@ -208,9 +208,8 @@ class CallbackQuery(EventBuilder):
if not getattr(self._input_sender, 'access_hash', True):
# getattr with True to handle the InputPeerSelf() case
try:
self._input_sender = self._client._mb_entity_cache.get(
utils.resolve_id(self._sender_id)[0])._as_input_peer()
except AttributeError:
self._input_sender = self._client._entity_cache[self._sender_id]
except KeyError:
m = await self.get_message()
if m:
self._sender = m._sender
@ -300,7 +299,7 @@ class CallbackQuery(EventBuilder):
"""
Edits the message. Shorthand for
`telethon.client.messages.MessageMethods.edit_message` with
the ``entity`` set to the correct :tl:`InputBotInlineMessageID` or :tl:`InputBotInlineMessageID64`.
the ``entity`` set to the correct :tl:`InputBotInlineMessageID`.
Returns `True` if the edit was successful.
@ -313,7 +312,7 @@ class CallbackQuery(EventBuilder):
since the message object is normally not present.
"""
self._client.loop.create_task(self.answer())
if isinstance(self.query.msg_id, (types.InputBotInlineMessageID, types.InputBotInlineMessageID64)):
if isinstance(self.query.msg_id, types.InputBotInlineMessageID):
return await self._client.edit_message(
self.query.msg_id, *args, **kwargs
)
@ -338,8 +337,6 @@ class CallbackQuery(EventBuilder):
This method will likely fail if `via_inline` is `True`.
"""
self._client.loop.create_task(self.answer())
if isinstance(self.query.msg_id, (types.InputBotInlineMessageID, types.InputBotInlineMessageID64)):
raise TypeError('Inline messages cannot be deleted as there is no API request available to do so')
return await self._client.delete_messages(
await self.get_input_chat(), [self.query.msg_id],
*args, **kwargs

View File

@ -11,7 +11,6 @@ class ChatAction(EventBuilder):
* Whenever a new chat is created.
* Whenever a chat's title or photo is changed or removed.
* Whenever a new message is pinned.
* Whenever a user scores in a game.
* Whenever a user joins or is added to the group.
* Whenever a user is removed or leaves a group if it has
less than 50 members or the removed user was a bot.
@ -30,21 +29,18 @@ class ChatAction(EventBuilder):
if event.user_joined:
await event.reply('Welcome to the group!')
"""
@classmethod
def build(cls, update, others=None, self_id=None):
# Rely on specific pin updates for unpins, but otherwise ignore them
# for new pins (we'd rather handle the new service message with pin,
# so that we can act on that message').
if isinstance(update, types.UpdatePinnedChannelMessages) and not update.pinned:
if isinstance(update, types.UpdateChannelPinnedMessage) and update.id == 0:
# Telegram does not always send
# UpdateChannelPinnedMessage for new pins
# but always for unpin, with update.id = 0
return cls.Event(types.PeerChannel(update.channel_id),
pin_ids=update.messages,
pin=update.pinned)
unpin=True)
elif isinstance(update, types.UpdatePinnedMessages) and not update.pinned:
return cls.Event(update.peer,
pin_ids=update.messages,
pin=update.pinned)
elif isinstance(update, types.UpdateChatPinnedMessage) and update.id == 0:
return cls.Event(types.PeerChat(update.chat_id),
unpin=True)
elif isinstance(update, types.UpdateChatParticipantAdd):
return cls.Event(types.PeerChat(update.chat_id),
@ -56,9 +52,20 @@ class ChatAction(EventBuilder):
kicked_by=True,
users=update.user_id)
# UpdateChannel is sent if we leave a channel, and the update._entities
# set by _process_update would let us make some guesses. However it's
# better not to rely on this. Rely only in MessageActionChatDeleteUser.
elif isinstance(update, types.UpdateChannel):
# We rely on the fact that update._entities is set by _process_update
# This update only has the channel ID, and Telegram *should* have sent
# the entity in the Updates.chats list. If it did, check Channel.left
# to determine what happened.
peer = types.PeerChannel(update.channel_id)
channel = update._entities.get(utils.get_peer_id(peer))
if channel is not None:
if isinstance(channel, types.ChannelForbidden) or channel.left:
return cls.Event(peer,
kicked_by=True)
else:
return cls.Event(peer,
added_by=True)
elif (isinstance(update, (
types.UpdateNewMessage, types.UpdateNewChannelMessage))
@ -70,14 +77,14 @@ class ChatAction(EventBuilder):
added_by=True,
users=msg.from_id)
elif isinstance(action, types.MessageActionChatAddUser):
# If a user adds itself, it means they joined via the public chat username
added_by = ([msg.sender_id] == action.users) or msg.from_id
# If a user adds itself, it means they joined
added_by = ([msg.from_id] == action.users) or msg.from_id
return cls.Event(msg,
added_by=added_by,
users=action.users)
elif isinstance(action, types.MessageActionChatDeleteUser):
return cls.Event(msg,
kicked_by=utils.get_peer_id(msg.from_id) if msg.from_id else True,
kicked_by=msg.from_id or True,
users=action.user_id)
elif isinstance(action, types.MessageActionChatCreate):
return cls.Event(msg,
@ -101,23 +108,12 @@ class ChatAction(EventBuilder):
return cls.Event(msg,
users=msg.from_id,
new_photo=True)
elif isinstance(action, types.MessageActionPinMessage) and msg.reply_to:
elif isinstance(action, types.MessageActionPinMessage) and msg.reply_to_msg_id:
# Seems to not be reliable on unpins, but when pinning
# we prefer this because we know who caused it.
return cls.Event(msg,
pin_ids=[msg.reply_to_msg_id])
elif isinstance(action, types.MessageActionGameScore):
return cls.Event(msg,
new_score=action.score)
elif isinstance(update, types.UpdateChannelParticipant) \
and bool(update.new_participant) != bool(update.prev_participant):
# If members are hidden, bots will receive this update instead,
# as there won't be a service message. Promotions and demotions
# seem to have both new and prev participant, which are ignored
# by this event.
return cls.Event(types.PeerChannel(update.channel_id),
users=update.user_id,
added_by=update.actor_id if update.new_participant else None,
kicked_by=update.actor_id if update.prev_participant else None)
users=msg.from_id,
new_pin=msg.reply_to_msg_id)
class Event(EventCommon):
"""
@ -154,29 +150,22 @@ class ChatAction(EventBuilder):
new_title (`str`, optional):
The new title string for the chat, if applicable.
new_score (`str`, optional):
The new score string for the game, if applicable.
unpin (`bool`):
`True` if the existing pin gets unpinned.
"""
def __init__(self, where, new_photo=None,
def __init__(self, where, new_pin=None, new_photo=None,
added_by=None, kicked_by=None, created=None,
users=None, new_title=None, pin_ids=None, pin=None, new_score=None):
users=None, new_title=None, unpin=None):
if isinstance(where, types.MessageService):
self.action_message = where
where = where.peer_id
where = where.to_id
else:
self.action_message = None
# TODO needs some testing (can there be more than one id, and do they follow pin order?)
# same in get_pinned_message
super().__init__(chat_peer=where, msg_id=pin_ids[0] if pin_ids else None)
super().__init__(chat_peer=where, msg_id=new_pin)
self.new_pin = pin_ids is not None
self._pin_ids = pin_ids
self._pinned_messages = None
self.new_pin = isinstance(new_pin, int)
self._pinned_message = new_pin
self.new_photo = new_photo is not None
self.photo = \
@ -204,17 +193,16 @@ class ChatAction(EventBuilder):
self.created = bool(created)
if isinstance(users, list):
self._user_ids = [utils.get_peer_id(u) for u in users]
self._user_ids = users
elif users:
self._user_ids = [utils.get_peer_id(users)]
self._user_ids = [users]
else:
self._user_ids = []
self._users = None
self._input_users = None
self.new_title = new_title
self.new_score = new_score
self.unpin = not pin
self.unpin = unpin
def _set_client(self, client):
super()._set_client(client)
@ -268,26 +256,16 @@ class ChatAction(EventBuilder):
If ``new_pin`` is `True`, this returns the `Message
<telethon.tl.custom.message.Message>` object that was pinned.
"""
if self._pinned_messages is None:
await self.get_pinned_messages()
if self._pinned_message == 0:
return None
if self._pinned_messages:
return self._pinned_messages[0]
if isinstance(self._pinned_message, int)\
and await self.get_input_chat():
self._pinned_message = await self._client.get_messages(
self._input_chat, ids=self._pinned_message)
async def get_pinned_messages(self):
"""
If ``new_pin`` is `True`, this returns a `list` of `Message
<telethon.tl.custom.message.Message>` objects that were pinned.
"""
if not self._pin_ids:
return self._pin_ids # either None or empty list
chat = await self.get_input_chat()
if chat:
self._pinned_messages = await self._client.get_messages(
self._input_chat, ids=self._pin_ids)
return self._pinned_messages
if isinstance(self._pinned_message, types.Message):
return self._pinned_message
@property
def added_by(self):
@ -425,10 +403,9 @@ class ChatAction(EventBuilder):
# If missing, try from the entity cache
try:
self._input_users.append(self._client._mb_entity_cache.get(
utils.resolve_id(user_id)[0])._as_input_peer())
self._input_users.append(self._client._entity_cache[user_id])
continue
except AttributeError:
except KeyError:
pass
return self._input_users or []

View File

@ -154,7 +154,7 @@ class EventCommon(ChatGetter, abc.ABC):
self._client = client
if self._chat_peer:
self._chat, self._input_chat = utils._get_entity_pair(
self.chat_id, self._entities, client._mb_entity_cache)
self.chat_id, self._entities, client._entity_cache)
else:
self._chat = self._input_chat = None

View File

@ -4,7 +4,7 @@ import re
import asyncio
from .common import EventBuilder, EventCommon, name_inner_event
from .. import utils, helpers
from .. import utils
from ..tl import types, functions, custom
from ..tl.custom.sendergetter import SenderGetter
@ -99,7 +99,7 @@ class InlineQuery(EventBuilder):
def _set_client(self, client):
super()._set_client(client)
self._sender, self._input_sender = utils._get_entity_pair(
self.sender_id, self._entities, client._mb_entity_cache)
self.sender_id, self._entities, client._entity_cache)
@property
def id(self):
@ -130,7 +130,7 @@ class InlineQuery(EventBuilder):
and the user's device is able to send it, this will return
the :tl:`GeoPoint` with the position of the user.
"""
return self.query.geo
return
@property
def builder(self):
@ -147,9 +147,6 @@ class InlineQuery(EventBuilder):
"""
Answers the inline query with the given results.
See the documentation for `builder` to know what kind of answers
can be given.
Args:
results (`list`, optional):
A list of :tl:`InputBotInlineResult` to use.
@ -174,9 +171,9 @@ class InlineQuery(EventBuilder):
gallery (`bool`, optional):
Whether the results should show as a gallery (grid) or not.
next_offset (`str`, optional):
The offset the client will send when the user scrolls the
The offset the client will send when the user scrolls the
results and it repeats the request.
private (`bool`, optional):
@ -242,6 +239,6 @@ class InlineQuery(EventBuilder):
if inspect.isawaitable(obj):
return asyncio.ensure_future(obj)
f = helpers.get_running_loop().create_future()
f = asyncio.get_event_loop().create_future()
f.set_result(obj)
return f

View File

@ -1,7 +1,6 @@
import re
from .common import EventBuilder, EventCommon, name_inner_event, _into_id_set
from .. import utils
from ..tl import types
@ -107,15 +106,16 @@ class NewMessage(EventBuilder):
media_unread=update.media_unread,
silent=update.silent,
id=update.id,
peer_id=types.PeerUser(update.user_id),
from_id=types.PeerUser(self_id if update.out else update.user_id),
# Note that to_id/from_id complement each other in private
# messages, depending on whether the message was outgoing.
to_id=types.PeerUser(update.user_id if update.out else self_id),
from_id=self_id if update.out else update.user_id,
message=update.message,
date=update.date,
fwd_from=update.fwd_from,
via_bot_id=update.via_bot_id,
reply_to=update.reply_to,
entities=update.entities,
ttl_period=update.ttl_period
reply_to_msg_id=update.reply_to_msg_id,
entities=update.entities
))
elif isinstance(update, types.UpdateShortChatMessage):
event = cls.Event(types.Message(
@ -124,19 +124,25 @@ class NewMessage(EventBuilder):
media_unread=update.media_unread,
silent=update.silent,
id=update.id,
from_id=types.PeerUser(self_id if update.out else update.from_id),
peer_id=types.PeerChat(update.chat_id),
from_id=update.from_id,
to_id=types.PeerChat(update.chat_id),
message=update.message,
date=update.date,
fwd_from=update.fwd_from,
via_bot_id=update.via_bot_id,
reply_to=update.reply_to,
entities=update.entities,
ttl_period=update.ttl_period
reply_to_msg_id=update.reply_to_msg_id,
entities=update.entities
))
else:
return
# Make messages sent to ourselves outgoing unless they're forwarded.
# This makes it consistent with the official client's appearance.
ori = event.message
if isinstance(ori.to_id, types.PeerUser):
if ori.from_id == ori.to_id.user_id and not ori.fwd_from:
event.message.out = True
return event
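
For context, the event built above ends up in handlers registered as in the following sketch; the `filter` method below is what applies the `chats`, `from_users` and `pattern` arguments. Credentials, username and pattern are placeholders.

```python
# Hedged usage sketch of the NewMessage filters applied by filter() below.
from telethon import TelegramClient, events

client = TelegramClient('session', 12345, '0123456789abcdef0123456789abcdef')

@client.on(events.NewMessage(from_users='telegram', pattern=r'(?i)hello'))
async def handler(event):
    await event.reply('hi!')

client.start()
client.run_until_disconnected()
```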
def filter(self, event):
@ -152,7 +158,7 @@ class NewMessage(EventBuilder):
return
if self.from_users is not None:
if event.message.sender_id not in self.from_users:
if event.message.from_id not in self.from_users:
return
if self.pattern:
@ -198,7 +204,14 @@ class NewMessage(EventBuilder):
"""
def __init__(self, message):
self.__dict__['_init'] = False
super().__init__(chat_peer=message.peer_id,
if not message.out and isinstance(message.to_id, types.PeerUser):
# Incoming message (e.g. from a bot) has to_id=us, and
# from_id=bot (the actual "chat" from a user's perspective).
chat_peer = types.PeerUser(message.from_id)
else:
chat_peer = message.to_id
super().__init__(chat_peer=chat_peer,
msg_id=message.id, broadcast=bool(message.post))
self.pattern_match = None
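
The core difference in this hunk is how a private-chat short update, which only carries the other party's `user_id` plus an `out` flag, is turned into a message with a distinct chat and sender. A small sketch of that derivation, kept independent of the TL types (function name is made up for illustration):

```python
def resolve_private_short_update(self_id: int, user_id: int, out: bool):
    """Derive (sender_id, chat_id) for a private-chat short update.

    `user_id` is always the *other* party; the sender depends on whether
    the message was outgoing from our own account.
    """
    sender_id = self_id if out else user_id
    chat_id = user_id  # private chats are identified by the other party
    return sender_id, chat_id

# e.g. an incoming message from user 123 while logged in as 999:
assert resolve_private_short_update(999, 123, out=False) == (123, 123)
# and one we sent ourselves to user 123:
assert resolve_private_short_update(999, 123, out=True) == (999, 123)
```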

View File

@ -14,7 +14,7 @@ from ..tl.custom.sendergetter import SenderGetter
# in a single place will make it annoying to use (since
# the user needs to check for the existence of `None`).
#
# TODO Handle UpdateUserBlocked, UpdateUserName, UpdateUserPhone, UpdateUser
# TODO Handle UpdateUserBlocked, UpdateUserName, UpdateUserPhone, UpdateUserPhoto
def _requires_action(function):
@functools.wraps(function)
@ -51,15 +51,12 @@ class UserUpdate(EventBuilder):
@classmethod
def build(cls, update, others=None, self_id=None):
if isinstance(update, types.UpdateUserStatus):
return cls.Event(types.PeerUser(update.user_id),
return cls.Event(update.user_id,
status=update.status)
elif isinstance(update, types.UpdateChannelUserTyping):
return cls.Event(update.from_id,
chat_peer=types.PeerChannel(update.channel_id),
typing=update.action)
elif isinstance(update, types.UpdateChatUserTyping):
return cls.Event(update.from_id,
chat_peer=types.PeerChat(update.chat_id),
# Unfortunately, we can't know whether `chat_id`'s type
return cls.Event(update.user_id,
chat_id=update.chat_id,
typing=update.action)
elif isinstance(update, types.UpdateUserTyping):
return cls.Event(update.user_id,
@ -85,17 +82,38 @@ class UserUpdate(EventBuilder):
of the typing properties, since they will all be `None`
if the action is not set.
"""
def __init__(self, peer, *, status=None, chat_peer=None, typing=None):
super().__init__(chat_peer or peer)
SenderGetter.__init__(self, utils.get_peer_id(peer))
def __init__(self, user_id, *, status=None, chat_id=None, typing=None):
if chat_id is None:
super().__init__(types.PeerUser(user_id))
else:
# Temporarily set the chat_peer to the ID until ._set_client.
# We need the client to actually figure out its type.
super().__init__(chat_id)
SenderGetter.__init__(self, user_id)
self.status = status
self.action = typing
def _set_client(self, client):
if isinstance(self._chat_peer, int):
try:
chat = client._entity_cache[self._chat_peer]
if isinstance(chat, types.InputPeerChat):
self._chat_peer = types.PeerChat(self._chat_peer)
elif isinstance(chat, types.InputPeerChannel):
self._chat_peer = types.PeerChannel(self._chat_peer)
else:
# Should not happen
self._chat_peer = types.PeerUser(self._chat_peer)
except KeyError:
# Hope for the best. We don't know where this event
# occurred but it was most likely in a channel.
self._chat_peer = types.PeerChannel(self._chat_peer)
super()._set_client(client)
self._sender, self._input_sender = utils._get_entity_pair(
self.sender_id, self._entities, client._mb_entity_cache)
self.sender_id, self._entities, client._entity_cache)
@property
def user(self):
@ -136,7 +154,6 @@ class UserUpdate(EventBuilder):
"""
return isinstance(self.action, (
types.SendMessageChooseContactAction,
types.SendMessageChooseStickerAction,
types.SendMessageUploadAudioAction,
types.SendMessageUploadDocumentAction,
types.SendMessageUploadPhotoAction,
@ -229,14 +246,6 @@ class UserUpdate(EventBuilder):
"""
return isinstance(self.action, types.SendMessageUploadDocumentAction)
@property
@_requires_action
def sticker(self):
"""
`True` if what's being uploaded is a sticker.
"""
return isinstance(self.action, types.SendMessageChooseStickerAction)
@property
@_requires_action
def photo(self):
@ -246,7 +255,7 @@ class UserUpdate(EventBuilder):
return isinstance(self.action, types.SendMessageUploadPhotoAction)
@property
@_requires_status
@_requires_action
def last_seen(self):
"""
Exact `datetime.datetime` when the user was last seen if known.

View File

@ -1,6 +1,6 @@
"""
Several extensions Python is missing, such as a proper class to handle a TCP
communication with support for cancelling the operation, and a utility class
communication with support for cancelling the operation, and an utility class
to read arbitrary binary data in a more comfortable way, with int/strings/etc.
"""
from .binaryreader import BinaryReader

View File

@ -1,9 +1,11 @@
"""
This module contains the BinaryReader utility class.
"""
import struct
import os
import time
from datetime import datetime, timedelta, timezone
from datetime import datetime, timezone, timedelta
from io import BytesIO
from struct import unpack
from ..errors import TypeNotFoundError
from ..tl.alltlobjects import tlobjects
@ -19,8 +21,7 @@ class BinaryReader:
"""
def __init__(self, data):
self.stream = data or b''
self.position = 0
self.stream = BytesIO(data)
self._last = None # Should come in handy to spot -404 errors
# region Reading
@ -29,35 +30,23 @@ class BinaryReader:
# https://core.telegram.org/mtproto
def read_byte(self):
"""Reads a single byte value."""
value, = struct.unpack_from("<B", self.stream, self.position)
self.position += 1
return value
return self.read(1)[0]
def read_int(self, signed=True):
"""Reads an integer (4 bytes) value."""
fmt = '<i' if signed else '<I'
value, = struct.unpack_from(fmt, self.stream, self.position)
self.position += 4
return value
return int.from_bytes(self.read(4), byteorder='little', signed=signed)
def read_long(self, signed=True):
"""Reads a long integer (8 bytes) value."""
fmt = '<q' if signed else '<Q'
value, = struct.unpack_from(fmt, self.stream, self.position)
self.position += 8
return value
return int.from_bytes(self.read(8), byteorder='little', signed=signed)
def read_float(self):
"""Reads a real floating point (4 bytes) value."""
value, = struct.unpack_from("<f", self.stream, self.position)
self.position += 4
return value
return unpack('<f', self.read(4))[0]
def read_double(self):
"""Reads a real floating point (8 bytes) value."""
value, = struct.unpack_from("<d", self.stream, self.position)
self.position += 8
return value
return unpack('<d', self.read(8))[0]
def read_large_int(self, bits, signed=True):
"""Reads a n-bits long integer value."""
@ -66,12 +55,7 @@ class BinaryReader:
def read(self, length=-1):
"""Read the given amount of bytes, or -1 to read all remaining."""
if length >= 0:
result = self.stream[self.position:self.position + length]
self.position += length
else:
result = self.stream[self.position:]
self.position += len(result)
result = self.stream.read(length)
if (length >= 0) and (len(result) != length):
raise BufferError(
'No more data left to read (need {}, got {}: {}); last read {}'
@ -83,7 +67,7 @@ class BinaryReader:
def get_bytes(self):
"""Gets the byte array representing the current buffer as a whole."""
return self.stream
return self.stream.getvalue()
# endregion
@ -169,24 +153,24 @@ class BinaryReader:
def close(self):
"""Closes the reader, freeing the BytesIO stream."""
self.stream = b''
self.stream.close()
# region Position related
def tell_position(self):
"""Tells the current position on the stream."""
return self.position
return self.stream.tell()
def set_position(self, position):
"""Sets the current position on the stream."""
self.position = position
self.stream.seek(position)
def seek(self, offset):
"""
Seeks the stream position given an offset from the current position.
The offset may be negative.
"""
self.position += offset
self.stream.seek(offset, os.SEEK_CUR)
# endregion
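
One side of this diff reads through a `BytesIO` stream, the other keeps the raw `bytes` plus an integer cursor and decodes with `struct.unpack_from`, avoiding copies and stream bookkeeping. A stripped-down sketch of the cursor approach (illustrative only, not the full reader):

```python
import struct

class CursorReader:
    """Minimal little-endian reader over an in-memory bytes buffer."""

    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def _unpack(self, fmt: str, size: int):
        value, = struct.unpack_from(fmt, self.data, self.pos)
        self.pos += size
        return value

    def read_int(self, signed=True):
        return self._unpack('<i' if signed else '<I', 4)

    def read_long(self, signed=True):
        return self._unpack('<q' if signed else '<Q', 8)

    def read(self, length: int) -> bytes:
        chunk = self.data[self.pos:self.pos + length]
        if len(chunk) != length:
            raise BufferError('not enough data left to read')
        self.pos += length
        return chunk

r = CursorReader(struct.pack('<iq', 7, 2**40) + b'tail')
assert r.read_int() == 7 and r.read_long() == 2**40 and r.read(4) == b'tail'
```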

View File

@ -1,22 +1,34 @@
"""
Simple HTML -> Telegram entity parser.
"""
import struct
from collections import deque
from html import escape
from html.parser import HTMLParser
from typing import Iterable, Tuple, List
from typing import Iterable, Optional, Tuple, List
from ..helpers import add_surrogate, del_surrogate, within_surrogate, strip_text
from ..tl import TLObject
from .. import helpers
from ..tl.types import (
MessageEntityBold, MessageEntityItalic, MessageEntityCode,
MessageEntityPre, MessageEntityEmail, MessageEntityUrl,
MessageEntityTextUrl, MessageEntityMentionName,
MessageEntityUnderline, MessageEntityStrike, MessageEntityBlockquote,
MessageEntityCustomEmoji, TypeMessageEntity
TypeMessageEntity
)
# Helpers from markdown.py
def _add_surrogate(text):
return ''.join(
''.join(chr(y) for y in struct.unpack('<HH', x.encode('utf-16le')))
if (0x10000 <= ord(x) <= 0x10FFFF) else x for x in text
)
def _del_surrogate(text):
return text.encode('utf-16', 'surrogatepass').decode('utf-16')
class HTMLToTelegramParser(HTMLParser):
def __init__(self):
super().__init__()
@ -74,19 +86,11 @@ class HTMLToTelegramParser(HTMLParser):
EntityType = MessageEntityUrl
else:
EntityType = MessageEntityTextUrl
args['url'] = del_surrogate(url)
args['url'] = url
url = None
self._open_tags_meta.popleft()
self._open_tags_meta.appendleft(url)
elif tag == 'tg-emoji':
try:
emoji_id = int(attrs['emoji-id'])
except (KeyError, ValueError):
return
EntityType = MessageEntityCustomEmoji
args['document_id'] = emoji_id
if EntityType and tag not in self._building_entities:
self._building_entities[tag] = EntityType(
offset=len(self.text),
@ -129,36 +133,13 @@ def parse(html: str) -> Tuple[str, List[TypeMessageEntity]]:
return html, []
parser = HTMLToTelegramParser()
parser.feed(add_surrogate(html))
text = strip_text(parser.text, parser.entities)
parser.entities.reverse()
parser.entities.sort(key=lambda entity: entity.offset)
return del_surrogate(text), parser.entities
parser.feed(_add_surrogate(html))
text = helpers.strip_text(parser.text, parser.entities)
return _del_surrogate(text), parser.entities
ENTITY_TO_FORMATTER = {
MessageEntityBold: ('<strong>', '</strong>'),
MessageEntityItalic: ('<em>', '</em>'),
MessageEntityCode: ('<code>', '</code>'),
MessageEntityUnderline: ('<u>', '</u>'),
MessageEntityStrike: ('<del>', '</del>'),
MessageEntityBlockquote: ('<blockquote>', '</blockquote>'),
MessageEntityPre: lambda e, _: (
"<pre>\n"
" <code class='language-{}'>\n"
" ".format(e.language), "{}\n"
" </code>\n"
"</pre>"
),
MessageEntityEmail: lambda _, t: ('<a href="mailto:{}">'.format(t), '</a>'),
MessageEntityUrl: lambda _, t: ('<a href="{}">'.format(t), '</a>'),
MessageEntityTextUrl: lambda e, _: ('<a href="{}">'.format(escape(e.url)), '</a>'),
MessageEntityMentionName: lambda e, _: ('<a href="tg://user?id={}">'.format(e.user_id), '</a>'),
MessageEntityCustomEmoji: lambda e, _: ('<tg-emoji emoji-id="{}">'.format(e.document_id), '</tg-emoji>'),
}
def unparse(text: str, entities: Iterable[TypeMessageEntity]) -> str:
def unparse(text: str, entities: Iterable[TypeMessageEntity], _offset: int = 0,
_length: Optional[int] = None) -> str:
"""
Performs the reverse operation to .parse(), effectively returning HTML
given a normal text and its MessageEntity's.
@ -172,32 +153,77 @@ def unparse(text: str, entities: Iterable[TypeMessageEntity]) -> str:
elif not entities:
return escape(text)
if isinstance(entities, TLObject):
entities = (entities,)
text = add_surrogate(text)
insert_at = []
text = _add_surrogate(text)
if _length is None:
_length = len(text)
html = []
last_offset = 0
for i, entity in enumerate(entities):
s = entity.offset
e = entity.offset + entity.length
delimiter = ENTITY_TO_FORMATTER.get(type(entity), None)
if delimiter:
if callable(delimiter):
delimiter = delimiter(entity, text[s:e])
insert_at.append((s, i, delimiter[0]))
insert_at.append((e, -i, delimiter[1]))
if entity.offset >= _offset + _length:
break
relative_offset = entity.offset - _offset
if relative_offset > last_offset:
html.append(escape(text[last_offset:relative_offset]))
elif relative_offset < last_offset:
continue
insert_at.sort(key=lambda t: (t[0], t[1]))
next_escape_bound = len(text)
while insert_at:
# Same logic as markdown.py
at, _, what = insert_at.pop()
while within_surrogate(text, at):
at += 1
skip_entity = False
length = entity.length
text = text[:at] + what + escape(text[at:next_escape_bound]) + text[next_escape_bound:]
next_escape_bound = at
# If we are in the middle of a surrogate nudge the position by +1.
# Otherwise we would end up with malformed text and fail to encode.
# For example of bad input: "Hi \ud83d\ude1c"
# https://en.wikipedia.org/wiki/UTF-16#U+010000_to_U+10FFFF
while helpers.within_surrogate(text, relative_offset, length=_length):
relative_offset += 1
text = escape(text[:next_escape_bound]) + text[next_escape_bound:]
while helpers.within_surrogate(text, relative_offset + length, length=_length):
length += 1
return del_surrogate(text)
entity_text = unparse(text=text[relative_offset:relative_offset + length],
entities=entities[i + 1:],
_offset=entity.offset, _length=length)
entity_type = type(entity)
if entity_type == MessageEntityBold:
html.append('<strong>{}</strong>'.format(entity_text))
elif entity_type == MessageEntityItalic:
html.append('<em>{}</em>'.format(entity_text))
elif entity_type == MessageEntityCode:
html.append('<code>{}</code>'.format(entity_text))
elif entity_type == MessageEntityUnderline:
html.append('<u>{}</u>'.format(entity_text))
elif entity_type == MessageEntityStrike:
html.append('<del>{}</del>'.format(entity_text))
elif entity_type == MessageEntityBlockquote:
html.append('<blockquote>{}</blockquote>'.format(entity_text))
elif entity_type == MessageEntityPre:
if entity.language:
html.append(
"<pre>\n"
" <code class='language-{}'>\n"
" {}\n"
" </code>\n"
"</pre>".format(entity.language, entity_text))
else:
html.append('<pre><code>{}</code></pre>'
.format(entity_text))
elif entity_type == MessageEntityEmail:
html.append('<a href="mailto:{0}">{0}</a>'.format(entity_text))
elif entity_type == MessageEntityUrl:
html.append('<a href="{0}">{0}</a>'.format(entity_text))
elif entity_type == MessageEntityTextUrl:
html.append('<a href="{}">{}</a>'
.format(escape(entity.url), entity_text))
elif entity_type == MessageEntityMentionName:
html.append('<a href="tg://user?id={}">{}</a>'
.format(entity.user_id, entity_text))
else:
skip_entity = True
last_offset = relative_offset + (0 if skip_entity else length)
while helpers.within_surrogate(text, last_offset, length=_length):
last_offset += 1
html.append(escape(text[last_offset:]))
return _del_surrogate(''.join(html))
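
Both versions juggle UTF-16 surrogate pairs because Telegram measures entity offsets and lengths in UTF-16 code units, while Python strings count code points. A short sketch of why the add/del surrogate round-trip matters; the helpers mirror the ones in this diff but are re-implemented here for illustration:

```python
import struct

def add_surrogate(text: str) -> str:
    # Split astral-plane characters into their UTF-16 surrogate halves so
    # that len() matches Telegram's UTF-16 based entity offsets.
    return ''.join(
        ''.join(chr(u) for u in struct.unpack('<HH', ch.encode('utf-16le')))
        if 0x10000 <= ord(ch) <= 0x10FFFF else ch
        for ch in text
    )

def del_surrogate(text: str) -> str:
    # Re-join the surrogate halves into real characters.
    return text.encode('utf-16', 'surrogatepass').decode('utf-16')

emoji = '😀 hi'
assert len(emoji) == 4                  # Python counts the emoji as one
assert len(add_surrogate(emoji)) == 5   # Telegram counts it as two
assert del_surrogate(add_surrogate(emoji)) == emoji
```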

View File

@ -22,10 +22,14 @@ DEFAULT_DELIMITERS = {
'```': MessageEntityPre
}
DEFAULT_URL_RE = re.compile(r'\[([^]]*?)\]\(([\s\S]*?)\)')
DEFAULT_URL_RE = re.compile(r'\[([\S\s]+?)\]\((.+?)\)')
DEFAULT_URL_FORMAT = '[{0}]({1})'
def overlap(a, b, x, y):
return max(a, x) < min(b, y)
def parse(message, delimiters=None, url_re=None):
"""
Parses the given markdown message and returns its stripped representation
@ -86,8 +90,8 @@ def parse(message, delimiters=None, url_re=None):
for ent in result:
# If the end is after our start, it is affected
if ent.offset + ent.length > i:
# If the old start is before ours and the old end is after ours, we are fully enclosed
if ent.offset <= i and ent.offset + ent.length >= end + len(delim):
# If the old start is also before ours, it is fully enclosed
if ent.offset <= i:
ent.length -= len(delim) * 2
else:
ent.length -= len(delim)
@ -115,7 +119,7 @@ def parse(message, delimiters=None, url_re=None):
message[m.end():]
))
delim_size = m.end() - m.start() - len(m.group(1))
delim_size = m.end() - m.start() - len(m.group())
for ent in result:
# If the end is after our start, it is affected
if ent.offset + ent.length > m.start():
@ -160,13 +164,13 @@ def unparse(text, entities, delimiters=None, url_fmt=None):
text = add_surrogate(text)
delimiters = {v: k for k, v in delimiters.items()}
insert_at = []
for i, entity in enumerate(entities):
for entity in entities:
s = entity.offset
e = entity.offset + entity.length
delimiter = delimiters.get(type(entity), None)
if delimiter:
insert_at.append((s, i, delimiter))
insert_at.append((e, -i, delimiter))
insert_at.append((s, delimiter))
insert_at.append((e, delimiter))
else:
url = None
if isinstance(entity, MessageEntityTextUrl):
@ -174,12 +178,12 @@ def unparse(text, entities, delimiters=None, url_fmt=None):
elif isinstance(entity, MessageEntityMentionName):
url = 'tg://user?id={}'.format(entity.user_id)
if url:
insert_at.append((s, i, '['))
insert_at.append((e, -i, ']({})'.format(url)))
insert_at.append((s, '['))
insert_at.append((e, ']({})'.format(url)))
insert_at.sort(key=lambda t: (t[0], t[1]))
insert_at.sort(key=lambda t: t[0])
while insert_at:
at, _, what = insert_at.pop()
at, what = insert_at.pop()
# If we are in the middle of a surrogate nudge the position by -1.
# Otherwise we would end up with malformed text and fail to encode.
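
The v1 side of this hunk threads the entity index into the `insert_at` tuples, so delimiters that share the same position nest correctly (inner entities close before outer ones). A toy illustration of why the secondary sort key matters, using made-up delimiters:

```python
# Two entities covering the same span [0, 5): bold (index 0) and italic (index 1).
insert_at = []
for i, (start, end, open_d, close_d) in enumerate([(0, 5, '**', '**'),
                                                   (0, 5, '__', '__')]):
    insert_at.append((start, i, open_d))    # opens keep entity order
    insert_at.append((end, -i, close_d))    # negative index so inner entities close first

insert_at.sort(key=lambda t: (t[0], t[1]))

text = 'hello'
while insert_at:
    at, _, what = insert_at.pop()           # insert from the back to keep offsets valid
    text = text[:at] + what + text[at:]

assert text == '**__hello__**'              # properly nested, not '**__hello**__'
```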

View File

@ -1 +0,0 @@
from .tl.functions import *

View File

@ -1,14 +1,9 @@
"""Various helpers not related to the Telegram API itself"""
import asyncio
import io
import enum
import os
import struct
import inspect
import logging
import functools
import sys
from pathlib import Path
from hashlib import sha1
@ -18,9 +13,6 @@ class _EntityType(enum.Enum):
CHANNEL = 2
_log = logging.getLogger(__name__)
# region Multiple utilities
@ -58,102 +50,62 @@ def within_surrogate(text, index, *, length=None):
return (
1 < index < len(text) and # in bounds
'\ud800' <= text[index - 1] <= '\udbff' and # previous is
'\ud800' <= text[index - 1] <= '\udfff' and # previous is
'\ud800' <= text[index] <= '\udfff' # current is
)
def strip_text(text, entities):
"""
Strips whitespace from the given surrogated text modifying the provided
entities, also removing any empty (0-length) entities.
Strips whitespace from the given text modifying the provided entities.
This assumes that the length of entities is greater or equal to 0, and
that no entity is out of bounds.
This assumes that there are no overlapping entities, that their length
is greater or equal to one, and that their length is not out of bounds.
"""
if not entities:
return text.strip()
len_ori = len(text)
text = text.lstrip()
left_offset = len_ori - len(text)
text = text.rstrip()
len_final = len(text)
for i in reversed(range(len(entities))):
e = entities[i]
if e.length == 0:
del entities[i]
continue
if e.offset + e.length > left_offset:
if e.offset >= left_offset:
# 0 1|2 3 4 5 | 0 1|2 3 4 5
# ^ ^ | ^
# lo(2) o(5) | o(2)/lo(2)
e.offset -= left_offset
# |0 1 2 3 | |0 1 2 3
# ^ | ^
# o=o-lo(3=5-2) | o=o-lo(0=2-2)
while text and text[-1].isspace():
e = entities[-1]
if e.offset + e.length == len(text):
if e.length == 1:
del entities[-1]
if not entities:
return text.strip()
else:
# e.offset < left_offset and e.offset + e.length > left_offset
# 0 1 2 3|4 5 6 7 8 9 10
# ^ ^ ^
# o(1) lo(4) o+l(1+9)
e.length = e.offset + e.length - left_offset
e.offset = 0
# |0 1 2 3 4 5 6
# ^ ^
# o(0) o+l=0+o+l-lo(6=0+6=0+1+9-4)
else:
# e.offset + e.length <= left_offset
# 0 1 2 3|4 5
# ^ ^
# o(0) o+l(4)
# lo(4)
del entities[i]
continue
e.length -= 1
text = text[:-1]
if e.offset + e.length <= len_final:
# |0 1 2 3 4 5 6 7 8 9
# ^ ^
# o(1) o+l(1+9)/lf(10)
continue
if e.offset >= len_final:
# |0 1 2 3 4
# ^
# o(5)/lf(5)
del entities[i]
else:
# e.offset < len_final and e.offset + e.length > len_final
# |0 1 2 3 4 5 (6) (7) (8) (9)
# ^ ^ ^
# o(1) lf(6) o+l(1+8)
e.length = len_final - e.offset
# |0 1 2 3 4 5
# ^ ^
# o(1) o+l=o+lf-o=lf(6=1+5=1+6-1)
while text and text[0].isspace():
for i in reversed(range(len(entities))):
e = entities[i]
if e.offset != 0:
e.offset -= 1
continue
if e.length == 1:
del entities[0]
if not entities:
return text.lstrip()
else:
e.length -= 1
text = text[1:]
return text
def retry_range(retries, force_retry=True):
def retry_range(retries):
"""
Generates an integer sequence starting from 1. If `retries` is
not a zero or a positive integer value, the sequence will be
infinite, otherwise it will end at `retries + 1`.
"""
# We need at least one iteration even if the retries are 0
# when force_retry is True.
if force_retry and not (retries is None or retries < 0):
retries += 1
yield 1
attempt = 0
while attempt != retries:
attempt += 1
yield attempt
yield 1 + attempt
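
`retry_range` simply yields 1, 2, 3, … for up to `retries` attempts (or forever when `retries` is `None` or negative); the v1 variant additionally guarantees one iteration even when `retries` is 0 via `force_retry`. A hedged usage sketch of how such a generator gets consumed (the `fetch_with_retries` wrapper is invented for illustration):

```python
import time

def retry_range(retries, force_retry=True):
    """Yield attempt numbers starting at 1; None/negative means retry forever."""
    if force_retry and not (retries is None or retries < 0):
        retries += 1          # always get at least one try, even with retries=0
    attempt = 0
    while attempt != retries:
        attempt += 1
        yield attempt

def fetch_with_retries(fetch, retries=3, delay=0.0):
    last_error = None
    for attempt in retry_range(retries):
        try:
            return fetch()
        except IOError as e:
            last_error = e
            time.sleep(delay)  # back off before the next attempt
    raise last_error

attempts = []

def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise IOError('temporary failure')
    return 'ok'

assert fetch_with_retries(flaky, retries=3) == 'ok'
assert len(attempts) == 3
```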
async def _maybe_await(value):
@ -322,113 +274,4 @@ class TotalList(list):
', '.join(repr(x) for x in self), self.total)
class _FileStream(io.IOBase):
"""
Proxy around things that represent a file and need to be used as streams
which may or not need to be closed.
This will handle `pathlib.Path`, `str` paths, in-memory `bytes`, and
anything IO-like (including `aiofiles`).
It also provides access to the name and file size (also necessary).
"""
def __init__(self, file, *, file_size=None):
if isinstance(file, Path):
file = str(file.absolute())
self._file = file
self._name = None
self._size = file_size
self._stream = None
self._close_stream = None
async def __aenter__(self):
if isinstance(self._file, str):
self._name = os.path.basename(self._file)
self._size = os.path.getsize(self._file)
self._stream = open(self._file, 'rb')
self._close_stream = True
return self
if isinstance(self._file, bytes):
self._size = len(self._file)
self._stream = io.BytesIO(self._file)
self._close_stream = True
return self
if not callable(getattr(self._file, 'read', None)):
raise TypeError('file description should have a `read` method')
self._name = getattr(self._file, 'name', None)
self._stream = self._file
self._close_stream = False
if self._size is None:
if callable(getattr(self._file, 'seekable', None)):
seekable = await _maybe_await(self._file.seekable())
else:
seekable = False
if seekable:
pos = await _maybe_await(self._file.tell())
await _maybe_await(self._file.seek(0, os.SEEK_END))
self._size = await _maybe_await(self._file.tell())
await _maybe_await(self._file.seek(pos, os.SEEK_SET))
else:
_log.warning(
'Could not determine file size beforehand so the entire '
'file will be read in-memory')
data = await _maybe_await(self._file.read())
self._size = len(data)
self._stream = io.BytesIO(data)
self._close_stream = True
return self
async def __aexit__(self, exc_type, exc_val, exc_tb):
if self._close_stream and self._stream:
await _maybe_await(self._stream.close())
@property
def file_size(self):
return self._size
@property
def name(self):
return self._name
# Proxy all the methods. Doesn't need to be readable (makes multiline edits easier)
def read(self, *args, **kwargs): return self._stream.read(*args, **kwargs)
def readinto(self, *args, **kwargs): return self._stream.readinto(*args, **kwargs)
def write(self, *args, **kwargs): return self._stream.write(*args, **kwargs)
def fileno(self, *args, **kwargs): return self._stream.fileno(*args, **kwargs)
def flush(self, *args, **kwargs): return self._stream.flush(*args, **kwargs)
def isatty(self, *args, **kwargs): return self._stream.isatty(*args, **kwargs)
def readable(self, *args, **kwargs): return self._stream.readable(*args, **kwargs)
def readline(self, *args, **kwargs): return self._stream.readline(*args, **kwargs)
def readlines(self, *args, **kwargs): return self._stream.readlines(*args, **kwargs)
def seek(self, *args, **kwargs): return self._stream.seek(*args, **kwargs)
def seekable(self, *args, **kwargs): return self._stream.seekable(*args, **kwargs)
def tell(self, *args, **kwargs): return self._stream.tell(*args, **kwargs)
def truncate(self, *args, **kwargs): return self._stream.truncate(*args, **kwargs)
def writable(self, *args, **kwargs): return self._stream.writable(*args, **kwargs)
def writelines(self, *args, **kwargs): return self._stream.writelines(*args, **kwargs)
# close is special because it will be called by __del__ but we do NOT
# want to close the file unless we have to (we're just a wrapper).
# Instead, we do nothing (we should be used through the decorator which
# has its own mechanism to close the file correctly).
def close(self, *args, **kwargs):
pass
# endregion
def get_running_loop():
if sys.version_info >= (3, 7):
try:
return asyncio.get_running_loop()
except RuntimeError:
return asyncio.get_event_loop_policy().get_event_loop()
else:
return asyncio.get_event_loop()
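
The `_FileStream` class shown above normalizes paths, raw bytes and file-like objects into a single interface that also exposes the name and size. The class itself is private and asynchronous; the following is only a synchronous, simplified stand-in to show the normalization idea, with all names invented:

```python
import io
import os

class FileSource:
    """Tiny stand-in for the idea behind _FileStream: normalize a path,
    bytes, or file-like object into (name, size, readable stream)."""

    def __init__(self, file):
        if isinstance(file, str):
            self.name = os.path.basename(file)
            self.size = os.path.getsize(file)
            self.stream, self._owns = open(file, 'rb'), True
        elif isinstance(file, bytes):
            self.name, self.size = None, len(file)
            self.stream, self._owns = io.BytesIO(file), True
        else:  # assume an already-open, seekable binary file object
            self.name = getattr(file, 'name', None)
            file.seek(0, os.SEEK_END)
            self.size = file.tell()
            file.seek(0)
            self.stream, self._owns = file, False

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        if self._owns:
            self.stream.close()

with FileSource(b'raw payload') as f:
    assert (f.size, f.stream.read()) == (11, b'raw payload')
```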

View File

@ -44,12 +44,7 @@ FileLike = typing.Union[
typing.BinaryIO,
types.TypeMessageMedia,
types.TypeInputFile,
types.TypeInputFileLocation,
types.TypeInputMedia,
types.TypePhoto,
types.TypeInputPhoto,
types.TypeDocument,
types.TypeInputDocument
types.TypeInputFileLocation
]
# Can't use `typing.Type` in Python 3.5.2

View File

@ -8,12 +8,7 @@ try:
except ImportError:
ssl_mod = None
try:
import python_socks
except ImportError:
python_socks = None
from ...errors import InvalidChecksumError, InvalidBufferError
from ...errors import InvalidChecksumError
from ... import helpers
@ -33,13 +28,12 @@ class Connection(abc.ABC):
# should be one of `PacketCodec` implementations
packet_codec = None
def __init__(self, ip, port, dc_id, *, loggers, proxy=None, local_addr=None):
def __init__(self, ip, port, dc_id, *, loggers, proxy=None):
self._ip = ip
self._port = port
self._dc_id = dc_id # only for MTProxy, it's an abstraction leak
self._log = loggers[__name__]
self._proxy = proxy
self._local_addr = local_addr
self._reader = None
self._writer = None
self._connected = False
@ -50,194 +44,47 @@ class Connection(abc.ABC):
self._send_queue = asyncio.Queue(1)
self._recv_queue = asyncio.Queue(1)
@staticmethod
def _wrap_socket_ssl(sock):
if ssl_mod is None:
raise RuntimeError(
'Cannot use proxy that requires SSL '
'without the SSL module being available'
async def _connect(self, timeout=None, ssl=None):
if not self._proxy:
self._reader, self._writer = await asyncio.wait_for(
asyncio.open_connection(self._ip, self._port, ssl=ssl),
timeout=timeout
)
return ssl_mod.wrap_socket(
sock,
do_handshake_on_connect=True,
ssl_version=ssl_mod.PROTOCOL_SSLv23,
ciphers='ADH-AES256-SHA')
@staticmethod
def _parse_proxy(proxy_type, addr, port, rdns=True, username=None, password=None):
if isinstance(proxy_type, str):
proxy_type = proxy_type.lower()
# Always prefer `python_socks` when available
if python_socks:
from python_socks import ProxyType
# We do the check for numerical values here
# to be backwards compatible with PySocks proxy format,
# (since socks.SOCKS5 == 2, socks.SOCKS4 == 1, socks.HTTP == 3)
if proxy_type == ProxyType.SOCKS5 or proxy_type == 2 or proxy_type == "socks5":
protocol = ProxyType.SOCKS5
elif proxy_type == ProxyType.SOCKS4 or proxy_type == 1 or proxy_type == "socks4":
protocol = ProxyType.SOCKS4
elif proxy_type == ProxyType.HTTP or proxy_type == 3 or proxy_type == "http":
protocol = ProxyType.HTTP
else:
raise ValueError("Unknown proxy protocol type: {}".format(proxy_type))
# This tuple must be compatible with `python_socks`' `Proxy.create()` signature
return protocol, addr, port, username, password, rdns
else:
from socks import SOCKS5, SOCKS4, HTTP
if proxy_type == 2 or proxy_type == "socks5":
protocol = SOCKS5
elif proxy_type == 1 or proxy_type == "socks4":
protocol = SOCKS4
elif proxy_type == 3 or proxy_type == "http":
protocol = HTTP
else:
raise ValueError("Unknown proxy protocol type: {}".format(proxy_type))
# This tuple must be compatible with `PySocks`' `socksocket.set_proxy()` signature
return protocol, addr, port, rdns, username, password
async def _proxy_connect(self, timeout=None, local_addr=None):
if isinstance(self._proxy, (tuple, list)):
parsed = self._parse_proxy(*self._proxy)
elif isinstance(self._proxy, dict):
parsed = self._parse_proxy(**self._proxy)
else:
raise TypeError("Proxy of unknown format: {}".format(type(self._proxy)))
# Always prefer `python_socks` when available
if python_socks:
# python_socks internal errors are not inherited from
# builtin IOError (just from Exception). Instead of adding those
# in exceptions clauses everywhere through the code, we
# rather monkey-patch them in place. Keep in mind that
# ProxyError takes error_code as keyword argument.
class ConnectionErrorExtra(ConnectionError):
def __init__(self, message, error_code=None):
super().__init__(message)
self.error_code = error_code
python_socks._errors.ProxyError = ConnectionErrorExtra
python_socks._errors.ProxyConnectionError = ConnectionError
python_socks._errors.ProxyTimeoutError = ConnectionError
from python_socks.async_.asyncio import Proxy
proxy = Proxy.create(*parsed)
# WARNING: If `local_addr` is set we use manual socket creation, because,
# unfortunately, `Proxy.connect()` does not expose `local_addr`
# argument, so if we want to bind socket locally, we need to manually
# create, bind and connect socket, and then pass to `Proxy.connect()` method.
if local_addr is None:
sock = await proxy.connect(
dest_host=self._ip,
dest_port=self._port,
timeout=timeout
)
else:
# Here we start manual setup of the socket.
# The `address` represents the proxy ip and proxy port,
# not the destination one (!), because the socket
# connects to the proxy server, not destination server.
# IPv family is also checked on proxy address.
if ':' in proxy.proxy_host:
mode, address = socket.AF_INET6, (proxy.proxy_host, proxy.proxy_port, 0, 0)
else:
mode, address = socket.AF_INET, (proxy.proxy_host, proxy.proxy_port)
# Create a non-blocking socket and bind it (if local address is specified).
sock = socket.socket(mode, socket.SOCK_STREAM)
sock.setblocking(False)
sock.bind(local_addr)
# Actual TCP connection is performed here.
await asyncio.wait_for(
helpers.get_running_loop().sock_connect(sock=sock, address=address),
timeout=timeout
)
# As our socket is already created and connected,
# this call sets the destination host/port and
# starts protocol negotiations with the proxy server.
sock = await proxy.connect(
dest_host=self._ip,
dest_port=self._port,
timeout=timeout,
_socket=sock
)
else:
import socks
# Here `address` represents destination address (not proxy), because of
# the `PySocks` implementation of the connection routine.
# IPv family is checked on proxy address, not destination address.
if ':' in parsed[1]:
if ':' in self._ip:
mode, address = socket.AF_INET6, (self._ip, self._port, 0, 0)
else:
mode, address = socket.AF_INET, (self._ip, self._port)
# Setup socket, proxy, timeout and bind it (if necessary).
sock = socks.socksocket(mode, socket.SOCK_STREAM)
sock.set_proxy(*parsed)
sock.settimeout(timeout)
s = socks.socksocket(mode, socket.SOCK_STREAM)
if isinstance(self._proxy, dict):
s.set_proxy(**self._proxy)
else:
s.set_proxy(*self._proxy)
if local_addr is not None:
sock.bind(local_addr)
# Actual TCP connection and negotiation performed here.
s.settimeout(timeout)
await asyncio.wait_for(
helpers.get_running_loop().sock_connect(sock=sock, address=address),
asyncio.get_event_loop().sock_connect(s, address),
timeout=timeout
)
sock.setblocking(False)
return sock
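
Both proxy code paths above accept the proxy either as a positional tuple/list or as a keyword dict and normalize it before handing it to `python_socks` or `PySocks`. A simplified sketch of that normalization step; the numeric protocol constants follow the PySocks numbering used in the checks above, everything else is condensed for illustration:

```python
SOCKS4, SOCKS5, HTTP = 1, 2, 3  # numeric values PySocks uses

def parse_proxy(proxy_type, addr, port, rdns=True, username=None, password=None):
    """Normalize a proxy spec into a single canonical tuple."""
    if isinstance(proxy_type, str):
        proxy_type = proxy_type.lower()
    protocol = {2: SOCKS5, 'socks5': SOCKS5,
                1: SOCKS4, 'socks4': SOCKS4,
                3: HTTP,   'http': HTTP}.get(proxy_type)
    if protocol is None:
        raise ValueError('Unknown proxy protocol type: {}'.format(proxy_type))
    return protocol, addr, port, rdns, username, password

def normalize(proxy):
    # The connection accepts either a tuple/list of positional values or a dict.
    if isinstance(proxy, (tuple, list)):
        return parse_proxy(*proxy)
    if isinstance(proxy, dict):
        return parse_proxy(**proxy)
    raise TypeError('Proxy of unknown format: {}'.format(type(proxy)))

assert normalize(('socks5', '127.0.0.1', 1080)) == (SOCKS5, '127.0.0.1', 1080, True, None, None)
assert normalize({'proxy_type': 2, 'addr': '127.0.0.1', 'port': 1080})[0] == SOCKS5
```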
async def _connect(self, timeout=None, ssl=None):
if self._local_addr is not None:
# NOTE: If port is not specified, we use 0 port
# to notify the OS that port should be chosen randomly
# from the available ones.
if isinstance(self._local_addr, tuple) and len(self._local_addr) == 2:
local_addr = self._local_addr
elif isinstance(self._local_addr, str):
local_addr = (self._local_addr, 0)
else:
raise ValueError("Unknown local address format: {}".format(self._local_addr))
else:
local_addr = None
if not self._proxy:
self._reader, self._writer = await asyncio.wait_for(
asyncio.open_connection(
host=self._ip,
port=self._port,
ssl=ssl,
local_addr=local_addr
), timeout=timeout)
else:
# Proxy setup, connection and negotiation is performed here.
sock = await self._proxy_connect(
timeout=timeout,
local_addr=local_addr
)
# Wrap socket in SSL context (if provided)
if ssl:
sock = self._wrap_socket_ssl(sock)
if ssl_mod is None:
raise RuntimeError(
'Cannot use proxy that requires SSL'
'without the SSL module being available'
)
self._reader, self._writer = await asyncio.open_connection(sock=sock)
s = ssl_mod.wrap_socket(
s,
do_handshake_on_connect=True,
ssl_version=ssl_mod.PROTOCOL_SSLv23,
ciphers='ADH-AES256-SHA'
)
s.setblocking(False)
self._reader, self._writer = await asyncio.open_connection(sock=s)
self._codec = self.packet_codec(self)
self._init_conn()
@ -250,7 +97,7 @@ class Connection(abc.ABC):
await self._connect(timeout=timeout, ssl=ssl)
self._connected = True
loop = helpers.get_running_loop()
loop = asyncio.get_event_loop()
self._send_task = loop.create_task(self._send_loop())
self._recv_task = loop.create_task(self._recv_loop())
@ -259,9 +106,6 @@ class Connection(abc.ABC):
Disconnects from the server, and clears
pending outgoing and incoming messages.
"""
if not self._connected:
return
self._connected = False
await helpers._cancel(
@ -274,12 +118,7 @@ class Connection(abc.ABC):
self._writer.close()
if sys.version_info >= (3, 7):
try:
await asyncio.wait_for(self._writer.wait_closed(), timeout=10)
except asyncio.TimeoutError:
# See issue #3917. For some users, this line was hanging indefinitely.
# The hard timeout is not ideal (connection won't be properly closed),
# but the code will at least be able to proceed.
self._log.warning('Graceful disconnection timed out, forcibly ignoring cleanup')
await self._writer.wait_closed()
except Exception as e:
# Disconnecting should never raise. Seen:
# * OSError: No route to host and
@ -305,10 +144,8 @@ class Connection(abc.ABC):
This method returns a coroutine.
"""
while self._connected:
result, err = await self._recv_queue.get()
if err:
raise err
if result:
result = await self._recv_queue.get()
if result: # None = sentinel value = keep trying
return result
raise ConnectionError('Not connected')
@ -335,31 +172,34 @@ class Connection(abc.ABC):
"""
This loop is constantly putting items on the queue as they're read.
"""
try:
while self._connected:
try:
data = await self._recv()
except asyncio.CancelledError:
break
except (IOError, asyncio.IncompleteReadError) as e:
self._log.warning('Server closed the connection: %s', e)
await self._recv_queue.put((None, e))
await self.disconnect()
except InvalidChecksumError as e:
self._log.warning('Server response had invalid checksum: %s', e)
await self._recv_queue.put((None, e))
except InvalidBufferError as e:
self._log.warning('Server response had invalid buffer: %s', e)
await self._recv_queue.put((None, e))
except Exception as e:
self._log.exception('Unexpected exception in the receive loop')
await self._recv_queue.put((None, e))
await self.disconnect()
while self._connected:
try:
data = await self._recv()
except asyncio.CancelledError:
break
except Exception as e:
if isinstance(e, (IOError, asyncio.IncompleteReadError)):
msg = 'The server closed the connection'
self._log.info(msg)
elif isinstance(e, InvalidChecksumError):
msg = 'The server response had an invalid checksum'
self._log.info(msg)
else:
await self._recv_queue.put((data, None))
finally:
await self.disconnect()
msg = 'Unexpected exception in the receive loop'
self._log.exception(msg)
await self.disconnect()
# Add a sentinel value to unstuck recv
if self._recv_queue.empty():
self._recv_queue.put_nowait(None)
break
try:
await self._recv_queue.put(data)
except asyncio.CancelledError:
break
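
The v1 receive loop above pushes `(data, error)` pairs instead of bare payloads, so `recv()` can re-raise the original exception in the caller's task rather than relying on a `None` sentinel. A minimal asyncio sketch of that producer/consumer pattern, detached from the real connection classes:

```python
import asyncio

async def producer(queue):
    # Simulate one good read followed by a network failure.
    await queue.put((b'payload', None))
    await queue.put((None, ConnectionResetError('server closed the connection')))

async def recv(queue):
    data, err = await queue.get()
    if err:
        raise err          # propagate the reader's failure to the caller
    return data

async def main():
    queue = asyncio.Queue(1)
    asyncio.ensure_future(producer(queue))
    print(await recv(queue))        # b'payload'
    try:
        await recv(queue)
    except ConnectionResetError as e:
        print('recv failed:', e)

asyncio.run(main())
```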
def _init_conn(self):
"""

View File

@ -2,7 +2,7 @@ import struct
from zlib import crc32
from .connection import Connection, PacketCodec
from ...errors import InvalidChecksumError, InvalidBufferError
from ...errors import InvalidChecksumError
class FullPacketCodec(PacketCodec):
@ -24,18 +24,6 @@ class FullPacketCodec(PacketCodec):
async def read_packet(self, reader):
packet_len_seq = await reader.readexactly(8) # 4 and 4
packet_len, seq = struct.unpack('<ii', packet_len_seq)
if packet_len < 0 and seq < 0:
# It has been observed that the length and seq can be -429,
# followed by the body of 4 bytes also being -429.
# See https://github.com/LonamiWebs/Telethon/issues/4042.
body = await reader.readexactly(4)
raise InvalidBufferError(body)
elif packet_len < 8:
# Currently unknown why packet_len may be less than 8 but not negative.
# Attempting to `readexactly` with less than 0 fails without saying what
# the number was which is less helpful.
raise InvalidBufferError(packet_len_seq)
body = await reader.readexactly(packet_len - 8)
checksum = struct.unpack('<I', body[-4:])[0]
body = body[:-4]
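
For context on what `read_packet` is undoing: the "full" TCP transport frames every payload as `<length><seq><payload><crc32>`, and from the read path above the CRC appears to cover everything before it. A hedged sketch of writing and reading one such frame under that assumption:

```python
import struct
from zlib import crc32

def encode_full_packet(payload: bytes, seq: int) -> bytes:
    length = len(payload) + 12                      # 4 len + 4 seq + payload + 4 crc
    head = struct.pack('<ii', length, seq) + payload
    return head + struct.pack('<I', crc32(head))    # checksum covers length+seq+payload

def decode_full_packet(packet: bytes) -> bytes:
    length, seq = struct.unpack('<ii', packet[:8])
    body = packet[8:length - 4]
    checksum = struct.unpack('<I', packet[length - 4:length])[0]
    if crc32(packet[:length - 4]) != checksum:
        raise ValueError('invalid checksum')        # the real codec raises InvalidChecksumError
    return body

frame = encode_full_packet(b'hello', seq=0)
assert decode_full_packet(frame) == b'hello'
```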

View File

@ -1,6 +1,5 @@
import asyncio
import hashlib
import base64
import os
from .connection import ObfuscatedConnection
@ -96,10 +95,10 @@ class TcpMTProxy(ObfuscatedConnection):
obfuscated_io = MTProxyIO
# noinspection PyUnusedLocal
def __init__(self, ip, port, dc_id, *, loggers, proxy=None, local_addr=None):
def __init__(self, ip, port, dc_id, *, loggers, proxy=None):
# connect to proxy's host and port instead of telegram's ones
proxy_host, proxy_port = self.address_info(proxy)
self._secret = self.normalize_secret(proxy[2])
self._secret = bytes.fromhex(proxy[2])
super().__init__(
proxy_host, proxy_port, dc_id, loggers=loggers)
@ -131,18 +130,6 @@ class TcpMTProxy(ObfuscatedConnection):
raise ValueError("No proxy info specified for MTProxy connection")
return proxy_info[:2]
@staticmethod
def normalize_secret(secret):
if secret[:2] in ("ee", "dd"): # Remove extra bytes
secret = secret[2:]
try:
secret_bytes = bytes.fromhex(secret)
except ValueError:
secret = secret + '=' * (-len(secret) % 4)
secret_bytes = base64.b64decode(secret.encode())
return secret_bytes[:16] # Remove the domain from the secret (until domain support is added)
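
`normalize_secret` above lets users paste any of the common MTProxy secret spellings (plain hex, `dd`/`ee`-prefixed hex, or base64) and always end up with the raw 16-byte key. A quick illustration using the same logic; the secret below is a made-up placeholder:

```python
import base64

def normalize_secret(secret: str) -> bytes:
    if secret[:2] in ('ee', 'dd'):                 # strip the padded/fake-TLS marker
        secret = secret[2:]
    try:
        secret_bytes = bytes.fromhex(secret)
    except ValueError:                             # not hex: treat it as base64
        secret = secret + '=' * (-len(secret) % 4)
        secret_bytes = base64.b64decode(secret.encode())
    return secret_bytes[:16]                       # drop any appended fake-TLS domain

plain = '00112233445566778899aabbccddeeff'
assert normalize_secret(plain).hex() == plain
assert normalize_secret('dd' + plain).hex() == plain          # "padded" secrets
assert normalize_secret(base64.b64encode(bytes.fromhex(plain)).decode()).hex() == plain
```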
class ConnectionTcpMTProxyAbridged(TcpMTProxy):
"""

View File

@ -1,8 +1,6 @@
import asyncio
import collections
import struct
import datetime
import time
from . import authenticator
from ..extensions.messagepacker import MessagePacker
@ -12,20 +10,17 @@ from .mtprotostate import MTProtoState
from ..tl.tlobject import TLRequest
from .. import helpers, utils
from ..errors import (
BadMessageError, InvalidBufferError, AuthKeyNotFound, SecurityError,
BadMessageError, InvalidBufferError, SecurityError,
TypeNotFoundError, rpc_message_to_error
)
from ..extensions import BinaryReader
from ..tl.core import RpcResult, MessageContainer, GzipPacked
from ..tl.functions.auth import LogOutRequest
from ..tl.functions import PingRequest, DestroySessionRequest, DestroyAuthKeyRequest
from ..tl.types import (
MsgsAck, Pong, BadServerSalt, BadMsgNotification, FutureSalts,
MsgNewDetailedInfo, NewSessionCreated, MsgDetailedInfo, MsgsStateReq,
MsgsStateInfo, MsgsAllInfo, MsgResendReq, upload, DestroySessionOk, DestroySessionNone,
DestroyAuthKeyOk, DestroyAuthKeyNone, DestroyAuthKeyFail
MsgsStateInfo, MsgsAllInfo, MsgResendReq, upload
)
from ..tl import types as _tl
from ..crypto import AuthKey
from ..helpers import retry_range
@ -48,7 +43,7 @@ class MTProtoSender:
def __init__(self, auth_key, *, loggers,
retries=5, delay=1, auto_reconnect=True, connect_timeout=None,
auth_key_callback=None,
updates_queue=None, auto_reconnect_callback=None):
update_callback=None, auto_reconnect_callback=None):
self._connection = None
self._loggers = loggers
self._log = loggers[__name__]
@ -57,10 +52,9 @@ class MTProtoSender:
self._auto_reconnect = auto_reconnect
self._connect_timeout = connect_timeout
self._auth_key_callback = auth_key_callback
self._updates_queue = updates_queue
self._update_callback = update_callback
self._auto_reconnect_callback = auto_reconnect_callback
self._connect_lock = asyncio.Lock()
self._ping = None
# Whether the user has explicitly connected or disconnected.
#
@ -70,7 +64,7 @@ class MTProtoSender:
# pending futures should be cancelled.
self._user_connected = False
self._reconnecting = False
self._disconnected = helpers.get_running_loop().create_future()
self._disconnected = asyncio.get_event_loop().create_future()
self._disconnected.set_result(None)
# We need to join the loops upon disconnection
@ -112,11 +106,6 @@ class MTProtoSender:
MsgsStateReq.CONSTRUCTOR_ID: self._handle_state_forgotten,
MsgResendReq.CONSTRUCTOR_ID: self._handle_state_forgotten,
MsgsAllInfo.CONSTRUCTOR_ID: self._handle_msg_all,
DestroySessionOk.CONSTRUCTOR_ID: self._handle_destroy_session,
DestroySessionNone.CONSTRUCTOR_ID: self._handle_destroy_session,
DestroyAuthKeyOk.CONSTRUCTOR_ID: self._handle_destroy_auth_key,
DestroyAuthKeyNone.CONSTRUCTOR_ID: self._handle_destroy_auth_key,
DestroyAuthKeyFail.CONSTRUCTOR_ID: self._handle_destroy_auth_key,
}
# Public API
@ -228,7 +217,6 @@ class MTProtoSender:
self._log.info('Connecting to %s...', self._connection)
connected = False
for attempt in retry_range(self._retries):
if not connected:
connected = await self._try_connect(attempt)
@ -263,7 +251,7 @@ class MTProtoSender:
await self._disconnect(error=e)
raise e
loop = helpers.get_running_loop()
loop = asyncio.get_event_loop()
self._log.debug('Starting send loop')
self._send_loop_handle = loop.create_task(self._send_loop())
@ -302,7 +290,7 @@ class MTProtoSender:
# notify whenever we change it. This is crucial when we
# switch to different data centers.
if self._auth_key_callback:
await self._auth_key_callback(self.auth_key)
self._auth_key_callback(self.auth_key)
self._log.debug('auth_key generation success!')
return True
@ -349,7 +337,7 @@ class MTProtoSender:
"""
Cleanly disconnects and then reconnects.
"""
self._log.info('Closing current connection to begin reconnect...')
self._log.debug('Closing current connection...')
await self._connection.disconnect()
await helpers._cancel(
@ -369,28 +357,15 @@ class MTProtoSender:
self._state.reset()
retries = self._retries if self._auto_reconnect else 0
attempt = 0
ok = True
# We're already "retrying" to connect, so we don't want to force retries
for attempt in retry_range(retries, force_retry=False):
for attempt in retry_range(retries):
try:
await self._connect()
except (IOError, asyncio.TimeoutError) as e:
last_error = e
self._log.info('Failed reconnection attempt %d with %s',
attempt, e.__class__.__name__)
await asyncio.sleep(self._delay)
except BufferError as e:
# TODO there should probably only be one place to except all these errors
if isinstance(e, InvalidBufferError) and e.code == 404:
self._log.info('Server does not know about the current auth key; the session may need to be recreated')
last_error = AuthKeyNotFound()
ok = False
break
else:
self._log.warning('Invalid buffer %s', e)
await asyncio.sleep(self._delay)
except Exception as e:
last_error = e
self._log.exception('Unexpected exception reconnecting on '
@ -402,17 +377,12 @@ class MTProtoSender:
self._pending_state.clear()
if self._auto_reconnect_callback:
helpers.get_running_loop().create_task(self._auto_reconnect_callback())
asyncio.get_event_loop().create_task(self._auto_reconnect_callback())
break
else:
ok = False
if not ok:
self._log.error('Automatic reconnection failed %d time(s)', attempt)
# There may be no error (e.g. automatic reconnection was turned off).
error = last_error.with_traceback(None) if last_error else None
await self._disconnect(error=error)
await self._disconnect(error=last_error.with_traceback(None))
def _start_reconnect(self, error):
"""Starts a reconnection in the background."""
@ -427,19 +397,7 @@ class MTProtoSender:
# gets stuck.
# TODO It still gets stuck? Investigate where and why.
self._reconnecting = True
helpers.get_running_loop().create_task(self._reconnect(error))
def _keepalive_ping(self, rnd_id):
"""
Send a keep-alive ping. If a pong for the last ping was not received
yet, this means we're probably not connected.
"""
# TODO this is ugly, update loop shouldn't worry about this, sender should
if self._ping is None:
self._ping = rnd_id
self.send(PingRequest(rnd_id))
else:
self._start_reconnect(None)
asyncio.get_event_loop().create_task(self._reconnect(error))
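
`_keepalive_ping` above is a very small watchdog: remember the id of the last ping sent, clear it when the matching pong arrives (see `_handle_pong` further down), and treat a still-pending ping as a dead connection. A toy sketch of that state machine without the networking; the class and callbacks are invented for illustration:

```python
class PingWatchdog:
    """Track one outstanding ping; a second tick without a pong means reconnect."""

    def __init__(self, send, reconnect):
        self._send = send
        self._reconnect = reconnect
        self._pending = None            # id of the unanswered ping, if any

    def tick(self, rnd_id):
        if self._pending is None:
            self._pending = rnd_id
            self._send(rnd_id)          # normal case: fire a new ping
        else:
            self._reconnect()           # last ping never got its pong

    def on_pong(self, ping_id):
        if self._pending == ping_id:
            self._pending = None

sent, reconnects = [], []
dog = PingWatchdog(sent.append, lambda: reconnects.append(True))
dog.tick(1); dog.on_pong(1)    # healthy round trip
dog.tick(2); dog.tick(3)       # no pong for 2, so the next tick reconnects
assert sent == [1, 2] and reconnects == [True]
```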
# Loops
@ -470,12 +428,13 @@ class MTProtoSender:
len(batch), len(data))
data = self._state.encrypt_message_data(data)
try:
await self._connection.send(data)
except IOError as e:
self._log.info('Connection closed while sending data')
self._start_reconnect(e)
return
# Whether sending succeeds or not, the popped requests are now
# pending because they're removed from the queue. If a reconnect
# occurs, they will be removed from pending state and re-enqueued
# so even if the network fails they won't be lost. If they were
# never re-enqueued, the future waiting for a response "locks".
for state in batch:
if not isinstance(state, list):
if isinstance(state.request, TLRequest):
@ -485,13 +444,6 @@ class MTProtoSender:
if isinstance(s.request, TLRequest):
self._pending_state[s.msg_id] = s
try:
await self._connection.send(data)
except IOError as e:
self._log.info('Connection closed while sending data')
self._start_reconnect(e)
return
self._log.debug('Encrypted messages put in a queue to be sent')
async def _recv_loop(self):
@ -505,29 +457,13 @@ class MTProtoSender:
self._log.debug('Receiving items from the network...')
try:
body = await self._connection.recv()
except asyncio.CancelledError:
raise # bypass except Exception
except (IOError, asyncio.IncompleteReadError) as e:
self._log.info('Connection closed while receiving data: %s', e)
self._start_reconnect(e)
return
except InvalidBufferError as e:
if e.code == 429:
self._log.warning('Server indicated flood error at transport level: %s', e)
await self._disconnect(error=e)
else:
self._log.exception('Server sent invalid buffer')
self._start_reconnect(e)
return
except Exception as e:
self._log.exception('Unhandled error while receiving data')
except IOError as e:
self._log.info('Connection closed while receiving data')
self._start_reconnect(e)
return
try:
message = self._state.decrypt_message_data(body)
if message is None:
continue # this message is to be ignored
except TypeNotFoundError as e:
# Received object which we don't know how to deserialize
self._log.info('Type %08x not found, remaining data %r',
@ -541,14 +477,18 @@ class MTProtoSender:
continue
except BufferError as e:
if isinstance(e, InvalidBufferError) and e.code == 404:
self._log.info('Server does not know about the current auth key; the session may need to be recreated')
await self._disconnect(error=AuthKeyNotFound())
self._log.info('Broken authorization key; resetting')
else:
self._log.warning('Invalid buffer %s', e)
self._start_reconnect(e)
self.auth_key.key = None
if self._auth_key_callback:
self._auth_key_callback(None)
self._start_reconnect(e)
return
except Exception as e:
self._log.exception('Unhandled error while decrypting data')
self._log.exception('Unhandled error while receiving data')
self._start_reconnect(e)
return
@ -612,20 +552,12 @@ class MTProtoSender:
# However receiving a File() with empty bytes is "common".
# See #658, #759 and #958. They seem to happen in a container
# which contain the real response right after.
#
# But, it might also happen that we get an *error* for no parent request.
# If that's the case attempting to read from body which is None would fail with:
# "BufferError: No more data left to read (need 4, got 0: b''); last read None".
# This seems to be particularly common for "RpcError(error_code=-500, error_message='No workers running')".
if rpc_result.error:
self._log.info('Received error without parent request: %s', rpc_result.error)
else:
try:
with BinaryReader(rpc_result.body) as reader:
if not isinstance(reader.tgread_object(), upload.File):
raise ValueError('Not an upload.File')
except (TypeNotFoundError, ValueError):
self._log.info('Received response without parent request: %s', rpc_result.body)
try:
with BinaryReader(rpc_result.body) as reader:
if not isinstance(reader.tgread_object(), upload.File):
raise ValueError('Not an upload.File')
except (TypeNotFoundError, ValueError):
self._log.info('Received response without parent request: %s', rpc_result.body)
return
if rpc_result.error:
@ -636,17 +568,11 @@ class MTProtoSender:
if not state.future.cancelled():
state.future.set_exception(error)
else:
try:
with BinaryReader(rpc_result.body) as reader:
result = state.request.read_result(reader)
except Exception as e:
# e.g. TypeNotFoundError, should be propagated to caller
if not state.future.cancelled():
state.future.set_exception(e)
else:
self._store_own_updates(result)
if not state.future.cancelled():
state.future.set_result(result)
with BinaryReader(rpc_result.body) as reader:
result = state.request.read_result(reader)
if not state.future.cancelled():
state.future.set_result(result)
async def _handle_container(self, message):
"""
@ -673,54 +599,12 @@ class MTProtoSender:
try:
assert message.obj.SUBCLASS_OF_ID == 0x8af52aac # crc32(b'Updates')
except AssertionError:
self._log.warning(
'Note: %s is not an update, not dispatching it %s',
message.obj.__class__.__name__,
message.obj
)
self._log.warning('Note: %s is not an update, not dispatching it %s', message.obj)
return
self._log.debug('Handling update %s', message.obj.__class__.__name__)
self._updates_queue.put_nowait(message.obj)
def _store_own_updates(self, obj, *, _update_ids=frozenset((
_tl.UpdateShortMessage.CONSTRUCTOR_ID,
_tl.UpdateShortChatMessage.CONSTRUCTOR_ID,
_tl.UpdateShort.CONSTRUCTOR_ID,
_tl.UpdatesCombined.CONSTRUCTOR_ID,
_tl.Updates.CONSTRUCTOR_ID,
_tl.UpdateShortSentMessage.CONSTRUCTOR_ID,
)), _update_like_ids=frozenset((
_tl.messages.AffectedHistory.CONSTRUCTOR_ID,
_tl.messages.AffectedMessages.CONSTRUCTOR_ID,
_tl.messages.AffectedFoundMessages.CONSTRUCTOR_ID,
))):
try:
if obj.CONSTRUCTOR_ID in _update_ids:
obj._self_outgoing = True # flag to only process, but not dispatch these
self._updates_queue.put_nowait(obj)
elif obj.CONSTRUCTOR_ID in _update_like_ids:
# Ugly "hack" (?) - otherwise bots reliably detect gaps when deleting messages.
#
# Note: the `date` being `None` is used to check for `updatesTooLong`, so epoch
# is used instead. It is still not read, because `updateShort` has no `seq`.
#
# Some requests, such as `readHistory`, also return these types. But the `pts_count`
# seems to be zero, so while this will produce some bogus `updateDeleteMessages`,
# it's still one of the "cleaner" approaches to handling the new `pts`.
# `updateDeleteMessages` is probably the "least-invasive" update that can be used.
upd = _tl.UpdateShort(
_tl.UpdateDeleteMessages([], obj.pts, obj.pts_count),
datetime.datetime(*time.gmtime(0)[:6]).replace(tzinfo=datetime.timezone.utc)
)
upd._self_outgoing = True
self._updates_queue.put_nowait(upd)
elif obj.CONSTRUCTOR_ID == _tl.messages.InvitedUsers.CONSTRUCTOR_ID:
obj.updates._self_outgoing = True
self._updates_queue.put_nowait(obj.updates)
except AttributeError:
pass
if self._update_callback:
self._update_callback(message.obj)
async def _handle_pong(self, message):
"""
@ -731,9 +615,6 @@ class MTProtoSender:
"""
pong = message.obj
self._log.debug('Handling pong for message %d', pong.msg_id)
if self._ping == pong.ping_id:
self._ping = None
state = self._pending_state.pop(pong.msg_id, None)
if state:
state.future.set_result(pong)
@ -846,8 +727,7 @@ class MTProtoSender:
state = self._pending_state.get(msg_id)
if state and isinstance(state.request, LogOutRequest):
del self._pending_state[msg_id]
if not state.future.cancelled():
state.future.set_result(True)
state.future.set_result(True)
async def _handle_future_salts(self, message):
"""
@ -877,42 +757,3 @@ class MTProtoSender:
"""
Handles :tl:`MsgsAllInfo` by doing nothing (yet).
"""
async def _handle_destroy_session(self, message):
"""
Handles both :tl:`DestroySessionOk` and :tl:`DestroySessionNone`.
It behaves pretty much like handling an RPC result.
"""
for msg_id, state in self._pending_state.items():
if isinstance(state.request, DestroySessionRequest)\
and state.request.session_id == message.obj.session_id:
break
else:
return
del self._pending_state[msg_id]
if not state.future.cancelled():
state.future.set_result(message.obj)
async def _handle_destroy_auth_key(self, message):
"""
Handles :tl:`DestroyAuthKeyFail`, :tl:`DestroyAuthKeyNone`, and :tl:`DestroyAuthKeyOk`.
:tl:`DestroyAuthKey` is not intended for users to use, but they still
might, and the response won't come in `rpc_result`, so that's worked
around here.
"""
self._log.debug('Handling destroy auth key %s', message.obj)
for msg_id, state in list(self._pending_state.items()):
if isinstance(state.request, DestroyAuthKeyRequest):
del self._pending_state[msg_id]
if not state.future.cancelled():
state.future.set_result(message.obj)
# If the auth key has been destroyed, that pretty much means the
# library can't continue as our auth key will no longer be found
# on the server.
# Even if the library didn't disconnect, the server would (and then
# the library would reconnect and learn about auth key being invalid).
if isinstance(message.obj, DestroyAuthKeyOk):
await self._disconnect(error=AuthKeyNotFound())
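
Most handlers in this file follow the same shape: look up the pending request for a message id, then resolve its future with either a result or an exception, skipping futures the caller already cancelled. A condensed sketch of that bookkeeping, with invented names, detached from the real `RequestState`:

```python
import asyncio

class PendingRequests:
    """msg_id -> future bookkeeping, as used when dispatching RPC results."""

    def __init__(self):
        self._pending = {}

    def add(self, msg_id):
        fut = asyncio.get_running_loop().create_future()
        self._pending[msg_id] = fut
        return fut

    def resolve(self, msg_id, result=None, error=None):
        fut = self._pending.pop(msg_id, None)
        if fut is None or fut.cancelled():
            return                      # response for an unknown/cancelled request
        if error is not None:
            fut.set_exception(error)
        else:
            fut.set_result(result)

async def main():
    pending = PendingRequests()
    fut = pending.add(msg_id=1)
    pending.resolve(msg_id=1, result='pong')
    assert await fut == 'pong'

asyncio.run(main())
```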

View File

@ -2,38 +2,13 @@ import os
import struct
import time
from hashlib import sha256
from collections import deque
from ..crypto import AES
from ..errors import SecurityError, InvalidBufferError
from ..extensions import BinaryReader
from ..tl.core import TLMessage
from ..tl.tlobject import TLRequest
from ..tl.functions import InvokeAfterMsgRequest
from ..tl.core.gzippacked import GzipPacked
from ..tl.types import BadServerSalt, BadMsgNotification
# N is not specified in https://core.telegram.org/mtproto/security_guidelines#checking-msg-id, but 500 is reasonable
MAX_RECENT_MSG_IDS = 500
MSG_TOO_NEW_DELTA = 30
MSG_TOO_OLD_DELTA = 300
# Something must be wrong if we ignore too many messages at the same time
MAX_CONSECUTIVE_IGNORED = 10
class _OpaqueRequest(TLRequest):
"""
Wraps a serialized request into a type that can be serialized again.
"""
def __init__(self, data: bytes):
self.data = data
def _bytes(self):
return self.data
class MTProtoState:
@ -66,9 +41,6 @@ class MTProtoState:
self.salt = 0
self.id = self._sequence = self._last_msg_id = None
self._recent_remote_ids = deque(maxlen=MAX_RECENT_MSG_IDS)
self._highest_remote_id = 0
self._ignore_count = 0
self.reset()
def reset(self):
@ -79,9 +51,6 @@ class MTProtoState:
self.id = struct.unpack('q', os.urandom(8))[0]
self._sequence = 0
self._last_msg_id = 0
self._recent_remote_ids.clear()
self._highest_remote_id = 0
self._ignore_count = 0
def update_message_id(self, message):
"""
@ -118,10 +87,8 @@ class MTProtoState:
if after_id is None:
body = GzipPacked.gzip_if_smaller(content_related, data)
else:
# The `RequestState` stores `bytes(request)`, not the request itself.
# `invokeAfterMsg` wants a `TLRequest` though, hence the wrapping.
body = GzipPacked.gzip_if_smaller(content_related,
bytes(InvokeAfterMsgRequest(after_id, _OpaqueRequest(data))))
bytes(InvokeAfterMsgRequest(after_id, data)))
buffer.write(struct.pack('<qii', msg_id, seq_no, len(body)))
buffer.write(body)
@ -152,8 +119,6 @@ class MTProtoState:
"""
Inverse of `encrypt_message_data` for incoming server messages.
"""
now = time.time() # get the time as early as possible, even if other checks make it go unused
if len(body) < 8:
raise InvalidBufferError(body)
@ -176,19 +141,9 @@ class MTProtoState:
reader = BinaryReader(body)
reader.read_long() # remote_salt
if reader.read_long() != self.id:
raise SecurityError('Server replied with a wrong session ID (see FAQ for details)')
raise SecurityError('Server replied with a wrong session ID')
remote_msg_id = reader.read_long()
if remote_msg_id % 2 != 1:
raise SecurityError('Server sent an even msg_id')
# Only perform the (somewhat expensive) check of duplicate if we did receive a lower ID
if remote_msg_id <= self._highest_remote_id and remote_msg_id in self._recent_remote_ids:
self._log.warning('Server resent the older message %d, ignoring', remote_msg_id)
self._count_ignored()
return None
remote_sequence = reader.read_int()
reader.read_int() # msg_len for the inner object, padding ignored
@ -197,45 +152,8 @@ class MTProtoState:
# reader isn't used for anything else after this, it's unnecessary.
obj = reader.tgread_object()
# "Certain client-to-server service messages containing data sent by the client to the
# server (for example, msg_id of a recent client query) may, nonetheless, be processed
# on the client even if the time appears to be "incorrect". This is especially true of
# messages to change server_salt and notifications about invalid time on the client."
#
# This means we skip the time check for certain types of messages.
if obj.CONSTRUCTOR_ID in (BadServerSalt.CONSTRUCTOR_ID, BadMsgNotification.CONSTRUCTOR_ID):
if not self._highest_remote_id and not self.time_offset:
# If the first message we receive is a bad notification, take this opportunity
# to adjust the time offset. Assume it will remain stable afterwards. Updating
# the offset unconditionally would make the next checks pointless.
self.update_time_offset(remote_msg_id)
else:
remote_msg_time = remote_msg_id >> 32
time_delta = (now + self.time_offset) - remote_msg_time
if time_delta > MSG_TOO_OLD_DELTA:
self._log.warning('Server sent a very old message with ID %d, ignoring (see FAQ for details)', remote_msg_id)
self._count_ignored()
return None
if -time_delta > MSG_TOO_NEW_DELTA:
self._log.warning('Server sent a very new message with ID %d, ignoring (see FAQ for details)', remote_msg_id)
self._count_ignored()
return None
self._recent_remote_ids.append(remote_msg_id)
self._highest_remote_id = remote_msg_id
self._ignore_count = 0
return TLMessage(remote_msg_id, remote_sequence, obj)
def _count_ignored(self):
# It's possible that ignoring a message "bricks" the connection,
# but this should not happen unless there's something else wrong.
self._ignore_count += 1
if self._ignore_count >= MAX_CONSECUTIVE_IGNORED:
raise SecurityError('Too many messages had to be ignored consecutively')
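
The deduplication added on the v1 side boils down to: remember the last few hundred server message ids, and only pay the membership-check cost when an id arrives out of order. A small sketch of that check in isolation, with an invented class name:

```python
from collections import deque

MAX_RECENT_MSG_IDS = 500

class RecentIds:
    """Track recently seen server msg_ids to drop re-sent messages."""

    def __init__(self):
        self._recent = deque(maxlen=MAX_RECENT_MSG_IDS)
        self._highest = 0

    def is_duplicate(self, msg_id: int) -> bool:
        # Only ids at or below the highest one seen can possibly be repeats,
        # so the (linear) deque lookup is skipped in the common case.
        if msg_id <= self._highest and msg_id in self._recent:
            return True
        self._recent.append(msg_id)
        self._highest = max(self._highest, msg_id)
        return False

seen = RecentIds()
assert not seen.is_duplicate(101)
assert not seen.is_duplicate(105)
assert seen.is_duplicate(101)       # the server re-sent an older message
assert not seen.is_duplicate(103)   # older but never seen before: accepted
```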
def _get_new_msg_id(self):
"""
Generates a new unique message ID based on the current

View File

@ -98,11 +98,6 @@ class Session(ABC):
raise NotImplementedError
@abstractmethod
def get_update_states(self):
"""
Returns an iterable over all known pairs of ``(entity ID, update state)``.
"""
def close(self):
"""
Called on client disconnection. Should be used to

Some files were not shown because too many files have changed in this diff.