https://github.com/Infinidat/infi.clickhouse_orm.git

Update docs

commit e791923493
parent ae3011e88f
@@ -8,6 +8,9 @@ Unreleased
 - Distributed engine support (tsionyx)
 - `_fields` and `_writable_fields` are OrderedDicts - note that this might break backwards compatibility (tsionyx)
 - Improve error messages returned from the database with the `ServerError` class (tsionyx)
+- Added support of custom partitioning (M1hacka)
+- Added attribute `server_version` to Database class (M1hacka)
+- Changed `Engine.create_table_sql()`, `Engine.drop_table_sql()`, `Model.create_table_sql()`, `Model.drop_table_sql()` parameter to db from db_name (M1hacka)
 
 v0.9.8
 ------
@@ -24,18 +24,6 @@ created on the ClickHouse server if it does not already exist.
 - `autocreate`: automatically create the database if does not exist (unless in readonly mode).
 
 
-#### server_timezone
-
-
-Contains [pytz](http://pytz.sourceforge.net/) timezone used on database server
-
-
-#### server_version
-
-
-Contains a version tuple of database server, for example (1, 1, 54310)
-
-
 #### count(model_class, conditions=None)
 
 
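The `server_timezone` and `server_version` attributes touched by this hunk are plain attributes of the `Database` class (the changelog above notes that `server_version` was added in this release). A minimal, hedged sketch of how they might be read; the database name `demo_db` is only an assumption for illustration:

```python
from infi.clickhouse_orm.database import Database

db = Database('demo_db')  # hypothetical database name

# server_version is documented as a tuple, e.g. (1, 1, 54310)
if db.server_version >= (1, 1, 54310):
    print('Custom partitioning is available on this server')

# server_timezone is documented as a pytz timezone object
print('Server timezone:', db.server_timezone)
```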
@@ -353,10 +341,10 @@ invalid values will cause a `ValueError` to be raised.
 Unrecognized field names will cause an `AttributeError`.
 
 
-#### DistributedModel.create_table_sql(db_name)
+#### DistributedModel.create_table_sql(db)
 
 
-#### DistributedModel.drop_table_sql(db_name)
+#### DistributedModel.drop_table_sql(db)
 
 
 Returns the SQL command for deleting this model's table.
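Together with the changelog entry above ("parameter to db from db_name"), this rename suggests these methods now take a `Database` instance rather than a database name string. A hedged sketch using a plain `Model` subclass (the `Person` fields and the `demo_db` name are assumptions for illustration):

```python
from infi.clickhouse_orm.database import Database
from infi.clickhouse_orm import models, fields, engines

class Person(models.Model):
    first_name = fields.StringField()
    birthday = fields.DateField()
    engine = engines.MergeTree('birthday', ('first_name', 'birthday'))

db = Database('demo_db')             # hypothetical database name
print(Person.create_table_sql(db))   # pass the Database instance, not its name
print(Person.drop_table_sql(db))
```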
@@ -692,15 +680,13 @@ During a read, the table indexes on remote servers are used, if there are any.
 See full documentation here
 https://clickhouse.yandex/docs/en/table_engines/distributed.html
 
-#### Distributed(cluster, table=None, db_name=None, sharding_key=None)
+#### Distributed(cluster, table=None, sharding_key=None)
 
 
 :param cluster: what cluster to access data from
 :param table: underlying table that actually stores data.
 If you are not specifying any table here, ensure that it can be inferred
 from your model's superclass (see models.DistributedModel.fix_engine_table)
-:param db_name: which database to access data from
-By default it is 'currentDatabase()'
 :param sharding_key: how to distribute data among shards when inserting
 straightly into Distributed table, optional
 
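A hedged sketch of how the updated `Distributed(cluster, table=None, sharding_key=None)` signature might be used together with `models.DistributedModel`; the cluster name `my_cluster` and the field layout are assumptions for illustration:

```python
from infi.clickhouse_orm import models, fields, engines

class Event(models.Model):
    date = fields.DateField()
    name = fields.StringField()
    engine = engines.MergeTree('date', ('date', 'name'))

class DistributedEvent(Event, models.DistributedModel):
    # table is omitted, so it is inferred from the Event superclass
    # (see models.DistributedModel.fix_engine_table); db_name is no longer a parameter
    engine = engines.Distributed('my_cluster', sharding_key='rand()')
```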
@@ -709,21 +695,21 @@ straightly into Distributed table, optional
 
 Extends MergeTree
 
-#### CollapsingMergeTree(date_col, key_cols, sign_col, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
+#### CollapsingMergeTree(date_col, order_by, sign_col, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
 
 
 ### SummingMergeTree
 
 Extends MergeTree
 
-#### SummingMergeTree(date_col, key_cols, summing_cols=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
+#### SummingMergeTree(date_col, order_by, summing_cols=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
 
 
 ### ReplacingMergeTree
 
 Extends MergeTree
 
-#### ReplacingMergeTree(date_col, key_cols, ver_col=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
+#### ReplacingMergeTree(date_col, order_by, ver_col=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
 
 
 infi.clickhouse_orm.query
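The only change in these signatures is the rename of the second parameter from `key_cols` to `order_by`; positional usage should be unaffected. A hedged sketch of engine declarations using the new keyword name (the column names are assumptions for illustration):

```python
from infi.clickhouse_orm import engines

# Second argument is now named order_by instead of key_cols
collapsing = engines.CollapsingMergeTree('date', order_by=('user_id', 'date'), sign_col='sign')
summing    = engines.SummingMergeTree('date', order_by=('user_id', 'date'), summing_cols=('amount',))
replacing  = engines.ReplacingMergeTree('date', order_by=('user_id', 'date'), ver_col='version')
```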
@@ -58,13 +58,14 @@ For a `ReplacingMergeTree` you can optionally specify the version column:
 ### Custom partitioning
 
 ClickHouse supports [custom partitioning](https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/) expressions since version 1.1.54310
-You can use custom partitioning with any MergeTree family engine.
-To set custom partitioning:
-* skip date_col (first) constructor parameter or fill it with None value
-* add name to order_by (second) constructor parameter
-* add partition_key parameter. It should be a tuple of expressions, by which partition are built.
-
-Standard partitioning by date column can be added using toYYYYMM(date) function.
+
+You can use custom partitioning with any `MergeTree` family engine.
+To set custom partitioning:
+
+* Instead of specifying the `date_col` (first) constructor parameter, pass a tuple of field names or expressions in the `order_by` (second) constructor parameter.
+* Add `partition_key` parameter. It should be a tuple of expressions, by which partitions are built.
+
+Standard monthly partitioning by date column can be specified using the `toYYYYMM(date)` function.
 
 Example:
 
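A hedged sketch of the custom partitioning described in this hunk, assuming a hypothetical `Metric` model: `date_col` is skipped, `order_by` is passed as a tuple, and `partition_key` holds the partitioning expressions (here the standard monthly `toYYYYMM(date)`):

```python
from infi.clickhouse_orm import models, fields, engines

class Metric(models.Model):
    date = fields.DateField()
    name = fields.StringField()
    value = fields.Float32Field()

    # date_col is omitted; ordering and partitioning are given explicitly
    engine = engines.MergeTree(
        order_by=('name', 'date'),
        partition_key=('toYYYYMM(date)',)
    )
```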
@@ -36,6 +36,7 @@
 * [Table Engines](table_engines.md#table-engines)
 * [Simple Engines](table_engines.md#simple-engines)
 * [Engines in the MergeTree Family](table_engines.md#engines-in-the-mergetree-family)
+* [Custom partitioning](table_engines.md#custom-partitioning)
 * [Data Replication](table_engines.md#data-replication)
 * [Buffer Engine](table_engines.md#buffer-engine)
 * [Merge Engine](table_engines.md#merge-engine)