A Python library for working with the ClickHouse database (https://clickhouse.yandex/)

Introduction

This project is a simple ORM for working with the ClickHouse database. It allows you to define model classes whose instances can be written to the database and read from it.

Let's jump right in with a simple example of monitoring CPU usage. First we need to define the model class, connect to the database and create a table for the model:

from infi.clickhouse_orm import Database, Model, DateTimeField, UInt16Field, Float32Field, Memory, F

class CPUStats(Model):

    timestamp = DateTimeField()
    cpu_id = UInt16Field()
    cpu_percent = Float32Field()

    engine = Memory()

db = Database('demo')
db.create_table(CPUStats)
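
If you later want to start over from scratch, the table and the database can be removed as well. This is a minimal sketch, assuming drop_table and drop_database behave as their names suggest; check the documentation before relying on them:

# Optional cleanup (hedged sketch): drop_table and drop_database are assumed
# to remove the table / database if they exist.
db.drop_table(CPUStats)   # remove the CPUStats table from the 'demo' database
db.drop_database()        # remove the 'demo' database itself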

Now we can collect usage statistics per CPU, and write them to the database:

import psutil, time, datetime

psutil.cpu_percent(percpu=True) # first sample should be discarded
while True:
    time.sleep(1)
    stats = psutil.cpu_percent(percpu=True)
    timestamp = datetime.datetime.now()
    db.insert([
        CPUStats(timestamp=timestamp, cpu_id=cpu_id, cpu_percent=cpu_percent)
        for cpu_id, cpu_percent in enumerate(stats)
    ])

Querying the table is easy, using either the query builder or raw SQL:

# Calculate what percentage of the time CPU 1 was over 95% busy
queryset = CPUStats.objects_in(db)
total = queryset.filter(CPUStats.cpu_id == 1).count()
busy = queryset.filter(CPUStats.cpu_id == 1, CPUStats.cpu_percent > 95).count()
print('CPU 1 was busy {:.2f}% of the time'.format(busy * 100.0 / total))

# Calculate the average usage per CPU
for row in queryset.aggregate(CPUStats.cpu_id, average=F.avg(CPUStats.cpu_percent)):
    print('CPU {row.cpu_id}: {row.average:.2f}%'.format(row=row))
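
The same aggregation can also be written as raw SQL. The sketch below assumes that db.select accepts a SQL string and yields each result row as an object with attributes named after the selected columns, that '$db' is substituted with the database name, and that the table name defaults to the lowercased model name (cpustats); consult the documentation for the exact behaviour:

# Average usage per CPU, this time with raw SQL (hedged sketch, see note above).
# '$db' is assumed to be replaced with the database name by db.select().
sql = '''
    SELECT cpu_id, avg(cpu_percent) AS average
    FROM $db.cpustats
    GROUP BY cpu_id
'''
for row in db.select(sql):
    print('CPU {row.cpu_id}: {row.average:.2f}%'.format(row=row))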

This and other examples can be found in the examples folder.

To learn more, please visit the documentation.