Generate a class reference document

Itai Shirav 2017-05-03 08:36:47 +03:00
parent f1ab9b6179
commit 4625a7e00f
8 changed files with 813 additions and 30 deletions

docs/ref.md (new file, 526 lines)

@@ -0,0 +1,526 @@
Class Reference
===============
infi.clickhouse_orm.database
----------------------------
### Database
#### Database(db_name, db_url="http://localhost:8123/", username=None, password=None, readonly=False)
Initializes a database instance. Unless it's readonly, the database will be
created on the ClickHouse server if it does not already exist.
- `db_name`: name of the database to connect to.
- `db_url`: URL of the ClickHouse server.
- `username`: optional connection credentials.
- `password`: optional connection credentials.
- `readonly`: use a read-only connection.
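
For example, a minimal sketch that defines a model and connects to a local server (the `Person` model, its fields and the database name are illustrative, not part of the library):

```python
from infi.clickhouse_orm.database import Database
from infi.clickhouse_orm.models import Model
from infi.clickhouse_orm import fields, engines

class Person(Model):
    first_name = fields.StringField()
    last_name = fields.StringField()
    birthday = fields.DateField()
    height = fields.Float32Field()
    engine = engines.MergeTree('birthday', ('first_name', 'last_name', 'birthday'))

# Creates the database on the server if it does not already exist
db = Database('my_test_db')
db.create_table(Person)
```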
#### count(model_class, conditions=None)
Counts the number of records in the model's table.
- `model_class`: the model to count.
- `conditions`: optional SQL conditions (contents of the WHERE clause).
#### create_database()
Creates the database on the ClickHouse server if it does not already exist.
#### create_table(model_class)
Creates a table for the given model class, if it does not exist already.
#### drop_database()
Deletes the database on the ClickHouse server.
#### drop_table(model_class)
Drops the database table of the given model class, if it exists.
#### insert(model_instances, batch_size=1000)
Insert records into the database.
- `model_instances`: any iterable containing instances of a single model class.
- `batch_size`: number of records to send per chunk (use a lower number if your records are very large).
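
Continuing the `Person` sketch from above, an insert might look like this (the field values, and `generate_people()` in the last line, are purely illustrative):

```python
db.insert([
    Person(first_name='Alice', last_name='Adams', birthday='1985-01-02', height=1.68),
    Person(first_name='Bob', last_name='Brown', birthday='1990-03-04', height=1.82),
])

# For very wide rows, a smaller batch size keeps each HTTP request manageable.
db.insert(generate_people(), batch_size=100)  # generate_people() is a hypothetical generator
```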
#### migrate(migrations_package_name, up_to=9999)
Executes schema migrations.
- `migrations_package_name`: fully qualified name of the Python package
containing the migrations.
- `up_to`: number of the last migration to apply.
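
A short sketch of applying migrations (the package name is illustrative):

```python
# Apply all migrations defined in the my_app.migrations package, in order
db.migrate('my_app.migrations')

# Or stop after a specific migration number
db.migrate('my_app.migrations', up_to=3)
```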
#### paginate(model_class, order_by, page_num=1, page_size=100, conditions=None, settings=None)
Selects records and returns a single page of model instances.
- `model_class`: the model class matching the query's table,
or `None` for getting back instances of an ad-hoc model.
- `order_by`: columns to use for sorting the query (contents of the ORDER BY clause).
- `page_num`: the page number (1-based), or -1 to get the last page.
- `page_size`: number of records to return per page.
- `conditions`: optional SQL conditions (contents of the WHERE clause).
- `settings`: query settings to send as HTTP GET parameters
The result is a namedtuple containing `objects` (list), `number_of_objects`,
`pages_total`, `number` (of the current page), and `page_size`.
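
For example, using the `Person` model sketched earlier (column names are illustrative):

```python
page = db.paginate(Person, order_by='last_name, first_name', page_num=1, page_size=50)
print('page %d of %d (%d objects in total)' % (page.number, page.pages_total, page.number_of_objects))
for person in page.objects:
    print(person.first_name)
```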
#### raw(query, settings=None, stream=False)
Performs a query and returns its output as text.
- `query`: the SQL query to execute.
- `settings`: query settings to send as HTTP GET parameters
- `stream`: if true, the HTTP response from ClickHouse will be streamed.
#### select(query, model_class=None, settings=None)
Performs a query and returns a generator of model instances.
- `query`: the SQL query to execute.
- `model_class`: the model class matching the query's table,
or `None` for getting back instances of an ad-hoc model.
- `settings`: query settings to send as HTTP GET parameters
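
For example, when a `model_class` is given, `$table` in the query is replaced with that model's table name (the condition below is illustrative):

```python
# Model-based query; $table is substituted with Person's table name
for person in db.select("SELECT * FROM $table WHERE height > 1.80 ORDER BY height DESC",
                        model_class=Person):
    print(person.first_name)

# Ad-hoc query returning plain text via raw()
print(db.raw('SHOW TABLES'))
```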
### DatabaseException
Extends Exception
Raised when a database operation fails.
infi.clickhouse_orm.models
--------------------------
### Model
A base class for ORM models.
#### Model(**kwargs)
Creates a model instance, using keyword arguments as field values.
Since values are immediately converted to their Pythonic type,
invalid values will cause a `ValueError` to be raised.
Unrecognized field names will cause an `AttributeError`.
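
For instance, with the `Person` model sketched earlier:

```python
dan = Person(first_name='Dan', birthday='1995-11-23', height=1.80)
dan.last_name                   # '' -- fields that were not passed keep their default values

Person(birthday='not a date')   # raises ValueError
Person(middle_name='Q')         # raises AttributeError
```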
#### Model.create_table_sql(db_name)
Returns the SQL command for creating a table for this model.
#### Model.drop_table_sql(db_name)
Returns the SQL command for deleting this model's table.
#### Model.from_tsv(line, field_names=None, timezone_in_use=UTC, database=None)
Create a model instance from a tab-separated line. The line may or may not include a newline.
The `field_names` list must match the fields defined in the model, but does not have to include all of them.
If omitted, it is assumed to be the names of all fields in the model, in order of definition.
- `line`: the TSV-formatted data.
- `field_names`: names of the model fields in the data.
- `timezone_in_use`: the timezone to use when parsing dates and datetimes.
- `database`: if given, sets the database that this instance belongs to.
#### get_database()
Gets the `Database` that this model instance belongs to.
Returns `None` unless the instance was read from the database or written to it.
#### get_field(name)
Gets a `Field` instance given its name, or `None` if not found.
#### Model.objects_in(database)
Returns a `QuerySet` for selecting instances of this model class.
#### set_database(db)
Sets the `Database` that this model instance belongs to.
This is done automatically when the instance is read from the database or written to it.
#### Model.table_name()
Returns the model's database table name. By default this is the
class name converted to lowercase. Override this if you want to use
a different table name.
#### to_dict(include_readonly=True, field_names=None)
Returns the instance's column values as a dict.
- `include_readonly`: if false, returns only fields that can be inserted into the database.
- `field_names`: an iterable of field names to return (optional)
#### to_tsv(include_readonly=True)
Returns the instance's column values as a tab-separated line. A newline is not included.
- `include_readonly`: if false, returns only fields that can be inserted into the database.
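
For example (the exact representation of the values depends on the field types; the output shown in comments is approximate):

```python
dan.to_dict()
# {'first_name': 'Dan', 'last_name': '', 'birthday': datetime.date(1995, 11, 23), 'height': 1.8}

dan.to_dict(field_names=('first_name', 'height'))
# {'first_name': 'Dan', 'height': 1.8}

dan.to_tsv()
# 'Dan\t\t1995-11-23\t1.8'
```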
### BufferModel
Extends Model
#### BufferModel(**kwargs)
Creates a model instance, using keyword arguments as field values.
Since values are immediately converted to their Pythonic type,
invalid values will cause a `ValueError` to be raised.
Unrecognized field names will cause an `AttributeError`.
#### BufferModel.create_table_sql(db_name)
Returns the SQL command for creating a table for this model.
#### BufferModel.drop_table_sql(db_name)
Returns the SQL command for deleting this model's table.
#### BufferModel.from_tsv(line, field_names=None, timezone_in_use=UTC, database=None)
Create a model instance from a tab-separated line. The line may or may not include a newline.
The `field_names` list must match the fields defined in the model, but does not have to include all of them.
If omitted, it is assumed to be the names of all fields in the model, in order of definition.
- `line`: the TSV-formatted data.
- `field_names`: names of the model fields in the data.
- `timezone_in_use`: the timezone to use when parsing dates and datetimes.
- `database`: if given, sets the database that this instance belongs to.
#### get_database()
Gets the `Database` that this model instance belongs to.
Returns `None` unless the instance was read from the database or written to it.
#### get_field(name)
Gets a `Field` instance given its name, or `None` if not found.
#### BufferModel.objects_in(database)
Returns a `QuerySet` for selecting instances of this model class.
#### set_database(db)
Sets the `Database` that this model instance belongs to.
This is done automatically when the instance is read from the database or written to it.
#### BufferModel.table_name()
Returns the model's database table name. By default this is the
class name converted to lowercase. Override this if you want to use
a different table name.
#### to_dict(include_readonly=True, field_names=None)
Returns the instance's column values as a dict.
- `include_readonly`: if false, returns only fields that can be inserted into the database.
- `field_names`: an iterable of field names to return (optional)
#### to_tsv(include_readonly=True)
Returns the instance's column values as a tab-separated line. A newline is not included.
- `include_readonly`: if false, returns only fields that can be inserted into the database.
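
A sketch of the usual pattern: the buffer model subclasses both `BufferModel` and the model it buffers, and its engine points at the main model (names and values are illustrative):

```python
from infi.clickhouse_orm.models import BufferModel

class PersonBuffer(BufferModel, Person):
    engine = engines.Buffer(Person)

db.create_table(PersonBuffer)
db.insert([PersonBuffer(first_name='Clara', birthday='1983-07-18', height=1.65)])
```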
infi.clickhouse_orm.fields
--------------------------
### Field
Abstract base class for all field types.
#### Field(default=None, alias=None, materialized=None)
### StringField
Extends Field
#### StringField(default=None, alias=None, materialized=None)
### DateField
Extends Field
#### DateField(default=None, alias=None, materialized=None)
### DateTimeField
Extends Field
#### DateTimeField(default=None, alias=None, materialized=None)
### BaseIntField
Extends Field
Abstract base class for all integer-type fields.
#### BaseIntField(default=None, alias=None, materialized=None)
### BaseFloatField
Extends Field
Abstract base class for all float-type fields.
#### BaseFloatField(default=None, alias=None, materialized=None)
### BaseEnumField
Extends Field
Abstract base class for all enum-type fields.
#### BaseEnumField(enum_cls, default=None, alias=None, materialized=None)
### ArrayField
Extends Field
#### ArrayField(inner_field, default=None, alias=None, materialized=None)
### FixedStringField
Extends StringField
#### FixedStringField(length, default=None, alias=None, materialized=None)
### UInt8Field
Extends BaseIntField
#### UInt8Field(default=None, alias=None, materialized=None)
### UInt16Field
Extends BaseIntField
#### UInt16Field(default=None, alias=None, materialized=None)
### UInt32Field
Extends BaseIntField
#### UInt32Field(default=None, alias=None, materialized=None)
### UInt64Field
Extends BaseIntField
#### UInt64Field(default=None, alias=None, materialized=None)
### Int8Field
Extends BaseIntField
#### Int8Field(default=None, alias=None, materialized=None)
### Int16Field
Extends BaseIntField
#### Int16Field(default=None, alias=None, materialized=None)
### Int32Field
Extends BaseIntField
#### Int32Field(default=None, alias=None, materialized=None)
### Int64Field
Extends BaseIntField
#### Int64Field(default=None, alias=None, materialized=None)
### Float32Field
Extends BaseFloatField
#### Float32Field(default=None, alias=None, materialized=None)
### Float64Field
Extends BaseFloatField
#### Float64Field(default=None, alias=None, materialized=None)
### Enum8Field
Extends BaseEnumField
#### Enum8Field(enum_cls, default=None, alias=None, materialized=None)
### Enum16Field
Extends BaseEnumField
#### Enum16Field(enum_cls, default=None, alias=None, materialized=None)
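
A sketch showing some of the less common field types together, reusing the imports from the earlier `Person` example (the `Employee` model is illustrative; on Python 2 the `enum` module is available via the enum34 backport):

```python
from enum import Enum

class Gender(Enum):
    male = 1
    female = 2

class Employee(Model):
    code = fields.FixedStringField(8)
    gender = fields.Enum8Field(Gender, default=Gender.male)
    tags = fields.ArrayField(fields.StringField())
    hired = fields.DateField()
    engine = engines.MergeTree('hired', ('code',))
```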
infi.clickhouse_orm.engines
---------------------------
### Engine
### TinyLog
Extends Engine
### Log
Extends Engine
### Memory
Extends Engine
### MergeTree
Extends Engine
#### MergeTree(date_col, key_cols, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
### Buffer
Extends Engine
Implements the Buffer engine, which buffers inserts in memory and periodically
flushes them to the table of the main model.
Read more at https://clickhouse.yandex/reference_en.html#Buffer
#### Buffer(main_model, num_layers=16, min_time=10, max_time=100, min_rows=10000, max_rows=1000000, min_bytes=10000000, max_bytes=100000000)
### CollapsingMergeTree
Extends MergeTree
#### CollapsingMergeTree(date_col, key_cols, sign_col, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
### SummingMergeTree
Extends MergeTree
#### SummingMergeTree(date_col, key_cols, summing_cols=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
### ReplacingMergeTree
Extends MergeTree
#### ReplacingMergeTree(date_col, key_cols, ver_col=None, sampling_expr=None, index_granularity=8192, replica_table_path=None, replica_name=None)
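
A few engine declarations as they might appear inside a model class (column names are illustrative; each line would belong to a different model):

```python
engine = engines.MergeTree('date', ('user_id', 'date'))
engine = engines.MergeTree('date', ('user_id', 'date'), sampling_expr='intHash32(user_id)')
engine = engines.CollapsingMergeTree('date', ('user_id', 'date'), 'sign')
engine = engines.SummingMergeTree('date', ('site_id', 'date'), summing_cols=('clicks', 'impressions'))
engine = engines.ReplacingMergeTree('date', ('user_id',), ver_col='updated')
```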
infi.clickhouse_orm.query
-------------------------
### QuerySet
#### QuerySet(model_cls, database)
#### conditions_as_sql()
Return the contents of the queryset's WHERE clause.
#### count()
Returns the number of matching model instances.
#### exclude(**kwargs)
Returns a new QuerySet instance that excludes all rows matching the conditions.
#### filter(**kwargs)
Returns a new QuerySet instance that includes only rows matching the conditions.
#### only(*field_names)
Limit the query to return only the specified field names.
Useful when there are large fields that are not needed,
or for creating a subquery to use with an IN operator.
#### order_by(*field_names)
Returns a new QuerySet instance with the ordering changed.
#### order_by_as_sql()
Return the contents of the queryset's ORDER BY clause.
#### query()
Returns the queryset as SQL.
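
A sketch of typical queryset usage with the `Person` model from earlier (the filter values are illustrative):

```python
qs = Person.objects_in(db).filter(first_name='Alice').exclude(height=1.68).order_by('last_name')
print(qs.count())
print(qs.conditions_as_sql())
for person in qs:
    print(person.first_name)
```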


@@ -42,3 +42,45 @@
* [Contributing](contributing.md#contributing)
* [Class Reference](ref.md#class-reference)
* [infi.clickhouse_orm.database](ref.md#infi.clickhouse_orm.database)
* [Database](ref.md#database)
* [DatabaseException](ref.md#databaseexception)
* [infi.clickhouse_orm.models](ref.md#infi.clickhouse_orm.models)
* [Model](ref.md#model)
* [BufferModel](ref.md#buffermodel)
* [infi.clickhouse_orm.fields](ref.md#infi.clickhouse_orm.fields)
* [Field](ref.md#field)
* [StringField](ref.md#stringfield)
* [DateField](ref.md#datefield)
* [DateTimeField](ref.md#datetimefield)
* [BaseIntField](ref.md#baseintfield)
* [BaseFloatField](ref.md#basefloatfield)
* [BaseEnumField](ref.md#baseenumfield)
* [ArrayField](ref.md#arrayfield)
* [FixedStringField](ref.md#fixedstringfield)
* [UInt8Field](ref.md#uint8field)
* [UInt16Field](ref.md#uint16field)
* [UInt32Field](ref.md#uint32field)
* [UInt64Field](ref.md#uint64field)
* [Int8Field](ref.md#int8field)
* [Int16Field](ref.md#int16field)
* [Int32Field](ref.md#int32field)
* [Int64Field](ref.md#int64field)
* [Float32Field](ref.md#float32field)
* [Float64Field](ref.md#float64field)
* [Enum8Field](ref.md#enum8field)
* [Enum16Field](ref.md#enum16field)
* [infi.clickhouse_orm.engines](ref.md#infi.clickhouse_orm.engines)
* [Engine](ref.md#engine)
* [TinyLog](ref.md#tinylog)
* [Log](ref.md#log)
* [Memory](ref.md#memory)
* [MergeTree](ref.md#mergetree)
* [Buffer](ref.md#buffer)
* [CollapsingMergeTree](ref.md#collapsingmergetree)
* [SummingMergeTree](ref.md#summingmergetree)
* [ReplacingMergeTree](ref.md#replacingmergetree)
* [infi.clickhouse_orm.query](ref.md#infi.clickhouse_orm.query)
* [QuerySet](ref.md#queryset)

scripts/generate_ref.py (new file, 131 lines)

@@ -0,0 +1,131 @@
import inspect
from collections import namedtuple

DefaultArgSpec = namedtuple('DefaultArgSpec', 'has_default default_value')


def _get_default_arg(args, defaults, arg_index):
    """ Determines whether an argument has a default value and, if so,
    what that value is.
    :param args: array of arguments, eg: ['first_arg', 'second_arg', 'third_arg']
    :param defaults: array of default values, eg: (42, 'something')
    :param arg_index: index of the argument in the argument array for which
        this function checks whether a default value exists. If a default value
        exists, it is returned. Example argument: 1
    :return: Tuple of whether there is a default or not, and if yes the default
        value, eg: for index 1, i.e. "second_arg", this function returns (True, 42)
    """
    if not defaults:
        return DefaultArgSpec(False, None)
    args_with_no_defaults = len(args) - len(defaults)
    if arg_index < args_with_no_defaults:
        return DefaultArgSpec(False, None)
    else:
        value = defaults[arg_index - args_with_no_defaults]
        if type(value) is str:
            value = '"%s"' % value
        return DefaultArgSpec(True, value)


def get_method_sig(method):
    """ Given a function, returns a string that looks much like the way the
    function signature would be written in Python.
    :param method: a python method
    :return: A string describing the Python method signature,
        eg: "my_method(first_arg, second_arg=42, third_arg='something')"
    """
    # The return value of getargspec is a bit awkward, as the list of arguments
    # and the list of defaults are returned as separate arrays.
    # eg: ArgSpec(args=['first_arg', 'second_arg', 'third_arg'],
    #             varargs=None, keywords=None, defaults=(42, 'something'))
    argspec = inspect.getargspec(method)
    arg_index = 0
    args = []
    # Use the args and defaults arrays returned by getargspec to find out
    # which arguments have defaults
    for arg in argspec.args:
        default_arg = _get_default_arg(argspec.args, argspec.defaults, arg_index)
        if default_arg.has_default:
            args.append("%s=%s" % (arg, default_arg.default_value))
        else:
            args.append(arg)
        arg_index += 1
    if argspec.varargs:
        args.append('*' + argspec.varargs)
    if argspec.keywords:
        args.append('**' + argspec.keywords)
    # Drop the first argument (self or cls)
    return "%s(%s)" % (method.__name__, ", ".join(args[1:]))


def docstring(obj):
    # Print the object's docstring, with leading whitespace stripped from each line
    doc = (obj.__doc__ or '').strip()
    if doc:
        for line in doc.split('\n'):
            print line.strip()
        print


def class_doc(cls, list_methods=True):
    # Print the documentation of a single class and, optionally, of its methods
    bases = ', '.join([b.__name__ for b in cls.__bases__])
    print '###', cls.__name__
    print
    if bases != 'object':
        print 'Extends', bases
        print
    docstring(cls)
    for name, method in inspect.getmembers(cls, inspect.ismethod):
        if name == '__init__':
            # Initializer
            print '####', get_method_sig(method).replace(name, cls.__name__)
        elif name[0] == '_':
            # Private method
            continue
        elif method.__self__ == cls:
            # Class method
            if not list_methods:
                continue
            print '#### %s.%s' % (cls.__name__, get_method_sig(method))
        else:
            # Regular method
            if not list_methods:
                continue
            print '####', get_method_sig(method)
        print
        docstring(method)
        print


def module_doc(classes, list_methods=True):
    # Print a section for a module, given the classes defined in it
    mdl = classes[0].__module__
    print mdl
    print '-' * len(mdl)
    print
    for cls in classes:
        class_doc(cls, list_methods)


def all_subclasses(cls):
    # Recursively collect all subclasses of the given class
    return cls.__subclasses__() + [g for s in cls.__subclasses__() for g in all_subclasses(s)]


if __name__ == '__main__':

    from infi.clickhouse_orm import database
    from infi.clickhouse_orm import fields
    from infi.clickhouse_orm import engines
    from infi.clickhouse_orm import models
    from infi.clickhouse_orm import query

    print 'Class Reference'
    print '==============='
    print
    module_doc([database.Database, database.DatabaseException])
    module_doc([models.Model, models.BufferModel])
    module_doc([fields.Field] + all_subclasses(fields.Field), False)
    module_doc([engines.Engine] + all_subclasses(engines.Engine), False)
    module_doc([query.QuerySet])


@@ -14,3 +14,4 @@ generate_one "table_engines.md"
generate_one "schema_migrations.md"
generate_one "system_models.md"
generate_one "contributing.md"
generate_one "ref.md"


@@ -27,5 +27,5 @@ class HeadersToMarkdownParser(HTMLParser):
self.text += data
-HeadersToMarkdownParser.feed(sys.stdin.read())
+HeadersToMarkdownParser().feed(sys.stdin.read())
print


@@ -16,12 +16,25 @@ Page = namedtuple('Page', 'objects number_of_objects pages_total number page_siz
class DatabaseException(Exception):
'''
Raised when a database operation fails.
'''
pass
class Database(object):
def __init__(self, db_name, db_url='http://localhost:8123/', username=None, password=None, readonly=False):
'''
Initializes a database instance. Unless it's readonly, the database will be
created on the ClickHouse server if it does not already exist.
- `db_name`: name of the database to connect to.
- `db_url`: URL of the ClickHouse server.
- `username`: optional connection credentials.
- `password`: optional connection credentials.
- `readonly`: use a read-only connection.
'''
self.db_name = db_name
self.db_url = db_url
self.username = username
@@ -35,23 +48,41 @@ class Database(object):
self.server_timezone = self._get_server_timezone()
def create_database(self):
'''
Creates the database on the ClickHouse server if it does not already exist.
'''
self._send('CREATE DATABASE IF NOT EXISTS `%s`' % self.db_name)
def drop_database(self):
'''
Deletes the database on the ClickHouse server.
'''
self._send('DROP DATABASE `%s`' % self.db_name)
def create_table(self, model_class):
'''
Creates a table for the given model class, if it does not exist already.
'''
# TODO check that model has an engine
if model_class.readonly:
raise DatabaseException("You can't create read only table")
self._send(model_class.create_table_sql(self.db_name))
def drop_table(self, model_class):
'''
Drops the database table of the given model class, if it exists.
'''
if model_class.readonly:
raise DatabaseException("You can't drop read only table")
self._send(model_class.drop_table_sql(self.db_name))
def insert(self, model_instances, batch_size=1000):
'''
Insert records into the database.
- `model_instances`: any iterable containing instances of a single model class.
- `batch_size`: number of records to send per chunk (use a lower number if your records are very large).
'''
from six import next
from io import BytesIO
i = iter(model_instances)
@@ -89,6 +120,12 @@ class Database(object):
self._send(gen())
def count(self, model_class, conditions=None):
'''
Counts the number of records in the model's table.
- `model_class`: the model to count.
- `conditions`: optional SQL conditions (contents of the WHERE clause).
'''
query = 'SELECT count() FROM $table'
if conditions:
query += ' WHERE ' + conditions
@@ -97,6 +134,14 @@ class Database(object):
return int(r.text) if r.text else 0
def select(self, query, model_class=None, settings=None):
'''
Performs a query and returns a generator of model instances.
- `query`: the SQL query to execute.
- `model_class`: the model class matching the query's table,
or `None` for getting back instances of an ad-hoc model.
- `settings`: query settings to send as HTTP GET parameters
'''
query += ' FORMAT TabSeparatedWithNamesAndTypes'
query = self._substitute(query, model_class)
r = self._send(query, settings, True)
@@ -110,17 +155,31 @@ class Database(object):
yield model_class.from_tsv(line, field_names, self.server_timezone, self)
def raw(self, query, settings=None, stream=False):
-"""
-Performs raw query to database. Returns its output
-:param query: Query to execute
-:param settings: Query settings to send as query GET parameters
-:param stream: If flag is true, Http response from ClickHouse will be streamed.
-:return: Query execution result
-"""
+'''
+Performs a query and returns its output as text.
+- `query`: the SQL query to execute.
+- `settings`: query settings to send as HTTP GET parameters
+- `stream`: if true, the HTTP response from ClickHouse will be streamed.
+'''
query = self._substitute(query, None)
return self._send(query, settings=settings, stream=stream).text
def paginate(self, model_class, order_by, page_num=1, page_size=100, conditions=None, settings=None):
'''
Selects records and returns a single page of model instances.
- `model_class`: the model class matching the query's table,
or `None` for getting back instances of an ad-hoc model.
- `order_by`: columns to use for sorting the query (contents of the ORDER BY clause).
- `page_num`: the page number (1-based), or -1 to get the last page.
- `page_size`: number of records to return per page.
- `conditions`: optional SQL conditions (contents of the WHERE clause).
- `settings`: query settings to send as HTTP GET parameters
The result is a namedtuple containing `objects` (list), `number_of_objects`,
`pages_total`, `number` (of the current page), and `page_size`.
'''
count = self.count(model_class, conditions)
pages_total = int(ceil(count / float(page_size)))
if page_num == -1:
@@ -143,6 +202,13 @@ class Database(object):
)
def migrate(self, migrations_package_name, up_to=9999):
'''
Executes schema migrations.
- `migrations_package_name`: fully qualified name of the Python package
containing the migrations.
- `up_to`: number of the last migration to apply.
'''
from .migrations import MigrationHistory
logger = logging.getLogger('migrations')
applied_migrations = self._get_applied_migrations(migrations_package_name)


@@ -8,7 +8,9 @@ from .utils import escape, parse_array
class Field(object):
'''
Abstract base class for all field types.
'''
creation_counter = 0
class_default = 0
db_type = None
@@ -165,7 +167,9 @@ class DateTimeField(Field):
class BaseIntField(Field):
'''
Abstract base class for all integer-type fields.
'''
def to_python(self, value, timezone_in_use):
try:
return int(value)
@@ -238,6 +242,9 @@ class Int64Field(BaseIntField):
class BaseFloatField(Field):
'''
Abstract base class for all float-type fields.
'''
def to_python(self, value, timezone_in_use):
try:
@@ -262,6 +269,9 @@ class Float64Field(BaseFloatField):
class BaseEnumField(Field):
'''
Abstract base class for all enum-type fields.
'''
def __init__(self, enum_cls, default=None, alias=None, materialized=None):
self.enum_cls = enum_cls


@@ -81,8 +81,8 @@ class Model(with_metaclass(ModelBase)):
'''
Creates a model instance, using keyword arguments as field values.
Since values are immediately converted to their Pythonic type,
-invalid values will cause a ValueError to be raised.
-Unrecognized field names will cause an AttributeError.
+invalid values will cause a `ValueError` to be raised.
+Unrecognized field names will cause an `AttributeError`.
'''
super(Model, self).__init__()
@@ -103,7 +103,7 @@ class Model(with_metaclass(ModelBase)):
def __setattr__(self, name, value):
'''
When setting a field value, converts the value to its Pythonic type and validates it.
-This may raise a ValueError.
+This may raise a `ValueError`.
'''
field = self.get_field(name)
if field:
@@ -112,26 +112,25 @@ class Model(with_metaclass(ModelBase)):
super(Model, self).__setattr__(name, value)
def set_database(self, db):
-"""
-Sets _database attribute for current model instance
-:param db: Database instance
-:return: None
-"""
+'''
+Sets the `Database` that this model instance belongs to.
+This is done automatically when the instance is read from the database or written to it.
+'''
# This can not be imported globally due to circular import
from .database import Database
assert isinstance(db, Database), "database must be database.Database instance"
self._database = db
def get_database(self):
-"""
-Gets _database attribute for current model instance
-:return: database.Database instance, model was inserted or selected from or None
-"""
+'''
+Gets the `Database` that this model instance belongs to.
+Returns `None` unless the instance was read from the database or written to it.
+'''
return self._database
def get_field(self, name):
'''
-Get a Field instance given its name, or None if not found.
+Gets a `Field` instance given its name, or `None` if not found.
'''
field = getattr(self.__class__, name, None)
return field if isinstance(field, Field) else None
@@ -139,7 +138,9 @@ class Model(with_metaclass(ModelBase)):
@classmethod
def table_name(cls):
'''
-Returns the model's database table name.
+Returns the model's database table name. By default this is the
+class name converted to lowercase. Override this if you want to use
+a different table name.
'''
return cls.__name__.lower()
@@ -168,9 +169,13 @@ class Model(with_metaclass(ModelBase)):
def from_tsv(cls, line, field_names=None, timezone_in_use=pytz.utc, database=None):
'''
Create a model instance from a tab-separated line. The line may or may not include a newline.
-The field_names list must match the fields defined in the model, but does not have to include all of them.
+The `field_names` list must match the fields defined in the model, but does not have to include all of them.
If omitted, it is assumed to be the names of all fields in the model, in order of definition.
-:param database: if given, model receives database
+- `line`: the TSV-formatted data.
+- `field_names`: names of the model fields in the data.
+- `timezone_in_use`: the timezone to use when parsing dates and datetimes.
+- `database`: if given, sets the database that this instance belongs to.
'''
from six import next
field_names = field_names or [name for name, field in cls._fields]
@@ -189,7 +194,8 @@ class Model(with_metaclass(ModelBase)):
def to_tsv(self, include_readonly=True):
'''
Returns the instance's column values as a tab-separated line. A newline is not included.
-:param bool include_readonly: If False, returns only fields, that can be inserted into database
+- `include_readonly`: if false, returns only fields that can be inserted into the database.
'''
data = self.__dict__
fields = self._fields if include_readonly else self._writable_fields
@@ -198,8 +204,9 @@ class Model(with_metaclass(ModelBase)):
def to_dict(self, include_readonly=True, field_names=None):
'''
Returns the instance's column values as a dict.
-:param bool include_readonly: If False, returns only fields, that can be inserted into database
-:param field_names: An iterable of field names to return
+- `include_readonly`: if false, returns only fields that can be inserted into the database.
+- `field_names`: an iterable of field names to return (optional)
'''
fields = self._fields if include_readonly else self._writable_fields
@@ -212,7 +219,7 @@ class Model(with_metaclass(ModelBase)):
@classmethod
def objects_in(cls, database):
'''
-Returns a queryset for selecting instances of this model class.
+Returns a `QuerySet` for selecting instances of this model class.
'''
return QuerySet(cls, database)