Tom Christie 2013-03-26 08:59:40 +00:00
commit 9b56616750
6 changed files with 56 additions and 21 deletions

View File

@@ -197,12 +197,16 @@ If you want to override this behavior, you'll need to declare the `DateTimeField
        class Meta:
            model = Comment
Note that by default, datetime representations are determined by the renderer in use, although this can be explicitly overridden as detailed below.
In the case of JSON this means the default datetime representation uses the [ECMA 262 date time string specification][ecma262]. This is a subset of ISO 8601, which uses millisecond precision and includes the 'Z' suffix for the UTC timezone, for example: `2013-01-29T12:34:56.123Z`.
**Signature:** `DateTimeField(format=None, input_formats=None)`
* `format` - A string representing the output format. If not specified, this defaults to `None`, which indicates that Python `datetime` objects should be returned by `to_native`. In this case the datetime encoding will be determined by the renderer.
* `input_formats` - A list of strings representing the input formats which may be used to parse the date. If not specified, the `DATETIME_INPUT_FORMATS` setting will be used, which defaults to `['iso-8601']`.
DateTime format strings may either be [python strftime formats][strftime] which explicitly specify the format, or the special string `'iso-8601'`, which indicates that [ISO 8601][iso8601] style datetimes should be used (e.g. `'2013-01-29T12:34:56.000000Z'`).
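For example, a minimal sketch of declaring the field explicitly for the `Comment` model shown above (the serializer class name and the exact format strings here are illustrative only), pinning both the output format and the accepted input formats:

    class CommentSerializer(serializers.ModelSerializer):
        # Render `created` with an explicit strftime format, and accept either
        # ISO 8601 or a day/month/year style string on input.
        created = serializers.DateTimeField(
            format='%Y-%m-%d %H:%M',
            input_formats=['iso-8601', '%d/%m/%Y %H:%M']
        )

        class Meta:
            model = Comment

With `format` set, `to_native` returns a string in that format rather than a `datetime` object, so the renderer no longer controls the encoding.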
## DateField
@@ -318,5 +322,6 @@ As an example, let's create a field that can be used to represent the class name of
[cite]: https://docs.djangoproject.com/en/dev/ref/forms/api/#django.forms.Form.cleaned_data
[FILE_UPLOAD_HANDLERS]: https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-FILE_UPLOAD_HANDLERS
[ecma262]: http://ecma-international.org/ecma-262/5.1/#sec-15.9.1.15
[strftime]: http://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior
[iso8601]: http://www.w3.org/TR/NOTE-datetime

View File

@@ -242,21 +242,21 @@ This allows you to write views that update or create multiple items when a `PUT`
    # True
    serializer.save()  # `.save()` will be called on each updated or newly created instance.
By default, bulk updates will be limited to updating instances that already exist in the provided queryset.
When performing a bulk update you may want to allow new items to be created, and missing items to be deleted. To do so, pass `allow_add_remove=True` to the serializer.
    serializer = BookSerializer(queryset, data=data, many=True, allow_add_remove=True)
    serializer.is_valid()
    # True
    serializer.save()  # `.save()` will be called on updated or newly created instances.
                       # `.delete()` will be called on any other items in the `queryset`.
Passing `allow_add_remove=True` ensures that any update operations will completely overwrite the existing queryset, rather than simply updating existing objects.
#### How identity is determined when performing bulk updates
Performing a bulk update is slightly more complicated than performing a bulk creation, because the serializer needs a way to determine how the items in the incoming data should be matched against the existing object instances.
By default the serializer class will use the `id` key on the incoming data to determine the canonical identity of an object. If you need to change this behavior you should override the `get_identity` method on the `Serializer` class. For example:
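A minimal sketch of such an override (the `AccountSerializer` and its `slug` field below are illustrative placeholders, not part of the hunk above) might key the identity on a slug instead of the `id`:

    class AccountSerializer(serializers.Serializer):
        slug = serializers.CharField(max_length=100)
        created = serializers.DateTimeField()

        def get_identity(self, data):
            # Use the `slug` value as the identity, rather than the default `id`.
            # The incoming data has not been validated at this point, so guard
            # against items that are not dictionaries.
            try:
                return data.get('slug', None)
            except AttributeError:
                return None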

View File

@@ -40,9 +40,11 @@ You can determine your currently installed version using `pip freeze`:
## 2.2.x series
### 2.2.5
**Date**: 26th March 2013
* Serializer support for bulk create and bulk update operations.
* Regression fix: Date and time fields return date/time objects by default. Fixes regressions caused by 2.2.2. See [#743][743] for more details.
* Bugfix: Fix 500 error if OAuth is not attempted with the OAuthAuthentication class installed.
* `Serializer.save()` now supports arbitrary keyword args which are passed through to the object `.save()` method. Mixins use `force_insert` and `force_update` where appropriate, resulting in one less database query.

View File

@@ -1,4 +1,4 @@
__version__ = '2.2.5'
VERSION = __version__  # synonym

View File

@@ -130,14 +130,14 @@ class BaseSerializer(WritableField):
    def __init__(self, instance=None, data=None, files=None,
                 context=None, partial=False, many=None,
                 allow_add_remove=False, **kwargs):
        super(BaseSerializer, self).__init__(**kwargs)
        self.opts = self._options_class(self.Meta)
        self.parent = None
        self.root = None
        self.partial = partial
        self.many = many
        self.allow_add_remove = allow_add_remove
        self.context = context or {}
@@ -154,8 +154,8 @@ class BaseSerializer(WritableField):
        if many and instance is not None and not hasattr(instance, '__iter__'):
            raise ValueError('instance should be a queryset or other iterable with many=True')
        if allow_add_remove and not many:
            raise ValueError('allow_add_remove should only be used for bulk updates, but you have not set many=True')

    #####
    # Methods to determine which fields to use when (de)serializing objects.
@@ -448,6 +448,10 @@ class BaseSerializer(WritableField):
                            # Determine which object we're updating
                            identity = self.get_identity(item)
                            self.object = identity_to_objects.pop(identity, None)
                            if self.object is None and not self.allow_add_remove:
                                ret.append(None)
                                errors.append({'non_field_errors': ['Cannot create a new item, only existing items may be updated.']})
                                continue

                        ret.append(self.from_native(item, None))
                        errors.append(self._errors)
@@ -457,7 +461,7 @@ class BaseSerializer(WritableField):
                    self._errors = any(errors) and errors or []
                else:
                    self._errors = {'non_field_errors': ['Expected a list of items.']}
            else:
                ret = self.from_native(data, files)
@@ -508,7 +512,7 @@ class BaseSerializer(WritableField):
        else:
            self.save_object(self.object, **kwargs)

        if self.allow_add_remove and self._deleted:
            [self.delete_object(item) for item in self._deleted]

        return self.object

View File

@@ -98,7 +98,7 @@ class BulkCreateSerializerTests(TestCase):
        serializer = self.BookSerializer(data=data, many=True)
        self.assertEqual(serializer.is_valid(), False)
        expected_errors = {'non_field_errors': ['Expected a list of items.']}
        self.assertEqual(serializer.errors, expected_errors)
@@ -115,7 +115,7 @@ class BulkCreateSerializerTests(TestCase):
        serializer = self.BookSerializer(data=data, many=True)
        self.assertEqual(serializer.is_valid(), False)
        expected_errors = {'non_field_errors': ['Expected a list of items.']}
        self.assertEqual(serializer.errors, expected_errors)
@@ -201,11 +201,12 @@ class BulkUpdateSerializerTests(TestCase):
                'author': 'Haruki Murakami'
            }
        ]
        serializer = self.BookSerializer(self.books(), data=data, many=True, allow_add_remove=True)
        self.assertEqual(serializer.is_valid(), True)
        self.assertEqual(serializer.data, data)
        serializer.save()
        new_data = self.BookSerializer(self.books(), many=True).data
        self.assertEqual(data, new_data)

    def test_bulk_update_and_create(self):
@@ -223,13 +224,36 @@
                'author': 'Haruki Murakami'
            }
        ]
        serializer = self.BookSerializer(self.books(), data=data, many=True, allow_add_remove=True)
        self.assertEqual(serializer.is_valid(), True)
        self.assertEqual(serializer.data, data)
        serializer.save()
        new_data = self.BookSerializer(self.books(), many=True).data
        self.assertEqual(data, new_data)

    def test_bulk_update_invalid_create(self):
        """
        Bulk update serialization without allow_add_remove may not create items.
        """
        data = [
            {
                'id': 0,
                'title': 'The electric kool-aid acid test',
                'author': 'Tom Wolfe'
            }, {
                'id': 3,
                'title': 'Kafka on the shore',
                'author': 'Haruki Murakami'
            }
        ]
        expected_errors = [
            {},
            {'non_field_errors': ['Cannot create a new item, only existing items may be updated.']}
        ]
        serializer = self.BookSerializer(self.books(), data=data, many=True)
        self.assertEqual(serializer.is_valid(), False)
        self.assertEqual(serializer.errors, expected_errors)

    def test_bulk_update_error(self):
        """
        Incorrect bulk update serialization should return error data.
@@ -249,6 +273,6 @@
            {},
            {'id': ['Enter a whole number.']}
        ]
        serializer = self.BookSerializer(self.books(), data=data, many=True, allow_add_remove=True)
        self.assertEqual(serializer.is_valid(), False)
        self.assertEqual(serializer.errors, expected_errors)