Pillow-SIMD

Pillow-SIMD is "following" Pillow (which is itself a fork of PIL). "Following" here means that Pillow-SIMD versions are 100% compatible drop-in replacements for Pillow of the same version. For example, Pillow-SIMD 3.2.0.post3 is a drop-in replacement for Pillow 3.2.0, and Pillow-SIMD 3.3.3.post0 is a drop-in replacement for Pillow 3.3.3.

For more information on the original Pillow, please read the documentation, check the changelog, and find out how to contribute.

Why SIMD

There are multiple ways to improve image processing performance: better algorithms, optimized implementations, and more processing power or resources, to name a few. A great example of using a more efficient algorithm is replacing a convolution-based Gaussian blur with a sequential-box one.
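
To illustrate that idea, here is a minimal Python sketch (not Pillow-SIMD specific; the file name, radius, and number of passes are only illustrative) showing that a few passes of a cheap box blur visually approximate a much more expensive true Gaussian blur:

```python
from PIL import Image, ImageFilter

im = Image.open("example.jpg")  # any RGB image; the file name is illustrative

# Reference: a Gaussian blur, traditionally computed with a costly convolution
gaussian = im.filter(ImageFilter.GaussianBlur(radius=5))

# Approximation: several sequential box blurs, each of which is very cheap.
# The box radius that best matches a given Gaussian radius differs in
# practice; the values here are only for illustration.
approx = im
for _ in range(3):
    approx = approx.filter(ImageFilter.BoxBlur(radius=5))
```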

Such examples are rather rare, though. Certain workloads can also be sped up with parallel processing, but a more practical route is often to make things faster with the resources already at hand, and SIMD computing is exactly that case.

SIMD stands for "single instruction, multiple data": the same operation is performed on multiple data points simultaneously by multiple processing elements. Common CPU SIMD instruction sets are MMX, SSE through SSE4, AVX, AVX2, AVX-512, and NEON.

Currently, Pillow-SIMD can be compiled with SSE4 (default) or AVX2 support.

Status

The Pillow-SIMD project is production-ready. It is supported by Uploadcare, a SaaS for cloud-based image storage and processing.

In fact, Uploadcare has been running Pillow-SIMD for about two years now.

The following image operations are currently SIMD-accelerated:

  • Resize (convolution-based resampling): SSE4, AVX2
  • Gaussian and box blur: SSE4
  • Alpha composition: SSE4, AVX2
  • RGBA → RGBa (alpha premultiplication): SSE4, AVX2
  • RGBa → RGBA (division by alpha): AVX2

See CHANGES for more information.
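
Since Pillow-SIMD is a drop-in replacement, these accelerated operations are reached through the regular Pillow API and no code changes are required. A short sketch (file names and sizes are made up for illustration):

```python
from PIL import Image, ImageFilter

im = Image.open("photo.jpg").convert("RGBA")

# Convolution-based resampling (SSE4 / AVX2)
thumb = im.resize((320, 180), Image.LANCZOS)

# Gaussian and box blur (SSE4)
blurred = im.filter(ImageFilter.GaussianBlur(2))

# Alpha premultiplication RGBA -> RGBa and division by alpha back to RGBA
premultiplied = im.convert("RGBa")
restored = premultiplied.convert("RGBA")

# Alpha composition (SSE4 / AVX2)
overlay = Image.new("RGBA", im.size, (255, 0, 0, 128))
composited = Image.alpha_composite(im, overlay)
```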

Benchmarks

To let you assess the benefit of bringing SIMD computing to Pillow's image processing, we ran a number of benchmarks. The results are shown in the table below (higher is better); the numbers are processing rates in megapixels per second (Mpx/s). For instance, a 2560x1600 RGB image processed in 0.5 seconds corresponds to a rate of 8.2 Mpx/s. The libraries and versions used in the benchmarks were:

  • Skia 53
  • ImageMagick 6.9.3-8 Q8 x86_64
  • Pillow 3.4.1
  • Pillow-SIMD 3.4.1.post1

| Operation            | Filter   | ImageMagick | Pillow | SIMD SSE4 | SIMD AVX2 | Skia 53 |
|----------------------|----------|-------------|--------|-----------|-----------|---------|
| Resize to 16x16      | Bilinear | 41.37       | 317.28 | 1282.85   | 1601.85   | 809.49  |
|                      | Bicubic  | 20.58       | 174.85 | 712.95    | 900.65    | 453.10  |
|                      | Lanczos  | 14.17       | 117.58 | 438.60    | 544.89    | 292.57  |
| Resize to 320x180    | Bilinear | 29.46       | 195.21 | 863.40    | 1057.81   | 592.76  |
|                      | Bicubic  | 15.75       | 118.79 | 503.75    | 504.76    | 327.68  |
|                      | Lanczos  | 10.80       | 79.59  | 312.05    | 384.92    | 196.92  |
| Resize to 1920x1200  | Bilinear | 17.80       | 68.39  | 215.15    | 268.29    | 192.30  |
|                      | Bicubic  | 9.99        | 49.23  | 170.41    | 210.62    | 112.84  |
|                      | Lanczos  | 6.95        | 37.71  | 130.00    | 162.57    | 104.76  |
| Resize to 7712x4352  | Bilinear | 2.54        | 8.38   | 22.81     | 29.17     | 20.58   |
|                      | Bicubic  | 1.60        | 6.57   | 18.23     | 23.94     | 16.52   |
|                      | Lanczos  | 1.09        | 5.20   | 14.90     | 20.40     | 12.05   |
| Blur                 | 1px      | 6.60        | 16.94  | 35.16     |           |         |
|                      | 10px     | 2.28        | 16.94  | 35.47     |           |         |
|                      | 100px    | 0.34        | 16.93  | 35.53     |           |         |
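
For reference, a rate like those in the table can be derived from a simple timing loop. This is only a rough sketch (the synthetic image and iteration count are made up), not the actual benchmark script, which is linked in the Methodology section below:

```python
import time
from PIL import Image

im = Image.new("RGB", (2560, 1600))  # source size from the example above

n = 50
start = time.perf_counter()
for _ in range(n):
    im.resize((320, 180), Image.BICUBIC)
elapsed = time.perf_counter() - start

megapixels = im.width * im.height / 1e6
print(f"{n * megapixels / elapsed:.1f} Mpx/s")
```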

A brief conclusion

The results show that Pillow is always faster than ImageMagick, and Pillow-SIMD, in turn, is faster than the original Pillow by a factor of 4-5. In general, Pillow-SIMD with AVX2 is 16 to 40 times faster than ImageMagick and also outperforms Skia, the high-speed graphics library used in Chromium.

Methodology

All rates were measured on the following setup: Ubuntu 14.04 64-bit, running single-threaded on an AVX2-capable Intel Core i5-4258U CPU. ImageMagick performance was measured with the convert command-line tool and its -verbose and -bench arguments; the command line was used because it is the easiest way to test the latest software versions (a sketch of such an invocation follows the filter list below). All of the routines involved in the testing produced identical results. The resizing filters correspond as follows:

  • PIL.Image.BILINEAR == Triangle
  • PIL.Image.BICUBIC == Catrom
  • PIL.Image.LANCZOS == Lanczos
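
Here is a sketch of the kind of ImageMagick invocation described above, driven from Python. The file names, target size, and iteration count are illustrative and not the exact benchmark command:

```python
import subprocess

subprocess.run(
    [
        "convert", "-verbose", "-bench", "10",  # repeat the operation and report timings
        "source.jpg",
        "-filter", "Triangle",                  # corresponds to PIL.Image.BILINEAR
        "-resize", "320x180!",                  # "!" forces the exact target size
        "result.jpg",
    ],
    check=True,
)
```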

ImageMagick's Gaussian blur operation takes two parameters: 'radius' and 'sigma'. Strictly speaking, for the operation to remain truly Gaussian there should be no radius cutoff at all: when the radius value is too small, the result ceases to be Gaussian, and when it is excessively large, the operation only gets slower with nothing gained in exchange. For the benchmarks, the radius was set to sigma × 2.5.
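
To make the radius/sigma relationship concrete, a hedged sketch (the sigma value and file names are made up; ImageMagick's -blur operator takes a "radius x sigma" argument):

```python
import subprocess

sigma = 10             # blur strength for a hypothetical data point
radius = sigma * 2.5   # the cutoff used in the benchmarks

subprocess.run(
    ["convert", "source.jpg", "-blur", f"{radius}x{sigma}", "blurred.jpg"],
    check=True,
)
```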

The following script was used for the benchmarking procedure: https://gist.github.com/homm/f9b8d8a84a57a7e51f9c2a5828e40e63

Why Pillow itself is so fast

No cheats are involved: identical high-quality resize and blur methods were used for every library, and the outputs they produce agree almost pixel for pixel. The differences in the measured rates therefore come down to the performance of each implementation.

Why Pillow-SIMD is even faster

Because of SIMD computing, of course. But there is more to it: heavy loop unrolling and the use of instructions that simply have no scalar equivalents.

Why not contribute SIMD to the original Pillow

Well, it's not that simple. First of all, the original Pillow supports a large number of architectures, not just x86. And even on x86, Pillow is often distributed as precompiled binaries. To ship SIMD code in those binaries we would need runtime CPU-capability checks, and to compile the code that way we would have to pass the -mavx2 option to the compiler. With that option enabled, however, the compiler emits AVX instructions even for the SSE functions (i.e. it interchanges them), since every SSE instruction has an AVX equivalent. So there is no easy way to build such a library, especially with setuptools.

Installation

If a copy of the original Pillow is installed, it has to be removed first with `$ pip uninstall -y pillow`. After that, installation is as simple as running `$ pip install pillow-simd`; if your CPU supports SSE4, everything should work out of the box. If you'd like the AVX2-enabled build instead, you need to pass an additional flag to the C compiler. The easiest way to do so is to define the CC variable during compilation:

```
$ pip uninstall pillow
$ CC="cc -mavx2" pip install -U --force-reinstall pillow-simd
```

Contributing to Pillow-SIMD

Please be aware that Pillow-SIMD and Pillow are two separate projects. Submit bugs and improvements that are not related to SIMD to the original Pillow; all fixes to the original Pillow will then make it into the next Pillow-SIMD version automatically.