Spark SQL - difference between gzip vs snappy vs lzo compression formats

Use Snappy if you can handle higher disk usage in exchange for the performance benefits (lower CPU usage and splittable files).

When Spark switched from GZIP to Snappy by default, this was the reasoning:

Based on our tests, gzip decompression is very slow (< 100 MB/s), making queries decompression-bound. Snappy can decompress at ~500 MB/s on a single core.

Snappy:

  • Storage Space: High
  • CPU Usage: Low
  • Splittable: Yes (1)

GZIP:

  • Storage Space: Medium
  • CPU Usage: Medium
  • Splittable: No

1) http://boristyukin.com/is-snappy-compressed-parquet-file-splittable/
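
To pick the codec explicitly in Spark SQL, there are two usual knobs: a session-level default and a per-write option. A minimal sketch, assuming an existing SparkSession named spark and a DataFrame df; the output paths are hypothetical:

# Session-wide default codec for Parquet writes
spark.conf.set("spark.sql.parquet.compression.codec", "snappy")

# Per-write override, e.g. gzip for cold data, snappy for hot data
df.write.option("compression", "gzip").parquet("s3://bucket/cold/table")
df.write.option("compression", "snappy").parquet("s3://bucket/hot/table")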


Compression Ratio: GZIP compression uses more CPU resources than Snappy or LZO, but provides a higher compression ratio.

General Usage: GZip is often a good choice for cold data, which is accessed infrequently. Snappy or LZO are a better choice for hot data, which is accessed frequently.

Snappy often performs better than LZO. It is worth running tests to see if you detect a significant difference.
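
For example, a minimal PySpark timing sketch (the DataFrame, row count, and /tmp output paths are placeholders; 'lzo' only works if the native LZO codec is installed on the cluster):

import time
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10_000_000)  # replace with your own data

for codec in ["snappy", "lzo", "gzip"]:
    path = f"/tmp/bench_{codec}.parquet"  # hypothetical output location
    start = time.perf_counter()
    try:
        df.write.mode("overwrite").option("compression", codec).parquet(path)
        print(f"{codec}: wrote in {time.perf_counter() - start:.2f} s")
    except Exception as exc:  # e.g. the LZO codec is not available
        print(f"{codec}: skipped ({exc})")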

Splittability: If you need your compressed data to be splittable, the BZip2, LZO, and Snappy formats are splittable, but GZip is not.
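
A quick way to see the splittability difference for plain text files (file names here are hypothetical): Spark cannot split a gzip-compressed text file, so it lands in a single partition, while the uncompressed copy is spread across many.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# A large uncompressed text file is split across many partitions...
print(spark.read.text("logs/events.txt").rdd.getNumPartitions())     # typically > 1

# ...but the gzip-compressed copy cannot be split, so one task reads the whole file
print(spark.read.text("logs/events.txt.gz").rdd.getNumPartitions())  # 1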

GZIP compresses data about 30% smaller than Snappy, but reading GZIP data uses roughly 2x the CPU of reading Snappy data.

LZO focuses on decompression speed at low CPU usage; higher compression comes at the cost of more CPU.

For longer-term/static storage, GZip compression is still better.

See extensive research and benchmark code and results in this article (Performance of various general compression algorithms – some of them are unbelievably fast!).



Based on the data below, I'd say gzip wins outside of scenarios like streaming, where write-time latency would be important.

It's important to keep in mind that speed is essentially compute cost. However, cloud compute is a one-time cost whereas cloud storage is a recurring cost. The tradeoff depends on the retention period of the data.
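
As a back-of-envelope illustration of that trade-off (every number below is an assumption for illustration, not a measurement):

# Hypothetical sizes: snappy output roughly 2x the size of gzip (see the tables below)
gzip_size_gb, snappy_size_gb = 50, 100
storage_price_per_gb_month = 0.023   # assumed object-storage price per GB-month
retention_months = 24
gzip_extra_write_cost = 1.50         # assumed one-time compute cost of the slower gzip write

extra_storage = (snappy_size_gb - gzip_size_gb) * storage_price_per_gb_month * retention_months
print(f"extra storage cost of snappy over {retention_months} months: ${extra_storage:.2f}")
print(f"extra one-time compute cost of gzip: ${gzip_extra_write_cost:.2f}")
# The longer the retention period, the more the recurring storage saving from gzip
# outweighs the one-time compute cost of the slower write.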


Let's test speed and size with large and small parquet files in Python.

Results (large file, 117 MB):

        +----------+----------+--------------------------+
        | snappy   | gzip     | snappy/gzip              |
+-------+----------+----------+--------------------------+
| write | 1.62 s   | 7.65 s   | 4.7x faster              |
+-------+----------+----------+--------------------------+
| size  | 35484122 | 17269656 | 2x larger                |
+-------+----------+----------+--------------------------+
| read  | 973 ms   | 1140 ms  | 1.2x faster              |
+-------+----------+----------+--------------------------+

Results (small file, 4 KB, Iris dataset):

        +---------+---------+--------------------------+
        | snappy  | gzip    | snappy/gzip              |
+-------+---------+---------+--------------------------+
| write | 1.56 ms | 2.09 ms | 1.3x faster              |
+-------+---------+---------+--------------------------+
| size  | 6990    | 6647    | 5.2% larger              |
+-------+---------+---------+--------------------------+
| read  | 3.22 ms | 3.44 ms | 6.8% faster              |
+-------+---------+---------+--------------------------+

small_file.ipynb

import os, sys
import pyarrow
import pandas as pd
import numpy as np
from sklearn.datasets import load_iris

iris = load_iris()

df = pd.DataFrame(
    data=np.c_[iris['data'], iris['target']],
    columns=iris['feature_names'] + ['target']
)

# ========= WRITE =========
%timeit df.to_parquet(path='iris.parquet.snappy', compression='snappy', engine='pyarrow', index=True)
# 1.56 ms

%timeit df.to_parquet(path='iris.parquet.gzip', compression='gzip', engine='pyarrow', index=True)
# 2.09 ms

# ========= SIZE =========
os.stat('iris.parquet.snappy').st_size
# 6990

os.stat('iris.parquet.gzip').st_size
# 6647

# ========= READ =========
%timeit pd.read_parquet(path='iris.parquet.snappy', engine='pyarrow')
# 3.22 ms

%timeit pd.read_parquet(path='iris.parquet.gzip', engine='pyarrow')
# 3.44 ms

large_file.ipynb

import os, sys
import pyarrow
import pandas as pd

df = pd.read_csv('file.csv')

# ========= WRITE =========
%timeit df.to_parquet(path='file.parquet.snappy', compression='snappy', engine='pyarrow', index=True)
# 1.62 s

%timeit df.to_parquet(path='file.parquet.gzip', compression='gzip', engine='pyarrow', index=True)
# 7.65 s

# ========= SIZE =========
os.stat('file.parquet.snappy').st_size
# 35484122

os.stat('file.parquet.gzip').st_size
# 17269656

# ========= READ =========
%timeit pd.read_parquet(path='file.parquet.snappy', engine='pyarrow')
# 973 ms

%timeit pd.read_parquet(path='file.parquet.gzip', engine='pyarrow')
# 1.14 s

Just try them on your data.

LZO and Snappy are fast compressors and very fast decompressors, but they compress less than gzip, which compresses better but is a little slower.