Do you benefit from the Kryo serializer when you use PySpark?

Kryo won't make a major impact on PySpark because PySpark just stores data as byte[] objects, which are fast to serialize even with the default Java serializer.

But it may be worth a try: you would just set the spark.serializer configuration and not try to register any classes.
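As a minimal sketch, this is what that configuration looks like when building the context from Python; the application name is a placeholder:

```python
from pyspark import SparkConf, SparkContext

conf = (
    SparkConf()
    .setAppName("kryo-example")  # hypothetical app name
    # Switch the JVM-side serializer from Java to Kryo; no class registration.
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
)
sc = SparkContext(conf=conf)
```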

What might make more of an impact is storing your data as MEMORY_ONLY_SER and enabling spark.rdd.compress, which will compress your data.
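A sketch of that combined setup, assuming a Spark version where PySpark persists RDD data as serialized (pickled) bytes, so the plain MEMORY_ONLY level plays the role of MEMORY_ONLY_SER on the JVM side; the app name and data are placeholders:

```python
from pyspark import SparkConf, SparkContext, StorageLevel

conf = (
    SparkConf()
    .setAppName("compressed-cache-example")  # hypothetical app name
    .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .set("spark.rdd.compress", "true")  # compress serialized cached partitions
)
sc = SparkContext(conf=conf)

# PySpark caches RDD partitions as serialized bytes, so MEMORY_ONLY here
# behaves like MEMORY_ONLY_SER would in a Scala/Java application.
rdd = sc.parallelize(range(1_000_000))
rdd.persist(StorageLevel.MEMORY_ONLY)
print(rdd.count())
```

The compression trades CPU for memory: partitions are decompressed on access, but more of the dataset fits in the cache.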

In Java this can add some CPU overhead, but Python runs quite a bit slower anyway, so it might not matter. It might also speed up computation by reducing GC pressure or letting you cache more data.

Reference: Matei Zaharia's answer on the mailing list.


It all depends on what you mean by PySpark. Over the last two years, PySpark development, like Spark development in general, has shifted from the low-level RDD API towards high-level APIs like DataFrame and ML.

These APIs are natively implemented on the JVM, and the Python code is mostly limited to a bunch of RPC calls executed on the driver. Everything else is pretty much the same code as is executed from Scala or Java, so it should benefit from Kryo in the same way native applications do.
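For illustration, a DataFrame job configured this way from Python, assuming Spark 2.x+ with SparkSession; the app name is a placeholder, and the Kryo setting applies to JVM-side shuffles and caching rather than to Python objects:

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("dataframe-kryo-example")  # hypothetical app name
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .getOrCreate()
)

# The aggregation below is planned and executed entirely on the JVM;
# the Python side only issues the RPC calls that build up the query.
df = spark.range(1_000_000)
df.groupBy((df.id % 10).alias("bucket")).count().show()
```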

I would argue that, at the end of the day, there is not much to lose when you use Kryo with PySpark, and potentially something to gain when your application depends heavily on the "native" APIs.