Overwrite specific partitions in the Spark DataFrame write method

# dynamic mode: only the partitions present in the written data are replaced
spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
# .toDF() converts data (a Glue DynamicFrame in this job) to a Spark DataFrame
data.toDF().write.mode("overwrite").format("parquet").partitionBy("date", "name").save("s3://path/to/somewhere")

This works for me on AWS Glue ETL jobs (Glue 1.0 - Spark 2.4 - Python 2)


Finally! This is now a feature in Spark 2.3.0: SPARK-20236

To use it, you need to set the spark.sql.sources.partitionOverwriteMode setting to dynamic, the dataset needs to be partitioned, and the write mode needs to be overwrite. Example:

spark.conf.set("spark.sql.sources.partitionOverwriteMode","dynamic")
data.write.mode("overwrite").insertInto("partitioned_table")

I recommend repartitioning by your partition column before writing, so you don't end up with 400 files per folder.
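
A minimal sketch of that suggestion, reusing the write above and assuming the partition column is named date (a placeholder for your table's actual partition column):

# repartitioning by the partition column sends all rows for a given date to the same task,
# so each partition folder gets a single output file instead of one per shuffle partition
data.repartition("date").write.mode("overwrite").insertInto("partitioned_table")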

Before Spark 2.3.0, the best solution was to run SQL statements to drop those partitions and then write the new data with mode append.
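
A rough sketch of that pre-2.3.0 workaround, assuming a Hive table named partitioned_table partitioned by date and that the partition values being rewritten are known up front (all names and values here are placeholders):

# drop the partitions that are about to be replaced
for d in ["2020-01-01", "2020-01-02"]:
    spark.sql("ALTER TABLE partitioned_table DROP IF EXISTS PARTITION (date='{}')".format(d))
# then append the fresh data; untouched partitions keep their existing files
data.write.mode("append").insertInto("partitioned_table")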


This is a common problem. The only solution with Spark up to 2.0 is to write directly into the partition directory, e.g.,

import org.apache.spark.sql.SaveMode
df.write.mode(SaveMode.Overwrite).save("/root/path/to/data/partition_col=value")

If you are using Spark prior to 2.0, you'll need to stop Spark from emitting metadata files (because they will break automatic partition discovery) using:

sc.hadoopConfiguration.set("parquet.enable.summary-metadata", "false")

If you are using Spark prior to 1.6.2, you will also need to delete the _SUCCESS file in /root/path/to/data/partition_col=value or its presence will break automatic partition discovery. (I strongly recommend using 1.6.2 or later.)
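
As a rough illustration of that cleanup step, this PySpark sketch removes the marker file through the Hadoop FileSystem API (the path is the example path from above; in Scala the same calls are reachable via sc.hadoopConfiguration):

# reach the Hadoop FileSystem API through the JVM gateway and delete the _SUCCESS marker
success_path = sc._jvm.org.apache.hadoop.fs.Path("/root/path/to/data/partition_col=value/_SUCCESS")
fs = success_path.getFileSystem(sc._jsc.hadoopConfiguration())
fs.delete(success_path, False)  # False = non-recursive, deletes just this file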

You can get a few more details about how to manage large partitioned tables from my Spark Summit talk on Bulletproof Jobs.