Overwrite Parquet files from a DynamicFrame in AWS Glue

If you don't want your process to overwrite everything under "s3://bucket/table_name", you can enable Spark's dynamic partition overwrite mode before writing:

# Only overwrite the partitions present in the incoming data,
# instead of truncating the whole table location first
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

(data.toDF()
    .write
    .mode("overwrite")
    .format("parquet")
    .partitionBy("date", "name")
    .save("s3://folder/<table_name>"))

With this setting, Spark replaces only the partitions that actually appear in the DataFrame being written and leaves every other partition in that S3 location untouched. In my case, my DynamicFrame "data" contains 30 date partitions, so only those 30 get rewritten.

I'm using Glue 1.0 (Spark 2.4, Python 2).
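
For context, here's roughly where this fits in a full job script. This is a minimal sketch, assuming the source table is registered in the Glue Data Catalog; the my_db and my_table names are placeholders for your own catalog entries:

import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Replace only the partitions present in the incoming data
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

# "my_db" / "my_table" are placeholders for your Data Catalog entries
data = glueContext.create_dynamic_frame.from_catalog(
    database="my_db", table_name="my_table")

(data.toDF()
    .write
    .mode("overwrite")
    .format("parquet")
    .partitionBy("date", "name")
    .save("s3://folder/<table_name>"))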


Currently, AWS Glue doesn't support an 'overwrite' write mode, but they are working on this feature.

As a workaround, you can convert the DynamicFrame to a Spark DataFrame and write it with Spark instead of Glue:

(table.toDF()
    .write
    .mode("overwrite")
    .format("parquet")
    .partitionBy("var_1", "var_2")
    .save(output_dir))
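
One caveat: on its own this is a static overwrite. Spark's default partitionOverwriteMode is "static", so mode("overwrite") deletes everything under output_dir before writing, not just the partitions present in table. If you only want to replace the affected partitions, set the dynamic mode (as in the answer above) before the write:

# Switch to per-partition replacement instead of wiping output_dir
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")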