FetchFailedException or MetadataFetchFailedException when processing big data set

This error is almost guaranteed to be caused by memory issues on your executors. I can think of a few ways to address this type of problem.

1) You could try to run with more partitions (call repartition on your DataFrame). Memory issues typically arise when one or more partitions contain more data than will fit in memory.
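For example, a minimal sketch of what I mean (the paths, column name and partition count are placeholders; tune the count so each partition stays comfortably small):

import org.apache.spark.sql.SparkSession

// Sketch only: input/output paths, "some_key" and the partition count are placeholders.
// Repartitioning before the wide (shuffle-heavy) step gives each task a smaller slice of data.
val spark = SparkSession.builder().appName("repartition-example").getOrCreate()
val df = spark.read.parquet("/path/to/input")
val result = df.repartition(2000)          // more, smaller partitions before the shuffle
  .groupBy("some_key")
  .count()
result.write.parquet("/path/to/output")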

2) I notice that you have not explicitly set spark.yarn.executor.memoryOverhead, so it will default to max(384, 0.10 * executorMemory), which in your case works out to roughly 400MB. That sounds low to me. I would try increasing it to, say, 1GB (note that if you raise memoryOverhead to 1GB, you need to lower --executor-memory to 3GB so the total memory requested per executor stays the same).
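If you build the session yourself rather than relying on spark-submit flags, the same settings can be passed as config when the session is created. A sketch, using just the values suggested above (the overhead value is in MB):

import org.apache.spark.sql.SparkSession

// Sketch: roughly equivalent to
//   --executor-memory 3G --conf spark.yarn.executor.memoryOverhead=1024
// on spark-submit; these must be in place before the session starts.
val spark = SparkSession.builder()
  .appName("overhead-example")
  .config("spark.executor.memory", "3g")
  .config("spark.yarn.executor.memoryOverhead", "1024")  // megabytes
  .getOrCreate()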

3) Look in the log files on the failing nodes (the YARN NodeManager logs) and search for the text "Killing container". If you also see "running beyond physical memory limits", increasing memoryOverhead will, in my experience, solve the problem.


In addition to the memory issues described above, it's worth noting that for large tables (e.g. several TB), org.apache.spark.shuffle.FetchFailedException can occur due to a timeout while retrieving shuffle partitions. To fix this problem, you can set the following:

SET spark.reducer.maxReqsInFlight=1;  -- Only pull one file at a time to use full network bandwidth.
SET spark.shuffle.io.retryWait=60s;  -- Increase the time to wait while retrieving shuffle partitions before retrying. Longer times are necessary for larger files.
SET spark.shuffle.io.maxRetries=10;  -- Allow more retries before the fetch fails (the default is 3).

I've also had good results from increasing the Spark network timeout, spark.network.timeout, to a larger value like 800s. The default of 120 seconds will cause a lot of your executors to time out when under heavy load.
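If you are running a regular Spark job rather than the SQL shell, a sketch of supplying the timeout (and the shuffle-retry settings above) when building the session; the app name is a placeholder:

import org.apache.spark.sql.SparkSession

// Sketch: the same config keys as the SET statements above, applied at session build time.
val spark = SparkSession.builder()
  .appName("shuffle-timeout-example")
  .config("spark.network.timeout", "800s")       // default is 120s
  .config("spark.reducer.maxReqsInFlight", "1")
  .config("spark.shuffle.io.retryWait", "60s")
  .config("spark.shuffle.io.maxRetries", "10")
  .getOrCreate()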