Does reducing the number of executor-cores consume less executor-memory?

spark.executor.cores = 5, spark.executor.memory=10G

This means an executor can run 5 tasks in parallel, so the 10 GB of executor memory is shared by those 5 tasks. On average, each task has about 2 GB available. If all the tasks consume more than 2 GB, the JVM as a whole will end up using more than 10 GB and YARN will kill the container.

spark.executor.cores = 1, spark.executor.memory=10G

This means an executor can run only 1 task, so the full 10 GB is available to that single task. If the task uses more than 2 GB but less than 10 GB, it will work fine. That was the case in your job, which is why it worked.
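As a minimal sketch, here is how the two configurations above could be set programmatically (the app name is a placeholder, and the values are the ones discussed in this answer):

    import org.apache.spark.sql.SparkSession

    // With 5 cores per executor, the 10 GB heap is shared by up to 5 concurrent tasks.
    // Switching spark.executor.cores to "1" gives a single task the whole 10 GB.
    val spark = SparkSession.builder()
      .appName("executor-sizing-example")   // hypothetical app name
      .config("spark.executor.cores", "5")
      .config("spark.executor.memory", "10g")
      .getOrCreate()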


Yes, and each executor also needs memoryOverhead, an extra ~7% of executor memory reserved for off-heap use.

This calculation assumes you have two nodes, with three executors on one node and two executors on the other.

Memory per executor on the first node = 10 GB / 3 = 3.33 GB
Off-heap overhead = 7% of 3.33 GB = 0.23 GB

So your executor-memory should be 3.33 GB - 0.23 GB = 3.1 GB per executor
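To make the arithmetic explicit, here is a small sketch of the same calculation; the node memory, executor count, and 7% overhead fraction are the assumptions used in this answer:

    // Per-executor sizing arithmetic from the answer above.
    val nodeMemoryGb     = 10.0   // memory available on the busier node (assumption)
    val executorsPerNode = 3      // executors placed on that node (assumption)
    val overheadFraction = 0.07   // memoryOverhead factor cited above

    val perExecutorGb = nodeMemoryGb / executorsPerNode      // ≈ 3.33 GB
    val overheadGb    = perExecutorGb * overheadFraction     // ≈ 0.23 GB
    val heapGb        = perExecutorGb - overheadGb           // ≈ 3.10 GB

    println(f"spark.executor.memory ≈ $heapGb%.1fg")         // prints "≈ 3.1g"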

You can read another explanation here: https://spoddutur.github.io/spark-notes/distribution_of_executors_cores_and_memory_for_spark_application.html