Spark: Could not find CoarseGrainedScheduler

Now I know the meaning of that cryptic exception: the executor got killed because it exceeded the container memory threshold.
There are a couple of reasons this can happen, but the first things to check are your job itself (e.g. repartition so each task holds less data in memory) or, failing that, adding more nodes/executors to your cluster.
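As a minimal sketch of both remedies (the memory values, partition count, column name, and paths below are illustrative assumptions, not values from this thread), you can raise the executor memory and memory overhead and repartition before a heavy shuffle:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative settings only -- tune for your own cluster and data volume.
// In practice these are often passed via spark-submit --conf instead of in code.
val spark = SparkSession.builder()
  .appName("coarse-grained-scheduler-example")
  // More heap per executor, plus extra off-heap headroom so YARN does not
  // kill the container for exceeding its memory limit.
  .config("spark.executor.memory", "4g")
  .config("spark.executor.memoryOverhead", "1g")
  .getOrCreate()

val df = spark.read.parquet("/path/to/input") // hypothetical input path

// Spread the data over more partitions so each task handles a smaller slice
// during the shuffle-heavy aggregation.
val result = df
  .repartition(400)   // assumed partition count
  .groupBy("key")     // assumed column name
  .count()

result.write.parquet("/path/to/output") // hypothetical output path
```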


Basically it means that something else caused the failure. Try to find the other exception earlier in your job logs; that is the real root cause.

See "Exceptions" sections here: https://medium.com/@wx.london.cun/spark-on-yarn-f74e82ab6070