Hadoop Error - All data nodes are aborting

You seem to be hitting your user's open file handle limit. This is a fairly common issue and can usually be resolved by increasing the ulimit values (the default is often 1024, which is easily exhausted by multi-output jobs like yours).

You can follow this short guide to increase it: http://blog.cloudera.com/blog/2009/03/configuration-parameters-what-can-you-just-ignore/ (see the "File descriptor limits" section).
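
Before changing anything, it can help to confirm what limit the process actually sees. Below is a minimal Python sketch (run it as the same user that runs the Hadoop daemons/tasks); the target value 16384 is just an illustrative number, not a Hadoop-mandated setting:

    import resource

    # Soft and hard limits on open file descriptors (RLIMIT_NOFILE)
    # for the current process.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("open file descriptor limit: soft=%d, hard=%d" % (soft, hard))

    # A soft limit of 1024 is a common default and is easily exhausted
    # by jobs that open many output files at once. The soft limit can be
    # raised up to the hard limit without root; raising the hard limit
    # itself requires editing /etc/security/limits.conf (or equivalent)
    # and logging in again.
    target = 16384  # illustrative target, adjust to your needs
    if soft < target <= hard:
        resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
        print("raised soft limit to %d for this process" % target)

Note that this only affects the current process; for the DataNode and TaskTracker daemons themselves, the limit has to be raised system-wide as described in the guide above and the services restarted.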

Answered by Harsh J - https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/kJRUkVxmfhw