Spark DataFrame reduceByKey-like operation

If you don't care about column names, you can use groupBy followed by sum:

df.groupBy($"key").sum("value")

Otherwise it is better to replace sum with agg, which lets you name the result column (note the required import of the sum function):

import org.apache.spark.sql.functions.sum

df.groupBy($"key").agg(sum($"value").alias("value"))
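If you don't have a Spark session handy, the per-key aggregation both snippets compute can be sketched in plain Python. This is only an emulation of the semantics on (key, value) tuples, not the Spark API; the helper name sum_by_key is made up for illustration:

```python
from collections import defaultdict

def sum_by_key(rows):
    """Emulate df.groupBy("key").agg(sum("value")) on (key, value) tuples."""
    totals = defaultdict(int)
    for key, value in rows:
        totals[key] += value
    return dict(totals)

rows = [("a", 1), ("b", 2), ("a", 3)]
print(sum_by_key(rows))  # → {'a': 4, 'b': 2}
```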

Finally, you can use raw SQL:

df.registerTempTable("df")
sqlContext.sql("SELECT key, SUM(value) AS value FROM df GROUP BY key")

Note that registerTempTable is deprecated since Spark 2.0; use df.createOrReplaceTempView("df") and spark.sql(...) instead.
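The same GROUP BY query can be checked against any SQL engine; here is a small sketch using Python's built-in sqlite3 with an in-memory table (the table name and sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE df (key TEXT, value INTEGER)")
conn.executemany("INSERT INTO df VALUES (?, ?)", [("a", 1), ("b", 2), ("a", 3)])

# The same query the answer runs via sqlContext.sql, plus ORDER BY
# to make the output deterministic.
result = conn.execute(
    "SELECT key, SUM(value) AS value FROM df GROUP BY key ORDER BY key"
).fetchall()
print(result)  # → [('a', 4), ('b', 2)]
```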

See also DataFrame / Dataset groupBy behaviour/optimization


I think user goks missed out a part of the code. It is not tested code.

.map should have been used first to convert the RDD to a pair RDD, e.g. rdd.map(lambda x: (x, 1)).reduceByKey(...).

reduceByKey is not available on a single-value (regular) RDD, only on a pair RDD.
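The map-then-reduceByKey pattern described above can be emulated without Spark. This is a minimal sketch of the semantics on a plain Python list; reduce_by_key is a hypothetical helper, not the PySpark API:

```python
from operator import add

def reduce_by_key(pairs, func):
    """Emulate rdd.reduceByKey(func) on an iterable of (key, value) pairs."""
    acc = {}
    for k, v in pairs:
        acc[k] = func(acc[k], v) if k in acc else v
    return acc

words = ["spark", "scala", "spark"]
pairs = [(w, 1) for w in words]   # the .map(lambda x: (x, 1)) step
print(reduce_by_key(pairs, add))  # → {'spark': 2, 'scala': 1}
```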

Thx