Group by, rank, and aggregate a Spark DataFrame using PySpark

Add rank:

from pyspark.sql.functions import collect_list, dense_rank, desc, sort_array, struct
from pyspark.sql.window import Window

# dense_rank within each A partition, highest C first
ranked = df.withColumn(
  "rank", dense_rank().over(Window.partitionBy("A").orderBy(desc("C"))))

Group by:

grouped = ranked.groupBy("B").agg(collect_list(struct("A", "rank")).alias("tmp"))
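
Each row of grouped now pairs a B value with an unsorted array of (A, rank) structs. Continuing the toy example:

grouped.show(truncate=False)
# B = b1 -> tmp = [[a1, 2], [a2, 1]]
# B = b2 -> tmp = [[a1, 1]]
# collect_list gives no ordering guarantee, hence the sort in the next step.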

Sort and select:

result = grouped.select("B", sort_array("tmp")["rank"].alias("ranks"))
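
For the toy data this yields one row per B, with ranks ordered by A (sort_array compares the structs field by field, so it sorts on A first):

result.show()
# +---+------+
# |  B| ranks|
# +---+------+
# | b1|[2, 1]|
# | b2|   [1]|
# +---+------+
# (row order may vary)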

Tested with Spark 2.1.0.


An alternative using row_number:

from pyspark.sql.functions import row_number
from pyspark.sql.window import Window

# Unlike dense_rank, row_number gives each row a unique consecutive number.
windowSpec = Window.partitionBy("col1").orderBy("col2")
ranked = demand.withColumn("col_rank", row_number().over(windowSpec))
ranked.show(1000)
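
The choice between row_number and dense_rank matters when the ordering column has ties; a small sketch with assumed toy data:

from pyspark.sql import SparkSession
from pyspark.sql.functions import dense_rank, row_number
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

w = Window.partitionBy("col1").orderBy("col2")
tied = spark.createDataFrame([("x", 1), ("x", 1), ("x", 2)], ["col1", "col2"])
tied.select(
    "col1", "col2",
    dense_rank().over(w).alias("dense_rank"),
    row_number().over(w).alias("row_number")).show()
# dense_rank: 1, 1, 2 -- tied rows share a rank, with no gaps
# row_number: 1, 2, 3 -- every row unique; order among ties is nondeterministic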