scala - Spark: How to union all DataFrames in a loop

Steffen Schmitz's answer is the most concise one, I believe. Below is a more detailed answer if you are looking for more customization (of field types, etc.):

import org.apache.spark.sql.types.{StructType, StructField, StringType}
import org.apache.spark.sql.Row
import spark.implicits._  // needed for .toDF on local Seqs (already in scope in spark-shell)

// initialize an empty DataFrame with the target schema
val schema = StructType(
  StructField("aCol", StringType, true) ::
  StructField("bCol", StringType, true) ::
  StructField("name", StringType, true) :: Nil)
var initialDF = spark.createDataFrame(sc.emptyRDD[Row], schema)

// list to iterate through
val fruits = List("apple", "orange", "melon")

for (x <- fruits) {
  // union returns a new Dataset, so reassign; columns are matched by position
  initialDF = initialDF.union(Seq(("aaa", "bbb", x)).toDF("aCol", "bCol", "name"))
}

//initialDF.show()
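For reference, uncommenting the show() call above should print something like the following (Spark does not guarantee row order, so the rows may come back in a different order):

+----+----+------+
|aCol|bCol|  name|
+----+----+------+
| aaa| bbb| apple|
| aaa| bbb|orange|
| aaa| bbb| melon|
+----+----+------+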

References:

  • How to create an empty DataFrame with a specified schema?
  • https://spark.apache.org/docs/2.0.1/api/java/org/apache/spark/sql/Dataset.html
  • https://docs.databricks.com/spark/latest/faq/append-a-row-to-rdd-or-dataframe.html

You could create a sequence of DataFrames and then use reduce:

val results = fruits
  .map(fruit => Seq(("aaa", "bbb", fruit)).toDF("aCol", "bCol", "name"))
  .reduce(_ union _)

results.show()
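One caveat: reduce throws an UnsupportedOperationException when called on an empty collection, so this version fails if fruits is empty. A minimal sketch of a safer variant, assuming the same fruits list, using reduceOption from the standard library:

val maybeResults = fruits
  .map(fruit => Seq(("aaa", "bbb", fruit)).toDF("aCol", "bCol", "name"))
  .reduceOption(_ union _)

maybeResults match {
  case Some(df) => df.show()
  case None     => println("nothing to union")
}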

If you already have multiple different DataFrames to combine, the same reduce pattern works directly on a sequence of them:

// DF1, DF2 and DF3 are existing DataFrames with compatible schemas
val newDFs = Seq(DF1, DF2, DF3)
newDFs.reduce(_ union _)
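A related note: union resolves columns by position, not by name. If your DataFrames share the same columns but in a different order, unionByName (available since Spark 2.3) matches them by name instead. A minimal sketch with two hypothetical DataFrames, df1 and df2:

import spark.implicits._

val df1 = Seq(("aaa", "bbb", "apple")).toDF("aCol", "bCol", "name")
val df2 = Seq(("cherry", "ccc", "ddd")).toDF("name", "aCol", "bCol")

// a positional union would silently put "cherry" into aCol;
// unionByName lines the columns up by name instead
df1.unionByName(df2).show()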