How to use UDF to return multiple columns?
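The examples below assume a small dataframe like this (a sketch; the column names and types are inferred from the sample output shown further down):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().master("local[*]").getOrCreate()
import spark.implicits._

// One-row dataframe matching the sample output below.
// Feature2 is a String because the udf examples call String methods on it.
val df = Seq((1.3, "3.4", 4.5)).toDF("Feature1", "Feature2", "Feature 3")
```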

Struct method

You can define the udf function as

def myFunc: (String => (String, String)) = { s => (s.toLowerCase, s.toUpperCase)}

import org.apache.spark.sql.functions.udf
val myUDF = udf(myFunc)

and use .* as

val newDF = df.withColumn("newCol", myUDF(df("Feature2")))
  .select("Feature1", "Feature2", "Feature 3", "newCol.*")

I have returned a Tuple2 for testing purposes (higher-arity tuples can be used depending on how many columns are required) from the udf function, and it is treated as a struct column. You can then use .* to select all of its elements as separate columns and finally rename them.

You should have output as

+--------+--------+---------+---+---+
|Feature1|Feature2|Feature 3|_1 |_2 |
+--------+--------+---------+---+---+
|1.3     |3.4     |4.5      |3.4|3.4|
+--------+--------+---------+---+---+

You can rename _1 and _2
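For example, the struct fields can be renamed during the select (a sketch; `lower` and `upper` are illustrative names, and `$` assumes `import spark.implicits._`):

```scala
// Select the struct fields by name and alias them in one step.
// "lower" and "upper" are illustrative column names.
val renamedDF = df
  .withColumn("newCol", myUDF(df("Feature2")))
  .select($"Feature1", $"Feature2", $"Feature 3",
    $"newCol._1".as("lower"),   // first tuple element
    $"newCol._2".as("upper"))   // second tuple element
```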

Array method

The udf function should return an array

def myFunc: (String => Array[String]) = { s => Array(s.toLowerCase, s.toUpperCase) }

import org.apache.spark.sql.functions.udf
val myUDF = udf(myFunc)

And then you can select the elements of the array and use alias to rename them

val newDF = df.withColumn("newCol", myUDF(df("Feature2")))
  .select($"Feature1", $"Feature2", $"Feature 3", $"newCol"(0).as("Slope"), $"newCol"(1).as("Offset"))

You should have

+--------+--------+---------+-----+------+
|Feature1|Feature2|Feature 3|Slope|Offset|
+--------+--------+---------+-----+------+
|1.3     |3.4     |4.5      |3.4  |3.4   |
+--------+--------+---------+-----+------+
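On Spark 2.4+, `element_at` (which uses 1-based indices for arrays) is an equivalent way to pull the elements out:

```scala
import org.apache.spark.sql.functions.{udf, element_at}

// element_at uses 1-based indexing for array columns
val newDF2 = df
  .withColumn("newCol", myUDF(df("Feature2")))
  .select($"Feature1", $"Feature2", $"Feature 3",
    element_at($"newCol", 1).as("Slope"),
    element_at($"newCol", 2).as("Offset"))
```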

Also, you can return a case class:

case class NewFeatures(slope: Double, offset: Int)

val getNewFeatures = udf { s: String =>
      NewFeatures(???, ???)
    }

df
  .withColumn("newF", getNewFeatures($"Feature1"))
  .select($"Feature1", $"Feature2", $"Feature3", $"newF.slope", $"newF.offset")

What is still missing above is an explanation of how to assign the multiple values in the case class to several columns in the dataframe.
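One way to close that gap, assuming the `getNewFeatures` udf above has its `???` placeholders filled in, is to expand the struct with `.*`, the same trick as in the struct method:

```scala
// Expanding the struct yields one dataframe column per case-class field,
// already named after the fields (slope and offset), so no renaming is needed.
df.withColumn("newF", getNewFeatures($"Feature1"))
  .select($"Feature1", $"Feature2", $"Feature3", $"newF.*")
```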

So, in summary, here is a complete example in Scala:

import org.apache.spark.sql.functions.udf

val df = Seq((1L, 3.0, "a"), (2L, -1.0, "b"), (3L, 0.0, "c")).toDF("x", "y", "z")

case class Foobar(foo: Double, bar: Double)

val foobarUdf = udf((x: Long, y: Double, z: String) => 
  Foobar(x * y, z.head.toInt * y))

val df1 = df
  .withColumn("foo", foobarUdf($"x", $"y", $"z").getField("foo"))
  .withColumn("bar", foobarUdf($"x", $"y", $"z").getField("bar"))

If you check the schema of the df1 dataframe, you'll get

scala> df1.printSchema
root
 |-- x: long (nullable = false)
 |-- y: double (nullable = false)
 |-- z: string (nullable = true)
 |-- foo: double (nullable = true)
 |-- bar: double (nullable = true)
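Note that df1 above states foobarUdf twice. A sketch of writing the struct into a single column first and then pulling both fields out of it, so the udf appears only once in the query:

```scala
// Materialise the struct in one column, then expand its fields.
// The resulting columns keep the case-class field names foo and bar.
val df2 = df
  .withColumn("fb", foobarUdf($"x", $"y", $"z"))
  .select($"x", $"y", $"z", $"fb.foo", $"fb.bar")
```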