How to resolve the AnalysisException: resolved attribute(s) in Spark

As mentioned in my comment, it is related to https://issues.apache.org/jira/browse/SPARK-10925 and, more specifically, https://issues.apache.org/jira/browse/SPARK-14948. Reusing the same DataFrame reference creates ambiguity in attribute naming, so you will have to clone the DataFrame; see the last comment on https://issues.apache.org/jira/browse/SPARK-14948 for an example.
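
Here is a minimal sketch of the problem and of one clone idiom that is commonly suggested in that ticket; the DataFrames and column names below are illustrative, not from the question:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])
df2 = df1.filter(df1["value"] == "a")  # df2 is derived from df1

# df1.join(df2, df1["id"] == df2["id"])  # can raise the AnalysisException,
# because both sides carry column references with the same attribute IDs.

# Clone df2: rebuilding it from its RDD and schema mints fresh attribute IDs.
df2_clone = spark.createDataFrame(df2.rdd, df2.schema)
joined = df1.join(df2_clone, df1["id"] == df2_clone["id"])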


If you have df1 and a df2 derived from df1, try renaming all shared columns in df2 so that no two columns have identical names after the join. So instead of

df1.join(df2...

do

# Step 1: rename the shared column names in df2.
df2_renamed = (df2
    .withColumnRenamed('columna', 'column_a_renamed')
    .withColumnRenamed('columnb', 'column_b_renamed'))

# Step 2: join against the renamed df2 so that no two columns share a name
# (the join keys below are illustrative; substitute your own).
df1.join(df2_renamed, df1['columna'] == df2_renamed['column_a_renamed'])
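
If df2 shares many columns with df1, renaming them one at a time gets verbose. A sketch that renames every column in bulk (the _renamed suffix is just an illustrative convention):

import pyspark.sql.functions as F

df2_renamed = df2.select([F.col(c).alias(c + '_renamed') for c in df2.columns])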

This issue cost me a lot of time, and I finally found an easy solution for it.

In PySpark, for the problematic column, say colA, we could simply use

import pyspark.sql.functions as F

# Re-selecting the column under an alias makes Spark generate a fresh
# attribute reference for it, which resolves the ambiguity.
df = df.select(F.col("colA").alias("colA"))

prior to using df in the join.
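
Note that the select above keeps only colA; if df has other columns you need to keep, the same trick can be applied to every column at once (a sketch continuing from the snippet above, with the column list taken from df itself):

df = df.select([F.col(c).alias(c) for c in df.columns])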

I think this should work for Scala/Java Spark too.