Pyspark - Load file: Path does not exist

You are right: the file is missing from your worker nodes, and that is what raises the error you got.

Here is what the official documentation says on the subject (see External Datasets):

If using a path on the local filesystem, the file must also be accessible at the same path on worker nodes. Either copy the file to all workers or use a network-mounted shared file system.

So basically you have two solutions:

You copy your file into each worker before starting the job;

Or upload it to HDFS (the recommended solution) with something like:

hadoop fs -put localfile /user/hadoop/hadoopfile.csv

Now you can read it with:

df = spark.read.csv('/user/hadoop/hadoopfile.csv', header=True)

It seems that you are also using AWS S3. You could also read the file directly from S3 without downloading it first (with the proper credentials configured, of course).
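As a rough sketch (the bucket name and key below are placeholders, and your cluster needs the S3 connector and credentials set up):

df = spark.read.csv('s3a://your-bucket/observations_temp.csv', header=True)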

Some suggest using the --files flag of spark-submit, which ships the listed files to the executors' working directories. I don't recommend this approach unless your CSV file is very small, but in that case you won't really need Spark.
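For reference, that flag is passed like this (the script name here is just a placeholder):

spark-submit --files /home/hadoop/observations_temp.csv your_script.py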

Either way, I would stick with HDFS (or any distributed file system).


I think what you are missing is explicitly setting the master while initializing the SparkSession. Try something like this:

spark = SparkSession \
    .builder \
    .master("local") \
    .appName("Protob Conversion to Parquet") \
    .config("spark.some.config.option", "some-value") \
    .getOrCreate()

and then read the file in the same way you have been doing:

df = spark.read.csv('file:///home/hadoop/observations_temp.csv')

This should solve the problem.