How do I find the Hadoop HDFS directory on my system?

If you run:

hdfs dfs -copyFromLocal foo.txt bar.txt

then the local file foo.txt will be copied into your own HDFS home directory as /user/popeye/bar.txt (where popeye is your username). As a result, the following achieves the same:

hdfs dfs -copyFromLocal foo.txt /user/popeye/bar.txt

Before copying any file into HDFS, just be certain to create the parent directory first. You don't have to put files in this "home" directory, but (1) it is better not to clutter "/" with all sorts of files, and (2) following this convention helps prevent conflicts with other users.
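For example, a minimal sketch of the full sequence, assuming your username is popeye (substitute your own):

hdfs dfs -mkdir -p /user/popeye          # create your HDFS home directory first (-p makes this idempotent)
hdfs dfs -copyFromLocal foo.txt bar.txt  # lands in /user/popeye/bar.txt
hdfs dfs -ls /user/popeye                # verify the file is there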


Your approach is wrong, or maybe your understanding is wrong.

dfs.datanode.data.dir is the local filesystem path where the DataNode stores its data blocks; it is not an HDFS path you browse directly.
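If you want to see what that path is set to on your cluster, you can query the configuration from the shell (a quick check, assuming a standard HDFS installation):

hdfs getconf -confKey dfs.datanode.data.dir   # prints the configured DataNode block storage directory (or directories)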

If you type hdfs dfs -ls / you will get the list of directories in HDFS. Then you can transfer files from local to HDFS using -copyFromLocal or -put into a particular directory, or create a new directory using -mkdir.
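For example, a short sketch of that workflow (all paths here are placeholders):

hdfs dfs -ls /                         # list the top-level HDFS directories
hdfs dfs -mkdir -p /user/me/data       # create a new directory
hdfs dfs -put foo.txt /user/me/data/   # or: hdfs dfs -copyFromLocal foo.txt /user/me/data/
hdfs dfs -ls /user/me/data             # confirm the upload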

Refer to the link below for more information:

http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HDFSCommands.html


As per the first answer, I am elaborating on it in detail for Hadoop 1.x.

Suppose you are running this on a pseudo-distributed cluster: you will probably see one or two user directories listed.

On a fully distributed cluster, you first need administrator rights to perform these operations, and there can be any number of user directories listed.

So now, to the point:

First go to your Hadoop home directory and from there run this command:

bin/hadoop fs -ls /

The result will look like this:

drwxr-xr-x   - xuiob78126arif supergroup          0 2017-11-30 11:20 /user

So here xuiob78126arif is my user name, and that user's directory is:

/user/xuiob78126arif/
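To list the contents of that user directory (the username here is the one from the listing above; use your own):

bin/hadoop fs -ls /user/xuiob78126arif/   # with no path argument, -ls defaults to this home directory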

Now you can open your browser and go to this address:

http://xuiob78126arif:50070

and from there you can see the Cluster Summary, NameNode Storage, and so on.
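The same summary page can also be fetched from the shell; the host and port here are the ones from the address above, and dfshealth.jsp is the Hadoop 1.x NameNode status page:

curl http://xuiob78126arif:50070/dfshealth.jsp   # returns the Cluster Summary page as HTML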

Note: the command will only show results if at least one file or directory exists in HDFS; otherwise you will get:

ls: Cannot access .: No such file or directory.

So, in that case, first put a file with bin/hadoop fs -put <local file path> <HDFS destination path>

and thereafter run bin/hadoop fs -ls / again.
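Put together, a minimal recovery sketch (the local file path is a placeholder; $(whoami) expands to your login name):

bin/hadoop fs -mkdir /user/$(whoami)                 # Hadoop 1.x mkdir creates parent directories automatically
bin/hadoop fs -put /tmp/sample.txt /user/$(whoami)/  # upload any local file
bin/hadoop fs -ls /                                  # the listing now succeeds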

I hope this gives you a bit of insight into your issue. Thanks.