finding unique values in a data file

I started by viewing the file with cat.

The file contains the following (here the file is foo.sh; you can use any file name):

$cat foo.sh

tar
world
class
zip
zip
zip
python
jin
jin
doo
doo

uniq prints each word only once. The input must be sorted first, because uniq only collapses adjacent duplicate lines:

$ cat foo.sh | sort | uniq

class
doo
jin
python
tar
world
zip
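A closely related shortcut: sort -u sorts and de-duplicates in one step, so the pipeline above can be shortened. A minimal sketch, recreating the sample file first (the file name foo.sh is taken from the example above):

```shell
# Recreate the sample file from the example above
printf 'tar\nworld\nclass\nzip\nzip\nzip\npython\njin\njin\ndoo\ndoo\n' > foo.sh

# sort -u sorts and removes duplicates in one step,
# equivalent to `sort foo.sh | uniq`
sort -u foo.sh
```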

uniq -u prints only the words that appear exactly once in the file:

$ cat foo.sh | sort | uniq -u

class
python
tar
world

uniq -d prints only the duplicated words, each shown once:

$ cat foo.sh | sort | uniq -d

doo
jin
zip
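If you also want to know how often each word appears, uniq -c is worth mentioning (my addition, not part of the original question). A short sketch against the same sample file:

```shell
# Recreate the sample file from the example above
printf 'tar\nworld\nclass\nzip\nzip\nzip\npython\njin\njin\ndoo\ndoo\n' > foo.sh

# uniq -c prefixes each distinct word with its occurrence count,
# e.g. zip appears 3 times, class appears once
sort foo.sh | uniq -c
```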

grep name1 filename | cut -d ' ' -f 4 | sort -u

This finds all lines containing name1, extracts the fourth space-separated field, and prints only the unique values.
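To make that concrete, here is a sketch with a made-up data file (the file name data.txt and its contents are hypothetical; name1 and the fourth field match the pipeline above):

```shell
# Hypothetical sample: space-separated lines, 4th field is a value
printf 'name1 a b red\nname1 a b blue\nname1 a b red\nname2 a b green\n' > data.txt

# Keep lines containing name1, cut out the 4th field,
# then sort and de-duplicate the values
grep name1 data.txt | cut -d ' ' -f 4 | sort -u
```

Only the values from name1 lines survive, and the repeated red is printed once.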

Tags:

Linux

Shell

Bash