Removing duplicate rows from a CSV file using a Python script

UPDATE: 2016

If you are happy to use the helpful more_itertools external library:

from more_itertools import unique_everseen
with open('1.csv','r') as f, open('2.csv','w') as out_file:
    out_file.writelines(unique_everseen(f))
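
unique_everseen also accepts a key function if you only want to compare particular columns rather than whole lines. A minimal sketch, assuming (purely for illustration) that the first column uniquely identifies a row:

import csv
from more_itertools import unique_everseen

with open('1.csv', 'r', newline='') as f, open('2.csv', 'w', newline='') as out_file:
    writer = csv.writer(out_file)
    # Rows count as duplicates when their first column matches;
    # using column 0 as the identifier is an assumption for this example.
    writer.writerows(unique_everseen(csv.reader(f), key=lambda row: row[0]))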

A more efficient version of @IcyFlame's solution:

with open('1.csv', 'r') as in_file, open('2.csv', 'w') as out_file:
    seen = set()  # set for fast O(1) amortized lookup
    for line in in_file:
        if line in seen:
            continue  # skip duplicate
        seen.add(line)
        out_file.write(line)
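
For very large files where holding every line in memory is a concern, one option is to store only a fixed-size hash of each line in the set instead of the line itself. A sketch of that idea (note the small theoretical risk that a hash collision drops a non-duplicate line):

import hashlib

with open('1.csv', 'r') as in_file, open('2.csv', 'w') as out_file:
    seen = set()  # stores 32-byte digests instead of full lines
    for line in in_file:
        digest = hashlib.sha256(line.encode('utf-8')).digest()
        if digest in seen:
            continue  # skip duplicate
        seen.add(digest)
        out_file.write(line)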

To edit the same file in place, you could use this:

import fileinput

seen = set()  # set for fast O(1) amortized lookup
for line in fileinput.FileInput('1.csv', inplace=True):
    if line in seen:
        continue  # skip duplicate
    seen.add(line)
    print(line, end='')  # standard output is now redirected to the file
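
Since in-place editing overwrites the original file, fileinput's backup parameter can keep a copy of it, which is safer. A sketch using that option (the '.bak' suffix is just an example):

import fileinput

seen = set()
# backup='.bak' preserves the original as 1.csv.bak in case something goes wrong
for line in fileinput.FileInput('1.csv', inplace=True, backup='.bak'):
    if line in seen:
        continue
    seen.add(line)
    print(line, end='')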

You can achieve deduplication efficiently using pandas:

import pandas as pd
file_name = "my_file_with_dupes.csv"
file_name_output = "my_file_without_dupes.csv"

df = pd.read_csv(file_name, sep=",")  # use sep="\t" for tab-separated input

# Notes:
# - `subset=None` means that every column is used
#   to determine whether two rows are duplicates; to change that,
#   pass a list of column names instead
# - `inplace=True` means that the DataFrame itself is modified and
#   the duplicate rows are dropped
df.drop_duplicates(subset=None, inplace=True)

# Write the results to a different file
df.to_csv(file_name_output, index=False)
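
drop_duplicates can also deduplicate on a subset of columns and control which occurrence survives. A brief sketch (the "email" and "name" column names are hypothetical):

import pandas as pd

df = pd.read_csv("my_file_with_dupes.csv")
# Keep the first row for each distinct (email, name) pair;
# keep="last" would retain the last occurrence instead.
deduped = df.drop_duplicates(subset=["email", "name"], keep="first")
deduped.to_csv("my_file_without_dupes.csv", index=False)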

You can use the following script:

pre-condition:

  1. 1.csv is the file that contains the duplicates.
  2. 2.csv is the output file, which will be devoid of duplicates once this script is executed.

code

inFile = open('1.csv', 'r')
outFile = open('2.csv', 'w')
listLines = []

for line in inFile:
    if line in listLines:
        continue
    else:
        outFile.write(line)
        listLines.append(line)

outFile.close()
inFile.close()

Algorithm Explanation

Here, what I am doing is:

  1. Opening a file in read mode. This is the file that has the duplicates.
  2. Then, in a loop that runs until the file is over, we check whether the line has already been encountered.
  3. If it has been encountered, we don't write it to the output file.
  4. If not, we write it to the output file and add it to the list of records that have been encountered already.
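
As a quick sanity check, here is a self-contained version of the same logic run against a tiny sample file (the sample rows are made up):

with open('1.csv', 'w') as f:
    f.write('a,1\nb,2\na,1\nc,3\nb,2\n')  # sample input with duplicates

inFile = open('1.csv', 'r')
outFile = open('2.csv', 'w')
listLines = []
for line in inFile:
    if line in listLines:
        continue
    outFile.write(line)
    listLines.append(line)
outFile.close()
inFile.close()

with open('2.csv') as f:
    print(f.read())
# Each unique row survives exactly once, in first-seen order:
# a,1
# b,2
# c,3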

Tags:

Python

File IO