Redshift COPY command delimiter not found

I don't think the problem is a missing <tab> at the end of lines. Are you sure that ALL lines have the correct number of fields?

Run the query:

select le.starttime, d.query, d.line_number, d.colname, d.value,
       le.raw_line, le.err_reason
from stl_loaderror_detail d, stl_load_errors le
where d.query = le.query
order by le.starttime desc
limit 100;

to get the full error report. It will show the file name, the line number where the error occurred, and the error details.

This will help you find where the problem lies.
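
If you only need the most recent failure, querying stl_load_errors on its own is often enough. A minimal sketch (these are standard columns of the stl_load_errors system table):

select starttime, filename, line_number, colname, err_reason, raw_line
from stl_load_errors
order by starttime desc
limit 1;

The raw_line column shows the exact input row that failed, which makes a wrong field count easy to spot.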


From my understanding, the Delimiter not found error message may also be caused by an incorrectly specified COPY command, in particular by not specifying the data format parameters: https://docs.aws.amazon.com/redshift/latest/dg/r_COPY.html

In my case I was trying to load Parquet data with this statement:

COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
REGION 'my-region-1';

and I received the Delimiter not found error message when looking into the system table stl_load_errors. But specifying that I was dealing with Parquet data, like this:

COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
FORMAT AS PARQUET;

solved the problem, and I was able to load the data correctly.
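
The same idea applies to delimited text files: state the format explicitly. A hedged sketch, reusing the placeholders above and assuming a comma-separated file with a header row:

COPY my_schema.my_table
FROM 's3://my_bucket/my/folder/'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
FORMAT AS CSV
IGNOREHEADER 1;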


You can get the Delimiter not found error if your row has fewer columns than expected. Some CSV generators may just output a single quote at the end if the last columns are null.

To solve this, you can use the FILLRECORD parameter in the Redshift COPY options, as in the sketch below.
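
A minimal sketch of what that can look like, reusing the placeholders from the answer above (the bucket, role, and file names are assumptions). FILLRECORD loads the missing trailing columns as NULLs instead of failing the row:

COPY my_schema.my_table
FROM 's3://my_bucket/my/file.csv'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
FORMAT AS CSV
FILLRECORD;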


I know this was answered, but I just dealt with the same error and had a simple solution, so I'll share it.

This error can also be solved by listing the specific columns of the table that are copied from the S3 files (if you know which columns are in the data on S3). In my case the data had fewer columns than the table. Madahava's answer with the FILLRECORD option DID solve the issue for me, but then I noticed that a column that was supposed to be filled with default values remained null.

COPY <table> (col1, col2, col3) FROM 's3://somebucket/file' ...
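
Filled out with the placeholders from the earlier answers (the schema, bucket, and role names are assumptions), that could look like:

COPY my_schema.my_table (col1, col2, col3)
FROM 's3://my_bucket/my/file.csv'
IAM_ROLE 'arn:aws:iam::my_role:role/my_redshift_role'
FORMAT AS CSV;

Columns omitted from the list are populated with their default values rather than left NULL, which is what the FILLRECORD route missed in my case.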