Is there any way in Elasticsearch to get search results as a CSV file via the curl API?

I've done just this using cURL and jq ("like sed, but for JSON"). For example, you can do the following to get CSV output for the top 20 values of a given facet:

$ curl -X GET 'http://localhost:9200/myindex/item/_search?from=0&size=0' -d '
    {"from": 0,
    "size": 0,
    "facets": {
      "sourceResource.subject.name": {
        "global": true,
        "terms": {
          "order": "count",
          "size": 20,
          "all_terms": true,
          "field": "sourceResource.subject.name.not_analyzed"
        }
      }
    },
    "sort": [
      {
        "_score": "desc"
      }
    ],
    "query": {
      "filtered": {
        "query": {
          "match_all": {}
        }
      }
    }
  }' | jq -r '.facets["sourceResource.subject.name"].terms[] | [.term, .count] | @csv'

"United States",33755
"Charities--Massachusetts",8304
"Almshouses--Massachusetts--Tewksbury",8304
"Shields",4232
"Coat of arms",4214
"Springfield College",3422
"Men",3136
"Trees",3086
"Session Laws--Massachusetts",2668
"Baseball players",2543
"Animals",2527
"Books",2119
"Women",2004
"Landscape",1940
"Floral",1821
"Architecture, Domestic--Lowell (Mass)--History",1785
"Parks",1745
"Buildings",1730
"Houses",1611
"Snow",1579

I've used Python successfully for this, and the scripting approach is intuitive and concise. The ES client for Python makes life easy; it's on PyPI, so `pip install elasticsearch` also works. First grab the latest Elasticsearch client for Python here:
http://www.elasticsearch.org/blog/unleash-the-clients-ruby-python-php-perl/#python

Then your Python script can include calls like:

import csv
import elasticsearch

es = elasticsearch.Elasticsearch(["10.1.1.1:9200"])
# this returns up to 500 rows; adjust to your needs
res = es.search(index="YourIndexName", body={"query": {"match": {"title": "elasticsearch"}}}, size=500)
sample = res['hits']['hits']

# then open a csv file and loop through the results, writing each row to the csv
with open('outputfile.tsv', 'w', newline='') as csvfile:
    # we use TAB delimited, to handle cases where freeform text may have a comma
    filewriter = csv.writer(csvfile, delimiter='\t',
                            quotechar='|', quoting=csv.QUOTE_MINIMAL)
    # create the column header row
    filewriter.writerow(["column1", "column2", "column3"])  # change the column labels here
    for hit in sample:
        # fill columns 1, 2, 3 with your data
        col1 = hit["_source"]["some"]["deeply"]["nested"]["field"]  # replace these nested key names with your own
        col1 = col1.replace('\n', ' ')
        # col2 = ..., col3 = ..., etc.
        filewriter.writerow([col1, col2, col3])

You may want to wrap the nested key lookups in try / except error handling, since documents are unstructured and may not always contain a given field (this depends on your index).
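
For example, a minimal sketch of that guard (the nested key names are the same placeholders as above):

for hit in sample:
    try:
        col1 = hit["_source"]["some"]["deeply"]["nested"]["field"]
    except KeyError:
        col1 = ""  # this document doesn't have the field; emit an empty cell
    filewriter.writerow([col1])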

I have a complete Python sample script using the latest ES Python client available here:

https://github.com/jeffsteinmetz/pyes2csv