FIELDDATA Data is too large

I ran into this problem too, so I checked the fielddata memory.

Use the request below:

GET /_stats/fielddata?fields=*

The output shows:

"logstash-2016.04.02": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes": 53009116,
      "evictions": 0,
      "fields": {

      }
    }
  }
},
"logstash-2016.04.29": {
  "primaries": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  },
  "total": {
    "fielddata": {
      "memory_size_in_bytes":0,
      "evictions": 0,
      "fields": {

      }
    }
  }
},

You can see that my indices are named by date and that evictions are all 0. In addition, the old index 2016.04.02 holds 53009116 bytes of fielddata memory, while the newer 2016.04.29 holds 0.

So I can conclude that the fielddata from old indices has occupied all the available memory, leaving nothing for new data. When I then run an aggregation query against the new data, it raises the CircuitBreakingException.
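To confirm where that memory lives, you can also look at the per-node statistics with the node stats API (I am assuming Elasticsearch is reachable on localhost:9200, as in the curl examples below):

curl -XGET 'localhost:9200/_nodes/stats/indices/fielddata?fields=*'

It returns the fielddata memory used on each node, so you can see which node is close to the circuit breaker limit.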

You can set this in config/elasticsearch.yml:

indices.fielddata.cache.size: 20%

This makes Elasticsearch evict fielddata when it reaches the memory limit.
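If you want to free memory immediately instead of waiting for evictions, you can also clear the fielddata cache on the old indices with the clear cache API (using my old index as an example; the fielddata will simply be rebuilt the next time someone aggregates on that index):

# drops only the fielddata held by this index
curl -XPOST 'localhost:9200/logstash-2016.04.02/_cache/clear?fielddata=true'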

But the real solution may be to add more memory in the future. Monitoring fielddata memory usage is also a good habit.
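For day-to-day monitoring, the cat API gives a compact per-node, per-field view of fielddata usage (assuming localhost:9200 again):

curl -XGET 'localhost:9200/_cat/fielddata?v'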

More detail: https://www.elastic.co/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html


You can try to increase the fielddata circuit breaker limit to 75% (default is 60%) in your elasticsearch.yml config file and restart your cluster:

indices.breaker.fielddata.limit: 75%

Or, if you prefer not to restart your cluster, you can change the setting dynamically:

curl -XPUT localhost:9200/_cluster/settings -d '{
  "persistent" : {
    "indices.breaker.fielddata.limit" : "40%" 
  }
}'
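After changing it, you can verify that the setting was applied and watch how often the breaker trips. The breaker section of the node stats API shows the configured limit, the currently estimated size, and a tripped counter for each breaker:

curl -XGET 'localhost:9200/_cluster/settings?pretty'
curl -XGET 'localhost:9200/_nodes/stats/breaker?pretty'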

Give it a try.