
Elasticsearch configuration options

This is an attempt at a complete listing of elasticsearch config variables since they’re located all over the website.

The list is not complete, and will start to “rot” as soon as it’s published, but… If you know of some variables that aren’t listed, please let me know.

Note that static settings must be set in the config file on every machine in the cluster.



Name Type Notes
bootstrap.mlockall Static




Name Type Notes
cluster.blocks.read_only Dynamic
cluster.routing.allocation.allow_rebalance Dynamic
cluster.routing.allocation.awareness.attributes Dynamic
cluster.routing.allocation.balance.shard Unknown
cluster.routing.allocation.balance.index Unknown
cluster.routing.allocation.balance.threshold Unknown
cluster.routing.allocation.cluster_concurrent_rebalance Dynamic
cluster.routing.allocation.disk.include_relocations Dynamic
cluster.routing.allocation.disk.threshold_enabled Dynamic
cluster.routing.allocation.disk.watermark.low Dynamic
cluster.routing.allocation.disk.watermark.high Dynamic
cluster.routing.allocation.enable Dynamic
cluster.routing.allocation.exclude Dynamic
cluster.routing.allocation.include Dynamic
cluster.routing.allocation.node_concurrent_recoveries Dynamic
cluster.routing.allocation.node_initial_primaries_recoveries Dynamic
cluster.routing.allocation.require Dynamic
cluster.routing.allocation.total_shards_per_node Dynamic
cluster.routing.rebalance.enable Dynamic


ec2 discovery can also have: groups, host_type, availability_zones, any_group, ping_timeout, and node_cache_time. Use these inside discovery.ec2, e.g. discovery.ec2.groups.
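As a sketch, the ec2 settings nest under discovery.ec2 in elasticsearch.yml (the values here are made up, just to show the nesting):

```yaml
# elasticsearch.yml -- hypothetical values
discovery:
  type: ec2
  ec2:
    groups: my-security-group
    ping_timeout: 30s
```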

Name Type Notes
discovery.type Dynamic
discovery.zen.minimum_master_nodes Dynamic


Name Type Notes
gateway.expected_nodes Unknown
gateway.expected_master_nodes Static
gateway.expected_data_nodes Static
gateway.recover_after_time Static
gateway.recover_after_nodes Static
gateway.recover_after_master_nodes Static
gateway.recover_after_data_nodes Static


Name Type Notes
http.port Static
http.publish_port Static
http.bind_host Static
http.publish_host Static
http.max_content_length Static
http.max_initial_line_length Static
http.max_header_size Static
http.compression Static
http.compression_level Static
http.cors.enabled Static
http.cors.allow-origin Static
http.cors.max-age Static
http.cors.allow-methods Static
http.cors.allow-headers Static
http.cors.allow-credentials Static
http.detailed_errors.enabled Static
http.pipelining Static
http.pipelining.max_events Static


Name Type Notes
index.analysis.analyzer Static
index.analysis.filter Static
index.analysis.tokenizer Static
index.auto_expand_replicas Dynamic
index.blocks.metadata Dynamic
index.blocks.read_only Dynamic
index.blocks.write Dynamic
index.codec Static
index.gateway.local.sync Unknown Renamed to index.translog.sync_interval in ES 2.0
index.max_result_window Dynamic
index.merge.policy.calibrate_size_by_deletes Unknown Removed in ES 2.0
index.merge.policy.expunge_deletes_allowed Unknown Removed in ES 2.0
index.merge.policy.max_merge_docs Unknown Removed in ES 2.0
index.merge.policy.max_merge_size Unknown Removed in ES 2.0
index.merge.policy.merge_factor Unknown Removed in ES 2.0
index.merge.policy.min_merge_docs Unknown Removed in ES 2.0
index.merge.policy.min_merge_size Unknown Removed in ES 2.0
index.merge.policy.type Unknown Removed in ES 2.0
index.merge.scheduler.max_thread_count Dynamic
index.number_of_replicas Dynamic
index.number_of_shards Static
index.recovery.initial_shards Dynamic
index.refresh_interval Dynamic Requires units in ES 2.0
index.routing.allocation.exclude Dynamic
index.routing.allocation.include Dynamic
index.routing.allocation.require Dynamic
index.routing.allocation.total_shards_per_node Dynamic
index.shard.check_on_startup Static
index.similarity.default.type Static
index.ttl.disable_purge Dynamic
index.translog.durability Dynamic
index.translog.fs.type Dynamic
index.translog.flush_threshold_ops Dynamic
index.translog.flush_threshold_period Dynamic
index.translog.flush_threshold_size Dynamic
index.translog.interval Dynamic
index.translog.sync_interval Static
index.unassigned.node_left.delayed_timeout Dynamic


Name Type Notes
indices.analysis.hunspell.dictionary.location Unknown Removed in ES 2.0
indices.recovery.concurrent_streams Dynamic
indices.recovery.concurrent_small_file_streams Dynamic


Name Type Notes
logger.indices.recovery Dynamic
logger.transport.tracer Dynamic


Name Type Notes
network.bind_host Unknown
network.publish_host Unknown
network.tcp.no_delay Unknown
network.tcp.keep_alive Unknown
network.tcp.reuse_address Unknown
network.tcp.send_buffer_size Unknown
network.tcp.receive_buffer_size Unknown


Name Type Notes
node.enable_custom_paths Unknown Removed in ES 2.0
node.master Unknown
node.max_local_storage_nodes Static


Name Type Notes
path.conf Static
path.home Static
path.logs Static
path.plugins Static
path.repo Static
path.scripts Static
path.shared_data Unknown


Name Type Notes
plugin.mandatory Static


Name Type Notes
resource.reload.enabled Unknown
resource.reload.interval Unknown
resource.reload.interval.low Unknown
resource.reload.interval.medium Unknown
resource.reload.interval.high Unknown


Name Type Notes
repositories.url.allowed_urls Unknown


Name Type Notes
script.auto_reload_enabled Static
script.default_lang Static
script.disable_dynamic Unknown Removed in ES 2.0
script.file Static
script.index Static
script.inline Static
script.update Static
script.mapping Static
script.engine.expression Static
script.engine.groovy Static
script.engine.javascript Static
script.engine.mustache Static
script.engine.python Static

Thread Pool

There are several thread pools. Elastic lists the “important” ones as including: generic, index, search, suggest, get, bulk, percolate, snapshot, warmer, refresh, listener. Some settings are documented, and are listed below.

You can also control the number of processors for a thread pool, which is briefly documented here.
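For example (a sketch only; check the docs for your version before relying on it), both knobs go in elasticsearch.yml:

```yaml
# Pretend there are only 4 cores when sizing the default thread pools
processors: 4

# Explicitly size the index pool and its queue
threadpool.index.size: 8
threadpool.index.queue_size: 500
```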

Name Type Notes
threadpool.generic.keep_alive Dynamic
threadpool.index.queue_size Dynamic
threadpool.index.size Dynamic


Transport allows you to bind to multiple ports on different interfaces. See the transport profiles doc for more info.

Name Type Notes
transport.bind_host Unknown
transport.ping_schedule Unknown
transport.publish_host Unknown
transport.publish_port Unknown
transport.tcp.compress Unknown
transport.tcp.connect_timeout Unknown
transport.tcp.port Unknown
transport.tracer.exclude Dynamic
transport.tracer.include Dynamic


There are a lot of options for tribes that vary based on the tribe name. Some info is presented here.


Name Type Notes
tribe.blocks.metadata Unknown
tribe.blocks.metadata.indices Unknown
tribe.blocks.write Unknown
tribe.blocks.write.indices Unknown


Name Type Notes
watcher.enabled Unknown Renamed in ES 2.0
watcher.interval Unknown Renamed in ES 2.0
watcher.interval.low Unknown Renamed in ES 2.0
watcher.interval.medium Unknown Renamed in ES 2.0
watcher.interval.high Unknown Renamed in ES 2.0

Elasticsearch disk space calculations

Each node provides storage capacity to your cluster.  Elasticsearch will stop indexing if the nodes start to fill up.  This is controlled with the cluster.routing.allocation.disk.watermark.low parameter.  By default, no new shards will be allocated when a machine goes above 85% disk space.
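A sketch of the relevant settings in elasticsearch.yml (these values are the defaults, so you only need to set them if you want something different):

```yaml
cluster.routing.allocation.disk.threshold_enabled: true
cluster.routing.allocation.disk.watermark.low: 85%
cluster.routing.allocation.disk.watermark.high: 90%
```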

Clearly you must manage the disk space when all of your nodes are running, but what happens when a node fails?

Let’s look at a three-node cluster, set up with three shards and one replica, so data is evenly spread out across the cluster:


If each node has 1TB of disk space for data, they would hit the per-node 85% limit at 850GB.  If one node failed, the 6 total shards would need to be distributed across two nodes.   In our example, if we lost node #1, the primary for shard 1 and the replica for shard 3 would be lost.  The replica for shard 1 that is on node #2 would be promoted to primary, but we would then have no replica for either shards 1 or 3.  Elasticsearch would try to rebuild the replicas on the remaining hosts:


This is good on paper, except the remaining two nodes would each need to absorb up to 425GB.  The remaining nodes would be full, and no new shards would be allocated.

To plan for a node outage, you need to have enough free disk space on each node to reallocate the primary and replica data from the dead node.

This formula will yield the maximum amount of data a node can safely hold:

(disk per node * .85) * ((node count - 1) / node count)

In my example, we would get:

( 1TB * .85 ) * ( 2 / 3 ) = 566GB

If your three nodes contained 566GB of data each and one node failed, 283GB of data would be rebuilt on the remaining two nodes, putting them at 849GB used space.  This is just below the 85% limit of 850GB.

I would pad the number a little, and limit the disk space used to 550GB for each node, with 1.65TB data total across the 3-node cluster.  This number plays a part in your data retention policy and cluster sizing strategies.

If 1.65TB is too low, you either need to add more space to each node, or add more nodes to the cluster.  If you added a 4th similarly-sized node, you’d get

( 1TB * .85 ) * ( 3 /4 ) = 637GB

which would allow about 2.5TB of storage across the entire cluster.

The formula shown is based on one replica shard.  If you had configured your cluster with more replicas (to survive the outage of more than one node), note that the formula is really:

(space per node * .85) * ((node count - replica count) / node count)

If we had two replicas in our example, we’d get:

( 1TB * .85 ) * ( 1 / 3 ) = 283GB

So you would only allow 283GB of data per node if you wanted to survive a 2-node outage in a 3-node cluster.
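The whole calculation fits in a small shell function (the function name and the default 85% watermark baked into it are mine, not anything official):

```shell
# Maximum safe data per node, in GB, given disk per node (GB),
# node count, and replica count; assumes the default 85% low watermark.
safe_gb() {
  awk -v d="$1" -v n="$2" -v r="$3" 'BEGIN { printf "%d\n", d * 0.85 * (n - r) / n }'
}

safe_gb 1000 3 1   # 3 nodes, 1 replica:  566
safe_gb 1000 4 1   # 4 nodes, 1 replica:  637
safe_gb 1000 3 2   # 3 nodes, 2 replicas: 283
```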

Introduction to Elasticsearch Tokenization and Analysis

Elasticsearch is a text engine.  This is usually good if you have text to index, but can cause problems with other types of input (log files).  One of the more confusing elements of elasticsearch is the idea of tokenization and how fields are analyzed.


In a text engine, you might want to take a string and search for each “word”.  The rules that are used to convert a string into words are defined in a tokenizer.   A simple string:

The quick brown fox

can easily be processed into a series of tokens:

[“the”, “quick”, “brown”, “fox”]

But what about punctuation, or a file path?

Half-blood prince

/var/log/messages

The default tokenizer in elasticsearch will split those up:

[“half”, “blood”, “prince”]

[“var”, “log”, “messages”]

Unfortunately, this means that searching for “half-blood prince” might also find you an article about a royal prince who fell halfway to the floor while donating blood.
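You can approximate this splitting with plain shell tools.  This is only a rough imitation of the standard tokenizer (lowercase, then split on anything that isn’t alphanumeric), not the real Lucene implementation:

```shell
# Rough imitation of the standard tokenizer: lowercase, then split
# on any run of non-alphanumeric characters, dropping empty tokens.
tokenize() {
  echo "$1" | tr 'A-Z' 'a-z' | tr -cs 'a-z0-9' '\n' | sed '/^$/d'
}

tokenize 'Half-blood prince'   # prints half, blood, prince (one per line)
tokenize '/var/log/messages'   # prints var, log, messages (one per line)
```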

As of this writing, there are 12 built-in tokenizers.

You can test some input text against a tokenizer on the command line:

curl -XGET 'localhost:9200/_analyze?analyzer=standard&pretty' -d '/var/log/messages'


An analyzer lets you combine a tokenizer with some other rules to determine how the text will be indexed.  This is not something I’ve had to do, so I don’t have examples or caveats yet.

You can test the analyzer rules on the command line as well:

curl -XGET 'localhost:9200/_analyze?tokenizer=keyword&filters=lowercase' -d 'The quick brown fox'


When you define the mapping for your index, you can control how each field is analyzed.  First, you can specify *if* the field is even to be analyzed or indexed:

"myField": {
    "index": "not_analyzed"
}

By using “not_analyzed”, the value of the field will not be tokenized in any way and will only be available as a raw string.  Since this is very useful for logs, the default template in logstash uses this to create the “.raw” fields (e.g. myField.raw).

You can also specify “no”, which will prevent the field from being indexed at all.

If you would like to use a different analyzer for your field, you can specify that:

"myField": {
    "analyzer": "spanish"
}


SNMP traps with logstash

The Basics

SNMP traps are generally easy to receive and process with logstash.  The snmptrap{} input sets up a listener, which processes each trap and replaces the OIDs with the string representation found in the given mibs.  If the OID can’t be found, logstash will make a new field, using the OID value as the field name, e.g.

"<numeric OID>": "trap value"

(Note that this is currently broken if you use Elasticsearch 2.0).
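For reference, a minimal snmptrap input looks something like this (the port and mib directory are examples; check the plugin docs for your version):

```
input {
  snmptrap {
    port       => 5000
    community  => "public"
    yamlmibdir => "/path/to/yaml/mibs"   # directory of converted mib files
  }
}
```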


Probably the biggest issue with most traps is that they are sent to port 162, which is a low-numbered “system” port.  For logstash to listen on that port, it must be run as root, which is not recommended.

The easiest workaround for this is to forward port 162 to a higher-numbered port to which logstash can connect.  iptables is the typical tool to perform the forwarding:

/sbin/iptables -A PREROUTING -t nat -i eth0 -p udp --dport 162 -j REDIRECT --to-port 5000

where ‘5000’ is the port on which logstash is listening.

SNMP Sequences

Some SNMP traps come in with a “sequence number”, which allows the receiver to know if all traps have been received.  In the ones we’ve seen, the sequence is appended to each OID, e.g.

"<numeric OID>.90210": "trap value"

where “90210” is the sequence number.

This seems like a handy feature, but it doesn’t appear to be supported by logstash (or perhaps the underlying SNMP library that it uses).  With the basic snmptrap config, logstash is unable to apply the mib definition and remove the sequence number, so you end up with a new field for each trap value.  That’s not good for you or for elasticsearch/kibana.

Since traps aren’t just simple plain text, you can’t use a “tcp” listener, apply your own filter to remove the sequence, and feed the result back into logstash’s “snmptrap” mechanism.  Without modifying the snmptrap input plugin, you have to fix the problem before it hits logstash.

I was a fan of logstash plugins (and have written a few), but logstash 1.5 requires everything to be done as ruby gems, which has been a painful path.  As such, I’m doing more outside of logstash, like this recommendation.


We’re now running snmptrapd on our logstash machines.  They listen for traps on port 162 and write them to a regular log file that can then be read by logstash.

Basic config

Update /etc/snmp/snmptrapd.conf to include:

disableAuthorization yes

Put your mib definitions in /usr/share/snmp/mibs.

Trap formatting

To make the traps easier to process by logstash, I format the output as json.  This is done with OPTIONS set in /etc/sysconfig/snmptrapd:

OPTIONS="-A -Lf /var/log/snmptrap -p /var/run/ -m ALL -F '{ \"type\": \"snmptrap\", \"timestamp\": \"%04y-%02m-%02l %02h:%02j:%02k\", \"host_ip\":\"%a\", \"trapEnterprise\": \"%N\", \"trapSubtype\": \"%q\", \"trapType\": %w, \"trapVariables\": \"%v\" }\n' "

The flags used are:

  • -A – append to the log file rather than truncating it
  • -Lf – log to a file
  • -m ALL – use all the mibs it can find
  • -F – use this printf-style string for formatting

Then, in logstash, use the json filter:

filter {
    json {
        source => "message"
    }
}

I use a ruby filter to make the separate fields and cast them to the correct type.
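As a sketch only (the field name my_counter is made up, and the event API shown is the logstash 1.x style):

```
filter {
  ruby {
    # Cast a hypothetical string field to an integer
    code => "event['my_counter'] = event['my_counter'].to_i if event['my_counter']"
  }
}
```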

Don’t forget to set up log rotation for your new /var/log/snmptrap file and set up a process monitor for snmptrapd.


Duplicated elasticsearch documents


The first thing to notice is that the duplicate documents probably have different _id values, so the problem becomes, “who is inserting duplicates?”.

If you’re running logstash, some things to look at include:

  • duplicate input{} stanzas
  • duplicate output{} stanzas
  • two logstash processes running
  • bad file glob patterns
  • bad broker configuration

Duplicate Stanzas

Most people aren’t silly enough to deliberately create duplicate input or output stanzas, but there are still easy ways for them to occur:

  • a logstash config file you’ve forgotten (00-mytest.conf)
  • a backup file (00-input.conf.bak)

Remember that logstash will read in all the files it finds in your configuration directory!
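For example, if both of these files end up in your configuration directory, every event will be sent twice (the paths are illustrative):

```
# /etc/logstash/conf.d/30-output.conf
output { elasticsearch { host => "localhost" } }

# /etc/logstash/conf.d/30-output.conf.bak -- also read by logstash!
output { elasticsearch { host => "localhost" } }
```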

Multiple Processes

Sometimes your shutdown script may not work, leaving you with two copies of your shipper running.  Check it with ‘ps’ and kill off the older one.

File Globs

If your file glob pattern is fairly open (e.g. “*”), you might be picking up files that have been rotated (“foo.log” and “foo.log.00”).

Logstash-forwarder sets a ‘file’ field that you can check in this case.

If you’ve enabled _timestamp in elasticsearch, it will show you when each of the duplicates was indexed, which might give you a clue.


As for brokers, if you have multiple logstash indexers trying to read from the same broker without some locking mechanism, it might cause problems.


Elasticsearch mappings and templates


In the relational database world, you create tables to store similar items.  In Elasticsearch, the equivalent of the table is a type.

You eventually get around to defining the properties of each field, be they char, varchar, auto-incrementing unsigned integer, decimal, etc. Elasticsearch is no different, except they call these mappings.


Mappings tell Elasticsearch how to deal with your field:

  • what type of data does it contain?
  • should the data be indexed?
  • should it be tokenized (and how)?

If you just blindly throw data at Elasticsearch, it will apply defaults based on the first value it sees.  A value of “foo” would indicate a string; 1.01 would indicate a decimal, etc.

A major problem comes when the value is not indicative of the type.  What if your first string value contained “2015-04-01”?  Elasticsearch thinks that is a date, so your next value of “foo” is now invalid.  The same with basic numbers – if the first value is 1, the type is now integer, and the next value of 1.01 is now invalid.

The best way to deal with this is to create your own mapping, where you explicitly define the types of each field.   Here’s a sample:

$ curl -XPUT 'http://localhost:9200/my_index/_mapping/my_type' -d '{
  "my_type" : {
    "properties" : {
      "my_field" : { "type" : "string", "store" : true }
    }
  }
}'

Defined as a string, a value of “2015-04-01” in my_field would not be interpreted as a date.

Nested fields are described as nested properties.  “address.city” could be mapped like this:

  "my_type" : {
    "properties" : {
      "address" : {
        "properties" : {
          "city" : {
            "type" : "string"
          }
        }
      }
    }
  }
There are a lot of properties that can be specified for a given field.  The Core Types page lists them.

Two of the more important ones are:

  • “index”: “not_analyzed”, which keeps Elasticsearch from tokenizing your value, which is especially useful for log data.
  • “doc_values”: true, which can help with memory usage as described in the doc.

If you use a new index every day, you would need to apply the mapping every day when the index was created.  Or, you can use templates.


Templates define settings and mappings that will be used when a new index is created.  They are especially useful if you create daily indexes (e.g. from logstash) or you have dynamic field names.

In this example, any index whose name matches the pattern “my_*” will have its “my_field” field mapped as a string.

curl -XPUT localhost:9200/_template/my_template -d '{
  "template" : "my_*",
  "mappings" : {
    "my_type" : {
      "properties" : {
        "my_field" : { "type" : "string" }
      }
    }
  }
}'

Note that template names are global to the cluster, so don’t try to reuse “fancy_template” for more than one index pattern.

Templates still require you to know the names of the fields in advance, though.

Dynamic Templates

A dynamic template lets you tell Elasticsearch what to do with any field that matches (or doesn’t match)  the definition, which can include:

  • name, including wildcards or partial path matches
  • type

This dynamic template will take any string and make it not_analyzed and use doc_values:

PUT /my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        { "my_dtemplate": {
            "match_mapping_type": "string",
            "mapping": {
              "type": "string",
              "index": "not_analyzed",
              "doc_values": true
            }
        }}
      ]
    }
  }
}

Or force any nested field that ends in “counter” to be an integer:

PUT /my_index
{
  "mappings": {
    "my_type": {
      "dynamic_templates": [
        { "my_dtemplate": {
            "path_match": "*.counter",
            "mapping": {
              "type": "integer"
            }
        }}
      ]
    }
  }
}


One of the first things that early logstash users discovered was that Elasticsearch is a text search engine, not a log search engine.  If you gave it a string field, like:

"logfile": "/var/log/httpd/access_log"

Elasticsearch would tokenize it and index the tokens:

"logfile": ["var", "log", "httpd", "access_log"]

which makes it impossible to search on or display the original value.

To alleviate this initial frustration, logstash was shipped with a default mapping that included a “raw” field for every string, set as not_analyzed.  Accessing logfile.raw would return you back the original, un-tokenized string.

This is a great work-around, and helped many logstash users not be immediately frustrated with the product, but it’s not the right solution.  Setting up your own mapping, and treating the fields as you know they should be treated, is the right solution.

Note that the extra “raw” field will be going away in a future release of logstash.

Using the Wrong Mapping

If you try to insert a document whose field types don’t match the mapping, Elasticsearch may try to help.  If possible, it will try to “coerce” (cast) the data from one type to another (“int to string”, etc).  Elasticsearch will even try “string to int” which will work for “2.0”, but not “hello”.  Check the value of the index.mapping.coerce parameter and any messages in the Elasticsearch logs.
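Coercion can also be controlled per field in the mapping.  A hedged sketch (the field name is made up):

```
"my_count" : {
  "type"   : "integer",
  "coerce" : false
}
```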

Updating a Template

If you’re using logstash, it ships with a default template called “logstash”.  To make changes to this template, first pull it:

curl -XGET 'http://localhost:9200/_template/logstash?pretty' > /tmp/logstash.template

Next, edit the file to remove the outside structure – the part that looks like this:

 "logstash" : {

and the matching } at the end of the file.

Then, edit the file as desired (yes, that’s the tricky part!).

While you’re there, notice this line, which we’ll reference below.

"template" : "logstash-*"

Finally, post the template back into Elasticsearch:

curl -XPUT 'http://localhost:9200/_template/logstash' -d @/tmp/logstash.template

Now, any index that is created after this with a name that matches the “template” value shown above will use this new template when creating the field mappings.

Testing your Template

Field mappings are set when a field is defined in the index.  They cannot be changed without reindexing all of the data.

If you use daily indexes, your next index will be created with the new mapping.  Rather than wait for that, you can test the template by manually creating a new index that also matches the pattern.

For example, if your template pattern was “logstash-*”, this will match the standard daily indexes like “logstash-2015.04.01” but will also match “logstash-test”.

Create a document by hand in that index:

$ curl -XPUT 'http://localhost:9200/logstash-test/my_type/1' -d '{
 "field1" : "value1",
 "field2" : 2
}'



Welcome!

Right now, this is just a blog, which draws on the experience of several Silicon Valley operations folks with a wide range of talents and opinions.

Please note that items that are being discussed may not relate to existing clients, but could be generic thoughts on the topic or relate to previous clients.

If something catches your eye, please leave a comment.