
Elasticsearch configuration options

This is an attempt at a complete listing of elasticsearch config variables, since they’re scattered all over the elastic.co website.

The list is not complete, and will start to “rot” as soon as it’s published, but if you know of variables that aren’t listed, please let me know.

Note that static settings must be set in the config file on every machine in the cluster.
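
As a minimal sketch of the difference (the setting names are just examples taken from the tables below): a static setting goes into elasticsearch.yml on every node and only takes effect after a restart, while a dynamic setting can be changed on a running cluster via the cluster settings API.

# Static: add to elasticsearch.yml on every node, then restart the node
echo "bootstrap.mlockall: true" >> /etc/elasticsearch/elasticsearch.yml

# Dynamic: update on the fly via the cluster settings API
curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'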

 

Bootstrap

Name Type Notes Doc
bootstrap.mlockall Static Link

Cloud

Name Type Notes Doc
cloud.aws.access_key Unknown Link
cloud.aws.ec2 Unknown Link
cloud.aws.protocol Unknown Link
cloud.aws.proxy Unknown Link
cloud.aws.region Unknown Link
cloud.aws.s3 Unknown Link
cloud.aws.secret_key Unknown Link
cloud.aws.signer Unknown Link

Cluster

Name Type Notes Doc
cluster.blocks.read_only Dynamic Link
cluster.info.update.interval Dynamic Link
cluster.name Unknown Link
cluster.routing.allocation.allow_rebalance Dynamic Link
cluster.routing.allocation.awareness.attributes Dynamic Link
cluster.routing.allocation.awareness.force.zone.values Dynamic Link
cluster.routing.allocation.balance.shard Unknown Link
cluster.routing.allocation.balance.index Unknown Link
cluster.routing.allocation.balance.threshold Unknown Link
cluster.routing.allocation.cluster_concurrent_rebalance Dynamic Link
cluster.routing.allocation.disk.include_relocations Dynamic Link
cluster.routing.allocation.disk.threshold_enabled Dynamic Link
cluster.routing.allocation.disk.watermark.low Dynamic Link
cluster.routing.allocation.disk.watermark.high Dynamic Link
cluster.routing.allocation.enable Dynamic Link
cluster.routing.allocation.exclude Dynamic Link
cluster.routing.allocation.include Dynamic Link
cluster.routing.allocation.node_concurrent_recoveries Dynamic Link
cluster.routing.allocation.node_initial_primaries_recoveries Dynamic Link
cluster.routing.allocation.require Dynamic Link
cluster.routing.allocation.same_shard.host Dynamic Link
cluster.routing.allocation.total_shards_per_node Dynamic Link
cluster.routing.rebalance.enable Dynamic Link

Discovery

ec2 discovery can also have: groups, host_type, availability_zones, any_group, ping_timeout, and node_cache_time. Use these inside discovery.ec2, e.g. discovery.ec2.groups.

Name Type Notes Doc
discovery.type Dynamic Link
discovery.zen.minimum_master_nodes Dynamic Link
discovery.zen.ping.multicast.enabled Unknown Removed in ES 2.2 Link
discovery.zen.ping.unicast.hosts Unknown Link
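
As a sketch, EC2 discovery in elasticsearch.yml might look something like the following (keys taken from the tables above; the group name and region are placeholders, and the cloud-aws plugin must be installed):

discovery.type: ec2
discovery.ec2.groups: my-es-nodes
discovery.ec2.ping_timeout: 10s
cloud.aws.region: us-east-1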

Gateway

Name Type Notes Doc
gateway.expected_nodes Unknown Link
gateway.expected_master_nodes Static Link
gateway.expected_data_nodes Static Link
gateway.recover_after_time Static Link
gateway.recover_after_nodes Static Link
gateway.recover_after_master_nodes Static Link
gateway.recover_after_data_nodes Static Link

HTTP

Name Type Notes Doc
http.port Static Link
http.publish_port Static Link
http.bind_host Static Link
http.publish_host Static Link
http.host Static Link
http.max_content_length Static Link
http.max_initial_line_length Static Link
http.max_header_size Static Link
http.compression Static Link
http.compression_level Static Link
http.cors.enabled Static Link
http.cors.allow-origin Static Link
http.cors.max-age Static Link
http.cors.allow-methods Static Link
http.cors.allow-headers Static Link
http.cors.allow-credentials Static Link
http.detailed_errors.enabled Static Link
http.pipelining Static Link
http.pipelining.max_events Static Link

Index

Name Type Notes Doc
index.analysis.analyzer Static Link
index.analysis.filter Static Link
index.analysis.tokenizer Static Link
index.auto_expand_replicas Dynamic Link
index.blocks.metadata Dynamic Link
index.blocks.read Dynamic Link
index.blocks.read_only Dynamic Link
index.blocks.write Dynamic Link
index.codec Static Link
index.gateway.local.sync Unknown Renamed to index.translog.sync_interval in ES 2.0 Link
index.max_result_window Dynamic Link
index.merge.policy.calibrate_size_by_deletes Unknown Removed in ES 2.0 Link
index.merge.policy.expunge_deletes_allowed Unknown Removed in ES 2.0 Link
index.merge.policy.max_merge_docs Unknown Removed in ES 2.0 Link
index.merge.policy.max_merge_size Unknown Removed in ES 2.0 Link
index.merge.policy.merge_factor Unknown Removed in ES 2.0 Link
index.merge.policy.min_merge_docs Unknown Removed in ES 2.0 Link
index.merge.policy.min_merge_size Unknown Removed in ES 2.0 Link
index.merge.policy.type Unknown Removed in ES 2.0 Link
index.merge.scheduler.max_thread_count Dynamic Link
index.number_of_replicas Dynamic Link
index.number_of_shards Static Link
index.recovery.initial_shards Dynamic Link
index.refresh_interval Dynamic Requires units in ES 2.0 Link
index.routing.allocation.exclude Dynamic Link
index.routing.allocation.include Dynamic Link
index.routing.allocation.require Dynamic Link
index.routing.allocation.total_shards_per_node Dynamic Link
index.search.slowlog.threshold Dynamic Link
index.shard.check_on_startup Static Link
index.similarity.default.type Static Link
index.store.throttle.type Unknown Removed in ES 2.0 Link
index.store.throttle.max_bytes_per_sec Unknown Removed in ES 2.0 Link
index.store.type Static memory and ram types removed in ES 2.0 Link
index.ttl.disable_purge Dynamic Link
index.translog.durability Dynamic Link
index.translog.fs.type Dynamic Link
index.translog.flush_threshold_ops Dynamic Link
index.translog.flush_threshold_period Dynamic Link
index.translog.flush_threshold_size Dynamic Link
index.translog.interval Dynamic Link
index.translog.sync_interval Static Link
index.unassigned.node_left.delayed_timeout Dynamic Link
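
Dynamic index settings like these can be changed on a live index through the index settings API, for example (the index name is a placeholder):

curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index": {
    "number_of_replicas": 2,
    "refresh_interval": "30s"
  }
}'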

Indices


Name Type Notes Doc
indices.analysis.hunspell.dictionary.location Unknown Removed in ES 2.0 Link
indices.recovery.concurrent_streams Dynamic Link
indices.recovery.concurrent_small_file_streams Dynamic Link
indices.store.throttle.type Unknown Removed in ES 2.0 Link
indices.store.throttle.max_bytes_per_sec Unknown Removed in ES 2.0 Link

Logger

Name Type Notes Doc
logger.indices.recovery Dynamic Link
logger.transport.tracer Dynamic Link

Network

Name Type Notes Doc
network.bind_host Unknown Link
network.host Dynamic See special values Link
network.publish_host Unknown Link
network.tcp.no_delay Unknown Link
network.tcp.keep_alive Unknown Link
network.tcp.reuse_address Unknown Link
network.tcp.send_buffer_size Unknown Link
network.tcp.receive_buffer_size Unknown Link

Node

Name Type Notes Doc
node.data Unknown Link
node.enable_custom_paths Unknown Removed in ES 2.0 Link
node.master Unknown Link
node.max_local_storage_nodes Static Link
node.name Unknown Link

Path

Name Type Notes Doc
path.conf Static Link
path.data Static Link
path.home Static Link
path.logs Static Link
path.plugins Static Link
path.repo Static Link
path.scripts Static Link
path.shared_data Unknown Link

Plugin

Name Type Notes Doc
plugin.mandatory Static Link

Resource

Name Type Notes Doc
resource.reload.enabled Unknown
resource.reload.interval Unknown Link
resource.reload.interval.low Unknown
resource.reload.interval.medium Unknown
resource.reload.interval.high Unknown

Repositories

Name Type Notes Doc
repositories.url.allowed_urls Unknown Link

Script

Name Type Notes Doc
script.auto_reload_enabled Static Link
script.default_lang Static Link
script.disable_dynamic Unknown Removed in ES 2.0
script.file Static Link
script.index Static Link
script.inline Static Link
script.update Static Link
script.mapping Static Link
script.engine.expression Static Link
script.engine.groovy Static Link
script.engine.javascript Static Link
script.engine.mustache Static Link
script.engine.python Static Link

Thread Pool

There are several thread pools. Elastic lists the “important” ones as including: generic, index, search, suggest, get, bulk, percolate, snapshot, warmer, refresh, listener. Some settings are documented, and are listed below.

You can also control the number of processors for a thread pool, which is briefly documented here.

Name Type Notes Doc
threadpool.generic.keep_alive Dynamic Link
threadpool.index.queue_size Dynamic Link
threadpool.index.size Dynamic Link
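
Since these are dynamic, they can be adjusted on a running cluster, e.g. (the value is only an example):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "threadpool.index.queue_size": 500
  }
}'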

Transport

Transport allows you to bind to multiple ports on different interfaces. See the transport profiles doc for more info.

Name Type Notes Doc
transport.bind_host Unknown Link
transport.host Unknown Link
transport.ping_schedule Unknown Link
transport.publish_host Unknown Link
transport.publish_port Unknown Link
transport.tcp.compress Unknown Link
transport.tcp.connect_timeout Unknown Link
transport.tcp.port Unknown Link
transport.tracer.exclude Dynamic Link
transport.tracer.include Dynamic Link

Tribe

There are a lot of options for tribes that vary based on the tribe name. Some info is presented here.

Name Type Notes Doc
tribe.blocks.metadata Unknown Link
tribe.blocks.metadata.indices Unknown Link
tribe.blocks.write Unknown Link
tribe.blocks.write.indices Unknown Link
tribe.t1.cluster.name Unknown Link

Watcher

Name Type Notes Doc
watcher.enabled Unknown Renamed in ES 2.0 Link
watcher.interval Unknown Renamed in ES 2.0 Link
watcher.interval.low Unknown Renamed in ES 2.0 Link
watcher.interval.medium Unknown Renamed in ES 2.0 Link
watcher.interval.high Unknown Renamed in ES 2.0 Link

Elasticsearch disk space calculations

Each node provides storage capacity to your cluster.  Elasticsearch will stop allocating shards to nodes that start to fill up.  This is controlled with the cluster.routing.allocation.disk.watermark.low parameter.  By default, no new shards will be allocated to a node once it uses more than 85% of its disk space.
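
Both watermarks are dynamic settings, so they can be adjusted without a restart; for example (the percentages are only illustrative):

curl -XPUT 'localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "90%",
    "cluster.routing.allocation.disk.watermark.high": "95%"
  }
}'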

Clearly you must manage the disk space when all of your nodes are running, but what happens when a node fails?

Let’s look at a three-node cluster, setup with three shards and one replica, so data is evenly spread out across the cluster:

[Diagram: three shards, each with one replica, spread evenly across the three nodes]

If each node has 1TB of disk space for data, they would hit the per-node 85% limit at 850GB.  If one node failed, the 6 total shards would need to be distributed across two nodes.   In our example, if we lost node #1, the primary for shard 1 and the replica for shard 3 would be lost.  The replica for shard 1 that is on node #2 would be promoted to primary, but we would then have no replica for either shards 1 or 3.  Elasticsearch would try to rebuild the replicas on the remaining hosts:

[Diagram: the two surviving nodes rebuilding the lost primary and replica shards]

This is good on paper, except each of the remaining two nodes would need to absorb up to 425GB.  The remaining nodes would be full, and no new shards could be allocated.

To plan for a node outage, you need to have enough free disk space on each node to reallocate the primary and replica data from the dead node.

This formula will yield the maximum amount of data a node can safely hold:

(disk per node * .85) * ((node count - 1) / node count)

In my example, we would get:

( 1TB * .85 ) * ( 2 / 3 ) = 566GB

If your three nodes contained 566GB of data each and one node failed, 283GB of data would be rebuilt on the remaining two nodes, putting them at 849GB used space.  This is just below the 85% limit of 850GB.

I would pad the number a little, and limit the disk space used to 550GB for each node, with 1.65TB data total across the 3-node cluster.  This number plays a part in your data retention policy and cluster sizing strategies.

If 1.65TB is too low, you either need to add more space to each node, or add more nodes to the cluster.  If you added a 4th similarly-sized node, you’d get

( 1TB * .85 ) * ( 3 / 4 ) = 637GB

which would allow about 2.5TB of storage across the entire cluster.

The formula shown is based on one replica shard.  If you had configured your cluster with more replicas (to survive the outage of more than one node), note that the formula is really:

(space per node * .85) * ((node count - replica count) / node count)

If we had two replicas in our example, we’d get:

( 1TB * .85 ) * ( 1 / 3 ) = 283GB

So you would only allow 283GB of data per node if you wanted to survive a 2-node outage in a 3-node cluster.
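
As a quick sanity check, the formula drops into a small shell function (just a sketch; the numbers below are the examples from above):

# usage: safe_gb_per_node <disk_gb_per_node> <node_count> <replica_count>
safe_gb_per_node() {
  disk=$1; nodes=$2; replicas=$3
  echo "scale=0; ($disk * 0.85) * ($nodes - $replicas) / $nodes" | bc
}

safe_gb_per_node 1000 3 1   # 566
safe_gb_per_node 1000 4 1   # 637
safe_gb_per_node 1000 3 2   # 283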

Introduction to Elasticsearch Tokenization and Analysis

Elasticsearch is a text engine.  This is usually good if you have text to index, but can cause problems with other types of input (log files).  One of the more confusing elements of elasticsearch is the idea of tokenization and how fields are analyzed.

Tokens

In a text engine, you might want to take a string and search for each “word”.  The rules that are used to convert a string into words are defined in a tokenizer.   A simple string:

The quick brown fox

can easily be processed into a series of tokens:

[“the”, “quick”, “brown”, “fox”]

But what about punctuation?

Half-blood prince

or

/var/log/messages

The default tokenizer in elasticsearch will split those up:

[“half”, “blood”, “prince”]

[“var”, “log”, “messages”]

Unfortunately, this means that searching for “half-blood prince” might also find you an article about a royal prince who fell halfway to the floor while donating blood.

As of this writing, there are 12 built-in tokenizers.

You can test some input text against a tokenizer on the command line:

curl -XGET 'localhost:9200/_analyze?analyzer=standard&pretty' -d '/var/log/messages'

Analyzers

An analyzer lets you combine a tokenizer with some other rules to determine how the text will be indexed.  This is not something I’ve had to do, so I don’t have examples or caveats yet.

You can test the analyzer rules on the command line as well:

curl -XGET 'localhost:9200/_analyze?tokenizer=keyword&filters=lowercase' -d 'The quick brown fox'

Mappings

When you define the mapping for your index, you can control how each field is analyzed.  First, you can specify *if* the field is even to be analyzed or indexed:

"myField": {
    "index": "not_analyzed"
}

By using “not_analyzed”, the value of the field will not be tokenized in any way and will only be available as a raw string.  Since this is very useful for logs, the default template in logstash uses this to create the “.raw” fields (e.g. myField.raw).

You can also specify “no”, which will prevent the field from being indexed at all.

If you would like to use a different analyzer for your field, you can specify that:

"myField": {
    "analyzer": "spanish"
}
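
Putting both together, a minimal index creation request might look like this (the index, type, and field names are placeholders):

curl -XPUT 'localhost:9200/myindex' -d '{
  "mappings": {
    "mytype": {
      "properties": {
        "myField": { "type": "string", "index": "not_analyzed" },
        "description": { "type": "string", "analyzer": "spanish" }
      }
    }
  }
}'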

 

SNMP traps with logstash

The Basics

SNMP traps are generally easy to receive and process with logstash.  The snmptrap{} input sets up a listener, which processes each trap and replaces the OIDs with the string representation found in the given mibs.  If the OID can’t be found, logstash will make a new field, using the OID value as the field name, e.g.

"1.3.6.1.4.1.1234.1.2.3.4": "trap value"

(Note that this is currently broken if you use Elasticsearch 2.0, since field names containing dots are no longer allowed.)
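
A minimal input config might look like this (a sketch; the port, community string, and mib directory are assumptions for your environment):

input {
    snmptrap {
        port => 1062          # any unprivileged port; see the forwarding section below
        community => "public"
        yamlmibdir => "/path/to/yaml/mibs"
        type => "snmptrap"
    }
}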

Forwarding

Probably the biggest issue with most traps is that they are sent to port 162, which is a low-numbered “system” port.  For logstash to listen on that port, it must be run as root, which is not recommended.

The easiest workaround for this is to forward port 162 to a higher-numbered port on which logstash can listen.  iptables is the typical tool to perform the forwarding:

/sbin/iptables -A PREROUTING -t nat -i eth0 -p udp --dport 162 -j REDIRECT --to-port 5000

where ‘5000’ is the port on which logstash is listening.

SNMP Sequences

Some SNMP traps come in with a “sequence number”, which allows the receiver to know if all traps have been received.  In the ones we’ve seen, the sequence is appended to each OID, e.g.

"1.3.6.1.4.1.1234.1.2.3.4.90210": "trap value"

where “90210” is the sequence number.

This seems like a handy feature, but it doesn’t appear to be supported by logstash (or perhaps the underlying SNMP library that it uses).  With the basic snmptrap config, logstash is unable to apply the mib definition and remove the sequence number, so you end up with a new field for each trap value.  That’s not good for you or for elasticsearch/kibana.

Since traps aren’t just simple plain text, you can’t use a “tcp” listener, apply your own filter to remove the sequence, and feed the result back into logstash’s “snmptrap” mechanism.  Without modifying the snmptrap input plugin, you have to fix the problem before it hits logstash.

I was a fan of logstash plugins (and have written a few), but logstash 1.5 requires everything to be done as ruby gems, which has been a painful path.  As such, I’m doing more outside of logstash, like this recommendation.

snmptrapd

We’re now running snmptrapd on our logstash machines.  It listens for traps on port 162 and writes them to a regular log file that can then be read by logstash.

Basic config

Update /etc/snmp/snmptrapd.conf to include:

disableAuthorization yes

Put your mib definitions in /usr/share/snmp/mibs.

Trap formatting

To make the traps easier for logstash to process, I format the output as json.  This is done with OPTIONS set in /etc/sysconfig/snmptrapd:

OPTIONS="-A -Lf /var/log/snmptrap -p /var/run/snmptrapd.pid -m ALL -F '{ \"type\": \"snmptrap\", \"timestamp\": \"%04y-%02m-%02l %02h:%02j:%02k\", \"host_ip\":\"%a\", \"trapEnterprise\": \"%N\", \"trapSubtype\": \"%q\", \"trapType\": %w, \"trapVariables\": \"%v\" }\n' "

The flags used are:

  • -A – append to the log file rather than truncating it
  • -Lf – log to a file
  • -m ALL – use all the mibs it can find
  • -F – use this printf-style string for formatting

Then, in logstash, use the json filter:

filter {
    json {
        source => "message"
    }
}

I use a ruby filter to split the trap variables into separate fields and cast them to the correct types.
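
For reference, the input side that reads the snmptrapd log can be as simple as this sketch (assuming the log path used above):

input {
    file {
        path => "/var/log/snmptrap"
        type => "snmptrap"
    }
}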

Don’t forget to set up log file rotation on your new /var/log/snmptrap file and set up a process monitor for snmptrapd.

 

Duplicated elasticsearch documents

Intro

The first thing to notice is that the documents probably have different _id values, so the problem then becomes, “who is inserting duplicates??”.

If you’re running logstash, some things to look at include:

  • duplicate input{} stanzas
  • duplicate output{} stanzas
  • two logstash processes running
  • bad file glob patterns
  • bad broker configuration

Duplicate Stanzas

Most people aren’t silly enough to deliberately create duplicate input or output stanzas, but there are still easy ways for them to occur:

  • a logstash config file you’ve forgotten (00-mytest.conf)
  • a backup file (00-input.conf.bak)

Remember that logstash will read in all the files it finds in your configuration directory!

Multiple Processes

Sometimes your shutdown script may not work, leaving you with two copies of your shipper running.  Check it with ‘ps’ and kill off the older one.

File Globs

If your file glob pattern is fairly open (e.g. “*”), you might be picking up files that have been rotated (“foo.log” and “foo.log.00”).

Logstash-forwarder sets a ‘file’ field that you can check in this case.

If you’ve enabled _timestamp in elasticsearch, it will show you when each of the duplicates was indexed, which might give you a clue.
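
For example, something like this should return the index time with each hit on the 1.x/2.x versions discussed here (the index name is a placeholder):

curl -XGET 'localhost:9200/myindex/_search?pretty' -d '{
  "fields": ["_timestamp"],
  "query": { "match_all": {} }
}'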

Brokers

As for brokers, if you have multiple logstash indexers trying to read from the same broker without some locking mechanism, it might cause problems.

 

Elasticsearch mappings and templates