
Tuning ElasticSearch for multi-terabyte analytics


A talk by Andrew Clegg at the ElasticSearch London meetup in November 2013 on how Pearson does large-scale analytical queries on ElasticSearch.

Page 1: Tuning ElasticSearch for multi-terabyte analytics
Page 2: Tuning ElasticSearch for multi-terabyte analytics

ElasticSearch London

Tuning ElasticSearch for multi-terabyte analytics

or… “Counting stuff is hard”

Andrew Clegg
Data Analytics & Visualization Team, Pearson

@andrew_clegg

Page 3: Tuning ElasticSearch for multi-terabyte analytics

Introduction

Page 4: Tuning ElasticSearch for multi-terabyte analytics

Our data

Over 11 billion “docs” in production cluster.

Each doc is around 1-2KB of JSON.

~60 million docs/day == ~700 docs/sec.

Higher than this during peak times.

Much higher when backfilling historical data.

Conversely, not many end users yet: 5-20 on a typical day.

Page 5: Tuning ElasticSearch for multi-terabyte analytics

Our architecture

[Architecture diagram: Palomino]

Page 6: Tuning ElasticSearch for multi-terabyte analytics

Getting data in

Hardware

(Yes, actual hardware!)

Cisco UCS servers, 24 cores, 96GB memory.

8 x 1TB disks: 7 for data, 1 for log files, temp files, etc.

Reads/writes parallelized across segments.

Currently 5 of these in production cluster.

10GbE switch.

Page 7: Tuning ElasticSearch for multi-terabyte analytics

Getting data in

Index configuration

We don’t store any data in ElasticSearch. All we need is facet counts.

Disable _source, _all, and individual field storage.

Disable term vectors and norms.

No analysis on text fields (just unbroken strings).

No date autodetection.
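Put together, the mapping for one of our weekly indices looks roughly like this. A minimal sketch for the ES 0.90 era; the index name, type name, and fields are made up for illustration, not our real schema:

    # Hypothetical mapping sketch: disables _source, _all, field storage,
    # term vectors, norms, analysis, and date autodetection.
    curl -XPUT 'localhost:9200/events-2013-week47' -d '{
      "mappings": {
        "event": {
          "_source": { "enabled": false },
          "_all":    { "enabled": false },
          "date_detection": false,
          "properties": {
            "user_id": {
              "type": "string",
              "index": "not_analyzed",
              "store": "no",
              "term_vector": "no",
              "omit_norms": true
            },
            "timestamp": { "type": "date", "store": "no" }
          }
        }
      }
    }'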

Page 8: Tuning ElasticSearch for multi-terabyte analytics

Getting data in

Weekly rolling indices mean shard count can increase as traffic does.

NB: we currently have a steady state, so it's set to 5 shards each week.

3 replicas per shard (including the primary).

Real-time means we can't disable replication during indexing!

[Chart: shard count over time, with a new index created each week]
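An index template is one way to apply these settings to each new weekly index automatically. A sketch only; the template and index-name pattern are assumptions:

    # Hypothetical template matching the weekly indices as they are created.
    # number_of_replicas is 2 because ES counts the primary separately:
    # 2 replicas + 1 primary = 3 copies of each shard.
    curl -XPUT 'localhost:9200/_template/events_weekly' -d '{
      "template": "events-*",
      "settings": {
        "number_of_shards": 5,
        "number_of_replicas": 2
      }
    }'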

Page 9: Tuning ElasticSearch for multi-terabyte analytics

Getting data in

Client configuration

Multiple writer threads on multiple machines: currently 6 x 3.

Bulk API: currently up to 1000 docs per batch.

Incoming docs are queued until the batch, time, or size limit is reached.

(e.g. 1000 docs or 100000 bytes or 2 secs since last batch)
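A bulk request is just newline-delimited JSON: one action line, then one document, per doc. A two-doc sketch (index, type, and field names are assumptions; real batches run up to 1000 docs):

    # Hypothetical bulk request; note the trailing newline ES requires.
    curl -XPOST 'localhost:9200/_bulk' -d '
    {"index": {"_index": "events-2013-week47", "_type": "event"}}
    {"user_id": "u42", "timestamp": "2013-11-20T10:15:00Z"}
    {"index": {"_index": "events-2013-week47", "_type": "event"}}
    {"user_id": "u99", "timestamp": "2013-11-20T10:15:01Z"}
    '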

Page 10: Tuning ElasticSearch for multi-terabyte analytics

Getting data in

Other things we could do -- but currently don’t

Tune indexer thread pool size?

Tune segment merge policy?

Reduce flush interval? (A sketch of such settings follows below.)

Even without these, our current record is over 20,000 docs indexed/sec.

(And we think the bottleneck was the client machines…)
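For reference, refresh and translog flush behaviour can be adjusted per index at runtime. A sketch only, with a made-up index name and values we haven't benchmarked:

    # Hypothetical settings update: trade freshness for indexing throughput.
    curl -XPUT 'localhost:9200/events-2013-week47/_settings' -d '{
      "index.refresh_interval": "30s",
      "index.translog.flush_threshold_period": "60m"
    }'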

Page 11: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

Typical queries

Date histogram and terms facets are the most common by far.

So we wrote our own versions with some optimizations :-)

https://github.com/pearson-enabling-technologies/elasticsearch-approx-plugin

Field data cache size is important for speed: currently 30% of an 80GB heap.

(In fact it actually uses much more than this with ES 0.90.2. Upgrade planned!)

We always use search_type=count.
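A typical query therefore looks something like this. A sketch with an assumed index pattern and field names, using the stock facets rather than our plugin's versions:

    # Hypothetical count-only facet query: no hits are returned, just facets.
    curl -XPOST 'localhost:9200/events-*/_search?search_type=count' -d '{
      "query": { "match_all": {} },
      "facets": {
        "per_day": {
          "date_histogram": { "field": "timestamp", "interval": "day" }
        },
        "top_users": {
          "terms": { "field": "user_id", "size": 10 }
        }
      }
    }'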

Page 12: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

Facet workflow

Client request goes to an arbitrary master node:
● Parses query
● Distributes subqueries to data nodes (including itself)
● Combines results (reduce function)
● Returns to client

Data nodes:
● Find matching records
● Perform groupings and counts (and any other calculations)
● Return to master

Page 13: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

Facet plugin optimizations

Approximate data structures and sampling mode: trade between speed/memory and accuracy.

Uses Lucene’s BytesRef & BytesRefHash instead of String & HashSet.

Micro-caching of local calculations, e.g. timestamp rounding.

Explicit “render” phase after “reduce” phase: defer as much as possible until then.

Page 14: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

General advice for plugin writers

Minimize object creation/destruction and type conversions.

Use arrays of primitives, or Trove collections, where possible. Reuse buffers.

Release objects as soon as possible when no longer needed.

Lucene has some neat tricks: bit fields, fast hashing algorithms.

So does ElasticSearch: CacheRecycler lets you reuse collections.

Page 15: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

Hints for query performance tuning

Tools like jmap, jstat, Visual VM and MAT are very helpful.

Use the ES “hot threads” API to see where it’s spending its time (example below).

Set up unit/integration tests with time and RAM instrumentation.
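The hot threads API is just an HTTP endpoint, e.g.:

    # Sample the busiest threads on every node (default host/port assumed).
    curl -XGET 'localhost:9200/_nodes/hot_threads'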

Page 16: Tuning ElasticSearch for multi-terabyte analytics

Getting data out

Other things we could do -- but currently don’t

Non-data nodes to parse queries, and handle reduce & render phases.

Garbage collector tuning.

(Note to self: see if Trove still crashes Java 7 JVM under G1 GC…)

Use SSDs :-)

Page 17: Tuning ElasticSearch for multi-terabyte analytics

Thanks!

Any questions?

https://github.com/pearson-enabling-technologies/

https://twitter.com/andrew_clegg