Whither Hadoop? (but not wither)

Ted Dunning - Whither Hadoop


Page 1:

Whither Hadoop?

(but not wither)

Page 2:

Questions I Have Been Asked

• I have a system that currently has 10 billion files

• Files range from 10K to 100MB and average about 1MB

• Access is from a legacy parallel app currently running on thousands of machines

• I want to expand to 200 billion files of about the same size range

Page 3:

World-wide Grid

• I have 100,000 machines spread across 10 data centers around the world

• I have a current grid architecture local to each data center (it works OK)

• But I want to load-level by mirroring data from data center to data center

• And I want to run both legacy and Hadoop programs

Page 4:

Little Records in Real Time

• I have a high rate stream of incoming small records

• I need to have several years of data on-line
  – about 10-30 PB

• But I have to have aggregates and alarms in real time
  – maximum response should be less than 10 s end to end
  – typical response should be much faster (200 ms)

Page 5:

Model Deployment

• I am building recommendation models off-line and would like to scale that using Hadoop

• But these models need to be deployed transactionally to machines around the world

• And I need to keep reference copies of every model and the exact data it was trained on

• But I can’t afford to keep many copies of my entire training data

Page 6:

Video Repository

• I have video output from thousands of sources indexed by source and time

• Each source wanders around a lot; I know where
• Each source has about 100,000 video snippets, each of which is 10-100 MB
• I need to run map-reduce-like programs on all video for particular locations
• 10,000 x 100,000 x 100 MB = 10^17 B = 100 PB
• 10,000 x 100,000 = 10^9 files

Page 7:

And then they say

• Oh, yeah… we need to have backups, too

• Say, every 10 minutes for the last day or so

• And every hour for the last month

• You know… like Time Machine on my Mac

Page 8:

So what’s a poor guy to do?

Page 9:

Scenario 1

• 10 billion files x 1 MB average
  – 100 federated name nodes? (rough arithmetic below)
• Legacy code access
• Expand to 200 billion files
  – 2,000 name nodes?! Let's not talk about HA
• Or 400-node MapR – no special adaptation
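As a rough check (assuming, as the slide implies, that each federated name node handles about 10^8 files):

10 x 10^9 files / 10^8 files per name node = 100 name nodes
200 x 10^9 files / 10^8 files per name node = 2,000 name nodes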

Page 10:

World Wide Grid

• 10 x 10,000 machines + legacy code
• 10 x 10,000 or 10 x 10 x 1,000 node clusters
• NFS for legacy apps
• Scheduled mirrors move data at end of shift

Page 11:

Little Records in Real Time

• Real-time + 10-30 PB
• Storm with pluggable services
• Or Ultra-messaging on the commercial side
• Key requirement is that real-time processors need distributed, mutable state storage (see the sketch below)
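A minimal sketch of the kind of real-time aggregation and alarming described above, written against the Storm 2.x Java API (the deck predates Apache Storm's current packages, so treat the imports as illustrative); the field name, the alarm condition, and the in-memory map standing in for a distributed, mutable state store are all assumptions:

import java.util.HashMap;
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

// Bolt that keeps running counts per key and raises an alarm.
// In a real deployment the counts would live in a distributed, mutable
// state store so aggregates survive worker restarts and can be queried.
public class AlarmBolt extends BaseRichBolt {
    private OutputCollector collector;
    private Map<String, Long> counts;           // stand-in for external state

    @Override
    public void prepare(Map<String, Object> conf, TopologyContext ctx, OutputCollector collector) {
        this.collector = collector;
        this.counts = new HashMap<>();
    }

    @Override
    public void execute(Tuple tuple) {
        String key = tuple.getStringByField("key");   // assumed field name
        long n = counts.merge(key, 1L, Long::sum);
        if (n % 10_000 == 0) {
            // hypothetical alarm condition; fire the alert here,
            // well inside the 10 s end-to-end budget
        }
        collector.ack(tuple);
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no downstream stream declared in this sketch
    }
}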

Page 12:

Model Deployment

• Snapshots
  – of training data at start of training
  – of models at the end of training
• Mirrors and data placement allow precise deployment
• Redundant snapshots require no extra space (see the sketch below)
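A minimal sketch of reading the frozen training data back through the standard Hadoop FileSystem API. The volume path, snapshot name, and file name are hypothetical; it assumes the MapR convention of exposing snapshots under a .snapshot directory at the volume root:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Read the exact bytes a model was trained on, out of the snapshot
// taken at the start of training; the snapshot shares blocks with the
// live data, so keeping it costs no extra copy of the training set.
public class SnapshotReader {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Hypothetical volume mount /projects/reco and snapshot train-run-42
        Path frozen = new Path("/projects/reco/.snapshot/train-run-42/ratings.csv");

        try (FSDataInputStream in = fs.open(frozen)) {
            // ... stream the data into the audit or re-training job ...
        }
    }
}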

Page 13:

Video Repository

• Media repositories typically have low average bandwidth

• This allows very high density machines
  – 36 x 3 TB = 100 TB per 4U = 25 TB net per 4U
• 100 PB = 4,000 nodes = 400 racks (rough arithmetic below)
• Can be organized as one cluster or several pods
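Rough arithmetic behind the node and rack counts, assuming roughly 40U of usable space per rack (ten 4U chassis):

100 PB / 25 TB net per 4U node = 4,000 nodes
4,000 nodes / 10 nodes per rack = 400 racks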

Page 14:

And Backups?

• Snapshots can be scheduled at high frequency
• Expiration allows complex retention schedules

• You know… like Time Machine on my Mac

• (and off-site backups work as well)

Page 15:

How does this work?

Page 16:

MapR Areas of Development

[Diagram: areas of development – Map-Reduce, Storage Services, Ecosystem, HBase, Management]

Page 17:

MapR Improvements

• Faster file system
  – Fewer copies
  – Multiple NICs
  – No file descriptor or page-buffer competition

• Faster map-reduce
  – Uses distributed file system
  – Direct RPC to receiver
  – Very wide merges

Page 18:

MapR Innovations

• Volumes
  – Distributed management
  – Data placement
• Read/write random access file system
  – Allows distributed meta-data
  – Improved scaling
  – Enables NFS access (see the sketch below)
• Application-level NIC bonding
• Transactionally correct snapshots and mirrors
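As an illustration of what "Enables NFS access" means in practice, a sketch of a legacy-style program reading cluster data with ordinary file I/O and no Hadoop client libraries; the /mapr/my.cluster.com mount point and the file path are assumptions:

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;

// To this program the cluster is just a mounted file system.
public class LegacyReader {
    public static void main(String[] args) throws Exception {
        // Assumed NFS mount point for the cluster
        Path p = Paths.get("/mapr/my.cluster.com/data/events/part-00000");
        List<String> lines = Files.readAllLines(p);
        System.out.println(lines.size() + " lines read through the NFS mount");
    }
}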

Page 19:

MapR's Containers

• Each container contains
  – directories & files
  – data blocks
• Containers are replicated on servers
  – no need to manage replication directly
• Files and directories are sharded into blocks, which are placed into mini name nodes (containers) on disks
• Containers are 16-32 GB segments of disk, placed on nodes

Page 20:

MapR's Containers

• Each container has a replication chain
• Updates are transactional
• Failures are handled by rearranging replication

Page 21:

Container locations and replication

[Diagram: nodes N1, N2, N3, each holding several containers; the CLDB maps each container to its hosting nodes, e.g. (N1, N2), (N3, N2), (N1, N3)]

The container location database (CLDB) keeps track of the nodes hosting each container and the replication chain order (toy sketch below).
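A toy data-structure sketch, not MapR's actual implementation, of what the CLDB tracks: for each container id, the ordered chain of nodes holding its replicas:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the container location database: container id -> replication chain.
public class ContainerLocations {
    private final Map<Integer, List<String>> chains = new HashMap<>();

    public void register(int containerId, List<String> replicationChain) {
        chains.put(containerId, new ArrayList<>(replicationChain));
    }

    // The chain order matters: updates enter at the head and flow down the chain.
    public List<String> locate(int containerId) {
        return chains.get(containerId);
    }

    // When a node fails, the chains it appears in are rearranged rather than rebuilt.
    public void dropNode(String node) {
        chains.values().forEach(chain -> chain.remove(node));
    }
}

Because this map has one entry per container rather than one per block or per file, it stays small enough to cache, which is the scaling point made on the next slide.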

Page 22:

MapR Scaling

• Containers represent 16-32 GB of data
  – Each can hold up to 1 billion files and directories
  – 100M containers = ~2 exabytes (a very large cluster)
• 250 bytes of DRAM to cache a container
  – 25 GB to cache all containers for a 2 EB cluster (rough arithmetic below)
  – But not necessary; can page to disk
  – A typical large 10 PB cluster needs 2 GB
• Container reports are 100x-1000x smaller than HDFS block reports
  – Serve 100x more data nodes
  – Increase container size to 64 GB to serve a 4 EB cluster
• Map-reduce not affected
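Rough arithmetic behind the caching figures, assuming an average of about 20 GB of data per container:

10^8 containers x ~20 GB = ~2 EB of data
10^8 containers x 250 B of DRAM = 25 GB of cache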

Page 23:

Export to the world

[Diagram: an NFS client connecting to several NFS servers that export the cluster to the outside world]

Page 24:

[Diagram: the application and an NFS server run together on the client machine (a local server); the NFS server talks to the cluster nodes]

Page 25:

Universal export to self

[Diagram: a cluster node runs its own NFS server; tasks on that node access the cluster through it]

Page 26:

[Diagram: three cluster nodes, each running its own NFS server and tasks]

Nodes are identical