Microservices & Teraflops: Effortlessly Scaling Data Science with PyWren | AnacondaCON 2017


MICROSERVICES & TERAFLOPS

Effortlessly scaling data science #thecloudistoodamnhard

Eric Jonas, Postdoctoral Researcher | jonas@eecs.berkeley.edu | @stochastician

A BIG FAN OF ANACONDA

“BIG” DATA

              (near-by) stars   neurons     nuclei
size          10^9 m            10^-5 m     10^-14 m
number        1                 10^11       10^26
data size     2 PB              12 TB/sec   ??/sec

images courtesy NASA SOHO

[Image: the Sun in UV (304 Å), with a “you are here” marker]

Solar Flare Prediction Using Photospheric and Coronal Image Data. Jonas, Bobra, Shankar, Recht. American Geophysical Union, 2016

NEUROSCIENCE AT ALL SCALES

Could a Neuroscientist understand a microprocessor? Jonas, Kording. PLOS Computational Biology, 2017

AND I WANT MORE!

Superresolution

Phase contrast tomography

Adaptive Optics

How do you get busy physicists and electrical engineers to give up Matlab?

How do we get busy astronomers to give up IDL?

Why is there no “cloud button”?

PREVIOUSLY, ON

The cloud is too damn hard!

Jimmy McMillan, Founder and Chairman, The Rent Is Too Damn High Party

Less than half of the graduate students in our group have ever written a Spark or Hadoop job.

“I hate computers” –Eric Jonas, 2017

#THECLOUDISTOODAMNHARD

• What type? What instance? What base image?

• How many to spin up? What price? Spot?

• wait, Wait, WAIT oh god

• now what? DEVOPS

WHAT DO WE WANT?

1. Very little overhead for setup once someone has an AWS account. In particular, no persistent overhead: you don't have to keep a large (expensive) cluster up, and you don't have to wait 10+ minutes for a cluster to come up.

2. As close to zero overhead for users as possible. In particular, anyone who can write Python should be able to invoke it through a reasonable interface. It should support all legacy code.

3. Target jobs that run in the minutes-or-more regime.

4. I don't want to run a service. That is, I personally don't want to offer the front end for other people to use; rather, I want to pay AWS directly.

5. It has to be from a cloud player that's likely to give out an academic grant: AWS, Google, or Microsoft Azure. There are startups in this space that might build cool technology, but they often don't want to be paid in AWS research credits.


Powered by Continuum Analytics


“I hate computers” –Eric Jonas, 2017


AWS LAMBDA

• 300 seconds, single core (AVX2)

• 512 MB in /tmp

• 1.5 GB RAM

• Python, Java, Node

THE API
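The whole API is a map that returns futures. A minimal sketch, matching the calls in PyWren's README of the era (executor setup details vary by version):

import pywren

def addone(x):
    # any pure-Python work; the Anaconda runtime provides numpy, scipy, etc.
    return x + 1

wrenexec = pywren.default_executor()       # backed by AWS Lambda
futures = wrenexec.map(addone, range(10))  # one Lambda invocation per element
results = [f.result() for f in futures]    # blocks until results land in S3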

LAMBDA SCALABILITY

[Plots: Compute, Data]

YOU CAN DO A LOT OF WORK WITH MAP!

ETL, parameter tuning (sketch below)
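Parameter tuning is the same map pattern: enumerate the grid locally, ship the training function to Lambda. A sketch, where train_and_score and the grid values are hypothetical stand-ins for real training code:

import itertools
import pywren

def train_and_score(params):
    # hypothetical stand-in: replace with real training + validation
    alpha, depth = params
    score = -((alpha - 0.1) ** 2) - ((depth - 8) ** 2)  # dummy objective
    return params, score

grid = list(itertools.product([0.01, 0.1, 1.0],  # alpha values
                              [2, 4, 8, 16]))    # depth values

wrenexec = pywren.default_executor()
futures = wrenexec.map(train_and_score, grid)    # one Lambda per grid point
best_params, best_score = max((f.result() for f in futures),
                              key=lambda r: r[1])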

IMAGENET EXAMPLE

Preprocess 1.4M images from ImageNet: compute the GIST image descriptor (some random Python code off the internet).
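Concretely, that job is a map over 1.4M S3 keys. A sketch under stated assumptions: the bucket name, key layout, and the compute_gist stand-in are all hypothetical (the real descriptor code is that "random Python code off the internet"):

import boto3
import numpy as np
import pywren

BUCKET = "my-imagenet-mirror"  # hypothetical bucket of raw images
image_keys = ["images/%07d.jpg" % i for i in range(1400000)]  # hypothetical layout

def compute_gist(image_bytes):
    # hypothetical stand-in for the real GIST descriptor code
    arr = np.frombuffer(image_bytes, dtype=np.uint8)
    return np.histogram(arr, bins=64)[0]  # placeholder "descriptor"

def gist_for_key(key):
    # each Lambda pulls one image out of S3 and featurizes it
    body = boto3.client("s3").get_object(Bucket=BUCKET, Key=key)["Body"].read()
    return key, compute_gist(body)

wrenexec = pywren.default_executor()
futures = wrenexec.map(gist_for_key, image_keys)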

HOW IT WORKS

your laptop:

future = runner.map(fn, data)

• serialize func and data
• put on S3
• invoke Lambda

the cloud:

• pull job from S3
• download Anaconda runtime
• run code with Python
• pickle results
• stick in S3

future.result()

• poll S3
• unpickle and return result
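The cloud half of that diagram is a small handler. PyWren's real handler does more than this (worker reuse, error capture); here is a minimal sketch of the pickle → run → pickle loop, with the bucket name and event keys as hypothetical placeholders:

import pickle
import boto3
import cloudpickle  # serializes closures, which plain pickle cannot

s3 = boto3.client("s3")
BUCKET = "my-pywren-bucket"  # hypothetical placeholder

def handler(event, context):
    # 1. pull the serialized job (function + data) from S3
    blob = s3.get_object(Bucket=BUCKET, Key=event["job_key"])["Body"].read()
    func, data = pickle.loads(blob)

    # 2. run the user's code under the Anaconda runtime
    result = func(data)

    # 3. pickle the result and stick it back in S3 for the laptop to poll
    s3.put_object(Bucket=BUCKET,
                  Key=event["result_key"],
                  Body=cloudpickle.dumps(result))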

A BRIEF HISTORY OF SHARING

[Chart: isolation vs. overhead]

• Processes: 1960s, MULTICS
• Virtual machines: 1990s, VMware, Xen
• Renting/VPS: 1990s, SGE
• HW VMs: 2000s, Intel VT-x
• Containers: 2008, chroot/LXC

(mostly wrong)

• Process isolation

• Network isolation

• Filesystem isolation

• Memory/CPU constraints

(Leptotyphlops carlae, the world's smallest snake)

Shrinking the runtime:

Step                    Runtime size
Start                   1205 MB
Delete non-AVX2 MKL     977 MB
Strip shared libs       946 MB
conda clean             670 MB
Eliminate pkg           510 MB
Delete pyc              441 MB

Want our runtime to include…

MAP IS NOT ENOUGH?

A lot of data analytics looks like:

Data → ETL / preprocessing → featurization → machine learning

ETL / preprocessing and featurization: great PyWren fit. Machine learning: "Distributed! Scale!" (TensorFlow, deep learning, MLbase)

“You can have a second computer when you’ve shown you know how to use the first one.”

–Paul Barham, quoted in McSherry, 2015

Scalability! But at what COST? Frank McSherry, Michael Isard, Derek G. Murray. USENIX Hot Topics in Operating Systems, 2015

SINGLE-MACHINE REDUCE

But I don’t have a big server!

futures = exec.map(function, data)
answer = exec.reduce(reduce_func, futures)
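On the slide, exec.reduce amounts to: rent one big instance (see the table below), gather the mapped results there, and fold them in memory. A minimal sketch of that gather-and-fold step, assuming the futures API shown earlier (the slide's exec name shadows a Python builtin, so it is renamed here):

from functools import reduce as fold

def big_reduce(reduce_func, futures):
    # runs on one fat EC2 instance: pull every pickled result
    # down from S3, then fold them together in local memory
    results = (f.result() for f in futures)
    return fold(reduce_func, results)

# e.g., summing partial sums produced by a Lambda map:
# answer = big_reduce(lambda a, b: a + b, futures)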

instance       cores           RAM      cost
x1.32xlarge    64              2 TB     $14/hr
x1.16xlarge    32              1 TB     $7/hr
p2.16xlarge    32 + 16 GPUs    750 GB   $14/hr
r4.16xlarge    32              500 GB   $4/hr

STUPID LAMBDA TRICKS

Shivaram told me today he has this up to 6M transactions/sec (!)

BUT I CAN’T USE THE CLOUD!

PYWREN MAKES SCALE A BIT EASIER

• Do you have a Python function?

• Do you want to scale it?

• Try it out!

• Map: today

• BigReduce: 1.0 in a week

• Parameter server: experimental

THANKS! https://github.com/ericmjonas/pywren

Shivaram Venkataraman

Ben Recht

Ion Stoica

EXTRA SLIDES

BEHIND THE HOOD

UNDERSTANDING HOST ALLOCATION

SO WHEN IS THIS USEFUL?

• Parameter searching

• Last-minute NIPS experiments

• Expensive forward models

[Diagram: pipelines that alternate serial/local stages with massively parallel compute stages]

GETTING AROUND THE LIMITATIONS

• Runtime [Anaconda]

• Job lifetime [generators]; see the sketch after this list

• Synchronization (memcache/Redis?)

• Inter-lambda IPC
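One way to read the "generators" bullet, and a speculative one: write long-running work as a generator that yields checkpoints, so each 300-second Lambda invocation resumes from the last yielded state instead of starting over. The function names, state shape, and re-invocation scheme below are assumptions for illustration, not PyWren's actual mechanism:

import time

def compute_step(i):
    # hypothetical per-step work
    return i * 1e-9

def long_job(state=None):
    # a long computation written as a resumable generator:
    # each yield is a checkpoint the next invocation can resume from
    start, total = state or (0, 0.0)
    for i in range(start, 1000000):
        total += compute_step(i)
        if (i + 1) % 10000 == 0:
            yield (i + 1, total)
    yield (1000000, total)

def run_chunk(state=None):
    # advance roughly one Lambda's worth of work and return the new
    # state; the caller re-invokes with it until the job finishes
    deadline = time.time() + 240   # stay inside the 300 s limit
    last = state
    for last in long_job(state):
        if time.time() > deadline:
            break
    return last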

WORKER REUSE

COORDINATION?