
A Deep Dive into Structured Streaming: Apache Spark Meetup at Bloomberg 2016


Page 1: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

A Deep Dive into Structured Streaming
Tathagata “TD” Das @tathadas

Spark Meetup @ Bloomberg, 27 September 2016

Page 2: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming in Apache Spark

Spark Streaming changed how people write streaming apps


[Diagram: the Spark stack, with SQL, Streaming, MLlib, and GraphX libraries on top of Spark Core]

Functional, concise, and expressive

Fault-tolerant state management

Unified stack with batch processing

More than 50% of users consider it the most important part of Apache Spark

Page 3: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming apps are growing more complex


Page 4: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming computations don’t run in isolation

Need to interact with batch data, interactive analysis, machine learning, etc.

Page 5: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Use case: IoT Device Monitoring

IoT events from Kafka

ETL into long-term storage
- Prevent data loss
- Prevent duplicates

Status monitoring
- Handle late data
- Aggregate on windows on event time

Interactively debug issues
- Consistency with the event stream

Anomaly detection
- Learn models offline
- Use online + continuous learning

Page 6: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Use case: IoT Device Monitoring

IoT events from Kafka

ETL into long-term storage
- Prevent data loss
- Prevent duplicates

Status monitoring
- Handle late data
- Aggregate on windows on event time

Interactively debug issues
- Consistency with the event stream

Anomaly detection
- Learn models offline
- Use online + continuous learning

Continuous Applications: not just streaming any more

Page 7: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Pain points with DStreams

1. Processing with event-time, dealing with late data
- DStream API exposes batch time, hard to incorporate event-time

2. Interoperating streaming with batch AND interactive
- RDD/DStream have similar APIs, but still require translation

3. Reasoning about end-to-end guarantees
- Requires carefully constructing sinks that handle failures correctly
- Data consistency in the storage while it is being updated

Page 8: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Structured Streaming

Page 9: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

The simplest way to perform streaming analytics is not having to reason about streaming at all

Page 10: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

New Model

Trigger: every 1 sec

[Diagram: at times 1, 2, 3, the Input table holds data up to 1, 2, 3, and a Query runs over it]

Input: data from the source as an append-only table

Trigger: how frequently to check the input for new data

Query: operations on the input; the usual map/filter/reduce plus new window and session ops

Page 11: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

New Model

Trigger: every 1 sec

[Diagram: at each trigger, the Query runs over the Input table (data up to 1, 2, 3) and produces a Result table (output for data up to 1, 2, 3); the complete output is written each time]

Result: final operated table, updated at every trigger interval

Output: what part of the result to write to the data sink after every trigger
- Complete output: write the full result table every time

Page 12: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

New Model

Trigger: every 1 sec

[Diagram: same model, but the Output now shows only the delta of the Result table at each trigger]

Result: final operated table, updated at every trigger interval

Output: what part of the result to write to the data sink after every trigger
- Complete output: write the full result table every time
- Delta output: write only the rows that changed in the result from the previous batch
- Append output: write only new rows

*Not all output modes are feasible with all queries
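As a rough sketch of how this model surfaces in the Spark 2.0 API (assuming a streaming DataFrame named input, as in the later ETL slides; the query and table names below are placeholders):

  import org.apache.spark.sql.streaming.ProcessingTime

  // Result table: a running count per device, recomputed incrementally at each trigger
  val counts = input.groupBy("device").count()

  val query = counts.writeStream
    .outputMode("complete")              // write the full result table after every trigger
    .trigger(ProcessingTime("1 second")) // check the source for new data every second
    .format("memory")                    // in-memory sink, queryable as a table
    .queryName("device_counts")          // placeholder name
    .start()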

Page 13: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Static, bounded data

Streaming, unbounded data

Single API!

API - Dataset/DataFrame

Page 14: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016


Dataset

A distributed, bounded or unbounded, collection of data with a known schema

A high-level abstraction for selecting, filtering, mapping, reducing, and aggregating structured data

Common API across Scala, Java, Python, R

Page 15: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Datasets with SQL

SELECT type, avg(signal)
FROM devices
GROUP BY type

Standards based: supports the most popular constructs in HiveQL, ANSI SQL

Compatible: use JDBC to connect with popular BI tools

Page 16: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Dynamic Datasets (DataFrames)

ctxt.table("device-data")
  .groupBy("type")
  .agg("type", avg("signal"))
  .map(lambda …)
  .collect()

Concise: great for ad-hoc interactive analysis

Interoperable: based on R / pandas, easy to go back and forth

Page 17: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Statically-typed Datasets

No boilerplate: automatically convert to/from domain objects

Safe: compile-time checks for correctness

case class DeviceData(`type`: String, signal: Int)

// Convert data to domain objects
val ds: Dataset[DeviceData] = ctx.read.json("data.json").as[DeviceData]

// Compute a histogram of signal by device type
val hist = ds.groupByKey(_.`type`).mapGroups { case (deviceType, data) =>
  val buckets = new Array[Int](10)
  data.map(_.signal).foreach { a => buckets(a / 10) += 1 }
  (deviceType, buckets)
}

Page 18: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Unified, Structured APIs in Spark 2.0

                  SQL        DataFrames      Datasets
Syntax errors     Runtime    Compile time    Compile time
Analysis errors   Runtime    Runtime         Compile time
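A minimal sketch of where a typo is caught in each API (the misspelled column signall and the file path are made up for illustration):

  import spark.implicits._
  case class DeviceData(device: String, signal: Int)

  val df = spark.read.json("data.json")     // DataFrame
  val ds = df.as[DeviceData]                // Dataset[DeviceData]

  spark.sql("SELECT signall FROM devices")  // SQL: the analysis error only surfaces at runtime
  df.select("signall")                      // DataFrame: compiles, fails at runtime during analysis
  // ds.map(d => d.signall)                 // Dataset: does not compile, caught at compile time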

Page 19: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Batch ETL with DataFrames

# Read from JSON file
input = spark.read.format("json").load("source-path")

# Select some devices
result = input.select("device", "signal").where("signal > 15")

# Write to Parquet file
result.write.format("parquet").save("dest-path")

Page 20: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming ETL with DataFrames

# Read from JSON file stream: replace read with readStream
input = spark.readStream.format("json").load("source-path")

# Select some devices: code does not change
result = input.select("device", "signal").where("signal > 15")

# Write to Parquet file stream: replace save() with start()
result.writeStream.format("parquet").start("dest-path")

Page 21: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming ETL with DataFrames

readStream creates a streaming DataFrame, but does not start any of the computation

writeStream…start() defines where and how to output the data, and starts the processing

input = spark.readStream.format("json").load("source-path")

result = input.select("device", "signal").where("signal > 15")

result.writeStream.format("parquet").start("dest-path")

Page 22: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Streaming ETL with DataFrames

[Diagram: at triggers 1, 2, 3, the Input table grows, the Result is an append-only table, and the Output (append mode) writes only the new rows in the result of trigger 2 and trigger 3]

input = spark.readStream.format("json").load("source-path")

result = input.select("device", "signal").where("signal > 15")

result.writeStream.format("parquet").start("dest-path")

Page 23: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Demo

Page 24: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Continuous Aggregations

Continuously compute the average signal across all devices:

input.avg("signal")

Continuously compute the average signal of each type of device:

input.groupBy("device-type").avg("signal")

Page 25: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Continuous Windowed Aggregations

Continuously compute the average signal of each type of device over the last 10 minutes, using event time:

input.groupBy(
    $"device-type",
    window($"event-time-col", "10 min"))
  .avg("signal")

Simplifies event-time stream processing (not possible in DStreams)
Works on both streaming and batch jobs

Page 26: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Joining streams with static data

kafkaDataset = spark.readStream
  .format("kafka")
  .option("subscribe", "iot-updates")
  .option("kafka.bootstrap.servers", ...)
  .load()

staticDataset = ctxt.read
  .jdbc("jdbc://", "iot-device-info")

joinedDataset = kafkaDataset.join(
  staticDataset, "device-type")


Join streaming data from Kafka with static data via JDBC to enrich the streaming data …

… without having to think that you are joining streaming data

Page 27: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Output Modes

Defines what is output every time there is a trigger
Different output modes make sense for different queries

Append mode, with non-aggregation queries:

input.select("device", "signal")
  .writeStream
  .outputMode("append")
  .format("parquet")
  .start("dest-path")

Complete mode, with aggregation queries:

input.agg(count("*"))
  .writeStream
  .outputMode("complete")
  .format("parquet")
  .start("dest-path")

Page 28: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Query Management

query = result.writeStream
  .format("parquet")
  .outputMode("append")
  .start("dest-path")

query.stop()
query.awaitTermination()
query.exception()
query.sourceStatuses()
query.sinkStatus()

query: a handle to the running streaming computation for managing it

- Stop it, wait for it to terminate
- Get status
- Get error, if terminated

Multiple queries can be active at the same time

Each query has unique name for keeping track
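For example, a minimal sketch of naming a query and tracking the active queries (the query name is a placeholder; spark.streams is the StreamingQueryManager in Spark 2.0):

  // Give the query a name so it can be identified among multiple active queries
  val query = result.writeStream
    .queryName("etl-to-parquet")   // placeholder name
    .format("parquet")
    .outputMode("append")
    .start("dest-path")

  // List all currently active queries, then block on this one
  spark.streams.active.foreach(q => println(q.name))
  query.awaitTermination()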

Page 29: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Query Execution

Logically: Dataset operations on a table (i.e., as easy to understand as batch)

Physically: Spark automatically runs the query in a streaming fashion (i.e., incrementally and continuously)

[Diagram: DataFrame -> Logical Plan -> Catalyst optimizer -> continuous, incremental execution]

Page 30: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Structured Streaming

High-level streaming API built on Datasets/DataFrames
- Event time, windowing, sessions, sources & sinks
- End-to-end exactly-once semantics

Unifies streaming, interactive, and batch queries
- Aggregate data in a stream, then serve using JDBC
- Add, remove, change queries at runtime
- Build and apply ML models

Page 31: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

What can you do with this that’s hard with other engines?

True unification
- Combines batch + streaming + interactive workloads
- Same code + same super-optimized engine for everything

Flexible API tightly integrated with the engine
- Choose your own tool: Dataset/DataFrame/SQL
- Greater debuggability and performance

Benefits of Spark
- In-memory computing, elastic scaling, fault-tolerance, straggler mitigation, …

Page 32: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Underneath the Hood

Page 33: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Batch Execution on Spark SQL

DataFrame/Dataset

Logical Plan: an abstract representation of the query

Page 34: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Batch Execution on Spark SQL

[Diagram: the Catalyst planner]

SQL AST / DataFrame / Dataset
  -> Unresolved Logical Plan
  -> Analysis (using the Catalog)
  -> Logical Plan
  -> Logical Optimization
  -> Optimized Logical Plan
  -> Physical Planning
  -> Physical Plans
  -> Cost Model
  -> Selected Physical Plan
  -> Code Generation
  -> RDDs

Helluva lot of magic!

Page 35: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Batch Execution on Spark SQL

DataFrame/Dataset -> Logical Plan -> Planner -> Execution Plan

Run super-optimized Spark jobs to compute results

Project Tungsten - Phase 1 and 2

Code optimizations
- Bytecode generation
- JVM intrinsics, vectorization
- Operations on serialized data

Memory optimizations
- Compact and fast encoding
- Off-heap memory

Page 36: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Continuous Incremental Execution

The planner knows how to convert a streaming logical plan into a continuous series of incremental execution plans, each of which processes the next chunk of streaming data

[Diagram: DataFrame/Dataset -> Logical Plan -> Planner -> Incremental Execution Plans 1, 2, 3, 4, ...]

Page 37: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Continuous Incremental Execution

The planner polls for new data from the sources, incrementally executes it, and writes the results to the sink

[Diagram: Incremental Execution 1 (Offsets: [19-105], Count: 87), Incremental Execution 2 (Offsets: [106-197], Count: 92)]

Page 38: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Continuous Aggregations

Maintain the running aggregate as in-memory state, backed by a WAL in the file system for fault-tolerance

State data is generated and used across incremental executions

[Diagram: Incremental Execution 1 (Offsets: [19-105], Running Count: 87, state: 87 in memory) -> Incremental Execution 2 (Offsets: [106-197], Count: 87+92 = 179, state: 179)]

Page 39: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

All data and metadata in the system needs to be recoverable / replayable

[Diagram: the Planner drives Incremental Executions 1 and 2, reading from the source, updating state, and writing to the sink]

Page 40: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant Planner

Tracks offsets by writing the offset range of each execution to a write-ahead log (WAL) in HDFS

Offsets are written to the fault-tolerant WAL before execution

[Diagram: Planner, source, state, sink, Incremental Executions 1 and 2]

Page 41: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant Planner

Tracks offsets by writing the offset range of each execution to a write-ahead log (WAL) in HDFS

A failed planner fails the current execution

[Diagram: the Planner fails during Incremental Execution 2, which becomes a Failed Execution]

Page 42: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant Planner

Tracks offsets by writing the offset range of each execution to a write-ahead log (WAL) in HDFS

Reads the log to recover from failures, and re-executes the exact range of offsets

[Diagram: the Restarted Planner reads the offsets back from the WAL and regenerates the same executions, re-running the Failed Execution as Incremental Execution 2]

Page 43: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant Sources

Structured Streaming sources are replayable by design (e.g., Kafka, Kinesis, files) and generate exactly the same data given the offsets recovered by the planner

[Diagram: Planner, replayable source, state, sink, Incremental Executions 1 and 2]

Page 44: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant State

Intermediate "state data" is maintained in versioned key-value maps in the Spark workers, backed by HDFS

The planner makes sure the correct version of the state is used to re-execute after a failure

[Diagram: state is fault-tolerant with the WAL]

Page 45: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Fault-tolerance

Fault-tolerant Sinks

Sinks are idempotent by design, and handle re-executions to avoid double-committing the output

[Diagram: Planner, source, state, idempotent-by-design sink, Incremental Executions 1 and 2]

Page 46: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

offset tracking in WAL
+ state management
+ fault-tolerant sources and sinks
= end-to-end exactly-once guarantees
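In practice, the offset WAL and state versions described above live under a checkpoint directory supplied when the query is started; a minimal sketch (the HDFS path is a placeholder, and exactly-once delivery still assumes a replayable source and an idempotent sink):

  // The checkpoint directory on HDFS (or another fault-tolerant file system)
  // holds the offset WAL and versioned state used for recovery
  val query = result.writeStream
    .format("parquet")
    .option("checkpointLocation", "hdfs:///checkpoints/iot-etl")  // placeholder path
    .outputMode("append")
    .start("dest-path")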

Page 47: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016


Fast, fault-tolerant, exactly-once stateful stream processing

without having to reason about streaming

Page 49: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Comparison with Other Engines

[Comparison table not reproduced] Read the blog to understand this table

Page 50: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Current Status: Spark 2.0

Basic infrastructure and API
- Event time, windows, aggregations
- Append and Complete output modes
- Support for a subset of batch queries

Sources and sinks
- Sources: files
- Sinks: files and in-memory table

Experimental release to set the future direction

Not ready for production but good to experiment with and provide feedback
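A minimal end-to-end sketch using only what shipped in 2.0, i.e. a file source feeding the in-memory table sink (directory path, schema, and table name are placeholders):

  import org.apache.spark.sql.types.StructType

  // File sources need an explicit schema
  val schema = new StructType().add("device", "string").add("signal", "int")

  val input = spark.readStream
    .schema(schema)
    .json("source-path")                 // placeholder directory of JSON files

  val query = input.where("signal > 15")
    .writeStream
    .format("memory")                    // in-memory table sink, for interactive queries
    .queryName("filtered_signals")       // placeholder table name
    .outputMode("append")
    .start()

  spark.sql("SELECT count(*) FROM filtered_signals").show()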

Page 51: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

In Progress

Page 52: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Future Direction

Metrics

Stability and scalability

Support for more queries
- Watermarks and late data
- Sessionization
- More output modes

ML integrations

Make Structured Streaming ready for production workloads as soon as possible

Page 53: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Stay tuned on our Databricks blogs for more information and examples on Structured Streaming

Try latest version of Apache Spark

Try Apache Spark with Databricks


http://databricks.com/try

Page 54: A Deep Dive into Structured Streaming:  Apache Spark Meetup at Bloomberg 2016

Structured Streaming

Making Continuous Applications easier, faster, and smarter

Follow me @tathadas