Spark/Cassandra integration: Theory and Practice
DuyHai DOAN, Technical Advocate
Who Am I?!
Duy Hai DOAN, Cassandra technical advocate
• talks, meetups, confs
• open-source devs (Achilles, …)
• OSS Cassandra point of contact
☞ [email protected] ☞ @doanduyhai
Datastax!
• Founded in April 2010
• We contribute a lot to Apache Cassandra™
• 400+ customers (25 of the Fortune 100), 200+ employees
• Headquartered in the San Francisco Bay area
• EU headquarters in London, offices in France and Germany
• Datastax Enterprise = OSS Cassandra + extra features
Spark & Cassandra Presentation!
Spark & its eco-system!
Cassandra & token ranges!
Stand-alone cluster deployment!
What is Apache Spark?!
Created at UC Berkeley's AMPLab (open-sourced in 2010), now a top-level Apache project
General data processing framework
Faster than Hadoop thanks to in-memory processing
One-framework-many-components approach
Spark code example!
Setup

    val conf = new SparkConf(true)
      .setAppName("basic_example")
      .setMaster("local[3]")

    val sc = new SparkContext(conf)

Data-set (can be from text, CSV, JSON, Cassandra, HDFS, …)

    val people = List(("jdoe", "John DOE", 33),
                      ("hsue", "Helen SUE", 24),
                      ("rsmith", "Richard Smith", 33))
RDDs!
RDD = Resilient Distributed Dataset

    val parallelPeople: RDD[(String, String, Int)] = sc.parallelize(people)

    val extractAge: RDD[(Int, (String, String, Int))] = parallelPeople
      .map(tuple => (tuple._3, tuple))

    val groupByAge: RDD[(Int, Iterable[(String, String, Int)])] = extractAge.groupByKey()

    val countByAge: Map[Int, Long] = groupByAge.countByKey()
RDDs!
RDD[A] = distributed collection of A
• RDD[Person]
• RDD[(String, Int)], …
RDD[A] split into partitions
Partitions distributed over n workers → parallel computing
Direct transformations!
map(f: A => B): RDD[B]
filter(f: A => Boolean): RDD[A]
…
Transformations requiring shuffle!
groupByKey(): RDD[(K,V)]
reduceByKey(f: (V,V) => V): RDD[(K,V)]
join[W](otherRDD: RDD[(K,W)]): RDD[(K, (V,W))]
…
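To make the shuffle concrete, here is a minimal sketch that counts people per age with reduceByKey, reusing the sc and people values from the setup slide:

    // Count people per age. reduceByKey shuffles, but it pre-aggregates
    // on each partition before moving data across the network.
    val agePairs = sc.parallelize(people).map { case (_, _, age) => (age, 1) }
    val countPerAge = agePairs.reduceByKey(_ + _)

reduceByKey is usually preferable to groupByKey because it combines values locally before the shuffle.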
Actions!
collect(): Array[A]
take(number: Int): Array[A]
foreach(f: A => Unit): Unit
…
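Transformations are lazy; actions are what actually trigger the computation. A minimal sketch, again reusing sc and people from the setup:

    val adults = sc.parallelize(people).filter { case (_, _, age) => age >= 30 }
    val all: Array[(String, String, Int)] = adults.collect() // everything to the driver
    val firstTwo = adults.take(2)                            // only the first 2 elements
    adults.foreach(println)                                  // side-effect on each element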
Partitions transformations!
[Diagram: an RDD's partitions flow through map(tuple => (tuple._3, tuple)) (direct transformation), then groupByKey() (shuffle, expensive!), then countByKey() (final action).]
Spark eco-system!
[Diagram: Spark Core Engine (Scala/Java/Python) in the middle; on top: Spark Streaming, MLLib, GraphX, Spark SQL, …; below: Cluster Manager (Local, Standalone cluster, YARN, Mesos); underneath: Persistence (etc…).]
What is Apache Cassandra?!
Created at Facebook, an Apache project since 2009
Distributed NoSQL database
Eventual consistency (A & P of the CAP theorem)
Distributed table abstraction
Cassandra data distribution reminder!
Random distribution: each #partition is hashed → token = hash(#p)
Hash range: ]-X, X], where X is a huge number (2^64/2)
[Diagram: 8 nodes (n1 … n8) arranged in a ring.]
Cassandra token ranges!
Murmur3 hash function
A: ]0, X/8]
B: ]X/8, 2X/8]
C: ]2X/8, 3X/8]
D: ]3X/8, 4X/8]
E: ]4X/8, 5X/8]
F: ]5X/8, 6X/8]
G: ]6X/8, 7X/8]
H: ]7X/8, X]
[Diagram: the 8-node ring, each node owning one token range A … H.]
Linear scalability!
[Diagram: user_id1 … user_id5 hashed onto the ring; each key lands in the token range, and thus on the node, that owns its hash.]
Cassandra Query Language (CQL)!
INSERT INTO users(login, name, age) VALUES('jdoe', 'John DOE', 33);

UPDATE users SET age = 34 WHERE login = 'jdoe';

DELETE age FROM users WHERE login = 'jdoe';

SELECT age FROM users WHERE login = 'jdoe';
Spark & Cassandra Connector!
Spark Core API!
SparkSQL!
SparkStreaming!
Spark/Cassandra connector architecture!
• All Cassandra types supported and converted to Scala types
• Server-side data filtering (SELECT … WHERE …)
• Uses the Java driver underneath
• Scala and Java support
Connector architecture – Core API!
Cassandra tables exposed as Spark RDDs
Read from and write to Cassandra
Mapping of C* tables and rows to Scala objects
• CassandraRDD and CassandraRow
• Scala case class (object mapper)
• Scala tuples
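To illustrate the object mapper, here is a minimal sketch that reads the users table from the CQL slide as a Scala case class (the keyspace name my_keyspace is an assumption):

    import com.datastax.spark.connector._

    case class User(login: String, name: String, age: Int)

    // Columns login/name/age are mapped onto the case class fields.
    val users = sc.cassandraTable[User]("my_keyspace", "users")
    users.filter(_.age >= 34).collect().foreach(println)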
Demo: Spark Core
https://github.com/doanduyhai/Cassandra-Spark-Demo
Connector architecture – Spark SQL!
Mapping of Cassandra table to SchemaRDD
• CassandraSQLRow → SparkRow
• custom query plan
• push predicates to CQL for early filtering
SELECT * FROM user_emails WHERE login = 'jdoe';
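A minimal sketch of this API as it looked in the connector 1.x era (the CassandraSQLContext class; the keyspace name is an assumption):

    import org.apache.spark.sql.cassandra.CassandraSQLContext

    val cc = new CassandraSQLContext(sc)
    cc.setKeyspace("my_keyspace")
    // The login predicate is pushed down to CQL for early filtering.
    cc.sql("SELECT * FROM user_emails WHERE login = 'jdoe'")
      .collect()
      .foreach(println)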
Demo: Spark SQL
https://github.com/doanduyhai/Cassandra-Spark-Demo
Connector architecture – Spark Streaming!
Streaming data INTO Cassandra tables
• trivial setup
• be careful about your Cassandra data model when ingesting an infinite stream!!!
Streaming data OUT of Cassandra tables (CDC)?
• work in progress …
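For the INTO direction, a minimal sketch (the socket source, keyspace, table and column names are illustrative assumptions):

    import com.datastax.spark.connector.SomeColumns
    import com.datastax.spark.connector.streaming._
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.socketTextStream("localhost", 9999)
       .map(line => (line, System.currentTimeMillis()))   // (event, received_at)
       .saveToCassandra("my_keyspace", "events", SomeColumns("event", "received_at"))
    ssc.start()
    ssc.awaitTermination()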
Demo: Spark Streaming
https://github.com/doanduyhai/Cassandra-Spark-Demo
Q & A
Spark/Cassandra best practices!
Data locality!
Failure handling!
Cross-region operations!
Cluster deployment!
Stand-alone cluster
[Diagram: 5 nodes, each running Cassandra (C*) plus a Spark worker; one node also hosts the Spark master.]
Cluster deployment!
Cassandra – Spark placement: 1 Cassandra process ⟷ 1 Spark worker
[Diagram: the driver program talks to the Spark master; the master manages Spark workers, each running an executor co-located with a Cassandra (C*) process.]
Remember token ranges?!
A: ]0, X/8]
B: ]X/8, 2X/8]
C: ]2X/8, 3X/8]
D: ]3X/8, 4X/8]
E: ]4X/8, 5X/8]
F: ]5X/8, 6X/8]
G: ]6X/8, 7X/8]
H: ]7X/8, X]
[Diagram: the same 8-node ring, one token range per node.]
Data Locality!
[Diagram: each Spark partition of the RDD is built from the Cassandra token ranges owned by the local node, so every Spark worker reads from its co-located Cassandra process.]
Data Locality!
Use Murmur3Partitioner
[Diagram: the same co-located Cassandra + Spark worker deployment.]
Read data locality!
[Diagram: reads are served from the local Cassandra node.]
Read data locality!
[Diagram: subsequent Spark shuffle operations move data between workers.]
Write to Cassandra without data locality!
Because of shuffle, the original data locality is lost
[Diagram: async batches fan out writes to Cassandra from every worker.]
Write to Cassandra with data locality!

    rdd.repartitionByCassandraReplica("keyspace", "table")

[Diagram: after repartitioning, each worker writes to its local Cassandra replica.]
Write data locality!
• either stream data within the Spark layer using repartitionByCassandraReplica()
• or flush data to Cassandra with async batches
• in any case, there will be data movement over the network (sorry, no magic)
Joins with data locality!
CREATE TABLE artists(name text, style text, … PRIMARY KEY(name));
CREATE TABLE albums(title text, artist text, year int,… PRIMARY KEY(title));
    val join: CassandraJoinRDD[(String,Int), (String,String)] =
      sc.cassandraTable[(String,Int)](KEYSPACE, ALBUMS)
        // Select only useful columns for join and processing
        .select("artist", "year")
        .as((_: String, _: Int))
        // Repartition RDD by the "artists" PK, which is "name"
        .repartitionByCassandraReplica(KEYSPACE, ARTISTS)
        // Join with "artists" table, selecting only "name" and "country" columns
        .joinWithCassandraTable[(String,String)](KEYSPACE, ARTISTS, SomeColumns("name","country"))
        .on(SomeColumns("name"))
Joins pipeline with data locality!
① Local read from Cassandra
② Shuffle data with Spark
③ Repartition to map Cassandra replicas
④ Join with data locality
⑤ Another round of shuffling
⑥ Repartition again for Cassandra
⑦ Save to Cassandra with locality
Perfect data locality scenario!
• read locally from Cassandra
• use operations that do not require a shuffle in Spark (map, filter, …)
• repartitionByCassandraReplica() → to a table having the same partition key as the original table
• save back into this Cassandra table
USE CASE: sanitize, validate, normalize, transform data (sketched below)
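A minimal sketch of that pipeline, assuming a users table and a users_clean table that share the same partition key (login); all names are illustrative:

    import com.datastax.spark.connector._

    sc.cassandraTable[(String, String, Int)]("my_keyspace", "users") // local read
      .map { case (login, name, age) => (login, name.trim, age) }    // no shuffle
      .repartitionByCassandraReplica("my_keyspace", "users_clean")
      .saveToCassandra("my_keyspace", "users_clean")                 // local write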
Failure handling!
Stand-alone cluster
[Diagram: the co-located Cassandra + Spark worker deployment.]
Failure handling!
What if 1 node goes down? What if 1 node is overloaded?
☞ The Spark master will re-assign the job to another worker
[Diagram: the same deployment, with one node failed.]
Failure handling!
Oh no, my data locality!!!
Data Locality Impl!
Remember the RDD interface?

    abstract class RDD[T](…) {
      @DeveloperApi
      def compute(split: Partition, context: TaskContext): Iterator[T]

      protected def getPartitions: Array[Partition]

      protected def getPreferredLocations(split: Partition): Seq[String] = Nil
    }
Data Locality Impl!
getPreferredLocations(split: Partition) returns the IP addresses of the Cassandra replicas that own the data for this Spark partition
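An illustrative sketch of the idea (not the connector's actual source): a Cassandra-aware RDD can carry replica endpoints in its partitions and surface them as preferred locations.

    import org.apache.spark.{Partition, SparkContext, TaskContext}
    import org.apache.spark.rdd.RDD

    // Hypothetical partition type carrying the replicas of its token ranges.
    case class CassandraPartition(index: Int, replicaEndpoints: Seq[String]) extends Partition

    class CassandraAwareRDD(sc: SparkContext) extends RDD[String](sc, Nil) {
      def compute(split: Partition, context: TaskContext): Iterator[String] = Iterator.empty
      protected def getPartitions: Array[Partition] = Array.empty
      // Spark will try to schedule each task on one of these hosts first.
      override protected def getPreferredLocations(split: Partition): Seq[String] =
        split.asInstanceOf[CassandraPartition].replicaEndpoints
    }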
Failure handling!
If RF > 1, the Spark master chooses the next preferred location, which is a replica 😎
Tune parameters (example below):
① spark.locality.wait
② spark.locality.wait.process
③ spark.locality.wait.node
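A sketch of how these knobs are set; the values are illustrative assumptions to be tuned per workload:

    val conf = new SparkConf(true)
      .setAppName("locality_tuning")
      // How long to wait for a data-local slot before falling back
      // to a less local one (milliseconds).
      .set("spark.locality.wait", "3000")
      .set("spark.locality.wait.process", "2000")
      .set("spark.locality.wait.node", "1000")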
Cross-DC operations!

    val confDC1 = new SparkConf(true)
      .setAppName("data_migration")
      .setMaster("master_ip")
      .set("spark.cassandra.connection.host", "DC_1_hostnames")
      .set("spark.cassandra.connection.local_dc", "DC_1")

    val confDC2 = new SparkConf(true)
      .setAppName("data_migration")
      .setMaster("master_ip")
      .set("spark.cassandra.connection.host", "DC_2_hostnames")
      .set("spark.cassandra.connection.local_dc", "DC_2")

    val sc = new SparkContext(confDC1)

    sc.cassandraTable[Performer](KEYSPACE, PERFORMERS)
      .map[Performer](???)
      .saveToCassandra(KEYSPACE, PERFORMERS)
        (CassandraConnector(confDC2), implicitly[RowWriterFactory[Performer]])
Cross-cluster operations!

    val confCluster1 = new SparkConf(true)
      .setAppName("data_migration")
      .setMaster("master_ip")
      .set("spark.cassandra.connection.host", "cluster_1_hostnames")

    val confCluster2 = new SparkConf(true)
      .setAppName("data_migration")
      .setMaster("master_ip")
      .set("spark.cassandra.connection.host", "cluster_2_hostnames")

    val sc = new SparkContext(confCluster1)

    sc.cassandraTable[Performer](KEYSPACE, PERFORMERS)
      .map[Performer](???)
      .saveToCassandra(KEYSPACE, PERFORMERS)
        (CassandraConnector(confCluster2), implicitly[RowWriterFactory[Performer]])
Spark/Cassandra use-cases!
Data cleaning!
Schema migration!
Analytics!
Use Cases!
• Load data from various sources
• Analytics (join, aggregate, transform, …)
• Sanitize, validate, normalize, transform data
• Schema migration, data conversion
Data cleaning use-cases!
Bug in your application? Dirty input data?
☞ Spark job to clean it up! (perfect data locality)
Sanitize, validate, normalize, transform data
Demo: Data Cleaning
https://github.com/doanduyhai/Cassandra-Spark-Demo
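A minimal cleaning-job sketch in the spirit of the demo (keyspace, table and validation rules are illustrative assumptions):

    import com.datastax.spark.connector._

    case class User(login: String, name: String, age: Int)

    sc.cassandraTable[User]("my_keyspace", "users")
      .filter(u => u.age > 0 && u.age < 130)     // validate
      .map(u => u.copy(name = u.name.trim))      // normalize
      .saveToCassandra("my_keyspace", "users")   // same partition key: locality preserved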
Schema migration use-cases!
Business requirements change with time? Current data model no longer relevant?
☞ Spark job to migrate the data!
Schema migration, Data conversion
Demo: Data Migration
https://github.com/doanduyhai/Cassandra-Spark-Demo
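A migration sketch: read the old layout, adapt each row, write into the new table (the old and new schemas are illustrative assumptions):

    import com.datastax.spark.connector._

    sc.cassandraTable[(String, String, Int)]("my_keyspace", "users")
      .map { case (login, name, age) => (login, name, 2015 - age) } // e.g. derive a birth year
      .saveToCassandra("my_keyspace", "users_v2", SomeColumns("login", "name", "birth_year"))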
Analytics use-cases!
Given existing tables of performers and albums, I want:
① the top 10 most common music styles (pop, rock, RnB, …)
② performer productivity (album count) by origin country and by decade
☞ Spark job to compute analytics!
Analytics (join, aggregate, transform, …)
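A sketch for ①, reusing the artists(name, style, …) table from the join slide; the keyspace name is an assumption:

    import com.datastax.spark.connector._

    val top10Styles = sc.cassandraTable[(String, String)]("my_keyspace", "artists")
      .select("name", "style")
      .map { case (_, style) => (style, 1) }
      .reduceByKey(_ + _)           // count artists per style
      .map(_.swap)                  // (count, style)
      .sortByKey(ascending = false)
      .take(10)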
Analytics pipeline!
① Read from production transactional tables
② Perform aggregation with Spark
③ Save back data into dedicated tables for fast visualization
④ Repeat step ①
Demo: Analytics
https://github.com/doanduyhai/Cassandra-Spark-Demo
The “Big Data Platform”!
Our vision!
We had a dream … to provide a Big Data Platform … built for the performance & high availability demands of IoT, web & mobile applications
Until now in Datastax Enterprise!
Cassandra + Solr in the same JVM
Unlock full text search power for Cassandra
CQL syntax extension:

SELECT * FROM users WHERE solr_query = 'age:[33 TO *] AND gender:male';

SELECT * FROM users WHERE solr_query = 'lastname:*schwei?er';
Demo: Cassandra + Solr
Now with Spark!
Cassandra + Spark
Unlock full analytics power for Cassandra
Spark/Cassandra connector
Tomorrow (DSE 4.7)!
Cassandra + Spark + Solr
Unlock full text search + analytics power for Cassandra
How to speed up Spark jobs?!
Brute force solution for rich people:
• Add more RAM (100 GB, 200 GB, …)
How to speed up Spark jobs?!
Smart solution for broke people:
• Filter the initial dataset as much as possible
The idea!
① Filter as much as possible with Cassandra and Solr
② Fetch only a small data set into memory
③ Aggregate with Spark
☞ near-real-time interactive analytics queries become possible with restrictive enough criteria
Datastax Enterprise 4.7!
With a 3rd component for full text search, how do we preserve data locality?
Stand-alone search cluster caveat!
The bad way v1: perform search from the Spark driver program
Stand-alone search cluster caveat!
The bad way v2: search from Spark workers with restricted routing
Stand-alone search cluster caveat!
The bad way v3: search from Cassandra nodes with a connector to Solr
Stand-alone search cluster caveat!
The ops team won't be your friend:
• 3 clusters to manage: Spark, Cassandra & Solr/whatever
• lots of moving parts
Impacts on Spark jobs:
• increased response time due to latency
• the 99.9 percentile latency can be very high
Datastax Enterprise 4.7!
The right way: distributed "local search"
Datastax Enterprise 4.7!
① compute Spark partitions using Cassandra token ranges
② on each partition, use Solr for local data filtering (no distributed query!)
③ fetch data back into Spark for aggregations

    val join: CassandraJoinRDD[(String,Int), (String,String)] =
      sc.cassandraTable[(String,Int)](KEYSPACE, ALBUMS)
        // Select only useful columns for join and processing
        .select("artist", "year")
        .where("solr_query = 'style:*rock* AND ratings:[3 TO *]'")
        .as((_: String, _: Int))
        .repartitionByCassandraReplica(KEYSPACE, ARTISTS)
        .joinWithCassandraTable[(String,String)](KEYSPACE, ARTISTS, SomeColumns("name","country"))
        .on(SomeColumns("name"))
        .where("solr_query = 'age:[20 TO 30]'")
Datastax Enterprise 4.7!

    SELECT … FROM …
    WHERE token(#partition) > 3X/8
      AND token(#partition) <= 4X/8
      AND solr_query = 'full text search expression';

Advantages of the same-JVM Cassandra + Solr integration:
① restrict the query to the local token range (e.g. ]3X/8, 4X/8])
② single-pass local full text search (no fan-out)
③ data retrieval
Datastax Enterprise 4.7!
Scalable solution: ×2 volume → ×2 nodes → constant processing time
Demo: Cassandra + Spark + Solr
https://github.com/doanduyhai/Cassandra-Spark-Demo
Q & A