
Big data shim


Page 1: Big data shim


Map Reduce Introduction

Simon Shim

The problems…

• Performing database operations in parallel is complicated, and scalability is challenging to implement correctly

• How do we store 1,000x as much data?

• How do we process data where we don’t know the schema in advance?

What does it mean to be reliable?

Ken Arnold, CORBA designer*:

“Failure is the defining difference between distributed and local programming”

*(Serious Über-hacker)

What we need

• An efficient way to decompose problems into parallel parts

• A way to read and write data in parallel

• A way to minimize bandwidth usage

• A reliable way to get computation done

Page 2: Big data shim


A Radical Way Out…

• Nodes talk to each other as little as possible – maybe never

– “Shared nothing” architecture

• The programmer should not be allowed to communicate explicitly between nodes

• Data is spread throughout machines in advance, computation happens where it’s stored.

Locality

• Master program divides up tasks based on location of data: tries to have map tasks on same machine as physical file data, or at least same rack

• Map task inputs are divided into 64—128 MB blocks: same size as filesystem chunks

– Process components of a single file in parallel

Fault Tolerance

• Tasks designed for independence

• Master detects worker failures

• Master re-executes tasks that fail while in progress

• Restarting one task does not require communication with other tasks

• Data is replicated to increase availability, durability

How MapReduce is Structured

• Functional programming meets distributed computing

• A batch data processing system

• Factors out many reliability concerns from application logic

Page 3: Big data shim


Functional Programming Review

• Functional operations do not modify data structures: They always create new ones

• Original data still exists in unmodified form

• Data flows are implicit in program design

• Order of operations does not matter

Functional Programming Review

fun foo(l: int list) =

sum(l) + mul(l) + length(l)

Order of sum() and mul(), etc does not matter – they do not modify l

“Updates” Don’t Modify Structures

fun append(x, lst) =

let lst' = reverse lst in

reverse ( x :: lst' )

The append() function above reverses the list, adds the new element to the front, and returns all of that, reversed; the net effect is to append the item.

But it never modifies lst!

Functions Can Be Used As Arguments

fun DoDouble(f, x) = f (f x)

It does not matter what f does to its argument; DoDouble() will do it twice.

Page 4: Big data shim


Map

Creates a new list by applying f to each element of the input list; returns output in order.


Fold

Moves across a list, applying f to each element plus an accumulator. f returns the next accumulator value, which is combined with the next element of the list

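For readers who prefer Java to ML, a rough analogue of fold using Stream.reduce (a minimal sketch, assuming Java 9+ for List.of; the names are illustrative). The same three functions from the foo() example can all be written this way:

import java.util.List;

public class FoldDemo {
    public static void main(String[] args) {
        List<Integer> l = List.of(1, 2, 3, 4);

        // Fold: walk the list, combining each element with an accumulator.
        int sum = l.stream().reduce(0, (acc, x) -> acc + x);   // initial accumulator = 0
        int mul = l.stream().reduce(1, (acc, x) -> acc * x);   // initial accumulator = 1
        int len = l.stream().reduce(0, (acc, x) -> acc + 1);   // count elements

        System.out.println(sum + " " + mul + " " + len);       // 10 24 4
    }
}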

Example

fun foo(l: int list) =

sum(l) + mul(l) + length(l)

How can we implement this?

Example (Solved)

fun foo(l: int list) =

sum(l) + mul(l) + length(l)

fun sum(lst)    = foldl (fn (x,a) => x+a) 0 lst

fun mul(lst)    = foldl (fn (x,a) => x*a) 1 lst

fun length(lst) = foldl (fn (x,a) => 1+a) 0 lst

Page 5: Big data shim


Implicit Parallelism In map

• In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements

• If the order in which f is applied to the elements of the list does not matter, we can reorder or parallelize execution

• This is the “secret” that MapReduce exploits

Motivation: Large Scale Data Processing

• Want to process lots of data ( > 1 TB)

• Want to parallelize across hundreds/thousands of CPUs

• … Want to make this easy

MapReduce

• Automatic parallelization & distribution

• Fault-tolerant

• Provides status and monitoring tools

• Clean abstraction for programmers

Programming Model

• Borrows from functional programming

• Users implement interface of two functions:

– map (in_key, in_value) -> (out_key, intermediate_value) list

– reduce (out_key, intermediate_value list) -> out_value list

Page 6: Big data shim


map (in_key, in_value) -> (out_key, intermediate_value) list

• After the map phase is over, all the intermediate values for a given output key are combined together into a list

• reduce() combines those intermediate values into one or more final values for that same output key

• (in practice, usually only one final value per key)

reduce

reduce (intermediate_key, intermediate_value list) -> (out_key, out_value) list

Example: Filter Mapper

let map(k, v) =
  if (isPrime(v)) then emit(k, v)

("foo", 7)   ->  ("foo", 7)
("test", 10) ->  (nothing)

Page 7: Big data shim


Example: Sum Reducer

let reduce(k, vals) =
  sum = 0
  foreach int v in vals:
    sum += v
  emit(k, sum)

("A", [42, 100, 312]) -> ("A", 454)
("B", [12, 6, -2])    -> ("B", 16)

[Diagram: map tasks read input key/value pairs from data stores 1..n and emit (key, values...) pairs. == Barrier ==: intermediate values are aggregated by output key. Each reduce task then receives one key with its list of intermediate values and produces the final values for that key.]

Parallelism

• map() functions run in parallel, creating different intermediate values from different input data sets

• reduce() functions also run in parallel, each working on a different output key

• All values are processed independently

• Bottleneck: reduce phase can’t start until map phase is completely finished.

Example: Count word occurrences

map(String input_key, String input_value):

// input_key: document name

// input_value: document contents

for each word w in input_value:

EmitIntermediate(w, "1");

reduce(String output_key, Iterator intermediate_values):

// output_key: a word

// output_values: a list of counts

int result = 0;

for each v in intermediate_values:

result += ParseInt(v);

Emit(AsString(result));

Page 8: Big data shim


Example vs. Actual Source Code

• Example is written in pseudo-code

• Actual implementation is in C++, using a MapReduce library

• Bindings for Python and Java exist via interfaces

• True code is somewhat more involved (defines how the input key/values are divided up and accessed, etc.)

Locality

• Master program divvies up tasks based on location of data: tries to have map() tasks on same machine as physical file data, or at least same rack

• map() task inputs are divided into 64 MB blocks: same size as Google File System chunks

Fault Tolerance

• Master detects worker failures – Re-executes completed & in-progress map() tasks

– Re-executes in-progress reduce() tasks

• Master notices particular input key/values cause crashes in map(), and skips those values on re-execution.
  – Effect: Can work around bugs in third-party libraries!

Optimizations

• No reduce can start until map is complete:

– A single slow disk controller can rate-limit the whole process

• Master redundantly executes “slow-moving” map tasks; uses results of first copy to finish

Page 9: Big data shim


Combining Phase

• Run on mapper nodes after map phase

• “Mini-reduce,” only on local map output

• Used to save bandwidth before sending data to full reducer

• Reducer can be combiner if commutative & associative

Combiner, graphically

[Diagram: on one mapper machine, the combiner replaces the raw map output sent to the reducer with locally combined output.]
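Since summing counts is commutative and associative, the word-count reducer can double as the combiner. A minimal, self-contained sketch using the org.apache.hadoop.mapreduce API (class and job names here are illustrative, not from the slides):

import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountWithCombiner {
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);           // emit (word, 1)
      }
    }
  }

  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) sum += v.get();
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "wordcount-with-combiner");
    job.setJarByClass(WordCountWithCombiner.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class); // mini-reduce on each mapper's local output
    job.setReducerClass(SumReducer.class);  // full reduce after the shuffle
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}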

Word Count Example redux

map(String input_key, String input_value):

// input_key: document name

// input_value: document contents

for each word w in input_value:

EmitIntermediate(w, 1);

reduce(String output_key, Iterator<int> intermediate_values):

// output_key: a word

// output_values: a list of counts

int result = 0;

for each v in intermediate_values:

result += v;

Emit(result);

wordcount

public void map(WritableComparable key, Writable value, OutputCollector output,
                Reporter reporter) throws IOException {
  // 'word' and 'one' are class fields (a reusable text key and the constant 1)
  // omitted on the slide.
  String line = ((UTF8) value).toString();
  StringTokenizer itr = new StringTokenizer(line);
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    output.collect(word, one);
  }
}

public void reduce(WritableComparable key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
  int sum = 0;
  while (values.hasNext()) {
    sum += ((IntWritable) values.next()).get();
  }
  output.collect(key, new IntWritable(sum));
}

Page 10: Big data shim


Distributed “Tail Recursion”

• MapReduce doesn’t make infinite scalability automatic.

• Is word count infinitely scalable? Why (not)?

What About This?

UniqueValuesReducer(K key, iter<V> values) {
  Set<V> seen = new HashSet<V>();
  for (V val : values) {
    if (!seen.contains(val)) {
      seen.add(val);          // HashSet uses add(), not put()
      emit(key, val);
    }
  }
}

A Scalable Implementation

KeyifyMapper(K key, V val) {

emit ((key, val), 1);

}

IgnoreValuesCombiner(K key, iter<V> values) {

emit (key, 1);

}

UnkeyifyReducer(K key, iter<V> values) {

let (k', v') = key;

emit (k', v');

}

MapReduce Conclusions

• MapReduce has proven to be a useful abstraction

• Greatly simplifies large-scale computations at Google

• Functional programming paradigm can be applied to large-scale applications

• Fun to use: focus on problem, let library deal w/ messy details

Page 11: Big data shim


Hadoop Introduction

MapReduce/Hadoop

Google calls it:    Hadoop equivalent:
MapReduce           Hadoop
GFS                 HDFS
Bigtable            HBase
Chubby              ZooKeeper

Some MapReduce Terminology

• Job – A “full program” - an execution of a Mapper and Reducer across a data set

• Task – An execution of a Mapper or a Reducer on a slice of data

– a.k.a. Task-In-Progress (TIP)

• Task Attempt – A particular instance of an attempt to execute a task on a machine

MapReduce: High Level

[Diagram: a MapReduce job submitted by a client computer goes to the JobTracker on the master node; TaskTrackers on the slave nodes each run task instances.]

Page 12: Big data shim


Job Distribution

• MapReduce programs are contained in a Java “jar” file + an XML file containing serialized program configuration options

• Running a MapReduce job places these files into the HDFS and notifies TaskTrackers where to retrieve the relevant program code

• … Where’s the data distribution?

Data Distribution

• Implicit in design of MapReduce!

– All mappers are equivalent; so map whatever data is local to a particular node in HDFS

• If lots of data does happen to pile up on the same node, nearby nodes will map instead

– Data transfer is handled implicitly by HDFS

What Happens In MapReduce? Depth First

Job Launch Process: Client

• Client program creates a JobConf (see the driver sketch after this list)
  – Identify classes implementing Mapper and Reducer interfaces
    • JobConf.setMapperClass(), setReducerClass()
  – Specify inputs, outputs
    • FileInputFormat.addInputPath(), FileOutputFormat.setOutputPath()
  – Optionally, other options too:
    • JobConf.setNumReduceTasks(), JobConf.setOutputFormat()…
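A minimal driver sketch with the classic org.apache.hadoop.mapred API named on this slide. IdentityMapper and IdentityReducer are stand-ins so the sketch compiles on its own; in a real job you would pass your own Mapper and Reducer classes:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class MyJobDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(MyJobDriver.class);
    conf.setJobName("my-job");

    // Identify the classes implementing Mapper and Reducer
    // (identity classes here; substitute your own in a real job).
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);

    // Output key/value types (with the default TextInputFormat plus the
    // identity mapper, keys are byte offsets and values are lines of text).
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);

    // Specify inputs and outputs.
    FileInputFormat.addInputPath(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Optional knobs mentioned on the slide.
    conf.setNumReduceTasks(2);
    conf.setOutputFormat(TextOutputFormat.class);

    // runJob() blocks until completion; submitJob() would return immediately.
    JobClient.runJob(conf);
  }
}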

Page 13: Big data shim


Job Launch Process: JobClient

• Pass JobConf to JobClient.runJob() or submitJob()

– runJob() blocks, submitJob() does not

• JobClient:

– Determines proper division of input into InputSplits

– Sends job data to master JobTracker server

Job Launch Process: JobTracker

• JobTracker:

– Inserts jar and JobConf (serialized to XML) in shared location

– Posts a JobInProgress to its run queue

Job Launch Process: TaskTracker

• TaskTrackers running on slave nodes periodically query JobTracker for work

• Retrieve job-specific jar and config

• Launch task in separate instance of Java

Job Launch Process: TaskRunner

• TaskRunner, MapTaskRunner, MapRunner work in a daisy-chain to launch your Mapper
  – Task knows ahead of time which InputSplits it should be mapping
  – Calls Mapper once for each record retrieved from the InputSplit

• Running the Reducer is much the same

Page 14: Big data shim


Mapper

• void map(K1 key, V1 value, OutputCollector<K2, V2> output, Reporter reporter)

• K types implement WritableComparable

• V types implement Writable

What is Writable?

• Hadoop defines its own classes for strings (Text), integers (IntWritable), etc.

• All values are instances of Writable

• All keys are instances of WritableComparable
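As a hedged illustration of what the interface looks like, here is a small custom value type implementing Writable; PageViewWritable is invented for this example and is not part of Hadoop. A key type would additionally implement WritableComparable (i.e., add compareTo()):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical value type: a page-view record usable as a Hadoop value.
public class PageViewWritable implements Writable {
  private long timestamp;
  private int durationSeconds;

  public PageViewWritable() { }                       // required no-arg constructor

  public PageViewWritable(long timestamp, int durationSeconds) {
    this.timestamp = timestamp;
    this.durationSeconds = durationSeconds;
  }

  @Override
  public void write(DataOutput out) throws IOException {    // serialize the fields
    out.writeLong(timestamp);
    out.writeInt(durationSeconds);
  }

  @Override
  public void readFields(DataInput in) throws IOException { // deserialize in the same order
    timestamp = in.readLong();
    durationSeconds = in.readInt();
  }
}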

Getting Data To The Mapper

[Diagram: an InputFormat breaks each input file into InputSplits; a RecordReader reads each split and feeds (key, value) records to a Mapper, which produces intermediates.]

Data: stream of keys and values

Input (byte offset -> line):
  <0>   Hi how are you
  <100> I am good
  <0>   Hello Hello how are you
  <105> Not so good

Map output (one (word, 1) pair per word):
  Hi 1, how 1, are 1, You 1
  Hello 1, Hello 1, how 1, are 1, You 1

Intermediate results, sorted and merged by key:
  are [1 1]
  Hello [1 1]
  Hi [1]
  how [1 1]
  you [1 1]

Reduce output:
  are 2
  Hello 2
  Hi 1
  how 2
  you 2

Page 15: Big data shim


wordcount

public void map(WritableComparable key, Writable value, OutputCollector output,
                Reporter reporter) throws IOException {
  // 'word' and 'one' are class fields (a reusable text key and the constant 1)
  // omitted on the slide.
  String line = ((UTF8) value).toString();
  StringTokenizer itr = new StringTokenizer(line);
  while (itr.hasMoreTokens()) {
    word.set(itr.nextToken());
    output.collect(word, one);
  }
}

public void reduce(WritableComparable key, Iterator values,
                   OutputCollector output, Reporter reporter) throws IOException {
  int sum = 0;
  while (values.hasNext()) {
    sum += ((IntWritable) values.next()).get();
  }
  output.collect(key, new IntWritable(sum));
}

public static void main(String[] args) throws Exception {
  Configuration conf = new Configuration();
  System.out.println("Started Job");
  Job job = new Job(conf, "wordcount");
  job.setJarByClass(WordCount.class);
  job.setMapperClass(Map.class);
  job.setReducerClass(Reduce.class);
  job.setOutputKeyClass(Text.class);
  job.setOutputValueClass(IntWritable.class);
  job.setInputFormatClass(TextInputFormat.class);
  job.setOutputFormatClass(TextOutputFormat.class);
  FileInputFormat.addInputPath(job, new Path(args[0]));
  FileOutputFormat.setOutputPath(job, new Path(args[1]));
  System.exit(job.waitForCompletion(true) ? 0 : 1);
}

Reading Data

• Data sets are specified by InputFormats

– Defines input data (e.g., a directory)

– Identifies partitions of the data that form an InputSplit

– Factory for RecordReader objects to extract (k, v) records from the input source

FileInputFormat and Friends

• TextInputFormat – Treats each ‘\n’-terminated line of a file as a value

• KeyValueTextInputFormat – Maps ‘\n’- terminated text lines of “k SEP v”

• SequenceFileInputFormat – Binary file of (k, v) pairs with some add’l metadata

• SequenceFileAsTextInputFormat – Same, but maps (k.toString(), v.toString())

Page 16: Big data shim


Filtering File Inputs

• FileInputFormat will read all files out of a specified directory and send them to the mapper

• Delegates filtering this file list to a method subclasses may override

– e.g., Create your own “xyzFileInputFormat” to read *.xyz from directory list
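One way to sketch the "*.xyz only" idea without writing a full InputFormat subclass is a PathFilter registered on the input paths. XyzPathFilter is a made-up name, and this assumes FileInputFormat.setInputPathFilter() as the wiring point:

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.PathFilter;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.JobConf;

// Hypothetical filter: only accept files ending in ".xyz".
public class XyzPathFilter implements PathFilter {
  @Override
  public boolean accept(Path path) {
    return path.getName().endsWith(".xyz");
  }
}

// In the driver (sketch):
//   JobConf conf = ...;
//   FileInputFormat.setInputPathFilter(conf, XyzPathFilter.class);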

Record Readers

• Each InputFormat provides its own RecordReader implementation

– Provides (unused?) capability multiplexing

• LineRecordReader – Reads a line from a text file

• KeyValueRecordReader – Used by KeyValueTextInputFormat

Input Split Size

• FileInputFormat will divide large files into chunks

– Exact size controlled by mapred.min.split.size

• RecordReaders receive file, offset, and length of chunk

• Custom InputFormat implementations may override split size – e.g., “NeverChunkFile”
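A minimal sketch of the "NeverChunkFile" idea under the classic API: subclass TextInputFormat and report every file as non-splittable, so each file becomes exactly one InputSplit (the class name is illustrative):

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.TextInputFormat;

// Hypothetical InputFormat: never split a file into multiple chunks.
public class NeverChunkFileInputFormat extends TextInputFormat {
  @Override
  protected boolean isSplitable(FileSystem fs, Path file) {
    return false;   // one file -> one InputSplit -> one map task
  }
}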

Sending Data To Reducers

• Map function receives OutputCollector object

– OutputCollector.collect() takes (k, v) elements

• Any (WritableComparable, Writable) can be used

• By default, mapper output type assumed to be same as reducer output type

Page 17: Big data shim


WritableComparator

• Compares WritableComparable data

– Will call WritableComparable.compare()

– Can provide fast path for serialized data

• JobConf.setOutputValueGroupingComparator()
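As a hedged sketch of that fast path, a comparator that orders serialized IntWritable keys straight from their bytes instead of deserializing them first (the class name is illustrative; a real job would register it as its key comparator):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.WritableComparator;

// Hypothetical fast-path comparator: compares serialized IntWritable keys
// directly from the byte buffers.
public class IntKeyRawComparator extends WritableComparator {
  public IntKeyRawComparator() {
    super(IntWritable.class);
  }

  @Override
  public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
    int left  = readInt(b1, s1);   // read the 4-byte int straight from the buffer
    int right = readInt(b2, s2);
    return Integer.compare(left, right);
  }
}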

Partition And Shuffle

[Diagram: each Mapper's intermediates pass through a Partitioner; shuffling delivers each partition to the Reducer responsible for it.]

Partitioner

• int getPartition(key, val, numPartitions)

– Outputs the partition number for a given key

– One partition == values sent to one Reduce task

• HashPartitioner used by default

– Uses key.hashCode() to return partition num

• JobConf sets Partitioner implementation
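A minimal sketch of a custom partitioner under the classic interface. FirstLetterPartitioner is a made-up example that routes keys starting with the same letter to the same reduce task, mirroring what HashPartitioner does with hashCode():

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.Partitioner;

public class FirstLetterPartitioner implements Partitioner<Text, IntWritable> {
  @Override
  public void configure(JobConf job) {
    // no per-job configuration needed for this sketch
  }

  @Override
  public int getPartition(Text key, IntWritable value, int numPartitions) {
    String k = key.toString();
    char first = k.isEmpty() ? ' ' : k.charAt(0);
    return (Character.toLowerCase(first) & Integer.MAX_VALUE) % numPartitions;
  }
}

// Registered in the driver (sketch):
//   conf.setPartitionerClass(FirstLetterPartitioner.class);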

Reduction

• reduce(K2 key, Iterator<V2> values, OutputCollector<K3, V3> output, Reporter reporter)

• Keys & values sent to one partition all go to the same reduce task

• Calls are sorted by key – “earlier” keys are reduced and output before “later” keys

Page 18: Big data shim


Finally: Writing The Output

[Diagram: each Reducer writes its results through a RecordWriter, supplied by the OutputFormat, into its own output file.]

OutputFormat

• Analogous to InputFormat

• TextOutputFormat – Writes “key val\n” strings to output file

• SequenceFileOutputFormat – Uses a binary format to pack (k, v) pairs

• NullOutputFormat – Discards output

Pig Latin

Reference: Programming Pig by Alan Gates


Pig

• Too many lines of Hadoop code, even for simple logic (how many lines do you have for word count?)
• High-level dataflow language (Pig Latin)
  – Much simpler than Java: roughly 10 lines of Pig Latin vs. 200 lines in Java
  – Simplifies the data processing
• Framework for analyzing large unstructured and semi-structured data on top of Hadoop
  – Pig Engine parses and compiles Pig Latin scripts into MapReduce jobs that run on top of Hadoop
  – Pig Latin is a high-level, SQL-like dataflow language: the high-level language interface for Hadoop

Page 19: Big data shim



Pig runs over Hadoop

Who uses Pig?

• 70% of production jobs at Yahoo (10ks per day)

• Twitter, LinkedIn, Ebay, AOL,…

• Used to

– Process web logs

– Build user behavior models

– Process images

– Build maps of the web

– Do research on large data sets

Word Count using MapReduce vs. Word Count using Pig

Lines   = LOAD 'input/hadoop.log' AS (line: chararray);
Words   = FOREACH Lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
Groups  = GROUP Words BY word;
Counts  = FOREACH Groups GENERATE group, COUNT(Words) AS count;
Results = ORDER Counts BY count DESC;
Top5    = LIMIT Results 5;
STORE Top5 INTO '/output/top5words';

Page 20: Big data shim



Pig Join

Join in Pig
– Various algorithms are already available
– Some of them are generic enough to support multi-way joins
– No need to consider integration into a Select-Project-Join-Aggregate (SPJA) workflow

A = LOAD 'input/join/A';
B = LOAD 'input/join/B';
C = JOIN A BY $0, B BY $1;
DUMP C;

Motivation by Example

Suppose we have user data in one file and website data in another file.
We need to find the top 5 most visited pages by users aged 18-25.

In Pig Latin

Page 21: Big data shim



Map Data

How to map the data to records

– By default, one line → one record

– User can customize the loading process

How to identify attributes and map them to the schema

– Delimiter to separate different attributes

– By default, delimiter is tab. Customizable.

Pig Data Types

• Scalar types: int, long, float, double, boolean, null, chararray, bytearray
• Complex types: fields, tuples, bags, relations
  – A Field is a piece of data
  – A Tuple is an ordered set of fields
  – A Bag is a collection of tuples
  – A Relation is a bag
• Samples:
  – Tuple ~ a row in a database: (0002576169, Tom, 20, 4.0)
  – Bag ~ a table or view in a database: {(0002576169, Tom, 20, 4.0), (0002576170, Mike, 20, 3.6), (0002576171, Lucy, 19, 4.0), …}

Pig Operations

• Loading data
  – LOAD loads input data
  – Lines = LOAD 'input/access.log' AS (line: chararray);
• Projection
  – FOREACH … GENERATE (similar to SELECT)
  – Takes a set of expressions and applies them to every record
• Grouping
  – GROUP collects together records with the same key
• Dump/Store
  – DUMP displays results on screen; STORE saves results to the file system
• Aggregation
  – AVG, COUNT, MAX, MIN, SUM

Pig Operations

• Pig data loaders
  – PigStorage: loads/stores relations using a field-delimited text format
  – TextLoader: loads relations from a plain-text format
  – BinStorage: loads/stores relations from or to binary files
  – HBaseStorage: loads/stores from HBase
  – PigDump: stores relations by writing the toString() representation of tuples, one per line

students = LOAD 'student.txt' USING PigStorage(',') AS (studentid: int, name: chararray, age: int, gpa: double);

(John, 18, 4.0F)
(Mary, 19, 3.8F)
(Bill, 20, 3.9F)

Page 22: Big data shim



Schema

User can optionally define the schema of the input data. Once the schema of the source data is given, the schema of the intermediate relations will be induced by Pig.

Schema 1:
A = LOAD 'input/A' AS (name:chararray, age:int);
B = FILTER A BY age == 20;

Schema 2:
A = LOAD 'input/A' AS (name:chararray, age:chararray);
B = FILTER A BY age != '20';


Date Types

Pig Operations - Foreach

• Foreach ... Generate

– The Foreach … Generate statement iterates over the members of a bag

– The result of a Foreach is another bag

– Elements are named as in the input bag

studentid = FOREACH students GENERATE studentid, name;

Pig Operations – Positional Reference

• Fields are referred to by positional notation or by name (alias).

                 First field   Second field   Third field
Data type        chararray     int            float
Position         $0            $1             $2
Name (alias)     name          age            gpa
Field value      John          18             4.0

students = LOAD 'student.txt' USING PigStorage() AS (name:chararray, age:int, gpa:float);
DUMP students;
(John,18,4.0F)
(Mary,19,3.8F)
(Bill,20,3.9F)
studentname = FOREACH students GENERATE $0 AS studentname;

Page 23: Big data shim


Pig Operations- Group

• Groups the data in one or more relations

– Collect records with the same key into a bag.

daily = LOAD 'NYSE_daily' AS (exchange, stock);
grpd  = GROUP daily BY stock;
STORE grpd INTO 'by_group';

Pig Operations – Dump&Store

• DUMP Operator:

– display output results, will always trigger execution

• STORE Operator:

– Pig will parse entire script prior to writing for efficiency purposes

daily = LOAD 'NYSE_daily' AS (exchange, stock);
grpd  = GROUP daily BY stock;
STORE grpd INTO 'by_group' USING PigStorage(',');

Pig Operations - Count

• Compute the number of elements in a bag

• Use the COUNT function to compute the number of elements in a bag.

• COUNT requires a preceding GROUP ALL statement for global counts and GROUP BY statement for group counts.

X = FOREACH B GENERATE COUNT(A);

Pig Operation - Order

• Sorts a relation based on one or more fields

• In Pig, relations are unordered. If you order relation A to produce relation X, relations A and X still contain the same elements.

student = ORDER students BY gpa DESC;

Page 24: Big data shim



Operators

Diagnostic Operators

– Show the status/metadata of the relations

– Used for debugging

– Will not be integrated into execution plan

– DESCRIBE, EXPLAIN, ILLUSTRATE.


Functions

Built-in Functions

– Hard-coded routines offered by Pig.

User Defined Function (UDF)

– Supports customized functionalities

– Piggy Bank, a warehouse for UDFs

How to run Pig Latin scripts

• Local mode
  – Local host and local file system are used
  – Neither Hadoop nor HDFS is required
  – Useful for prototyping and debugging
• MapReduce mode
  – Runs on a Hadoop cluster and HDFS
• Batch mode – run a script directly
  – pig -x local my_pig_script.pig
  – pig -x mapreduce my_pig_script.pig
• Interactive mode – use the Pig shell (Grunt) to run scripts
  – grunt> Lines = LOAD '/input/input.txt' AS (line:chararray);
  – grunt> Unique = DISTINCT Lines;
  – grunt> DUMP Unique;


Pig Execution Modes

• Local mode

– Launch single JVM

– Access local file system

– No MR job running

• Hadoop mode

– Execute a sequence of MR jobs

– Pig interacts with Hadoop master node

Page 25: Big data shim


Compilation

[Diagram slides: how Pig Latin scripts are compiled into MapReduce jobs]

Hive

• Hive: data warehousing application in Hadoop

– Query language is HQL, variant of SQL

– Tables stored on HDFS as flat files

– Developed by Facebook, now open source

Programming Hive by Edward Capriolo et al.

https://cwiki.apache.org/confluence/display/Hive/Tutorial

HIVE Architecture

HIVE – a warehouse solution over the MapReduce framework

Hive Database

Tables ○ Analogous to tables in relational database

○ Each table has a corresponding HDFS dir

○ Data is serialized and stored in files within dir

○ Support external tables on data stored in HDFS, NFS or local directory.

Partitions
  ○ A table can have one or more partitions (one level), which determine the distribution of data within subdirectories of the table directory.

Page 26: Big data shim


HIVE Database cont.

e.g.: Table T lives under /wh/T and is partitioned on columns ds and ctry.

For ds=20140101 and ctry=US, the data is stored within the directory /wh/T/ds=20140101/ctry=US

– Buckets
  • Data in each partition are divided into buckets based on the hash of a column in the table. Each bucket is stored as a file in the partition directory.

hiveQL

• Supports a SQL-like query language called HiveQL for select, join, aggregate, union all, and sub-queries in the FROM clause

• Supports DDL statements such as CREATE TABLE with serialization format, partitioning and bucketing columns

• Commands to load data from external sources and INSERT into HIVE tables

• Does NOT support UPDATE and DELETE

Data Model

• Tables

– Typed columns (int, float, string, boolean)

– Also, list: map (for JSON-like data)

• Partitions

– For example, range-partition tables by date

• Buckets

– Hash partitions within ranges (useful for sampling, join optimization)

Source: cc-licensed slide by Cloudera

Examples – DDL Operations

CREATE TABLE sample (foo INT, bar STRING) PARTITIONED BY (ds STRING);

DESCRIBE sample;

ALTER TABLE sample ADD COLUMNS (new_col INT);

DROP TABLE sample;

Page 27: Big data shim


CREATE TABLE page_view(viewTime INT, userid BIGINT,

page_url STRING, referrer_url STRING,

ip STRING COMMENT 'IP Address of the User')

COMMENT 'This is the page view table'

PARTITIONED BY(country STRING)

STORED AS SEQUENCEFILE;

Examples – DML Operations

LOAD DATA LOCAL INPATH './sample.txt' OVERWRITE INTO TABLE sample PARTITION (ds='2012-02-24');

LOAD DATA INPATH '/user/shim/hive/sample.txt' OVERWRITE INTO TABLE sample PARTITION (ds='2012-02-24');

SELECTS and FILTERS

SELECT foo FROM sample WHERE ds='2012-02-24';

INSERT OVERWRITE DIRECTORY '/tmp/hdfs_out' SELECT * FROM sample WHERE ds='2012-02-24';

INSERT OVERWRITE LOCAL DIRECTORY '/tmp/hive-sample-out' SELECT * FROM sample;

Aggregations and Groups

SELECT MAX(foo) FROM sample;

SELECT ds, COUNT(*), SUM(foo) FROM sample GROUP BY ds;

Page 28: Big data shim


Join

SELECT * FROM customer c JOIN order_cust o ON (c.id=o.cus_id);

SELECT c.id,c.name,c.address,ce.exp FROM customer c JOIN (SELECT cus_id,sum(price) AS exp FROM order_cust GROUP BY cus_id) ce ON (c.id=ce.cus_id);

CREATE TABLE customer (id INT,name STRING,address STRING)

ROW FORMAT DELIMITED FIELDS TERMINATED BY '#';

CREATE TABLE order_cust (id INT,cus_id INT,prod_id INT,price INT)

ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

Hive: Example

• Relational join on two tables:
  – Tables of word counts from the Shakespeare and Bible collections

SELECT s.word, s.freq, k.freq
FROM shakespeare s JOIN bible k ON (s.word = k.word)
WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC LIMIT 10;

the   25848   62394
I     23031    8854
and   19671   38985
to    18038   13526
of    16700   34654
a     14170    8057
you   12702    2720
my    11297    4135

Hive: Behind the Scenes

SELECT s.word, s.freq, k.freq
FROM shakespeare s JOIN bible k ON (s.word = k.word)
WHERE s.freq >= 1 AND k.freq >= 1
ORDER BY s.freq DESC LIMIT 10;

(Abstract Syntax Tree)

(TOK_QUERY (TOK_FROM (TOK_JOIN (TOK_TABREF shakespeare s) (TOK_TABREF bible k) (= (. (TOK_TABLE_OR_COL s) word) (. (TOK_TABLE_OR_COL k) word)))) (TOK_INSERT (TOK_DESTINATION (TOK_DIR TOK_TMP_FILE)) (TOK_SELECT (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) word)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL s) freq)) (TOK_SELEXPR (. (TOK_TABLE_OR_COL k) freq))) (TOK_WHERE (AND (>= (. (TOK_TABLE_OR_COL s) freq) 1) (>= (. (TOK_TABLE_OR_COL k) freq) 1))) (TOK_ORDERBY (TOK_TABSORTCOLNAMEDESC (. (TOK_TABLE_OR_COL s) freq))) (TOK_LIMIT 10)))

(compiled into one or more MapReduce jobs)

References:

1. http://pig.apache.org (Pig official site)

2. http://hive.apache.org (Hive official site)

3. Pig Docs: http://pig.apache.org/docs/r0.9.0

4. Hive Docs: https://cwiki.apache.org/confluence/display/Hive/Home#Home-UserDocumentation

5. Hive Tutorial: https://cwiki.apache.org/confluence/display/Hive/Tutorial

6. Pig Papers: http://wiki.apache.org/pig/PigTalksPapers

7. http://en.wikipedia.org/wiki/Pig_Latin http://en.wikipedia.org/wiki/Apache_Hive

8. Hive – A petabyte scale data warehouse using Hadoop: infolab.stanford.edu/~ragho/hive-icde2010.pdf

Page 29: Big data shim


Recommendation Engine

Simon Shim

Partial contents from K. Han’s presentation

Data Mining

• Mining patterns from Data

• Statistics?

• Machine learning?


Data Mining in Use

• The US Government uses Data Mining to track fraud

• A Supermarket becomes an information broker

• Basketball teams use it to track game strategy

• Recommend similar items

• Holding on to Good Customers

• Weeding out Bad Customers

Examples of data mining

• Frequently bought together
• Movie recommendation

Page 30: Big data shim


Examples

• Heart monitoring
• Genome mining
• Keyword search

Data Mining

• Frequent pattern mining

• Machine learning – Supervised

– Unsupervised

• Recommendation system

• Graph mining

• Unstructured data

• Big data

• Stream mining

Frequent Pattern Mining


Page 31: Big data shim


Machine Learning

• Supervised

• Unsupervised (Clustering)

Binary Classification

Checking   Duration (years)   Savings ($k)   Current Loans   Loan Purpose   Risky?
Yes        1                  10             Yes             TV             0
Yes        2                  4              No              TV             1
No         5                  75             No              Car            0
Yes        10                 66             No              Car            1
Yes        5                  83             Yes             Car            0
Yes        1                  11             No              TV             0
Yes        4                  99             Yes             Car            0

Decision Tree

Neural Network

• Perceptron multi-layer NN

Page 32: Big data shim


Support Vector Machine (SVM)

SVM Spam Filter

• Transmission
  – IP address – 167.12.24.555
  – Sender URL – spam.com
• Email header
  – From – "[email protected]"
  – To – "undisclosed"
• Email body
  – # of paragraphs
  – # of words
• Email structure
  – # of attachments
  – # of links

Regression

• Linear regression, non-linear regression
• Applications
  – Stock price prediction
  – Credit scoring
  – Employment forecast

Logistic regression

Page 33: Big data shim


Supervised learning

machine learning

Graph Analysis

• Friend recommendation
• Drug discovery

Twitter analysis

• TwitterRank: PageRank approach

Page 34: Big data shim


Facebook Graph Search

• Restaurants liked by my friend

Prediction

• Rating prediction
  – Given how a user rated other items, predict the user's rating for a given item
• Top-N recommendation
  – Given the list of items liked by a user, recommend new items that the user might like

Feedback data

• Explicit feedback

– Ratings, reviews

• Implicit feedback

– Purchase behavior: frequency, recency,

– Browsing behavior: # of visits, time of visit, stay duration, clicks,

Data Analysis Example

• Carl Morris (Harvard statistics professor) used a Markov chain

• Baseball: a discrete game

• The state has four components

• State (0, 0, 0, 0): (out count, runner on 1st base, runner on 2nd base, runner on 3rd base)

• With 3 possible out counts and 2 values for each base, there are 3 x 2 x 2 x 2 = 24 states

• Each inning starts as (0, 0, 0, 0) and ends as (3, 0, 0, 0)

Page 35: Big data shim


NERV

MoneyBall

• Oakland Athletics
  – Lowest team salary in 2002: about $41 million
  – Difficult to recruit good players
  – Paul DePodesta: assistant to general manager Billy Beane

• 2000 – 2003: advanced to the post season 4 years in a row

• Baseball management based on statistics and scientific data

Case Study: Item Based Recommendation

• Using metadata from the item, compute similarity between items
  – Description, price, category
  – Normalize these into a feature vector
  – N-dimensional

• Compute the distance between vectors (see the sketch after this list)
  – Euclidean distance score
  – Cosine similarity score
  – Pearson correlation score
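As a small illustration of one of these scores, a plain-Java sketch of cosine similarity between two item feature vectors (the vectors and their values are made up for the example):

public class CosineSimilarity {
  // Cosine similarity: dot(a, b) / (|a| * |b|); closer to 1.0 means more similar.
  public static double cosine(double[] a, double[] b) {
    double dot = 0, normA = 0, normB = 0;
    for (int i = 0; i < a.length; i++) {
      dot   += a[i] * b[i];
      normA += a[i] * a[i];
      normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
  }

  public static void main(String[] args) {
    double[] itemX = {0.9, 0.1, 0.4};   // e.g. normalized price, category, description features
    double[] itemY = {0.8, 0.2, 0.5};
    System.out.println(cosine(itemX, itemY));
  }
}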

Page 36: Big data shim


Architecture

Item Based Recommendation

• Collaborative Filtering

– Leverage users’ collective intelligence

– Similar users tend to like similar items

– Amazon’s product recommendation is a good example

How to implement Collaborative Filtering

• Construct co-occurrence matrix (item similarity matrix)

– Increment S[i,j] and S[j,i] if item i and item j are liked by the same user (see the sketch after this list)

– Repeat this for all users

• For item k, find the most co-occurred items from matrix as recommendation
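A minimal plain-Java sketch of that matrix-building step, assuming each user's liked items are given as an array of item ids (the representation is an assumption of this sketch):

import java.util.List;

public class CoOccurrence {
  // For every pair of items liked by the same user, increment S[i][j] and S[j][i].
  public static int[][] build(List<int[]> likedItemsPerUser, int numItems) {
    int[][] s = new int[numItems][numItems];
    for (int[] items : likedItemsPerUser) {          // one array of item ids per user
      for (int a = 0; a < items.length; a++) {
        for (int b = a + 1; b < items.length; b++) {
          s[items[a]][items[b]]++;
          s[items[b]][items[a]]++;
        }
      }
    }
    return s;
  }
}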

Item Based Similarity

         Out of Africa   Star Wars   Air Force One   Liar, Liar
John     4               4           5               1
Adam     1               1           2               5
Laura    ?               4           5               2

Page 37: Big data shim


User Based Recommendation

• First group users into different clusters
  – Represent users as feature vectors
  – Information about users: geo-location, gender, age, …
  – Items users liked (rated)
  – K-nearest neighbors (KNN) is used

• From each cluster, find representative items
  – Graph traversal
  – Highest rated items
  – Most liked items

User Based Similarity

• A user's movie ratings

         Out of Africa   Star Wars   Air Force One   Liar, Liar
John     4               4           5               1
Adam     1               1           2               5
Laura    ?               4           5               2

Latent Factors

Challenges

• Cold start

– For new users/items, no information

• Sparse Data

– Item reviews are not there

• Scalability Issue

– Big data means more computation

Page 38: Big data shim


What is Mahout?

• Open source machine learning library

– Supports MapReduce

• Recommendation/collaborative filtering

• Classification: Supervised Learning

• Clustering: Unsupervised Learning

Personalized Recommendation

• Build co-occurrence (S) matrix for items

• Build a preference vector (P) for user

• Multiply the two: R = S x P

• Sort the final vector from R

Example

• N items and M users

• Create S (N x N)

• Create P (N x 1), with an entry for each item i liked by the user

• Compute S x P

• Sort the results by score (see the sketch below)

Example

[Diagram: a 100,000 x 100,000 movie co-occurrence matrix is multiplied by a user's preference vector; the resulting scores are sorted to give, per user id, a ranked list of (itemid, score) pairs.]
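A minimal plain-Java sketch of the scoring step described above: compute R = S x P and sort item ids by score. Skipping items the user has already rated is an assumption of this sketch, not something stated on the slide:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class Recommend {
  // s is the N x N co-occurrence matrix, p is the user's N-entry preference vector.
  public static List<int[]> topItems(int[][] s, int[] p) {
    int n = p.length;
    List<int[]> scored = new ArrayList<>();          // each entry is {itemId, score}
    for (int i = 0; i < n; i++) {
      int score = 0;
      for (int j = 0; j < n; j++) {
        score += s[i][j] * p[j];                     // R[i] = sum_j S[i][j] * P[j]
      }
      if (p[i] == 0 && score > 0) {                  // only score items the user hasn't rated
        scored.add(new int[]{i, score});
      }
    }
    scored.sort(Comparator.comparingInt((int[] e) -> e[1]).reversed());
    return scored;                                   // ranked (itemId, score) pairs
  }
}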

Page 39: Big data shim


Similarity

• Co-occurrence

• Log likelihood

• Location based

• Gender, age

• Cosine similarity

• Euclidean distance

What is key?

• Understand business domain

• Garbage in, garbage out

– Filtering, cleaning

• One size does not fit all

• Start with simple way and improve

• Create automation for experiments and tweaks