Distributed Computing and introduction of Hadoop - Ankit Minocha

Distributed computing presentation



Delhi Hadoop User Group MeetUp - 10th September 2011 : Slides


Page 1: Distributed computing presentation

Distributed Computing

…and introduction of Hadoop

- Ankit Minocha

Page 2: Distributed computing presentation

Outline

Introduction to Distributed Computing

Parallel vs Distributed Computing

Page 3: Distributed computing presentation

Computer Speedup

Moore’s Law: “The density of transistors on a chip doubles every 18 months, for the same cost” (1965)

Page 4: Distributed computing presentation

Scope of problems

What can you do with 1 computer?

What can you do with 100 computers?

What can you do with an entire data center?

Page 5: Distributed computing presentation

Rendering multiple frames of high-quality animation

Page 6: Distributed computing presentation

Parallelization Idea

Parallelization is “easy” if processing can be cleanly split into n units:

[Diagram: partition the problem; the work splits into units w1, w2, w3]

Page 7: Distributed computing presentation

Parallelization Idea (2)

Spawn worker threads:

[Diagram: units w1, w2, w3 each assigned to a worker thread]

In a parallel computation, we would like to have as many threads as we have processors; e.g., a four-processor computer would be able to run four threads at the same time.
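A minimal Java sketch of this idea (the harness and the name processUnit are illustrative, not from the slides):

import java.util.ArrayList;
import java.util.List;

public class ParallelSketch {
    // Process work units w1..wn, one worker thread per processor.
    public static void main(String[] args) throws InterruptedException {
        int nThreads = Runtime.getRuntime().availableProcessors();
        List<Thread> workers = new ArrayList<>();
        for (int i = 0; i < nThreads; i++) {
            final int unit = i;                          // wi: this worker's unit
            Thread t = new Thread(() -> processUnit(unit));
            workers.add(t);
            t.start();                                   // spawn worker thread
        }
        for (Thread t : workers) t.join();               // wait for all workers to finish
    }

    static void processUnit(int unit) {
        // Illustrative stand-in for real per-unit work.
        System.out.println("processed unit " + unit);
    }
}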

Page 8: Distributed computing presentation

Parallelization Idea (3)

Workers process data:

[Diagram: each thread processing its work unit w1, w2, w3]

Page 9: Distributed computing presentation

Parallelization Idea (4)

Report results:

[Diagram: the threads' outputs for w1, w2, w3 merged into the results]

Page 10: Distributed computing presentation

Parallelization Pitfalls

But this model is too simple!

How do we assign work units to worker threads?
What if we have more work units than threads?
How do we aggregate the results at the end?
How do we know all the workers have finished?
What if the work cannot be divided into completely separate tasks?

What is the common theme of all of these problems?

Page 11: Distributed computing presentation

Parallelization Pitfalls (2)

Each of these problems represents a point at which multiple threads must communicate with one another, or access a shared resource.

Golden rule: Any memory that can be used by multiple threads must have an associated synchronization system!

Page 12: Distributed computing presentation

What is Wrong With This?

Thread 1:

void foo() {
    x++;
    y = x;
}

Thread 2:

void bar() {
    y++;
    x += 3;
}

If the initial state is y = 0, x = 6, what happens after these threads finish running?
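To see this concretely, here is a runnable Java version of the two threads (the harness is my addition, not from the slides). Run it repeatedly and the printed values change; x alone can end up 7, 9, or 10 depending on how the increments interleave:

public class Race {
    static int x = 6, y = 0;   // initial state from the slide

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> { x++; y = x; });   // foo()
        Thread t2 = new Thread(() -> { y++; x += 3; });  // bar()
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("x = " + x + ", y = " + y);   // varies run to run
    }
}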

Page 13: Distributed computing presentation

Multithreaded = Unpredictability

When we run a multithreaded program, we don’t know what order threads run in, nor do we know when they will interrupt one another.

Many things that look like “one step” operations actually take several steps under the hood:

Thread 1:

void foo() {
    eax = mem[x];
    inc eax;
    mem[x] = eax;
    ebx = mem[x];
    mem[y] = ebx;
}

Thread 2:

void bar() {
    eax = mem[y];
    inc eax;
    mem[y] = eax;
    eax = mem[x];
    add eax, 3;
    mem[x] = eax;
}

Page 14: Distributed computing presentation

Multithreaded = Unpredictability

This applies to more than just integers:

Pulling work units from a queue.
Reporting work back to the master unit.
Telling another thread that it can begin the “next phase” of processing.
e.g. torrent leechers and seeders.

… All require synchronization!

Page 15: Distributed computing presentation

Synchronization Primitives

A synchronization primitive is a special shared variable that guarantees that it can only be accessed atomically.

Hardware support guarantees that operations on synchronization primitives only ever take one step.
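Java's java.util.concurrent.atomic package exposes such hardware-backed primitives directly; a small sketch (mine, not from the slides) showing that atomic updates never lose an increment:

import java.util.concurrent.atomic.AtomicInteger;

public class AtomicSketch {
    static final AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> counter.incrementAndGet()); // one atomic step
        Thread t2 = new Thread(() -> counter.addAndGet(3));      // also one atomic step
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // always 4; no interleaving can lose an update
    }
}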

Page 16: Distributed computing presentation

Semaphores

A semaphore is a flag that can be raised or lowered in one step.

Semaphores were flags that railroad engineers would use when entering a shared track.

[Images: the semaphore flag in its Set and Reset positions]

Only one side of the semaphore can ever be red! (Can both be green?)

Page 17: Distributed computing presentation

Semaphores

set() and reset() can be thought of as lock() and unlock().

Calls to lock() when the semaphore is already locked cause the thread to block.

Pitfalls: Must “bind” semaphores to particular objects; must remember to unlock correctly.
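In Java, the closest standard primitive is java.util.concurrent.Semaphore with one permit, and the “remember to unlock” pitfall is conventionally handled with try/finally; a sketch (my adaptation, not code from the slides):

import java.util.concurrent.Semaphore;

public class LockSketch {
    static final Semaphore sem = new Semaphore(1); // one permit: acts as a lock
    static int x = 6, y = 0;

    static void foo() throws InterruptedException {
        sem.acquire();       // lock(): blocks if the permit is already taken
        try {
            x++;
            y = x;
        } finally {
            sem.release();   // unlock(): runs even if the guarded code throws
        }
    }
}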

Page 18: Distributed computing presentation

The “corrected” example

Thread 1:

void foo() {
    sem.lock();
    x++;
    y = x;
    sem.unlock();
}

Thread 2:

void bar() {
    sem.lock();
    y++;
    x += 3;
    sem.unlock();
}

Global var “Semaphore sem = new Semaphore();” guards access to x & y

Page 19: Distributed computing presentation

Condition Variables

A condition variable notifies threads that a particular condition has been met.

Inform another thread that a queue now contains elements to pull from (or that it’s empty – request more elements!)

Pitfall: What if nobody’s listening?
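A minimal sketch of the queue case using Java's built-in monitor condition (wait/notify); this is my illustration, not code from the slides. A notify() with no thread waiting is simply lost, which is why take() re-checks the condition in a loop instead of trusting that it “heard” the notification:

import java.util.ArrayDeque;
import java.util.Queue;

public class WorkQueue {
    private final Queue<Runnable> queue = new ArrayDeque<>();

    public synchronized void put(Runnable task) {
        queue.add(task);
        notify();                    // condition met: the queue now has an element
    }

    public synchronized Runnable take() throws InterruptedException {
        while (queue.isEmpty()) {
            wait();                  // release the monitor and sleep until notified
        }
        return queue.remove();
    }
}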

Page 20: Distributed computing presentation

The final example

Thread 1:

void foo() {
    sem.lock();
    x++;
    y = x;
    fooDone = true;
    sem.unlock();
    fooFinishedCV.notify();
}

Thread 2:

void bar() {
    sem.lock();
    if (!fooDone) fooFinishedCV.wait(sem);
    y++;
    x += 3;
    sem.unlock();
}

Global vars:

Semaphore sem = new Semaphore();
ConditionVar fooFinishedCV = new ConditionVar();
boolean fooDone = false;

(In production code the wait would sit in a while loop rather than an if, since condition variables can wake spuriously.)

Page 21: Distributed computing presentation

Too Much Synchronization? Deadlock

Synchronization becomes even more complicated when multiple locks can be used

Can cause entire system to “get stuck”

Thread A:

semaphore1.lock();
semaphore2.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();

Thread B:

semaphore2.lock();
semaphore1.lock();
/* use data guarded by semaphores */
semaphore1.unlock();
semaphore2.unlock();

(Image: RPI CSCI.4210 Operating Systems notes)
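A Java rendering of the same trap (my sketch, not from the slides): if Thread A holds semaphore1 while Thread B holds semaphore2, each blocks forever waiting for the other's. The standard fix, noted here as an addition to the slide, is to make every thread acquire the locks in one agreed order:

import java.util.concurrent.Semaphore;

public class DeadlockSketch {
    static final Semaphore semaphore1 = new Semaphore(1);
    static final Semaphore semaphore2 = new Semaphore(1);

    static void threadA() throws InterruptedException {
        semaphore1.acquire();
        semaphore2.acquire();        // may block forever if B already holds semaphore2
        /* use data guarded by semaphores */
        semaphore1.release();
        semaphore2.release();
    }

    static void threadB() throws InterruptedException {
        semaphore2.acquire();        // reversed order: the deadlock risk
        semaphore1.acquire();
        /* use data guarded by semaphores */
        semaphore1.release();
        semaphore2.release();
    }
    // Fix: have threadB also acquire semaphore1 first, then semaphore2,
    // so a circular wait can never form.
}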

Page 22: Distributed computing presentation

The Moral: Be Careful!

Synchronization is hard:
Need to consider all possible shared state.
Must keep locks organized and use them consistently and correctly.

Knowing there are bugs may be tricky; fixing them can be even worse!

Keeping shared state to a minimum reduces total system complexity.

Page 23: Distributed computing presentation

Hadoop to the rescue!

Page 24: Distributed computing presentation

Prelude to MapReduce

We saw earlier that explicit parallelism/synchronization is hard.

Synchronization does not even answer questions specific to distributed computing, like how to move data from one machine to another.

Fortunately, MapReduce handles this for us.

Page 25: Distributed computing presentation

Prelude to MapReduce

MapReduce is a paradigm designed by Google for making a subset (albeit a large one) of distributed problems easier to code.
Automates data distribution & result aggregation.
Restricts the ways data can interact to eliminate locks (no shared state = no locks!)
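As a taste of what this buys you, here is a minimal sketch of the canonical word-count job written against Hadoop's org.apache.hadoop.mapreduce API (this example is mine, not from the slides; job-setup boilerplate is omitted). Each mapper emits (word, 1) pairs with no shared state, and the framework moves and groups the data between machines before the reducers sum the counts:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCount {

    // map: emit (word, 1) for every word in the line; no locks needed
    public static class TokenMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String word : line.toString().split("\\s+")) {
                if (!word.isEmpty()) {
                    context.write(new Text(word), ONE);
                }
            }
        }
    }

    // reduce: the framework hands us every count for one word; just sum them
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable c : counts) {
                sum += c.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }
}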

Page 26: Distributed computing presentation

Thank you so much “Clickable” for your sincere efforts.