
Page 1:

Distributed data processing on the Cloud – Lecture 2

Introduction to MapReduce

Satish Srirama

Some material adapted from slides by Jimmy Lin, Christophe Bisciglia, Aaron Kimball, & Sierra Michels-Slettvet, Google Distributed Computing Seminar, 2007 (licensed under Creative Commons Attribution 3.0 License)

Page 2:

Summary of last lecture

• Discussed the basics of cloud computing

• Introduced large-scale data and its processing

• Scaling information systems on the cloud

• Discussed the structure of the course curriculum

Page 3:

Outline

• MapReduce

• Hadoop

Page 4:

Economics of Cloud Providers – Failures

• Cloud computing providers bring a shift from high-reliability/availability servers to commodity servers

– At least one failure per day in a large datacenter

• Caveat: User software has to adapt to failures

• Solution: Replicate data and computation

– MapReduce & Distributed File System

• MapReduce = functional programming meets distributed processing on steroids

– Not a new idea… dates back to the 50’s (or even 30’s)

Page 5:

Functional Programming -> MapReduce

• Two important concepts in functional programming

– Map: do something to everything in a list

– Fold: combine results of a list in some way

Page 6:

Map

• Map is a higher-order function

• How map works:

– Function is applied to every element in a list

– Result is a new list

• map f lst: (’a->’b) -> (’a list) -> (’b list)

– Creates a new list by applying f to each element of the input list; returns output in order.

[Figure: f is applied independently to each element of the input list, producing the output list]

Page 7:

Fold

• Fold is also a higher-order function

• How fold works:

– Accumulator set to initial value

– Function applied to list element and the accumulator

– Result stored in the accumulator

– Repeated for every item in the list

– Result is the final value in the accumulator
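
This accumulator loop can be written out directly; below is a minimal Python sketch (the function name fold and the example call are illustrative, not taken from the slides):

def fold(f, initial, items):
    accumulator = initial                    # accumulator set to initial value
    for item in items:                       # repeated for every item in the list
        accumulator = f(accumulator, item)   # function applied to element and accumulator
    return accumulator                       # result is the final value in the accumulator

print(fold(lambda acc, x: acc + x, 0, [1, 2, 3, 4, 5]))  # 15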

Page 8:

Map/Fold in Action

• Simple map example:

(map (lambda (x) (* x x)) '(1 2 3 4 5)) → '(1 4 9 16 25)

• Fold examples:

(fold + 0 '(1 2 3 4 5)) → 15
(fold * 1 '(1 2 3 4 5)) → 120

• Sum of squares:

(define (sum-of-squares v)
  (fold + 0 (map (lambda (x) (* x x)) v)))

(sum-of-squares '(1 2 3 4 5)) → 55
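
For readers more used to Python than Scheme, the same three examples can be written with the built-in map and functools.reduce (a direct translation, shown only as an illustration):

from functools import reduce

squares = list(map(lambda x: x * x, [1, 2, 3, 4, 5]))          # [1, 4, 9, 16, 25]
total = reduce(lambda acc, x: acc + x, [1, 2, 3, 4, 5], 0)     # 15
product = reduce(lambda acc, x: acc * x, [1, 2, 3, 4, 5], 1)   # 120

def sum_of_squares(v):
    # fold (+) over the result of map (square)
    return reduce(lambda acc, x: acc + x, map(lambda y: y * y, v), 0)

print(sum_of_squares([1, 2, 3, 4, 5]))                          # 55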

Page 9:

Implicit Parallelism In map

• In a purely functional setting, elements of a list being computed by map cannot see the effects of the computations on other elements

• If the result does not depend on the order in which f is applied to the elements of the list, we can reorder or parallelize execution

• Let’s assume a long list of records: imagine if...

– We can parallelize map operations

– We have a mechanism for bringing map results back together in the fold operation

• This is the “secret” that MapReduce exploits

• Observations:

– No limit to map parallelization since maps are independent

– We can reorder folding if the fold function is commutative and associative
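
A small Python sketch of these observations, using a multiprocessing pool as a stand-in for parallel map workers (illustrative only; a real MapReduce runtime distributes this across machines):

from multiprocessing import Pool
from functools import reduce

def square(x):          # the map function: independent per element
    return x * x

if __name__ == "__main__":
    records = list(range(1, 6))
    with Pool(processes=4) as pool:
        mapped = pool.map(square, records)        # map calls run in parallel workers
    # because + is commutative and associative, the partial results
    # could be combined in any order; here we fold them sequentially
    print(reduce(lambda acc, x: acc + x, mapped, 0))   # 55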

Page 10:

Typical Large-Data Problem

• Iterate over a large number of records

• Extract something of interest from each

• Shuffle and sort intermediate results

• Aggregate intermediate results

• Generate final output

Key idea: provide a functional abstraction for these two operations – MapReduce

Page 11:

MapReduce

• Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, [v’]) → <k’’, v’’>*

– All values with the same key are sent to the same reducer

• The execution framework handles everything else…
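
To make the two signatures concrete, here is a toy single-process stand-in for the execution framework in Python (the helper name run_mapreduce and the example data are hypothetical, not part of any MapReduce API):

from collections import defaultdict

def run_mapreduce(map_fn, reduce_fn, inputs):
    intermediate = defaultdict(list)
    for k, v in inputs:                        # map phase
        for k2, v2 in map_fn(k, v):            # map(k, v) -> <k', v'>*
            intermediate[k2].append(v2)        # "shuffle": group values by key
    output = []
    for k2, values in sorted(intermediate.items()):    # reduce phase
        output.extend(reduce_fn(k2, values))           # reduce(k', [v']) -> <k'', v''>*
    return output

pairs = [(1, ("a", 1)), (2, ("b", 2)), (3, ("c", 3)), (4, ("c", 6)),
         (5, ("a", 5)), (6, ("c", 2)), (7, ("b", 7)), (8, ("c", 8))]
print(run_mapreduce(lambda k, v: [v], lambda k, vs: [(k, sum(vs))], pairs))
# [('a', 6), ('b', 9), ('c', 19)]

All values sharing a key end up in the same reduce call, which is exactly the guarantee stated above.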

Page 12:

MapReduce

[Figure: four mappers emit key-value pairs (a 1, b 2; c 3, c 6; a 5, c 2; b 7, c 8); a shuffle-and-sort step aggregates values by key (a → 1 5, b → 2 7, c → 2 3 6 8); three reducers then produce the outputs r1/s1, r2/s2, r3/s3.]

Page 13:

“Hello World” Example: Word Count

Map(String docid, String text):
    for each word w in text:
        Emit(w, 1);

Reduce(String term, Iterator<Int> values):
    int sum = 0;
    for each v in values:
        sum += v;
    Emit(term, sum);
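
The same word count written as ordinary Python, driven by an explicit in-memory "shuffle" so it can be run without a cluster (a sketch for illustration, not the Hadoop API):

from collections import defaultdict

def wc_map(docid, text):             # Map(docid, text)
    for w in text.split():
        yield (w, 1)                  # Emit(w, 1)

def wc_reduce(term, values):          # Reduce(term, values)
    return (term, sum(values))        # Emit(term, sum)

docs = [("d1", "to be or not to be"), ("d2", "to do")]
groups = defaultdict(list)            # stand-in for the framework's shuffle and sort
for docid, text in docs:
    for w, c in wc_map(docid, text):
        groups[w].append(c)
print([wc_reduce(t, vs) for t, vs in sorted(groups.items())])
# [('be', 2), ('do', 1), ('not', 1), ('or', 1), ('to', 3)]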

Page 14:

MapReduce

• Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, [v’]) → <k’’, v’’>*

– All values with the same key are sent to the same reducer

• The execution framework handles everything else…

What’s “everything else”?

Page 15:

MapReduce “Runtime”

• Handles scheduling

– Assigns workers to map and reduce tasks

• Handles “data distribution”

– Moves processes to data

• Handles synchronization

– Gathers, sorts, and shuffles intermediate data

• Handles errors and faults

– Detects worker failures and automatically restarts

• Handles speculative execution

– Detects “slow” workers and re-executes work

• Everything happens on top of a distributed FS (later)

Sounds simple, but many challenges!

Page 16:

MapReduce - extended

• Programmers specify two functions:

map (k, v) → <k’, v’>*

reduce (k’, [v’]) → <k’’, v’’>*

– All values with the same key are reduced together

• The execution framework handles everything else…

• Not quite… usually, programmers also specify:

partition (k’, number of partitions) → partition for k’

– Often a simple hash of the key, e.g., hash(k’) mod n

– Divides up key space for parallel reduce operations

combine (k’, v’) → <k’’, v’’>*

– Mini-reducers that run in memory after the map phase

– Used as an optimization to reduce network traffic
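
A minimal Python sketch of these two hooks (function names are illustrative; in Hadoop they are implemented as Partitioner and Combiner classes in Java):

def partition(key, num_partitions):
    # a simple hash of the key: hash(k') mod n
    # (Python's str hash is salted per process; a real partitioner uses a stable hash)
    return hash(key) % num_partitions

def combine(key, values):
    # mini-reduce over a single mapper's local output, to cut network traffic;
    # for word count the combiner can simply reuse the reducer's logic
    return (key, sum(values))

local_output = [("cloud", 1), ("cloud", 1), ("cloud", 1)]   # one mapper's emits
print(combine("cloud", [v for _, v in local_output]))       # ('cloud', 3)
print(partition("cloud", 4))                                # reducer index in 0..3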

Page 17:

[Figure: the same dataflow extended with combiners and partitioners. Each mapper's output first passes through a combine step (e.g., the mapper emitting c 3 and c 6 combines them into c 9), then a partition step decides which reducer each key goes to; shuffle-and-sort still aggregates values by key, so the reducer for c receives 2, 9, 8 instead of 2, 3, 6, 8.]

Page 18:

Two more details…

• Barrier between map and reduce phases

– But we can begin copying intermediate data earlier

• Keys arrive at each reducer in sorted order

– No enforced ordering across reducers

Page 19:

MapReduce Overall Architecture

[Figure, adapted from (Dean and Ghemawat, OSDI 2004): the user program (1) submits the job to the master, which (2) schedules map and reduce tasks onto workers. Map-phase workers (3) read the input splits (split 0–4) and (4) write intermediate files to their local disks; reduce-phase workers (5) remotely read the intermediate data and (6) write the final output files (output file 0, output file 1).]

Page 20:

MapReduce can refer to…

• The programming model

• The execution framework (aka “runtime”)

• The specific implementation

Usage is usually clear from context!

Page 21:

MapReduce Implementations

• Google has a proprietary implementation in C++

– Bindings in Java, Python

• Hadoop is an open-source implementation in Java

– Development led by Yahoo, used in production

– Now an Apache project

– Rapidly expanding software ecosystem

• Lots of custom research implementations

– For GPUs, cell processors, etc.

Page 22:

Cloud Computing Storage, or how do we get data to the workers?

[Figure: compute nodes fetching data over the network from network-attached storage (NAS) and a storage area network (SAN)]

What’s the problem here?

Page 23:

Distributed File System

• Don’t move data to workers… move workers to the data!

– Store data on the local disks of nodes in the cluster

– Start up the workers on the node that has the data local

• Why?

– Network bisection bandwidth is limited

– Not enough RAM to hold all the data in memory

– Disk access is slow, but disk throughput is reasonable

• A distributed file system is the answer

– GFS (Google File System) for Google’s MapReduce

– HDFS (Hadoop Distributed File System) for Hadoop

Page 24:

GFS: Assumptions

• Choose commodity hardware over “exotic” hardware

– Scale “out”, not “up”

• High component failure rates

– Inexpensive commodity components fail all the time

• “Modest” number of huge files

– Multi-gigabyte files are common, if not encouraged

• Files are write-once, mostly appended to

– Perhaps concurrently

• Large streaming reads over random access

– High sustained throughput over low latency

GFS slides adapted from material by (Ghemawat et al., SOSP 2003)

Page 25:

GFS: Design Decisions

• Files stored as chunks

– Fixed size (64MB)

• Reliability through replication

– Each chunk replicated across 3+ chunkservers

• Single master to coordinate access, keep metadata

– Simple centralized management

• No data caching

– Little benefit due to large datasets, streaming reads

• Simplify the API

– Push some of the issues onto the client (e.g., data layout)

HDFS = GFS clone (same basic ideas implemented in Java)
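
A rough sketch of the chunking-plus-replication idea in Python (the chunk size and replica count follow the slide, but the placement policy and server names are purely illustrative; real GFS/HDFS placement is rack-aware):

import random

CHUNK_SIZE = 64 * 1024 * 1024     # 64 MB chunks
REPLICAS = 3                      # each chunk on 3+ chunkservers

def place_chunks(file_size, chunkservers):
    num_chunks = -(-file_size // CHUNK_SIZE)          # ceiling division
    # the master records, per chunk, which chunkservers hold a replica
    return {chunk_id: random.sample(chunkservers, REPLICAS)
            for chunk_id in range(num_chunks)}

servers = ["chunkserver-%d" % i for i in range(8)]
print(place_chunks(200 * 1024 * 1024, servers))       # 4 chunks, 3 replicas each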

Page 26:

From GFS to HDFS

• Terminology differences:

– GFS master = Hadoop namenode

– GFS chunkservers = Hadoop datanodes

• Functional differences:

– No file appends in HDFS

– HDFS performance is (likely) slower

For the most part, we’ll use the Hadoop terminology…

Page 27:

HDFS Architecture

[Figure, adapted from (Ghemawat et al., SOSP 2003): an application calls the HDFS client, which sends (file name, block id) requests to the HDFS namenode and receives (block id, block location) answers; the namenode holds the file namespace (e.g., /foo/bar → block 3df2) and exchanges instructions and state reports with the datanodes. The client then requests (block id, byte range) directly from an HDFS datanode and receives the block data; each datanode stores blocks on its local Linux file system.]

Page 28:

Namenode Responsibilities

• Managing the file system namespace:

– Holds file/directory structure, metadata, file-to-block mapping, access permissions, etc.

• Coordinating file operations:

– Directs clients to datanodes for reads and writes

– No data is moved through the namenode

• Maintaining overall health:

– Periodic communication with the datanodes

– Block re-replication and rebalancing

– Garbage collection
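
A toy picture of the metadata involved, and of how a read is directed to datanodes without data moving through the namenode (hypothetical structures; real HDFS metadata is much richer):

# file namespace: path -> ordered list of block ids
namespace = {"/foo/bar": ["blk_3df2", "blk_9a41"]}

# block map: block id -> datanodes currently holding a replica
block_map = {"blk_3df2": ["datanode-1", "datanode-3", "datanode-4"],
             "blk_9a41": ["datanode-2", "datanode-3", "datanode-5"]}

def locate(path):
    # the namenode answers "which blocks, and where?"; the client then
    # reads block data directly from one of the listed datanodes
    return [(blk, block_map[blk]) for blk in namespace[path]]

print(locate("/foo/bar"))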

Page 29:

Hadoop Processing Model

• Create or allocate a cluster

• Put data onto the file system

– Data is split into blocks

– Replicated and stored in the cluster

• Run your job

– Copy Map code to the allocated nodes

• Move computation to data, not data to computation

– Gather output of Map, sort and partition on key

– Run Reduce tasks

• Results are stored in the HDFS

Page 30:

Some MapReduce Terminology w.r.t. Hadoop

• Job – A “full program”: an execution of a Mapper and Reducer across a data set

• Task – An execution of a Mapper or a Reducer on a data chunk

– Also known as Task-In-Progress (TIP)

• Task Attempt – A particular instance of an attempt to execute a task on a machine

Page 31:

Terminology example

• Running “Word Count” across 20 files is one job

• 20 files to be mapped imply 20 map tasks + some number of reduce tasks

• At least 20 map task attempts will be performed

– more if a machine crashes or is slow, etc.

Page 32:

Task Attempts

• A particular task will be attempted at least once, possibly more times if it crashes

– If the same input causes crashes over and over, that input will eventually be abandoned

• Multiple attempts at one task may occur in parallel with speculative execution turned on

– Task ID from TaskInProgress is not a unique identifier; don’t use it that way

Page 33:

Hadoop MapReduce Architecture: High Level

[Figure: a master node runs the namenode daemon and the jobtracker; MapReduce jobs are submitted by a client computer to the job submission node. Each slave node runs a datanode daemon on top of the local Linux file system and a tasktracker that runs task instances.]

Page 34:

MapReduce/GFS Summary

• Simple, but powerful programming model

• Scales to handle petabyte+ workloads

– Google: six hours and two minutes to sort 1PB (10 trillion 100-byte records) on 4,000 computers

– Yahoo!: 16.25 hours to sort 1PB on 3,800 computers

• Incremental performance improvement with more nodes

• Seamlessly handles failures, but possibly with performance penalties

Page 35:

Limitations with MapReduce V1

• Scalability

– Maximum Cluster Size – 4000 Nodes

– Maximum Concurrent Tasks – 40000

• Coarse synchronization in Job Tracker

– Single point of failure

– Failure kills all queued and running jobs

• Jobs need to be resubmitted by users

– Restart is very tricky due to complex state

• Problems with resource utilization

Page 36:

MapReduce NextGen aka YARN aka MRv2

• New architecture introduced in hadoop-0.23

• Divides two major functions of the JobTracker

– Resource management and job life-cycle management are divided into separate components

• An application is either a single job in the sense of classic MapReduce jobs or a DAG of such jobs

Page 37:

YARN Architecture

• ResourceManager:

– Arbitrates resources among all the applications in the system

– Has two main components: Scheduler and ApplicationsManager

• NodeManager:

– Per-machine slave

– Responsible for launching the applications’ containers and monitoring their resource usage

• ApplicationMaster:

– Negotiates appropriate resource containers from the Scheduler, tracking their status and monitoring progress

• Container:

– Unit of allocation incorporating resource elements such as memory, CPU, disk, network, etc.

– Executes a specific task of the application

– Similar to map/reduce slots in MRv1

Page 38:

Execution Sequence with YARN

• A client program submits the application

• ResourceManager allocates a specified container to start the ApplicationMaster

• ApplicationMaster, on boot-up, registers with ResourceManager

• ApplicationMaster negotiates with ResourceManager for appropriate resource containers

• On successful container allocations, ApplicationMaster contacts NodeManager to launch the container

• Application code is executed within the container; the container then reports the execution status back to the ApplicationMaster

• During execution, the client communicates directly with ApplicationMaster or ResourceManager to get status, progress updates etc.

• Once the application is complete, ApplicationMaster unregisters with ResourceManager and shuts down, freeing its own container process
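
The hand-offs in this sequence can be mocked up as a toy Python simulation (all class and method names below are illustrative stand-ins, not the real YARN client API):

class ResourceManager:
    def submit(self, app):                       # client submits the application
        print("RM: starting a container for the ApplicationMaster")
        return ApplicationMaster(app, self)
    def register(self, am):
        print("RM: ApplicationMaster registered")
    def allocate(self, n):                       # AM negotiates resource containers
        print("RM: granting %d containers" % n)
        return ["container-%d" % i for i in range(n)]

class NodeManager:
    def launch(self, container, task):           # AM asks the NM to launch containers
        print("NM: running %s in %s" % (task, container))

class ApplicationMaster:
    def __init__(self, app, rm):
        self.app, self.rm = app, rm
        rm.register(self)                        # registers with the RM on boot-up
    def run(self, nm):
        for c, task in zip(self.rm.allocate(len(self.app)), self.app):
            nm.launch(c, task)                   # application code runs in the containers
        print("AM: done, unregistering and shutting down")

rm, nm = ResourceManager(), NodeManager()
rm.submit(["map task", "reduce task"]).run(nm)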

Page 39:

This week in Lab

• Work with MapReduce jobs

• Try out WordCount example

– MapReduce in hadoop-2.x maintains API compatibility with the previous stable release (hadoop-1.x)

– MapReduce jobs still run unchanged on top of YARN with just a recompile

• Improving existing MapReduce examples

– Details will be provided in the lab exercises

Page 40:

Next Lecture

• Let us have a look at some MapReduce algorithms

Page 41:

References

• Hadoop wiki http://wiki.apache.org/hadoop/

• Hadoop MapReduce tutorial http://hadoop.apache.org/docs/r1.0.4/mapred_tutorial.html

• Jimmy Lin and Chris Dyer. Data-Intensive Text Processing with MapReduce. http://lintool.github.io/MapReduceAlgorithms/MapReduce-book-final.pdf

• White, Tom. Hadoop: the definitive guide. O'Reilly, 2012.

• J. Dean and S. Ghemawat, “MapReduce: Simplified Data Processing on Large Clusters”, OSDI'04: Sixth Symposium on Operating System Design and Implementation, Dec, 2004.

• Apache Hadoop YARN - https://hadoop.apache.org/docs/r2.7.2/hadoop-yarn/hadoop-yarn-site/YARN.html
