
Page 1:

A. Vaniachine
XXIV International Symposium on Nuclear Electronics & Computing
Varna, Bulgaria, 9-16 September 2013

Big Data Processing on the Grid: Future Research Directions

Page 2:

Big Data Processing on the Grid


Page 3:


A Lot Can Be Accomplished in 50 Years: Nuclear Energy Took 50 Years from Discovery to Use
– 1896: Becquerel discovered radioactivity
– 1951: A reactor at Argonne generated electricity for light bulbs


Page 4:


A Lot Has Happened in 14 Billion Years


[Figure: history of the Universe since the Big Bang, with the electroweak phase transition marked]

Everything is a remnant of the Big Bang, including the energy we use:
– Chemical energy: the scale is eV
  • Stored millions of years ago
– Nuclear energy: the scale is MeV, a million times higher than chemical
  • Stored billions of years ago
– Electroweak energy: the scale is 100 GeV, or 100,000 times higher than nuclear
  • Stored right after the Big Bang
– Can this energy be harnessed in some useful way?
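As a quick arithmetic check of the scales above (an illustrative sketch in Python, not part of the original slide; all values are order-of-magnitude):

    # Order-of-magnitude energy scales, in electron-volts (eV)
    chemical_ev = 1.0         # chemical bonds: ~1 eV
    nuclear_ev = 1.0e6        # nuclear binding: ~1 MeV
    electroweak_ev = 100.0e9  # electroweak scale: ~100 GeV

    print(nuclear_ev / chemical_ev)     # 1e6: a million times chemical
    print(electroweak_ev / nuclear_ev)  # 1e5: 100,000 times nuclear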

Page 5:


2012: Higgs Boson Discovery


JHEP 08 (2012) 098

Meta-stability: a prerequisite for energy use

Page 6:


Higgs Boson Study Makes LHC a Top Priority

European Strategy


US Snowmass Study

http://cds.cern.ch/record/1551933
http://science.energy.gov/~/media/hep/hepap/pdf/201309/Hadley_HEPAP_Intro_Sept_2013.pdf

Page 7:


The LHC Roadmap


Page 8:


Big Data

http://www.wired.com/magazine/2013/04/bigdata

[Chart: LHC RAW data per year]

In 2010 the LHC experiments produced 13 PB of data
– That rate outstripped any other scientific effort going on

Page 9:


Big Data


[Chart: WLCG data on the Grid]

LHC RAW data volumes are inflated by the storage of derived data products, by replication for safety and efficient access, and by the need to store even more simulated data than RAW data


Page 10:


Big Data


Scheduled LHC upgrades will increase RAW data-taking rates tenfold

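To make the scaling concrete, here is a back-of-the-envelope sketch in Python (the 13 PB/year and tenfold figures come from the slides above; the inflation factor standing in for derived, replicated, and simulated data is a purely hypothetical placeholder):

    # Projection of yearly LHC data volumes; illustrative assumptions only
    raw_2010_pb = 13.0     # PB of RAW data produced in 2010 (from the slides)
    upgrade_factor = 10.0  # tenfold rate increase from scheduled LHC upgrades
    inflation = 5.0        # hypothetical: derived + replicated + simulated data

    raw_upgraded_pb = raw_2010_pb * upgrade_factor
    print(f"RAW per year after upgrades: {raw_upgraded_pb:.0f} PB")                  # 130 PB
    print(f"On the Grid at x{inflation:.0f}: {raw_upgraded_pb * inflation:.0f} PB")  # 650 PB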

Page 11:


Big Data


A brute-force approach to scaling up Big Data processing on the Grid for LHC upgrade needs is not an option

Page 12:


Physics Facing Limits 

The demands on computing resources to accommodate Run 2 physics needs are increasing
– HEP now risks compromising physics because of a lack of computing resources
  • This has not been true for ~20 years

From I. Bird's presentation at the "HPC and super-computing workshop for Future Science Applications" (BNL, June 2013)

The limits are those of tolerable cost for storage and analysis. Tolerable cost is established in an explicit or implicit optimization of physics dollars for the entire program. The optimum rate of data to persistent storage depends on the capabilities of technology, the size and budget of the total project, and the physics lost by discarding data. There is no simple answer!

From the US Snowmass Study: https://indico.fnal.gov/getFile.py/access?contribId=342&sessionId=100&resId=0&materialId=1&confId=6890

Physics needs drive future research directions in Big Data processing on the Grid



Page 14:


US Big Data Research and Development Initiative

At the time of the "Big Data Research and Development Initiative" announcement, a $200 million investment in tools to handle the huge volumes of digital data needed to spur U.S. science and engineering discoveries, two examples of successful HEP technologies were already in place:
– Collaborative big data management ventures include the PanDA (Production and Distributed Analysis) Workload Management System and XRootD, high-performance, fault-tolerant software for fast, scalable access to data repositories of many kinds

Supported by the DOE Office of Advanced Scientific Computing Research, PanDA is now being generalized and packaged, as a Workload Management System already proven at extreme scales, for wider use by the Big Data community
– Progress on this project was reported by A. Klimentov earlier in this session


Page 15:


Synergistic Challenges

As HEP is facing the Big Data processing challenges ahead of other sciences, it is instructive to look for commonalities in the discovery process across the sciences
– In 2013 the Subcommittee of the US DOE Advanced Scientific Computing Advisory Committee prepared the Summary Report on Synergistic Challenges in Data-Intensive Science and Exascale Computing


Page 16:


Knowledge-Discovery Life-Cycle for Big Data: 1


Data may be generated by instruments, experiments, sensors, or supercomputers

Page 17:


Knowledge-Discovery Life-Cycle for Big Data: 2


(Re)organizing, processing, deriving subsets, reduction, visualization, query analytics, distributing, and other aspects

In LHC experiments, this includes common operations on and derivations from raw data. The output of data processing is used by thousands of scientists for knowledge discovery.

Page 18:


Knowledge-Discovery Life-Cycle for Big Data: 3


Although the discovery process can be quite specific to the scientific problem under consideration, repeated evaluations, what-if scenarios, predictive modeling, correlations, causality, and other mining operations at scale are common in this phase

Given the size and complexity of data and the need for both top-down and bottom-up discovery, scalable algorithms and software need to be deployed in this phase

Page 19:


Knowledge-Discovery Life-Cycle for Big Data: 4


Insights and discoveries from previous phases help determine new simulations, models, parameters, settings, and observations, thereby closing the loop

While this represents a common high-level approach to data-driven knowledge discovery, there can be important differences among different sciences as to how data is produced, consumed, stored, processed, and analyzed

Page 20:


Data-Intensive Science Workflow

The Summary Report identified an urgent need to simplify the workflow for Data-Intensive Science
– Analysis and visualization of increasingly larger-scale data sets will require integration of the best computational algorithms with the best interactive techniques and interfaces
– The workflow for data-intensive science is complicated by the need to simultaneously manage large volumes of data as well as large amounts of computation to analyze the data, and this complexity is increasing at an inexorable rate

These complications can greatly reduce the productivity of the domain scientist if the workflow is not simplified and made more flexible
– For example, the workflow should be able to transparently support decisions such as when to move data to computation or computation to data (a minimal cost-model sketch of this decision follows below)
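The sketch below illustrates that decision as a toy cost model in Python (the function name and all numbers are hypothetical; production workflow systems weigh many more factors, such as queue depth and storage availability):

    def ship_data_to_remote(data_gb, link_gb_per_s, local_time_s, remote_speedup):
        """True if moving the data to a faster remote site beats computing locally."""
        transfer_s = data_gb / link_gb_per_s      # time to move the data
        remote_s = local_time_s / remote_speedup  # faster execution elsewhere
        return transfer_s + remote_s < local_time_s

    # Example: 500 GB over a 10 GB/s link, a 3600 s local job, remote site 3x faster
    print(ship_data_to_remote(500, 10, 3600, 3.0))  # True: 50 + 1200 < 3600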


Page 21:


Lessons Learned

The distributed computing environment for the LHC has proved to be a formidable resource, giving scientists access to huge resources that are pooled worldwide and largely automatically managed
– However, the scale of operational effort required is burdensome for the HEP community, and will be hard to replicate in other science communities
  • Could the current HEP distributed environments be used as a distributed-systems laboratory to understand how more robust, self-healing, self-diagnosing systems could be created?

Indeed, Big Data processing on the Grid must tolerate a continuous stream of failures, errors, and faults
– Transient job failures on the Grid can be recovered by managed re-tries (see the sketch below)
  • However, workflow checkpointing at the level of a file or a job delays turnaround times
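A minimal sketch of managed re-tries in Python (submit_job and TransientError are hypothetical stand-ins, not a real Grid API):

    import random
    import time

    class TransientError(Exception):
        """A recoverable failure, e.g. a lost heartbeat or a transfer glitch."""

    def submit_job(job):
        # Stand-in for a real Grid submission; fails transiently 30% of the time
        if random.random() < 0.3:
            raise TransientError("lost heartbeat")
        return f"{job}: done"

    def run_with_retries(job, max_attempts=5, backoff_s=1.0):
        """Re-try transient failures with exponential backoff; re-raise when exhausted."""
        for attempt in range(1, max_attempts + 1):
            try:
                return submit_job(job)
            except TransientError:
                if attempt == max_attempts:
                    raise
                time.sleep(backoff_s * 2 ** (attempt - 1))

    print(run_with_retries("reco_file_042"))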

Advancements in reliability engineering provide a framework for a fundamental understanding of Big Data processing turnaround time
– Designing fault-tolerance strategies that minimize the duration of Big Data processing on the Grid is an active area of research
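As an illustrative reliability model (standard textbook probability, not taken from the talk): if each attempt at a unit of work takes time t, fails independently with probability p, and a failure is only detected at the end of the attempt, the number of attempts is geometrically distributed, so

    \mathrm{E}[\text{attempts}] = \frac{1}{1-p}, \qquad \mathrm{E}[T] = \frac{t}{1-p}

For example, p = 0.1 inflates the expected turnaround by about 11%; fault-tolerance strategies can attack either t (checkpointing at a finer grain than a file or a job) or p (fault avoidance), the trade-off noted above.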


Page 22:


Future Research Direction: Workflow Management

To significantly shorten the time needed to transform scientific data into actionable knowledge, the US DOE Advanced Scientific Computing Research office is preparing a call that will include:

From R. Carlson's presentation at the "HPC and super-computing workshop for Future Science Applications" (BNL, June 2013): https://indico.bnl.gov/materialDisplay.py?contribId=16&sessionId=8&materialId=slides&confId=612


Page 23:


Maximizing Physics Output through Modeling

In preparations for LHC data taking, future networking was perceived as a limit
– The MONARC model serves as an example of how to circumvent a resource limitation
  • WLCG implemented a hierarchical data flow maximizing reliable data transfers


Today networking is not a limit, and WLCG abandoned the hierarchy
– There are no fundamental technical barriers to transporting 10x more traffic within 4 years

In contrast, future CPU and storage are perceived as a limit
– HEP now risks compromising physics because of a lack of computing resources
  • As in the days of MONARC, HEP needs comprehensive modeling capabilities that would enable maximizing physics output within the resource constraints (a toy sketch of such an optimization follows below)

Picture by I. Bird
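As a toy illustration of such modeling capabilities (all task names, values, and budgets below are hypothetical; real modeling would be far more detailed), the problem can be posed as choosing which processing tasks to run within CPU and storage budgets so as to maximize physics value:

    from itertools import combinations

    # (task, physics value, CPU cost, storage cost) -- hypothetical units
    tasks = [("higgs", 10, 6, 4), ("top", 7, 4, 3),
             ("exotics", 5, 3, 4), ("b-physics", 4, 2, 2)]
    CPU_BUDGET, STORAGE_BUDGET = 10, 8

    # Exhaustive search over all task subsets that fit both budgets
    feasible = (s for r in range(len(tasks) + 1)
                for s in combinations(tasks, r)
                if sum(t[2] for t in s) <= CPU_BUDGET
                and sum(t[3] for t in s) <= STORAGE_BUDGET)
    best = max(feasible, key=lambda s: sum(t[1] for t in s))
    print([t[0] for t in best], "physics value:", sum(t[1] for t in best))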

Page 24:


Future Research Direction: Workflow Modeling

From R. Carlson's presentation at the "HPC and super-computing workshop for Future Science Applications" (BNL, June 2013): https://indico.bnl.gov/materialDisplay.py?contribId=16&sessionId=8&materialId=slides&confId=612


Page 25:


Conclusions

Study of Higgs boson properties is a top priority for LHC physics
– LHC upgrades increase demands for computing resources beyond flat budgets
  • HEP now risks compromising physics because of a lack of computing resources

A comprehensive end-to-end solution for the composition and execution of Big Data processing workflows within given CPU and storage constraints is necessary
– Future research in workflow management and modeling is necessary to provide the tools for maximizing scientific output within given resource constraints

By bringing Nuclear Electronics and Computing experts together, the NEC Symposium continues to be in a unique position to promote HEP progress, as the solution requires optimization cross-cutting the Trigger and Computing domains


Page 26:

Extra Slides
