Page 1

Distributed Computing Beyond The Grid

Alexei Klimentov

Brookhaven National Laboratory

Grid2012 Conference. Dubna

5th International Conference "Distributed Computing and Grid technologies in Science and Education"

Page 2

Main topics

• ATLAS experiment

• Grid and the 2012 discovery
  – From distributed computing to the Grid
  – MONARC model and computing model evolution
  – Progress in networking
  – Evolution of the data placement model

• From planned replicas to global data access

• Grids and Clouds
  – Cloud computing and virtualization

• From Internet to ….


Page 3

ATLAS

• A Toroidal LHC ApparatuS
  – ATLAS is one of the six particle detector experiments at the Large Hadron Collider (LHC) at CERN, and one of the two general-purpose detectors
  – The project involves more than 3000 scientists and engineers in ~40 countries
  – The ATLAS detector is 44 meters long and 25 meters in diameter, and weighs about 7,000 tons. It is about half as big as the Notre Dame cathedral in Paris and weighs the same as the Eiffel Tower or a hundred 747 jets
  – The detector has 150 million sensors delivering data
  – The collaboration is huge and highly distributed


Theory:
We use experiments to inquire about what "reality" (nature) does.
"The goal is to understand in the most general; that's usually also the simplest." – A. Eddington
We intend to fill this gap.

[Figures: ATLAS; the ATLAS Collaboration]

Page 4

Proton-Proton Collisions at the LHC

→ collisions every 50 ns = 20 MHz crossing rate

1.6 × 10^11 protons per bunch at L_pk ≈ 0.7 × 10^34 cm^-2 s^-1 → ≈ 35 pp interactions per crossing (pile-up)

→ ≈ 10^9 pp interactions per second!

In each collision ≈ 1600 charged particles are produced. ATLAS RAW event size: 1.2 MB; ATLAS reconstructed event size: 1.9 MB.
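As a quick cross-check, the per-second rate follows directly from the two numbers above:

    20 MHz crossings × ≈35 pp interactions per crossing ≈ 7 × 10^8 ≈ 10^9 pp interactions per second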

⇒ An enormous challenge for the detectors and for data collection, storage and analysis

Research in High Energy Physics cannot be done without computers → GRID

Candidate Higgs decay to four electrons recorded by ATLAS in 2012.

Page 5

The Complexity of HEP Computing

• Research in High Energy Physics cannot be done without computers:
  – Enormous data volume
  – The complexity of the data processing algorithms
  – The statistical nature of data analysis
  – Several (re)processing and Monte-Carlo simulation campaigns per year
⇒ Requires sufficient computing capacity

• The computing challenge posed by the LHC:
  – Very large international collaborations
  – Petabytes of data to be treated and analyzed

The volume of data, and the need to share it across the collaboration, is the key issue for LHC data analysis.

• ATLAS computing requirements over time:
  • 1995: 100 TB disk space, 10^7 MIPS (Computing Technical Proposal)
  • 2001: 1,900 TB, 7×10^7 MIPS (LHC Computing Review)
  • 2007: 70,000 TB, 55×10^7 MIPS (Technical Design Report)
  • 2010: LHC START
  • 2011: 83,000 TB, 61×10^7 MIPS


ATLAS data at CERN, 2010 – Jun 2012: 15+ PBytes

From the start it was clear that no single centre could provide ALL the computing, even for one LHC experiment:
• Buildings, power, cooling, money…


Page 6

MONARC Model and LHC Grid

• In 1998 the MONARC project defined the tiered architecture later deployed as the LHC Computing Grid – a distributed model
• Integrate existing centres and department clusters, recognising that funding is easier if the equipment is installed at home
• Devolution of control
  – Local physics groups have more influence over how local resources are used and how the service evolves
• A multi-tier model
  – Enormous data volumes looked after by a few (expensive) computing centres
  – Network costs favour regional data access
  – A simple model that HEP could develop and get into production, ready for data in 2005
• Around the same time, I. Foster and C. Kesselman proposed a general solution to distributed computing


Hierarchy in data placement


Page 7

WLCG


Tier-0 (CERN), 15% of resources:
• Data recording
• Initial data reconstruction
• Data distribution

Tier-1 (11 centres), 40%:
• Permanent storage
• Re-processing
• Analysis
• Connected by direct 10 Gb/s network links

Tier-2 (~200 centres), 45%:
• Simulation
• End-user analysis


Page 8

See D. Duellmann's talk, "Storage Strategy and Cloud Storage Evaluations", Grid2012, July 18.

LHC Grid model over time
• One of the major objectives reached was to enable physicists from all sites, large and small, to access and analyse LHC data
• Masking the Grid's complexity from users while still delivering full functionality was one of the greatest challenges
  – PanDA* (the ATLAS Production and Analysis workload management system) lets users run analysis in the same way on local clusters and on the Grid
• Evolution of the data placement model – from planned replicas to dynamic data placement
  – The mantra "jobs go to data" served well in 2010, but in 2011/12 ATLAS moved to a dynamic data placement concept
    • Additional copies of the data are made
    • Unused copies are cleaned up
• Data storage evolution


• Role separation and decoupling
  – Separating archive and disk activities
  – Separating Tier-0 and analysis storage; analysis storage can evolve more rapidly without posing risks to high-priority Tier-0 tasks
• CERN implementation – EOS
  – Mostly to eliminate CASTOR constraints
  – EOS is high-performance, highly scalable redundant disk storage based on the xrootd framework
  – Intensively (stress-)tested by ATLAS in 2010/11; in production since 2011

*) See K. De's talk, "Status and Evolution of the ATLAS Workload Management System PanDA", later today.

[Plot: ATLAS EOS space occupancy, in TB]

Page 9

Progress in Networking


• LHCONE – an initiative for Tier-2 networking
  – Network providers, working jointly with the experiments, have proposed a new network model for supporting the LHC experiments, known as the LHC Open Network Environment (LHCONE)
  – The goal of LHCONE is to provide a collection of access locations that are effectively entry points into a network reserved for the LHC Tier-1/2/3 sites
  – LHCONE will complement LHCOPN

• The network:
  – Is as important as site infrastructure
  – Is the key to optimizing storage usage and the brokering of jobs to sites
  – The WAN is very stable and its performance is good
    • This allows migration from a hierarchical to a mesh model
• Production and analysis workload management systems will use network status/performance metrics to send jobs/data to sites
• Progress in networking →
  – Remote data access via the WAN is a reality
  – It has allowed the MONARC model to be relaxed

LHCOPN: LHC Optical Private Network. LHCONE: LHC Open Network Environment.

Page 10

Globalized Data Access
• Canonical HEP strategy: "jobs go to data"
  – Data are partitioned between sites
    • Some sites are more important (and get more important data) than others
    • Planned replicas
      » A dataset (a collection of files produced under the same conditions and with the same software) is the unit of replication
      » Dataset sizes reach several TBytes
  – Data and replica catalogs are needed to broker jobs (a toy brokering sketch follows below)
  – An analysis job requiring data from several sites triggers either data replication and consolidation at one site, or splitting into several jobs running at all those sites
    • A data analysis job must wait for all its data to be present at the site
      » The situation can easily degrade into a complex n-to-m matching problem
  – CERN and the HEP community pioneered the concept of a "Data Grid", in which data locality is the main feature
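To make the catalog-driven brokering concrete, here is a minimal illustrative sketch in Python. It is not PanDA's actual algorithm; the catalog structure, site names and load metric are invented for the example.

    # Toy "jobs go to data" broker: the replica catalog maps dataset -> sites holding it.
    def broker(job_datasets, replicas, site_load):
        # Sites that already hold every input dataset
        complete = [s for s in site_load
                    if all(s in replicas.get(ds, set()) for ds in job_datasets)]
        if complete:
            # Least-loaded complete site: the job goes to the data, no transfers
            return min(complete, key=site_load.get), []
        # Otherwise consolidate: pick the site holding most of the inputs and
        # schedule replication of the missing datasets to it
        best = max(site_load,
                   key=lambda s: sum(s in replicas.get(ds, set()) for ds in job_datasets))
        missing = [ds for ds in job_datasets if best not in replicas.get(ds, set())]
        return best, missing  # the job waits until the missing datasets arrive

    # Example with hypothetical names:
    replicas = {"data12.DS1": {"BNL", "CERN"}, "mc12.DS2": {"BNL"}}
    site, to_copy = broker(["data12.DS1", "mc12.DS2"], replicas,
                           {"BNL": 0.7, "CERN": 0.2})
    # -> ("BNL", []): BNL already holds both datasets, so the job runs there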

• The data popularity concept
  – Used to decide when the number of replicas of a sample needs to be adjusted up or down, and to replicate or clean up accordingly (a minimal policy sketch follows below)
    • Dynamic storage usage
    • Analysis job waiting times decreased
  – But we still incur extra transfers to increase the number of dataset replicas
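A minimal sketch of such a popularity-driven policy, again in Python; the thresholds and the weekly window are invented for illustration, not the actual ATLAS tuning.

    # Derive the desired replica count of a dataset from its recent popularity.
    def desired_replicas(weekly_accesses, current, lo=1, hi=10):
        if weekly_accesses > 100 and current < hi:
            return current + 1   # hot dataset: add a copy to spread the load
        if weekly_accesses == 0 and current > lo:
            return current - 1   # unused copy: candidate for clean-up
        return current           # popularity stable: leave placement alone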

• Directly accessing data ATLAS-wide (world-wide in the ideal case) could reduce the need for extra replicas and enhance the performance and stability of the system


Page 11

Storage Federation

• "Storage federation"
  – Provides new access modes and redundancy
• Jobs access data on shared storage resources via the WAN (a minimal WAN-read sketch appears at the end of this page)
  – Storage resources are shared
• And move to file-level and (even) event-level caching
  – Do not replicate a dataset
  – Do not cache a dataset
• Being examined in the Tier-1/Tier-2 context and also for off-Grid Tier-3s


Federated ATLAS XROOTD (FAX) Deployment
• Since Sep 2011, ~10 sites report to the global federation (US ATLAS Computing project)
• Performance studies were conducted with various caching options
• Adoption depends on decent WAN performance being achievable
• Subject the current set of sites to regular testing at significant analysis-job scale
• Monitoring of I/O
• Evaluation of file caching
  – With the target of event-level caching
• Tests are being extended from regional to global scope
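For flavour, the WAN-read sketch promised above: opening a file by name at a federation redirector and reading a fragment over the WAN, using the XRootD Python bindings. The redirector host and file path are placeholders, not real FAX endpoints.

    # Read a slice of a remote file through a federation redirector over the WAN.
    from XRootD import client
    from XRootD.client.flags import OpenFlags

    f = client.File()
    status, _ = f.open("root://fax-redirector.example.org//atlas/data/file.root",
                       OpenFlags.READ)
    if status.ok:
        status, first_kb = f.read(offset=0, size=1024)  # partial read, no full copy
        f.close()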


Page 12

CERN 2012 Discovery (and the Grid)
• On 4 July 2012 the ATLAS experiment presented updated results on the search for the Higgs boson. Fabiola Gianotti, ATLAS spokesperson:
  – "We observe in our data clear signs of a new particle, at the level of 5 sigma, in the mass region around 126 GeV… It would have been impossible to release physics results so quickly without outstanding performance of the Grid"
• Available resources were fully used/stressed
• A very effective and flexible computing model and operation → accommodating high trigger rates and pile-up, intense MC simulation, and analysis demands from world-wide users


[Plots:
• Apr 1 – Jul 12, 2012, completed Grid jobs (analysis + MC production): ~830K completed ATLAS Grid jobs per day on average
• Apr 1 – Jul 4, 2012, data transfer throughput (MB/s), all ATLAS sites: up to 6 GB/s week average
• ATLAS 2012 dataset transfer times: data are available for physics analysis in ~5 h]

ATLAS Distributed Computing on the Grid: 10 Tier-1s + CERN + ~70 Tier-2s + … (more than 80 production sites)

Page 13

In the meantime…

• While we were developing the Grid, the rest of the world had other ideas.


[Timeline graphic: 1996, 1998, 2004, 2006 – Amazon EC2]

"The external world of computing is changing now as fast as it ever has and should open paths to knowledge in physics. HEP needs to be ready for new technical challenges posed both by our research demands and by external developments."
– Glen Crawford, DoE HEP office, introduction to CHEP 2012


Page 14

Balancing between stability and innovation

• The ATLAS Distributed Computing infrastructure is working. ATLAS Computing faces challenges ahead as LHC performance ramps up for the remainder of 2012 data taking
  – Our experience provides confidence that future challenges will be handled without compromising physics results
• Despite popular opinion, the Grid is serving us very well
  – ~1,500 ATLAS users process petabytes of data with billions of jobs
• …but we are starting to hit some limits:
  – Database scalability, CPU resources, storage utilization
• We also need to learn lessons and watch what others are doing
• …probably it is time for a check-up
  – Although there is no universal recipe


See T. Wenaus's talk, "Technical Evolution in LHC Computing", Grid2012, July 18.

Page 15

Grids and Clouds: Friends or Foes?


Page 16

Cloud Computing and Grid

• For approximately one decade Grid computing has been hailed by many as “the next big thing”

• Cloud computing is increasingly gaining popularity and has become another buzzword (like Web 2.0)
  – We are starting to compute on centralized facilities operated by third-party compute and storage utilities
  – The idea is not new: in the early sixties, computing pioneers like John McCarthy predicted that "computation may someday be organized as a public utility"

• The Cloud and Grid computing visions are the same:
  – To reduce the cost of computing, increase reliability, and increase flexibility by transforming computers from something that we buy and operate ourselves into something that is operated by a third party

• But things are different than they were 10 years ago
  • We have experience with LHC data processing and analysis
  • We need to analyze petabytes of LHC data
  • We have found that it is quite expensive to operate commodity clusters
  • And Amazon, Google, Microsoft, … have created real commercial large-scale systems containing hundreds of thousands of computers

• There is a long list of cloud computing definitions:
  "A large-scale distributed computing paradigm that is driven by economies of scale, in which a pool of abstracted, virtualized, dynamically-scalable, managed computing power, storage, platforms, and services are delivered on demand to external customers over the Internet." – I. Foster et al.


Page 17

Relationship of Clouds with other domains

• Foster’s definition of Cloud computing overlaps with many existing technologies, such as Grid Computing, Utility Computing and Distributed Computing in general.

Grid2012 Conference. Dubna

• Web 2.0 covers almost the whole spectrum of service-oriented applications
• Cloud Computing lies on the large-scale side
• Supercomputing and cluster computing have been more focused on traditional non-service applications
• The Grid computing vision (and definition) is evolving with time, and the fathers of the Grid see it slightly differently than they did a decade ago
• Grid computing overlaps with all these fields; it is generally considered of lesser scale than supercomputers and Clouds


I. Foster et al., "Cloud Computing and Grid Computing 360-Degree Compared".

Page 18

Cloud Computing and HEP
• Pioneering work by the Melbourne University group (M. Sevior et al., ~2009): a commercial cloud was used to run Monte-Carlo simulation for the BELLE experiment; around the same time the ATLAS Cloud Computing R&D project was started


• "Clouds" in ATLAS Distributed Computing
  – Integrated with the production system (PanDA*)
  – Transparent to end-users


[Plots: ATLAS jobs running at the IAAS CA site; monitoring page for ATLAS Clouds]

Page 19

ATLAS Cloud Computing R&D

• ATLAS Cloud Computing R&D Project

– Goal: how can we integrate cloud resources with our current Grid resources?

• Data processing and workload management
  – Production (PanDA) queues in the cloud
    • Centrally managed; non-trivial deployment but scalable
    • Benefits ATLAS and sites; transparent to users
  – Tier-3 analysis clusters: instant cloud sites
    • Institute managed; low/medium complexity
  – Personal analysis queue: one click, run my jobs
    • User managed; low complexity (almost transparent)
• Data storage
  – Short-term data caching to accelerate the data processing use cases above
    • Transient data
  – Object storage and archival in the cloud
    • Integrated with the data management system

EFFICIENCY, ELASTICITY


Page 20

Helix Nebula: The Science Cloud
• A partnership between European companies and research organizations (CERN, EMBL, ESA)
• Establish a sustainable European cloud computing infrastructure to provide stable computing capacity and services that elastically meet demand
• Pilot phase: Proof of Concept deployments on 3 commercial cloud providers (ATOS, CloudSigma, T-Systems)

ATLAS has been chosen as one of the Helix Nebula flagships to make Proof of Concept deployments on the three commercial cloud providers.

[Figure: ATLAS setup, 2012/14]


Page 21

ATLAS Cloud Computing and Commercial Cloud Providers


• During this pilot phase, ATLAS Distributed Computing integrated Helix Nebula cloud resources into the PanDA workload management system, tested the cloud sites in the same way as any standard grid resource, and finally ran Monte-Carlo simulation jobs on several hundred cores in the cloud.

• All deployments in the Proof of Concept phase have been successful and have helped identify the future directions for Helix Nebula:
  • Agree on a common cloud model
  • Provide a common interface
  • Understand a common model for cloud storage
  • Involve other organizations or experiments
  • Understand costs, SLAs, legal constraints…


[Plot: ATLAS jobs in Helix clouds]

Page 22

ATLAS Cloud Computing. Achievements

• We have already seen some good achievements:
  – Production and analysis queues have been ported to the cloud
    • Enabling users to access extra computing resources on demand
      – From private and commercial cloud providers
      » PanDA submission is transparent for users
    • Users can access new cloud resources with minimal changes to their analysis workflow on grid sites
  – Orchestrators for dynamic provisioning (i.e. adjusting the size of the cloud to existing demand) have been implemented, and one is already in use for PanDA queues (a toy provisioning loop is sketched below)
  – Different storage options were evaluated (EBS, S3), as well as an xrootd storage cluster in the cloud
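As an illustration of what such an orchestrator does, here is a toy provisioning loop in Python. The cloud and queue interfaces (boot_vm, terminate, queued_jobs, idle_vms, running_vms) are hypothetical stand-ins, not a real PanDA or cloud API, and the target ratio is invented.

    import time

    JOBS_PER_VM = 4  # invented target ratio of queued jobs to worker VMs

    def orchestrate(cloud, queue):
        """Grow the VM pool when jobs queue up; shrink it when workers sit idle."""
        while True:
            wanted = max(1, queue.queued_jobs() // JOBS_PER_VM)
            running = cloud.running_vms()
            if wanted > len(running):
                for _ in range(wanted - len(running)):
                    cloud.boot_vm(image="atlas-worker")   # scale out with demand
            else:
                for vm in cloud.idle_vms()[:len(running) - wanted]:
                    cloud.terminate(vm)                   # scale in when idle
            time.sleep(300)  # re-evaluate every five minutes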


Page 23

Virtualization

• Virtualization technology offers an opportunity to decouple the infrastructure, operating system and experiment software life-cycles
• The concept is not new in information technology, and it can be contended that the whole evolution of computing machines has been accompanied by a process of virtualization intended to offer a friendly and functional interface to the underlying hardware and software layers
  – The performance penalty has been a known issue for decades
• It was reborn in the age of Grids and Clouds. Virtualization has become an indispensable ingredient of almost every Cloud; the most obvious reasons are abstraction and encapsulation
• Hardware factors also favour virtualization:
  – Many more cores per processor
  – The need to share processors between applications
  – AMD and Intel have been introducing hardware support for virtualization
• One interesting application is a virtual Grid, in which a pool of virtual machines is deployed on top of physical hardware resources
• Virtualization could also be chosen for HEP data preservation


[Image: "RIP CERNVM, June 1996"]


Page 24

Cloud Computing. Summary

• Our vision of the future is of Grid and Cloud computing as fully complementary technologies that will coexist and cooperate at different levels of abstraction in e-infrastructures
• We are planning to incorporate virtualization and Cloud computing to enhance ATLAS Distributed Computing
• Data processing
  – Many activities are reaching a point where we can start getting feedback from users. We should:
    • Determine what we can deliver in production
    • Start focusing and eliminate options
  – Improve automation and monitoring
  – We still suffer from a lack of standardization among providers
• Cloud storage
  – This is the hard part
  – Looking forward to good progress in caching (xrootd in the cloud)
  – Some "free" S3 endpoints are just coming online, so effective R&D is only starting now (a minimal endpoint check is sketched at the end of this page)
• We believe there will finally be a Grid of Clouds, integrated with the LHC Computing Grid as we know it now

  – Right now we have a Grid of Grids (LCG, NorduGrid, OSG)
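The S3 R&D mentioned above typically starts with a simple functional check of an endpoint. Here is a minimal sketch using the present-day boto3 library; the endpoint URL, bucket name and credentials are placeholders, not a real ATLAS service.

    import boto3

    # Point the standard S3 client at a non-AWS, S3-compatible endpoint.
    s3 = boto3.client("s3",
                      endpoint_url="https://s3.example-cloud.org",  # placeholder
                      aws_access_key_id="ACCESS_KEY",
                      aws_secret_access_key="SECRET_KEY")

    # Round-trip a small object to verify PUT/GET through the endpoint.
    s3.put_object(Bucket="atlas-rnd", Key="test/probe.dat", Body=b"payload")
    obj = s3.get_object(Bucket="atlas-rnd", Key="test/probe.dat")
    assert obj["Body"].read() == b"payload"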

Page 25

Outlook

• “It would have been impossible to release physics results so quickly without outstanding performance of the Grid”

– Fabiola Gianotti, Jul 4, 2012

• There is no one technology that fits all. There is no one-stop solution.

• ATLAS Distributed Computing is pioneering a Cloud R&D project and is actively evaluating storage federation solutions. ATLAS was the first LHC experiment to implement data popularity and dynamic data placement; we now need to go forward to file- and event-level caching. Distributed computing resources will be used more dynamically and flexibly, which will make more efficient use of them.

• Grid and Cloud Computing are fully complementary technologies that will coexist and cooperate at different levels of abstraction in e-infrastructures

• The evolution of virtualization will ease the integration of Clouds and the Grid.

• HEP data placement is moving to data caching and data access via WAN


Page 26

From Internet to Gutenberg

• When Hermes, the alleged inventor of writing, presented his invention to the Pharaoh Thamus, he praised his new technique, which was supposed to allow human beings to remember what they would otherwise forget.
• The XX century (TV, radio, telephone…) brought another culture: people watch the whole world in the form of images, which was expected to involve a decline of literacy.
• The computer screen is an ideal book, on which one reads about the world in the form of words and pages. If teenagers want to program their own home computer, they must know, or learn, logical procedures and algorithms, and must type words and numbers on a keyboard at great speed. In this sense one can say that the computer has made us return to a Gutenberg Galaxy.
• People who spend their nights carrying on unending Internet conversations are principally dealing with words.

From a lecture by Umberto Eco (Italian philosopher and novelist, author of "Il nome della rosa")


Page 27

Summary

• The last decade stimulated High Energy Physics to organize computing in a widely distributed way
  – Active participation in the LHC Grid service gives the institute (not just the physicist) a continuing and key role in the data analysis, which is where the physics discovery happens
  – It encourages novel approaches to analysis… and to the provision of computing resources
• One of the major objectives reached was to enable physicists from all sites, large and small, to access and analyse LHC data


Page 28

Acknowledgements

• Many thanks to my colleagues F. Barreiro, I. Bird, J. Boyd, R. Brun, P. Buncic, F. Carminati, K. De, D. Duellmann, A. Filipcic, V. Fine, I. Fisk, R. Gardner, J. Iven, S. Jezequel, A. Hanushevsky, L. Robertson, M. Sevior, J. Shiers, D. van der Ster, H. von der Schmitt, I. Ueda, A. Vaniachine, T. Wenaus, and many, many others for materials used in this talk.
