GridLab Conference, Zakopane, Poland, September 13, 2002
CrossGrid: Interactive Applications, Tool
Environment, New Grid Services, and Testbed
Marian Bubak
Institute of Computer Science & ACC CYFRONET
AGH, Cracow, Poland
www.eu-crossgrid.org
Overview
– Applications and their requirements
– X# architecture
– Tools for X# applications development
– New grid services
– Structure of the X# Project
– Status and future
CrossGrid in a Nutshell
Interactive and Data Intensive Applications:
– Interactive simulation and visualization of a biomedical system
– Flooding crisis team support
– Distributed data analysis in HEP
– Weather forecast and air pollution modeling
Grid Application Programming Environment:
– MPI code debugging and verification
– Metrics and benchmarks
– Interactive and semiautomatic performance evaluation tools
Application-specific support: Grid Visualization Kernel, Data Mining
New CrossGrid Services:
– Portals and roaming access
– Grid resource management
– Grid monitoring
– Optimization of data access
Built on DataGrid services, HLA, Globus middleware, and the fabric layer.
Biomedical Application
– Input: 3-D model of arteries
– Simulation: Lattice-Boltzmann (LB) simulation of blood flow
– Results: in a virtual reality
– User: analyses results in near real-time, interacts, changes the structure of arteries
VR-Interaction
Steering in the Biomedical Application
CT / MRI scan → Segmentation → Medical DB → LB flow simulation → visualization/interaction front-ends (VE, WD, PC, PDA) → History DB
Data rates: about 10 simulations/day, 60 GB, 20 MB/s
Modules of the Biomedical Application
1. Medical scanners – data acquisition system
2. Software for segmentation – to get 3-D images
3. Database with medical images and metadata
4. Blood flow simulator with interaction capability
5. History database
6. Visualization for several interactive 3-D platforms
7. Interactive measurement module
8. Interaction module
9. User interface for coupling visualization, simulation, and steering
Interactive Steering in the Biomedical Application
CT / MRI scan → Segmentation → Medical DB → LB flow simulation → visualization/interaction front-ends (VE, WD, PC, PDA) → History DB
The user can adjust simulation parameters while the simulation is running.
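The steering idea above, a running simulation that picks up parameter changes between time steps, can be sketched as follows. The parameter names, the queue-based interface, and the toy update rule are all illustrative assumptions, not the CrossGrid implementation:

```python
import queue

def steered_simulation(steps, updates, on_frame):
    """Toy time-stepping loop that applies parameter updates between steps.

    `updates` is a queue.Queue of (name, value) pairs fed by the user
    interface; `on_frame` receives the state after every step, standing in
    for the visualization component.
    """
    params = {"viscosity": 0.1, "inflow": 1.0}
    state = 0.0
    for step in range(steps):
        # Drain any steering commands that arrived since the last step.
        while True:
            try:
                name, value = updates.get_nowait()
            except queue.Empty:
                break
            params[name] = value
        # Stand-in for one simulation step: relax the state toward the inflow.
        state += params["viscosity"] * (params["inflow"] - state)
        on_frame(step, state, dict(params))
    return state

frames = []
updates = queue.Queue()
updates.put(("inflow", 2.0))  # the user changes a parameter "mid-run"
final = steered_simulation(5, updates, lambda s, x, p: frames.append((s, x, p)))
```

In a real deployment the queue would be fed over the network by the interaction module while the solver runs on a remote cluster.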
Biomedical Application Use Case (1/3)
– Obtaining an MRI scan for the patient
– Image segmentation (clear picture of important blood vessels, location of aneurysms and blockages)
– Generation of a computational mesh for an LB simulation
– Start of a simulation of normal blood flow in the vessels
CT / MRI scan → Segmentation → Medical DB → LB flow simulation
Biomedical Application Use Case (2/3)
– Generation of alternative computational meshes (several bypass designs) based on results from the previous step
– Allocation of appropriate Grid resources (one cluster for each computational mesh)
– Initialization of the blood flow simulations for the bypasses
• The physician can monitor the progress of the simulations through his portal
• Automatic completion notification (e.g., through SMS messages)
Biomedical Application Use Case (3/3)
– Online presentation of simulation results via a 3D environment
– Adding small modifications to the proposed structure (e.g., changes in angles or positions)
– Immediate initiation of the resulting changes in the blood flow
• The progress of the simulation and the estimated time of convergence should be available for inspection.
LB flow simulation → visualization/interaction front-ends (VE, WD, PC, PDA)
Asynchronous Execution of Biomedical Application
Flooding Crisis Team Support
Data sources:
– surface automatic meteorological and hydrological stations
– systems for acquisition and processing of satellite information
– meteorological radars
– storage systems and databases
External sources of information: global and regional centers (GTS), EUMETSAT and NOAA, hydrological services of other countries
Grid infrastructure: high-performance computers running meteorological, hydrological, and hydraulic models
Users:
– flood crisis teams (meteorologists, hydrologists, hydraulic engineers)
– river authorities, energy, insurance companies, navigation
– media, public
Cascade of Flood Simulations
Data sources → meteorological simulations → hydrological simulations → hydraulic simulations → output visualization → users
Basic Characteristics of Flood Simulation
– Meteorological
• intensive simulation (1.5 h/simulation) – maybe HPC
• large input/output data sets (50 MB~150 MB/event)
• high availability of resources (24/365)
– Hydrological
• parametric simulations – HTC
• each sub-catchment may require different models (heterogeneous simulation)
– Hydraulic
• many 1-D simulations – HTC
• 2-D hydraulic simulations need HPC
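The hydrological stage is HTC-style parametric computing: many independent runs, possibly with a different model per sub-catchment. A minimal fan-out sketch follows; the model assignments, coefficients, and runoff formula are invented for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical model assignment; the slides only say each sub-catchment
# may require a different model (heterogeneous simulation).
MODELS = {
    "upper_vah": "model_A",
    "middle_vah": "model_B",
    "lower_vah": "model_A",
}

def run_hydro_model(args):
    """Stand-in for one HTC task: run `model` for one sub-catchment."""
    catchment, model, rainfall_mm = args
    coeff = 0.3 if model == "model_A" else 0.5  # toy runoff coefficients
    return catchment, model, coeff * rainfall_mm

def run_cascade(rainfall_mm):
    """Fan the independent runs out to a worker pool and collect results."""
    tasks = [(c, m, rainfall_mm) for c, m in MODELS.items()]
    with ThreadPoolExecutor() as pool:
        return {c: (m, q) for c, m, q in pool.map(run_hydro_model, tasks)}

results = run_cascade(100.0)
```

On a Grid the pool would be replaced by job submission to whichever sites the resource broker allocates; the structure (independent tasks, gathered results) is the same.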
Váh River Pilot Site
Váh River Catchment Area: 19,700 km², about 1/3 of Slovakia
(Inflow point)
Nosice
Strečno
(Outflow point)
Pilot Site Catchment Area: 2,500 km² (above Strečno: 5,500 km²)
Typical Results - Flow and Water Depth
Distributed Data Analysis in HEP
– Objectives
• distributed data access
• distributed data mining techniques with neural networks
– Issues
• typical interactive requests will run on O(TB) of distributed data
• transfer/replication times for the whole data set are of the order of one hour
• data is transferred once, in advance of the interactive session
• allocation, installation, and set-up of the corresponding database servers before the interactive session starts
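The pre-staging policy described above, replicating data once and in advance of the interactive session, can be sketched as a simple planner. The dataset sizes, bandwidth, and threshold policy are assumptions, not CrossGrid's actual scheduler logic:

```python
def transfer_time_s(size_gb, bandwidth_mb_s):
    """Time to replicate `size_gb` gigabytes at `bandwidth_mb_s` MB/s."""
    return size_gb * 1024.0 / bandwidth_mb_s

def plan_session(datasets, bandwidth_mb_s, lead_time_s):
    """Replicate a dataset in advance when the transfer fits into the lead
    time before the session; otherwise fall back to remote access."""
    plan = {}
    for name, size_gb in datasets.items():
        fits = transfer_time_s(size_gb, bandwidth_mb_s) <= lead_time_s
        plan[name] = "replicate_in_advance" if fits else "access_remotely"
    return plan

# ~1 TB at 300 MB/s takes close to an hour, matching the slide's estimate.
plan = plan_session(
    {"events_2002": 1000.0, "calib": 5.0, "full_run": 5000.0},
    bandwidth_mb_s=300.0,
    lead_time_s=4000.0,
)
```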
Weather Forecast and Air Pollution Modeling
– Distributed/parallel codes on Grid
• Coupled Ocean/Atmosphere Mesoscale Prediction System
• STEM-II Air Pollution Code
– Integration of distributed databases
– Data mining applied to downscaling weather forecast
COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System): Atmospheric Components
– Complex Data Quality Control
– Analysis:
• Multivariate Optimum Interpolation Analysis (MVOI) of Winds and Heights
• Univariate Analyses of Temperature and Moisture
• OI Analysis of Sea Surface Temperature
– Initialization:
• Variational Hydrostatic Constraint on Analysis Increments
• Digital Filter
– Atmospheric Model:
• Numerics: Nonhydrostatic, Scheme C, Nested Grids, Sigma-z, Flexible Lateral BCs
• Physics: PBL, Convection, Explicit Moist Physics, Radiation, Surface Layer
– Features:
• Globally Relocatable (5 Map Projections)
• User-Defined Grid Resolutions, Dimensions, and Number of Nested Grids
• 6- or 12-Hour Incremental Data Assimilation Cycle
• Can be Used for Idealized or Real-Time Applications
• Single Configuration-Managed System for All Applications
• Operational at FNMOC: 7 Areas, Twice Daily, using 81/27/9 km or 81/27 km grids; Forecasts to 72 hours
• Operational at all Navy Regional Centers (with GUI Interface)
Air Pollution Model – STEM-II
– Species: 56 chemical (16 long-lived, 40 short-lived), 28 radicals (OH, HO2)
– Chemical mechanisms:
• 176 gas-phase reactions
• 31 aqueous-phase reactions
• 12 aqueous-phase solution equilibria
– Equations are integrated with locally 1-D finite element method (LOD-FEM)
– Transport equations are solved with a Petrov-Crank-Nicolson-Galerkin FEM
– Chemistry & mass transfer terms are integrated with semi-implicit Euler and pseudo-analytic methods
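For a production-loss system dc/dt = P − L·c, the semi-implicit Euler step is c_{n+1} = (c_n + Δt·P)/(1 + Δt·L), which stays stable for stiff loss terms. Below is a toy operator-split (LOD-style) step pairing upwind advection with that chemistry update; it is a sketch of the splitting idea only, not STEM-II's actual FEM discretization:

```python
def semi_implicit_euler(c, prod, loss, dt):
    """One chemistry step for dc/dt = P - L*c:
    c_{n+1} = (c_n + dt*P) / (1 + dt*L), stable even for stiff loss terms."""
    return (c + dt * prod) / (1.0 + dt * loss)

def lod_step(conc, wind, prod, loss, dt, dx):
    """One locally-1-D (operator-split) step: upwind advection along the
    wind, then pointwise chemistry. A toy stand-in for the LOD scheme."""
    advected = conc[:]
    for i in range(1, len(conc)):
        advected[i] = conc[i] - wind * dt / dx * (conc[i] - conc[i - 1])
    return [semi_implicit_euler(x, prod, loss, dt) for x in advected]

conc = [1.0, 0.0, 0.0, 0.0]  # a pollutant pulse at the inflow boundary
for _ in range(3):
    conc = lod_step(conc, wind=1.0, prod=0.1, loss=2.0, dt=0.1, dx=1.0)
```

The split lets each operator use the integrator suited to it: explicit transport limited by the CFL condition, an implicit-style update for the stiff chemistry.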
Key Features of X# Applications
– Data
• data generators and databases geographically distributed
• selected on demand
– Processing
• needs large processing capacity, both HPC and HTC
• interactive
– Presentation
• complex data require versatile 3-D visualisation
• support for interaction and feedback to other components
Overview of the CrossGrid Architecture
– Applications: 1.1 BioMed; 1.2 Flooding; 1.4 Meteo Pollution
– Supporting Tools (Applications Development Support): 2.2 MPI Verification; 2.3 Metrics and Benchmarks; 2.4 Performance Analysis; 3.1 Portal & Migrating Desktop
– App. Spec Services: 1.1 Grid Visualisation Kernel; 1.1 User Interaction Services; 1.1, 1.2 HLA and others; 1.3 DataMining on Grid (NN); 1.3 Interactive Distributed Data Access
– New Generic Services: 3.1 Roaming Access; 3.2 Scheduling Agents; 3.3 Grid Monitoring; 3.4 Optimization of Grid Data Access
– Generic Services (Globus / DataGrid): GRAM; GSI; Replica Catalog; GIS / MDS; GridFTP; Globus-IO; DataGrid Replica Manager; DataGrid Job Submission Service; Globus Replica Manager
– Fabric: MPICH-G; Resource Managers (CE, SE) over CPUs and secondary storage; 3.4 Optimization of Local Data Access over tertiary storage; instruments (satellites, radars)
Tool Environment
G-PM consists of:
– Performance Measurement Component
– High Level Analysis Component
– Performance Prediction Component
– User Interface and Visualization Component
It draws on Grid Monitoring (Task 3.3), Benchmarks (Task 2.3), the application source code, and applications (WP1) executing on the Grid testbed.
Legend: RMD – raw monitoring data; PMD – performance measurement data
MPI Verification
– A tool that verifies the correctness of parallel, distributed Grid applications using the MPI paradigm.
– To make end-user applications
• portable,
• reproducible,
• reliable on any platform of the Grid.
– The technical basis: the MPI profiling interface, which allows a detailed analysis of the MPI application.
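The profiling-interface idea, interposing on every communication call to record and check it before the real routine runs, can be illustrated with a wrapper. The `send`/`recv` toy functions and the checks below are invented stand-ins, not MPI or the actual verification tool:

```python
import functools

call_log = []   # every intercepted call is recorded here
mailbox = []    # toy message store for the fake communication layer

def profiled(fn):
    """Interpose on a communication call the way the MPI profiling
    interface lets a tool wrap MPI_* routines: log the call, run a
    correctness check, then invoke the real routine."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        call_log.append((fn.__name__, args))
        if fn.__name__ == "send" and args[1] < 0:
            raise ValueError("invalid destination rank")
        return fn(*args, **kwargs)
    return wrapper

@profiled
def send(msg, dest):
    """Toy stand-in for a point-to-point send."""
    mailbox.append((dest, msg))

@profiled
def recv(rank):
    """Toy stand-in for a receive; fails loudly on a missing send."""
    for i, (dest, msg) in enumerate(mailbox):
        if dest == rank:
            del mailbox[i]
            return msg
    raise RuntimeError("receive with no matching send")

send("hello", 0)
out = recv(0)
```

The application itself is untouched; all checking lives in the interposition layer, which is what makes the approach portable across MPI implementations.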
Benchmark Categories
– Micro-benchmarks
• For identifying basic performance properties of Grid services, sites, and constellations
• To test a single performance aspect, through "stress testing" of a simple operation invoked in isolation
• The metrics captured represent computing power (flops), memory capacity and throughput, I/O performance, network ...
– Micro-kernels
• "Stress-test" several performance aspects of a system at once
• Generic HPC/HTC kernels, including general and often-used kernels in Grid environments
– Application kernels
• Characteristic of representative CG applications
• Capturing higher-level metrics, e.g. completion time, throughput, speedup
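A micro-benchmark in the first category boils down to timing one simple operation in isolation and reporting a rate. A minimal harness follows; the repeat counts and the sample operation are arbitrary choices:

```python
import time

def microbenchmark(op, repeat=5, inner=10000):
    """Time one simple operation invoked in isolation and report the best
    observed rate. Best-of-N damps scheduler noise; the inner loop
    amortizes timer overhead."""
    best = float("inf")
    for _ in range(repeat):
        t0 = time.perf_counter()
        for _ in range(inner):
            op()
        best = min(best, time.perf_counter() - t0)
    return inner / best  # operations per second

# A crude proxy for floating-point throughput on one node.
rate = microbenchmark(lambda: 1.0 * 2.0 + 3.0)
```

A Grid benchmark suite would run such probes per site and publish the captured metrics so schedulers and users can compare resources.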
Performance Measurement Tool G-PM
– Components:
• performance measurement component (PMC),
• component for high level analysis (HLAC),
• component for performance prediction (PPC) based on analytical performance models of application kernels,
• user interface and visualization component UIVC.
For Interactive X# Applications ...
– Resource allocation should be done in near-real time (a challenge for the resource broker & scheduling agents).
– Resource reservation (e.g., by prioritizing jobs)
– Network bandwidth reservation (?)
– Near-real time synchronization between visualization and simulation should be achieved in both directions: user to simulation and simulation to user (rollback etc)
– Fault tolerance
– Post-execution cleanup
User Interaction Service
– A Service Factory creates User Interaction Services
– Resource Brokers and the Scheduler (3.2) run on top of Condor-G and Nimrod
– Running Simulations 1-3, each with its own Control Module (CM for Sim 1, 2, 3)
– Visualisation in a VE at the user site, linked through UIS connections
– Legend: CM – control module; UIS connections vs. other connections
Tools Environment and Grid Monitoring
Applications, Portals (3.1), G-PM Performance Measurement Tools (2.4), MPI Debugging and Verification (2.2), and Metrics and Benchmarks (2.3) all connect to Grid Monitoring (3.3) (OCM-G, R-GMA).
The application programming environment requires information from the Grid about the current status of applications, and it should be able to manipulate them.
Monitoring of Grid Applications
– Monitor = obtain information on, or manipulate, the target application
• e.g. read the status of the application's processes, suspend the application, read/write memory, etc.
– A monitoring module is needed by tools:
• debuggers
• performance analyzers
• visualizers
• ...
CrossGrid Monitoring System
Very Short Overview of OMIS
– Target system view
• hierarchical set of objects: nodes, processes, threads
• for the Grid: new objects – sites
• objects identified by tokens, e.g. n_1, p_1, etc.
– Three types of services: information services, manipulation services, event services
OMIS Services
– Information services• obtain information on target system
• e.g. node_get_info = obtain information on nodes in the target system
– Manipulation services• perform manipulations on the target system
• e.g. thread_stop = stop specified threads
– Event services• detect events in the target system
• e.g. thread_started_libcall = detect invocations of specified functions
– Information + manipulation services = actions
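The three service types can be made concrete with a toy monitoring core. Only the names quoted on the slides (node_get_info, thread_stop, thread_started_libcall, tokens like n_1) come from OMIS; the data model and dispatch logic are invented for illustration:

```python
# Toy target-system state: objects addressed by tokens, as in OMIS.
nodes = {"n_1": {"load": 0.4}, "n_2": {"load": 0.9}}
threads = {"t_1": "running"}
subscriptions = []

def node_get_info(tokens):
    """Information service: report on the named node objects."""
    return {t: nodes[t] for t in tokens}

def thread_stop(tokens):
    """Manipulation service: stop the named thread objects."""
    for t in tokens:
        threads[t] = "stopped"

def on_event(event_name, callback):
    """Event service: run `callback` whenever `event_name` is detected."""
    subscriptions.append((event_name, callback))

def emit(event_name, payload):
    """Stand-in for the monitor detecting an event in the target system."""
    for name, cb in subscriptions:
        if name == event_name:
            cb(payload)

seen = []
on_event("thread_started_libcall", seen.append)
emit("thread_started_libcall", {"thread": "t_1", "func": "MPI_Send"})
info = node_get_info(["n_1"])
thread_stop(["t_1"])
```

Combining an event service with actions gives conditional requests: "when this event fires, run these information/manipulation services."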
Components of OCM-G
– Service Managers
• one per site in the system
• permanent
• request distribution
• reply collection
– Local Monitors
• one per (node, user) pair
• transient (created or destroyed when needed)
• handle local objects
• actual execution of requests
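The split of roles, permanent per-site Service Managers distributing requests and transient per-(node, user) Local Monitors executing them, can be sketched as follows. The class names mirror the slide, but the behaviour is a stand-in, not OCM-G code:

```python
class LocalMonitor:
    """Transient per-(node, user) component that handles local objects
    and actually executes requests."""
    def __init__(self, node, user):
        self.node, self.user = node, user

    def execute(self, request):
        return {"node": self.node, "user": self.user,
                "request": request, "ok": True}

class ServiceManager:
    """Permanent per-site component: distributes requests to Local
    Monitors (creating them on demand) and collects the replies."""
    def __init__(self, site):
        self.site = site
        self.monitors = {}

    def monitor_for(self, node, user):
        key = (node, user)
        if key not in self.monitors:  # Local Monitors are created when needed
            self.monitors[key] = LocalMonitor(node, user)
        return self.monitors[key]

    def handle(self, request, targets):
        replies = [self.monitor_for(n, u).execute(request)
                   for n, u in targets]
        return {"site": self.site, "replies": replies}

sm = ServiceManager("cyfronet")
result = sm.handle("node_get_info", [("n_1", "alice"), ("n_2", "alice")])
```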
Monitoring Environment
A Tool talks OMIS to a Service Manager, which in turn talks OMIS to Local Monitors; Local Monitors start and attach to Application Processes through shared memory, and an external localization service is used to discover components.
– OCM-G Components• Service Managers• Local Monitors
– Application processes
– Tool(s)
– External name service• Component discovery
Security Issues
– OCM-G components handle multiple users, tools, and applications
• possibility to issue a fake request (e.g., posing as a different user)
• authentication and authorization needed
– LMs are allowed to perform manipulations
• an unauthorized user could do anything
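The authenticate-then-authorize flow can be sketched with signed requests. Real Grid security would build on certificates (GSI), so the shared-secret scheme, user names, and policy table below are purely illustrative:

```python
import hashlib
import hmac

# Shared secrets stand in for Grid certificates; the policy table says
# who may issue manipulation requests. Both are illustrative.
USER_KEYS = {"alice": b"alice-secret"}
MAY_MANIPULATE = {"alice": False}

def sign(user, request):
    """What a legitimate client would attach to its request."""
    return hmac.new(USER_KEYS[user], request.encode(),
                    hashlib.sha256).hexdigest()

def handle_request(user, request, signature):
    """Authenticate (valid signature), then authorize (manipulation
    rights) before executing anything, so a fake request posing as a
    different user is rejected."""
    expected = sign(user, request)
    if not hmac.compare_digest(expected, signature):
        return "rejected: bad signature"
    if request.startswith("thread_stop") and not MAY_MANIPULATE[user]:
        return "rejected: not authorized to manipulate"
    return "accepted"

ok = handle_request("alice", "node_get_info n_1",
                    sign("alice", "node_get_info n_1"))
forged = handle_request("alice", "thread_stop t_1", "deadbeef")
denied = handle_request("alice", "thread_stop t_1",
                        sign("alice", "thread_stop t_1"))
```

The key point is the ordering: manipulation services are the dangerous ones, so authorization is checked per request type, not just per connection.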
Portals and Roaming Access
Applications and Portals (3.1) connect through the Roaming Access Server (3.1) to the Scheduler (3.2), GIS / MDS (Globus), and Grid Monitoring (3.3).
– Allow access to the user environment from remote computers
– Independent of the system version and hardware
– Run applications, manage data files, store personal settings
Components:
• Roaming Access Server: user profiles, authentication, authorization, job submission
• Migrating Desktop
• Application portal
Optimization of Grid Data Access
Applications and Portals (3.1) use the Optimization of Grid Data Access service (3.4), which works with the Scheduling Agents (3.2), the Replica Manager (DataGrid / Globus), Grid Monitoring (3.3), and GridFTP.
The service consists of:
• component-expert system
• data-access estimator
• GridFTP plugin
– Handles different storage systems and applications' requirements
– Optimization by selection of data handlers
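The component-expert idea, choosing a data handler from the storage system and access pattern while estimating access cost so callers can compare options, can be sketched minimally. The handler names, latencies, and bandwidths are invented for illustration:

```python
# Invented handler table: the "component-expert" picks a data handler
# from the storage system type and the application's access pattern.
HANDLERS = {
    ("disk", "sequential"): "gridftp_stream",
    ("disk", "random"): "block_reader",
    ("tape", "sequential"): "staged_copy",
    ("tape", "random"): "staged_copy",  # tape cannot seek cheaply
}

LATENCY_S = {"disk": 0.01, "tape": 45.0}       # positioning / mount cost
BANDWIDTH_MB_S = {"disk": 60.0, "tape": 30.0}  # sustained transfer rate

def select_handler(storage, pattern):
    """Component-expert system: map (storage, pattern) to a handler."""
    return HANDLERS[(storage, pattern)]

def estimate_access_s(storage, size_mb):
    """Data-access estimator: latency plus transfer time."""
    return LATENCY_S[storage] + size_mb / BANDWIDTH_MB_S[storage]

# A caller (e.g. a scheduling agent) can compare replicas by estimated cost.
best = min(["tape", "disk"], key=lambda s: estimate_access_s(s, 500.0))
```

Separating selection from estimation is the point: the scheduler consumes the cost numbers, while the expert system stays the single place where storage-specific knowledge lives.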
CrossGrid Collaboration
– Poland: Cyfronet & INP Cracow; PSNC Poznan; ICM & IPJ Warsaw
– Portugal: LIP Lisbon
– Spain: CSIC Santander; Valencia & RedIris; UAB Barcelona; USC Santiago & CESGA
– Ireland: TCD Dublin
– Italy: DATAMAT
– Netherlands: UvA Amsterdam
– Germany: FZK Karlsruhe; TUM Munich; USTU Stuttgart
– Slovakia: II SAS Bratislava
– Greece: Algosystems; Demo Athens; AuTh Thessaloniki
– Cyprus: UCY Nicosia
– Austria: U. Linz
Tasks
1.0 Co-ordination and management (Peter M.A. Sloot, UvA)
1.1 Interactive simulation and visualisation of a biomedical system (G. Dick van Albada, UvA)
1.2 Flooding crisis team support (Ladislav Hluchy, II SAS)
1.3 Distributed data analysis in HEP (C. Martinez-Rivero, CSIC)
1.4 Weather forecast and air pollution modelling (Bogumil Jakubiak, ICM)
WP1 – CrossGrid Application Development
Tasks
2.0 Co-ordination and management (Holger Marten, FZK)
2.1 Tools requirement definition (Roland Wismueller, TUM)
2.2 MPI code debugging and verification (Matthias Mueller, USTUTT)
2.3 Metrics and benchmarks (Marios Dikaiakos, UCY)
2.4 Interactive and semiautomatic performance evaluation tools (Wlodek Funika, Cyfronet)
2.5 Integration, testing and refinement (Roland Wismueller, TUM)
WP2 - Grid Application Programming Environments
Tasks
3.0 Co-ordination and management (Norbert Meyer, PSNC)
3.1 Portals and roaming access (Miroslaw Kupczyk, PSNC)
3.2 Grid resource management (Miquel A. Senar, UAB)
3.3 Grid monitoring (Brian Coghlan, TCD)
3.4 Optimisation of data access (Jacek Kitowski, Cyfronet)
3.5 Tests and integration (Santiago Gonzalez, CSIC)
WP3 – New Grid Services and Tools
Tasks
4.0 Coordination and management (Jesus Marco, CSIC, Santander)
– Coordination with WP1, 2, 3
– Collaborative tools
– Integration Team
4.1 Testbed setup & incremental evolution (Rafael Marco, CSIC, Santander)
– Define installation
– Deploy testbed releases
– Trace security issues
WP4 – International Testbed Organization
Testbed site responsibles:
– CYFRONET (Krakow): A. Ozieblo
– ICM (Warsaw): W. Wislicki
– IPJ (Warsaw): K. Nawrocki
– UvA (Amsterdam): D. van Albada
– FZK (Karlsruhe): M. Kunze
– II SAS (Bratislava): J. Astalos
– PSNC (Poznan): P. Wolniewicz
– UCY (Cyprus): M. Dikaiakos
– TCD (Dublin): B. Coghlan
– CSIC (Santander/Valencia): S. Gonzalez
– UAB (Barcelona): G. Merino
– USC (Santiago): A. Gomez
– UAM (Madrid): J. del Peso
– Demo (Athens): C. Markou
– AuTh (Thessaloniki): D. Sampsonidis
– LIP (Lisbon): J. Martins
Tasks
4.2 Integration with DataGrid (Marcel Kunze, FZK)
– Coordination of testbed setup
– Exchange of knowledge
– Participation in WP meetings
4.3 Infrastructure support (Josep Salt, CSIC, Valencia)
– Fabric management
– HelpDesk
– Provide Installation Kit
– Network support
4.4 Verification & quality control (Jorge Gomes, LIP)
– Feedback
– Improve stability of the testbed
WP4 - International Testbed Organization
CrossGrid Testbed Map
UCY Nicosia
DEMO Athens
AuTh Thessaloniki
CYFRONET Cracow
ICM & IPJ Warsaw
PSNC Poznan
CSIC IFIC Valencia
UAB Barcelona
CSIC-UC IFCA Santander
CSIC RedIris Madrid
LIP Lisbon
USC Santiago
TCD Dublin
UvA Amsterdam
FZK Karlsruhe
II SAS Bratislava
Géant
Tasks
5.1 Project coordination and administration (Michal Turala, INP)
5.2 CrossGrid Architecture Team (Marian Bubak, Cyfronet)
5.3 Central dissemination (Yannis Perros, ALGO)
WP5 – Project Management
EU Funded Grid Project Space (Kyriakos Baxevanidis)
– Applications: GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID
– Middleware & Tools: GRIP, EUROGRID, DAMIEN
– All built on underlying infrastructures, spanning science and industry/business
– Links with European national efforts
– Links with US projects (GriPhyN, PPDG, iVDGL, ...)
Project Phases
M 1-3: requirements definition and merging
M 4-12: first development phase: design, 1st prototypes, refinement of requirements
M 13-24: second development phase: integration of components, 2nd prototypes
M 25-32: third development phase: complete integration, final code versions
M 33-36: final phase: demonstration and documentation
Rules for X# SW Development
– Iterative improvement:
– development, testing on testbed, evaluation, improvement
– Modularity
– Open source approach
– SW well documented
– Collaboration with other # projects
Collaboration with other # Projects
– Objective – exchange of
– information
– software components
– Partners
– DataGrid
– DataTag
– Others from GRIDSTART (of course, with GridLab)
– Participation in GGF
Status after M6
– Software Requirements Specifications together with use cases
– CrossGrid Architecture defined
– Detailed Design documents for tools and the new Grid services (OO approach, UML)
– Analysis of security issues and the first proposal of solutions
– Detailed description of the test and integration procedures
– Testbed first experience
• Sites: LIP, FZK, CSIC+USC, PSNC, AuTH+Demo
• Basis: EDG release 1.2
• Applications:
– EDG HEP simulations (ATLAS, CMS)
– first distributed prototypes using MPI: NN distributed training, evolutionary algorithms
Near Future
– Participation in the production testbed with DataGrid
• All sites will be ready to join by the end of September
• Common DEMO at IST 2002, Copenhagen, November 4th-6th
– Collaboration with DataGrid on specific points (e.g. user support and helpdesk software)
– CrossGrid Workshop, Linz (with EuroPVM/MPI 2002), September 28th-29th
– Conference "Across Grids" together with the R&I Forum
• Santiago de Compostela, Spain, February 9th-14th, 2003
• With proceedings (reviewed papers)
Linz CrossGrid Workshop Sep 28th-29th
– Evaluate the current status of all tasks
– Discuss interfaces and functionality
– Understand what we may expect as first prototypes
– Coordinate the operation of the X# testbed
– Agree on common rules for software development (SOP)
– Start to organize the first CrossGrid EU review
– Meet with EU DataGrid representatives
– Discuss the technology for the future (OGSA)
Details at
http://www.gup.uni-linz.ac.at/crossgrid/workshop/
Summary
– Layered structure of all X# applications
– Reuse of SW from DataGrid and other # projects
– Globus as the bottom layer of the middleware
– Heterogeneous computer and storage systems
– Distributed development and testing of SW
– 12 partners in applications, 14 in middleware, 15 in testbeds – 21 partners in total
– First 6 months – successful
Thanks to
– Michal Turala
– Kasia Zajac
– Maciek Malawski
– Marek Garbacz
– Peter M.A. Sloot
– Roland Wismueller
– Wlodek Funika
– Ladislav Hluchy
– Bartosz Balis
– Jacek Kitowski
– Norbert Meyer
– Jesus Marco
– Marcel Kunze
www.eu-crossgrid.org