PRAGMA Summit, 3/17/2008
Cindy Zheng, PRAGMA Grid Coordinator
Pacific Rim Application and Grid Middleware Assembly
http://www.pragma-grid.net
http://goc.pragma-grid.net
Applications and Middleware Work in PRAGMA Grid
Overview
• Introduction to PRAGMA Grid
  – People
  – Hardware
  – Software
  – Operations
• Grid Applications
• Grid Middleware
  – Security
  – Infrastructure
  – Services
• Education Grid
• Collaborations/Integrations
• Grid Interoperations
PRAGMA Grid
32 institutions in 16 countries/regions, 27 compute sites (+ 10 in preparation)
[World map of grid sites]
• Australia: APAC, MU, QUT
• Chile: UCN, UChile
• China: CNIC, GUCAS, JLU, LZU
• Costa Rica: CeNAT-ITCR
• Hong Kong: CUHK
• India: UoHyd
• Indonesia: SKU, UI
• Japan: AIST, OsakaU, TITech, UTsukuba
• Korea: KISTI
• Malaysia: MIMOS, USM
• Mexico: CICESE, UNAM
• New Zealand: BESTGrid
• Philippines: ASTI
• Puerto Rico: UPRM
• Singapore: BII, IHPC, NGO, NTU
• Switzerland: UZH
• Taiwan: ASGC, NCHC
• Thailand: NECTEC, ThaiGrid
• USA: BU, NCSA, SDSC, UUtah
• Vietnam: HCMUT, HUT, IOIT-HCM
PRAGMA Grid Members and Team
http://goc.pragma-grid.net/wiki/index.php/Site_status_and_tasks
• Sites
  – 23 sites from PRAGMA member institutions
  – 15 sites from non-PRAGMA member institutions
  – 27 sites contributed compute clusters
• Team members
  – 190 and growing
  – One management contact per site
  – 1~3 technical support contacts per site
  – 1~4 application drivers per application team
  – 1~5 members per middleware development team
PRAGMA Grid Compute Resources
http://goc.pragma-grid.net/pragma-doc/computegrid.html
Characteristics of PRAGMA Grid
• Grass-roots
  – Voluntary contributions
  – Open (PRAGMA member or not, Pacific Rim or not)
  – A long-term collaborative working experiment
• Heterogeneous
  – Funding
  – No uniform infrastructure management
  – Variety of sciences and applications
  – Site policies, system and network environments
• Realistically tough
  – Good for development, collaborations, integrations and testing
PRAGMA Grid Software Layers
http://goc.pragma-grid.net/pragma-doc/userguide/join.html
[Layer diagram, top to bottom:]
• Applications: FMO, CSTFT, Savannah, MM5, AMBER, Siesta, Phylogenetics, …
• Application middleware: Ninf-G, Nimrod/G, Mpich-GX, …
• Infrastructure middleware: Gfarm, SCMSWeb, MOGAS, CSF
• Globus (required)
• Local job scheduler (one required): SGE, PBS, LSF, SQMS, …
PRAGMA Grid Operations
One of the major lessons from PRAGMA Grid, which everybody has noticed and would agree with:
"You have to Grid People before you can Grid machines"
– Rajesh Chhabra, Australia
Grid Operation
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
• Develop and maintain mutually beneficial and happy relationships among all people involved
  – Geographies, time zones, languages
  – Funding, chains of command, priorities
  – Mutual benefit, consensus, active leadership
  – Coordinator, site contacts
• Collaboration tools
  – Mailing lists, VTCs, Skype, semi-annual workshops
  – Grid Operation Center (GOC)
  – Wiki, where all site, application and middleware teams collaborate
• Heterogeneity
  – Tolerate it, overcome it with technology, and take advantage of it
  – Software inventory instead of a software stack
  – Many sub-grids for applications
  – Recommendations instead of requirements
  – Software licenses (grid-wide license for Amber)
Create New Ways To Operate
http://goc.pragma-grid.net, http://wiki.pragma-grid.net
• Lacks precedent
• Everyone contributes ideas and suggestions
• Evolving and improving over time
• Everyone documents and updates (wiki)
  – Create new procedures
    • New site setup to join PRAGMA Grid
      http://goc.pragma-grid.net/pragma-doc/userguide/join.html
    • New user/application to run in PRAGMA Grid
      http://goc.pragma-grid.net/pragma-doc/userguide/pragma_user_guide.html
  – Tabulate information
    • Application pages, site pages, resources tables, status pages
  – Publish instructions
    • Software deployment procedures, tools
Application Driven
Applications and Middleware
http://goc.pragma-grid.net/applications/default.html
• Real science applications are paired with and drive middleware development
• Open to applications of all scientific disciplines
• Achieve long runs and scientific results
• ~30 applications in 3 years:
  – Structural biology
    • Phaser (MU, Australia)
  – Quantum mechanics, quantum chemistry
    • TDDFT, QM-MD, FMO/Ninf-G (AIST, Japan)
  – Climate simulation
    • Savannah/Nimrod (MU, Australia)
    • MM5/Mpich-GX (CICESE, Mexico; KISTI, Korea)
  – Genomics and metagenomics
    • iGAP/Gfarm/CSF (UCSD, USA; AIST, Japan; JLU, China)
    • HPM: genomics (IOIT-HCM, Vietnam)
    • mpiBLAST/Mpich-G2 (ASGC, Taiwan)
    • Phylogenetics/Gfarm/CSF (UWisc and UCSD, USA)
  – Computational chemistry and fluid dynamics
    • CSE-Online (UUtah, USA)
    • e-AIRS (KISTI, Korea)
    • GAMESS-APBS/Nimrod (UZurich, Switzerland)
  – Molecular simulation
    • Siesta/Nimrod (UZurich, Switzerland; MU, Australia)
    • Amber/Gfarm (USM, Malaysia; AIST, Japan)
  – Environmental science
    • CSTFT/Ninf-G (UPRM, Puerto Rico)
Grid Middleware

Ninf-G
http://ninf.apgrid.org
• Developed by AIST, Japan
• Based on the GridRPC model, an OGF standard (see the client sketch below)
• Supports parallel computing
• Integrated into NMI Release 8 (the first non-US software in NMI)
• Integrated with Rocks
• 4 applications ran in PRAGMA Grid; 2 ran across multiple grids
  – TDDFT
  – QM/MD
  – FMO
  – CSTFT (UPRM)
• Achieved long runs (50 days)
• Improved fault tolerance
• Simplified deployment procedures
• Sped up development cycles
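To make the GridRPC model concrete, here is a minimal client sketch using the standard OGF GridRPC C API that Ninf-G implements. The configuration file name, remote function name, and argument signature are hypothetical placeholders; a real signature is defined by the application's Ninf-G IDL.

```c
/* Minimal GridRPC client sketch (standard OGF GridRPC C API, which
 * Ninf-G implements). "client.conf" and "calc/sim_step" are
 * hypothetical; the real signature comes from the Ninf-G IDL. */
#include <stdio.h>
#include "grpc.h"

int main(int argc, char *argv[])
{
    grpc_function_handle_t h;
    double in = 1.0, out = 0.0;

    if (grpc_initialize("client.conf") != GRPC_NO_ERROR)
        return 1;                       /* read client configuration */

    /* Bind a handle to a remote executable on the default server */
    grpc_function_handle_default(&h, "calc/sim_step");

    /* Synchronous remote call; the middleware ships the arguments to
     * the remote cluster, runs the job, and returns the result */
    if (grpc_call(&h, in, &out) == GRPC_NO_ERROR)
        printf("result = %f\n", out);

    grpc_function_handle_destruct(&h);
    grpc_finalize();
    return 0;
}
```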
Nimrod/G
http://www.csse.monash.edu.au/~davida/nimrod
• Developed by Monash University (MU), Australia
• Supports large-scale parameter sweeps on grid infrastructure
• Easy user interface – Nimrod portals at
  – MU, Australia
  – UZurich, Switzerland
  – UCSD, USA
• 4 applications ran in PRAGMA Grid and 2 ran in multiple grids
  – Savannah climate simulation (MU, Australia)
  – GAMESS/APBS (UZurich, Switzerland)
  – Siesta (UZurich, Switzerland)
  – Structural biology (MU, Australia)
• Developed an interface to UNICORE with UZH
• Achieved long runs (90 different scenarios of 6 weeks each)
• Improved fault tolerance (innovative time_step mechanism)
• Enhancements in data and storage handling
[Diagram: a plan file containing a description of parameters expands into independent jobs Job 1 … Job 18]
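As a rough illustration of the diagram above, the sketch below shows the cross-product expansion a parameter sweep performs: two declared parameters with 3 and 6 values expand into 18 independent jobs, which a tool like Nimrod/G would then schedule across grid resources. This is a conceptual sketch, not Nimrod code; the parameter names and command line are invented.

```c
/* Conceptual sketch of parameter-sweep expansion: two parameters
 * (3 x 6 values) expand into 18 independent jobs, the kind of job
 * list a plan file declares and Nimrod/G schedules on the grid.
 * Parameter names and the command line are hypothetical. */
#include <stdio.h>

int main(void)
{
    const double temps[] = {300.0, 350.0, 400.0};
    const double winds[] = {5.0, 10.0, 15.0, 20.0, 25.0, 30.0};
    int job = 1;

    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 6; j++)
            printf("Job %2d: ./model --temp %.0f --wind %.0f\n",
                   job++, temps[i], winds[j]);
    return 0;
}
```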
Mpich-GX
http://www.moredream.org/mpich.htm
• Mpich-GX
  – Korea Institute of Science and Technology Information (KISTI), Korea
  – Based on Mpich-G2
  – Grid-enabled MPI (see the sketch after this list), supporting
    • Private IP
    • Fault tolerance
• MM5 and WRF
  – CICESE, Mexico
  – Medium-scale atmospheric simulation models
• Experiment (on KGrid)
  – WRF works well with MPICH-GX
  – MM5 experienced scaling problems with MPICH-GX when using more than 24 processors in a cluster
  – The private-IP functionality is usable
  – The private-IP performance is reasonable
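Since MPICH-GX preserves the standard MPI interface, applications such as MM5 and WRF remain ordinary MPI programs; private-IP routing and fault tolerance are handled inside the library. A minimal sketch of the SPMD pattern such codes follow (the domain-decomposition comment is illustrative, not MM5 code):

```c
/* Plain MPI program of the kind a grid-enabled MPI such as MPICH-GX
 * runs across clusters: the application is unchanged standard MPI;
 * private-IP routing and fault tolerance live inside the library. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank would compute on its slice of the simulation domain */
    printf("rank %d of %d working on its subdomain\n", rank, size);

    MPI_Finalize();
    return 0;
}
```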
Education Grid
http://prime.ucsd.edu
http://prius.ics.es.osaka-u.ac.jp/en/index.html
PRIME (Pacific Rim Undergraduate Experiences) provides UCSD undergraduate students with international, interdisciplinary research internships and cultural experiences, in collaboration with PRAGMA since 2004. PRIUS (Pacific Rim International UniverSity) provides Osaka University students with expert lectures and internships abroad, in collaboration with PRAGMA since 2005.
Sample middleware projects:
– MOGAS
– Grid security analysis
Sample applications run in PRAGMA Grid this year:
– Climate modeling
– Multi-walled carbon nanotube and polyethylene oxide composite computer visualization model
– Metabolic regulation of ionic currents and pumps in a rabbit ventricular myocyte model
– Improving binding energy using quantum mechanics
– Cardiac mechanics modeling
– H5N1 simulation
– Shp2 protein tyrosine phosphatase inhibitor simulation for cancer research
Science
Technologies
Collaborations
Integrations
Collaborations With Science and Technology Teams
• Grid security
  – Naregi (Japan), APGrid, GAMA (SDSC, USA)
• Grid infrastructure
  – Monitoring – SCMSWeb (ThaiGrid, Thailand)
  – Accounting – MOGAS (NTU, Singapore)
  – Metascheduling – Community Scheduler Framework (JLU, China)
  – Cyber-environment – CSE-Online (UUtah, USA)
  – Rocks and middleware (SDSC, USA; …)
    • Ninf-G, SCE, Gfarm, Bio, K*Rocks, Condor, …
• Science, datagrid, sensor, network
  – Biosciences – Avian Flu, portal, …
  – Gfarm-fuse (AIST, Japan)
  – GEON data network
  – GLEON sensor network
  – OptIPuter
    • High-performance networked TDW
    • Telescience
Grid Security
• Trust in PRAGMA Grid
  http://goc.pragma-grid.net/pragma-doc/certificates.html
  – IGTF distribution
  – Non-IGTF distribution (trusts all PRAGMA Grid sites)
• APGrid PMA
  – One of three IGTF founding PMAs
  – Many PRAGMA Grid sites are members
  – PRAGMA CA
• Naregi-CA
  – AIST, UCSD, UChile, UoHyd, UPRM
• PRAGMA CA (experimental and production)
  – Based on Naregi-CA
  – Catch-all CA for PRAGMA
  – The production CA will be IGTF compliant
• MyProxy and VOMS services
  – APAC and UCSD
• Work with GAMA
  – Integrate with Naregi-CA (Naregi, UCSD)
  – Integrate with VOMS (AIST)
  – Add a servlet for account management (UChile)
Gfarm Grid File System
http://datafarm.apgrid.org
• UTsukuba; open-source development at SourceForge.net
• Grid file system that federates the storage of each site
• A meta-server keeps track of file copies and locations
• Can be mounted from cluster nodes and clients (GfarmFS-FUSE)
• Parallel I/O and near-site copies for scalable performance
• Replication for fault tolerance
• Uses GSI authentication
• Easy application deployment and file sharing (see the sketch below)
• Distributed meta-servers?
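Because GfarmFS-FUSE exposes the federated file system through an ordinary mount point, unmodified applications can share files with plain POSIX I/O, which is what makes application deployment easy. A minimal sketch, assuming a hypothetical mount point /gfarm and file path:

```c
/* Reading a replicated file through a GfarmFS-FUSE mount point:
 * the application uses ordinary POSIX I/O while Gfarm resolves a
 * nearby replica behind the mount. "/gfarm" and the file path are
 * hypothetical. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *fp = fopen("/gfarm/shared/input.dat", "r");
    char line[256];

    if (fp == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);          /* process the shared input */
    fclose(fp);
    return EXIT_SUCCESS;
}
```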
Develop and Test GfarmFS-FUSE in PRAGMA Grid
http://goc.pragma-grid.net/wiki/index.php/Resources_and_Data
• Testing with applications
  – iGAP (Gfarm, Japan; UCSD, USA; JLU, China)
    • Huge number of small files
    • High meta-data access overhead
    • Added a meta-data cache server
    • Dramatic improvements (44 sec -> 3.54 sec)
  – AMBER (USM, Malaysia; Gfarm, Japan)
    • Remote Gfarm meta-server
    • The meta-server is a bottleneck
    • File-sharing permissions, security
    • Gfarm 2.0 improved performance
    • Use as shared storage only
• Version 1.4 works well in local or regional grids
  – GeoGrid, Japan
  – CLGrid, Chile
• Integration
  – SCMSWeb (ThaiGrid, Thailand)
  – Rocks (SDSC, USA; UZH, Switzerland)
SCMSWeb
http://www.opensce.org/components/SCMSWeb
• Developed by Kasetsart University and ThaiGrid
• Web-based real-time grid monitoring system
  – System usage, job/queue status
  – Probes – Globus authentication, job submission, GridFTP, Gfarm access, …
  – Network bandwidth measurements with Iperf
  – PRAGMA Grid geo map
• Supports Linux and Solaris; good meta-view, easy user interface, excellent user support
• Developed and tested in PRAGMA Grid
  – Deployed at 27 sites; improved scalability and performance
  – Sites help with porting to ia64 and Solaris
  – Demands push fast expansion of functionality
• More regional/national grids learned about and adopted it
SCMSWeb Collaborations and Integrations
• Grid Interoperation Now (GIN, OGF)
  http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
  – Worked with PRAGMA Grid, TeraGrid, OSG, NorduGrid and EGEE on GIN testbed monitoring
    (http://goc.pragma-grid.net/cgi-bin/scmsweb/probe.cgi); added probes to handle various grid service configurations/tests
  – Worked with CERN and implemented an XML -> LDIF translator for the GIN geo map
    http://maps.google.com/maps?q=http://lfield.home.cern.ch/lfield/gin.kml
  – Worked with many grid-monitoring software developers on a common schema for cross-grid monitoring
    http://wiki.pragma-grid.net/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring
• Software integration and interoperations
  – Rocks – SCE roll
  – MOGAS – grid accounting
  – Bandwidth measurements
  – Data federator for grid applications
    • Provide site software information
    • Standardize data extraction and formats
    • Improve data storage with an RDBMS
  – Condor, CSF, … – provide resource information
  – Interoperate with other monitoring software
    • Ganglia support
MOGAS
http://ntu-cg.ntu.edu.sg/pragma/index.jsp
• Multi-Organization Grid Accounting System (MOGAS)
  – Led by Nanyang Technological University (NTU), funded by the National Grid Office in Singapore
  – Built on the Globus core (GridFTP, GRAM, GSI)
  – Supports GT2, 3 and 4; SGE, PBS
  – Job/user/cluster/OU/grid-level usage; job logs; metering and charging tools
• Developed and tested in PRAGMA Grid
  – Deployed at 14 sites with different GT versions, job schedulers, GRAM scripts and security policies
  – Feedback led to improvements and an automated deployment procedure
  – Decentralized servers and a better database improve scalability and performance
  – Collaborations and integrations with application and other middleware teams push the development of an easy database interface and more efficient data collection
CSF4
http://goc.pragma-grid.net/wiki/index.php/CSF_server_and_portal
• Community Scheduler Framework, v4 – a meta-scheduler
  – Developed by Jilin University, China
  – Grid services hosted in GT4; WSRF-compliant execution component in Globus Toolkit 4
  – Open source: http://sourceforge.net/projects/gcsf
  – Supports GT2 and GT4; LSF, PBS, SGE, Condor
  – Easy user interface – portal
• Testing and collaborating in PRAGMA
  – Testing with the applications iGAP and AFG (UCSD, AIST, KISTI, …)
  – Collaborate and integrate with Gfarm on data staging (AIST, Japan)
  – Set up a CSF server and portal (SDSC, USA)
  – Collaborate/integrate with SCMSWeb for resource information (ThaiGrid, Thailand)
  – Leverage resources and a global grid testing environment
Computational Science & Engineering Online
http://cse-online.net
• Developed by University of Utah, USA (Thanh N. Truong)
• Desktop tool; a user-friendly interface enables seamless access to remote data, tools and grid computing resources
• Currently supports computational chemistry
• Can be customized for other science domains
• Developed an interface to TeraGrid
• Collaborates with ThaiGrid as a case study
  – Used for a computational workshop
  – Extends grid access to a portal architecture
  – Improved security
• Working on an interface to PRAGMA Grid
  – Heterogeneity
[Figure: Quantum Chemistry, Nano-materials, Drug Design]
Collaborations with OptIPuter, GLIF and CAMERA
http://www.optiputer.net
• OptIPuter (Optical networking, Internet Protocol, computer storage, processing and visualization technologies)
  – Infrastructure that tightly couples computational resources over parallel optical networks using the IP communication mechanism
  – Its central architectural element is optical networking, not computers
  – Enables scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks
• Rocks/SAGE VIS roll (SDSC)
• Networked Tiled Display Walls (TDW)
  – Low cost
  – For research collaboration
  – For remote education and conferencing
  – Deployed at many PRAGMA Grid sites
Build a Rocks / SAGE OptIPortal
[Photos: OptIPortal installations at NCMIR@UCSD, SIO@UCSD, UIC, Calit2@UCI, KISTI, NCSA & TRECC, Calit2@UCSD, AIST, UZurich, CNIC, NCHC, Osaka U]
Global Lambda Integrated Facility (GLIF)
http://www.glif.is
Visualization courtesy of Bob Patterson, NCSA.
– GLIF maps to many PRAGMA Grid sites
– PRAGMA Grid uses GLIF to solve grid applications' bandwidth problems
Integrate CAMERA and PRAGMA Grid
Microbial metagenomicist user base: over 1300 registered users from 48 countries
Calit2/PRAGMA/CAMERA/LambdaGrid Collaborations
• Add a CAMERA server to the PRAGMA Grid testbed
  – Ad hoc supercomputing
  – NIMROD?
  – New bioinformatics apps
• Set up PRAGMA OptIPortal LambdaGrid for several PRAGMA sites
  – KISTI, Konkuk U (Korea)
  – AIST, Osaka U (Japan)
  – CNIC (China)
  – NCHC (Taiwan)
  – APAC, UMelbourne, Monash U, U Queensland (Australia)
  – CICESE (Mexico)
  – UZurich (Switzerland)
  – Plus other volunteers!
PRAGMA countries with CAMERA registered users (source: Paul Gilna, Kayo Arima, Calit2):
USA 761, China 24, Australia 20, India 15, Mexico 14, Japan 14, Chile* 6, Switzerland* 5, Taiwan 4, Singapore 3, New Zealand* 2, Malaysia 2, Korea 1, Vietnam* 0, Thailand 0, Costa Rica* 0
Grid Interoperation
Grid Interoperation Now (GIN)
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Open Grid Forum and GIN
• GIN-OPS (led by PRAGMA)
• GIN testbed (February 2006 – ongoing)
  – One or more clusters from each grid
  – Still part of each production grid
  – Running real science applications
  – Exploring interoperation issues
  – Developing solutions
  – Providing insight to the standardization effort
• Application driven
  – TDDFT/Ninf-G (PRAGMA – AIST, Japan)
    • PRAGMA, TeraGrid, OSG, NorduGrid; EGEE
  – Savannah fire simulation (PRAGMA – Monash University, Australia)
    • PRAGMA, TeraGrid, OSG
Grid Interoperation Now (GIN), continued
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Software interface and integration
  – Ninf-G (AIST/PRAGMA) – NorduGrid
  – Nimrod/G (MU-PRIME/PRAGMA) – UNICORE
  – SCMSWeb (ThaiGrid/PRAGMA) – Condor (UWisc/OSG)
  – SCMSWeb (ThaiGrid/PRAGMA) – BDII (CERN)
  – VDT (OSG) and Rocks (SDSC/PRAGMA) integration
• Multi-grid monitoring
  – Led by ThaiGrid/PRAGMA
  – SCMSWeb probe matrix (PRAGMA – ThaiGrid, Thailand)
  – Common schema (http://goc.pragma-grid.net/wiki/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring)
    • PRAGMA – SCMSWeb, MOGAS
    • TeraGrid – Globus GT4.0.1, Ganglia, NAGIOS
    • EGEE – MonALISA
    • NorduGrid/ARC – NorduGrid/MDS2, NorduGrid/Grid Monitor
Peer-Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
• Different from the GIN testbed
  – More resources and support from each grid
  – Either uni-directional or bi-directional application runs
  – Long runs to achieve scientific results
• OSG <-> PRAGMA (January 2007 – ongoing)
  – How
    • Each grid identifies management, application drivers and resource supporters
    • All participants document application requirements, meetings, issues, solutions, status, results, … at wiki.pragma-grid.net
  – Resources
    • OSG – FermilabGrid; will add UWisc
    • PRAGMA Grid – any sites the application drivers choose to use
  – Applications
    • OSG – GISolve, spatial interpolation (UIowa, USA)
    • PRAGMA
      – FMO/Ninf-G, quantum chemistry (AIST, Japan) – completed
      – Structural biology (MU, Australia) – starting soon
OSG-PRAGMA Grid Interoperation Experiments
http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations
• More resources and support from each grid, but no special arrangements
• Application long run
  – GridFMO/Ninf-G – large-scale quantum chemistry (Tsutomu Ikegami, AIST, Japan)
  – 240 CPUs from OSG and PRAGMA Grid; 10 days x 7 calculations
  – Fault tolerance enabled the long run
  – Meaningful and usable scientific results
• Starting a series of three workshops
  – First: March~April 2008
    • Organized by UZurich
    • Swiss Grid, European Federated Grid (Euro-Grid), and PRAGMA
• Goal
  – Inform and learn from each other
  – Seek ways to collaborate
• Collaboration work started in summer 2007
  – Nimrod interface to UNICORE
Lessons Learned From Grid Interoperation
http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps
• Grid interoperation makes large-scale calculations possible
• Differences among grids provide learning, collaboration and integration opportunities
  – IGTF, VOMS (GIN)
  – Common Software Area (TeraGrid)
  – Ninf-G – NorduGrid
  – Nimrod/G – UNICORE
  – SCMSWeb – Condor
  – SCMSWeb – BDII
  – SCMSWeb probe matrix for GIN testbed monitoring
  – Common schema among many grid-monitoring systems
  – VDT (OSG) and Rocks (SDSC/PRAGMA) integration
• Differences in grid environments are a source of difficulties for users and applications
  – Different user-access setup procedures take extra effort
  – Different job submission protocols
    • GRAM, Sandbox, GridFTP, modified GRAM, …
    • One-to-one interfaces – are they scalable? Possible standards?
• Middleware fault tolerance and flexible resource management are important
  – Cope with unfamiliar fault conditions, lack of parallel computation support, …
Collaborate in Publishing Research Results
Some published papers in 2007:
• Amaro RE, Minh DDL, Cheng LS, Lindstrom WM Jr, Olson AJ, Lin JH, Li WW, and McCammon JA. "Remarkable Loop Flexibility in Avian Influenza N1 and Its Implications for Antiviral Drug Design." J. Am. Chem. Soc. 2007, 129, 7764-7765. (PRIME)
• Choi Y, Jung S, Kim D, Lee J, Jeong K, Lim SB, Heo D, Hwang S, and Byeon OH. "Glyco-MGrid: A Collaborative Molecular Simulation Grid for e-Glycomics," in 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 2007. Accepted.
• Ding Z, Wei W, Luo Y, Ma D, Arzberger PW, and Li WW. "Customized Plug-in Modules in Metascheduler CSF4 for Life Sciences Applications," New Generation Computing, in press, 2007.
• Ding Z, Wei S, Ma D, and Li WW. "VJM – A Deadlock Free Resource Co-allocation Model for Cross Domain Parallel Jobs," in HPC Asia 2007, Seoul, Korea, 2007, in press.
• Görgen K, Lynch H, Abramson D, Beringer J, and Uotila P. "Savanna fires increase monsoon rainfall as simulated using a distributed computing environment," to appear, Geophysical Research Letters.
• Ichikawa K, Date S, Krishnan S, Li W, Nakata K, Yonezawa Y, Nakamura H, and Shimojo S. "Opal OP: An extensible Grid-enabling wrapping approach for legacy applications," GCA2007 – Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 117-127, Singapore, June 2007. (PRIUS)
• Ichikawa K, Date S, and Shimojo S. "A Framework for Meta-Scheduling WSRF-Based Services," Proceedings of 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 481-484, Aug. 2007. (PRIUS)
• Kuwabara S, Ichikawa K, Date S, and Shimojo S. "A Built-in Application Control Module for SAGE," Proceedings of 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 117-120, Aug. 2007. (PRIUS)
• Takeda S, Date S, Zhang J, Lee BU, and Shimojo S. "Security Monitoring Extension for MOGAS," GCA2007 – Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 128-137, Singapore, June 2007. (PRIUS)
• Tilak S, Hubbard P, Miller M, and Fountain T. "The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems," to appear in the Proceedings of e-Science 2007.
• Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee BS, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, and Tatebe O. "Lessons Learned Through Driving Science Applications in the PRAGMA Grid," Int. J. Web and Grid Services, Vol. 3, No. 3, pp. 287-312, 2007.
…
Summary
• PRAGMA Grid
  – A shared vision lowers resistance to using others' software and testing on others' resources
  – Formed new development collaborations
  – Its size and heterogeneity allow exploring issues which any functional grid must resolve
    • Management, resource and software coordination
    • Identity and fault management
    • Scalability and performance
  – Feedback between applications and middleware helps improve software and promotes software integration
• A heterogeneous global grid
  – Is realistic and challenging
  – Can be good for middleware development and testing
  – Can be useful for real science
• Impact
  – Software dissemination (Rocks, Ninf-G, Nimrod, SCMSWeb, Naregi-CA, …)
  – Helping new national/regional grids (Chile, Vietnam, Hong Kong, …)
• The key is people; the key is collaboration
How Can I Participate?
• Get involved now
  – PRAGMA or similar collaborative communities
    • Costs a little
    • Benefits a lot
  – Be a part of a larger grid effort
  – Learn by doing
  – Build collaborations
  – Develop bigger/better ideas and projects
  – Push the use of networks and other infrastructure
A Grass Roots Effort
“One of the most important lessons of the Internet is that it grows most successfully where grass roots initiatives are encouraged and enabled. The Internet has historically grown from the bottom up, and this aspect continues to fuel its continued growth in the academic and commercial sectors.”
– Vint Cerf, UN Economic and Social Council, 2000
http://www.pragma-grid.net
http://goc.pragma-grid.net
http://wiki.pragma-grid.net
Thank You
• PRAGMA is supported by the National Science Foundation (Grant Nos. INT-0216895, INT-0314015, OCI-0627026) and by member institutions
• PRIME is supported by the National Science Foundation under NSF INT-04007508
• PRAGMA Grid is the result of contributions and support from all PRAGMA Grid team members
Overarching Goals
PRAGMA, "A Practical Collaborative Framework"
People and applications:
• Strengthen existing and establish new collaborations
• Work with science teams to advance grid technologies and improve the underlying infrastructure
• In the Pacific Rim and globally
PRAGMA Member Institutions
http://www.pragma-grid.net
33 institutions from 12 countries/regions. Founded 2002. Supported by members.
[World map of member institutions]
• Australia: APAC, MU
• China: CNIC, JLU
• India: UoHyd
• Japan: AIST, APAN, CCS, CMC, NARC, OsakaU, TITech
• Korea: KBSI, KISTI, Konkuk
• Malaysia: MIMOS, USM
• Mexico: CICESE
• New Zealand: BeSTGRID
• Singapore: BII, IHPC, NGO
• Taiwan: ASGC, NCHC
• Thailand: KU, NECTEC, TNGC
• USA: Calit2, CRAY, CRBS, NCSA, PNWG, SDSC, StarLight, TransPAC2, UCSD, UUtah
Overview and Approach
Process to promote routine use and team science:
• Application-driven collaborations: applications and middleware
• Routine-use lab/testbed: testing applications, building the grid and GOC
• Multiway dissemination: key middleware
• Workshops and organization: information exchange, planning and review
• Inputs: new collaborations, new members, expanded users, expanded impact
• Outcomes: improved middleware, broader use, new collaborations, technology transfer, standards, publications, new knowledge, data access, education
PRAGMA Working Groups
• Bioscience
• Telescience
• Geoscience
• Resources and data
  – Grid middleware interoperability
  – Global grid usability and productivity
The PRAGMA Grid effort is led by the resources and data working group, but relies on collaborations and contributions among all working groups.
Lessons Learned From Running Applications
• PRAGMA Grid and its heterogeneous environment are great for
  – Testing
  – Collaborating
  – Integrating
  – Sharing
• Not easy
  – Middleware needs improvements
    • Working in heterogeneous environments
    • Fault tolerance
  – Need user-friendly portals and services
    • Automate and integrate
      – Information collection (grid monitoring, workflow)
      – Decisions and executions (scheduling)
      – Domain-specific easy user interfaces (portals, CE tools)
      – …
MM5-WRF/Mpich-GX Experiment
[Diagram: Hurricane Marty and Santana Winds simulations run over Mpich-GX (private IP support, fault tolerance support) across the "pluto" and "eolo" clusters at CICESE Ensenada (México) and SDSC (USA), plus KGrid, with output collected]
"PRAGMA is a great model and needs to be emulated. It has helped weaken barriers between different research groups across different continents and allowed people to trust and collaborate rather than compete."
– Arun Agarwal, UoHyd, India
PRAGMA Gfarm Datagrid
http://goc.pragma-grid.net/pragma-doc/datagrid.html
[Map of datagrid sites; legend: compute cluster]