
Future Grid All Hands Meeting Introduction


Page 1: Future Grid All Hands Meeting Introduction

FutureGrid

Future Grid All Hands Meeting

Introduction

Indianapolis, October 2-3 2009, www.futuregrid.org

Geoffrey Fox, [email protected], www.infomall.org

School of Informatics and Computing and Community Grids Laboratory,

Digital Science Center, Pervasive Technology Institute

Indiana University

Page 2: Future Grid All Hands Meeting Introduction

FutureGrid

• The goal of FutureGrid is to support the research that will invent the future of distributed, grid, and cloud computing.

• FutureGrid will build a robustly managed simulation environment or testbed to support the development and early use in science of new technologies at all levels of the software stack: from networking to middleware to scientific applications.

• The environment will mimic TeraGrid and/or general parallel and distributed systems.

• This testbed will enable dramatic advances in science and engineering through collaborative evolution of science applications and related software.

Page 3: Future Grid All Hands Meeting Introduction

FutureGrid

[Diagram: FutureGrid hardware, annotated "Add Network Fault Generator and other systems running FutureGrid Hardware"]
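One way a network fault generator could be realized on Linux test nodes is the kernel's tc/netem facility. The sketch below is illustrative only, assuming netem as the mechanism; the interface name and fault parameters are placeholders, not FutureGrid's actual design:

    # Hypothetical sketch: emulating network faults on a Linux test node
    # with tc/netem. Requires root; IFACE is an assumed interface name.
    import subprocess

    IFACE = "eth0"

    def inject_faults(delay_ms=100, jitter_ms=10, loss_pct=1.0):
        """Add artificial latency, jitter, and packet loss on IFACE."""
        subprocess.check_call([
            "tc", "qdisc", "add", "dev", IFACE, "root", "netem",
            "delay", "%dms" % delay_ms, "%dms" % jitter_ms,
            "loss", "%.1f%%" % loss_pct,
        ])

    def clear_faults():
        """Remove the emulated faults, restoring the default qdisc."""
        subprocess.check_call(["tc", "qdisc", "del", "dev", IFACE, "root"])

    if __name__ == "__main__":
        inject_faults()
        try:
            pass  # ... run the grid/cloud experiment under degraded networking ...
        finally:
            clear_faults()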

Page 4: Future Grid All Hands Meeting Introduction

FutureGrid Hardware

System type                # CPUs   # Cores   TFLOPS   RAM (GB)   Secondary      Default local   Site
                                                                  storage (TB)   file system

Dynamically configurable systems
IBM iDataPlex                 256      1024       11       3072       335*        Lustre          IU
Dell PowerEdge                192      1152       12       1152        15         NFS             TACC
IBM iDataPlex                 168       672        7       2016       120         GPFS            UC
IBM iDataPlex                 168       672        7       2688        72         Lustre/PVFS     UCSD
Subtotal                      784      3520       37       8928       542

Systems not dynamically configurable
Cray XT5m                     168       672        6       1344       335*        Lustre          IU
Shared memory system TBD       40**     480**      4**      640**     335*        Lustre          IU
Cell BE Cluster                 4
IBM iDataPlex                  64       256        2        768         5         NFS             UF
High Throughput Cluster       192       384        4        192                                   PU
Subtotal                      552      2080       21       3328        10
Total                        1336      5600       58      10560       552
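As a rough consistency check, the TFLOPS column matches peak = cores × clock × flops/cycle for the x86 nodes. The clock rates and 4 flops/cycle below are assumptions typical of 2009-era processors, not figures from the table:

    # Back-of-the-envelope peak-performance check (clock rates assumed).
    FLOPS_PER_CYCLE = 4  # assumed: 2 SSE units x 2-wide double precision

    systems = [
        # (system, cores, assumed clock in GHz, TFLOPS listed in the table)
        ("IBM iDataPlex (IU)",    1024, 2.66, 11),
        ("Dell PowerEdge (TACC)", 1152, 2.66, 12),
        ("IBM iDataPlex (UC)",     672, 2.66,  7),
    ]

    for name, cores, ghz, listed in systems:
        peak_tf = cores * ghz * FLOPS_PER_CYCLE / 1000.0
        print("%-24s ~%.1f TF peak (table: %d TF)" % (name, peak_tf, listed))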

Page 5: Future Grid All Hands Meeting Introduction

FutureGrid Partners

• Indiana University
• Purdue University
• San Diego Supercomputer Center at University of California San Diego
• University of Chicago/Argonne National Labs
• University of Florida
• University of Southern California Information Sciences Institute
• University of Tennessee Knoxville
• University of Texas at Austin/Texas Advanced Computing Center
• University of Virginia
• Center for Information Services and GWT-TUD from Technische Universität Dresden

Page 6: Future Grid All Hands Meeting Introduction

Other Important Collaborators

• Early users, from an application and computer science perspective and from both research and education
• Grid5000/Aladin and D-Grid in Europe
• Commercial partners such as
  – Eucalyptus
  – Microsoft (Dryad + Azure) – note that Azure is external to FutureGrid, like the GPU systems
  – We should identify other partners – should we have a formal Corporate Partners program?
• TeraGrid
• Open Grid Forum
• NSF

Page 7: Future Grid All Hands Meeting Introduction

FutureGrid Architecture

[Architecture diagram]

Page 8: Future Grid All Hands Meeting Introduction

FutureGrid Usage Scenarios

• Developers of end-user applications who want to develop new applications in cloud or grid environments, including analogs of commercial cloud environments such as Amazon or Google (see the provisioning sketch after this list).
  – Is a Science Cloud for me?

• Developers of end-user applications who want to experiment with multiple hardware environments.

• Grid middleware developers who want to evaluate new versions of middleware or new systems.

• Networking researchers who want to test and compare different networking solutions in support of grid and cloud applications and middleware. (Some types of networking research will likely best be done through the GENI program.)

• Interest in performance means that bare-metal access (without virtualization) is important.
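For the first scenario, a commercial-cloud analog could be driven through an EC2-compatible API such as the one Eucalyptus exposes. Below is a minimal sketch using the boto library; the endpoint, credentials, and image id are hypothetical placeholders, not real FutureGrid values:

    # Hypothetical: launch one VM on an EC2-compatible (e.g., Eucalyptus) cloud.
    from boto.ec2.connection import EC2Connection
    from boto.ec2.regioninfo import RegionInfo

    region = RegionInfo(name="eucalyptus", endpoint="cloud.example.org")  # assumed
    conn = EC2Connection(
        aws_access_key_id="MY_ACCESS_KEY",      # placeholder credentials
        aws_secret_access_key="MY_SECRET_KEY",
        is_secure=False, port=8773,             # Eucalyptus' default port
        region=region, path="/services/Eucalyptus",
    )

    # Start one small instance from a (hypothetical) machine image.
    reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
    instance = reservation.instances[0]
    print("launched", instance.id, "state:", instance.state)

The same script would run against Amazon EC2 with a different endpoint, which is exactly the "analog of commercial clouds" property the scenario asks for.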

Page 9: Future Grid All Hands Meeting Introduction

Identified Early Users

• Probably other applications as well – for example, IU has a strong suite of biology and epidemic-spread applications on Hadoop/Dryad/Clouds (a minimal MapReduce sketch follows the table below)

• Shantenu Jha will give his perspective

• Partners might review local work for other uses/users

Researcher                           Planned use
Minaxi Gupta                         Reliability and security in peer-to-peer networks
Kelvin Droegemeier                   Testing of LEAD on clouds
Klaus Schulten                       NAMD and VMD
Randy Katz                           Development of multi-site cloud management tools
Sriram Krishnan                      NEESit (earthquake engineering)
Shantenu Jha                         SAGA (genome analysis) implementation with MapReduce
Mark Miller                          Cloud-based version of CIPRES phylogenetics portal
U. Minnesota (Ann Duin and others)   Graduate student classes in operating systems
Renato Figueiredo                    Cloud environments as educational collaborative tools: education in virtual machines, virtual networks
Bruce Berriman                       Astronomy applications such as Montage
Ben Berman and Peter Laird           Epigenomic applications
James Bower                          K–12 interactive learning in parallel and distributed computing
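For the Hadoop/MapReduce entries above, a minimal Hadoop Streaming mapper/reducer pair gives the flavor of such applications; word counting here stands in for, say, k-mer counting in a genome-analysis job. This is illustrative, not code from any listed project:

    # mr.py - run via Hadoop Streaming as:
    #   -mapper "python mr.py map" -reducer "python mr.py reduce"
    import sys

    def mapper():
        # Emit "key<TAB>1" for every token on stdin.
        for line in sys.stdin:
            for word in line.split():
                print("%s\t1" % word)

    def reducer():
        # Hadoop delivers mapper output sorted by key, so counts can be
        # accumulated in a single pass with one running key.
        current, count = None, 0
        for line in sys.stdin:
            key, value = line.rstrip("\n").split("\t", 1)
            if key != current:
                if current is not None:
                    print("%s\t%d" % (current, count))
                current, count = key, 0
            count += int(value)
        if current is not None:
            print("%s\t%d" % (current, count))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()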

Page 10: Future Grid All Hands Meeting Introduction

FutureGrid Management Structure

[Organization chart: committees organize the work; labeled elements include the Change Control Board (CCB) and TeraGrid]

Page 11: Future Grid All Hands Meeting Introduction

FutureGrid Working Groups

• Systems Administration and Network Management Committee: This committee will be responsible for all matters related to systems administration, network management, and security. David Hancock of IU will be the inaugural chair of this committee.

• Software Adaptation, Implementation, Hardening, and Maintenance Committee: This committee will be responsible for all aspects of software creation and management. It should interface with TAIS in TeraGrid. Gregor von Laszewski from IU will chair this committee.

• Performance Analysis Committee: This committee will be responsible for coordination of performance analysis activities. Shava Smallen of UCSD will be the inaugural chair of this committee.

• Training, Education, and Outreach Services Committee: This committee will coordinate Training, Education, and Outreach Service activities and will be chaired by Renato Figueiredo.

• User Support Committee: This committee will coordinate the management of online help information, telephone support, and advanced user support. Jonathan Bolte of IU will chair this committee.

• Operations and Change Management Committee (including CCB): This committee will be responsible for operational management of FutureGrid, and is the one committee that will always include at least one member from every participating institution, including those participating without funding. It is led by Craig Stewart.

Page 12: Future Grid All Hands Meeting Introduction

Change Control Board (CCB)

• CCB: This is composed of the chairs of each of the FutureGrid operational and planning committees and one key point of contact for each institution participating in FutureGrid.

• The CCB will review proposed changes to the project baselines and WBS, including changes to the design or configuration of the FutureGrid hardware and software.

• The CCB will evaluate proposed changes on the basis of their impact on scope, cost and schedule and ensure that all appropriate stakeholders are informed of the proposed change. If a proposed change is endorsed by the CCB, the CCB chair will recommend the change for final approval by the Project Director and, if necessary, by NSF.

• Recommended changes should be referred to NSF if any of the following apply: (i) the change is associated with a change in schedule of greater than 1 month, or (ii) the change is associated with a change in the allocation of funds of greater than $100,000, or (iii) the change would result in a significant change of scope.

• This committee function is defined by NSF and is a subset of the “Operations and Change Management Committee”. The chair is Craig Stewart from IU.

Page 13: Future Grid All Hands Meeting Introduction

Selected FutureGrid Timeline

• October 1 2009: Project starts
• October 2-3 2009: First All Hands Meeting
• October 2009 … : Define in detail and negotiate subcontracts; go to weekly committee telecons
• November 16-19: SC09 demo and face-to-face committee meetings
• January 2010: First Science Board meeting
• March 2010: FutureGrid network complete
• March 2010: FutureGrid Annual Meeting
• September 2010: All hardware (except Track IIC lookalike) accepted
• October 1 2011: FutureGrid allocatable via the TeraGrid process; for the first two years, by a user/science board led by Andrew Grimshaw

Page 14: Future Grid All Hands Meeting Introduction

FutureGrid Reporting and Collaboration

• Biweekly to monthly reports: Committees <-> PI, and PI <-> NSF

• NSF also requires quarterly and annual reports

• The Science Board meets in January and early summer of each year

• There is an Annual Meeting in March of each year

• The Awardee will participate in the TeraGrid Forum, the TeraGrid Quarterly Meeting, the activities of the TeraGrid Science Advisory Board, and any TeraGrid Working Groups required by TeraGrid policy. Through participation in the TeraGrid Forum, the Awardee shares responsibility with the other TeraGrid partners for developing and implementing TeraGrid policy.

• The project will start with weekly calls, moving to biweekly as we mature – could use Adobe Connect and/or video conferencing

Page 15: Future Grid All Hands Meeting Introduction

NSF and FutureGrid I

• Beginning with the seventh month after the award, the Awardee will provide monthly performance data to both the NSF Program Official and the User Advisory Committee members. These will include weekly operations data.

• During the deployment phase, anticipated to occupy most of the first project year, the Awardee will email bi-weekly status reports to the NSF Program Official and follow these up with bi-weekly telephone conference calls. At other times, the NSF Program Official and the Awardee will conduct monthly telephone conference calls to review status with additional calls scheduled as necessary.

• The Awardee will provide the NSF Program Official with a quarterly progress report that includes monthly expenditures. The quarterly progress report should also include detailed descriptions of the progress, achievements, and expenditures of the sub-awardees.

Page 16: Future Grid All Hands Meeting Introduction

NSF and FutureGrid II

• Annually, one of the quarterly progress reports will include a detailed plan for the following year, and, if necessary, an update to the Project Execution Plan.

• Annually, the Awardee will provide a Government Performance and Results Act (GPRA) Facilities Operations Report. The reporting period covers the federal fiscal year. Each report will have two reporting cycles: the estimates submitted at the beginning of the fiscal year and the actuals submitted at the end of the fiscal year.

• Beginning in the second year of the project, the Awardee, in consultation with experts in survey methodology, will develop a minimally intrusive user survey instrument and conduct semi-annual user surveys. The Awardee will provide the NSF Program Official with a copy of the results of each semi-annual user survey.

Page 17: Future Grid All Hands Meeting Introduction

Possible FutureGrid Logos

[Candidate logo images]