
    Data Management in Dynamic Environment-driven Computational Science

    Yogesh L. Simmhan, Sangmi Lee Pallickara, Nithya N. Vijayakumar, and Beth Plale

    Computer Science Department, Indiana University, Bloomington IN 47405 USA
    {ysimmhan,leesangm,nvijayak,plale}@cs.indiana.edu

    Abstract. Advances in numerical modeling, computational hardware, and problem solving environments have driven the growth of computational science over the past decades. Science gateways, based on service oriented architectures and scientific workflows, provide yet another step in democratizing access to advanced numerical and scientific tools, computational resources and massive data storage, and fostering collaborations. Dynamic, data-driven applications, such as those found in weather forecasting, present interesting challenges to Science Gateways, which are being addressed as part of the LEAD Cyberinfrastructure project. In this article, we discuss three important data-related problems faced by such adaptive data-driven environments: managing a user's personal workspace and metadata on the Grid, tracking the provenance of scientific workflows and data products, and continuous data mining over observational weather data.

    Key words: LEAD, science gateways, cyberinfrastructure, data & metadata management, provenance, data quality, data mining, streams

    1 Introduction

    Science has evolved over the past several decades, from an empirical and theoretical approach to one that includes simulations and modeling [4]. Additionally, scientific discoveries are increasingly propelled by large, inter-disciplinary groups working across geographical boundaries [40]. For instance, projects such as the Large Hadron Collider aim to solve grand challenges in science through a collaboration of over 4000 scientists from 40 countries, with access to a central particle accelerator facility costing over US$2 billion [32].

    Several advancements in scientific applications and computer science have contributed to this evolution. Numerical techniques and algorithms have improved, allowing the real world to be modeled more accurately than ever before [16]. Weather forecasting models such as WRF, short for Weather Research and Forecasting, can now accurately predict regional mesoscale weather at resolutions of 1 km grid spacing, with an accuracy of over 80%, 5 days in advance of the weather, by integrating data streams across dozens of physical dimensions [20].

    Similar advances in computational hardware can now be leveraged transparently through Science Gateways [12] that are built on top of standards such as the Common Component Architecture [3] and the Open Grid Services Architecture (OGSA) [10]. Science Gateways, also known as Grids or Cyberinfrastructure, have democratized access to advanced numerical tools, computational cycles, and data resources that can be uniformly and conveniently accessed by the average scientist through online Portal interfaces [21].

    However, environmental sciences such as mesoscale meteorology pose special challenges to these Science Gateways since they are largely triggered by events occurring in the external environment. A common requirement exemplifies this: when a coarse-resolution regional weather forecasting simulation detects a precursor signature of a tornado in a certain region, it should spawn off another fine-resolution simulation in that specific geographical location to see if a tornado is indeed going to form. There are three key implications of such scientific applications that need to be addressed. First, scientific simulations have to be designed such that their structure is dynamic. Second, compute and data resources provisioned for the scientific experiments need to adapt to such external events. And lastly, there should be the ability to manage large volumes of data and associated metadata that are generated by various sensors and instruments deployed globally and from the experiments themselves.

    In the subsequent section, we delve deeper into the challenges posed by the adaptive and dynamic needs of environmental sciences, and use mesoscale meteorology forecasting in the context of the Linked Environments for Atmospheric Discovery (LEAD) [9] project as an example to motivate the problems. In Section 3, we discuss the LEAD Cyberinfrastructure that we are building and the various enabling technologies in it. In Sections 4, 5, and 6, we look more closely at the data management problems encountered when dealing with terascale data, and successively look at the myLEAD personal metadata catalog used to describe and manage users' data, the Karma provenance framework used to track scientific data products and the execution of experiments, and the Calder data mining tool used with streaming data. Finally, in Section 7, we summarize and present our conclusions.

    2 Motivation: Mesoscale Weather Forecasting

    Weather forecasting is a static process. Models ingest data generated from sensors like radars, mobile meso-nets, upper-air balloons, geostationary and polar orbiting satellites, commercial aircraft, and surface observations, for a certain temporal and spatial range required by the forecast model. Then, analysis and assimilation of these data sources take place by performing quality control checks, extrapolating missing data points, and creating a 3D model grid of the forecast region at the given resolution. This is followed by running the actual prediction algorithms using weather models configured by the scientists, and mining of the data to detect abnormal weather patterns. The final step generates 2D/3D images and animations of the forecast that can be disseminated to the scientists and end users. A typical regional prediction of this nature takes about 4 hours to complete, depending on the size of the region and the resolution of the forecast.
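    The fixed sequence of stages, and the fact that each run simply fires at a pre-set interval, can be pictured with a small sketch. This is purely illustrative; the stage functions and the 6-hour interval below are invented for the example and are not actual LEAD or forecast-model components.

        import time

        def ingest_observations():
            # Gather data from radars, meso-nets, balloons, satellites, aircraft, etc.
            return ["radar.obs", "satellite.obs", "surface.obs"]

        def assimilate(observations):
            # Quality control, extrapolate missing points, build the 3D model grid.
            return {"grid": "3d-model-grid", "sources": observations}

        def run_prediction(model_grid):
            # Run the configured weather model over the assimilated grid.
            return {"forecast": "regional-forecast", "grid": model_grid}

        def render_products(forecast):
            # Generate 2D/3D images and animations for dissemination.
            return ["forecast.png", "forecast.gif"]

        def scheduled_forecast_cycle(interval_hours, cycles):
            # The cycle is pre-scheduled: nothing in the output of one run can
            # trigger or reconfigure the next run.
            for i in range(cycles):
                forecast = run_prediction(assimilate(ingest_observations()))
                render_products(forecast)
                if i < cycles - 1:
                    time.sleep(interval_hours * 3600)

        if __name__ == "__main__":
            scheduled_forecast_cycle(interval_hours=6, cycles=1)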

    The key problem with such a model is that it is completely static and the forecast is pre-scheduled to run at certain intervals. Even if a hurricane signature is detected during the data mining part of the experiment, no action can be taken until the experiment completes, the weather researcher reviews the results, and manually configures and starts another forecast for that particular region. The LEAD project aims to take this well-oiled, static computational science process of mesoscale meteorology forecasting and tear it apart to be dynamic in response to the environment. There are several benefits to doing this, and they are supported by recent advancements in weather research.

    Firstly, regional observational models have better forecast accuracy for a region than do continental models, because the resolution of the latter has to be coarser in order to even run on today's computer systems. The solution is to selectively nest regional forecasts within a larger continental model. Secondly, steerable radars, notably the CASA Radars, are now being deployed. These allow focused collection of high-resolution data over narrow regions, instead of performing 360-degree swathes all the time. These dynamically steered instruments can be leveraged to increase the forecasting accuracy. And lastly, democratization of scientific resources is now possible through community resources such as TeraGrid [6] and the availability of well established standards for accessing them. High-school students can now get access to and learn about the same tools and resources used by weather researchers.

    These advances require concomitant advances in the fundamental ways in which computational science is done before they can be leveraged to the fullest extent. These advancements include:

    1. Adaptivity in computational models, allowing them to react to external events,

    2. Adaptive detection and response to weather, through continuous data mining and instrument steering,

    3. Adaptive use of available resources to respond to current computational and data load, and priorities of tasks, and

    4. Ability for the underlying data subsystem to mine, record, track, and annotate data products in real time.

    In the next section, we give an overview of the overall LEAD architecture. An appreciation of the portal interface to the system and the experiment execution tool is useful for understanding the remainder of the paper, so we provide that as well.


    [Figure 1: architecture diagram, omitted. It shows the user's web-browser portal and Grid desktop on top of gateway services (proxy certificate server/vault, community and user metadata catalog, events and messaging, resource broker, workflow engine, resource registry, application deployment, application services), which sit on core Grid services (security, data, resource management, execution management, information, and self-management services), with OGSA resource virtualization over compute resources, data resources, and instruments and sensors.]

    Fig. 1. The LEAD infrastructure is assembled as a set of services that interact with compute, data, and sensor resources, and accessed by a user through a portal.

    3 The LEAD Cyberinfrastructure

    The LEAD vision is to effect a paradigm shift in the way computational science in mesoscale meteorology is performed, brought about by a service framework for data search and model execution for weather researchers, and for students and teachers at the K-12 level and beyond. The LEAD Cyberinfrastructure builds upon a Service Oriented Architecture (SOA) to provide a uniform and secure interface to access resources of common interest to a distributed community of users [12]. Figure 1 illustrates this architecture. At the bottom are physical resources, such as computational clusters, mass storage, instruments, and sensors. The service architecture virtualizes these resources so that they can be accessed using standard protocols and interfaces, without worrying about the underlying architecture or implementation. The OGSA standard [10] is commonly used as the foundation for resource virtualization in many Grid systems. These resources can be grouped as a set of core services that include security services for authentication and identity mapping, data services for moving, storing, replicating, searching, and accessing data, resource management services to schedule and monitor resources, and execution management services to plan, schedule, and manage the lifecycle of jobs run on the Grid.

    On top of these core services are gateway services that provide value-added functionality and are directly exposed to the user community. These include certificate services for identity management and single sign-on capability, metadata catalogs, resource registries, notification services, workflow engines, and application services. The LEAD Portal acts as an online desktop for the users of the gateway, and provides visual interfaces to interact with the various gateway services.


    [Figure 2: workflow diagram, omitted. Boxes for the Terrain Pre-Processor, Surface Pre-Processor, 3D Model Data Interpolator, ARPS to WRF Data Interpolator, WRF Forecast Model, WRF to ARPS Data Interpolator, ARPS Plotting Program, and Postscript to Image Converter surround a central workflow engine (e.g. BPEL, Jython) that orchestrates invocation and message passing between the services; the scientific applications are wrapped as web services, and dataflow is simulated through the service invocation sequence.]

    Fig. 2. Typical weather forecasting workflow in LEAD. It goes through stages of ingesting and preprocessing observational data, assimilating it into a 3D grid, running the prediction model on it, and disseminating the forecast as graphics and animations. Each box is a service and their execution is orchestrated by the workflow engine in the center.

    Scientists compose experiments, such as complex data searches or model runs, as workflows, which consist of domain and middleware services connected as a graph. Domain applications are wrapped as web services using the Application Factory Toolkit [19]. These application services can be graphically connected together to represent the dataflow between them using the XBaya workflow composer GUI [33], which then compiles the workflow into a Jython or BPEL [1] script that can be executed. A typical weather forecasting workflow is shown in Figure 2. The rectangular boxes represent scientific applications exposed as web services using the Application Factory. The dotted lines signify dataflow, in the form of files consumed, transformed, and produced by the applications. A workflow engine [39] acts as a central service that orchestrates the invocation of each service in the workflow according to the execution logic.
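    To make the orchestration concrete, the sketch below mimics, in plain Python, the kind of workflow script that might be generated for the workflow in Figure 2. The invoke() helper and the way services are invoked are invented for illustration; the real Jython/BPEL scripts produced by XBaya and executed against Application Factory services look different.

        def invoke(service_name, input_files):
            # Stand-in for a web-service invocation; returns the produced file name.
            print("invoking %s with %s" % (service_name, input_files))
            return "%s.out" % service_name

        def forecast_workflow(observation_files):
            # Pre-process observational and terrain data.
            surface = invoke("SurfacePreProcessor", observation_files)
            terrain = invoke("TerrainPreProcessor", observation_files)

            # Assimilate into a 3D model grid and convert to the WRF input format.
            grid = invoke("3DModelDataInterpolator", [surface, terrain])
            wrf_input = invoke("ARPSToWRFDataInterpolator", [grid])

            # Run the forecast model, then convert and plot the results.
            forecast = invoke("WRFForecastModel", [wrf_input])
            arps_output = invoke("WRFToARPSDataInterpolator", [forecast])
            plot = invoke("ARPSPlottingProgram", [arps_output])
            return invoke("PostscriptToImageConverter", [plot])

        if __name__ == "__main__":
            forecast_workflow(["radar.obs", "satellite.obs"])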

    The adaptivity requirements posited in the previous section are addressed by the LEAD Cyberinfrastructure. The workflow engine is capable of receiving notifications about external weather events that take place, and dynamically altering the execution logic for experiments. This allows the computational model to adapt at runtime. Data mining applications constantly monitor data streams from various sensors looking for abnormal weather signatures. Based on the type of weather activity, these applications can configure and launch an appropriate workflow for the specific geographical region. Resource brokers, self-management services, and monitoring services detect and adapt to failures in the hardware and service substrate, and also provision available services for the required tasks at hand. Provenance services that record workflow execution help workflows resume from points of failure. Personal catalogs that track a user's experiments assist in reconfiguring and restarting workflows, as well as in providing the current status of workflows to the user. These data management tools that enable adaptivity are described in the sections below.
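    As a rough illustration of this event-driven behavior, the sketch below shows a mining callback that launches a fine-resolution workflow when an abnormal signature is seen in a stream. The detection rule, thresholds, and function names are invented placeholders; they are not the Calder or workflow-engine interfaces.

        def detect_signature(observation):
            # Invented detection rule: flag unusually high vorticity as a precursor.
            if observation.get("vorticity", 0.0) > 0.01:
                return {"type": "tornado-precursor", "region": observation["region"]}
            return None

        def launch_forecast_workflow(region, resolution_km):
            # Placeholder for configuring and starting a nested, fine-resolution
            # forecast workflow for the affected region via the workflow engine.
            print("launching %.1f km forecast for %s" % (resolution_km, region))

        def on_stream_event(observation):
            signature = detect_signature(observation)
            if signature is not None:
                launch_forecast_workflow(signature["region"], resolution_km=1.0)

        if __name__ == "__main__":
            on_stream_event({"region": "central Oklahoma", "vorticity": 0.02})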


    4 myLEAD: Personal Metadata Catalog

    The myLEAD personal workspace comprises a metadata catalog service and a separate back-end storage manager. The notion of a separate DBMS-hosted catalog for metadata is gaining a foothold in computational science through tools such as myGrid [14], MCS [38], and SRB [31], in distributed computing through Lustre [15], and even in enterprise networks through the Acopia ARX [26].

    The myLEAD metadata catalog accepts descriptions of a wide range of data products, including text, binary, images, workflow scripts, and input parameters. Data product descriptions arrive at the catalog as XML documents coded according to the LEAD Metadata Schema (LMS) [30]. A thin service layer provides atomic inserts into the catalog and back end, and performs other duties in cooperation with the workflow system [13]. An early inspiration for the metadata catalog is the Globus MCS metadata catalog [38], and it utilizes the service interfaces provided by the UK e-Science OGSA-DAI tool [2]. It is a distributed service with an instance located at each site in the L...
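    To give a feel for what such a catalog insert involves, the sketch below builds a toy XML description and hands it to a placeholder registration call. The element names and the register_product() function are invented for illustration and do not follow the actual LMS schema or the myLEAD service interface.

        import xml.etree.ElementTree as ET

        def build_metadata_record(product_name, owner, product_type):
            # Build a minimal XML description of a data product (toy structure,
            # not the LEAD Metadata Schema).
            record = ET.Element("dataProduct")
            ET.SubElement(record, "name").text = product_name
            ET.SubElement(record, "owner").text = owner
            ET.SubElement(record, "type").text = product_type
            return ET.tostring(record, encoding="unicode")

        def register_product(catalog_url, xml_record):
            # Placeholder for the thin service layer's atomic insert into the
            # catalog and back-end storage.
            print("registering at %s:\n%s" % (catalog_url, xml_record))

        if __name__ == "__main__":
            doc = build_metadata_record("wrf-forecast-output", "ysimmhan", "binary")
            register_product("https://example.org/mylead/catalog", doc)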
