


A PERSPECTIVE ON COMPUTATIONAL SCIENCE IN THE 21ST CENTURY
John R. Rice

Computational science's driving force has been and will continue to be the steady and rapid growth in available raw computing power. This growth exceeds anything else witnessed in the history of technology. The challenge for the 21st century is to exploit properly this enormous potential.

Several fairly obvious directions of development can (and do) present great technical challenges, but my focus here is on the less obvious challenges. These obvious directions include

1. Dimensionality extensions. We can go from 2D steady state to 3D steady state to 3D unsteady state as computing power increases. The simulation world is full of jargon such as “thin plates” and “rotational symmetry” that come from idealizations used to reduce dimensionality. We can discard these idealizations, but this might incur large programming costs from increased complexity, especially in the geometry.

2. Finer scales. It is rather easy to use finer grids or meshes to increase simulation accuracy, at least until we have to deal with parallel computers; a small numerical sketch after this list illustrates the payoff.

3. Better mathematical models. We can make physics models more accurate and complete (for example, we can discard linearization assumptions). The simulations might need more sophisticated methods to handle situations such as nonlinearities or additional terms in differential equations.

4. Parallel computing. Future computing-power increases will come partially from parallel computing and its methodology, which is neither mature nor automated. Thus computational scientists might be facing the direct management of parallelism for some years to come.

5. Algorithms. The creation of new algorithms has traditionally been just as important in powering computational science as has the construction of faster machines. This effort should continue unabated. For example, I conjectured that the solution of partial-differential-equation systems takes no more effort than just plotting closed-form solutions, but such algorithms are still unavailable in many instances and rarely used when available [1].
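To make direction 2 concrete, here is a minimal sketch (my example, not from the essay) of a second-order finite-difference solve of a one-dimensional model problem; halving the mesh spacing cuts the maximum error by roughly a factor of four.

    # Direction 2 in miniature: finer grids buy accuracy, until problem
    # size forces the computation onto parallel machines.
    import numpy as np

    def solve_poisson(n):
        """Solve u'' = -pi^2 sin(pi x) on [0,1], u(0)=u(1)=0, with n interior points."""
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        # Tridiagonal central-difference approximation of u''
        A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
        f = -np.pi**2 * np.sin(np.pi * x)
        u = np.linalg.solve(A, f)
        return np.max(np.abs(u - np.sin(np.pi * x)))   # error vs. exact solution

    for n in (16, 32, 64, 128):
        print(f"n = {n:4d}   max error = {solve_poisson(n):.2e}")

Each doubling of n roughly quarters the error, the signature of a second-order scheme.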

Other development directions are also important but are either less obvious than those described above or rather vaguely defined; I've ordered them approximately by vagueness, from least to most.

Multiphysics phenomena
These phenomena involve two or more separate physical regimes (such as in heating a pot of water: the heat source and the fluid dynamics of the water). The treatment of the interfaces between these phenomena is challenging, for several reasons. One, obtaining data about the interfaces (for example, between the atmosphere and the oceans or outer space) might be difficult. Two, interface physics models might be uncertain or unknown. What are the mechanisms of friction? What exactly happens when a spark ignites combustion in an engine? What makes a bubble form at one point in a heated pot and not at another? What actually happens at the point where solid, liquid, and gas meet in a coffee cup? And three, interface physics has received less study than single-physics phenomena. Of course, this might indicate only hundreds of relevant studies rather than thousands.

Many people define computational science as studies involving multiple scientific disciplines, such as multiphysics phenomena. This is certainly plausible, especially when both computer science and mathematics are necessary for advances in this direction. It presents a whole new class of problems that require basically new mathematical and numerical problem-solving methods. Work on such methods has started only in the past decade, and the literature is still quite sparse.

Software
The dominant cost in most large-scale computational science applications comes from writing new code. Computational science is not unique in being held back by the high cost of writing code in Fortran, C, Java, Ada, and similar languages, but it has a better chance than many other fields of escaping this burden. The panacea is to create a problem-solving environment (PSE): a software system that provides all the computational facilities necessary to solve a target class of problems [2]. Further, a PSE uses the target class's language, implements advanced solution methods automatically, and insulates users from learning specialized information about computer hardware or software.

Many facets of science and engineering are particularly suitable for PSEs because their problem-solving methodologies are highly developed, well understood, and well structured. So, these PSEs can solve “routine” problems without programming, and other parts of the problem can be handled at a much higher level. PSE methodologies are still emerging, but their potential for success is clear. There is considerable hope that PSEs can automate much of the management of collaborative and distributed problem solving that arises in multiphysics applications.
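As a toy illustration of the PSE idea (invented here; this is not an interface to any real system), the caller below states only the problem and the accuracy wanted, while the choice of solution method stays hidden behind the interface:

    # A toy "PSE": the user poses an integration problem at the problem
    # level; quadrature rule and refinement are hidden inside.
    from scipy.integrate import quad   # the "advanced solution method"
    import numpy as np

    def pse_integrate(f, a, b, accuracy=1e-8):
        """Problem-level interface: no rule, grid, or hardware exposed."""
        value, estimated_error = quad(f, a, b, epsabs=accuracy)
        if estimated_error > accuracy:
            raise RuntimeError("requested accuracy not achieved")
        return value

    print(pse_integrate(np.sin, 0.0, np.pi))   # ~2.0, method chosen automatically

The design point is the contract: the user specifies what to solve and how accurately, never how.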

Multiscale phenomena
These phenomena involve vastly different time and space scales. For example, a jet engine is several meters long, yet its fuel-spray droplets are 5 to 10 microns. Its blades are tens of centimeters long, but cracks form in areas of tens of angstroms in size. Its turbines rotate at 50,000 RPM, but an aircraft maneuver takes many minutes. The question is, how can we simulate such phenomena accurately? It is hopeless to think of simulating an entire jet engine at the micron/microsecond scale (never mind the atomic scale) even with the computers that will be constructed two or three decades from now; the rough count below shows why. The answer must lie in increasing the algorithm power for such applications.
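A back-of-the-envelope count (my numbers, taken from the scales quoted above) makes "hopeless" concrete:

    # Cells needed to resolve a whole engine at droplet resolution.
    engine_length_m = 3.0      # "several meters long"
    grid_spacing_m = 1e-6      # ~1 micron, to resolve 5-10 micron droplets
    cells_per_axis = engine_length_m / grid_spacing_m
    total_cells = cells_per_axis ** 3
    print(f"{total_cells:.1e} grid cells")   # ~2.7e+19, before any time stepping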

The current, somewhat heuristic, approach is as follows: We have different models for different scales and use some sort of “average” from a fine-scale model as input to a coarse-scale model. This could be conceptually done several times over. Take, for example, a metal: the average results from an atomic-scale simulation are input into a micrograin-model simulation, whose average results are input into a granular model, whose average results are input into a traditional continuum model. Usually, we have feedback from coarse-scale models to finer-scale models as well. Hopefully, we can formalize this ad hoc approach into validated algorithms for such simulations. The schematic sketch below shows the shape of this coupling.
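The following schematic (structure only; the three "models" are invented numerical stand-ins, not physics) passes averages upward and feedback downward until the scales agree:

    # Shape of the heuristic multiscale coupling described above.
    def atomic_model(coarse_feedback):      # stand-in for atomic-scale simulation
        return 1.0 + 0.1 * coarse_feedback  # returns an "averaged" property

    def micrograin_model(avg_from_atomic):  # stand-in for micrograin simulation
        return 2.0 * avg_from_atomic

    def continuum_model(avg_from_grain):    # stand-in for continuum simulation
        return 0.5 * avg_from_grain         # field value fed back down the scales

    feedback = 0.0
    for sweep in range(50):                 # iterate until the scales agree
        a = atomic_model(feedback)
        g = micrograin_model(a)
        new_feedback = continuum_model(g)
        if abs(new_feedback - feedback) < 1e-12:
            break
        feedback = new_feedback
    print(f"scales reconciled after {sweep} sweeps: {new_feedback:.6f}")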

Model and software validation
There is no doubt that the validation of most computational science efforts is inadequate, poor, or worse. Only comparison with experimental measurements keeps these efforts “honest.” But, as simulation cost decreases dramatically and physical-prototype cost increases, there is more and more pressure to accept the simulation results as correct. Further, no experimental data exist for some very important simulations: the next major earthquake in Tokyo, the next great fire in Chicago, or the next great hurricane in Mexico. Even though validation has serious intrinsic difficulties, the principal danger is that there seems to be modest, even minimal, concern that computational science results might be completely wrong. Here “concern” is not measured by the level of hand-wringing, but by the level of effort expended (money spent) on validation. Incorporating validation as an intrinsic part of a PSE is one obviously helpful step, but little activity is taking place in this direction.
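One cheap, automatable check, offered here as my example rather than the essay's, is the method of manufactured solutions: choose an exact answer, manufacture the matching right-hand side, and verify that the solver's observed convergence order matches the scheme's theoretical order. A wrong order flags wrong code.

    # Manufactured-solutions check for a central-difference solver of
    # u'' = f on [0,1] with u(0)=u(1)=0; observed order should be ~2.
    import numpy as np

    def solve(n):
        h = 1.0 / (n + 1)
        x = np.linspace(h, 1 - h, n)
        A = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
             + np.diag(np.ones(n - 1), -1)) / h**2
        exact = np.sin(2 * np.pi * x)          # manufactured solution
        f = -(2 * np.pi) ** 2 * exact          # manufactured right-hand side
        return np.max(np.abs(np.linalg.solve(A, f) - exact))

    e1, e2 = solve(64), solve(128)
    order = np.log2(e1 / e2)                   # theoretical order is 2
    print(f"observed order {order:.2f}:", "PASS" if abs(order - 2) < 0.2 else "FAIL")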

Computational intelligence
The scope and complexity of computational science projects are increasing rapidly as ambitions and computational power grow. The PSE concept includes intelligence in its simple visualization:

Natural language + problem solvers + intelligence + software bus = PSE.

The key question is, how do we codify and use the accumulated wisdom of people, the mountains of experimental data, and the tomes of scientific knowledge? Indeed, how do we handle any one of these sources of information? Expert systems based on interviewing appropriate sages have had mediocre success (but they improve upon using a single expert person). The intelligence level and scope needed in a PSE are high; we need to check the problem's formulation, select good algorithms (software) from all the codes and servers available on the Web, choose appropriate computational resources, allocate and distribute the problem pieces appropriately to these resources, monitor the execution for reasonableness, and assess the results computed to solve the problem posed. None of these tasks is simple or straightforward.
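To fix ideas, here is a skeleton of that decision chain; every function is an invented stub, since each step is exactly the unsolved part:

    # Shape of the pipeline a PSE's intelligence must drive; stubs only.
    def check_formulation(p):   return True             # is the problem well posed?
    def select_algorithm(p):    return "solver-A"       # from codes/servers on the Web
    def choose_resources(alg):  return ["host1", "host2"]
    def distribute(p, hosts):   return [(p, h) for h in hosts]
    def run_monitored(piece):   return 42.0             # watch execution for reasonableness
    def assess(results, p):     return sum(results) / len(results)

    def solve_with_pse(problem):
        assert check_formulation(problem)
        algorithm = select_algorithm(problem)
        resources = choose_resources(algorithm)
        pieces = distribute(problem, resources)
        results = [run_monitored(piece) for piece in pieces]
        return assess(results, problem)

    print(solve_with_pse("my problem"))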

The most promising (but barely tested) approach is that of a recommender system, where we have some performance criteria, a large amount of relevant data, and a decision procedure to select an item whose performance on the existing data indicates its performance is likely to be high for the situation at hand [3]. Recommender systems are in vogue now to locate cheap cars or good music on the Web, but they are also capable of locating good software or computing resources (if the data exist and the performance criteria are suitably codified). Recommender systems have been “just around the corner” in artificial intelligence for decades, so long that most observers have given up waiting. However, there is now solid evidence that reliable ones can be created for computational science applications [4]. The methodology is still developing, but the promise should turn into reality within the next decade.
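A minimal sketch of that decision procedure (the database, solver names, and feature set are invented for illustration): pick the solver whose recorded performance on problems like the one at hand has been best.

    # Recommender for software selection from recorded performance data.
    performance_db = [
        # (solver, problem features, observed error)
        ("multigrid", {"symmetric": True},  1e-8),
        ("multigrid", {"symmetric": False}, 1e-2),
        ("gmres",     {"symmetric": False}, 1e-7),
        ("gmres",     {"symmetric": True},  1e-6),
    ]

    def recommend(features):
        """Decision procedure: best observed error among matching records."""
        matches = [(err, solver) for solver, feat, err in performance_db
                   if feat["symmetric"] == features["symmetric"]]
        return min(matches)[1] if matches else None

    print(recommend({"symmetric": False}))   # -> 'gmres'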

A language for computational science
Traditional high-level languages (Fortran, C, Algol, Java) are, in fact, low level for computational science. Already in the mid-1960s, people were proposing and trying to build natural scientific-programming languages for computational science [5]. These languages were to be at the level of a college sophomore in mathematics, science, and engineering. The principal capabilities required for language development are those of algebra and calculus, introductory linear algebra and numerical analysis, analytic geometry and 3D objects, and a reasonable level of intelligence. Those efforts failed because of totally inadequate computing power; the researchers never got to the harder problems involved in implementing such a language.

In the intervening decades, many of the really difficult implementation problems have been solved. We now have very capable symbolic systems and rather capable numerical-analysis systems. Graphic systems have become quite capable and versatile, and recommender systems could supply the intelligence needed. So, it appears that the ingredients exist to create a much higher-level language for computational science. It certainly will not be an easy task, and it is very uncertain who should pay for it. Perhaps some existing systems will evolve this way naturally (and, no doubt, slowly). Perhaps in a decade (or two?) we will actually rewrite legacy software with reasonable effort, where we focus on the science (methods and models), not on the intricacies of algorithmic details and computer systems.
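As evidence for the claim above about symbolic systems (using sympy, a system that postdates this essay), a sophomore-calculus problem can now be stated and solved at the level of the science rather than the algorithm:

    # State u'' + u = 0 with u(0)=1, u'(0)=0 and let the symbolic system solve it.
    import sympy as sp

    x = sp.symbols("x")
    u = sp.Function("u")
    ode = sp.Eq(u(x).diff(x, 2) + u(x), 0)
    sol = sp.dsolve(ode, u(x), ics={u(0): 1, u(x).diff(x).subs(x, 0): 0})
    print(sol)   # Eq(u(x), cos(x))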



References
1. J.R. Rice, “Future Challenges for Scientific Simulation,” Proc. 15th IMACS World Congress, Wissenschaft & Technik Verlag, Berlin, 1997, pp. 541–546.
2. R. Boisvert and J.R. Rice, “From Scientific Software Libraries to Problem-Solving Environments,” IEEE Computational Science & Engineering, Vol. 3, No. 3, Fall 1996, pp. 44–53.
3. P. Resnick and H. Varian, “Recommender Systems,” Comm. ACM, Vol. 40, No. 3, 1997, pp. 56–58.
4. N. Ramakrishnan and J.R. Rice, “Recommender Systems for Problem Solving Environments,” Proc. AAAI-98 Workshop on Recommender Systems, AAAI Report WS-98-08, AAAI Press, Menlo Park, Calif., 1998, pp. 91–95.
5. M. Klerer and J. Reinfelds, Interactive Systems for Experimental Applied Mathematics, Academic Press, New York, 1968.

John R. Rice is the W. Brooks Fortune professor of computer science at Purdue University. He authored several books on approximation theory, numerical analysis, computer science, and scientific software. He founded ACM's Transactions on Mathematical Software and was its editor in chief until 1993. Rice received his PhD from Caltech and his BS and MS from Oklahoma State University, all in math. He is a member of the National Academy of Engineering, the IEEE Computer Society, ACM, IMACS, and SIAM. Contact him at the Department of Computer Sciences, 1398 Computer Sciences Bldg., West Lafayette, IN 47907-1398; rice@cs.purdue.edu.
