1 ©Bull, 2010 Bull Extreme Computing



Table of Contents

BULL – Company introduction

bullx – European HPC offer

BULL/Academic community co-operation

+ references

ISC’10 – News from Top European HPC Event

Discussion / Forum


Bull Strategy Fundamentals

Jaroslav Vojtěch, BULL HPC sales representative


Bull Group

A growing and profitable company

A solid customer base

- Public sector, Europe

Bull, Architect of an Open World™

- Our motto, our heritage, our culture

Group commitment to become a leading player in Extreme Computing in Europe

- The largest HPC R&D effort in Europe

- 500 Extreme Computing experts = the largest pool in Europe

REVENUE BREAKDOWN


Business segments and key offerings

- Services & Solutions (€483m): systems integration, Open Source, Business Intelligence, security solutions, infrastructure integration, outsourcing, operations

- Hardware & System Solutions (€358m): mainframes, Unix AIX systems, x86 systems (Windows/Linux), supercomputers, solution integration, consultancy and optimization

- Maintenance & Product-Related Services (€192m): Green IT, Data Center relocation, disaster recovery, third-party product maintenance

- Global offers and third-party products (€77m)

Markets served: outsourcing, e-government, telco, enterprise, Extreme Computing & storage, others.


Extreme Computing offer

Boris Mittelmann, HPC Consultant


Bull positioning in Extreme Computing

Presence in education, government and industrial markets

From mid-size solutions to high end

On the front line for innovation: large hybrid system for GENCI

Prepared to deliver petaflops-scale systems starting in 2010

The European Extreme Computing provider


Addressing the HPC market: HPC market evolution (EMEA)

HPC Grand Challenges:
- PetaFlops-class HPC
- Tera-100 CEA project
- Hybrid architectures

Strategy:
- Expand into manufacturing, oil and gas
- Open framework: OS, NFS, DB and services
- Leverage the Intel Xeon roadmap for time to market
- Manage and deliver complex projects

Targeted HPC market (chart): from $3B in 2007 (divisional, departmental and workstation segments) to $5B in 2012 (adding supercomputers).

Our ambition: be the European leader in Extreme Computing


Target markets for Bull in Extreme Computing

Government: Defense, Economic Intelligence, National Research Centers, Weather prediction, Climate research, modeling and change, Ocean circulation

Oil & Gas: Seismic (imaging, 3D interpretation, pre-stack data analysis), Reservoir modeling & simulation, Geophysics site Data Centers, Refinery Data Centers

Automotive & Aerospace: CAE (fluid dynamics, crash simulation), EDA (mechatronics, simulation & verification)

Finance: Derivatives pricing, Risk analysis, Portfolio optimization


The most complete HPC value chain in Europe

Offer and services: system design, design and architecture deployment, security (encryption, access control), operations and management (SLA), hosting/outsourcing.

Bull organizations behind the offer: Services, R&D and Bull Systems, with more than 500 specialists across Europe.


Innovation through partnerships

Bull’s experts are preparing tomorrow’s intensive computing technologies by playing an active role in many European cooperative research projects.

A strong commitment to many projects, such as:
- Infrastructure projects: FAME2, CARRIOCAS, POPS, SCOS
- Application projects: NUMASIS (seismic), TELEDOS (health), ParMA (manufacturing, embedded), POPS (pharmaceutical, automotive, financial applications, multi-physical simulations…), HiPIP (image processing), OPSIM (numerical optimization and robust design techniques), CSDL (complex systems design), EXPAMTION (CAD mechatronics), CILOE (CAD electronics), APAS-IPK (life sciences)
- Tools: application parallelizing, debugging and optimizing (PARA, POPS, ParMA)
- Hybrid systems: OpenGPU

Hundreds of Bull experts are dedicated to cooperative projects related to HPC innovation


Major Extreme Computing trends and issues

Multi-core
- Multi-core is here to stay, and multiply
- Programming for multi-core is THE HPC challenge

Storage
- Vital component
- Integrated through a parallel file system

Accelerators
- Incredible performance per watt
- Turbo-charge performance… by a factor of 1 to 100…

Networking
- Prevalence of open architectures (Ethernet, InfiniBand)
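The multi-core bullet above is the one that changes how codes are written: performance now comes from spreading work across cores. A minimal sketch of the idea follows; Python's `multiprocessing` stands in here for the MPI/OpenMP programming models actually used on HPC clusters, and the function names are purely illustrative.

```python
# Minimal multi-core sketch: the same per-cell kernel is run serially,
# then mapped over a pool of worker processes. Real HPC codes would use
# MPI or OpenMP; this only illustrates "spread the work across cores".
from multiprocessing import Pool

def simulate_cell(i):
    # stand-in for an expensive per-cell computation
    return i * i

def run_serial(n):
    return [simulate_cell(i) for i in range(n)]

def run_parallel(n, workers=4):
    with Pool(workers) as pool:
        return pool.map(simulate_cell, range(n))

if __name__ == "__main__":
    print(run_parallel(8) == run_serial(8))
```

The hard part the slide alludes to is not this embarrassingly parallel mapping but the cases where cells interact, where communication and synchronization between cores dominate.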


Bull’s vision

Extreme Computing:

Integration
- Off-the-shelf components
- bullx blade system
- Mid-size systems
- Cost efficiency
- Green data center design

Innovation
- bullx blade system
- bullx accelerator blade
- bullx supernode SMP servers
- bullx cluster suite peta-scalability
- mobull containers
- Research with European institutes

Performance/Watt
- Accelerators
- Mid-size to high-end
- Cost efficiency
- Optimization an issue
- Green data center design


The bullx range: designed with Extreme Computing in mind


bullx hardware for peta-scalability:
- bullx blade system, bullx supernodes and bullx rack-mounted servers
- Storage, accelerators and water cooling

bullx cluster suite software architecture (layered, bottom up):
- Hardware: XPF platforms, InfiniBand/GigE interconnects, GigE network switches, disk arrays; administration network and HPC interconnect
- Linux kernel and interconnect access layer (OFED, …)
- System environment: installation/configuration (Ksis, Lustre config), monitoring/control/diagnostics (Nagios, Ganglia, Nscontrol, parallel commands), cluster database
- Application environment: execution environment (MPIBull2, libraries & tools), file systems (Lustre, NFSv3/NFSv4), development tools, job scheduling and resource management
- Packaged as a Linux OS + cluster suite


Dense and open

The bullx blade system

Best HPC server product or technology

Top 5 new products or technologies to watch


No compromise on:
- Performance: latest Intel Xeon processors (Westmere EP), memory-dense, fast I/Os, fast interconnect (full non-blocking InfiniBand QDR), accelerated blades
- Density: 12% more compute power per rack than the densest equivalent competitor solution; up to 1,296 cores in a standard 42U rack; up to 15.2 Tflops of compute power per rack (with CPUs)
- Efficiency: all the energy efficiency of Westmere EP plus an exclusive ultra-capacitor; advanced reliability (redundant power and fans, diskless option); water-cooled cabinet available
- Openness: based on industry standards and open-source technologies; runs all standard software

Dense and open: the bullx blade system.
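The density figures above are internally consistent: assuming roughly 2.93 GHz Westmere EP cores doing 4 double-precision flops per cycle (my assumptions; the slide does not state clock rates), 1,296 cores land almost exactly on the quoted 15.2 Tflops.

```python
# Back-of-envelope peak for a fully populated 42U bullx rack.
# The clock (2.93 GHz) and 4 flops/cycle are assumptions, not slide data.
cores_per_rack = 1296
clock_ghz = 2.93
flops_per_cycle = 4            # SSE: 2 adds + 2 multiplies per cycle

peak_tflops = cores_per_rack * clock_ghz * flops_per_cycle / 1000.0
print(f"{peak_tflops:.1f} Tflops per rack")   # prints "15.2 Tflops per rack"
```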


bullx supernode: an expandable SMP node for memory-hungry applications

SMP of up to 16 sockets, based on the Bull-designed BCS:
- Intel Xeon Nehalem EX processors
- Shared memory of up to 1 TB (2 TB with 16 GB DIMMs)

Available in 2 formats:
- High-density 1.5U compute node
- High I/O connectivity node

RAS features:
- Self-healing of the QPI and XQPI
- Hot-swap disks, fans and power supplies

Green features:
- Ultra-capacitor
- Processor power management features


bullx rack-mounted systems: a large choice of options

R422 E2 (compute node)
- 2 nodes in 1U for unprecedented density
- NEW: more memory
- 2x 2-socket Xeon 5600, 2x 12 DIMMs
- QPI up to 6.4 GT/s
- 2x 1 PCI-Express x16 Gen2, InfiniBand DDR/QDR embedded (optional)
- 2x 2 SATA2 hot-swap HDDs
- 80 PLUS Gold PSU

R423 E2 (service node)
- Enhanced connectivity and storage
- 2U, 2-socket Xeon 5600, 18 DIMMs
- 2 PCI-Express x16 Gen2
- Up to 8 SATA2 or SAS HDDs
- Redundant 80 PLUS Gold power supply
- Hot-swap fans

R425 E2 (visualization node)
- Supports the latest graphics & accelerator cards
- 4U or tower, 2-socket Xeon 5600, 18 DIMMs
- 2 PCI-Express x16 Gen2
- Up to 8 SATA2 or SAS HDDs
- Powerful power supply
- Hot-swap fans


GPU accelerators for bullx

NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding, energy-efficient parallel computing power.

NVIDIA Tesla C1060: turns an R425 E2 server into a supercomputer
- Dual-slot wide card, Tesla T10P chip, 240 cores
- Performance: close to 1 Tflops (32-bit FP)
- Connects to PCIe x16 Gen2

NVIDIA Tesla S1070: the ideal booster for R422 E2 or S6030-based clusters
- 1U drawer, 4x Tesla T10P chips, 960 cores
- Performance: 4 Tflops (32-bit FP)
- Connects to 2x PCIe x16 Gen2


Bull Storage Systems for HPC

StoreWay Optima 1500
- SAS/SATA, 3 to 144 HDDs
- Up to 12 host ports
- 2U drawers

DataDirect Networks S2A9900 (consult us)
- SAS/SATA, up to 1,200 HDDs
- 8 host ports
- 4+ 2/3/4U drawers

StoreWay EMC CX4
- FC/SATA, up to 480 HDDs
- Up to 16 host ports
- 3U drawers


Bull Cool Cabinet Door

Enables the world’s densest Extreme Computing solution!
- 28 kW/m² (40 kW on 1.44 m²)
- 29 ‘U’/m² (42U + 6 PDUs on 1.44 m²)

77% energy saving compared to air conditioning!
- Water’s thermal density is much more efficient than air
- 600 W instead of 2.6 kW to extract 40 kW
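The 77% figure follows directly from the two power numbers quoted above: 600 W drawn by the water-cooled door versus 2.6 kW for air conditioning, both removing the same 40 kW of heat.

```python
# Energy saving of the Cool Cabinet Door vs air conditioning,
# using only the numbers quoted on the slide.
air_cooling_w = 2600.0   # power drawn by air conditioning to extract 40 kW
door_w = 600.0           # power drawn by the water-cooled door, same load

saving = 1.0 - door_w / air_cooling_w
print(f"{saving:.0%} energy saving")   # prints "77% energy saving"
```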

Priced well below all competitors:
- €12K for a fully redundant rack
- Same price as Schroff, for twice the performance (20 kW)
- Half the price of HP, for better performance (35 kW) and better density

Bull’s contribution to reducing energy consumption


Extreme Computing solutions

Services: design, architecture, project management, optimisation.

Built from standard components, optimized by Bull’s innovation:
- Hardware platforms
- Software environments (bullx cluster suite)
- Interconnect
- Storage systems (StoreWay)


Choice of Operating Systems

Standard Linux distribution + bullx cluster suite

A fully integrated and optimized HPC cluster software environment

A robust and efficient solution delivering global cluster management…

… and a comprehensive development environment

Microsoft’s Windows HPC Server 2008

An easy to deploy, cost-effective solution with enhanced productivity and scalable performance



Mainly based on Open Source components, engineered by Bull to deliver RAS features: Reliability, Availability, Serviceability.

cluster suite

Cluster DB benefits

Master complexity at 100,000+ nodes

Make management easier, monitoring accurate and maintenance quicker

Improve overall utilization rate and application performance

bullx cluster suite advantages:
- Unlimited scalability
- Boot management and boot time
- Automated configuration
- Hybrid systems management
- Fast installation and updates
- Health monitoring & preventive maintenance


Software Partners

Batch management

- Platform Computing LSF

- Altair PBS Pro

Development, debugging, optimisation

- TotalView

- Allinea DDT

- Intel Software

Parallel File System

- Lustre


Industrialized project management: expertise & project management

1. Solution design: servers, interconnect, storage, software
2. Computer room design: racks, cooling, hot spots, air flow (CFD)
3. Factory integration & staging: rack integration, cabling, solution staging
4. On-site installation & acceptance: installation, connection, acceptance
5. Trainings & workshops: administrators, users
6. Start of production: on-site engineers, support from the Bull HPC expertise centre
7. Support and assistance: call centre 7 days/week, on-site intervention, software support, HPC expert support
8. Partnership: joint research projects


Factory integration and staging

Two integration and staging sites in Europe, to guarantee delivery of “turn-key” systems:
- 5,300 m² of technical platforms available for assembly, integration, tests and staging
- 12,000 m² of logistics platforms
- 60 test stations
- 150 technicians and 150 logistics employees
- Certified ISO 9001, ISO 14001 and OHSAS 18000


mobull, the container solution from Bull

The plug & boot data center

Up to 227.8 Teraflops per container

Up and running within 8 weeks

Innovative cooling system

Can be installed indoors or outdoors

Available in Regular or Large sizes


- Hosts any 19’’ equipment: servers, storage, network
- Modular: buy or lease, fast deployment
- Transportable container
- 550 kW, 227.8 Tflops, 18 PB
- Complete turnkey solution
- Can host bullx servers


A worldwide presence


Worldwide references in a variety of sectors

Educ/Research Industry Aerospace/Defence Other

… and many others


bullx design honoured by OEM of CRAY


Perspectives


HPC Roadmap for 2009-2011 (roadmap chart; tentative dates, subject to change without notice)

- Processors: Nehalem EP → Westmere EP → Sandy Bridge EP; Nehalem EX → Westmere EX
- Servers:
  - Twin: R422-E2 (2x 2 NH-EP, Std/DDR/QDR) → R422-E2 w/Westmere-EP → R422-E3
  - Standard/graphics: R423-E2 and R425-E2 (w/2x NH-EP, then Westmere-EP) → R423-E3, R425-E3
  - Blade: INCA, 18x blades w/2x NH-EP (2x Ethernet switch, IB/QDR interconnect, GPU blade, ultra-capacitor, embedded 10Gb) → INCA w/Westmere-EP → INCA w/Sandy Bridge EP
  - SMP: R480-E1 (Dunnington, 24x cores) → MESCA w/Nehalem EX → MESCA w/Westmere EX
- GPUs: nVIDIA C1060/S1070 (on R422-E1/E2 w/Tesla S1070) → nVidia T20 / new generation of accelerators & GPUs
- IB interconnect: QDR (36-port and 324-port switches) → EDR
- Cluster suite: XR 5v3.1U1 ADDON2 → XR 5v3.1U1 ADDON3 → new-generation cluster manager (Extreme/Enterprise)
- Racks: air-cooled (20 kW) → air (20 kW) or water-cooled (40 kW) → direct liquid cooling
- Storage: Optima/EMC CX4, DDN 9900/66xx, LSI-Panasas (TBC) → future storage offer; 4S3U/4SNL and 8SNL/16S3U configurations, on blade/SMP systems


A constantly innovating Extreme Computing offer

- 2008: integrated clusters
- 2009: hybrid clusters (Xeon + GPUs)
- 2010: high-end servers with petaflops performance


Our R&D initiatives address key HPC design points

The HPC market makes extensive use of two architecture models:

Scale-out: massive deployment of “thin nodes”, with low electrical consumption and limited heat dissipation
- Bull R&D investment: the bullx blade system
- Key issues: network topology, system management, application software

Scale-up: “fat nodes” with many processors and large memories, to reduce the complexity of very large centers
- Bull R&D investment: bullx supernodes
- Key issues: internal architecture, memory coherency

Ambitious long-term developments in cooperation with CEA, the System@tic competitiveness cluster, the Ter@tec competence centre, and many organizations from the worlds of industry and education.


Tera 100: designing the 1st European petascale system

Collaboration contract between Bull and CEA, the French Atomic Authority

Joint R&D:
- High-performance servers
- Cluster software for large-scale systems
- System architecture
- Application development
- Infrastructures for very large Data Centers

Operational in 2010

Tera 100 in 5 figures

100,000 cores (X86 processors)

300 TB memory

500 GB/s data throughput

580 m² footprint

5 MW estimated power consumption
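The five figures above also pin down some useful per-core ratios; the derivation below is a back-of-envelope check, not additional data from the slide.

```python
# Per-core ratios implied by the "Tera 100 in 5 figures" list above.
cores = 100_000
memory_tb = 300
power_mw = 5

gb_per_core = memory_tb * 1000 / cores          # 3.0 GB of memory per core
watts_per_core = power_mw * 1_000_000 / cores   # 50 W per core, whole system
print(gb_per_core, watts_per_core)
```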


Product Descriptions – ON REQUEST


BULL IN EDUCATION/RESEARCH - References


University of Münster

Need: more computing power and a high degree of flexibility, to meet the varied requirements of the different codes to be run.

Solution: a new-generation bullx system, installed in 2 phases.

Phase 1:
- 2 bullx blade chassis containing 36 bullx B500 compute blades
- 8 bullx R423 E2 service nodes
- DataDirect Networks S2A9900 storage system
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- hpc.manage cluster suite (from Bull and s+c) and CentOS Linux

Phase 2 (to be installed in 2010):
- 10 additional bullx blade chassis containing 180 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’ processors
- 4 future SMP bullx servers, with 32 cores each

Germany’s 3rd largest university and one of the foremost centers of German intellectual life.

27 Tflops peak performance at the end of phase 2.


University of Cologne

Need: more computing power to run new simulations and refine existing ones, in areas as diverse as genetics, high-tech materials, meteorology, astrophysics and economics.

Solution: a new-generation bullx system, installed in 2 phases.

Phase 1 (2009):
- 12 bullx blade chassis containing 216 bullx B500 compute blades
- 12 bullx R423 E2 service nodes
- 2 DataDirect Networks S2A9900 storage systems
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- bullx cluster suite and Red Hat Enterprise Linux
- Bull water-cooled racks for compute racks

Phase 2 (2010):
- 34 additional bullx blade chassis containing 612 bullx B500 compute blades equipped with Intel® Xeon® ‘Westmere’ processors
- 4 future SMP bullx servers, with 128 cores each

One of Germany’s largest universities, involved in HPC for over 50 years.

Performance at the end of phase 2: 100 Tflops peak, 26 TB RAM and 500 TB disk storage.


Jülich Research Center

JuRoPA supercomputer
- “Jülich Research on Petaflops Architectures”: accelerating the development of high-performance cluster computing in Europe
- 200-teraflops general-purpose supercomputer
- Bull is prime contractor in this project, which also includes Intel, ParTec and Sun

HPC-FF supercomputer
- 100 teraflops to host applications for the European Union fusion community
- Bull cluster: 1,080 Bull R422 E2 compute nodes with new-generation Intel® Xeon® 5500 processors, interconnected via an InfiniBand® QDR network, in water-cooled cabinets for maximum density and optimal energy efficiency

Together, the 2 supercomputers rank #10 in the TOP500, with 274.8 Tflops (Linpack)

Efficiency: 91.6 %
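The two TOP500 numbers above are consistent with the per-system figures earlier on this slide: dividing the Linpack result by the efficiency recovers a combined peak of about 300 Tflops, i.e. the 200-Tflops JuRoPA plus the 100-Tflops HPC-FF.

```python
# Implied combined peak (Rpeak) from the quoted Linpack result (Rmax)
# and efficiency, as reported for the Juelich systems.
rmax_tflops = 274.8
efficiency = 0.916

rpeak_tflops = rmax_tflops / efficiency
print(f"{rpeak_tflops:.0f} Tflops combined peak")   # prints "300 Tflops combined peak"
```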

The leading and largest HPC centre in Germany, and a major contributor to European-wide HPC projects.


GENCI - CEA

A hybrid architecture designed to meet production and research needs, with a large cluster combining general-purpose and specialized servers:
- 1,068 Bull nodes, i.e. 8,544 Intel® Xeon® 5500 cores, providing a peak performance of 103 Tflops
- 48 NVIDIA® GPU nodes, i.e. 46,000 cores, providing an additional theoretical performance of 192 Tflops
- 25 TB memory
- InfiniBand interconnect network
- Integrated Bull software environment based on Open Source components
- Common Lustre® file system
- Outstanding density with water cooling

295 Tflops peak: first large European hybrid system
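The 295 Tflops headline is simply the sum of the two partitions described above.

```python
# GENCI hybrid system: peak is the CPU partition plus the GPU partition,
# using the two figures quoted on the slide.
xeon_partition_tflops = 103   # 8,544 Intel Xeon 5500 cores
gpu_partition_tflops = 192    # 48 NVIDIA GPU nodes (theoretical)

total = xeon_partition_tflops + gpu_partition_tflops
print(total)   # prints 295
```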

“In just two weeks, a joint team from Bull and CEA/DAM successfully installed GENCI’s new supercomputer for the CCRT. Three days after the installation, we are already witnessing the exceptional effectiveness of the new 8,000-core X5570 cluster, which has achieved an 88% efficiency on the Linpack benchmark, demonstrating the sheer scalability of the Bull architecture and the remarkable performance of Intel’s Xeon 5500 processor.” (Jean Gonnord, Program Director for Numerical Simulation at CEA/DAM, on the occasion of the launch of the Intel Xeon 5500 processor in Paris)


Cardiff University

Need:
- Provide a central HPC service to users in the various academic schools, who previously had to use small departmental facilities
- Foster the adoption of advanced research computing across a broad range of disciplines
- Find a supplier that will take a partnership approach, including knowledge transfer

Solution:
- 25 Teraflops peak performance
- Over 2,000 Intel® Xeon® Harpertown cores with InfiniBand interconnect
- Over 100 TB of storage

The partnership between Cardiff and Bull involves the development of a centre of excellence for high end computing in the UK, with Cardiff particularly impressed by Bull’s collaborative spirit.

One of Britain’s leading teaching and research universities

“The University is delighted to be working in partnership with Bull on this project that will open up a range of new research frontiers,” said Prof. Martyn Guest, Director of Advanced Research Computing.


Commissariat à l’Energie Atomique

Need: a world-class supercomputer to run the CEA/DAM’s nuclear simulation applications.

Solution:
- A cluster of 625 Bull NovaScale servers: 567 compute servers, 56 dedicated I/O servers and 2 administration servers
- 10,000 Intel® Itanium® 2 cores
- 30 terabytes of core memory
- Quadrics interconnect network
- Bull integrated software environment based on Open Source components
- A processing capacity in excess of 52 teraflops
- #1 European supercomputer (#5 in the world) in the June 2006 TOP500 Supercomputer ranking

France's Atomic Energy Authority (CEA) is a key player in European research. It operates in three main areas: energy; information technology and healthcare; defence and security.

“Bull offered the best solution both in terms of global performance and cost of ownership, in other words, acquisition and operation over a five-year period.” Daniel Verwaerde, CEA, Director of nuclear armaments

“It is essential to understand that what we are asking for is extremely complex. It is not simply a question of processing, networking or software. It involves ensuring that thousands of elements work effectively together and integrating them to create a system that faultlessly supports the different tasks it is asked to perform, whilst also being confident that we are supported by a team of experts.” Jean Gonnord, Program Director for Numerical Simulation & Computer Sciences at CEA/DAM


Atomic Weapons Establishment

Need:
- A substantial increase in production computing resources for scientific and engineering numerical modeling
- The solution must fit within strict environmental constraints on footprint, power consumption and cooling

Solution: two identical bullx clusters plus a test cluster, with a total of:
- 53 bullx blade chassis containing 944 bullx B500 compute blades, i.e. 7,552 cores
- 6 bullx R423 E2 management nodes and 8 login nodes
- 16 bullx R423 E2 I/O and storage nodes
- DataDirect Networks S2A9900 storage system
- Ultra-fast InfiniBand QDR interconnect
- Lustre shared parallel file system
- bullx cluster suite to ensure total cluster management

AWE provides the warheads for the United Kingdom’s nuclear deterrent.

It is a centre of scientific and technological excellence.

Combined peak performance in excess of 75 Tflops
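The headline core count is internally consistent; a quick sketch (the per-blade CPU layout of two quad-core sockets is an assumption, not stated on the slide):

```python
# Cross-check AWE's core count: 944 B500 blades, assumed 2 sockets x 4 cores each.
blades = 944
sockets_per_blade, cores_per_socket = 2, 4  # assumption: quad-core Nehalem-EP
total_cores = blades * sockets_per_blade * cores_per_socket
assert total_cores == 7552  # matches the "7552 cores" figure above
```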


Petrobras

Need: a supercomputing system:
- to be installed at Petrobras' new data center, on the University Campus of Rio de Janeiro
- equipped with GPU accelerator technology
- dedicated to the development of new subsurface imaging techniques to support oil exploration and production

Solution: a hybrid architecture coupling 66 general-purpose servers to 66 GPU systems:
- 66 bullx R422 E2 servers, i.e. 132 compute nodes or 1056 Intel® Xeon® 5500 cores, providing a peak performance of 12.4 Tflops
- 66 NVIDIA® Tesla S1070 GPU systems, i.e. 63,360 cores, providing an additional theoretical performance of 246 Tflops
- 1 bullx R423 E2 service node
- Ultra-fast InfiniBand QDR interconnect
- bullx cluster suite and Red Hat Enterprise Linux

Leader in the Brazilian petrochemical sector, and one of the largest integrated energy companies in the world.

Over 250 Tflops peak: one of the largest supercomputers in Latin America.
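The core counts above hang together arithmetically; a quick check (GPU cores per S1070 from NVIDIA's published layout of 4 T10 GPUs with 240 cores each; the quad-core Xeon 5500 assumption is mine):

```python
# GPU side: each Tesla S1070 holds 4 T10 GPUs of 240 cores.
cores_per_s1070 = 4 * 240
assert cores_per_s1070 == 960
assert 66 * cores_per_s1070 == 63360  # "63,360 cores"

# CPU side: 66 R422 E2 servers = 132 nodes, assumed 2 quad-core sockets each.
assert 132 * 2 * 4 == 1056            # "1056 Intel Xeon 5500 cores"
```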


ILION Animation Studios

Need

Ilion Animation Studios (Spain) needed to double their render farm to produce Planet 51

Solution

Bull provided:

64 Bull R422 E1 servers, i.e. 128 compute nodes

1 Bull R423 E1 head node

Gigabit Ethernet interconnect

running Microsoft Windows Compute Cluster Server 2003

Planet 51: released end 2009, rendered on Bull servers.


25TH ANNIVERSARY – Record-breaking attendance

ISC’10 – Top News

Intel unveils plans for an HPC coprocessor: tens of GPGPU-like cores with x86 instructions. A 32-core development version of the MIC coprocessor, codenamed "Knights Ferry", is now shipping to selected customers. A team at CERN has already migrated one of its parallel C++ codes.

TOP500 released – China gained 2nd place!!!

TERA100 in production - by BULL / CEA:

- First European peta-scale architecture
- World's largest Intel-based cluster
- World's fastest file system (500 GB/s)


Questions

Answers

&


bullx blade system – Block Diagram

18x compute blades:
- 2x Westmere-EP sockets
- 12x DDR3 memory DIMMs
- 1x SATA HDD/SSD slot (optional; diskless is an option)
- 1x IB ConnectX/QDR chip

1x InfiniBand Switch Module (ISM) for the cluster interconnect:
- 36-port QDR IB switch: 18x internal connections, 18x external connections

1x Chassis Management Module (CMM):
- OPMA board
- 24-port GbE switch: 18x internal ports to blades, 3x external ports

1x optional Ethernet Switch Module (ESM):
- 24-port GbE switch: 18x internal ports to blades, 3x external ports

1x optional Ultra Capacitor Module (UCM)

[Diagram labels: 18x blades; ISM with 18x IB/QDR links; CMM and ESM with 3x GbE/1G uplinks each; UCM.]


bullx blade system – blade block diagrams

[Block diagrams: bullx B500 compute blade and bullx B505 accelerator blade. Each pair of Westmere-EP/Nehalem-EP sockets connects via QPI (12.8 GB/s each direction) to Tylersburg I/O controllers, with 31.2 GB/s of memory bandwidth per socket. InfiniBand attaches over PCIe x8 (4 GB/s); the B505's accelerators attach over PCIe x16 (8 GB/s). Local storage is a SATA SSD or diskless; GbE is on board.]
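The link rates quoted in these diagrams follow from standard QPI and PCIe Gen2 arithmetic; a small sanity check (a sketch based on the public specifications, not vendor data):

```python
# Sanity-check the per-link bandwidth figures from the block diagrams.

def qpi_bw_gbs(gt_per_s, payload_bytes=2):
    # QPI moves 2 bytes per transfer in each direction of a link.
    return gt_per_s * payload_bytes

def pcie_gen2_bw_gbs(lanes):
    # PCIe Gen2: 5 GT/s per lane with 8b/10b encoding -> 0.5 GB/s usable per lane.
    return lanes * 0.5

assert qpi_bw_gbs(6.4) == 12.8      # "12.8 GB/s each direction"
assert pcie_gen2_bw_gbs(8) == 4.0   # "PCIe 8x: 4 GB/s"
assert pcie_gen2_bw_gbs(16) == 8.0  # "PCIe 16x: 8 GB/s"
```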


bullx B500 compute blade

[Board layout, © CEA: 12x DDR3 DIMMs, Westmere-EP with 1U heatsink, Tylersburg with short heatsink, ICH10, iBMC, ConnectX QDR, 1.8" HDD/SSD, fans, connector to backplane; blade dimensions 143.5 x 425 mm.]


Ultracapacitor Module (UCM)

[Module layout: NESSCAP capacitors (2x6), LEDs, board.]

Embedded protection against short power outages:
- Protects one chassis with all its equipment under load
- Rides through outages of up to 250 ms

Avoids an on-site UPS:
- Save on infrastructure costs
- Save up to 15% on electrical costs

Improves overall availability: run longer jobs.
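To give a feel for what a 250 ms ride-through requires, here is a rough sizing sketch. The chassis load comes from the product slides; every capacitor parameter below is an illustrative assumption, not a Bull specification:

```python
# Rough ultracapacitor ride-through sizing (illustrative values only).

def ride_through_s(capacitance_f, v_start, v_min, load_w):
    # Usable energy between start and cutoff voltage: E = 1/2 * C * (V1^2 - V2^2).
    return 0.5 * capacitance_f * (v_start**2 - v_min**2) / load_w

# Holding up a typical 5.5 kW chassis for 250 ms needs about 1.4 kJ:
assert 5500 * 0.250 == 1375.0

# A hypothetical 100 F bank swinging from 12 V down to 6 V stores 5.4 kJ,
# comfortably above that requirement:
assert ride_through_s(100, 12, 6, 5500) > 0.25
```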


Bull StoreWay Optima1500

- 750 MB/s bandwidth
- 12 x 4 Gb/s front-end connections
- 4 x 3 Gb/s point-to-point back-end disk connections
- Supports up to 144 SAS and/or SATA HDDs: SAS 146 GB / 300 GB (15k rpm), SATA 1000 GB (7.2k rpm)
- RAID 1, 10, 5, 50, 6, 3, 3DualParity, Triple Mirror
- 2 GB to 4 GB cache memory
- Windows, Linux, VMware interoperability (SFR for UNIX)
- 3 models: Single Controller with 2 front-end ports; Dual Controllers with 4 front-end ports; Dual Controllers with 12 front-end ports


CLARiiON CX4-120

UltraScale architecture:
- Two 1.2 GHz dual-core LV-Woodcrest CPU modules
- 6 GB system memory

Connectivity:
- 128 high-availability hosts
- Up to 6 I/O modules (FC or iSCSI)
- 8 front-end 1 Gb/s iSCSI host ports max
- 12 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max
- 2 back-end 4 Gb/s Fibre Channel disk ports

Scalability:
- Up to 1,024 LUNs
- Up to 120 drives


CLARiiON CX4-480

UltraScale architecture:
- Two 2.2 GHz dual-core LV-Woodcrest CPU modules
- 16 GB system memory

Connectivity:
- 256 high-availability hosts
- Up to 10 I/O modules (FC, or iSCSI at GA)
- 16 front-end 1 Gb/s iSCSI host ports max
- 16 front-end 4 Gb/s / 8 Gb/s Fibre Channel host ports max
- 8 back-end 4 Gb/s Fibre Channel disk ports

Scalability:
- Up to 4,096 LUNs
- Up to 480 drives


DataDirect Networks S2A 9900

Performance:
- A single S2A9900 system delivers 6 GB/s reads and writes
- Multiple-system configurations are proven to scale beyond 250 GB/s
- Real-time, zero-latency data access; parallel processing
- Native FC-4, FC-8 and/or InfiniBand 4X DDR

Capacity:
- Single system: up to 1.2 petabytes; multiple systems scale beyond hundreds of petabytes
- Intermix SAS and SATA in the same enclosure
- Manage up to 1,200 drives: 1.2 PB in just two racks

Innovation:
- High-performance DirectRAID™ 6 with zero degraded mode
- SATAssure™ Plus data integrity verification and drive repair
- Power-saving drive spin-down with S2A SleepMode; power-cycle individual drives
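The headline capacity and scaling claims are mutually consistent; a quick check (1 TB drives being the largest SATA capacity in the table later in this deck):

```python
import math

# 1,200 drives at 1 TB each is the quoted 1.2 PB single-system maximum.
drives, tb_per_drive = 1200, 1
assert drives * tb_per_drive / 1000 == 1.2   # "Up to 1.2 Petabytes"

# Scaling 6 GB/s couplets past 250 GB/s takes on the order of 42 couplets.
assert math.ceil(250 / 6) == 42
```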


bullx R422 E2 characteristics

- 1U rackmount: 2 nodes in a 1U form factor
- Intel S5520 chipset (Tylersburg), QPI up to 6.4 GT/s
- Processor: 2x Intel® Xeon® 5600 per node
- Memory: 12 DIMM sockets Reg ECC DDR3 (1/2/4/8 GB); up to 96 GB per node at 1333 MHz (with 8 GB DIMMs)
- 2x HDD per node: hot-swap SATA2 drives at 7.2k rpm, 250/500/750/1000/1500/2000 GB
- Independent power control circuitry built in for power management
- 1x shared power supply unit, 1200 W max: fixed / no redundancy; 80 PLUS Gold
- InfiniBand: 1 optional on-board DDR or QDR controller per node
- Expansion slots: 1 PCI-E x16 Gen2 per node
- Rear I/O (per node): 1 external IB, 1 COM port, VGA, 2 Gigabit NICs, 2 USB ports
- Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R per node


bullx R423 E2

- 2U rackmount
- Processor: 2x Intel® Xeon® 5600; chipset: 2x Intel® S5520 (Tylersburg); QPI up to 6.4 GT/s
- Memory: 18 DIMM sockets DDR3, up to 144 GB at 1333 MHz

Disks:
- Without add-on adapter: 6 SATA2 HDDs (7200 rpm; 250/500/750/1000/1500/2000 GB)
- With PCI-E RAID SAS/SATA add-on adapter: support of RAID 0, 1, 5, 10; 8 SATA2 (7200 rpm) or SAS HDDs (15,000 rpm; 146/300/450 GB)
- All disks 3.5 inches

- Expansion slots (low profile): 2 PCI-E Gen2 x16, 4 PCI-E Gen2 x8, 1 PCI-E Gen2 x4
- Redundant power supply unit
- Matrox Graphics MGA G200eW embedded video controller
- Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on a dedicated RJ45 port
- WxHxD: 437 mm x 89 mm x 648 mm

The perfect server for service nodes.


Bull System Manager Suite

- Consistent administration environment thanks to the cluster database
- Ease of use through centralized monitoring, fast and reliable deployment, and configurable notifications
- Built from the best Open Source and commercial software packages: integrated, tested, supported


Detailed knowledge of cluster structure

[Workflow diagram: in the R&D Expertise Centre, a Generator takes the architecture drawing, logical and physical netlists, and the equipment and IP address description, and produces cable labels plus preload files (models "A", "B", ...). Standard clusters are preloaded in the factory; customized clusters are handled by the installer.]


bullx blade system

Bullx supernodes

bullx rack-mounted systems

NVIDIA Tesla Systems

Bull Storage

Cool cabinet door

mobull

bullx cluster suite

Windows HPC Server 2008

Product descriptions


bullx blade system – overall concept

General purpose, versatile

- Xeon Westmere processor

- 12 memory slots per blade

- Local HDD/SSD or Diskless

- IB / GBE

- Red Hat, SUSE, Windows HPC Server 2008, CentOS, …

- Compilers: GNU, Intel, …

Uncompromised performance

- Support of high frequency Westmere

- Memory bandwidth: 12x mem slots

- Fully non blocking IB QDR interconnect

- Up to 2.53 TFLOPS per chassis

- Up to 15.2 TFLOPS per rack (with CPUs)

High density

- 7U chassis

- 18x blades with 2 proc, 12x DIMMs, HDD/SSD slot/IB connection

- 1x IB switch (36 ports)

- 1x GBE switch (24p)

- Ultracapacitor

Leading edge technologies

- Intel Nehalem

- InfiniBand QDR

- Diskless

- GPU blades

Optimized Power consumption

- Typical 5.5 kW / chassis

- High efficiency (90%) PSU

- Smart fan control in each chassis

- Smart fan control in water-cooled rack

- Ultracapacitor: no UPS required
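The chassis and rack peak figures above can be reconstructed with simple arithmetic; this sketch assumes a 2.93 GHz six-core Westmere SKU, 4 DP flops/cycle/core (SSE), and 6 chassis of 7U per 42U rack, none of which the slide states explicitly:

```python
# Reconstruct the peak-performance figures for a bullx blade chassis.
blades, sockets, cores, ghz, flops_per_cycle = 18, 2, 6, 2.93, 4
chassis_tflops = blades * sockets * cores * ghz * flops_per_cycle / 1000
assert round(chassis_tflops, 2) == 2.53         # "Up to 2.53 TFLOPS per chassis"

chassis_per_rack = 42 // 7                      # assumed: 6 x 7U chassis per rack
assert round(chassis_per_rack * chassis_tflops, 1) == 15.2  # "15.2 TFLOPS per rack"
```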


bullx chassis packaging

[Exploded view of the 7U chassis: LCD unit, fans, 18x blades, IB switch module, 4x PSUs, ESM, CMM, ultracapacitor.]


bullx B505 accelerator blade

- 2x Intel Xeon 5600
- 2x NVIDIA T10 (*)
- 2x IB QDR

Per blade: 2.1 TFLOPS, 0.863 kW. Per 7U chassis: 18.9 TFLOPS.

(*) T20 is on the roadmap

Embedded accelerator for high performance with high energy efficiency.
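The chassis figure follows from the blade figure once you account for the B505 being double-width (so 9 fit where 18 single-width blades would); the 2.1 TFLOPS-per-blade decomposition into GPU and CPU contributions is taken from the slide, not re-derived here:

```python
# A 7U chassis has 18 single-width slots; the B505 is double-width.
slots, width = 18, 2
blades = slots // width
assert blades == 9

blade_tflops = 2.1   # per-blade figure from the slide (SP, GPUs + CPUs)
assert round(blades * blade_tflops, 1) == 18.9   # "18.9 TFLOPS in 7U"
```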


bullx B505 accelerator blade

[Front and exploded views: 2x CPUs, 2x GPUs.]

- Double-width blade
- 2 NVIDIA Tesla M1060 GPUs
- 2 Intel® Xeon® 5600 quad-core CPUs
- 1 dedicated PCI-e x16 connection for each GPU
- Double InfiniBand QDR connections between blades



bullx supernode: an expandable SMP node for memory-hungry applications

SMP of up to 16 sockets based on the Bull-designed BCS:
- Intel Xeon Nehalem-EX processors
- Shared memory of up to 1 TB (2 TB with 16 GB DIMMs)

Available in 2 formats:
- High-density 1.5U compute node
- High I/O connectivity node

RAS features:
- Self-healing of the QPI and XQPI
- Hot-swap disks, fans, power supplies

Green features:
- Ultracapacitor
- Processor power management features


bullx supernode: CC-NUMA server

Max configuration:
- 4 modules, 4 sockets/module: 16 sockets
- 128 cores
- 128 memory slots

SMP (CC-NUMA): 128 cores, up to 1 TB RAM (2 TB with 16 GB DIMMs).

[Diagram: 4 modules of 4 Nehalem-EX sockets each, interconnected through BCS chips and IOHs.]
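The maximum-configuration numbers are self-consistent; a quick check (the 8 cores per Nehalem-EX socket and the 8 GB DIMM size behind the 1 TB figure are inferred, since the slide only states the 16 GB case):

```python
# Maximum bullx supernode configuration.
modules, sockets_per_module, cores_per_socket = 4, 4, 8   # 8-core Nehalem-EX
assert modules * sockets_per_module == 16                  # "16 sockets"
assert modules * sockets_per_module * cores_per_socket == 128  # "128 cores"

slots = 128
assert slots * 8 / 1024 == 1.0    # 8 GB DIMMs  -> 1 TB shared memory
assert slots * 16 / 1024 == 2.0   # 16 GB DIMMs -> 2 TB
```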


Bull's Coherence Switch (BCS)

Heart of the CC-NUMA design:
- Ensures global memory and cache coherence
- Optimizes traffic and latencies
- Implements MPI collective operations in hardware: reductions, synchronization, barriers

Key characteristics

- 18x18 mm in 90 nm technology

- 6 QPI and 6 XQPI

- High speed serial interfaces up to 8GT/s

- Power-conscious design with selective power-down capabilities

- Aggregate data rate: 230GB/s
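To illustrate what "MPI collective operations in hardware" offloads, here is the combining-tree pattern such a reduction replaces in software. This is a pure-Python sketch of the algorithm, not BCS firmware or an MPI binding:

```python
# Tree-structured reduction: log2(N) combining rounds for N ranks.
def tree_reduce(values):
    vals, steps = list(values), 0
    while len(vals) > 1:
        paired = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:            # odd rank out passes through unchanged
            paired.append(vals[-1])
        vals, steps = paired, steps + 1
    return vals[0], steps

total, steps = tree_reduce(range(128))   # one contribution per core
assert total == sum(range(128)) == 8128
assert steps == 7                        # log2(128) combining rounds
```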


bullx S6030 – 3U Service Module / Node

[Annotated view: BCS; up to 4 Nehalem-EX sockets; 2 hot-swap power supplies; 6 PCI-e slots (1x x16, 5x x8); 32 DDR3 DIMMs; up to 8 hot-swap disks (SATA RAID 1 or SAS RAID 5); hot-swap fans; ultracapacitor.]


bullx S6010 - Compute Module / Node

[Annotated view of the 3U unit (2 modules: 64 cores / 512 GB RAM): BCS; up to 4 Nehalem-EX sockets; 1 PCI-e x16; 32 DDR3 DIMMs; SATA disk; fans; 1 power supply; ultracapacitor.]



bullx rack-mounted systems: a large choice of options

R422 E2 (compute node): 2 nodes in 1U for unprecedented density
- NEW: more memory
- 2x 2-socket Xeon 5600, 2x 12 DIMMs, QPI up to 6.4 GT/s
- 2x 1 PCI-Express x16 Gen2; InfiniBand DDR/QDR embedded (optional)
- 2x 2 SATA2 hot-swap HDDs
- 80 PLUS Gold PSU

R423 E2 (service node): enhanced connectivity and storage
- 2U, 2-socket Xeon 5600, 18 DIMMs
- 2 PCI-Express x16 Gen2
- Up to 8 SAS or SATA2 HDDs
- Redundant 80 PLUS Gold power supply, hot-swap fans

R425 E2 (visualization): supports the latest graphics & accelerator cards
- 4U or tower, 2-socket Xeon 5600, 18 DIMMs
- 2 PCI-Express x16 Gen2
- Up to 8 SATA2 or SAS HDDs
- Powerful power supply, hot-swap fans


bullx R425 E2

For high performance visualization

- 4U / tower rackmount
- Processor: 2x Intel® Xeon® 5600; chipset: 2x Intel® S5520 (Tylersburg); QPI up to 6.4 GT/s
- Memory: 18 DIMM sockets DDR3, up to 144 GB at 1333 MHz

Disks:
- Without add-on adapter: 6 SATA2 HDDs (7200 rpm; 250/500/750/1000/1500/2000 GB)
- With PCI-E RAID SAS/SATA add-on adapter: support of RAID 0, 1, 5, 10; 8 SATA2 (7200 rpm) or SAS HDDs (15,000 rpm; 146/300/450 GB)
- All disks 3.5 inches

- Expansion slots (high profile): 2 PCI-E Gen2 x16, 4 PCI-E Gen2 x8, 1 PCI-E Gen2 x4
- Powerful power supply unit
- Matrox Graphics MGA G200eW embedded video controller
- Management: BMC (IPMI 2.0 with virtual media-over-LAN), embedded Winbond WPCM450-R on a dedicated RJ45 port
- WxHxD: 437 mm x 178 mm x 648 mm



GPU accelerators for bullx

NVIDIA® Tesla™ computing systems: teraflops many-core processors that provide outstanding, energy-efficient parallel computing power.

NVIDIA Tesla C1060: to turn an R425 E2 server into a supercomputer
- Dual-slot wide card, Tesla T10P chip, 240 cores
- Performance: close to 1 Tflops (32-bit FP)
- Connects to PCIe x16 Gen2

NVIDIA Tesla S1070: the ideal booster for R422 E2 or S6030-based clusters
- 1U drawer, 4x Tesla T10P chips, 960 cores
- Performance: 4 Tflops (32-bit FP)
- Connects to 2 PCIe x16 Gen2


Ready for future Tesla processors (Fermi)

Roadmap, Q4 2009 to Q4 2010 (disclaimer: performance specifications may change):
- Tesla C1060: 933 Gigaflops SP, 78 Gigaflops DP, 4 GB memory
- Tesla C2050 (mid-range performance): 520-630 Gigaflops DP, 3 GB memory, ECC; 8x peak DP performance
- Tesla C2070 (large datasets): 520-630 Gigaflops DP, 6 GB memory, ECC


Ready for future Tesla 1U systems (Fermi)

Roadmap, Q4 2009 to Q4 2010 (disclaimer: performance specifications may change):
- Tesla S1070-500: 4.14 Teraflops SP, 345 Gigaflops DP, 4 GB memory/GPU
- Tesla S2050 (mid-range performance): 2.1-2.5 Teraflops DP, 3 GB memory/GPU, ECC; 8x peak DP performance
- Tesla S2070 (large datasets): 2.1-2.5 Teraflops DP, 6 GB memory/GPU, ECC


NVIDIA Tesla 1U system & bullx R422 E2

Tesla 1U system connection to the host:

[Diagram: the 1U NVIDIA Tesla system attaches to the 1U bullx R422 E2 server through PCIe Gen2 host interface cards and PCIe Gen2 cables.]



Bull Storage for HPC clusters

A complete line of storage systems:
- Performance
- Modularity
- High availability (with Lustre)

A rich management suite:
- Monitoring
- Grid & standalone system deployment
- Performance analysis


Bull Storage Systems for HPC - details

Optima1500:
- Max disks: 144; disk types: SAS 146/300/450 GB, SATA 1 TB
- RAID: 1, 3, 3DP, 5, 6, 10, 50 and Triple Mirror
- Host ports: 2/12 FC4; back-end ports: 2 SAS 4X
- Cache (max): 4 GB
- Controller size: 2U base with disks; disk drawer: 2U, 12 slots
- Performance (RAID 5): read up to 900 MB/s, write up to 440 MB/s

CX4-120:
- Max disks: 120; disk types: FC 146/300/400/450 GB, SATA 1 TB
- RAID: 0, 1, 10, 3, 5, 6
- Host ports: 4/12 FC4; back-end ports: 2
- Cache (max): 6 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 720 MB/s, write up to 410 MB/s

CX4-480:
- Max disks: 480; disk types: FC 10k rpm 400 GB, FC 15k rpm 146/300/450 GB, SATA 1 TB
- RAID: 0, 1, 10, 3, 5, 6
- Host ports: 8/16 FC4; back-end ports: 8
- Cache (max): 16 GB
- Controller size: 3U; disk drawer: 3U, 15 slots
- Performance (RAID 5): read up to 1.25 GB/s, write up to 800 MB/s

S2A 9900 couplet:
- Max disks: 1200; disk types: SAS 15k rpm 300/450/600 GB, SATA 500/750/1000/2000 GB
- RAID: 8+2 (RAID 6)
- Host ports: 8 FC4; back-end ports: 20 SAS 4X
- Cache (max): 5 GB, RAID-protected
- Controller size: 4U (couplet); disk drawers: 3/2/4U with 16/24/60 slots
- Performance: read and write up to 6 GB/s


Bull storage systems - Administration & monitoring

HPC-specific administration framework:
- Specific administration commands developed on the CLI: ddn_admin, nec_admin, dgc_admin, xyr_admin
- Model file for configuration deployment: LUN information, access-control information, etc.; easily replicable across many storage subsystems

HPC-specific monitoring framework:
- Specific SNMP trap management
- Periodic monitoring of all storage subsystems in the cluster
- Storage views in Bull System Manager HPC edition: detailed status for each item (power supply, fan, disk, FC port, Ethernet port, etc.); LUN/zoning information



Bull Cool Cabinet Door

Innovative Bull design:
- 'Intelligent' door: self-regulates fan speed depending on temperature
- Survives fan or water incidents (fans increase speed and extract hot air)
- Optimized serviceability
- A/C redundancy

Side benefit for the customer:
- No more hot spots in the computer room: good for overall MTBF!

Ready for upcoming Bull Extreme Computing systems:
- 40 kW is a perfect match for a rack configured with bullx blades or future SMP servers at the highest density


Jülich Research Center: water-cooled system


Cool cabinet door: Characteristics

- Width: 600 mm (19")
- Height: 2020 mm (42U)
- Depth: 200 mm (8")
- Weight: 150 kg
- Cooling capacity: up to 40 kW
- Power supply: redundant
- Power consumption: 700 W
- Input water temperature: 7-12 °C
- Output water temperature: 12-17 °C
- Water flow: 2 liters/second (7 m³/hour)
- Ventilation: 14 managed multi-speed fans
- Recommended cabinet air inlet: 20 °C ± 2 °C
- Cabinet air outlet: 20 °C ± 2 °C
- Management: integrated management board for local regulation and alert reporting to Bull System Manager
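The 40 kW rating is consistent with the water-loop figures above, via the usual heat-transfer relation Q = ṁ·c·ΔT (treating the 2 l/s flow as 2 kg/s of water and using the 5 K inlet-to-outlet rise):

```python
# Cross-check the cooling capacity against the water loop.
m_dot = 2.0     # kg/s (2 liters/second of water)
c_p = 4186      # J/(kg K), specific heat of water
dT = 5.0        # K rise: 7-12 C in, 12-17 C out
q_kw = m_dot * c_p * dT / 1000
assert 40 <= q_kw <= 42   # ~41.9 kW carried by the loop, covering the 40 kW rating
```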


Cool Cabinet Door: how it works


Flexible operating conditions

Operating parameters adaptable to various customer conditions

Energy savings further optimized depending on server activity

Next step: mastering water distribution

- Predict temperature, flow velocity and pressure drop within the customer's water distribution system
- Promote an optimized solution


bullx cluster suite benefits

With the features developed and optimized by Bull, your HPC cluster is all that a production system should be: efficient, reliable, predictable, secure, easy to manage.

- Cluster management: fast deployment, reliable management, intuitive monitoring, powerful control tools. Saves administration time and helps prevent system downtime.
- Development tools: fast development and tuning of applications, easy analysis of code behaviour. Saves development and optimization time.
- Kernel debugging and optimization tools: optimized, reliable and predictable performance. Improves system performance.
- SMP & NUMA architectures: better performance on memory-intensive applications, easy optimization of applications. Helps get the best performance and thus the best return on investment.
- MPI: scalability of parallel application performance; a standard interface compatible with many interconnects. Improves system performance and provides unparalleled flexibility.
- File system (Lustre): improved I/O performance, scalability of storage, easy management of the storage system. Improves system productivity and increases system flexibility.
- Red Hat distribution: standard application support / certification. Large variety of supported applications.
- Interconnect: cutting-edge InfiniBand stack support. Improves system performance.


bullx cluster suite components

- Hardware: XPF SMP platforms, InfiniBand/GigE interconnects, GigE network switches, Bull StoreWay disk arrays, linked by the administration network and the HPC interconnect
- Linux OS: Linux kernel, interconnect access layer (OFED, …)
- System environment:
  - Installation/configuration: Ksis, Lustre configuration (lustreconfig)
  - Monitoring/control/diagnostics: Nagios, Ganglia, Nscontrol commands, Bull System Manager with its cluster database
- Application environment:
  - Execution environment: resource management and job scheduling
  - Development: MPIBull2, libraries & tools
  - File systems: Lustre, NFSv4/NFSv3


Product descriptions

- bullx blade system
- bullx supernodes
- bullx rack-mounted systems
- NVIDIA Tesla Systems
- Bull Storage
- Cool cabinet door
- mobull
- bullx cluster suite
- Windows HPC Server 2008


Bull and Windows HPC Server 2008

Clusters of bullx R422 E2 servers

- Intel® Xeon® 5500 processors
- Compact rack design: 2 compute nodes in 1U, or 18 compute nodes in 7U, depending on the model
- Fast & reliable InfiniBand interconnect

running Microsoft® Windows HPC Server 2008

- Simplified cluster deployment and management
- Broad application support
- Enterprise-class performance and scalability

Close collaboration with leading ISVs to provide complete solutions

The right technologies to handle industrial applications efficiently


Windows HPC Server 2008

Microsoft® Windows Server® 2008 HPC Edition + Microsoft® HPC Pack 2008 = Microsoft® Windows® HPC Server 2008

• Support for high-performance hardware (x64 architecture)

• Winsock Direct support for RDMA over high-performance interconnects (Gigabit Ethernet, InfiniBand, Myrinet, and others)

• Support for the industry-standard MPI2

• Integrated job scheduler

• Cluster resource management tools

• Integrated "out of the box" solution

• Leverages past investments in Windows skills and tools

• Makes cluster operation as simple and secure as operating a single system

Combining the power of the Windows Server platform with rich, out-of-the-box functionality to help improve the productivity and reduce the complexity of your HPC environment.
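As a minimal sketch, the integrated job scheduler can be driven from the command line with the HPC Pack `job` utility. The head-node name, job ID and application below are placeholders, not part of the original material:

```bat
REM Hypothetical example: submit an 8-process MPI job to a Windows HPC
REM Server 2008 cluster, from a machine with the HPC Pack client
REM utilities installed. HEADNODE and mysolver.exe are placeholders.
job submit /scheduler:HEADNODE /numprocessors:8 ^
    mpiexec mysolver.exe input.dat

REM Inspect the submitted job; replace 42 with the job ID that
REM "job submit" printed.
job view /scheduler:HEADNODE 42
```

The same scheduler is also exposed through the graphical Cluster Manager and a programmatic API, so batch scripts like this one are only one of the ways to submit work.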


A complete turn-key solution

Bull delivers a complete ready-to-run solution

- Sizing

- Factory pre-installed and pre-configured (R@ck’n Roll)

- Installation, integration in the existing infrastructure

- 1st and 2nd level support

- Monitoring, audit

- Training

Bull has a Microsoft Competence Center


bullx cluster 400-W

- bullx cluster 400-W4: 4 compute nodes, to relieve the strain on your workstations
- bullx cluster 400-W8: 8 compute nodes, to give independent compute resources to a small team of users, enabling them to submit large jobs or several jobs simultaneously
- bullx cluster 400-W16: 16 compute nodes, to equip a workgroup with independent high-performance computing resources that can handle its global compute workload

A solution that combines:

- The performance of bullx rack servers equipped with Intel® Xeon® processors
- The advantages of Windows HPC Server 2008: simplified cluster deployment and management, easy integration with the IT infrastructure, broad application support, and a familiar development environment
- Expert support from Bull's Microsoft Competence Center

Enter the world of High Performance Computing with bullx cluster 400-W running Windows HPC Server 2008