

On St. Petersburg State University Computing Centre and our first results in the Data Challenge-2004 for ALICE

V. Bychkov, G. Feofilov, Yu. Galyuck, A. Zarochensev, V. I. Zolotarev
St. Petersburg State University, Russia

Contents
• SPbSU computing and communication centre in Petrodvorets (structure, capabilities, communications, general activity)
• The progress made in summer 2004 in St. Petersburg in the Data Challenge for ALICE
• Future plans

20.10.2004, The 1st Nordic Grid Neighborhood Workshop, Linkoping, Sweden
Reported by G. Feofilov

St. Petersburg State University today

See http://www.spbu.ru/e/

SPbSU Informational and Computing Center: some history

For historical reasons, Saint Petersburg State University consists of two geographically separated parts: one located in the central part of St. Petersburg, the other in Petrodvorets, about 40 km away. For this reason, and because many other educational centres of St. Petersburg are located in the city centre, an optical channel from Petrodvorets to the central part of St. Petersburg was built during 1992-2004.

SPbSU Informational-computing center: external network channels

[Network diagram: segments of the corporate network of the Petrodvorets educational-scientific complex (NIIF building, St. Peterhof: 1357 computers, 68 servers, 62 local networks) and of the Vasileostrovsky educational-scientific complex (1272+600 computers, 75 servers, 132 local networks). Each telecommunications centre has a Cisco Catalyst 8500 switch and UPS-protected informational and computing complexes; the two sites are connected by new fibre-optic backbone channels (60 km, 1 Gbps). External links: RUNNet (8 Mbps), Marsovo Pole (100 Mbps), Borovaya (1 Gbps).]

SPbSU Informational-computing center: external network channels

[City map of the channel route: Petrodvorets, Ploshchad Pobedy (Victory Square), Sevastyanova St., the Main Building, Marsovo Pole, Bolshaya Morskaya St., Tchaikovskogo St.]

SPbSU Informational-Computing Center in 2002-2003:

                                    2002   2003
Computers (total)                   1214   1272
  of them: servers                    58     75
Local networks (total)               121    132
  of them: virtual                    72     81
Backbone networks                     47     51
  of them: optical                    46     48
Cisco network equipment               45     48
  of them: routers                     3      4
  of them: switches                   42     44

Dynamics of performance of the SPbSU computational centre (MFlops) in 2000-2003:

  2000:  25 300
  2001:  36 100
  2002: 110 000
  2003: 140 000

[Bar chart of the values above; vertical axis: Performance, MFlops.]

SPbSU Informational-Computing Center: net structure (2003)

[Network diagram: the clusters are connected at 100 Mbps via a Cisco WS 2924 XL switch behind the WEB WS Portal/firewall (http://cc.ptc.spbu.ru), with NIS/NFS services, PBS/SSP scheduling, ether_exec and scali_exec execution paths, and PVM 3.4.3:
• C-cluster: PIII-933*2/1GB/40GB, SCALI SCI interconnect (Cserver)
• U-cluster: PIII-933*2/1GB/30GB (Userver)
• X-cluster: Xeon 2200*2/1GB/40GB
• H-cluster (chem. dep.): PII-400/128MB/6.4GB
• F-cluster (phys. dep.), A-cluster (AM-CP dep.), M-cluster (math-mech. dep.)
• G-cluster (Grid), Compic]

SPbSU Informational-computing center: clusters photos

[Photos of the clusters.]

SPbSU Informational-computing center: software evolution

• 1999 – OS FreeBSD 3.3
• 2000 – OpenPBS as the users' job scheduling system
• 2000-2001 – OS RedHat 6.2; systems for quantum-chemical calculations: CRYSTAL 95 and GAMESS
• 2001-2004 – design and development of the Portal of High Performance Computing (WEBWS) (by our legend it is so called from the words "web work space")
• 2002 – the first cluster for studying grid technologies and grid applications
• 2003 – participation in the AliEn project (site http://alice.spbu.ru)
• 2004 – participation in the ALICE Data Challenge (see monitoring at http://aliens3.cern.ch:8080/)
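Since OpenPBS scheduled the users' jobs on these clusters, each task was wrapped in a PBS job script and handed to the scheduler with `qsub`. A minimal sketch, assuming nothing about the actual portal code: the job name, resource request, and executable below are illustrative only.

```python
# Hedged sketch: build a minimal OpenPBS job script and submit it
# with qsub. The job name and command are hypothetical examples.
import subprocess

def make_pbs_script(job_name: str, command: str,
                    nodes: int = 1, walltime: str = "02:00:00") -> str:
    """Return the text of a simple PBS job script."""
    return (
        "#!/bin/sh\n"
        f"#PBS -N {job_name}\n"
        f"#PBS -l nodes={nodes},walltime={walltime}\n"
        "cd $PBS_O_WORKDIR\n"
        f"{command}\n"
    )

def submit(script_text: str, path: str = "job.sh") -> str:
    """Write the script to disk and submit it; returns the job id printed by qsub."""
    with open(path, "w") as f:
        f.write(script_text)
    out = subprocess.run(["qsub", path], capture_output=True, text=True)
    return out.stdout.strip()

# Build (but do not submit) an example job script.
script = make_pbs_script("alice_sim", "./run_simulation")
```

Submission itself (`submit(script)`) requires a working PBS installation on the machine.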

SPbSU Informational-computing center: more on software evolution, 2003-…

2003 – collaboration with IBM started. Due to this collaboration, we changed many parts of the informational-computing centre:
• a new network-monitoring system;
• a new storage system with a SAN (storage area network);
• new portal development technologies: portlets and WebSphere.

SPbSU Informational-computing center: network monitoring status

[Diagram: Tivoli NetView (structure and monitoring visualization) forwards events to the Tivoli Enterprise Console; Tivoli Data Warehouse and Tivoli Decision Support produce reports and statistics from the DB.]

SPbSU Informational-computing center: storage system status

• HACMP (High Availability Cluster Multi-Processing)
• Monitoring and management of the network: Tivoli SAN Manager
• Management of the storage elements: IBM Total Storage Manager, IBM Total Storage Specialist, Brocade Advanced Web Tools
• Archiving, backup and restore system: TSM (Tivoli Storage Manager)
• RDBMS DB2 UDB (8.1); Content Management System CM 8.1

[SAN diagram: an Ultrium Tape Library 3583 Model L18, a TotalStorage SAN Switch 3534-F08, an IBM TotalStorage FAStT700 with a FAStT EXP700 expansion unit, and two pSeries 630 Model 6E4 servers, linked by Fibre Channel; TSM runs over Ethernet 10/100 (Tivoli). Panels: data storage system; portal (CM 8.1, WPS 4.2); monitoring and management of the network and data.]

SPbSU Informational-computing center: storage photos

[Photos of the storage system.]

Portal of the High Performance Computing (WEBWS)

SPbSU Informational-computing center. Portal of the High Performance Computing (WEBWS)

WEBWS consists of three main parts:
1. Informational part – monitoring of the computational resources, based on Ganglia (an open-source software product), and monitoring of the users' task queues
2. Work space – the users' workspace for developing and launching tasks
3. Administrative part
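Since the informational part is based on Ganglia, cluster metrics can be read programmatically from the XML snapshot that the Ganglia gmond daemon publishes over TCP (port 8649 by default). A sketch of how a portal might consume it; the host name is an assumption for illustration.

```python
# Sketch: read a Ganglia gmond XML snapshot and extract per-host load.
# gmond serves the full XML metric tree to any TCP client on port 8649.
import socket
import xml.etree.ElementTree as ET

def read_gmond_xml(host="alice.spbu.ru", port=8649, timeout=5.0):
    """Fetch the raw XML metric snapshot from a gmond daemon."""
    chunks = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def host_load(xml_bytes):
    """Return a {host_name: load_one} mapping from a gmond XML snapshot."""
    root = ET.fromstring(xml_bytes)
    loads = {}
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":
                loads[host.get("NAME")] = float(metric.get("VAL"))
    return loads
```

A portal page can then render `host_load(read_gmond_xml())` as a table or graph.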

WEBWS logical structure

[Diagrams: the portal comprises the informational part, the administrative part, and the work space. Users reach the work space from the Internet via the IDC; the portal uses the ADM DB and the WEBWS DB and connects to the clusters. The WEBWS server sits between the authorization system, the interface, the WEBWS DB, and the PBS server. Its modules include User Info, Session Info, WEBWS Server Info, User Projects Info, the WS module, and the Crystal module.]

WEBWS monitoring part – Ganglia

[Screenshots of the Ganglia monitoring pages.]

SPbSU Informational-computing center. Some plans for the future

• Continue the collaboration with IBM
• Continue the development of WEBWS …
• Continue the ALICE Data Challenge
• Transition to gLite in the near future, together with ALICE
• Parton String Model in parallel mode and physics performance analysis for ALICE
• Participation in MammoGrid

SPbSU in Data Challenge 2004

• 2002: Globus Toolkit 2.4 was installed, tests started
• July 2003: AliEn was installed (P. Saiz)
• July 2004: start of test jobs on the grid cluster "alice"

Cluster "alice"

[Diagram: alice.spbu.ru runs the AliEn services; alice09.spbu.ru is the SE and the CE (pbs-server); alice02.spbu.ru through alice08.spbu.ru are the work nodes.]

Configuration of the cluster in July 2004:
• alice: 512 MB RAM, PIII 1x733 CPU
• alice09: 256 MB RAM, Celeron 1x1200 CPU
• alice02-08: 512 MB RAM (512 MB swap), PIII 2x600 CPU, 2x4.5 GB SCSI HDD

Configuration of the cluster in September 2004 (upgraded):
• alice: 512 MB RAM, PIII 1x733 CPU
• alice09: 256 MB RAM, 40 GB + 0.3 TB HDD, Celeron 1x1200 CPU
• alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 CPU (only one CPU is used), 40 GB IDE HDD
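Summing the September configuration gives the total capacity the site could offer to the Data Challenge; a small worked example (with one CPU in use per worker node, as noted above):

```python
# Aggregate resources of the upgraded "alice" cluster
# (September 2004 configuration listed above).
nodes = [
    {"name": "alice",   "ram_mb": 512, "cpus_used": 1},  # PIII 733 MHz
    {"name": "alice09", "ram_mb": 256, "cpus_used": 1},  # Celeron 1200 MHz
] + [
    # alice02-alice08: 1 GB RAM each, PIII 2x600 with only one CPU used
    {"name": f"alice{i:02d}", "ram_mb": 1024, "cpus_used": 1}
    for i in range(2, 9)
]

total_ram_gb = sum(n["ram_mb"] for n in nodes) / 1024
total_cpus = sum(n["cpus_used"] for n in nodes)
print(f"{len(nodes)} nodes, {total_cpus} CPUs in use, {total_ram_gb:.2f} GB RAM")
# -> 9 nodes, 9 CPUs in use, 7.75 GB RAM
```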

Available disk space, 01.07-19.09

Running jobs on the SPbSU CE from 01.07 to 19.09 (min 1 job, max 7 jobs)

Started jobs on the SPbSU CE, 01.07-19.09
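Counts like the running-job numbers plotted above can be obtained on a PBS-based CE by parsing `qstat` output. A hedged sketch, assuming the classic OpenPBS default column layout (job id, name, user, time use, state, queue), with 'R' marking a running job:

```python
# Sketch: count running jobs on a PBS computing element from `qstat` output.
# The column layout assumed here is the classic OpenPBS default.
import subprocess

def count_running(qstat_output: str) -> int:
    """Count jobs in state 'R' in default `qstat` output."""
    running = 0
    for line in qstat_output.splitlines():
        fields = line.split()
        # Data rows look like: "12.alice  sim  user  00:01:02  R  workq"
        if len(fields) >= 6 and fields[4] == "R":
            running += 1
    return running

def qstat() -> str:
    """Run qstat on the CE (requires a PBS installation)."""
    return subprocess.run(["qstat"], capture_output=True, text=True).stdout
```

Sampling `count_running(qstat())` periodically yields exactly the kind of time series shown in the plots.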

Ganglia monitoring of the "alice" cluster.

Some further plans for the ALICE DC:

SPbSU is planning to continue its DC2004 participation with the following resources:
• alice: 512 MB RAM, 40 GB HDD, PIII 1x733 CPU
• alice09: 256 MB RAM, 40 GB + 0.3 TB HDD, Celeron 1x1200 CPU
• alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 CPU (two CPUs), 40 GB IDE HDD

….more in the next report at the present workshop