
  • Design and Implementation of a Server Cluster Backend for Thin Client Computing. MTP final stage presentation. By Khurange Ashish Govind, under the guidance of Prof. Om P. Damani.

  • Talk outline: Introduction; Working of Thin Clients; Design issues; High availability of various services; Software components of the system; Working of the system; Filesystem for the cluster; Future work; Conclusion.

  • Talk outline (next: Introduction).

  • 1.1 Thin Client System: Consists of multiple heterogeneous workstations connected to a single server.

    Thin clients may be diskless workstations, old PCs, or new PCs.

  • 1.1 Thin Client System: The hardware requirements of a Thin Client are a keyboard, a monitor, a mouse, and some computation power.

  • 1.1 Thin Client System: The server performs all application processing and stores all user data.

  • 1.1 Thin Client System: The Thin Client and the server communicate using a display protocol such as X or RDP; the client sends keyboard and mouse input, and the server sends back screen updates.

  • 1.2 Advantages of Thin Client System: Terminals are less expensive and maintenance-free; reduced cost of security; reduced cost of data backup; reduced cost of software installation.

  • 1.3 Limitations of Thin Client System: The server is a single point of failure; the system is not scalable; adding more independent servers helps with scalability but not with high availability.

  • 1.4 Windows Solution: Terminals cost $500 (without monitor); needs a third-party load-balancing solution; needs a separate file server.

  • Talk outline (next: Working of Thin Clients).

  • 2.1 LTSP (Linux Terminal Server Project): Thin Clients need the following services to run: DHCP, TFTP, NFS, and the X protocol with a display manager (XDM, GDM, KDM).

  • (Boot sequence figure: DHCP request; DHCP reply; download OS over TFTP; mount root filesystem over NFS; then, during the session, the client sends keyboard and mouse input and receives graphical output.)

  • Talk outline (next: Design issues).

  • 3. Design Issues: Provide highly available DHCP, TFTP, NFS, and XDM services; a Thin Client's binding to an XDM server must be dynamic; software for finding the cluster's status, load balancing, and managing the cluster; a highly available filesystem.

  • Talk outline (next: High availability of various services).

  • 4. High Availability of Various Services: In this section we discuss HA of the following services: DHCP, TFTP, NFS, XDM.

  • 4.1 DHCP: More than one DHCP server can exist on the same subnet; IP addresses are offered to clients on a lease basis; the DHCP protocol has phases: INIT, RENEW, REBIND.
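
For context, the lease timing that drives these phases (defaults from RFC 2131) can be sketched as follows; this is protocol background, not a detail from the thesis:

```python
def dhcp_timers(lease_seconds):
    """RFC 2131 defaults: at T1 (50% of the lease) the client enters RENEW
    and unicasts to the server that granted the lease; at T2 (87.5%) it
    enters REBIND and broadcasts, so any live DHCP server may answer."""
    t1 = 0.5 * lease_seconds
    t2 = 0.875 * lease_seconds
    return t1, t2

print(dhcp_timers(3600))   # (1800.0, 3150.0)
```

The REBIND broadcast is what lets a different server keep an existing client alive, which the next two slides exploit.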

  • 4.1.1 HA of DHCP: Multiple independent DHCP servers cannot provide HA; for HA, DHCP servers need to be arranged using either the failover protocol or static binding.

  • 4.1.2 Failover Protocol: A pair of DHCP servers (primary and secondary); the lease database is synchronized between them; in normal mode only the primary server works, updating the secondary about its lease information; when the primary goes down, the secondary starts working: assigning IP addresses to new thin clients and renewing existing leases.
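
A minimal sketch of the failover pairing, assuming an in-memory lease table and a direct sync call between the pair; the class and method names are illustrative, not from the DHCP failover draft:

```python
import time

class LeaseServer:
    """Toy model of one DHCP server in a failover pair."""
    def __init__(self, role, peer=None):
        self.role = role      # "primary" or "secondary"
        self.leases = {}      # client MAC -> (IP, lease expiry time)
        self.peer = peer      # the other server of the pair

    def grant_lease(self, mac, ip, duration=3600):
        # In normal mode only the primary hands out leases, and it
        # updates the secondary about each lease it grants.
        self.leases[mac] = (ip, time.time() + duration)
        if self.peer is not None:
            self.peer.leases[mac] = self.leases[mac]   # lease-database sync

    def promote(self):
        # When the primary goes down the secondary starts working; the
        # synchronized lease database lets it renew existing leases as
        # well as assign IP addresses to new thin clients.
        self.role = "primary"

primary = LeaseServer("primary")
secondary = LeaseServer("secondary")
primary.peer = secondary
primary.grant_lease("00:16:3e:aa:bb:cc", "10.129.22.50")
secondary.promote()     # failover: the secondary already knows every lease
```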

  • 4.1.3 Static Binding: The DHCP server's static IP allocation method is used; all servers have the same binding of Thin Client MAC addresses to IP addresses; a Thin Client can continue as long as one DHCP server is running.
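
Static binding needs no synchronization at all: every server is configured with the identical MAC-to-IP table, so any surviving server gives the same answer. A sketch, with table contents invented for illustration:

```python
# The same static table is installed on every DHCP server in the cluster.
STATIC_BINDINGS = {
    "00:16:3e:aa:bb:cc": "10.129.22.50",
    "00:16:3e:dd:ee:ff": "10.129.22.51",
}

def answer_dhcp_request(mac):
    # Every server computes the same reply independently, so the
    # client survives as long as any one DHCP server is running.
    return STATIC_BINDINGS.get(mac)

print(answer_dhcp_request("00:16:3e:aa:bb:cc"))   # 10.129.22.50, from any server
```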

  • 4.2 TFTP: The TFTP service does not have any state or bindings like DHCP; multiple independent TFTP servers provide HA; a TFTP server runs on each node on which a DHCP server runs, and each DHCP server advertises its own address as the TFTP server.

  • 4.3 XDM: An XDM server runs on all nodes in the cluster; each node provides XDM service to the users who have their home directory on that server; static binding of a Thin Client to an XDM server will not work; dynamic binding is based on the username.

  • 4.4 NFS: Multiple independent NFS servers provide HA; to support this, an NFS server runs on all the nodes in the cluster.

    (Figure: two separate dependencies vs. a single dependency.)

  • Talk outline (next: Software components of the system).

  • 5. Software Components of the System: The main software components of the system are the Health Status Service, the Load Balancer, and the Cluster Manager.

  • 5.1 Health Status Service: Consists of two parts, the Health Status Server and the Health Status Client; the Health Status Server runs on all servers and provides information on server health; the Health Status Client is used by the Load Balancer to find the load on the servers.
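
A minimal sketch of what such a service could look like, assuming load average as the health metric and a one-shot TCP query; the port and wire format are assumptions, not details from the thesis:

```python
import json
import os
import socket
import threading
import time

HEALTH_PORT = 9099   # illustrative port, not from the thesis

def health_status_server():
    """Health Status Server: answer each connection with this node's load."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", HEALTH_PORT))
    srv.listen(5)
    while True:
        conn, _ = srv.accept()
        load1, _, _ = os.getloadavg()          # Unix-only load average
        conn.sendall(json.dumps({"load1": load1}).encode())
        conn.close()

def health_status_client(host, timeout=1.0):
    """Health Status Client: fetch one server's load, or None if it is down."""
    try:
        with socket.create_connection((host, HEALTH_PORT), timeout=timeout) as c:
            return json.loads(c.recv(4096).decode())["load1"]
    except OSError:
        return None   # unreachable or dead servers report no load

threading.Thread(target=health_status_server, daemon=True).start()
time.sleep(0.2)                        # give the server a moment to bind
print(health_status_client("127.0.0.1"))
```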

  • 5.2 Load Balancer: Accepts a username from the Thin Client; keeps a database mapping usernames to groups of servers; replies with the IP of the least loaded server hosting the user's home directory; the cluster then has two independent dependencies: DHCP and the Load Balancer.
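
A sketch of the balancing decision, with a stubbed health probe standing in for the Health Status Client above; the user-to-servers table is invented for illustration:

```python
import random

def probe_load(host):
    # Stand-in for the Health Status Client; it fabricates a load value
    # here, and returning None would mean "server is down".
    return random.uniform(0.0, 2.0)

# Hypothetical database mapping each username to the group of servers
# that hold replicas of that user's home directory.
USER_SERVERS = {
    "alice": ["10.129.22.12", "10.129.22.13"],
    "bob":   ["10.129.22.14", "10.129.22.15"],
}

def pick_xdm_server(username):
    """Reply to the Thin Client with the least loaded live server
    that hosts this user's home directory."""
    candidates = [(probe_load(h), h) for h in USER_SERVERS.get(username, [])]
    candidates = [(load, h) for load, h in candidates if load is not None]
    if not candidates:
        raise RuntimeError("no server hosting %r is up" % username)
    return min(candidates)[1]   # host with the smallest load

print(pick_xdm_server("alice"))
```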

  • 5.3 Cluster Manager: A tool provided to the system administrator to manage the cluster; makes sure that all nodes in the cluster have the latest information about the system; provides services to add/remove a node and add/remove a user.
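
One plausible shape for the "latest information everywhere" guarantee: each administrative change is applied locally, then pushed to every node. All names and the transport here are assumptions for illustration:

```python
NODES = ["10.129.22.12", "10.129.22.13", "10.129.22.14"]

def add_user(username, server_group, user_servers, nodes=NODES):
    """Record a new user, then propagate the updated table so every
    Load Balancer instance sees the same username-to-servers mapping."""
    user_servers[username] = server_group
    for node in nodes:
        push_table(node, user_servers)

def push_table(node, table):
    # Stand-in for the real transport (e.g. copying a config file to
    # the node); printing keeps the sketch side-effect free.
    print("would push %d entries to %s" % (len(table), node))

add_user("alice", ["10.129.22.12", "10.129.22.13"], {})
```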

  • Talk outline (next: Working of the system).

  • 6.1 Arrangement of Components in the Cluster: (Figure) Every node runs an XDM Server, an NFS Server, and a Health Status Server; two of the nodes additionally run DHCP, TFTP, and the Load Balancer.

  • 6.2 Working of the System: (Figure: the numbered message flow when user 'A' logs in, marking the nodes which hold A's home directory and the nodes where the Load Balancer is running.)

  • Talk outline (next: Filesystem for the cluster).

  • 7.1 Filesystem Issues: Keep hardware cost as low as possible; use the servers' storage to store users' data; replicate each user's filesystem on multiple servers; use open source tools to build the filesystem.

  • 7.2 DRBD: A kernel module that maintains a real-time mirror of a block device on a remote machine; a pair of nodes work together, one acting as primary and the other as secondary; read/write access to the replicated block device is allowed on the primary only.

  • 7.3 Heartbeat: Detects the liveness of a node; runs between a pair of nodes; the primary node runs all services and holds the cluster's virtual IP; when the primary goes down, the secondary takes over the virtual IP and starts all services.
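
A sketch of the takeover logic on the secondary, assuming heartbeats are periodic UDP datagrams and that silence past a deadline means the primary is dead. The port, deadline, and commands are illustrative (real Heartbeat uses UDP port 694 and its own resource scripts):

```python
import socket

PEER_DEADLINE = 3.0              # seconds of silence before declaring death
VIRTUAL_IP = "10.129.22.14/24"   # the cluster's virtual IP (see slide 7.4)
HEARTBEAT_PORT = 9694            # unprivileged stand-in for Heartbeat's UDP 694

def wait_for_failure():
    """Block until the primary's heartbeat datagrams stop arriving."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", HEARTBEAT_PORT))
    sock.settimeout(PEER_DEADLINE)
    while True:
        try:
            sock.recv(64)        # any datagram counts as "primary is alive"
        except socket.timeout:
            return               # deadline missed: primary presumed dead

def take_over():
    # A real takeover script would run these as root; the sketch only
    # prints them to stay side-effect free.
    print("would run: ip addr add %s dev eth0" % VIRTUAL_IP)
    print("would run: /etc/init.d/nfs start   # ...and the other services")

wait_for_failure()
take_over()
```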

  • 7.4 HA-NFS: (Figure) Host-a (10.129.22.12) and Host-b (10.129.22.13) are linked by DRBD messages and Heartbeat; the DRBD primary node exports the virtual NFS server address 10.129.22.14; after a failover, the surviving host becomes DRBD primary and takes over the virtual NFS server address.

  • 7.5 Cluster Filesystem: Users are divided into mutually exclusive groups; each user group's files are replicated between a separate pair of servers; each user group can tolerate the failure of one server.
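
A small sketch of the pairing scheme: each mutually exclusive user group lives on its own DRBD/Heartbeat pair, so each group independently tolerates one server failure (group names and addresses invented):

```python
# Each user group's files are replicated between a separate pair of servers.
GROUP_PAIRS = {
    "group1": ("10.129.22.12", "10.129.22.13"),
    "group2": ("10.129.22.14", "10.129.22.15"),
}

def servers_for(group, alive):
    """A group stays available while at least one server of its pair is up."""
    up = [host for host in GROUP_PAIRS[group] if host in alive]
    if not up:
        raise RuntimeError("both replicas of %s are down" % group)
    return up

# group1 survives the loss of 10.129.22.12:
print(servers_for("group1", alive={"10.129.22.13", "10.129.22.14"}))
```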

  • Talk outline (next: Future work).

  • 8. Future Work: A scalable filesystem, either a DRBD solution (Heartbeat version 2, changes to DRBD) or the Coda filesystem; filesystem performance measurement; server sizing.

  • Talk outline (next: Conclusion).

  • 9.1 Contribution: A scalable and highly available cluster design for Thin Client computing; HA solutions for the DHCP, TFTP, and NFS services; a filesystem built using the open source tools DRBD and Heartbeat; the Health Status Service, Load Balancer, and Cluster Manager developed from scratch.

  • 9.2 Cluster Characteristics: Generic design; no special hardware required; DHCP can tolerate n-1 failures; the Load Balancer also tolerates n-1 failures; each user group tolerates one server failure.

  • References: DHCP Failover Protocol, Internet Draft; Linux Terminal Server Project, http://www.ltsp.org; DRBD homepage, http://www.drbd.org; Heartbeat project, http://linuxha.org/Heartbeat

  • Thank You!