SynfinityCluster Installation/Administration Guide for Netcompo



Preface

This guide contains the installation and operation management procedures for constructing a cluster system in the Netcompo Series by using SynfinityCluster/HA for Netcompo.

Intended Readers

This guide is for system administrators who install and manage cluster systems in the Netcompo Series. This guide assumes the reader is familiar with SynfinityCluster, which is the basic section of cluster control. Therefore, some terms and item descriptions are omitted.

Structure of This Guide

This guide consists of the following parts.

Part 1 Overview

The overview gives a general summary of the cluster system in the Netcompo Series.

Part 2 Description of Individual Products

A detailed procedure for constructing a cluster system is explained for each product in the Netcompo Series.

Appendix A Example of Environment Definition

Appendix A gives an example of an environment definition using a typical Netcompo Series application.

Appendix B Command Reference

Appendix B explains the use of each command.

Manual Organization

The manual organization of the cluster system with SynfinityCluster is shown below.

The basic section of SynfinityCluster documentation consists of two manuals: "SynfinityCluster Handbook" and "SynfinityCluster Installation/Administration Guide." This guide describes the Netcompo Series among cluster agent product manuals indicated as references from the "SynfinityCluster Installation/Administration Guide."

We recommend that you read "SynfinityCluster Handbook" and "SynfinityCluster Installation/Administration Guide" together with this guide.


Trademarks

Solaris is a trademark of Sun Microsystems, Inc., U.S.

Ethernet is a registered trademark of Fuji Xerox Co., Ltd.

SNA is a trademark of International Business Machines Corporation.

Rev. 1 April 2001

No part of this guide may be reproduced or distributed in any form or by any means without prior permission of Fujitsu Limited. This guide is subject to change without prior notice.

All Rights Reserved, Copyright (C) Fujitsu Limited, 2001


Part 1 Overview

Part 1 is an overview of the cluster system in the Netcompo Series.


Chapter 1 Function Overview

The Netcompo Series can be operated in a cluster system by being linked with SynfinityCluster.

By clustering the communication functions in the Netcompo Series, a highly reliable and readily available system can be constructed. For example, if communication with the current operating system is disconnected because of a failed node or other failures, communication can be restored and resumed by reconnecting through the standby system that has taken over operation.

Figure 1.1 is an example of a cluster configuration of a host-server connection.

Figure 1.1 Example of cluster configuration in a host-server connection

In the example in Figure 1.1, the terminals connected on the LAN communicate with the host computer through the operating system in ordinary operating mode (figure at left). If a system failure occurs because of an abnormality in the operating system (figure at right), the communication is disconnected, but it can be restored and resumed by reconnecting through the standby system.


Chapter 2 System Configuration

Figures 2.1 through 2.3 are examples of cluster system configurations with Netcompo products.

Figure 2.1 Example 1 of cluster system configuration (1:1 standby system)


Figure 2.2 Example 2 of cluster system configuration (mutual standby system)

Figure 2.3 Example 3 of a cluster system configuration (N:1 standby system)

Figure 2.4 is a diagram of the software configuration in each node.

Figure 2.4 Software configuration of each node

The function of each software product is described below.

SynfinityCluster

Basic section constituting cluster control.

For information about the software configuration of the basic section for cluster control, see the "SynfinityCluster Handbook."

SynfinityCluster/HA for Netcompo (Cluster agent products)

Optional software that makes the Netcompo Series compatible with a cluster system.

In cluster systems, agent products exist for each function. A cluster system consists of a cluster basic section, cluster agent products, and other products. Agent products are classified as follows:

- HA: Cold standby system
- HS: Hot standby system
- SC: Scalable system

SynfinityCluster/HA for Netcompo is an HA product.

Netcompo products

The Netcompo Series products used in a cluster system are the same software products as those used in ordinary single-system operation.

The Netcompo products that can be used in cluster systems are listed below.

Lower protocols

- Netcompo WAN SUPPORT
- Netcompo FNA-LAN

Upper protocols

- Netcompo FNA-BASE
- Netcompo for SNA EXPANSION OPTION

Network applications

- Netcompo Communication Library
- Netcompo NMC Server
- Netcompo Communication Service
- Netcompo TN Gateway Service


Chapter 3 Cluster Topology

Tables 3.1 and 3.2 show the supported cluster topology for each product in the Netcompo Series.

Table 3.1 Supported cluster topology of lower products


Table 3.2 Supported cluster topology of upper products

Precautions

(1) Netcompo FNA-BASE, Netcompo for SNA EXPANSION OPTION

For the number of supported resources in one node in the cluster system, the total number of resources for cluster operation (Active and Standby nodes) and for non-cluster operation should not exceed the maximum number of supported resources in single-system operation.

(2) Netcompo Communication Library, Netcompo Communication Service

For the number of supported resources in one node in the cluster system, the total number of resources for cluster operation (Active and Standby nodes) and for non-cluster operation should not exceed the maximum number of supported resources in single-system operation.

Whether a circuit type is supported can be determined by referring to the entries for Netcompo FNA-BASE and Netcompo for SNA EXPANSION OPTION.


(3) Netcompo NMC Server

For the number of supported resources in one node in the cluster system, the total number of resources for cluster operation (Active and Standby nodes) and for non-cluster operation should not exceed the maximum number of supported resources in single-system operation.

For a host connection, refer to the Netcompo FNA-BASE and Netcompo for SNA EXPANSION OPTION to confirm whether a circuit type is supported.

In configurations other than 1:1 standby mode, different LANs should be used if the LNDFC protocol is used for the host connection and the TCP/IP protocol is used for a client connection, or if the LNDFC and TCP/IP protocols are mixed in connections to clients.

(4) Netcompo TN Gateway Service

For a host connection, refer to the Netcompo FNA-BASE and Netcompo for SNA EXPANSION OPTION to confirm whether a circuit type is supported.

In configurations other than 1:1 standby mode, different LANs should be used for the host connection and the client connection if the LNDFC protocol is used for the host connection.


Chapter 4 Procedure for Cluster Construction

This chapter describes the procedure used to construct a cluster system with the Netcompo Series.

4.1 Design

The following procedure is used to design a cluster system:

- Determine what jobs are to be performed with the cluster.

- Determine the cluster topology.

- Determine the service.

- Determine the resources required for the service.

For the general design procedures, see the "SynfinityCluster Installation/Administration Guide." This section discusses only Netcompo-related subjects.

4.1.1 Determining jobs to be performed with the cluster

Determine which products in the Netcompo Series are used in which network configuration and what cluster operations are performed.

4.1.2 Determining the cluster topology

After the contents of the job are determined, determine the cluster topology. The cluster topology includes operation by standby class (Standby, Mutual standby) and operation by scalable class. Keep in mind the characteristics of each operation in determining an appropriate cluster topology.

For the supported cluster topologies of each product, see Chapter 3, "Cluster Topology," in Part 1 of this guide.

4.1.3 Determining the service

Next, determine the service. Specifically, from the nodes defined in the domain, select the nodes to be used, based on the determined cluster topology.

4.1.4 Determining the resources required for the service

After the service is determined, determine the resources (network resources) required for the service. Table 4.1 lists the resources provided for each product.

Table 4.1 Resources (network resources) for each product

Netcompo WAN SUPPORT
  Resource: Interface (circuit)
  Registration: The system administrator executes the registration command while defining the environment.

Netcompo FNA-LAN
  Resource: Interface (LAN)
  Registration: The system administrator executes the registration command while defining the environment.

Netcompo FNA-BASE / Netcompo for SNA EXPANSION OPTION
  Resources: Communication path (FNA/SNA primary station), PU (FNA/SNA secondary station)
  Registration: The system administrator executes the registration command while defining the environment.

Netcompo Communication Library
  Resource: Session with the host
  Registration: The user creates and registers the resource in accordance with the job.

Netcompo NMC Server
  Resource: Communication path between host and client
  Registration: Registered automatically when the agent product is installed, or the user registers it by editing a template.

Netcompo Communication Service
  Resource: Communication path between host and client
  Registration: No registration is required.

Netcompo TN Gateway Service
  Resource: Communication path between host and client
  Registration: No registration is required.

4.1.5 Design example

Figure 4.1 is an example of a cluster design.


Figure 4.1 Example of cluster design

The design procedure for the example in Figure 4.1 is described below.

Determining the jobs to be performed with the cluster

Suppose application conversation services are used as network applications. (Data is communicated through FNA application conversations; these jobs are performed with the cluster.)

In this case, Netcompo FNA-BASE is required for the connection with the host. Since two types of transmission channels are used, a LAN (Ethernet) and a leased circuit, Netcompo FNA-LAN and Netcompo WAN SUPPORT are required as lower protocols.

Defining the cluster topology

In the above example, the 1:1 standby configuration is adopted to communicate data through FNA application conversation at the same node.

Determining the service

Because the 1:1 standby configuration is adopted, two nodes (Node 0 and Node 1) are prepared, and the service is defined with this pair. The service name is defined as Service01.

Determining the resources required for the service

Each of the following resources is prepared for Node 0 and Node 1.

- LAN interface: hme0 (hme1 and hme2 are used for the path between the cluster systems.)
- Circuit interface: pc4a-00-l00
- Circuit switching unit: SWU2001
- PU: PU01 (for LAN connection), PU02 (for circuit connection)
- Session with the host used in the application conversation service: APL01


- Session with the host used in the application conversation service: APL02

4.2 Installation

The general installation procedure of cluster systems is covered in the "SynfinityCluster Installation/Administration Guide." Items requiring Netcompo installation are listed below.

- Installing the applications

- Setting up the takeover network

- Defining the application environment

- Setting up the resources

- Setting up the service

Figure 4.2 shows the relationship between the procedure for constructing the entire cluster and the Netcompo-related installation.


Figure 4.2 Relationship between procedures for constructing the entire cluster and Netcompo-related installation

4.2.1 Installing the applications

Installing Netcompo products

Install the Netcompo products required for all nodes in the same way as for a single-system configuration. For details, see the manuals of the individual products, software guides, and installation guides. For products requiring a license (FLEXlm), specify the license here.

Installing SynfinityCluster/HA for Netcompo

Install the following cluster agent product if it has not already been installed.

SynfinityCluster/HA for Netcompo

For details, see the installation guide supplied with the product.

4.2.2 Setting up the takeover network

Depending on the type of network used, set up the following takeover network as required.

Setting the circuit switching unit

If Netcompo WAN SUPPORT is used, set the circuit-switching unit.

Setting the MAC address takeover

If Netcompo FNA-LAN is used, set the MAC address takeover.

Setting the IP address takeover

If TCP/IP is used as the lower protocol, set the IP address takeover.

Setting is required when:

- The FNA on TCP/IP protocol is used in Netcompo FNA-BASE,

- TCP/IP is used for client connection in Netcompo NMC Server, or

- The Netcompo TN Gateway Service is used

Each setting method is described in the "SynfinityCluster Installation/Administration Guide."

4.2.3 Defining the application environment

For each product, define the environment, edit the state transition procedure, and register the resources.

The specific definition method is in the chapter corresponding to each product in Part 2, "Description of Individual Products."

Note that it is assumed that the setup of the related lower protocols (Netcompo WAN SUPPORT, Netcompo FNA-LAN, Netcompo FNA-BASE, Netcompo for SNA EXPANSION OPTION) has been completed before the individual products are set up.

If using an SNA LAN connection, see the "Netcompo FNA-LAN Handbook" for information about Netcompo FNA-LAN and the "Network Function User's Guide (Mainframe Connection)" for information about the Netcompo for SNA EXPANSION OPTION.

Figure 4.3 shows the process flow of defining the environment. Table 4.2 lists the sequence of defining the environments.


Figure 4.3 Process flow of environment definition

Table 4.2 Sequence of environment definition

Application (Note 1), server connection:
- WAN connection: A (1) -> E (10) -> G
- LNDFC connection: B (2) -> D (5) -> E (10) -> G
- FNA on TCP/IP connection: B (3) -> C (4) -> E (10) -> G

Netcompo NMC Server:
- Server connection, LNDFC connection: B (2) -> D (6) -> G
- Client connection, TCP/IP connection: B (3) -> C (7) -> G

Netcompo TN Gateway Service:
- Server connection, WAN connection: A (1) -> E (11) -> H
- Server connection, LNDFC connection: B (2) -> D (5) -> E (11) -> H
- Server connection, FNA on TCP/IP connection: B (3) -> C (4) -> E (11) -> H
- Client connection, TCP/IP connection: B (3) -> C (8) -> H

Note 1) Includes the following applications:

- Netcompo Communication Library

- Netcompo Communication Service

4.2.4 Setting up the resources

Usually, resources are set when the environment is defined for each Netcompo product. In some cases, however, the user must create a state transition procedure. In such a case, resources are set up at that time.

4.2.5 Setting up the service

After all resources are set up, use the cluster management view for setting up the service for each job. For details, see the "SynfinityCluster Installation/Administration Guide."

4.3 Backup/Restore Function

4.3.1 Overview

The Backup/Restore function saves and restores the environment definition files related to Netcompo products to and from the specified directory or external storage media.

This function is used when:

- SynfinityCluster or SynfinityCluster/HA for Netcompo is upgraded or improved.

- The entire system is replaced.

4.3.2 Environment definition files to be backed up/restored

Table 4.3.1 lists the environment definition files to be saved and restored.

In the backup process, the corresponding environment definition files are saved for each of the following network products that is installed.

In the Restore process, the saved environment definition files are restored as is. Therefore, be sure to install any required products before restoring the environment definition files. If a product is installed after the restoration, perform the restoration again.

Table 4.3.1 Environment definition files to be backed up/restored

Netcompo WAN SUPPORT:
  /etc/opt/FJSVwan/etc/wanconf
  /etc/opt/FJSVwan/etc/hdlc/*
  /etc/opt/FJSVwan/etc/ipco/*
  /etc/opt/FJSVwan/etc/isdn/*
  /etc/opt/FJSVwan/etc/x25/*
  /etc/opt/FSUNnet/hdlc/*
  /etc/opt/FSUNnet/ipco/*
  /etc/opt/FSUNnet/isdn/*
  /etc/opt/FSUNnet/nlco/*

Netcompo FNA-LAN:
  /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL
  /etc/opt/FSUNnet/lndfc/*
  /etc/opt/FSUNnet/fjllc/*
  $BASEDIR/FSUNlndfc/bin/lndfinit
  $BASEDIR/FSUNlndfc/bin/lndfinit.pkg

Netcompo FNA-BASE / Netcompo for SNA EXPANSION OPTION:
  /etc/opt/FSUNnet/cluster/vcp/"service name"
  /etc/opt/FSUNnet/cluster/vcph/"service name"
  /etc/opt/FSUNnet/vcp/*
  /etc/opt/FSUNnet/vcph/*
  /etc/opt/FSUNnet/ticf/*

Netcompo Communication Library:
  None (has no environment definition files)

Netcompo NMC Server:
  /etc/opt/FSUNnet/nmcsv/*
  /etc/opt/FJSVclntc/rc.d/SystemState2/cluster.FSUNnmcsv

Netcompo Communication Service:
  /etc/opt/FSUNnet/cmsv/path/pathfile
  /etc/opt/FSUNnet/cmsv/hgrp/*
  /etc/opt/FSUNnet/cmsv/wgrp/*

Netcompo TN Gateway Service:
  /etc/opt/FSUNnet/tngw/htgrp/*
  /etc/opt/FSUNnet/tngw/lugrp/*
  /etc/opt/FSUNnet/tngw/lufile
  /etc/opt/FSUNnet/tngw/nextfl
  /etc/opt/FSUNnet/tngw/termtype
  /etc/opt/FSUNnet/tngw/tnparm

*: $BASEDIR refers to the installation destination of the product.
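As a rough illustration of what the backup and restore amount to (the supported mechanism is the clntc_bkrs command described in Appendix B, "Command Reference"), the following sketch archives and restores one of the files in Table 4.3.1 with standard tools. The temporary directories are stand-ins for the real root directory and the backup destination:

```shell
# Illustration only: archive and restore one Table 4.3.1 file with tar.
# The supported mechanism is the clntc_bkrs command (see Appendix B).
ROOT=$(mktemp -d)      # stand-in for the system root "/"
BACKUP=$(mktemp -d)    # stand-in for the backup directory or device
RESTORE=$(mktemp -d)   # stand-in for the root of the system being restored

# A sample environment definition file (path taken from Table 4.3.1).
mkdir -p "$ROOT/etc/opt/FJSVwan/etc"
echo 'sample wanconf' > "$ROOT/etc/opt/FJSVwan/etc/wanconf"

# Backup: save the file with its path kept relative to the root.
(cd "$ROOT" && tar cf "$BACKUP/netcompo_env.tar" etc/opt/FJSVwan/etc/wanconf)

# Restore: the products must already be installed at this point;
# the saved files are extracted as is.
(cd "$RESTORE" && tar xf "$BACKUP/netcompo_env.tar")
cat "$RESTORE/etc/opt/FJSVwan/etc/wanconf"
```

Note that, as Section 4.3.4 explains, the restored files are only usable when the products are installed at the same destination as at save time.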

4.3.3 Backing up/restoring the environment definition files

Before replacing cluster products or Netcompo products, save the environment definition files by using the clntc_bkrs command at each node.

To replace the entire system, save the files by specifying the name of the device to which the environment definition information is saved.

To restore the saved environment definition files, first install the cluster products and Netcompo products, then restore the environment definition information by using the clntc_bkrs command at each node.

If the entire system is replaced, restore the environment definition files by specifying the name of the device in which they were saved.

For more information about the clntc_bkrs command, see Appendix B, "Command Reference."

4.3.4 Precautions

When restoring the environment definition files, the installation destination directory of the cluster products and the Netcompo products should be the same as the installation destination at the time they were saved. If the installation destination at the time of saving differs from that at the time of restoration, the definition information cannot be corrected with the environment setting menu even if the environment definition files are restored correctly.

/etc/opt/FJSVwan/etc/wanconf in WAN SUPPORT and /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL in FNA-LAN are environment definition files requiring correction before resources are registered after restoration. For details, see "Changing the WAN environment setting" in Section 1.1.5, "Environment setting procedure," and "Editing the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file" in Section 1.2.5, "Environment setting procedure," in Part 2, "Description of Individual Products."


4.4 Function for Copying to All Nodes

4.4.1 Overview

This function calls node-to-node copy commands of individual network products in one operation and automatically copies the environment definition files to all the nodes.

The user can copy the environment definition information to all the nodes that constitute the cluster by copying from the defined node. In addition to copying, this function also supports resource registration.

This function is used when:

- The copy source and copy destination nodes have the same system configuration, and the environment definition information in the source node can also be used in the destination node.

When the all-node copy function is used, the node-to-node copy command is not required for each network product.

4.4.2 Copy procedure by cluster topology

All-node copy in 1:1 standby mode

Figure 4.4.1 Copy in 1:1 standby mode

The system-recognized adapter configuration between nodes is identical, and the copy destination definition needs no correction. -> The all-node copy function is effective and recommended.

The system-recognized adapter configuration between nodes is different, requiring correction of the copy destination definition. -> Although the all-node copy function is effective, it is not recommended.


Environment definition using the all-node copy function in 1:1 standby mode

Figure 4.4.2 Definition in 1:1 standby mode

All-node copy in 1:1 mutual standby mode

Figure 4.4.3 Copy in 1:1 mutual standby mode

The system-recognized adapter configuration between nodes is identical, and the copy destination definition needs no correction. -> The all-node copy function is effective and recommended.

The system-recognized adapter configuration between nodes is different, requiring correction of the copy destination definition. -> Although the all-node copy function is effective, it is not recommended.


Environment definition using the all-node copy function in 1:1 mutual standby mode

Figure 4.4.4 Definition in 1:1 mutual standby mode

All-node copy in N:1 standby mode (1 standby node)

Figure 4.4.5 Copy in N:1 standby mode (1 standby node)

The system-recognized adapter configuration between nodes is identical, and the copy destination definition needs no correction. -> The all-node copy function is effective and recommended.

The system-recognized adapter configuration between nodes is different, requiring correction of the copy destination definition. -> Although the all-node copy function is effective, it is not recommended.


Environment definition using the all-node copy function in N:1 standby mode (1 standby node)

Figure 4.4.6 Definition in N:1 standby mode (1 standby node)

4.4.3 Effectiveness of function for copying to all nodes

Table 4.4.1 lists the cluster configurations for which all-node copying can be used effectively.

Legend: AA: Strongly recommended; A: Recommended; X: Not recommended

Table 4.4.1 Effectiveness of the all-node copy function

All-node copy processing:
- 1:1 standby mode (standby): AA
- 1:1 standby mode (mutual standby): AA
- N:1 standby mode (1 standby node): A
- N:1 standby mode (multiple standby nodes): X

Automatic resource registration processing:
- 1:1 standby mode (standby): AA
- 1:1 standby mode (mutual standby): AA
- N:1 standby mode (1 standby node): X
- N:1 standby mode (multiple standby nodes): X

For multiple standby nodes in N:1 standby mode, use of the all-node copy function is not recommended because it does not work effectively. The most effective method is to define the environment and individually register the resources in each node.

4.4.4 Copying the environment definition files to all nodes

In a cluster configuration in 1:1 standby mode or 1:1 mutual standby mode, all-node copy and resource registration of the environment definition files are performed with the clntc_syncfile command.


In the N:1 standby cluster configuration, only all-node copy of the environment definition files is performed, so the clntc_syncfile command is used for the copy only.

For information about the commands for registering resources of individual nodes after definition copy, see Section X.X.5, "Environment setting procedure," of each Netcompo product.

For information about the clntc_syncfile command, see Appendix B, "Command Reference."


Part 2 Description of Individual Products

Part 2 describes a detailed procedure for constructing a cluster system for each product in the Netcompo Series.


Chapter 1 Lower Protocols

This chapter describes a procedure for constructing a cluster system for each lower protocol product in the Netcompo Series.

- Netcompo WAN SUPPORT

- Netcompo FNA-LAN

1.1 Netcompo WAN SUPPORT

1.1.1 Product overview

Netcompo WAN SUPPORT is software for managing multi-port communication adapters for communication in the HDLC and X.25 protocols.

1.1.2 Overview of support to a cluster system

Netcompo WAN SUPPORT supports the following cluster topologies:

- Standby mode

- Mutual standby mode

Netcompo WAN SUPPORT can register resources for the state transition procedure. To register such resources, execute the commands provided by WAN SUPPORT after setting the network environment.

When all nodes that make up the cluster system have the same line configuration, executing the commands provided by WAN SUPPORT after the environment setting of one node has been completed distributes that node's settings to all nodes and automatically registers the resources for the state transition procedure in all nodes.

Since resources are generated on a line-by-line basis, resources for each line can be set to a service.

1.1.3 Standby mode

Figure 1.1.1 shows normal operation in standby mode. Node 0 is the active node and Node 1 is the standby node.

In normal operation, Netcompo WAN SUPPORT on the active node handles communications, and Netcompo WAN SUPPORT on the standby node is left inactive.

Figure 1.1.1 Example of the active and standby system (before switching)

If a failure occurs in Node 0 (active node), all lines registered as resources in Service A are switched to the standby system by the line switching unit, and the lines are activated. The standby system then takes over network processing of the application related to Service A. While the system is switched, all communication sessions related to Service A performed in Node 0 are disconnected.


Figure 1.1.2 Example of the active and standby system (after switching)

1.1.4 Mutual standby mode

Figure 1.1.3 is an example of the mutual standby mode during normal operation.

In Node 0, Application 1 is used and Application 2 is on standby, while in Node 1, Application 1 is on standby and Application 2 is used. Thus, each application is operating in one node and on standby in the other node.

In the example of Figure 1.1.3, the settings of the network environment must be the same in Nodes 0 and 1.

Figure 1.1.3 Example of the active and standby system in the mutual standby mode (before switching)

In the example of Figure 1.1.3, if Node 0 is down, the line for Service A is switched to Node 1 and is activated. At this time, the application registered in Service A is also activated in Node 1.

In this way, operations of Application 1 are taken over by Node 1. When the line is switched, all communication sessions in Service A are temporarily disconnected.


Figure 1.1.4 Example of switching operation in mutual standby mode (after failover)

1.1.5 Environment setting

Procedure for a cluster environment definition

Figure 1.1.5 shows a procedure for defining a WAN SUPPORT cluster environment.

Figure 1.1.5 Procedure for defining a WAN SUPPORT cluster environment

An outline of the environment definition is described below.

The term "service" in this section refers to a service for the cluster system. Be careful with this term because it is different from a network service (i.e., /etc/opt/FJSVwan/etc/hdlc/services, or a parameter such as "service name" entered for the simplified settings of X.25 or HDLC in fnbtool).

The procedure for setting the environment is described below. (When WAN SUPPORT uses TCP/IP, a TCP/IP interface can be implemented in a cluster system only if the setting described below is specified. Because an IP address is taken over in this setting, the IP address does not have to be taken over by setting the "network takeover" in the cluster management view.)

1. Using fnbtool (and ipcosetup, if TCP/IP on a WAN is used) or an editor, set the WAN environment in the same way as in a single system.

2. Check the connection. (Not required, but recommended. If a communication error occurs, the user can easily identify whether the problem is in the network parameters or in the settings for the cluster system.)

3. Using fnbtool (and ipcosetup, if TCP/IP on a WAN is used) or an editor, change the lines that are to be used in the cluster system.

4. Execute the command of WAN SUPPORT to set resources.

5. After registering the resources of WAN SUPPORT and other related products, start "Service setting" from the cluster management view to set the WAN SUPPORT resource (line) to the service.

The procedures for individual settings are explained below.

Environment setting for ordinary WAN

This procedure is the same as the ordinary WAN setting. For details about this setting, see the "Netcompo WAN SUPPORT Guide."

Environment setting for a cluster system (line setting for a cluster system)

Change the line settings based on one of the following procedures so that the desired lines can be used in the cluster system. If the environment settings may be changed again later with fnbtool or ipcosetup, use fnbtool or ipcosetup to make this change as well. Because fnbtool and ipcosetup take the number of spaces into account during processing, operation may be incorrect if a file is changed using an editor.

It is recommended that communication connections be checked to make sure that network settings are correct before performing the following change (before changing the cluster-related settings in the network environment).

- If WAN is set with fnbtool, select "yes" for the "cluster selection" item of the lines that are used for the cluster system. If using ipcosetup, enter "y" in the "Using by cluster service" of the interface for the cluster system. If the cluster system is not in use, this setting cannot be selected.

- If WAN is set with an editor, edit /etc/opt/FJSVwan/etc/wanconf, as shown below. Do not add any blank lines or comments. Delete the keyword "none" from the setting of the lines that are used in the cluster system.

Example:

Before correction

$BASE_DIR/FJSVwan/usr/bin/wanconfig pc4a-00-l00 hdlc hostnrms $1 none &

After correction

$BASE_DIR/FJSVwan/usr/bin/wanconfig pc4a-00-l00 hdlc hostnrms $1 &
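When many lines must be changed, the edit shown above can also be scripted. The following sketch works on a sample copy rather than the live /etc/opt/FJSVwan/etc/wanconf (back up the real file first); the line name pc4a-00-l00 is taken from the example above, and the sed expression is an illustration, not part of the product:

```shell
# Sketch: remove the trailing "none" keyword from a wanconfig entry.
# A sample line mimicking the format shown above (single quotes keep
# $BASE_DIR and $1 literal, as they appear in the file).
printf '%s\n' '$BASE_DIR/FJSVwan/usr/bin/wanconfig pc4a-00-l00 hdlc hostnrms $1 none &' > wanconf.sample

# Delete " none" only on the line for the target line name.
sed '/pc4a-00-l00/s/ none &$/ \&/' wanconf.sample
```

The same expression, with the line name adjusted, applies to each line that is to be used by the cluster.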

- If TCP/IP is used on a WAN, the user must edit /etc/opt/FJSVwan/etc/ipco/wanipstart as shown below. Do not add any blank lines or comments. Specify the keyword $RSC in the setting of the interface for the cluster.

Example:

Before correction

wanifconfig ipco0 psb1 $UPDOWN

After correction

wanifconfig ipco0 psb1 $UPDOWN $RSC
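Appending the $RSC keyword to a wanipstart entry can be sketched the same way. The interface name ipco0 comes from the example above; edit a backup copy of /etc/opt/FJSVwan/etc/ipco/wanipstart, not the live file, until the result has been verified:

```shell
# Sketch: append the $RSC keyword to the interface entry for ipco0.
printf '%s\n' 'wanifconfig ipco0 psb1 $UPDOWN' > wanipstart.sample

# Add " $RSC" at the end of the matching line only.
sed '/^wanifconfig ipco0 /s/$/ $RSC/' wanipstart.sample
```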

- If WAN uses TCP/IP, associate the host name and the IP address of WAN SUPPORT with the corresponding name-to-address resolution mechanism. The easiest way is to add the host name and IP address to the /etc/hosts file. Note that the clwanset command executed in the following procedure does not copy the /etc/hosts file.
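An /etc/hosts entry for this purpose could look like the following sketch. The host name wanhost1 and the address 192.0.2.10 are placeholders, not values from this guide, and a sample file is used instead of the real /etc/hosts; because clwanset does not copy /etc/hosts, the entry must be added on every node:

```shell
# Sketch: register a WAN SUPPORT host name in a hosts-format file.
# On a real node, the same line would be appended to /etc/hosts.
printf '192.0.2.10\twanhost1\n' > hosts.sample

# Verify that the name can now be looked up in the file.
grep -w wanhost1 hosts.sample
```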

Registering resources and copying to other nodes

Execute the following command to register specified lines as a resource to a cluster system:

Command

/opt/FJSVwan/usr/bin/clwanset [-a]

Execute the above command in a node where the WAN environment is defined.

If "-a" is added to the command, the WAN environment settings on the node where the command was executed are copied to all nodes, and the resources related to WAN SUPPORT are registered on all nodes at one time. Note that the settings of lines not specified for the cluster are also copied. Therefore, when the configurations and settings of the adapters and lines managed by WAN SUPPORT are the same on all nodes, including lines not specified for the cluster, this option significantly reduces the setup work compared with defining the WAN settings on each node separately.

The user should add "-a" to the command only when the configurations and settings of the adapters and lines related to WAN SUPPORT are the same on all nodes.

This command reconstructs part of the environment setting files. At this time, comment lines and lines with incorrect settings (lines with incorrect command syntax) are deleted. When fnbtool or ipcosetup is used, such lines do not exist (fnbtool and ipcosetup do not create them); when an editor is used, however, the user should be aware of this behavior.

Be sure that resource registration has been completed for the line switching unit that connects the relevant line. If registration has not been completed, register the resources before "Setting resources to the service," which is explained in the next section.

Setting resources to the service

After the resource registration of WAN SUPPORT and other related products has been completed, start "Service setting" from the cluster management view to set WAN SUPPORT resources (lines) to the service. The resource name of WAN SUPPORT is the line name.

At this time, be sure to specify the resources of the line switching unit to be connected to the relevant lines.

The setting of the cluster environment definition has thus been completed. After rebooting the system, the user can use the cluster system.

Note that the cluster environment definition that is already set must be deleted and set again in the following cases:

- The cluster configuration (e.g., node configuration, hardware configuration, cluster topology) is changed, or

- SynfinityCluster is reinstalled, or the cluster system is changed into a single system.

The procedure for deleting a cluster environment definition is described below.

Deleting a cluster environment definition

Figure 1.1.6 shows a procedure for deleting a cluster environment definition.

Figure 1.1.6 Procedure for deleting a WAN SUPPORT cluster environment definition

Deleting resources from service

Delete resources of the corresponding WAN SUPPORT from the cluster management view.

Changing WAN environment settings

Change the line settings so that the cluster stops using the lines specified for the cluster.

- If WAN is set using fnbtool, change the "Cluster selection" of the line specified for the cluster from "yes" to "no." If ipcosetup is executed, change the "Using by cluster service" to "n." If the cluster system is not in operation, this setting cannot be selected.

- If changing this setting with an editor, edit /etc/opt/FJSVwan/etc/wanconf, as shown below. Do not add any blank lines or comments. Specify the keyword "none" between $1 and & in the setting of the line specified for the cluster.

Example:

Before correction

$BASE_DIR/FJSVwan/usr/bin/wanconfig pc4a-00-l00 hdlc hostnrms $1 &

After correction

$BASE_DIR/FJSVwan/usr/bin/wanconfig pc4a-00-l00 hdlc hostnrms $1 none &

- If TCP/IP is used on a WAN, the user must edit /etc/opt/FJSVwan/etc/ipco/wanipstart, as shown below. Do not add any blank lines or comments. Delete the keyword $RSC from the setting of the interface for the cluster.

Example:

Before correction

wanifconfig ipco0 psb1 $UPDOWN $RSC

After correction

wanifconfig ipco0 psb1 $UPDOWN

1.1.6 Precautions

- When a switching operation is started in a cluster system, all lines included in the service to be switched are deactivated on the active system, the line switching unit then performs switching, and the standby system is activated. Therefore, communication temporarily stops on all lines associated with the service containing the line being switched.

- When the environment is set, the cluster system must be in operation. The services associated or to be associated with the line that has the setting to be defined or changed must be stopped by executing the clstopsvc command. For example, if the service of the line resource pc4a-00-l01 is to be changed from svcA to svcB, both svcA and svcB must be in a stop state.

- To change the WAN environment with fnbtool or ipcosetup after the cluster environment is defined, use fnbtool or ipcosetup to change the settings for the cluster system. Once the settings have been changed with an editor, fnbtool and ipcosetup can no longer be relied on to work correctly.

- One line (resource) cannot be associated with multiple services. (On the other hand, multiple lines (resources) can be associated with one service.)

- If the settings and configurations of the adapters and lines related to WAN SUPPORT are not the same on all nodes, the /opt/FJSVwan/usr/bin/clwanset command must not be executed with the "-a" option. If it is executed with "-a" in this case, the environment definition becomes the same on all nodes and the WAN environment becomes unusable.

- Executing the /opt/FJSVwan/usr/bin/clwanset command reconstructs part of the environment setting files. At this time, comment lines and lines with incorrect command syntax are deleted. If fnbtool or ipcosetup is used, such lines do not exist (because they are not created by fnbtool or ipcosetup). If an editor is used, note this precaution.

- Netcompo WAN SUPPORT does not provide a resource monitor. Therefore, a failure on a line cannot be detected and a line cannot be switched unless the user creates a resource monitor.

- To change WAN settings after the cluster environment has been set, change the settings using fnbtool or ipcosetup or using an editor. After the settings are changed, perform "Environment setting for a cluster (line setting for cluster)" -> "Registering resources and copying to other nodes" -> "Setting resources to the service," as in the initial setting procedure. To prevent inconsistencies, the command executed in "Resource registration" deletes all WAN resources already registered and then registers them again. Therefore, in "Setting resources to the service," check not only the line whose settings were changed but also the service settings of all lines (resources).

- Executing the clwanset command is the only way to register resources of WAN SUPPORT and register the state transition procedure.

- During cluster control operation, do not stop WAN SUPPORT (do not execute /etc/opt/FJSVwan/usr/bin/waninit stop). When WAN SUPPORT is stopped with waninit, the common daemons required by all lines are also stopped, so all lines operating at that point stop, regardless of whether the cluster uses them. Consequently, cluster switching occurs after a waninit stop operation, and because the daemons are stopped, the WAN lines may not activate correctly even when line activation is attempted. To stop WAN SUPPORT during cluster control operation: stop all services containing WAN resources by executing the clstopsvc command, stop the upper protocols using WAN SUPPORT (no special operation is required for TCP/IP interfaces managed by WAN SUPPORT, because WAN SUPPORT controls their start and stop), and then stop WAN SUPPORT by executing the /etc/opt/FJSVwan/usr/bin/waninit command. To restart WAN SUPPORT, follow the procedure in reverse: start WAN SUPPORT by executing the /etc/opt/FJSVwan/usr/bin/waninit command, start the upper protocols of WAN SUPPORT, and then start all services containing WAN resources by executing the clstartsvc command.
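The stop/restart order above can be sketched as a dry-run script. The service name SVC0 and the exact clstopsvc/clstartsvc argument syntax are assumptions for illustration, as is the "start" argument to waninit; the run() wrapper only echoes each step so the sequence can be reviewed before anything is executed:

```shell
# Dry-run sketch of stopping and restarting WAN SUPPORT during
# cluster control operation. run() echoes instead of executing.
run() { echo "+ $*"; }

# --- Stopping ---
run clstopsvc SVC0                          # stop every service containing WAN resources
# (stop any upper protocols using WAN SUPPORT here; TCP/IP interfaces
#  managed by WAN SUPPORT need no special operation)
run /etc/opt/FJSVwan/usr/bin/waninit stop   # then stop WAN SUPPORT itself

# --- Restarting (reverse order) ---
run /etc/opt/FJSVwan/usr/bin/waninit start  # start WAN SUPPORT first ("start" argument assumed)
run clstartsvc SVC0                         # then restart the services
```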

1.2 Netcompo FNA-LAN

1.2.1 Product overview

Netcompo FNA-LAN is software for communicating with an FNA host using the LNDFC protocol and with an SNA host using the LLC TYPE2 protocol. The LNDFC protocol is used for communications with FNA (Fujitsu Network Architecture) hosts via Ethernet or FastEthernet transmission channels, and the LLC TYPE2 protocol is used for communications with SNA (Systems Network Architecture) hosts via Ethernet or FastEthernet transmission channels.

Here, the cluster environment setting for the LNDFC protocol is explained. For the cluster environment setting of the LLC TYPE2 protocol, see the "Netcompo FNA-LAN User's Guide."

1.2.2 Overview of support to cluster system

Netcompo FNA-LAN supports the following cluster topologies:

- Standby mode

- Mutual standby mode

Figure 1.2.1 is an example of a network configuration using FNA-LAN during standby.

Figure 1.2.1 Example of network configuration using FNA-LAN

1.2.3 Standby mode

In standby mode, communication in the normal operation state is performed using the LNDFC protocol via the interface (device) on the active node.

If a failure occurs in the active node, the interface (device) is switched to the standby node, and the standby node takes over communication that was operated by the active node.

In this case, communication using the LNDFC protocol is temporarily disconnected and then reconnected via the standby node after communication is reestablished.

1.2.4 Mutual standby mode

Assume that the following states are the normal operation states in a mutual standby system:

Node 0 Active system (hme3), Standby system (hme4)

Node 1 Active system (hme4), Standby system (hme3)

In the normal operation state, the LNDFC connection in Node 0 uses hme3 and the LNDFC connection in Node 1 uses hme4 to communicate with the remote host.

The following description shows the sequence of events when a failure occurs, from the system transition to the failback operation performed after the failed system is restored.

If a failure occurs in Node 0, the interface hme3 in the standby system in Node 1 is activated, and then communication with remote hosts is switched from Node 0 to Node 1.

When Node 0 is restored, failback processing is performed by the operator. During this processing, the LNDFC connection used for communication is disconnected.


After the failback processing, communication with remote hosts is switched back from Node 1 to Node 0.

1.2.5 Environment setting

Setting a cluster environment definition

The procedure for adding a cluster environment definition of FNA-LAN is described below and shown in Figure 1.2.2.

Figure 1.2.2 Procedure for adding a cluster environment definition of FNA-LAN

Setting FNA-LAN environment

To switch a cluster interface (device) used in Netcompo FNA-LAN, specify the network environment for FNA-LAN and the cluster environment. The procedure for defining the environment of FNA-LAN is described below.

Define the network environment of all interfaces (devices) in both nodes. The network environment is defined using fnbtool or an editor, in the same procedure as an ordinary single system. For details, see "Netcompo FNA-LAN User's Guide".

After definition has been completed, the interfaces (devices) can be operated from the upper protocols using FNA-LAN by rebooting the system or activating FNA-LAN.

In the configuration example shown in Figure 1.2.1, interface hme0 is used by the cluster system and interface hme3 is not. The FNA-LAN cluster environment definition is described below.

Environment setting for a cluster (active node)

For details on the network environment definition of the active node, see "Netcompo FNA-LAN User's Guide". Then, edit the following files.

Target file name: /etc/opt/FSUNnet/lndfc/lndfc.config

Description: An FNA-LAN activation file defining all activated interfaces, created with fnbtool or an editor. This file is the source when the definition is copied to the activation file for the cluster.

Target file name: /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL

Description: An activation file for the FNA-LAN cluster. After copying the definition from the lndfc.config file, edit this file to specify the cluster interfaces and noncluster interfaces.

The following section describes how to edit the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file.

This file can be edited with an editor only, not with fnbtool.

Editing the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file

In the configuration example in Figure 1.2.1, the /etc/opt/FSUNnet/lndfc/lndfc.config file is defined as shown below.

Copy the definition of all interfaces defined in the /etc/opt/FSUNnet/lndfc/lndfc.config file to the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file.

The contents of the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file before editing are shown below.

After copying, the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file is defined as shown below.

To define cluster interfaces and noncluster interfaces, specify the following in the resource-ID parameter in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file.

Cluster interface: Space (blank)

Noncluster interface: none

In the example of Figure 1.2.1, the resource-ID parameter for hme0 remains blank because it is a cluster interface, and "none" is specified for the resource-ID parameter for hme3 because it is a noncluster interface.

The /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file after editing is shown below.
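The file contents referred to above are not reproduced in this extract; the exact entry format is documented in the "Netcompo FNA-LAN User's Guide." Purely to illustrate the blank-versus-"none" convention, a hypothetical two-entry fragment (field layout invented for this sketch) might look like:

```
hme0  <other parameters...>          <- resource-ID left blank: cluster interface
hme3  <other parameters...>  none    <- "none": noncluster interface
```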


Environment setting for a cluster system (Standby node)

For the environment definition of the standby node, define the environment in the same way as for the active node, or define the FNA-LAN environment by executing the definition file distribution (lndf_syncfile) command.

To define the environments of individual nodes, be sure that the parameter values related to the protocol are the same in each interface (device).

It is recommended that the definition file distribution (lndf_syncfile) command be executed only when the LAN adapter configuration and interface names are the same on each node.

If the definition file distribution (lndf_syncfile) command is executed when the LAN adapter configuration or interface names differ between nodes, the environment definition files are copied, but the interfaces (devices) may not match. In that case, specify the environment again using an editor.

After FNA-LAN definition has been completed, define the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file in the same way as the definition of the active node.

Copying to other nodes

By executing the definition file distribution (lndf_syncfile) command, the environment definition information is distributed to the other nodes. The environment definition files that are copied and the file that is not copied are listed below.

Execute this command in a node in which all devices have been inactivated and all environment definitions have been completed.

Command

/opt/FSUNnet/bin/lndf_syncfile

Environment definition files copied

- /etc/opt/FSUNnet/lndfc/hosts

- /etc/opt/FSUNnet/lndfc/lndfc.config

- /etc/opt/FSUNnet/lndfc/networks

- /etc/opt/FSUNnet/lndfc/parameters

Environment definition file that is not copied

- /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL

Since the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file is not copied, edit it on the copy-destination node in the same way as described above.

Setting the activation procedure

Set the activation procedure for the cluster system.

Set the activation procedure under the following conditions:

a. After the Netcompo FNA-LAN package is installed

b. After the cluster system package is reinstalled

The activation procedure is set by executing the processing once after the above package is installed.

If an interface (device) is added or deleted after the above setting, the setting does not have to be done again.

The activation procedure setting (lndf_setproc) command is executed not only to set the activation procedure but also to delete it.


Before deleting the activation procedure, be sure to read "Deleting activation procedure" described below.

Execute the activation procedure setting (lndf_setproc) command for all nodes when all interfaces (devices) have been inactivated.

Command

/opt/FSUNnet/bin/lndf_setproc [ -r | -b ]

-r: Sets the activation procedure for cluster system

-b: Deletes the activation procedure for cluster system

Registering resources

After all environment definitions have been completed, register the resources of the cluster interfaces.

Resources are not registered for interfaces (devices) whose resources have already been registered or for noncluster interfaces (devices).

Register resources for all nodes when all interfaces (devices) have been inactivated.

The FNA-LAN operating state can be checked by executing the lndfstat command.

Resources are registered by executing the following command:

Command

/opt/FSUNnet/bin/lndf_addrid [ -a | -d Interface-name | -h ]

-a: Registers the resources of all cluster interfaces (devices) defined in /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL.

-d Interface-name: Registers the resource of the specified interface (device) defined in /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL. Specify this option to register the resource of a specific interface (device) if, for example, a cluster interface (device) is added.

-h: Displays the explanation of parameters.

When the command terminates normally, the resource ID is automatically stored in the resource-ID parameter in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file.

The /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file after resource registration is shown below.

The above example indicates that "18" is set as a resource ID for hme0.

Setting resources to the service

After the resource registration of FNA-LAN and other related products has been completed, start "Service setting" in the cluster management view to set the FNA-LAN resources (interfaces) to the service. The resource name of FNA-LAN is "FNALAN_interface name." For example, if the interface name is hme0, the resource name is FNALAN_hme0.

After the setting of the cluster environment definition has been completed and the system is restarted, the cluster system is ready for operation.

Note that in the situations described below, delete the cluster environment definition resulting from this setting, and specify the settings again:

- The cluster configuration (e.g., node configuration, hardware configuration, cluster topology) has changed,

- SynfinityCluster is reinstalled, or

- Cluster operation is stopped, and the environment is returned to ordinary single-system operation.


The procedure for deleting a cluster environment definition is described below.

Deleting a cluster environment definition

Figure 1.2.3 shows a procedure for deleting a cluster environment definition, and the deletion method is described below.

Figure 1.2.3 Procedure for deleting an FNA-LAN cluster environment definition

When deleting the environment definition of an interface (device) operating in a cluster, all devices on each node must be in an inactivated state.

Before deleting a cluster interface (device), delete the resources from the service first.

Deleting resources from service

Delete the relevant FNA-LAN resources from the cluster management view.

Resource deletion

Delete the relevant FNA-LAN resources by executing the resource deletion (lndf_delrid) command.

For noncluster interfaces (devices), resources are not required to be deleted.

Delete the resources for all nodes after all interfaces (devices) have been inactivated.

The FNA-LAN operating state can be checked by executing the lndfstat command.

Resources are deleted by executing the following command:

Command

/opt/FSUNnet/bin/lndf_delrid [ -a | -d Interface-name | -h ]

-a: Deletes the resources of all cluster interfaces (devices) defined in /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL.

-d Interface-name: Deletes the resource of the specified interface (device) defined in /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL. Specify this option to delete the resource of a specific interface (device).

-h: Displays the explanation of parameters.

When the command terminates normally, the resource ID is deleted from the resource-ID parameter in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file.

Changing a cluster environment setting

After the resources are deleted, delete the line of the appropriate cluster interface from the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file with an editor. To specify it as a noncluster interface instead, set "none" in the resource-ID parameter.

To delete a noncluster interface, delete the line of the appropriate noncluster interface from the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file with an editor.

Changing the FNA-LAN environment setting

When the line of the interface definition is deleted in the above change of cluster environment settings, delete the appropriate interface from the environment definition information using fnbtool or an editor in the same way as in an ordinary single system. For details, see "Netcompo FNA-LAN User's Guide".

Deleting the activation procedure

Finally, delete the activation procedure as necessary.

Delete the activation procedure in the following cases:

a. The Netcompo FNA-LAN package is uninstalled, or

b. The cluster system package is uninstalled

Be sure to delete the activation procedure before uninstalling the above packages.

For details about deleting the activation procedure, see the "activation procedure setting command" (lndf_setproc) mentioned above.

1.2.6 Precautions

Precautions on environment setting

- When defining the local MAC address of a cluster interface (device), the address must be the same value as the takeover MAC address set in the cluster management view.

- It is recommended that the definition file distribution (lndf_syncfile) command be executed only when the LAN adapter configuration and interface names are the same on each node. If the command is executed when the LAN adapter configuration or interface names differ between nodes, the definition files are copied, but the environment must be defined again using an editor because the interfaces (devices) may not match.

- To define environments node-by-node, be sure that the parameter values related to the protocol are the same in each interface (device).

Precaution on activation procedure setting

- If the activation procedure is not set, all interfaces (devices) defined in the /etc/opt/FSUNnet/lndfc/lndfc.config file are activated when the system is booted.

Precaution on resource registration

- After resources are registered, do not delete or edit the resource-ID parameter in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file. Otherwise, interfaces (devices) cannot be deleted from the cluster management view.

Precaution on resource deletion

- Do not delete or edit the resource-ID parameter stored in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file. Otherwise, interfaces (devices) cannot be deleted from the cluster management view.

Precaution on operation

- If the booted system is an ordinary system in which no cluster system is installed, interfaces (devices) defined in the /etc/opt/FSUNnet/lndfc/lndfc.config file are activated. If a cluster system is installed, interfaces (devices) defined in the /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL file are activated.


Chapter 2 Higher Protocols

This chapter describes a procedure for building a cluster of high-level protocol products in the Netcompo series.

Netcompo FNA-BASE / Netcompo for SNA EXPANSION OPTION

- FNA/SNA primary station communication function

- FNA/SNA secondary station communication function

2.1 FNA/SNA Primary Station Communication Function

2.1.1 Product overview

Netcompo FNA-BASE is FNA protocol software providing the FNA secondary station communication function for communicating with a host system, such as the GS series, and the FNA primary station communication function for communicating with terminals, such as the FMV series.

When combined with Netcompo for SNA EXPANSION OPTION, it can communicate with IBM host systems and IBM terminal systems using the SNA protocol.

2.1.2 Overview of support to a cluster system

In the FNA/SNA primary station communication function, a remote host is treated as a resource whose network address is switched. This resource is initially deactivated. The cluster system function is implemented by activating and deactivating such resources through the state transition procedure that manages them. FNA-BASE supports the following cluster topologies:

- Standby mode

- Mutual standby mode

2.1.3 Standby mode

In normal operation, a remote terminal is connected via a communication path of nodes on which the service is active. At this point, the communication path of nodes on which the standby service operates is deactivated to prepare for any failures of nodes on which the service is active. If a failure occurs during operation, the standby communication path is activated to take over operation resources.

In normal operation, high-level applications of FNA-BASE conduct communications on the communication path of nodes on which the service is active. If a failure occurs during operation, the communication path is switched to the standby communication path, and the LU-LU session is reconnected so that the standby node can take over communication operations.

Figure 2.1.1 is an outline of the operations.


Figure 2.1.1 Example of the standby system (before the system is switched)

App1, App2: High-level network application names of FNA-BASE

LU1001, LU1002, LU2001, LU2002: LU name defined on the communication path to be switched

PU1001, PU2001: PU name defined on the communication path to be switched

hostA, hostB: Name of the communication path (remote host) to be switched

host1, host2: Name of the local host corresponding to the communication path to be switched

hme0, pc4a-00-l00: Name of the low-level interface corresponding to the communication path to be switched

SERVICE0: Service name of the cluster system

Figure 2.1.1 shows a normal operation state. In this example, communication is conducted with the remote terminal via the communication paths (hostA and hostB) of Node 0.

Figure 2.1.2 shows the switching operation from normal operation because of a failure.


Figure 2.1.2 Example of the standby system (after switched to Node 1 because of a failure on Node 0)

If a failure occurs at Node 0 (active system), the communication paths (hostA and hostB) of Node 1 (standby system) are activated as shown in Figure 2.1.2 and the connection to the remote terminal is switched from Node 0 (active system) to Node 1 (standby system).

2.1.4 Mutual standby mode

In normal operation, the remote terminal is connected via the communication path defined for each service on the nodes. At this point, the communication path defined for standby is deactivated to prepare for any failure of the communication path set for each service on the nodes. If a failure occurs during operation, the standby communication path is activated to take over operation resources. Then, when the node in which the failure occurred is restarted, the operation state before the failure can be restored by performing failback processing according to operator instructions.

In normal operation, high-level applications of FNA-BASE conduct communications via the communication path defined for operation. If a failure occurs during operation, the system is switched to the standby communication path, and the LU-LU session is reconnected so that the operation communication can be taken over.

Figure 2.1.3 to Figure 2.1.5 are some examples of switching operations during mutual standby.


Figure 2.1.3 Example of the mutual standby system (before the system is switched)

App1, App2: High-level network application names of FNA-BASE

LU1001, LU1002, LU2001, LU2002: LU name defined on the communication path to be switched

PU1001, PU2001: PU name defined on the communication path to be switched

hostA, hostB: Name of the communication path (remote host) to be switched

host1, host2: Name of the local host corresponding to the communication path to be switched

hme0, hme1: Name of the low-level interface corresponding to the communication path to be switched

SERVICE0, SERVICE1: Service name of the cluster system

Figure 2.1.3 shows a normal operation state. In this example, communication is conducted with the remote terminal via a communication path (hostA) of Node 0 and a path (hostB) of Node 1.

Figure 2.1.4 shows a switching operation from normal operation because of a failure.


Figure 2.1.4 Example of the mutual standby system (after switched to Node 1 because of a Node 0 failure)

If a failure occurs at Node 0, the standby communication path of Node 1 is activated, as shown in Figure 2.1.4, and the connection to the remote terminal is switched from Node 0 to Node 1.

Figure 2.1.5 shows the operation of failback after recovery from the switched line.


Figure 2.1.5 Example of failback during mutual standby (failback after Node 0 recovery)

After Node 0 is restored, failback processing is performed according to operator instructions, as shown in Figure 2.1.5. At this point, terminate the communication of the applications on FNA-BASE, because the network connections of the resources involved in the failback are disconnected.

2.1.5 Environment setting

Adding a cluster environment definition

Figure 2.1.6 shows a procedure for adding a cluster environment definition of the FNA/SNA primary station.


Figure 2.1.6 Procedure for adding a cluster environment definition of the FNA/SNA primary station

To operate a cluster system using Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION), the following environment definitions are required:

- Environment setting for a normal FNA/SNA primary station of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION)

- Environment setting for a cluster of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION)

Define the environment of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION) according to the following procedure. The low-level protocol definition is assumed to have already been completed.

Environment setting for a normal FNA/SNA primary station

Specify the following definitions for the active node and standby node:

1) Specify the same environment settings as those of an ordinary single system of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION). For details, see "Network Function User's Guide, Mainframe Connection."

2) Check the communication path (remote host) to be switched, and then set the "Active flag" parameter of the "Remote host information" definition to "Inactive."

At this point, definitions under the resource to be switched must be the same in the active node and standby node.

Environment setting for a cluster (editing of the switched resource file)

In an editor, open the switched resource file.

To control activation and deactivation of resources if failover occurs in Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION), create a switched resource file under the following directory. Specify the "Service name" defined on the "Service setting" screen of the cluster operation setting view as the file name.

For example, if the "Service name" defined by the cluster operation setting view is "SERVICE0", the name of the switched resource file must be /etc/opt/FSUNnet/cluster/vcph/SERVICE0. If multiple services are available, create as many files as the number of definitions, such as SERVICE0, SERVICE1, and so on.

For each line of the file created as described above, set the "Remote host name" of the communication path to be switched.
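
As an illustration of the file format, the following shell sketch builds such a file. It stages the file under a scratch directory rather than the real /etc/opt/FSUNnet/cluster/vcph path, and uses hostA, the example remote host name from Figure 2.1.3; both are illustrative assumptions, not the literal installation step.

```shell
# Sketch: build the switched resource file for service SERVICE0.
# The product expects it at /etc/opt/FSUNnet/cluster/vcph/SERVICE0;
# a scratch directory stands in for /etc here so the example is safe to run.
dir=$(mktemp -d)/etc/opt/FSUNnet/cluster/vcph
mkdir -p "$dir"
# One "Remote host name" per line; hostA is the example name from Figure 2.1.3.
printf '%s\n' hostA > "$dir/SERVICE0"
cat "$dir/SERVICE0"
```

On a real node, write the file directly under /etc/opt/FSUNnet/cluster/vcph with one remote host name per line.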

Registering resources and copying to other nodes

After editing of the definition file has been completed, register the file with the cluster system as a resource.

To use the same environment definition information and switched resource file for all nodes, register the file with the cluster system as a resource by executing the following command:


Command

/opt/FSUNnet/bin/vcph_cluster_all

*1) This command distributes definition information to all nodes for resource registration.

To use different environment definition information and a different switched resource file for each node, register the file with the cluster system as a resource by executing the following command in each node:

Command

/opt/FSUNnet/bin/vcph_cluster

Precautions on definition information

- Define information about the remote host that has the communication path to be switched on resources to be switched in the low-level protocol definition.

- The active and standby systems of each service must have the same resource configuration in the remote host information definition to be switched. At this point, set the "Active flag" parameter to "Inactive."

Resource setting to the service

After the resource registration of the FNA/SNA primary station and other related products has been completed, start "Service setting" from the cluster operation setting view to set the resources of the FNA/SNA primary station to the service. The service name must always be the file name in /etc/opt/FSUNnet/cluster/vcph. To obtain the resource name, add "FNA1_" prior to the file name in /etc/opt/FSUNnet/cluster/vcph. For example, if the file name is "SERVICE0", the resource name will be "FNA1_SERVICE0".
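
The naming rule above can be sketched as a one-line derivation. SERVICE0 is the example file name from this section; the prefixing is done in plain shell purely as an illustration.

```shell
# Sketch: derive the cluster resource name from the switched resource
# file name by prefixing "FNA1_", per the rule described above.
service=SERVICE0              # file name under /etc/opt/FSUNnet/cluster/vcph
resource="FNA1_${service}"
echo "$resource"              # prints FNA1_SERVICE0
```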

After setting the service, check whether the service name under which the resource has been set matches the service name ("FNA1_service name") of the resource.

At this point, the cluster environment definition has been completed. After the system is restarted, operations using the cluster system can be started.

In the following cases, set a cluster environment definition again after deleting the existing cluster environment definition:

- If the cluster configuration (e.g., the node configuration, hardware configuration, and cluster topology) is changed,

- If SynfinityCluster is reinstalled, or

- If the cluster operation is stopped to return to the environment of ordinary single-system operation.

The following section describes a procedure for deleting the cluster environment definition.

Deleting the cluster environment definition

Figure 2.1.7 shows a procedure for deleting the cluster environment definition of an FNA/SNA primary station. An explanation about deleting the cluster environment definition is described below.

Figure 2.1.7 Procedure for deleting the cluster environment definition of an FNA/SNA primary station

Deleting resources from the service

From the cluster operation management view, delete the resources of the relevant FNA/SNA primary station.


Changing the environment setting of the FNA/SNA primary station

Delete the relevant switched resource file, and then execute the resource registration command (i.e., vcph_cluster, vcph_cluster_all).

Definition examples

Standby mode

Examples of each definition for Figure 2.1.1 are described below:

/etc/opt/FSUNnet/vcph/fnahparm (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcph/SERVICE0 (common to Node 0 and Node 1)

*1) The file name must always be the same as the service name of the cluster system under which the resources are registered.

Mutual standby mode

Examples of each definition for Figure 2.1.3 are described below:


/etc/opt/FSUNnet/vcph/fnahparm (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcph/SERVICE0 (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcph/SERVICE1 (common to Node 0 and Node 1)

*1) The file name must always be the same as the service name of the cluster system under which the resources are registered.

2.1.6 Precautions

Precautions for applications in the standby system

Precautions after notification that the system has been switched

If the system is switched because of a failure, the user needs to start the recovery by re-establishing the LU-LU session because the session is disconnected.

In the terminal start operation, a call from the remote office (switched line) and session establishment request messages (such as INIT-SELF) are in a wait status.

In the host start operation, a session establishment instruction command (BIND) is sent after a call is submitted from the local office (switched line).

For details about operation examples of the terminal start operation and host start operation, see "Mainframe Connection" in "Network Function User's Guide."

Precautions for applications in other systems

If the system is switched because of a failure, the user needs to start the recovery by re-establishing the LU-LU session because a line failure disconnected the session. The recovery method is the same as that described in "Standby procedure."


2.2 FNA/SNA Secondary Station Communication Function

2.2.1 Product overview

For details, see Section 2.1.1, "Product overview," in Section 2.1, "FNA/SNA Primary Station Communication Function."

2.2.2 Overview of support to a cluster system

In the FNA/SNA secondary station communication function, PU becomes a resource whose network address is to be switched. This resource is first deactivated. The cluster system function is implemented by activating or deactivating resources using the system procedure that manages resources whose network addresses are to be switched. FNA-BASE supports the following cluster topologies:

- Standby mode

- Mutual standby mode

2.2.3 Standby mode

In normal operation, a remote terminal is connected via the PU of nodes on which the service is operated. At this point, the PU of nodes on which the standby service operates is deactivated to prepare for any failures of nodes on which the service is active. If a failure occurs during operation, the standby PU is activated to take over operation resources.

In normal operation, high-level applications of FNA-BASE conduct communications on the PU of nodes on which the service is active. If a failure occurs during operation, the PU is switched to the standby communication path, and the LU-LU session is reconnected so that the standby node can take over communication operations.

Figure 2.2.1 is an outline of the operations.


Figure 2.2.1 Example of the standby system (before the system is switched)

App1, App2: High-level network application names of FNA-BASE

LU1001, LU1002, LU2001, LU2002: LU name defined on the PU to be switched

PU1001, PU2001: name of the PU to be switched

hme0, pc4a-00-l00: Name of the low-level interface corresponding to the PU to be switched

SERVICE0: Service name of the cluster system

Figure 2.2.1 shows a normal operation state. In this example, communication is conducted with the remote host using the PU (PU1001 and PU2001) of Node 0.

Figure 2.2.2 shows a switching operation from normal operation because of a failure.


Figure 2.2.2 Example of the standby system (after switched to Node 1 because of a failure on Node 0)

If a failure occurs at Node 0 (active system), the PUs (PU1001 and PU2001) of Node 1 (standby system) are activated, as shown in Figure 2.2.2, and the connection to the remote host is switched from Node 0 (active system) to Node 1 (standby system).

2.2.4 Mutual standby mode

In normal operation, the remote host is connected using the PU defined on each service on the node. At this point, the PU defined for standby is deactivated to prepare for any failure of the PU set for each service in the nodes. If a failure occurs during operation, the standby PU is activated to take over operation resources. Then, after the node that has the failure is started, the operation state before the failure can be restored by performing failback processing according to operator instructions.

In normal operation, high-level applications of FNA-BASE conduct communication using the PU defined for operation. If a failure occurs during operation, failover to the standby PU occurs, and the LU-LU session is reconnected so that the operation communication can be taken over.

Figure 2.2.3 to Figure 2.2.5 are some examples of switching operations during mutual standby.


Figure 2.2.3 Example of the mutual standby system (before the system is switched)

App1, App2: High-level network application names of FNA-BASE

LU1001, LU1002, LU2001, LU2002: LU name defined on the PU to be switched

PU1001, PU2001: Name of the PU to be switched

hme0, hme1: Name of the low-level interface corresponding to the PU to be switched

SERVICE0, SERVICE1: Service name of the cluster system

Figure 2.2.3 shows a normal operation state. In this example, communication is conducted with the remote host using the PU (PU1001) of Node 0 and the PU (PU2001) of Node 1.

Figure 2.2.4 shows a switching operation from normal operation because of a failure.


Figure 2.2.4 Example of the mutual standby system (after switched to Node 1 because of a failure on Node 0)

If a failure occurs at Node 0, the standby PU (PU1001) of Node 1 is activated, as shown in Figure 2.2.4, and the connection to the remote host is switched from Node 0 to Node 1.

Figure 2.2.5 shows the failback operation after recovery from the switched line.


Figure 2.2.5 Example of the failback operation in the mutual standby system (failback after Node 0 is recovered)

After Node 0 is restored, failback operation is performed according to operator instructions, as shown in Figure 2.2.5. At this point, terminate the communication of applications on FNA-BASE, because failback disconnects the network connection of the resources that were failed over.

2.2.5 Environment setting

Adding a cluster environment definition

Figure 2.2.6 shows a procedure for adding a cluster environment definition of the FNA/SNA secondary station.


Figure 2.2.6 Procedure for adding a cluster environment definition of the FNA/SNA secondary station

To operate a cluster system using Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION), the following environment definitions are required:

- Environment setting for a normal FNA/SNA secondary station of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION)

- Environment setting for a cluster of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION)

Define the environment of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION) according to the following procedure. The low-level protocol definition is assumed to have already been completed.

Environment setting for a normal FNA/SNA secondary station

Specify the following definitions for the active node and standby node:

1) Specify the same environment settings as those of an ordinary single system of Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION). For details, see "Network Function User's Guide, Mainframe Connection."

2) Check the PU to be switched and then set the "Active flag" parameter of the "PU information" definition to "Inactive."

At this point, definitions under the resource to be switched must be the same in the active node and standby node.

Environment setting for a cluster (editing of the switched resource file)

In an editor, edit the resource file to be switched.

To control activation and deactivation of resources if failover occurs in Netcompo FNA-BASE (Netcompo for SNA EXPANSION OPTION), create a switched resource file under the following directory. Specify the "Service name" defined on the "Service setting" screen of the cluster operation setting view as the file name.

For example, if the "Service name" defined by the cluster operation setting view is "SERVICE0", the name of the switched resource file must be /etc/opt/FSUNnet/cluster/vcp/SERVICE0. If multiple services are available, create as many files as the number of definitions, such as SERVICE0, SERVICE1, and so on.

For each line of the file created as described above, set the "PU name" of the PU to be switched.
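
As with the primary station, the file format can be sketched in shell. A scratch directory stands in for the real /etc/opt/FSUNnet/cluster/vcp path, and PU1001 and PU2001 are the example PU names from Figure 2.2.1; both are illustrative assumptions, not the literal installation step.

```shell
# Sketch: build the switched resource file for the secondary station.
# The product expects it at /etc/opt/FSUNnet/cluster/vcp/SERVICE0;
# a scratch directory stands in for /etc here so the example is safe to run.
dir=$(mktemp -d)/etc/opt/FSUNnet/cluster/vcp
mkdir -p "$dir"
# One "PU name" per line; PU1001 and PU2001 are the example names
# from Figure 2.2.1.
printf '%s\n' PU1001 PU2001 > "$dir/SERVICE0"
cat "$dir/SERVICE0"
```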

Resource registration and copying to other nodes

After editing of the definition file has been completed, register the file with the cluster system as a resource.

To use the same environment definition information and switched resource file for all nodes, register the file with the cluster system as a resource by executing the following command:


Command

/opt/FSUNnet/bin/vcp_cluster_all

*1) This command distributes definition information to all nodes for resource registration.

To use different environment definition information and a different switched resource file for each node, register the file with the cluster system as a resource by executing the following command in each node:

Command

/opt/FSUNnet/bin/vcp_cluster

Precautions on definition information

- Define information about the remote host that has the PU to be switched on resources to be switched in the low-level protocol definition.

- The active and standby systems of each service must have the same resource configuration in the remote host information definition to be switched. At this point, set the "Active flag" parameter to "Inactive."

Resource setting to the service

After the resource registration of the FNA/SNA secondary station and other related products has been completed, start "Service setting" from the cluster operation setting view to set the resources of the FNA/SNA secondary station to the service. The service name must always be the file name in /etc/opt/FSUNnet/cluster/vcp. To obtain the resource name, add "FNA2_" prior to the file name in /etc/opt/FSUNnet/cluster/vcp. For example, if the file name is "SERVICE0", the resource name will be "FNA2_SERVICE0".

After setting the service, check whether the service name under which the resource has been set matches the service name ("FNA2_service name") of the resource.

At this point, the cluster environment definition has been completed. After the system is restarted, operations using the cluster system can be started.

In the following cases, set a cluster environment definition again after deleting the existing cluster environment definition:

- If the cluster configuration (e.g., the node configuration, hardware configuration, and cluster topology) is changed,

- If SynfinityCluster is reinstalled, or

- If the cluster operation is stopped to return to the environment of ordinary single-system operation.

The following section describes a procedure for deleting the cluster environment definition.

Deleting the cluster environment definition

Figure 2.2.7 shows a procedure for deleting the cluster environment definition of an FNA/SNA secondary station. An explanation about deleting the cluster environment definition is described below.

Figure 2.2.7 Procedure for deleting the cluster environment definition of an FNA/SNA secondary station

Deleting resources from the service

From the cluster operation management view, delete the resources of the relevant FNA/SNA secondary station.


Changing the environment setting of the FNA/SNA secondary station

Delete the relevant switched resource file, and then execute the resource registration command (i.e., vcp_cluster, vcp_cluster_all).

Definition examples

Standby mode

Examples of each definition for Figure 2.2.1 are described below:

/etc/opt/FSUNnet/vcp/fnaparm (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcp/SERVICE0 (common to Node 0 and Node 1)

*1) The file name must always be the same as the service name of the cluster system under which the resources are registered.

Mutual standby mode

Examples of each definition for Figure 2.2.3 are described below:


/etc/opt/FSUNnet/vcp/fnaparm (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcp/SERVICE0 (common to Node 0 and Node 1)

/etc/opt/FSUNnet/cluster/vcp/SERVICE1 (common to Node 0 and Node 1)

*1) The file name must always be the same as the service name of the cluster system that has the registered resources.

2.2.6 Precautions

Precautions for applications in the standby system

Precautions for applications after receiving notification of failover completion

If a failover occurs, start the recovery by re-establishing the LU-LU session because the session is disconnected.

In the terminal start operation, session establishment request messages (such as INIT-SELF) are sent after submitting a call from the local office (switched line).

In the host start operation, calls from the remote office (switched line) and a session establishment instruction command (BIND) are in a wait status.

For details about operation examples of the terminal start operation and host start operation, see "Mainframe Connection" in "Network Function User's Guide."

Precautions for applications in other systems

If a failover event occurs, start the recovery by re-establishing the LU-LU session because a line failure disconnected the session. The recovery method is the same as that described in "Standby procedure."


Chapter 3 Network Applications

This chapter describes a procedure for constructing a cluster system of network application products in the Netcompo Series.

- Netcompo Communication Library

- Netcompo NMC Server

- Netcompo Communication Service

- Netcompo TN Gateway Service

3.1 Netcompo Communication Library

3.1.1 Product overview

Netcompo Communication Library is a program for connecting conversations in the FNA/SNA LU type 0 protocol between applications on Solaris and applications on a host.

3.1.2 Overview of support to a cluster system

Netcompo Communication Library implements the cluster system function by activating or deactivating the PU resources defined in the environment settings of Netcompo FNA-BASE (secondary station) and Netcompo for SNA EXPANSION OPTION (secondary station), and then by linking with the high-level application that uses Netcompo Communication Library.

Netcompo Communication Library supports the following cluster topologies:

- Standby mode

- Mutual standby mode

3.1.3 Standby mode

In normal operation, the user application communicates with the host using the Netcompo Communication Library in the active node.

If a failure occurs in the active node, Netcompo FNA-BASE switches resources to the standby node, which takes over the communication of the active system by reconnecting.

In this case, the session with the host is disconnected once and reconnected to the standby system after communication is established again.

3.1.4 Mutual standby mode

Assume that the following states are normal operation states in mutual standby mode:

Node 0 Operation system (LU1), Standby system (LU2)

Node 1 Operation system (LU2), Standby system (LU1)

In normal operation, Netcompo Communication Library in Node 0 uses LU1 and Netcompo Communication Library in Node 1 uses LU2 to communicate with the host. LU2 in Node 0 and LU1 in Node 1 are set in standby states to prepare for any failure in the active node.

In this operation, the environment definitions in Nodes 0 and 1 must be the same. The environment definition in the host requires a definition for each remote node.

The operation described below covers the sequence from the occurrence of a failure to the restoration of communication.

If a failure occurs in Node 0, Netcompo FNA-BASE activates resources that are on standby in Node 1, resources are switched from Node 0 to Node 1, and Node 1 is connected to the remote host.

When Node 0 is restored, failback processing is performed as instructed by the operator. Since the LU of the resource in the failback operation of Netcompo FNA-BASE is disconnected, Netcompo Communication Library terminates abnormally.

Next, resources are switched from Node 1 to Node 0, and Node 0 is connected to the remote host.

In this operation, the environment definitions of the FNA-BASE (secondary station) and the SNA-compliant extended option (secondary station) in both Node 0 and Node 1 must be the same. The environment definition in the host requires a definition for each remote node.


3.1.5 Environment setting

Environment setting

Since Netcompo Communication Library communicates via the LU defined in the Netcompo FNA-BASE environment setting, its own environment setting is not required. For details, see the environment setting of Netcompo FNA-BASE.

Editing a state transition

For Netcompo Communication Library, create a state transition procedure, and register it as a resource.

Register user applications, as necessary, to the state transition procedure that started from cluster control.

3.1.6 Precautions

Precautions on creating an environment definition

When using Netcompo Communication Library in a cluster system, note the following precautions for each cluster topology:

- Netcompo Communication Library defines the LU name as an interface with application programs. However, Netcompo Communication Library does not have an environment setting and depends on the environment settings of Netcompo FNA-BASE and applications. Therefore, in standby operation, the environment definitions of Netcompo FNA-BASE and applications must be the same in both the active and standby systems.

- In mutual standby, for both the active and standby nodes, all of the resources must be defined in advance for Netcompo FNA-BASE and applications that are to be taken over.

Switching operation

If a failure occurs in the active node, the system is disconnected from the host, and jobs are stopped once. The jobs can be restarted by switching resources and reconnecting the session with the host.

3.2 Netcompo NMC Server

3.2.1 Product overview

The Netcompo NMC Server provides the function of a communication server that relays messages between the client (F6680/I3270 and the workstation, PC, and printer with the K series terminal emulator function) and a host computer connected to a LAN. The client is connected via the CU-DEV(NMC-LAN) interface.

3.2.2 Overview of support to a cluster system

The NMC Server supports the following cluster topologies in a cluster system:

- Standby mode

- Mutual standby mode

Figure 3.2.1 is an example of the system configuration.


Figure 3.2.1 Example of the system configuration

3.2.3 Standby mode

In normal operation, Node 0 is active and Node 1 is on standby. At this point, the NMC Server of Node 0 is operating and that of Node 1 is stopped. Figure 3.2.2 shows the state during normal operation.

Figure 3.2.2 Normal operation in the standby system

The following table lists the correspondence between the service and resources (network resources) in the above figure.


Service Resource Node 0 Node 1

SV0 (lu01,lu02,lu03,lu04) / (ws01,ws02,ws03,ws04) Active Standby

If any failure occurs at Node 0, failover occurs, and the NMC Server of Node 1 is activated.

Figure 3.2.3 shows the operation after the system is switched.

Figure 3.2.3 Operation after the system is switched

3.2.4 Mutual standby mode

During normal operation, devices 11/12 and LU11/12 of Node 0 and devices 21/22 and LU21/22 of Node 1 are active. Devices 21/22 and LU21/22 of Node 0 and devices 11/12 and LU11/12 of Node 1 are on standby.

Figure 3.2.4 shows the state during normal operation.


Figure 3.2.4 Normal operation in the mutual standby system

The following table lists the correspondence between the service and resources (network resources) in the above figure.

Service Resource Node 0 Node 1

SV0 (lu11,lu12), (ws01,ws02) Active Standby

SV1 (lu21,lu22), (ws03,ws04) Standby Active

If any failure occurs at Node 0, failover occurs, and the paths of LU11/12 and devices 11/12 are activated for operation.

Figure 3.2.5 is an example of operation after the system is switched.


Figure 3.2.5 Operation after the system is switched in the mutual standby system

After Node 0 is restored, Node 1 remains active and Node 0 is on standby.

Figure 3.2.6 is an example of operation after Node 0 is restored.

Figure 3.2.6 Operation after recovery in the mutual standby system

After Node 0 is restored, normal operation can be restored by executing the failback processing as follows:

1. Terminate all emulators of client #1 that use LU11/12 and devices 11/12 of Node 1.

2. Execute the failback processing from Node 1 to Node 0 according to the operator instructions. After failback processing is executed, the paths for LU11/12 and devices 11/12 of Node 1 are deactivated, and the paths for LU11/12 and devices 11/12 of Node 0 are activated.

3. Start the emulators of client #1, and connect them to Node 0.

3.2.5 Environment setting

Adding a cluster environment definition

Figure 3.2.7 shows the procedure for adding a cluster environment definition of the NMC Server. An explanation follows about how to set the environment definition.


Figure 3.2.7 Procedure for adding a cluster environment definition of the NMC Server

To operate a cluster system using the NMC Server, the following environment definitions are required:

a. CU-DEV primary station environment definition (required)....... /etc/opt/FSUNnet/nmcsv/cudpprm

b. Delay function environment definition (required)....... /etc/opt/FSUNnet/nmcsv/gwparm

c. Hard copy environment definition (as necessary)....... /etc/opt/FSUNnet/nmcsv/hdcparm

d. Download environment definition (as necessary)....... /etc/opt/FSUNnet/nmcsv/plopparm

e. Definition of the state transition procedure of the NMC Server (for mutual standby)....... /etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv

f. Shell script file for resource registration of the NMC Server (for mutual standby)....... /etc/opt/FJSVcluster/regrsc/script.FSUNnmcsv

Set the environment definition according to the procedure described below. The low-level protocol is assumed to have already been defined.

Environment setting for normal NMC Server

Define the environment for the active node and standby node in the same way as for an ordinary single system of the NMC Server. For details, see "Netcompo NMC Server Manual V2.1."

To specify the same environment definition information at all nodes, copy the environment definition file by executing the internode copy command below. For details about the nmcsv_syncfile command, see Appendix B, "Command Reference."

Copy the environment definition file to the specified node by specifying the node name.

If each node has different environment definition information, the user needs to modify the information at each node after copying it, or define the environment at each node.

Editing the state transition procedure

Edit the state transition procedure.

The state transition procedure that controls the switching resources on the NMC Server is contained in the following file:


The following example shows the contents of the above file just after installing "SynfinityCluster/HA for Netcompo."


*1: Indicates that editing may be required.

*2: For mutual standby operation, insert the comment "#" at the beginning of the line.

In the above figure, delete the comment "#" of (*1) as necessary.

For the LNDFC connection in mutual standby, define all line names to be switched in the LINEname parameter. The line name is linename in the CU-DEV primary station environment definition.

For example, if the line name is line1, follow the procedure below to modify the line of (*1):

(Before modification)

(After modification)

Then, copy the state transition procedure by executing the internode copy command as necessary.

The state transition procedure can be copied to a node by specifying the node name.

Registering resources

The edited state transition procedure can be registered as a resource with the cluster system.


Use the following script for the registration.

The following example shows the contents of the above file just after installing "SynfinityCluster/HA for Netcompo."

The contents of the above file do not have to be edited.

Add and register resources by executing the resource registration command as necessary. For details about the nmcsv_rid command, see Appendix B, "Command Reference."

Resources can be registered to a node by specifying the node name.

Precautions for the definition information

- The service that includes the communication path to be switched must match the one that includes resources in the low-level protocol definition.

- As for the environment definition of the NMC Server, the definition of an active node must be the same as that of a standby node.

Service setting

After completing the resource registration of the NMC Server and other related products, set the resources of the NMC Server to the service by activating "Service settings." The resource name is the same as the name (such as NmcsvCl1 and NmcsvCl2) set in resource registration.

The addition of the cluster environment definition settings is complete. After restarting the system, operation using the cluster system can be started.

In the following cases, set a cluster environment definition again after deleting the existing cluster environment definition:

- If the cluster configuration (such as node configuration, hardware configuration, or operation configuration) is changed.

- If SynfinityCluster is reinstalled.

- If the user stops using the cluster operation and defines the environment as that of a single system.

The following explanation describes the procedure for deleting the cluster environment definition.


Deleting the cluster environment definition

Figure 3.2.8 shows the procedure for deleting the cluster environment definition of the NMC Server. An explanation follows about how to delete the cluster environment definition.

Figure 3.2.8 Procedure for deleting the cluster environment definition of the NMC Server

Deleting resources from the server

Delete resources of the NMC Server from the cluster operation management view.

Deleting resources

Delete resources by executing the resource registration command.

Resources of a specific node can be deleted by specifying the node name.
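The per-node deletion can likewise be sketched with nmcsv_rid -d -n (Appendix B, "Command Reference"). The node names below are illustrative, and the function only prints the commands that would be run so they can be reviewed first.

```shell
# Sketch: print the per-node resource deletion commands (nmcsv_rid -d -n)
# for each node name given as an argument. Node names are illustrative;
# nothing is executed, only printed for review.
rid_delete_cmds() {
  for node in "$@"; do
    printf '/opt/FSUNnet/bin/nmcsv_rid -d -n %s\n' "$node"
  done
}

rid_delete_cmds node0 node1
```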

Definition examples

Standby mode

The following examples show each definition for Figure 3.2.2.

/etc/opt/FSUNnet/nmcsv/cudpprm (the same for Node 0 and Node 1)


/etc/opt/FSUNnet/nmcsv/gwparm (the same for Node 0 and Node 1)

/etc/opt/FJSVcluster/regrsc/script.FSUNnmcsv (the same for Node 0 and Node 1)

Use the shell script that is used to register resources at the installation (modification is not required).

Resources are automatically registered in the standby system. Register the service using the acquired ID.

/etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv (the same for Node 0 and Node 1)

Use the state transition procedure used for installation (modification is not required).

Mutual standby mode

The following examples show each definition for Figure 3.2.4.

For TCP/IP connection with the client

/etc/opt/FSUNnet/nmcsv/cudpprm (the same for Node 0 and Node 1)

/etc/opt/FSUNnet/nmcsv/gwparm (the same for Node 0 and Node 1)

/etc/opt/FJSVcluster/regrsc/script.FSUNnmcsv (the same for Node 0 and Node 1)

Use the script used for installation (modification is not required).

Register additional resources by using the internode copy command.


/etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv (the same for Node 0 and Node 1)

Modification for the mutual standby operation is required. For modifications, refer to "Editing the state transition procedure" in this chapter.

For LNDFC connection with the client

/etc/opt/FSUNnet/nmcsv/cudpprm (the same for Node 0 and Node 1)

/etc/opt/FSUNnet/nmcsv/gwparm (the same for Node 0 and Node 1)

/etc/opt/FJSVcluster/regrsc/script.FSUNnmcsv (the same for Node 0 and Node 1)

Use the script used for installation (modification is not required).

Register additional resources by using the internode copy command.


/etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv (the same for Node 0 and Node 1)


*1, *2: Indicates modifications.

3.2.6 Precautions

- For standby and mutual standby, the environment definition of the NMC Server must be the same in the active node and standby node.

- If failover occurs when an emulator on the client is operating, the emulator may become unusable. In this case, terminate the emulator, restart it, and then log on again.

- When the NMC Server operates in a cluster system, the following console message may be displayed when failover or failback is executed; the failover or failback processing itself is executed normally.

UX:NMCSERVER:ERROR:0506:NMC-Server is already active.

3.3 Netcompo Communication Service

3.3.1 Product overview

Netcompo Communication Service enables the "pass-through" control of FNAS and SNA protocols between the host and clients.

Using Netcompo Communication Service, existing Fujitsu and IBM hosts can be connected with existing FNA and SNA clients (line sharing).

Since host LUs and client LUs are "passed through," the relay function works independently of applications (not only emulators but also HICS and application conversations can be relayed).

With the group-based dynamic LU selection function, host and client LU resources can be used effectively.

3.3.2 Overview of support to a cluster system

When using Communication Service in the cluster system, start Communication Service in advance in each node.

Unless otherwise specified, "Starting Communication Service" means "Installing Communication Service, setting its environment, and starting the system."


The takeover processing is performed in the host and the client as shown below.

Host

Takeover processing is performed by activating or inactivating PU resources in FNA-BASE (secondary station) and SNA-compliant extended option (secondary station) and by switching the resources (network resources).

Client

Takeover processing is performed by activating or inactivating the communication path resources in FNA-BASE (primary station) and SNA-compliant extended option (primary station) and by switching resources (network resources).

Unless otherwise specified, FNA-BASE (secondary station) and SNA-compliant extended option (secondary station) are collectively referred to as FNA-BASE (secondary station); and, FNA-BASE (primary station) and SNA-compliant extended option (primary station) are collectively referred to as FNA-BASE (primary station).

Communication Service supports the following cluster topology in the cluster system:

- Standby mode

- Mutual standby mode

3.3.3 Standby mode

Figure 3.3.1 shows an overview of normal operations in standby mode. The active system is Node 0, and the standby system is Node 1. Define "PU1" as a resource (network address switching target resource) in FNA-BASE (secondary station) and define "PATH1" as that in FNA-BASE (primary station). Start Communication Service in Nodes 0 and 1 (strcmsv).

In normal operation, each client communicates with the host using Communication Service in Node 0. Though Communication Service is also in operation in Node 1, it cannot be used because "PU1" and "PATH1" are inactivated.

Figure 3.3.1 Standby system (normal operation)

The resources corresponding to the service (network resources) are shown below.

Service Resources Node 0 Node 1


SV0 PATH1 (LU1, LU2, LU3, LU4) / PU1 (LU1, LU2, LU3, LU4) Active Standby

Figure 3.3.2 shows the switching operations when a failure occurs in Node 0.

Figure 3.3.2 Switching operation in standby mode

If a failure occurs in Node 0 (the active node), FNA-BASE (secondary station) switches to activate "PU1" in Node 1 (the standby node). In addition, FNA-BASE (primary station) also performs switching processing to activate "PATH1" in Node 1 (the standby node). Because of the above operation, the clients can continue communications using Communication Service in Node 1.

3.3.4 Mutual standby mode

Figure 3.3.3 shows an overview of normal operations in mutual standby mode. "LU1"/"LU2" in Node 0 and "LU3"/"LU4" in Node 1 are the active system. "LU3"/"LU4" in Node 0 and "LU1"/"LU2" in Node 1 are the standby system. Define "PU1" and "PU2" in FNA-BASE (secondary station) and "PATH1" and "PATH2" in FNA-BASE (primary station) as resource (network address) switching target resources, and start Communication Service in Nodes 0 and 1 (strcmsv).

In normal operation, Clients #0 and #1 use Communication Service in Node 0, and Clients #2 and #3 use Communication Service in Node 1 to communicate with the host. At this time, "PU2" and "PATH2" in Node 0 and "PU1" and "PATH1" in Node 1 are inactive. To prevent clients from accessing LU resources in these nodes, define the environment for each client.


Figure 3.3.3 Mutual standby system (normal operation)

The resources corresponding to the service (network resources) are shown below.

Service Resources Node 0 Node 1

SV0 PATH1 (LU1, LU2)/PU1 (LU1, LU2) Active Standby

SV1 PATH2 (LU3, LU4)/PU2 (LU3, LU4) Standby Active

Figure 3.3.4 shows switching operations when a failure occurs in Node 0.


Figure 3.3.4 Switching operations in the mutual standby system

If a failure occurs in Node 0, FNA-BASE (secondary station) performs switching processing to activate "PU1" in Node 1. In addition, FNA-BASE (primary station) also performs switching processing to activate "PATH1" in Node 1. Because of the above operation, clients #0 and #1 can continue communication using Communication Service in Node 1.

Figure 3.3.5 shows the failback processing after restoration of Node 0.


Figure 3.3.5 Failback operation in mutual standby mode

After Node 0 is restored, the operator's failback operation causes FNA-BASE (secondary station) and FNA-BASE (primary station) to execute the failback processing. In this operation, "PU1" and "PATH1" in Node 1 are inactivated, and "PU1" and "PATH1" in Node 0 are activated. After this operation, Clients #0 and #1 can communicate with the host using Communication Service in Node 0. Because of the above operation, the system returns to a normal operating state.

Before executing the failback processing, terminate all applications that run with "LU1" and "LU2" in Node 1.

3.3.5 Environment setting

In the active and standby nodes, the environment definitions of Communication Service and Netcompo FNA-BASE must be the same.

3.3.6 Precautions

Precautions on environment definition

When creating an environment definition of Communication Service in the cluster system, note the following:

- When resources are defined using Communication Service, the service including resources on the host LU must match the service including resources on the client LU.

- As for the active and standby nodes in the same service, the environment definition of Communication Service must be the same.

- In a node where multiple services are operating, set the environment so that it matches the environment definition of each service. In this case, the number of communication paths in Communication Service in each node must not exceed the maximum value.


Operation at switching

Wait until the system switching is complete, and then restart communication by establishing a session from the client or host.

3.4 Netcompo TN Gateway Service

3.4.1 Product overview

Netcompo TN Gateway Service is software that provides the gateway function to the FNA/SNA network for terminals in which a TN6680 or TN3270 emulator operates on TCP/IP.

In this document, Netcompo TN Gateway Service is described as TN Gateway Service.

3.4.2 Overview of support to a cluster system

When TN Gateway Service operates in a cluster system, define in advance an appropriate environment in each node.

Takeover processing is performed by activating or inactivating PU resources in Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION and by switching resources (network resources).

TN Gateway Service supports the following cluster topology in the cluster system:

- Standby mode

- Mutual standby mode

3.4.3 Standby mode

Figure 3.4.1 shows an overview of normal operation in standby mode. The active system is Node 0, and the standby system is Node 1. Define "PU1" as a resource (network address switching target resource) in Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION.

In normal operation, each client communicates with the host using TN Gateway Service in Node 0. TN Gateway Service in Node 1 cannot be used: "PU1" is inactivated, and a connection cannot be established from the terminal because the network address is not switched.

Figure 3.4.1 Standby (normal operation)

The service corresponding to resources (network resources) is listed below.

Service Resources Node 0 Node 1

SV0 PU1 (LU1, LU2, LU3, LU4) Active Standby

Figure 3.4.2 shows a switching operation when a failure occurs in Node 0.


Figure 3.4.2 Switching operation in standby mode

If a failure occurs in Node 0 (the active node), operation is performed in Node 1. "PU1" in Node 1 (the standby node) is activated by the switching processing of Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION. In addition, TN clients can connect to Node 1 by the switching processing of resources (network address), and TN clients can continue communication using TN Gateway Service in Node 1.

3.4.4 Mutual standby mode

Figure 3.4.3 shows an overview of normal operation in mutual standby mode. "LU1"/"LU2" in Node 0 and "LU3"/"LU4" in Node 1 are operating as active systems. "LU3"/"LU4" in Node 0 and "LU1"/"LU2" in Node 1 are operating as standby systems. Define "PU1" and "PU2" as resources (network address switching target resources) in Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION.

In normal operation, TN clients (#0, #1) use TN Gateway Service in Node 0 and TN clients (#2, #3) use TN Gateway Service in Node 1 to communicate with the host. At this time, "PU2" in Node 0 and "PU1" in Node 1 are inactive. To prevent terminals from accessing LU resources in these nodes, define client host names separately between active LU resources and inactive LU resources so that each LU resource is operated with a particular client host name.


Figure 3.4.3 Mutual standby (normal operation)

The services corresponding to resources (network resources) are listed below.

Service Resources Node 0 Node 1

SV0 PU1 (LU1, LU2), (HOST1, HOST2) Active Standby

SV1 PU2 (LU3, LU4), (HOST3, HOST4) Standby Active

Figure 3.4.4 shows a switching operation when a failure occurs in Node 0.


Figure 3.4.4 Switching operation in mutual standby

If a failure occurs in Node 0, TN clients (#0, #1) can connect to Node 1 because Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION perform switching processing to activate "PU1" in Node 1 and switch the resource (network address). Because of the above operation, clients (#0, #1) can continue communication using TN Gateway Service in Node 1.

Figure 3.4.5 shows failback processing after the restoration of Node 0.


Figure3.4.5 Failback operation in mutual standby mode

After Node 0 is restored, the operator's failback operation causes Netcompo FNA-BASE and the Netcompo for SNA EXPANSION OPTION to perform failback processing. In this operation, "PU1" in Node 1 is inactivated, and "PU1" in Node 0 is activated. In addition, the resource (network address) is switched, and TN clients (#0, #1) can communicate with the host using TN Gateway Service in Node 0. Because of the above operation, the system returns to a normal operating state.

Before performing failback processing, terminate all active applications using "LU1" and "LU2" in Node 1.

3.4.5 Environment setting

In the active and standby nodes, the environment definitions of TN Gateway Service and Netcompo FNA-BASE must be the same.

3.4.6 Precautions

Precautions on environment definition

When creating an environment definition of TN Gateway Service in the cluster system, note the following:

- In the settings of the standby mode, the environment definition in each node must be the same.

- In a node where multiple services are operating, set the environment so that it matches the environment definition of each service.

- In the setting of the mutual standby mode, define the LU groups corresponding to the services and the client host groups so that they are associated with each other.

Switching operation

If a failure occurs in the active node, the connection with the TN client is disconnected, and the job is stopped. (How the disconnection appears differs depending on the TN client. For details, see the manual supplied with the client.) To restart the job, wait until the system switching is complete, and then reconnect the TN client to the node that becomes active.


Appendix A Environment Definition Example

This appendix gives an example of the environment definition using a typical application.

- Netcompo NMC Server

A.1 Netcompo NMC Server


System configuration (1:1 standby mode)


Setting

1. Set the takeover network (Cluster management view)
2. Check the takeover network (Cluster management view)
3. Set Netcompo FNA-LAN
4. Set Netcompo FNA-BASE
5. Set Netcompo NMC Server
6. Set service (Cluster management view)


Setting and defining the environment

Setting the takeover network (cluster management view)

From the screen of the top menu, select "SynfinityCluster."

-> From the [SynfinityCluster] screen, select "Cluster management setting."

-> From the [Cluster management-setting menu] screen, select "Takeover network setting."

-> From the [Takeover network name setting] screen, set the following parameters.

- Takeover network name: shd_net_1

- Network type: SHDHost

Note: Default values may be set.

-> From the [Node setting] screen, set the following node names.

- Nodes to be set

node0--- Active node

node1--- Standby node

Note: Set the node name defined in the cluster initial setting.

-> From the [Network interface selection] screen, set the following parameters.

- Type of address to be taken over

Select "MAC address takeover"

Note: Setting of "IP address takeover" is not required.

- Interface

For each of node0 and node1, select the interface "hme0."

-> Setting on the [Takeover network address setting] screen is not required. Proceed to the next screen.

-> From the [Confirmation and registration of takeover network setting information] screen, execute <Register>.

Checking takeover network (Cluster management view)

From the screen of the top menu, select "SynfinityCluster."

-> From the [SynfinityCluster] screen, select "Cluster management."

-> From "Node view" on the [Cluster management] screen, select the resource of the switching LAN.

(The node view displays resources in the hierarchical structure. If a resource of the switching LAN is not displayed, double-click the corresponding resource at a higher level of the hierarchy.)

Node view

Domain0 SHD_Domain0 shd_net_1 (Resource of switching LAN)

-> By selecting "Detail" from "Display" from the pull-down menu when the resource of the switching LAN is selected, the [Detail (resource attribute)] sub-window is displayed.

-> Specify the attribute value displayed in the [Detail (resource attribute)] screen as a network address.

Note: The network address (MAC address) checked here is defined in the /etc/opt/FSUNnet/lndfc/hosts file in the Netcompo FNA-LAN setting.

Netcompo FNA-LAN environment definition (common to all nodes) - /etc/opt/FSUNnet/lndfc/hosts


Note: Set the takeover MAC address referenced in "Checking takeover network" here.

- /etc/opt/FSUNnet/lndfc/lndfc.config

- /etc/opt/FSUNnet/lndfc/networks

- /etc/opt/FSUNnet/lndfc/parameters

- /etc/opt/FSUNnet/cluster/lndfc/lndfc.config_CL

Note: The value of Resource-ID differs depending on the conditions at the time of setting.


Netcompo FNA-BASE environment setting (common to all nodes)

- /etc/opt/FSUNnet/vcp/fnaparm

- /etc/opt/FSUNnet/cluster/vcp/SERVICE0

Note: The file name must be the same as the service name set in the cluster management view.

Netcompo NMC Server setting (common to all nodes)

- /etc/opt/FSUNnet/nmcsv/cudpprm

- /etc/opt/FSUNnet/nmcsv/gwparm

- /etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv


Note: Specify the setting set at installation as-is (editing is not required).

- /etc/opt/FJSVcluster/regrsc/script.FSUNnmcsv

Note: Specify the setting set at installation as-is (editing is not required).

Setting the service (Cluster management view)

From the screen of the top menu, select "SynfinityCluster."

-> From the [SynfinityCluster] screen, select "Cluster management setting."

-> From the [Cluster management-setting menu] screen, select "Service setting."

-> From the [Service name-cluster topology setting] screen, set the following parameters:

- Service name: SERVICE0

- Cluster topology: Standby

-> From the [Node setting (standby class)] screen, set the following parameters:

- Domain: Domain0

- Active node: node0

- Standby node: node1

Note: Set the information defined in the initial cluster setting.

-> From the [Resource setting] screen, register the following resources:

- Resources to be set

shd_net_1 --- Resource of switching LAN

FNALAN_hme0 --- Resource of FNA-LAN (hme0)


FNA2_SERVICE0 --- Resource of FNA-BASE (PU1001)

NmcsvCl1 --- Resource of NMC Server

-> Setting on the [Application launch-stop priority] screen is not required. Proceed to the next screen.

-> From the [Confirmation and registration of service setting information] screen, execute <Register>.

Note: By this execution, setting processing is performed for all related nodes.


Appendix B Command Reference


Command reference


clntc_bkrs command

Command

clntc_bkrs

Format

/opt/FSUNnet/bin/clntc_bkrs -b | -r [ -f save-name ] [ -d storage-destination ]

Description

This command saves the network environment definition file of the node that issued this command. This command also restores the saved network environment definition file to the node that issued this command.

Save and restore processing must be individually performed in all nodes that constitute a cluster.

The file is saved in /var/tmp or in the specified storage destination in cpio format. A directory or device can be specified as the storage destination. A nonexistent directory or device cannot be specified.

If the network environment definition file is saved by specifying a directory, the saved information is deleted when restoration completes normally. If the file is saved on a device, the information is not deleted.
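The effective location of the saved information follows the documented defaults: storage destination /var/tmp and save name clntc_backup when -d and -f are omitted. The sketch below only derives that path for checking; it does not invoke clntc_bkrs itself.

```shell
# Derive where clntc_bkrs will place the saved information, using the
# documented defaults: directory /var/tmp and save name clntc_backup
# when -d / -f are omitted. This only computes the path; it does not
# run clntc_bkrs.
save_path() {
  dir="${1:-/var/tmp}"        # -d storage-destination (default: /var/tmp)
  name="${2:-clntc_backup}"   # -f save-name (default: clntc_backup)
  printf '%s/%s\n' "$dir" "$name"
}

save_path                                   # prints: /var/tmp/clntc_backup
save_path /export/home/fujitsu BKUPDAT.001  # prints: /export/home/fujitsu/BKUPDAT.001
```

The second call mirrors the usage example later in this reference, where -d /export/home/fujitsu and -f BKUPDAT.001 are specified.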

Options

Option Description

-b Saves the file.

-r Restores the file.

-f save-name Specify the save name. This option is effective only when a directory is specified as the storage destination with the -d option. If omitted, clntc_backup is assumed.

-d storage-destination Specify the storage destination of the network environment definition file.

Return values

0 Normal termination

1 Abnormal termination

Output messages

Message Meaning

Clntc_bkrs: Not super user. Operate as a super user.

Clntc_bkrs: Parameter error. There is an error in parameter specification.

Clntc_bkrs: Storage-destination no such file or directory. The specified storage destination cannot be found.

Clntc_bkrs: Save-name already exist. The saved information of the same name already exists.

Clntc_bkrs: Backup file not found. The saved information is not found.

Clntc_bkrs: Backup error. A problem occurred during save processing.

Clntc_bkrs: Restore error. A problem occurred during restore processing.

Clntc_bkrs: Normal end. Command terminated normally.

Usage examples

# clntc_bkrs -b

Saves the environment definition file. (Saved information: /var/tmp/clntc_backup)

# clntc_bkrs -r

Restores the environment definition file. (Save information of restoration target: /var/tmp/clntc_backup)

# clntc_bkrs -b -d /export/home/fujitsu -f BKUPDAT.001


Saves the environment definition file in the specified directory with the saved information name "BKUPDAT.001."

# clntc_bkrs -r -d /export/home/fujitsu -f BKUPDAT.001

Restores the environment definition file from /export/home/fujitsu/BKUPDAT.001. (After restoration is completed, saved information is automatically deleted.)

# clntc_bkrs -b -d /dev/rmt/0c

Saves the environment definition file on tape media.

# clntc_bkrs -r -d /dev/rmt/0c

Restores the environment definition file from the tape media that has the saved file. (Saved information is not deleted.)


clntc_syncfile command

Command

clntc_syncfile

Format

/opt/FSUNnet/bin/clntc_syncfile [ -r ]

Description

This command copies the network environment definition file and activation procedure of the node that issued this command to all nodes. By specifying an option, the resources are automatically registered after copying processing is completed.

In the N:1 standby system, however, do not automatically register the resources after a copying operation. If a resource is automatically registered in the N:1 standby system, the resources must be registered again by executing the resource registration command of each product after the definition is corrected in each node.
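The N:1 caution above can be expressed as a small guard: add -r (automatic resource registration after copying) only when the topology is not N:1 standby. How the topology is determined is site-specific, so in this illustrative sketch it is simply passed in as an argument.

```shell
# Guard sketch for the N:1 caution: include -r (automatic resource
# registration) only when the cluster is NOT an N:1 standby system.
# The topology string is an illustrative input, not something
# clntc_syncfile itself reads; only the command line is assembled.
build_syncfile_cmd() {
  topology="${1:-}"           # e.g. "1:1" or "N:1" (illustrative values)
  cmd="/opt/FSUNnet/bin/clntc_syncfile"
  if [ "$topology" != "N:1" ]; then
    cmd="$cmd -r"
  fi
  printf '%s\n' "$cmd"
}

build_syncfile_cmd "1:1"    # prints: /opt/FSUNnet/bin/clntc_syncfile -r
build_syncfile_cmd "N:1"    # prints: /opt/FSUNnet/bin/clntc_syncfile
```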

Options

Option Description

No specification Copies the environment definition file and activation procedure.

-r Copies the environment definition file and activation procedure and registers the resources.

Return values

0 Normal termination

1 Abnormal termination

Output messages

Message Meaning

clntc_syncfile: Not super user. Operate as a super user.

clntc_syncfile: Parameter error. There is an error in parameter specifications.

clntc_syncfile: Normal end. Command terminated normally.

Usage examples

# clntc_syncfile

Copies the environment definition file and activation procedure to all nodes.

# clntc_syncfile -r

Copies the environment definition file and activation procedure to all nodes and automatically registers resources.


nmcsv_syncfile command

Command

nmcsv_syncfile

Format

/opt/FSUNnet/bin/nmcsv_syncfile [ -a | -p ] [ -n node-name ]

Description

This command is for the Netcompo NMC server.

This command copies the environment definition file and the state transition procedure of the node that issued this command to all nodes or to a specified node. Depending on the specified option, only the environment definition file or only the state transition procedure can be copied.

[Range of this command] (AA: Supported, -: Not supported, Not required: Correction and registration not required)

                                      Standby       Standby       Mutual standby  Parallel
                                      (1:1)         (N:1)         (1:1)           (N)
Copying environment definition file   AA            AA (*1)       AA              -
Copying state transition procedure    Not required  AA (*1)       AA              -

(*1) For the standby node, individual corrections are required.

Options

Option Description

No specification Copies only the environment definition file for the NMC server.

-p Copies only the state transition procedure for the NMC server.

-a Copies both the environment definition file and the state transition procedure for the NMC server.

-n node-name Specifies the node name of the copy destination. If omitted, information is copied to all nodes.

Return values

0 Normal termination

1 Abnormal termination

Output messages

Messages Meaning

nmcsv_syncfile: Not super user. Operate as a super user.

nmcsv_syncfile: Invalid argument.

Usage: nmcsv_syncfile [ -a | -p ] [-n node]

There is an error in parameter specification.

nmcsv_syncfile: Package not installed. The NMC server is not installed.

nmcsv_syncfile: /etc/opt/FSUNnet/nmcsv/cudpprm copy failed. An attempt to copy the /etc/opt/FSUNnet/nmcsv/cudpprm file failed.

nmcsv_syncfile: /etc/opt/FSUNnet/nmcsv/gwparm copy failed. An attempt to copy the /etc/opt/FSUNnet/nmcsv/gwparm file failed.

nmcsv_syncfile: /etc/opt/FSUNnet/nmcsv/hdcparm copy failed. An attempt to copy the /etc/opt/FSUNnet/nmcsv/hdcparm file failed.

nmcsv_syncfile: /etc/opt/FSUNnet/nmcsv/plopparm copy failed. An attempt to copy the /etc/opt/FSUNnet/nmcsv/plopparm file failed.

nmcsv_syncfile: /etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv copy failed. An attempt to copy the /etc/opt/FJSVcluster/rc.d/SystemState2/cluster.FSUNnmcsv file failed.

nmcsv_syncfile: Normal end. Command terminated normally.

Usage examples

# nmcsv_syncfile

Copies the environment definition file to all nodes.

# nmcsv_syncfile -n node-name

Copies the environment definition file to the specified node.

(This is useful, for example, for copying the file to the N:1 standby node.)

# nmcsv_syncfile -p

Copies the state transition procedure to all nodes.

# nmcsv_syncfile -p -n node-name

Copies the state transition procedure to the specified node.

(This is useful, for example, for copying the file to the N:1 standby node.)

# nmcsv_syncfile -a

Copies the environment definition file and state transition procedure to all nodes.

# nmcsv_syncfile -a -n node-name

Copies the environment definition file and state transition procedure to the specified node.

(This is useful, for example, for copying the file to the N:1 standby node.)


nmcsv_rid command

Command

nmcsv_rid

Format

/opt/FSUNnet/bin/nmcsv_rid -a -k registration-ID-number | -d [ -n node-name ]

Description

This command is for the Netcompo NMC server.

This command registers or deletes the resource ID on all nodes or on a specified node.

Options

Option Description

-a Registers the resource.

-d Deletes the resource.

-k registration-ID-number Specify the registration ID number of the resource as a number.

-n node-name Specify the node name to be registered or deleted. If omitted, the processing is performed for all nodes.

Return values

0 Normal termination

1 Abnormal termination

Output message

Messages Meaning

nmcsv_rid: Not super user. Operate as a super user.

nmcsv_rid: Invalid argument.

Usage: nmcsv_rid -a -k keynum | -d [-n node] There is an error in parameter specification.

nmcsv_rid: claddprocrsc NmcsvCl registration-ID-number failed on node-name.

An attempt to register the resource failed.

nmcsv_rid: cldelprocrsc resource-ID failed on node-name. An attempt to delete the resource failed.

nmcsv_rid: Normal end. This command terminates normally.

Usage examples

# nmcsv_rid -a -k registration-ID-number

Registers the specified registration ID number to all nodes.

# nmcsv_rid -a -k registration-ID-number -n node-name

Registers the specified registration ID number to the specified node.

(This is useful for registering the ID number for each destination in the N:1 standby node.)

# nmcsv_rid -d

Deletes the registered resource ID from all nodes.

# nmcsv_rid -d -n node-name

Deletes the registered resource ID from the specified node.

(This is useful for deleting the ID number for each destination in the N:1 standby node.)
