A05: High Availability and Disaster Recovery for IBM PureApplication System
© IBM Corporation 2011
Kyle Brown | Distinguished Engineer | IBM
Agenda
• DR and HA Definitions
• HA within the IPAS rack
• HA Approaches for WAS IPAS
• DB2 HADR in IPAS
• MQ HA considerations in IPAS
• DR strategies for IPAS
Executive Summary
• High availability and disaster recovery in PureApplication System are accomplished using the tools and procedures that most customers are already familiar with
• What changes are the automation aspects, which can actually make the process more repeatable and easier for many customers
DEFINITIONS
Definitions
• High availability (HA)
  • Application cannot undergo an unplanned outage for more than a few seconds/minutes at a time, but can do so as often as necessary, or may be down for a few hours for scheduled maintenance
  • Short of continuous availability, which does not allow for any outage
  • Two scenarios: inter-rack, intra-rack
• Disaster recovery (DR)
  • The reconstruction of your physical production site in an alternate physical site, occurring after the loss of your primary data center
  • The process of bringing up servers and applications, in priority order, to support the business from the alternate site
  • This environment may be substantially smaller than the entire production environment, as only a subset of production applications demand DR
Definitions
• Recovery Point Objective (RPO)
  • The maximum tolerable period in which data might be lost from an IT service due to a major incident
• Recovery Time Objective (RTO)
  • The duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity
• Accepted service tiers for business applications
  • Tier 1: Continuous availability required, no loss of data tolerated
  • Tier 2: A few minutes of outage tolerated. Data restorable in time
  • Tier 3: A few hours of outage. Some in-flight transaction loss is acceptable
  • Tier 4: Application could be unavailable for a day or more
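The tier definitions above can be encoded as a simple lookup. A minimal sketch, where the numeric thresholds are a plain reading of the tier descriptions rather than official IBM criteria:

```python
# Map an application's tolerated outage (RTO) and data loss (RPO), in seconds,
# to the service tiers above. The thresholds are illustrative assumptions,
# not official IBM criteria.

def service_tier(rto_seconds: float, rpo_seconds: float) -> int:
    if rto_seconds == 0 and rpo_seconds == 0:
        return 1  # continuous availability, no data loss tolerated
    if rto_seconds <= 10 * 60 and rpo_seconds <= 10 * 60:
        return 2  # a few minutes of outage, data restorable in time
    if rto_seconds <= 12 * 3600:
        return 3  # a few hours of outage, some in-flight loss acceptable
    return 4      # unavailable for a day or more is tolerable

print(service_tier(0, 0))             # Tier 1
print(service_tier(5 * 60, 60))       # Tier 2
print(service_tier(4 * 3600, 3600))   # Tier 3
print(service_tier(2 * 86400, 86400)) # Tier 4
```

In practice the tier is a business decision made per application; a function like this only makes the agreed thresholds explicit.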
HIGH AVAILABILITY FEATURES IN IPAS
High Availability in IPAS
• PureApplication System has intra-rack HA by design
• Redundant hardware, networking and storage
• No single points of failure
• Recovery from hardware failures with workload mobility
• Additional capacity can be added & utilized without any service interruption
• Additional local racks can increase HA further
• WebSphere clustering across racks can be configured
• MQ and other patterns can also be clustered similarly
Hardware high availability
• Management nodes and Virtualization mgmt node
• Both have redundant backup servers
• Floating IP address is assigned to the active Management Node that has management (workload deployer) function
• Network controllers
• Switches and cabling are redundant (2 of each).
• Failure of 1 leads to reduced bandwidth, not service
• Storage controllers
• Each controller has two canisters that can service all traffic needed
• If one fails, the other handles all I/O
• Storage
• SSD and HDD storage is configured in RAID5 + spares
• Tolerates 2 concurrent drive failures without data loss (after spares are in use)
• Compute nodes
• Management system will route around failed DIMMs or cores
• If entire node fails, PureApplication System will try to move the VM to another compute node within its Cloud Group if free space is available – this is called “Rebalance or Workload evacuation”
• If limited resources are available on other nodes to handle the extra load, VMs are started based on their priorities
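The rebalance/evacuation behavior described above can be sketched as a priority-ordered placement loop. This is a hedged illustration of the idea only, not PureApplication System's actual placement algorithm:

```python
# Sketch of "workload evacuation": when a compute node fails, restart its VMs
# on surviving nodes in priority order, as far as free capacity allows.
# Illustrative only; this is not IBM's actual placement logic.

def evacuate(failed_vms, free_capacity):
    """failed_vms: list of (name, priority, size); free_capacity: {node: size}.
    Higher priority is restarted first. Returns (placements, not_started)."""
    placements, not_started = {}, []
    for name, priority, size in sorted(failed_vms, key=lambda v: -v[1]):
        target = next((n for n, free in free_capacity.items() if free >= size), None)
        if target is None:
            not_started.append(name)       # insufficient capacity: VM stays down
        else:
            free_capacity[target] -= size  # claim the space on the target node
            placements[name] = target
    return placements, not_started

placed, down = evacuate(
    [("web1", 5, 4), ("batch1", 1, 8), ("db1", 9, 8)],
    {"node2": 8, "node3": 6},
)
print(placed, down)
```

With the capacities above, the high-priority database VM lands first, the web VM fits on the remaining node, and the low-priority batch VM stays down until capacity returns.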
Virtual applications – WebSphere App. Server failover and recovery
• WebSphere Application Server as part of a virtual application deployment is considered non-persistent (should not have any state in the VM)
• Hence in case of VM failure, PureApplication System can re-create another VM with no side effects

Type of failure: WebSphere App. Server failure within the VM (VM is still running)
• What happens: PureApplication System will monitor WebSphere process failures and will restart WebSphere once a day (to prevent spinning of middleware). If scaling is enabled, PureApplication System will start another instance if the SLA is not satisfied.
• Deployer action: None needed for recovery. The deployer needs to follow up on this scenario to understand the cause of failure using SSH, logs, and so on.

Type of failure: VM containing WebSphere fails
• What happens: PureApplication System detects the VM failure and will re-spin another VM, assigning a new IP address (WebSphere in a virtual application is assumed to be non-persistent). If scaling, routing or caching policies are enabled, PureApplication System will link the instance to the appropriate shared service.
• Deployer action: None needed – this is handled by PureApplication System.
Virtual systems – WebSphere App. Server failover and recovery
• This is similar to traditional WebSphere Application Server failure scenarios and is handled like the normal case – PureApplication System does not take any special action

Type of failure: WebSphere process failure within the VM
• What happens: PureApplication System does not monitor WebSphere process failures and will not restart WebSphere. If a WebSphere application server node in a Network Deployment topology goes down, the Node Agent will restart the WebSphere node – normal WebSphere function. If the Node Agent or Deployment Manager fails, follow the normal WebSphere debug procedure.
• Deployer action: The deployer needs to look into the WebSphere logs, can log into the VM from the PureApplication System console and restart WebSphere as needed, and should look at the monitoring data.

Type of failure: VM containing WebSphere fails
• What happens: PureApplication System detects the VM failure, but there is no VM recovery.
• Deployer action: Try restarting the VM, or add another clone of the Virtual Machine instance that represents the WebSphere node.
Virtual applications – DBaaS failover and recovery
• DBaaS (DB2) VM as part of a virtual application is considered persistent
• Hence in case of failure, PureApplication System will not create another Virtual Machine

Type of failure: DB2 failure within the VM (VM is still running)
• What happens: PureApplication System does not monitor DB2 failures and hence does not attempt to restart DB2.
• Deployer action: Look into the DB2 logs to identify the failure, then SSH into the VM and start DB2, or restart the VM.

Type of failure: VM containing DB2 fails
• What happens: PureApplication System detects the VM failure and will restart the DB2 VM once (only) at the same IP address (since DB2 is considered persistent).
• Deployer action: If the VM does not come up after 1 retry, the deployer will need to create a new DBaaS instance (create a new VM) and then restore the DB2 data from TSM backup (hence backup is essential).
WEBSPHERE HA IN IPAS
WAS HA – intra site
[Diagram: Deploy Pattern 1 – Deployment Manager and application server nodes – on the active (primary) rack. Deploy Pattern 2 – additional application server nodes federated into the Dmgr on the primary – on the active (secondary) rack. A Load Balancer fronts both racks, and a shared database is accessed from both sites. Each deployed node is a virtual appliance (metadata, application server or HTTP server, operating system).]
Creating your patterns
• Step 1: Create a Clustered Pattern on IPAS A
• Step 2: Create a Custom Node Pattern on IPAS B
  • Make sure that the Federate node drop-down is selected
Deploying your patterns
• Step 3: Deploy the Cluster Pattern on IPAS A
  • Note the Deployment Manager's hostname
• Step 4: Deploy the Custom Node Pattern
  • Make sure the Deployment Manager Hostname under the Custom Nodes properties matches the hostname from the previous step
Pros and Cons of the Single-Cell approach
• The single-cell approach has both advantages and disadvantages, but the disadvantages are enough that we do not recommend it for cross-site use
• Advantages
  • Single point of administration
  • Only need to perform administration actions once
• Disadvantages
  • Single point of failure (Dmgr)
    • The system will continue to run, although you cannot make any administration changes
  • Susceptible to "split brain" issues when network connectivity between racks is cut
  • Core groups in particular are susceptible to potentially long split and reconnect times in large cells
WAS HA – cross site
[Diagram: Deploy Pattern 1 – Deployment Manager server and application servers – on the active (primary) rack. Deploy Pattern 1', a clone of Pattern 1 (Export/Import), on the active (secondary) rack. A Load Balancer fronts both racks, and a shared database is accessed from both sites.]
Exporting and Importing the pattern
[Diagram: patterns with images, scripts and add-ons are exported from the active (primary) rack to an SSH storage server, then imported to the active (secondary) rack.]
• Recommended to set up an external SSH storage server, accessible from both racks, for Export/Import tasks
• The server should have enough space to host patterns, images and scripts during the export/import tasks
• In a typical environment, 100 GB of space should be sufficient, though this can change based on the size of the images you are using and how frequently you are transferring patterns and images
• Download and install the CLI from IBM PureApplication System, and configure it to work with the racks
Deployment through Normal Operation
[Diagram, primary rack: 1. The WAS cluster pattern (Dmgr, WAS nodes, HTTP servers, shared database) deploys on the primary. 2. The CLI runs a script to export the cluster pattern to off-rack storage. 5. Routing rules are manually configured on the primary load balancer for the HTTP server addresses and ports (must be done after each pattern deployment).]
[Diagram, secondary rack: 3. The CLI runs a script to import the cluster pattern from off-rack storage. 4. The WAS cluster pattern deploys on the secondary rack. 5. Routing rules are manually configured on the secondary load balancer in the same way.]
DB2 HADR
Virtual systems – DB2 failover and recovery
• This is similar to traditional DB2 failure scenarios and is handled like the normal case – PureApplication System does not take any special action

Type of failure: DB2 failure within the VM (VM is still running)
• What happens: PureApplication System does not monitor DB2 failures and hence does not attempt to restart DB2.
• Deployer action: View the DB2 logs, log into the VM to fix or restart DB2, and follow normal best practice for DB2 recovery. Use DB2 HADR.

Type of failure: VM containing DB2 fails
• What happens: PureApplication System detects the VM failure, but there is no restart of the VM in a virtual system.
• Deployer action: Look into the logs, determine the failure, and restart the VM or create a new VM. Standard DB2 backup/restore will need to be implemented.
A simple DB2 HADR Topology for two IPAS racks
[Diagram: Rack A hosts the DB2 HADR Primary; Rack B hosts the DB2 HADR Secondary.]
Creating the Standby DB2 HADR
1. On the Secondary system, create a virtual system using the "DB2 Enterprise HADR Standby" part
2. Deploy the virtual system
3. View the deployed pattern and write down the information about the standby, such as the password for root, the port, and the host (e.g. Network interface 1: ipas-lpar-008-020.purescale.raleigh.ibm.com (172.17.8.20))
Creating and configuring the Primary DB2 HADR
1. On the Primary system, create a virtual system using the "DB2 Enterprise HADR Primary" part
2. Deploy the virtual system
3. Configure the virtual part: enter the hostname you wrote down in the standby hostname field, the standby root password, and the standby DB2 service port
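The HADR parts automate what amounts to standard DB2 HADR database configuration. A minimal sketch of the underlying commands, built from the values collected in the steps above; the database name, hostnames, ports, and instance name here are illustrative placeholders:

```python
# Sketch: build the standard DB2 commands that HADR configuration amounts to.
# Database name, hostnames, ports, and instance names are illustrative
# placeholders, not values from any real deployment.

def hadr_commands(db, local_host, remote_host, local_svc, remote_svc,
                  remote_inst, role):
    """Return the db2 commands to configure and start HADR for one side."""
    cfg = (f"db2 update db cfg for {db} using "
           f"HADR_LOCAL_HOST {local_host} HADR_LOCAL_SVC {local_svc} "
           f"HADR_REMOTE_HOST {remote_host} HADR_REMOTE_SVC {remote_svc} "
           f"HADR_REMOTE_INST {remote_inst} HADR_SYNCMODE NEARSYNC")
    start = f"db2 start hadr on db {db} as {role}"
    return [cfg, start]

primary = hadr_commands("APPDB", "rack-a-db.example.com", "rack-b-db.example.com",
                        "3700", "3700", "db2inst1", "primary")
for cmd in primary:
    print(cmd)
```

As in the slides, the standby side is prepared first (started `as standby`, with the host/port roles mirrored) before HADR is started on the primary.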
MQ HIGH AVAILABILITY
Achieving MQ High Availability in IPAS
• MQ Multi-instance is a feature added in MQ 7.0 to provide basic failover support
• Can be used in situations like IPAS where an HA coordinator is not a good option
• Queue manager can be started on different machines
• Active instance
• Standby instance
• Shared Queue Manager Data retained on network storage
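Multi-instance queue managers arbitrate which instance is active by holding locks on files in the shared queue manager data directory; the standby blocks waiting for the lock and takes over when the active instance dies. The idea can be sketched with a POSIX file lock (the lock-file path below is illustrative, not MQ's actual on-disk layout):

```python
# Sketch of active/standby arbitration via an exclusive file lock, the same
# basic idea MQ multi-instance queue managers use on their shared NFS data.
# The lock-file path is illustrative, not MQ's actual file layout.
import fcntl
import os
import tempfile

lock_path = os.path.join(tempfile.gettempdir(), "qmgr-demo.lock")

def try_become_active(path):
    """Return an open handle if we won the lock (active), else None (standby)."""
    fh = open(path, "w")
    try:
        fcntl.flock(fh, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fh          # we hold the lock: run as the active instance
    except BlockingIOError:
        fh.close()
        return None        # someone else is active: wait as standby

active = try_become_active(lock_path)   # first instance wins the lock
standby = try_become_active(lock_path)  # second instance must stand by
print(active is not None, standby is None)
```

A real standby waits (blocking lock) rather than giving up, which is why releasing the shared storage lease promptly on failure matters for failover time over NFS.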
Highly Available MQ and Message Broker Configuration
[Diagram: nodes bk01.ipas.ibm.com (queue manager QM.BK.01, broker BK.01) and bk02.ipas.ibm.com (queue manager QM.BK.02, broker BK.02) in cluster CLUS.BK, each an MIQM pair, sharing /opt/MQShareFS and /opt/MBShareFS from node nfs.ipas.ibm.com.]
• Two Message Brokers are configured in an Active-Active HA pattern on nodes bk01.ipas.ibm.com and bk02.ipas.ibm.com
• The Queue Managers hosting the Brokers are clustered (CLUS.BK) for load balancing of messages
• The Queue Manager and Broker components are shared from the NFS 4 node nfs.ipas.ibm.com
• Multi-Instance Queue Manager (MIQM) technology detects and fails over the Broker and Queue Manager
• Note that in a real customer environment, the shared storage would be held outside the IPAS box

Details of MQ Multi Instance Deployment
• Images:
  • WebSphere Message Broker Hypervisor Edition 8.0.0.1
  • WebSphere MQ Hypervisor Edition 7.5.0.1 (when using only MQ)
• Parts:
  • WebSphere Message Broker Advanced
  • WebSphere MQ – Basic (when using only MQ)
  • Core OS (RHEL part for NFS Server)
• Scripts:
  • pasconn.addgroup – Create or modify a group with the specified group id
  • pasconn.adduser – Create or modify a user with the specified user id
  • pasconn.delmb – Delete the default Message Broker on the node
  • pasconn.nfsshare – Create and share the NFS directory
  • pasconn.nfsmount – Create and mount the shared NFS directory
  • pasconn.hamb – Create an MIQM Queue Manager and Broker primary and standby on the remote node
  • pasconn.clusqmgr – Create a cluster repository on the Queue Manager and join the other repository
DISASTER RECOVERY
Disaster Recovery
• Business critical applications that require DR can be hosted on PureApplication systems
• Different techniques are available to cope with Tier 1, 2 and lower tier requirements
• Specific scripting and configuration will be necessary for each product in the portfolio
• DR details are different for WAS, MQ, DB2, etc.
• PureApplication integrates with existing client HA/DR capabilities in their environments
• Network load balancing and failover
• Database architectures
• Existing backup/recovery tools
DR basics
• DR in PureApplication System consists of three basic steps
  • Replicate the management data**
  • Replicate the application state*
  • Redirect network traffic from the active (primary) to the standby (backup) copy
• What distinguishes each tier is the toleration for loss of the application data; that dictates different approaches that meet different levels of RPO

* Application state includes messages in queues, database contents, transaction logs, etc.
** Management data includes pattern definitions, user ids, cloud group definitions, etc.
DR solution ranges
[Chart: solutions plotted against RPO time, from zero through seconds, minutes, hours and days – shared file systems sit near zero, file replication spans roughly seconds to minutes, and backup and restore spans hours to days.]
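The chart's mapping from service tier to DR technique, as detailed on the following slides, can be written down directly. A small sketch; the tier-to-technique pairings come straight from the "Applies to Tier …" labels, and the wording of the descriptions is mine:

```python
# Map service tiers to the DR techniques covered on the following slides.
# Pairings follow the "Applies to Tier ..." labels; descriptions are paraphrased.

DR_APPROACH = {
    1: "Shared file system (warmer standby, middleware deployed but stopped)",
    2: "File replication (warm standby)",
    3: "File replication or backup and restore",
    4: "Backup and restore (cold standby)",
}

def dr_approach(tier: int) -> str:
    if tier not in DR_APPROACH:
        raise ValueError(f"unknown service tier: {tier}")
    return DR_APPROACH[tier]

print(dr_approach(1))
```

Tier 3 deliberately appears under two techniques: it sits at the overlap of the file-replication and backup-and-restore RPO ranges in the chart.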
DR – Backup and Restore
Applies to Tier 3 & 4
[Diagram: the pattern on the active (primary) rack backs up application state to offsite storage; on failure, deploy a copy of the pattern on the cold standby (secondary), restore and restart from the backup, and make a manual network change.]
• Deploy patterns on both systems (Export/Import)
• Back up and restore critical application state using backup solutions like Tivoli Storage Manager on a regular basis
• Offsite backup storage is a standard best practice
DR – File Replication
Applies to Tier 2 & 3
[Diagram: the pattern and database run on the active (primary) rack; a copy of the pattern is deployed and quiesced on the warm standby (secondary), with the application state copied over and a database replica maintained; failover requires a manual network change.]
• Deploy patterns on both systems (Export/Import)
• Install a file synchronization solution on both racks, actively copying selected data from primary to secondary
• Use existing replication solutions like DB2 HADR where available, on or off PAS
DR – Shared File System
Applies to Tier 1
[Diagram: patterns on both racks keep application state on a shared file system; the database is replicated; middleware on the warmer standby (secondary) is deployed but stopped; failover requires a manual network change and starting the middleware on the standby.]
• Develop custom patterns utilizing a shared file system, deploy on both racks, stop on the backup
• On failure, change over the network and start the middleware on the standby server
SUMMARY
We’ve seen
• What DR and HA are
• IPAS’s HA features
• HA Approaches for WAS IPAS
• DB2 HADR in IPAS
• MQ HA considerations in IPAS
• DR strategies for IPAS
IBM WebSphere Technical Convention 2012
Questions?
As a reminder, please fill out a session evaluation
IBM WebSphere Technical Convention 2012 – Berlin, Germany
Copyright Information
© Copyright IBM Corporation 2012. All Rights Reserved. IBM, the IBM logo, ibm.com, AppScan, CICS, Cloudburst, Cognos, CPLEX, DataPower, DB2, FileNet, ILOG, IMS, InfoSphere, Lotus, Lotus Notes, Maximo, Quickr, Rational, Rational Team Concert, Sametime, Tivoli, WebSphere, and z/OS are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml.
Coremetrics is a trademark or registered trademark of Coremetrics, Inc., an IBM Company.
SPSS is a trademark or registered trademark of SPSS, Inc. (or its affiliates), an IBM Company.
Unica is a trademark or registered trademark of Unica Corporation, an IBM Company.
Java and all Java-based trademarks and logos are trademarks of Oracle and/or its affiliates. Other company, product and service names may be trademarks or service marks of others. References in this publication to IBM products and services do not imply that IBM intends to make them available in all countries in which IBM operates.