

IBM InfoSphere Data Replication 11.3.3.1 Change Data Capture (CDC) WebHDFS


© IBM Corporation 2015. All Rights Reserved.

Disclaimer: Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision. The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.


WebHDFS Support Overview

▪ WebHDFS support utilizes REST APIs
– CDC is installed outside of the Hadoop cluster, which provides the following benefits:
• When a cluster node fails, CDC will not be affected
• Changes/upgrades to the Hadoop cluster will not impact the server where the CDC target engine is running
– Allows CDC to target any Hadoop distribution

[Architecture diagram: a source application writes to the database logs (online/archive); the CDC Capture engine on the source server reads the logs and sends changes over TCP/IP transport to the CDC Apply engine on the CDC target server, which writes to the Hadoop cluster via WebHDFS over https. A unified GUI provides administration and monitoring.]


WebHDFS Support Overview

▪ No restriction on what underlying file system is being used
– For instance, supports replicating to Hadoop on GPFS

▪ Allows CDC to interact with a Hadoop install that is configured to use Kerberos security


Installation information for WebHDFS Support

▪ As of IIDR 11.3.3.2, CDC moved to using a single installer per platform
▪ Download the latest build for the server platform you require from Fix Central: http://www-933.ibm.com/support/fixcentral/
▪ When installing, select the IBM InfoSphere DataStage option
▪ Note: The CDC WebHDFS target engine is packaged with the CDC DataStage target engine. DataStage is not used as part of the solution.


WebHDFS Create CDC Instance

▪ The first step after installing the CDC DataStage target engine is to create the WebHDFS instance
• Since there is no database associated with an HDFS instance, a "special" user named 'tsuser' will be created. You need to provide a password for this user
• Later, you will need to enter tsuser and the password when you configure the user in Access Manager for the datastore


WebHDFS Set Datastore Properties

Note: enter 'tsuser' and the password you provided when you created the CDC HDFS instance


WebHDFS Support Configuration

▪ New WebHDFS option is available when mapping tables


WebHDFS Support Configuration…

Specify the name of the directory where the HDFS files will reside.

Note: the server name is specified in the Hadoop Properties.


Configuring Hadoop Properties for the Subscription

▪ WebHDFS connection information is specified in the Hadoop Properties
– Supports Simple and Kerberos authentication
– Supports both http and https

▪ Note that the fully qualified connection string must be supplied, including the /webhdfs/v1/
▪ Additional examples by Hadoop service for WebHDFS with default configuration:
– Through HttpFS proxy : BigInsights 3.0
• http://<HOSTNAME>:14000/webhdfs/v1/
• https://<HOSTNAME>:14443/webhdfs/v1/
– Through Knox gateway : BigInsights 4.0
• https://<HOSTNAME>:8443/gateway/default/webhdfs/v1/
– Directly to the HDFS NameNode : rarely permitted in production
• http://<HOSTNAME>:50070/webhdfs/v1/
• https://<HOSTNAME>:50470/webhdfs/v1/


Configuring Hadoop Properties for the Subscription…

▪ The following illustrates the configuration to utilize Kerberos authentication:

▪ Note that the fully qualified connection string must be supplied, including the /webhdfs/v1/

▪ Principal:
– Specify the default principal (which can be displayed using klist)

▪ Keytab Path:
– Specify the fully qualified path, including the keytab file name (a quick keytab sanity check is shown below)
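Before configuring the subscription, it can be worth verifying the keytab itself. The following klist invocation is a generic Kerberos check, not an IIDR-specific tool; the path and principal are placeholder examples, and the exact output layout varies by Kerberos implementation:

/home/khjang$ klist -kt /db/keytab/biadmin.keytab
Keytab name: FILE:/db/keytab/biadmin.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------
   1 10/28/15 20:00:00 biadmin/bigdata.example.com@EXAMPLE.COM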


Naming Convention of Files

▪ CDC uses the following convention to name the HDFS flat files that are produced during replication (an example is shown below)
– (_)[Table].D[Date].[Time][# Records]
• _ = currently open HDFS file; removed when completed
• [Date] = Julian date (year, day number within year)
• [Time] = hh24mmss when the flat file was created (in GMT)
• [# Records] = optionally, the number of records can be added

▪ For those who are familiar with standard IIDR flat file production, there are some behavior differences with IIDR HDFS files compared with standard flat file production
– File prefix is different
• HDFS uses _ instead of @ for the working file
– Fields are not quoted in files produced in HDFS
– HDFS doesn't create a [Table].STOPPED file when the subscription is stopped


HDFS Record Format

▪ The output file written to HDFS contains an audit trail of what occurred on the source system
– Each record is inserted into HDFS, and additional meta-columns are included with each record

2015-07-15 22:09:46,6674163,I,GSAKUTH,\N,\N,\N,4381 Kelly Ave,San Jose,CA

In the sample record above:
– "When": 2015-07-15 22:09:46 is the timestamp of when the record changed, and 6674163 is the transaction identifier
– "What": I indicates what the change was (Insert, Update, Delete)
– "Who": GSAKUTH is the user who made the change
– The \N values are nulls, since the before image of an insert is empty


HDFS Record Format – Meta column detail

– DM_TIMESTAMP - the timestamp, obtained from the log, of when the operation occurred
• Contains the value from the &TIMSTAMP journal control field

– DM_TXID - transaction identifier
• Contains the value from the &CCID journal control field

– DM_OPERATION_TYPE - contains a single character indicating the type of operation:
• "I" for an insert
• "D" for a delete
• For Single Record Format there is one type that represents the update image:
  ▪ "U" represents an update
• For Multiple Record Format there are two separate types that represent the before and after images:
  ▪ "B" for the row containing the before image of an update
  ▪ "A" for the row containing the after image of an update

– DM_USER - the user that performed the operation
• Contains the value from the &USER journal control field
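Reading the sample record from the previous slide against these meta-columns (the split of the trailing fields into before and after images is an interpretation of that sample, which shows an insert):

2015-07-15 22:09:46          → DM_TIMESTAMP (&TIMSTAMP)
6674163                      → DM_TXID (&CCID)
I                            → DM_OPERATION_TYPE (insert)
GSAKUTH                      → DM_USER (&USER)
\N,\N,\N                     → before-image columns (null, since an insert has no before image)
4381 Kelly Ave,San Jose,CA   → after-image columns (the inserted row)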


HDFS Record Format…

▪ Single record format
– In this format an update operation is sent as a single row
– The before and after images are contained in the same record

▪ Multiple record format
– An update operation is sent as two rows, the first row being the before image and the second row containing the after image
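To make the difference concrete, a hypothetical update of the sample row's city from San Jose to Austin might be written as follows; the values are illustrative only, following the column layout of the earlier sample record:

Single record format (one "U" row carrying both images):
2015-07-15 22:10:03,6674164,U,GSAKUTH,4381 Kelly Ave,San Jose,CA,4381 Kelly Ave,Austin,TX

Multiple record format (a "B" row followed by an "A" row):
2015-07-15 22:10:03,6674164,B,GSAKUTH,4381 Kelly Ave,San Jose,CA
2015-07-15 22:10:03,6674164,A,GSAKUTH,4381 Kelly Ave,Austin,TX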


HDFS Record Format…

▪ Note that the following characters will be escaped:
– Comma: escaped with "\"
– Escape character: escaped with "\\"
– Null: represented as "\N" (as illustrated in the example above)

▪ Binary data is encoded in base64

▪ A sample custom formatter (SampleDataFormatForWebHdfs.java) is provided with the product if customization of the output format is required
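For instance, a hypothetical address value containing a comma would appear escaped in the output file:

4381 Kelly Ave\, Suite 200,San Jose,CA

Here the first comma is part of the data (escaped with \), while the unescaped commas remain field delimiters.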


Appendix: Extra Detail & Troubleshooting


Background on Hadoop for WebHDFS (1/2)

▪ NameNode : the centerpiece of an HDFS file system
– Generally a NameNode manages multiple DataNodes
– An SNameNode (Secondary NameNode) can be configured
– WebHDFS is originally serviced by the NameNode
– Connections from outside the Hadoop cluster are generally not allowed, mainly due to security concerns
– Default ports : http 50070, https 50470

▪ HttpFS : REST API gateway for WebHDFS
– Supports Hadoop pseudo authentication, HTTP SPNEGO Kerberos, and other pluggable authentication mechanisms
– Shipped with IBM BigInsights 3.0 and Cloudera
– Endpoint URL is the same as that of the NameNode
– Default ports : http 14000, https 14443
– http://hadoop.apache.org/docs/r2.4.1/hadoop-hdfs-httpfs/


Background on Hadoop for WebHDFS (2/2)

▪ Knox : REST API gateway for a Hadoop cluster, including WebHDFS
– Supports Kerberos, LDAP, Active Directory, SSO, SAML, and other authentication systems
– Shipped with IBM BigInsights 4.0 and Hortonworks
– Default port : https 8443
– Default URL prefix : /gateway/default/
– https://knox.apache.org/


How to check the Knox endpoint

▪ Check by using the Ambari UI : click Knox, then move to the Configs tab

https://HOSTNAME:PORT/gateway/default/webhdfs/v1/


How to configure/check the Knox internal LDAP server

▪ By default, Knox uses an internal LDAP server
– To check user ID information, open Advanced users-ldif
– LDAP users can be added or modified from here

▪ To start the internal LDAP server:
– As the root user, move to <Knox_DIR>/bin
– Execute ./ldap.sh start
– Supported parameters : ldap.sh {start|stop|status|clean}
– Check and start the LDAP server manually before testing; it may not be managed by Ambari


Basics of the hdfs/hadoop commands

▪ hdfs is the HDFS client command and hadoop is the Hadoop client command
– These commands are available on nodes where the hadoop or hdfs client is configured
– Both commands serve multiple purposes, but here we are only interested in the filesystem commands
– You can use whichever one is available in your environment

▪ Format
– hdfs dfs -<command>, for example hdfs dfs -ls /
– hadoop fs -<command>, for example hadoop fs -ls /

▪ To check the help message for the filesystem commands
– hdfs dfs -help
– hadoop fs -help

▪ These commands are available only if a shell console is available

▪ ls, cat, chmod, and chown are frequently used during initial testing and basic troubleshooting (see the examples below)
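A few illustrative invocations of those frequently used subcommands; the paths here are examples only:

[hdfs@bigdata ~]$ hadoop fs -ls /user/hdfs                       # list a directory
[hdfs@bigdata ~]$ hadoop fs -cat /user/hdfs/test/part-0001       # print a file's contents
[hdfs@bigdata ~]$ hadoop fs -chmod 777 /user/hdfs/test           # open up permissions for testing
[hdfs@bigdata ~]$ hadoop fs -chown biadmin:hdfs /user/hdfs/test  # change owner and group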


Who is dr.who?

▪ It's the anonymous user within HDFS
– A file or directory is created with owner dr.who if the request comes to a service with authentication disabled
– If you see such files, the owner of the directory is possibly dr.who, or the directory permissions allow others to write, for example 777
– An example case is using WebHDFS apply directly against the NameNode URL without additional security configuration

▪ A troubleshooting tip for FileNotFoundException
– Condition:
• The NameNode URL is used directly
• The destination directory is not writable by the dr.who user
– Error message : FileNotFoundException
– Simple workaround : change the directory permissions to 777 or change the owner to dr.who (see the example below)
– Best practice : enable authentication, or use already-enabled components such as HttpFS or Knox
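The workaround in command form; the directory path is an example:

[hdfs@bigdata ~]$ hadoop fs -chmod 777 /user/hdfs/landing      # let anyone (including dr.who) write
[hdfs@bigdata ~]$ hadoop fs -chown dr.who /user/hdfs/landing   # or make dr.who the owner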


Utilize curl for initial validation

▪ What is curl
– curl is an open source command line tool and library for transferring data with URL syntax
– It supports various protocols, including HTTP/HTTPS, with various authentication mechanisms
– It is useful for simply validating the server functionality of REST API based services such as WebHDFS and Cloudant
– It is commonly available on UNIX and Linux platforms
– See more detail at http://curl.haxx.se/

▪ Parameters that will be used in the following examples
– -i : include the protocol header in the output; useful for identifying problems
– -u : server user and password; required for the simple authentication test
– -k : allow connections to SSL sites without certs; required if the server certificate is self-signed
– -X : specify the request method to use; required for methods other than GET, such as the HTTP PUT used below
– --negotiate : use Kerberos authentication that is initiated by kinit


Test WebHDFS with curl

▪ This sample is run on BigInsights V4.0 with the default configuration
– In this example, both the hadoop and curl commands are executed inside the Hadoop cluster
– To validate the environment for IIDR, you must run the curl command from the IIDR CDC system

▪ For more detailed usage of curl with WebHDFS, see the Hadoop documentation. The following URL is for Hadoop V2.7.1:
– https://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/WebHDFS.html

▪ Pre-task to test WebHDFS with the curl command
– Create a new directory for the test with permission 777 by using the hadoop or hdfs command
– This is just for the sample on the following pages; you can use any available directory
– A directory with permission 777 is recommended for the initial test

[hdfs@bigdata ~]$ hadoop fs -mkdir /user/hdfs/curltest

[hdfs@bigdata ~]$ hadoop fs -chmod 777 /user/hdfs/curltest

[hdfs@bigdata ~]$ hadoop fs -ls /user/hdfs

Found 2 items

drwx------ - hdfs hdfs 0 2015-10-14 12:19 /user/hdfs/.Trash

drwxrwxrwx - hdfs hdfs 0 2015-10-14 12:19 /user/hdfs/curltest


Test WebHDFS with curl directly to NameNode

[hdfs@bigdata ~]$ curl -i http://bigdata:50070/webhdfs/v1/user/hdfs/curltest?op=GETFILESTATUS

HTTP/1.1 200 OK

Cache-Control: no-cache

Expires: Wed, 14 Oct 2015 03:20:55 GMT

Date: Wed, 14 Oct 2015 03:20:55 GMT

Pragma: no-cache

Expires: Wed, 14 Oct 2015 03:20:55 GMT

Date: Wed, 14 Oct 2015 03:20:55 GMT

Pragma: no-cache

Content-Type: application/json

Transfer-Encoding: chunked

Server: Jetty(6.1.26-ibm)

{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":23630,"group":"hdfs","length":0,"modificationTime":1444792788155,"owner":

"hdfs","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}

▪ Get file status of the /user/hdfs/curltest directory

[hdfs@bigdata ~]$ curl -i -X PUT "http://bigdata:50070/webhdfs/v1/user/hdfs/curltest/direct?op=MKDIRS"

HTTP/1.1 200 OK

Cache-Control: no-cache

Expires: Wed, 14 Oct 2015 03:26:48 GMT

Date: Wed, 14 Oct 2015 03:26:48 GMT

Pragma: no-cache

Expires: Wed, 14 Oct 2015 03:26:48 GMT

Date: Wed, 14 Oct 2015 03:26:48 GMT

Pragma: no-cache

Content-Type: application/json

Transfer-Encoding: chunked

Server: Jetty(6.1.26-ibm)

{"boolean":true}

[hdfs@bigdata ~]$ hadoop fs -ls /user/hdfs/curltest

Found 1 items

drwxr-xr-x - dr.who hdfs 0 2015-10-14 12:26 /user/hdfs/curltest/direct

▪ Create the /user/hdfs/curltest/direct directory and check


Test WebHDFS with curl for Kerberos (1/2)

/home/khjang$ kinit -Vkt /db/keytab/biadmin.cdclnxy.canlab.ibm.com.keytab biadmin/[email protected]

Using default cache: /tmp/krb5cc_2636

Using principal: biadmin/[email protected]

Using keytab: /db/keytab/biadmin.cdclnxy.canlab.ibm.com.keytab

Authenticated to Kerberos v5

/home/khjang$ klist

Ticket cache: FILE:/tmp/krb5cc_2636

Default principal: biadmin/[email protected]

Valid starting Expires Service principal

10/28/15 20:23:36 10/29/15 20:23:36 krbtgt/[email protected]

▪ Initialize by using the keytab file and check the principal

/home/khjang$ curl -i --negotiate -u : http://cdclnxy.canlab.ibm.com:14000/webhdfs/v1/user/?op=GETFILESTATUS

HTTP/1.1 401

WWW-Authenticate: Negotiate

Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT

Content-Type: text/html; charset=iso-8859-1

Cache-Control: must-revalidate,no-cache,no-store

Content-Length: 1363

Server: Jetty(6.1.x)

HTTP/1.1 200 OK

Expires: Thu, 01-Jan-1970 00:00:00 GMT

Set-Cookie: hadoop.auth="u=biadmin&p=biadmin/[email protected]&t=kerberos-dt&e=1446051015925&s=q75UMQvMt9AWnJFalzVdj8/94+E=";Path=/

Content-Type: application/json

Transfer-Encoding: chunked

Server: Jetty(6.1.x)

{"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"hdfs","group":"biadmin","permission":"777","accessTime":0,"modificationT

ime":1446014900992,"blockSize":0,"replication":0}}

▪ Get file status by using Kerberos authentication that was initialized by kinit


Test WebHDFS with curl for Kerberos (2/2)

/home/khjang$ curl -i -X PUT --negotiate -u : "http://cdclnxy.canlab.ibm.com:14000/webhdfs/v1/user/khjang?op=MKDIRS"

HTTP/1.1 401

WWW-Authenticate: Negotiate

Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT

Content-Length: 0

Server: Jetty(6.1.x)

HTTP/1.1 200 OK

Expires: Thu, 01-Jan-1970 00:00:00 GMT

Set-Cookie: hadoop.auth="u=biadmin&p=biadmin/[email protected]&t=kerberos-dt&e=1446050900934&s=xNPXl2Qv1ss7Aj2Zviepfsi4rjc=";Path=/

Content-Type: application/json

Transfer-Encoding: chunked

Server: Jetty(6.1.x)

{"boolean":true}

▪ Create a directory

/home/khjang$ curl -i --negotiate -u : http://cdclnxy.canlab.ibm.com:14000/webhdfs/v1/user/khjang?op=GETFILESTATUS

HTTP/1.1 401

WWW-Authenticate: Negotiate

Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT

Content-Type: text/html; charset=iso-8859-1

Cache-Control: must-revalidate,no-cache,no-store

Content-Length: 1369

Server: Jetty(6.1.x)

HTTP/1.1 200 OK

Expires: Thu, 01-Jan-1970 00:00:00 GMT

Set-Cookie: hadoop.auth="u=biadmin&p=biadmin/[email protected]&t=kerberos-dt&e=1446114369078&s=lvgSUa2eGMtKsqlTYEEhKETVOi8=";Path=/

Content-Type: application/json

Transfer-Encoding: chunked

Server: Jetty(6.1.x)

{"FileStatus":{"pathSuffix":"","type":"DIRECTORY","length":0,"owner":"biadmin","group":"biadmin","permission":"755","accessTime":0,"modificati

onTime":1446078023426,"blockSize":0,"replication":0}}

▪ Get file status of the newly created directory


Test WebHDFS with curl to Knox gateway (1/2)

[hdfs@bigdata ~]$ curl -i -k https://bigdata:8443/gateway/default/webhdfs/v1/user/hdfs/curltest?op=GETFILESTATUS

HTTP/1.1 401 Unauthorized

WWW-Authenticate: BASIC realm="application"

Content-Length: 0

Server: Jetty(8.1.14.v20131031)

▪ Error message if you try to get the file status without user information

[hdfs@bigdata ~]$ curl -i -u guest:guest-password https://bigdata:8443/gateway/default/webhdfs/v1/user/hdfs/curltest?op=GETFILESTATUS

curl: (60) Peer certificate cannot be authenticated with known CA certificates

More details here: http://curl.haxx.se/docs/sslcerts.html

curl performs SSL certificate verification by default, using a "bundle"

of Certificate Authority (CA) public keys (CA certs). If the default

bundle file isn't adequate, you can specify an alternate file

using the --cacert option.

If this HTTPS server uses a certificate signed by a CA represented in

the bundle, the certificate verification probably failed due to a

problem with the certificate (it might be expired, or the name might

not match the domain name in the URL).

If you'd like to turn off curl's verification of the certificate, use

the -k (or --insecure) option.

▪ Error message if the Knox server's certificate is self-signed


Test WebHDFS with curl to Knox gateway (2/2)

[hdfs@bigdata ~]$ curl -i -k -X PUT -u guest:guest-password "https://bigdata:8443/gateway/default/webhdfs/v1/user/hdfs/curltest/knox?op=MKDIRS"

HTTP/1.1 200 OK

Set-Cookie: JSESSIONID=1qe7wbdsu6wu65lncgler1tts;Path=/gateway/default;Secure;HttpOnly

Expires: Thu, 01 Jan 1970 00:00:00 GMT

Cache-Control: no-cache

Expires: Wed, 14 Oct 2015 03:28:27 GMT

Date: Wed, 14 Oct 2015 03:28:27 GMT

Pragma: no-cache

Expires: Wed, 14 Oct 2015 03:28:27 GMT

Date: Wed, 14 Oct 2015 03:28:27 GMT

Pragma: no-cache

Server: Jetty(6.1.26-ibm)

Content-Type: application/json

Content-Length: 16

{"boolean":true}

[hdfs@bigdata ~]$ hadoop fs -ls /user/hdfs/curltest

Found 2 items

drwxr-xr-x - dr.who hdfs 0 2015-10-14 12:26 /user/hdfs/curltest/direct

drwxr-xr-x - guest hdfs 0 2015-10-14 12:28 /user/hdfs/curltest/knox

▪ Create the /user/hdfs/curltest/knox directory and check

[hdfs@bigdata ~]$ curl -i -k -u guest:guest-password https://bigdata:8443/gateway/default/webhdfs/v1/user/hdfs/curltest?op=GETFILESTATUS

HTTP/1.1 200 OK

Set-Cookie: JSESSIONID=1npjuss0c5f901to1tbw81a9m4;Path=/gateway/default;Secure;HttpOnly

Expires: Thu, 01 Jan 1970 00:00:00 GMT

Cache-Control: no-cache

Expires: Wed, 14 Oct 2015 03:23:02 GMT

Date: Wed, 14 Oct 2015 03:23:02 GMT

Pragma: no-cache

Expires: Wed, 14 Oct 2015 03:23:02 GMT

Date: Wed, 14 Oct 2015 03:23:02 GMT

Pragma: no-cache

Server: Jetty(6.1.26-ibm)

Content-Type: application/json

Content-Length: 238

{"FileStatus":{"accessTime":0,"blockSize":0,"childrenNum":0,"fileId":23630,"group":"hdfs","length":0,"modificationTime":1444792788155,"owner":

"hdfs","pathSuffix":"","permission":"777","replication":0,"storagePolicy":0,"type":"DIRECTORY"}}

▪ Get file status with user information (try without the -k option first)


HTTPS requires hostname

▪ Depending on the server certificate, WebHDFS with HTTPS may require a fully qualified hostname
– If an IP address is used instead of the hostname, an SSLException will be thrown

▪ You must use the hostname that was used in the server certificate
▪ You may need to modify the /etc/hosts file if this hostname is not available via DNS (see the example below)
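A hypothetical /etc/hosts entry on the IIDR CDC server that maps the certificate's fully qualified hostname to the cluster's address; the IP and names are placeholders:

192.0.2.10   bigdata.example.com   bigdata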


Kerberos authentication requires time sync

▪ Kerberos authentication throws an error if there is a time gap between the IIDR CDC server and the Hadoop cluster

▪ When replication throws the following error, check the time gap between the Hadoop cluster and the IIDR CDC server and fix it first
– Kerberos throws this error if the time gap is more than 5 minutes

▪ If the time gap is not the cause, validate the keytab and principal again

Login error: com.ibm.security.krb5.KrbException, status code: 37
message: PREAUTH_FAILED
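A quick check is to compare the UTC clocks on both machines; keeping both synchronized with NTP avoids hitting the 5-minute skew limit. The hostnames in the prompts are examples:

[cdc@cdcserver ~]$ date -u        # run on the IIDR CDC server
Wed Oct 14 03:20:55 UTC 2015
[hdfs@bigdata ~]$ date -u         # run on a Hadoop node and compare
Wed Oct 14 03:20:57 UTC 2015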


Encryption key has to be 128-bit or lower

▪ IIDR CDC ships with 128-bit key encryption support, as that is the JVM default

▪ If the customer uses 256-bit encryption, IIDR CDC will throw a java.security.InvalidKeyException

▪ This problem is described in the following BigInsights Knowledge Center page and can be solved with the instructions there:
– https://www-01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.install.doc/doc/bi_install_download_jce.html
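One generic way to see the JVM's current limit is to query the maximum allowed AES key length; this is a standard JDK check (jrunscript ships with the JDK), not an IIDR-specific tool, and should be run with the same JVM that CDC uses:

$ jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'
128

A value of 128 means the restricted JCE policy is in effect; after installing the unlimited-strength policy files per the page above, it reports a much larger value (typically 2147483647).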


Additional Resources

▪ IBM Developer Works CDC community:
– https://www.ibm.com/developerworks/mydeveloperworks/groups/service/html/communityview?communityUuid=a9b542e4-7c66-4cf3-8f7b-8a37a4fdef0c

▪ IBM CDC Knowledge Center:
– http://www-01.ibm.com/support/knowledgecenter/SSTRGZ_11.3.3/

▪ CDC Redbook:
– http://www.redbooks.ibm.com/Redbooks.nsf/RedbookAbstracts/sg247941.html?Open

▪ IBM CDC Support:
– http://www-947.ibm.com/support/entry/portal/product/information_management/infosphere_change_data_capture?productContext=-873715215

▪ Passport Advantage:
– https://www-112.ibm.com/software/howtobuy/softwareandservices/passportadvantage



Legal Disclaimer

• © IBM Corporation 2015. All Rights Reserved.

• The information contained in this publication is provided for informational purposes only. While efforts were made to verify the completeness and accuracy of the information contained in this publication, it is provided AS IS without warranty of any kind, express or implied. In addition, this information is based on IBM's current product plans and strategy, which are subject to change by IBM without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this publication or any other materials. Nothing contained in this publication is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software.

• References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates. Product release dates and/or capabilities referenced in this presentation may change at any time at IBM's sole discretion based on market opportunities or other factors, and are not intended to be a commitment to future product or feature availability in any way. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results.
