Some Important Considerations while working with Essbase
(Developer Reference Guide)

Amit Sharma [email protected] Contact for Hyperion Training and consultancies


Essbase Calculation Performance Tuning

1. After parallel calculation is enabled, Essbase by default uses the last sparse dimension in an outline to identify tasks that can be performed concurrently. But the distribution of data may cause one or more tasks to be empty; that is, there are no blocks to be calculated in the part of the database identified by a task. This situation can lead to uneven load balancing, reducing parallel calculation effectiveness.

2. To resolve this situation, you can enable Essbase to use additional sparse dimensions in the identification of tasks for parallel calculation. For example, if you have a FIX statement on a member of the last sparse dimension, you can include the next-to-last sparse dimension from the outline as well. Because each unique member combination of these two dimensions is identified as a potential task, more and smaller tasks are created, increasing the opportunities for parallel processing and improving load balancing.

3. Add or modify CALCTASKDIMS in the essbase.cfg file on the server, or use the calculation script command SET CALCTASKDIMS at the top of the script.

Sample Code: SET CALCTASKDIMS 2

This enables the last two sparse dimensions to be included in task identification, which may significantly improve calculation performance.
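To see why adding a task dimension helps, consider a hypothetical outline whose last two sparse dimensions have 50 and 20 members (both counts are assumptions for illustration, not from a real outline):

```python
# Hypothetical member counts for the last two sparse dimensions.
last_sparse = 50       # members in the last sparse dimension
next_to_last = 20      # members in the next-to-last sparse dimension

# SET CALCTASKDIMS 1: tasks are identified by the last sparse dimension only.
tasks_one_dim = last_sparse

# SET CALCTASKDIMS 2: each unique member combination of the last two
# sparse dimensions is a potential task, so tasks are smaller and more numerous.
tasks_two_dims = last_sparse * next_to_last

print(tasks_one_dim, tasks_two_dims)   # 50 1000
```

More, smaller tasks give the scheduler a better chance of keeping all calculation threads busy even when many tasks turn out to be empty.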

Parallel Calculation and Tuning

1. We can enable parallel calculation at the system level in the essbase.cfg file, or at the calculation script level. Sample code:

SET CALCPARALLEL 4;
SET CALCTASKDIMS 2;

2. Parallel calculation only works with Uncommitted Access.

3. There is a risk that parallel calculation may freeze the server.

4. Use FIX commands so that only the required data blocks are calculated; avoid the cross-dimensional operator in most cases.

Intelligent Calculation and Performance Tuning

Switch on Intelligent Calculation. If Intelligent Calculation is switched off, the calculation result will always be correct, because every data block, whether marked dirty or clean, is recalculated. But the price is that too many data blocks are included in the calculation.

We can switch Intelligent Calculation on and off inside a calculation script in different cases, so that only the necessary data blocks are recalculated.


Use commands: SET CLEARUPDATESTATUS ONLY/AFTER/OFF, SET UPDATECALC ON/OFF

Sample:

SET UPDATECALC OFF;
SET CLEARUPDATESTATUS AFTER;
CALC TWOPASS;

Time Dim - Calculation Performance

By default, the time dimension is set to be dense. But if you load data incrementally with a MaxL script, for example at the end of every month, you can set the Time dimension as sparse. Then, if Intelligent Calculation is enabled, only the data blocks marked as dirty are recalculated. This can significantly improve calculation performance after each load.

Incremental Data Loading

Many companies load data incrementally. For example, a company may load data each month for that month. To optimize calculation performance when you load data incrementally, make the dimension tagged as time a sparse dimension. If the time dimension is sparse, the database contains a data block for each time period. When you load data by time period, Essbase accesses fewer data blocks because fewer blocks contain the relevant time period. Thus, if you have Intelligent Calculation enabled, only the data blocks marked as dirty are recalculated. For example, if you load data for March, only the data blocks for March and the dependent parents of March are updated.

However, making the time dimension sparse when it is naturally dense may significantly increase the size of the index, creating possibly slower performance due to more physical I/O activity to accommodate the large index.

If the dimension tagged as time is dense, you still receive some benefit from Intelligent Calculation when you do a partial data load for a sparse dimension. For example, if Product is sparse and you load data for one product, Essbase recalculates only the blocks affected by the partial load, although time is dense and Intelligent Calculation is enabled.

Note: This method applies only to block storage (BSO) databases; aggregate storage (ASO) databases have no data blocks.

Data Block - Calculation Performance

1. Use FIX instead of cross-dimensional operator

Compare the next two statements:

FIX(Jan)
Sales = Sales * 1.05;
ENDFIX

Sales (Sales -> Jan = Sales -> Jan * 1.05;);


The second statement is not efficient: it loops through the entire time dimension even though only Jan is calculated. The first statement calculates Jan only for the Sales block, which is more efficient.

2. The data block size setting. Block size should be between 10 KB and 100 KB. If the data block size is too big (>100 KB), Intelligent Calculation does not work well. If the data block size is too small (around or below 10 KB), the index may become too large, and this affects calculation speed.
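The expanded (uncompressed) block size can be estimated as the number of stored members across the dense dimensions multiplied by 8 bytes per cell. The dimension sizes below are assumptions for illustration:

```python
# Expanded BSO block size = product of stored members of the dense
# dimensions x 8 bytes per cell (cells are stored as 8-byte doubles).
# Hypothetical dense dimensions: 12 stored time members, 400 stored accounts.
stored_time = 12
stored_accounts = 400

block_bytes = stored_time * stored_accounts * 8
block_kb = block_bytes / 1024

print(round(block_kb, 1))  # 37.5, inside the recommended 10 KB - 100 KB range
```

If the estimate falls outside the 10 KB to 100 KB range, reconsider which dimensions are tagged dense.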

Essbase Committed Setting

Under uncommitted access, Essbase locks blocks for write access until Essbase finishes updating the block. Under committed access, Essbase holds locks until a transaction completes. With uncommitted access, blocks are released more frequently than with committed access. Essbase performance is better if we set uncommitted access. Besides, parallel calculation works only with uncommitted access.

Database performance:

Uncommitted access always yields better database performance than committed access. When using uncommitted access, Essbase does not create locks that are held for the duration of a transaction but commits data based on short-term write locks.

Data consistency:

Committed access provides a higher level of data consistency than uncommitted access. Retrievals from a database are more consistent. Also, only one transaction at a time can update data blocks when the isolation level is set to committed access. This factor is important in databases where multiple transactions attempt to update the database simultaneously.

Data concurrency:

Uncommitted access provides better data concurrency than committed access. Blocks are released more frequently than during committed access. With committed access, deadlocks can occur.

Database rollbacks:

If a server crash or other server interruption occurs during active transactions, the Essbase kernel rolls back the transactions when the server is restarted. With committed access, rollbacks return the database to its state before transactions began. With uncommitted access, rollbacks may result in some data being committed and some data not being committed.

Essbase Restructure

There are three restructure types: dense restructure, sparse restructure, and outline-only restructure. If a member of a dense dimension is changed, the restructure will be a dense restructure. A dense restructure takes a long time, because data blocks must be rebuilt. A sparse restructure happens only if a sparse member is changed; it restructures only the index, so it should not take much time. An outline-only restructure changes neither data blocks nor the index, so it takes almost no time.


• Dense restructure: If a member of a dense dimension is moved, deleted, or added, Essbase restructures the blocks in the data files and creates new data files. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are not removed. Essbase marks all restructured blocks as dirty, so after a dense restructure you must recalculate the database. Dense restructuring, the most time-consuming of the restructures, can take a long time to complete for large databases.

• Sparse restructure: If a member of a sparse dimension is moved, deleted, or added, Essbase restructures the index and creates new index files. Restructuring the index is relatively fast; the time required depends on the index size.

• Outline-only restructure: If a change affects only the database outline, Essbase does not restructure the index or data files. Member name changes, creation of aliases, and dynamic calculation formula changes are examples of changes that affect only the database outline.

Validate Essbase Structure

Using VALIDATE to Check Integrity. The VALIDATE command performs many structural and data integrity checks:

• Verifies the structural integrity of free space information in the index.

• Compares the data block key in the index page with the data block key in the corresponding data block.

• The Essbase index contains an entry for every data block. For every read operation, VALIDATE automatically compares the index key in the index page with the index key in the corresponding data block and checks other header information in the block. If it encounters a mismatch, VALIDATE displays an error message and continues processing until it checks the entire database.

• Restructures data blocks whose restructure was deferred with incremental restructuring.

• Checks every block in the database to make sure each value is a valid floating point number.

• Verifies the structural integrity of the LROs catalog.

Note:

When you issue the VALIDATE command, we recommend placing the database in read-only mode.

As Essbase encounters mismatches, it records error messages in the VALIDATE error log. You can specify a file name for error logging; Essbase prompts you for this information if you do not provide it. The VALIDATE utility runs until it has checked the entire database.

You can use the VALIDATE command in ESSCMD to perform these structural integrity checks.

During index free space validation, the VALIDATE command verifies the structural integrity of free space information in the index. If integrity errors exist, Essbase records them in the VALIDATE log. The file that you specified on the VALIDATE command holds the error log.


If VALIDATE detects integrity errors regarding the index free space information, the database must be rebuilt. You can rebuild in the following ways:

• Restore the database from a recent system backup.

• Restore the data by exporting data from the database; creating an empty database; and loading the exported data into the new database.

Principles for Essbase Outline Design

Point 1:

Don't use too many levels; it is fine to have many members in one level. But it is not wise to have deeply nested levels where each level has few members. This is a very important principle when designing a cube.

The outline sequence: time, accounts, the other dense dimensions, the sparse dimension with the fewest members, the other sparse dimensions, then the attribute dimensions.

Point 2:

Calculation performance may be affected if a database outline has multiple flat dimensions. A flat dimension has very few parents, and each parent has many thousands of children; in other words, flat dimensions have many members and few levels. You can improve performance for outlines with multiple flat dimensions by adding intermediate levels to the database outline.

The above two points come from different sources; they look somewhat different and even opposite. My understanding is: we should have fewer levels in any case, but the large member counts should occur at the parent level. That means we have thousands of parents, but each parent has few children.


Simulated Calculation

You can simulate a calculation using SET MSG ONLY in a calculation script. A simulated calculation produces results that help you analyze the performance of a real calculation that is based on the same data and outline. By running a simulated calculation with a command such as SET NOTICE HIGH, you can mark the relative amount of time each sparse dimension takes to complete. Then, by performing a real calculation on one or more dimensions, you can estimate how long the full calculation will take, because the time a simulated calculation takes to run is proportional to the time that the actual calculation takes to run.

For example, if the calculation starts at 9:50:00 AM, the first notice is time-stamped at 09:50:10 AM, and the second at 09:50:20 AM, you know that each part of the calculation took 10 seconds. If you then run a real calculation on only the first portion and note that it took 30 seconds, you know that the other portion will also take about 30 seconds. With two messages in total, the real calculation will take approximately 60 seconds (20 / 10 * 30 = 60 seconds). Use the following topics to learn how to perform a simulated calculation and how to use it to estimate calculation time.
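The projection above is just a ratio: estimated real total = (simulated total / simulated portion) * measured real portion. A sketch with the numbers from the example:

```python
# Times from the example above (seconds).
simulated_interval = 10    # seconds between simulated-calc notices
num_messages = 2           # notices observed in the simulated run
real_first_portion = 30    # measured real time for the first portion

simulated_total = simulated_interval * num_messages   # 20 s of simulated run
estimated_real_total = simulated_total / simulated_interval * real_first_portion

print(estimated_real_total)   # 60.0 seconds
```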

Performing a Simulated Calculation

Before you can estimate calculation time, you must perform a simulated calculation on a data model that is based on your actual database.

To perform a simulated calculation:

1. Create a data model that uses all dimensions and all levels of detail about which you want information.

2. Load all data. This procedure calculates only data loaded in the database.

3. Create a calculation script with these entries:

SET MSG ONLY;
SET NOTICE HIGH;
CALC ALL;

If you are using dynamic calculations on dense dimensions, substitute the CALC ALL command with the specific dimensions that you need to calculate; for example, CALC DIM EAST.

Note: If you try to validate the script, Essbase reports an error. Disregard the error.

4. Run the script.

5. Find the first sparse calculation message in the application log and note the time in the message.

6. Note the time for each subsequent message.

7. Calculate the dense dimensions of the model that are not being dynamically calculated:

CALC DIM (DENSE_DIM1, DENSE_DIM2, …);


8. Calculate the sparse dimensions of the model:

CALC DIM (SPARSEDIM1, SPARSEDIM2, …);

9. Project the intervals at which notices will occur, and then verify against sparse calculation results. You can then estimate calculation time.

Select Essbase Compression method

1. If your database is not very large, you do not need to change any compression setting. By default, Essbase uses bitmap compression, which is the best choice in most cases.

2. If your database is about 90% dense, you may use zlib as the compression method.

3. If your database is sparse and contains long runs of repeated non-missing values, you should use RLE as the compression method.

By the way, "Index Value Pair" compression is selected automatically by Essbase. Index Value Pair addresses compression on databases with larger block sizes, where the blocks are highly sparse. This compression algorithm is not user-selectable; the database uses it automatically whenever appropriate. The user still chooses among the compression types None, bitmap, RLE, and zlib through Administration Services.
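The rules of thumb above can be summarized as a tiny decision helper (the 90% density threshold is the one quoted above; everything else defaults to bitmap):

```python
def suggest_compression(density, repeated_nonmissing_runs=False):
    """Rule-of-thumb compression choice mirroring the notes above.

    density: fraction of non-missing cells in a typical block (0.0 - 1.0).
    repeated_nonmissing_runs: True if blocks hold long runs of repeated values.
    """
    if density >= 0.9:
        return "zlib"      # very dense data compresses well with zlib
    if repeated_nonmissing_runs:
        return "RLE"       # run-length encoding suits repeated values
    return "bitmap"        # the default, best in most cases

print(suggest_compression(0.95))        # zlib
print(suggest_compression(0.1, True))   # RLE
print(suggest_compression(0.3))         # bitmap
```

This is only a sketch of the guidance, not an Essbase API; the actual setting is changed through Administration Services or MaxL.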

Recovery from Spreadsheet Log

The Essbase Spreadsheet Add-in can update Essbase at the data-cell level. In case of an Essbase disaster, Essbase can be restored from the last backup. Suppose the last backup was last night, and the disaster happens at lunchtime today. The data from this morning is not restored by default. How do you restore all of the detail data, including data entered one second before the disaster? Here is the method:

1. Set SSAUDIT or SSAUDITR in the essbase.cfg file. Sample code:

SSAUDITR Sample Basic C:\logfoldername

The above statement sets the spreadsheet log on the Sample application, Basic database. If logfoldername is not specified, the default log folder is used.

SSAUDIT Sample

The above statement sets the spreadsheet log on the Sample application for all of its databases.

SSAudit xxxxx xxxxx c:\sslog

The above statement sets the spreadsheet log on all applications and all databases.

2. After adding the above code to essbase.cfg, restart Essbase Server. You will see the following in C:\Hyperion\logs\essbase\app\Sample\Sample.log:

[Sun Nov 15 22:22:50 2009]
Local/Sample///Info(1002088)
Starting Spreadsheet Log [C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.alg] For Database [Basic]

This means the setting is successful.


3. You can now make updates in Sample.Basic using the Excel Spreadsheet Add-in; everything is logged.

4. Now, suppose your Essbase has just been restored from a backup. You can then recover transactions from the update log. To do so, use the Essbase command-line facility, ESSCMD, from the server console. The following ESSCMD command sequence loads the update log:

LOGIN hostnode username password
SELECT appname dbname             // Example: SELECT Sample Basic
LOADDATA 3 filepath:appname.ATX   // Example: LOADDATA 3 C:\Hyperion\products\Essbase\EssbaseServer\APP\Sample\Basic\Basic.atx
EXIT

5. The difference between SSAUDIT and SSAUDITR: SSAUDIT appends log data to existing logs after archiving; SSAUDITR clears the logs at the end of the archiving process.

6. Please note, you should manually back up and clear the Basic.atx and Basic.alg log files so that they do not become too large.
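The housekeeping in step 6 can be scripted. A minimal sketch, assuming the usual .atx/.alg file locations (the function name and paths are illustrative, not part of Essbase):

```python
import shutil
import time
from pathlib import Path

def archive_spreadsheet_logs(log_dir, backup_dir, db="Basic"):
    """Copy the .atx/.alg spreadsheet logs aside with a timestamp,
    then truncate the originals so they do not grow without bound."""
    log_dir, backup_dir = Path(log_dir), Path(backup_dir)
    backup_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d_%H%M%S")
    archived = []
    for ext in (".atx", ".alg"):
        src = log_dir / (db + ext)
        if src.exists():
            dst = backup_dir / f"{stamp}_{src.name}"
            shutil.copy2(src, dst)      # archive a timestamped copy
            src.write_bytes(b"")        # truncate the live log
            archived.append(dst.name)
    return archived

# Example (paths are assumptions; point log_dir at your database folder):
# archive_spreadsheet_logs(
#     "C:/Hyperion/products/Essbase/EssbaseServer/APP/Sample/Basic",
#     "C:/sslog_archive")
```

Run it only when no spreadsheet updates are in flight, so no log record is lost between the copy and the truncate.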

Hot backup - LDAP & Shared Service

Steps:

1. Back up any related components, including Shared Services relational database and the OpenLDAP database. Note: The Shared Services relational database and the OpenLDAP database must be backed up at the same time. Ensure that the administrator does not register a product application or create an application group at backup time.

2. Run this command to create a hot backup of OpenLDAP:

Windows:
c:/Hyperion/products/Foundation/server/scripts/backup.bat HSS_backup

UNIX:
/home/username/Hyperion/products/Foundation/server/scripts/backup.sh /home/username/backups/HSS_backup

To recover Shared Services from a hot backup:

1. Stop OpenLDAP and Shared Services.

2. Recover the Shared Services relational database with RDBMS tools, using the backup with the same date as the OpenLDAP backup.

3. If you use OpenLDAP as Native Directory, recover the OpenLDAP database by running one of the following. Examples:

Windows noncatastrophic recovery:
C:/Hyperion/products/Foundation/server/scripts/recover.bat c:/HSS_backup

UNIX catastrophic recovery:
/home/username/Hyperion/products/Foundation/server/scripts/recover.sh /home/username/HSS_backup catRecovery


Note: Physical backup and logical backup. A physical backup can be hot or cold:

*. Hot backup—Users can make changes to the database during a hot backup. Log files of changes made during the backup are saved, and the logged changes are applied to synchronize the database and the backup copy. A hot backup is used when a full backup is needed and the service level does not allow system downtime for a cold backup.

*. Cold backup—Users cannot make changes to the database during a cold backup, so the database and the backup copy are always synchronized. Cold backup is used only when the service level allows for the required system downtime. Note: A cold full physical backup is recommended.

* Full—Creates a copy of data that can include parts of a database such as the control file, transaction files (redo logs), archive files, and data files. This backup type protects data from application error and safeguards against unexpected loss by providing a way to restore original data. Perform this backup weekly or biweekly, depending on how often your data changes. Making full backups cold, so that users cannot make changes during the backups, is recommended.

Note: The database must be in archive log mode for a full physical backup.

* Incremental—Captures only changes made after the last full physical backup. The files differ for databases, but the principle is that only transaction log files created since the last backup are archived. Incremental backup can be done hot, while the database is in use, but it slows database performance. In addition to backups, consider the use of clustering or log shipping to secure database content.

Logical Backup

A logical backup copies data, but not physical files, from one location to another. A logical backup is used for moving or archiving a database, tables, or schemas and for verifying the structures in a database.

Cold backup - LDAP & Shared Service

1. Stop OpenLDAP and Shared Services.

2. Back up the Shared Services directory from the file system. Shared Services files are in HYPERION_HOME/deployments and HYPERION_HOME/products/Foundation.

3. Optional:

* Windows—Back up these Windows registry entries using REGEDIT and export:
HKLM/SOFTWARE/OPENLDAP
HKLM/SOFTWARE/Hyperion Solutions

* UNIX—Back up these items: the .hyperion.* files in the home directory of the user name used for configuring the product, and the user profile (.profile or equivalent) file for that user name.

4. Shut down the Shared Services relational database and perform a cold backup using RDBMS tools.


To recover Shared Services from a cold backup:

1. Restore the OS.

2. Using Oracle Hyperion Enterprise Performance Management System Installer, Fusion Edition, install Shared Services binaries. Note: Do not configure the installation. The OpenLDAP service is created during installation.

3. Restore the Shared Services cold backup directory from the file system.

4. Restore the cold backup of the Shared Services relational database using database tools.

5. Optional: Restore the Windows registry entries from the cold backup.

6. (Windows) If Shared Services Web application service must be recreated, run HYPERION_HOME/deployments/AppServer/bin/installServiceSharedServices9.bat.

7. Start the OpenLDAP service and Oracle's Hyperion Shared Services.

Essbase backup and Recovery

Note: Essbase Outline change will NOT be logged!

Note: In the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive into smaller files (<2 GB each).

MaxL Sample:

alter database Sample.Basic force archive to file '/Hyperion/samplebasic.arc';
query archive_file 'C:/Hyperion/samplebasic.arc' get overview;
alter database appname.dbname force restore from file BACKUP-FILE;

To enable transaction log backup, set in the essbase.cfg file:

TRANSACTIONLOGDATALOADARCHIVE SERVER_CLIENT

query database Sample.Basic list transactions;
query database Sample.Basic list transactions after '11_20_2007:12:20:00' write to file '/Hyperion/products/Essbase/EssbaseServer/app/Sample/Basic/listoutput.csv';

alter database Sample.Basic replay transactions using sequence_id_range 2 to 2;

Note: when replaying transactions, please note the log type.

You should clear the log file and the replay file from time to time.

/Hyperion/trlog/Sample/Basic
ARBORPATH/app/appname/dbname/Replay

-------------------------------------------------------------------------------


BSO Essbase: automated Essbase backup and restore is preferred.
ASO Essbase: manual backup and restore is the only choice.

With full backup plus transaction log backup, after restoring from a full backup you can replay the logged transactions that took place after the backup operation. However, outline changes are not logged and therefore cannot be replayed. So every time there is an outline change, you must make a backup to avoid having the outline out of sync.

In backing up a database, Essbase performs the following tasks:

1. Places the database in read-only mode, protecting the database from updates during the archive process while allowing requests to query the database.

2. Writes a copy of the database files to an archive file that resides on the Essbase Server computer. The files include:

essxxxxx.pag -- Essbase data files
essxxxxx.ind -- Essbase index files
dbname.esm -- Essbase kernel file
dbname.tct -- Transaction control table
dbname.ind -- Free fragment file
dbname.otl -- Outline file
dbname.otl.keep -- Temporary backup of dbname.otl
essx.lro -- Linked reporting objects
dbname.otn -- Temporary outline file
dbname.db -- Database file containing database settings
dbname.ddb -- Partition definition file
dbname.ocl -- Outline change log created during incremental dimension build
essxxxx.chg -- Outline synchronization change log
dbname.alg -- Spreadsheet update log that stores spreadsheet update transactions
dbname.atx -- Spreadsheet update log that contains historical transaction information
essbase.sec* -- Essbase security file
essbase.bak -- Backup of the Essbase security file
essbase.cfg -- Essbase Server configuration file
dbname.app -- Application file containing application settings
.otl, .csc, .rul, .rep, .eqd, .sel -- Database artifact files
ESSCMD or MaxL scripts

3. Returns the database to read-write mode.

-----------------------------------------------------------------------


Special Notices for Essbase

Note: there are 2 ways for Essbase backup and recovery

Method 1:

backup = full Essbase backup + transaction log backup
recovery = full Essbase recovery + replay of the transaction log

Method 2:

backup = back up all Essbase system files
recovery = replace Essbase system files with the backup files

You can back up files while Essbase is in read-only mode.

Essbase should be stopped while recovery is in progress, in method 1 or 2.

---------------------------------------------------------------------------

Note: Essbase Outline change will NOT be logged!

Note: In the essbase.cfg file, set the SPLITARCHIVEFILE configuration to TRUE. This splits the archive into smaller files (<2 GB each).

Note: Partition commands (for example, synchronization commands) are not logged and, therefore, cannot be replayed. When recovering a database, you must replay logged transactions and manually make the same partition changes in the correct chronological order. When using partitioned databases or using the @XREF function in calculation scripts, you must selectively replay logged transactions in the correct chronological order between the source and target databases.

Note: for ASO Essbase, the only way is a system file backup:

1. Stop the application.

2. Use the file system to copy the contents of the application directory (ARBORPATH/app/appname), excluding the temp directory.

-----------------------------------------------------------------------------

Set Essbase to read-only mode when backing up:

alter database begin archive

Set Essbase back to read-write mode after backing up:

alter database end archive

----------------------------------------------------------------------------

Using the export command to export data to a text file is also a simple option for keeping a text-format data backup; level 0 data is usually sufficient.


Essbase Integration Backup

To back up Integration Services:

1. Perform a complete backup of the Oracle Essbase Integration Services catalog repository.

2. Optional: Export all models and metaoutlines into XML files.

3. Create and save a list of all source Open Database Connectivity (ODBC) Data Source Names (DSNs) that were set up.

4. Keep a current copy of installed software, along with all property files such as ais.cfg.

To recover Integration Services:

• If Integration Services installation files are lost because of hardware failure, you must reinstall Integration Services.

• If the database containing the catalog is corrupted, you must restore it and then create an ODBC DSN to the catalog and use it to retrieve models and metaoutlines.

• If the backup catalog database is also corrupted, then, from Oracle Essbase Integration Services Console, create an empty catalog and import each model and metaoutline using XML files.

Performance Management Architecture

Dimension Library—A centralized location to manage dimensions and dimension properties.

Application Library—A summary of applications that have been created and/or deployed to Financial Management, Planning, Profitability and Cost Management, Essbase Aggregate Storage Option (ASO), or Essbase Block Storage Option (BSO).

Calculation Manager— Enables you to create, validate, and deploy business rules and business rule sets.

Data Synchronization—Enables data synchronization between or within Hyperion applications.

Application Upgrade—Enables upgrades from previous Financial Management and Planning releases.

Library Job Console—Provides a summary, including status, of Dimension library and application activities, including imports, deployments, and data synchronizations.

To use Performance Management Architect for application administration, you can move applications currently managed with Financial Management or Planning Classic administration. After you upgrade a Classic application to Performance Management Architect, you cannot move it back.


Calculation Script Tuning

Case that runs slowly: We have a big database; the cube is in a 9.3.1 environment. It is a 10-dimensional cube with 2 dense dimensions. We have a script that runs for more than 2 days, which contains these commands:

SET CALCPARALLEL 4;
SET CALCTASKDIMS 3;
SET UPDATECALC OFF;
SET AGGMISSG ON;
SET FRMLBOTTOMUP ON;

FIX("21000", "22600",
    @REMOVE(@DESCENDANTS("All Products"), @DESCENDANTS("Reclaimed Rubber")),
    @REMOVE(@DESCENDANTS("Total Customer"), @DESCENDANTS("MICHELIN NORTH AMERICA, INC.")))
SET CREATENONMISSINGBLK ON;
IDR = NA_Partner->NA_Misc->NA_Product * ("Sales Volume"->IDR->Marketing / ("Sales Volume"->IDR->Marketing->"All Products"->"Total Customer" - "Sales Volume"->IDR->Marketing->"All Products"->"MICHELIN NORTH AMERICA, INC." - "Sales Volume"->IDR->"21000"->"Reclaimed Rubber"->"C-0005-01"));
SET CREATENONMISSINGBLK OFF;
ENDFIX

followed by more, similar aggregation commands.

Methods to Increase Calculation Speed:

1. When intelligent calculation is turned off and CREATENONMISSINGBLK is ON, all blocks within the scope of the calculation script are calculated, regardless of whether they are marked clean or dirty.

2. Cross-dimensional operators can be reduced: include more members inside the FIX statement instead of referencing them with "->".

Bulk User creation in Essbase

How can we create bulk users in Essbase? For example, if 500 users need to be created at one time, what is the technique?

Suppose you have 500 usernames in c:\user.csv. You can generate a batch of MaxL commands using the following JavaScript code. Copy the code below, save it as gm.js, and execute gm.js in Windows.

var fso = new ActiveXObject("Scripting.FileSystemObject");
var rs = fso.OpenTextFile("C:\\user.csv");
var fso1 = new ActiveXObject("Scripting.FileSystemObject");
var ws = fso1.CreateTextFile("C:\\MaxL.txt");

for (var i = 1; i <= 500; i++) {
    var username = rs.ReadLine();
    var x = "create user " + username + " identified by 'password' member of group 'testgroup';";
    ws.WriteLine(x);
}

rs.Close();
ws.Close();
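If Windows Script Host is not available, the same MaxL batch file can be generated with a short Python sketch. The file names, the password, and the group name 'testgroup' are placeholders to adapt to your environment:

```python
# Sketch: generate MaxL "create user" statements from a list of usernames.
# The password and group name are placeholders - adapt before use.
def build_maxl(usernames, password="password", group="testgroup"):
    """Return one MaxL 'create user' statement per non-empty username."""
    return [
        "create user {0} identified by '{1}' member of group '{2}';".format(
            name.strip(), password, group
        )
        for name in usernames
        if name.strip()
    ]

# Usage (commented out - assumes a one-username-per-line user.csv):
# with open("user.csv") as src, open("MaxL.txt", "w") as out:
#     out.write("\n".join(build_maxl(src)) + "\n")
```

Run the resulting MaxL.txt through essmsh as with the JavaScript version.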

Essbase ASO Performance

• Use stored hierarchies (rather than dynamic hierarchies) as much as possible.

• Use alternate hierarchies (shared members) only when necessary.

• Minimize the number of hierarchies. (For example, each additional stored hierarchy slows down view selection and potentially increases the size of the aggregated data).

• If a hierarchy is a small subset of the first hierarchy, consider making the small hierarchy a dynamic hierarchy. Considerations include how often the hierarchy data is queried and the query performance impact when it is dynamically queried at the time of retrieval.

• The performance of attributes is the same as for members on a stored hierarchy.

• The higher the association level of an attribute to the base member, the faster the retrieval query.

• Consider converting BSO to ASO where appropriate: ASO has much faster aggregation performance than BSO, although ASO cannot include calculation scripts. You must use the migration wizard for this conversion; in addition, once a database is converted to ASO, it cannot be converted back to BSO.


Select Compression Dimension

The choice of compression dimension can significantly affect ASO performance. A sample MaxL command to query the compression estimation statistics:

query database 'ASOsamp'.'Sample' list aggregate_storage compression_info;

Principles:

• The candidate for compression dimension should not have too many "Stored level 0 members".

• Average bundle fill: the average number of values stored per group, ranging from 1 to 16 (16 is best, 1 is worst). The candidate dimension should have the greatest value compared to the other dimensions.

• Expected level 0 size: This field indicates the estimated size of the compressed database. A smaller expected level 0 size indicates that choosing this dimension is expected to enable better compression.
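The principles above can be captured in a small scoring routine. This is an illustrative sketch only: the tuple fields mirror the statistics discussed above (stored level-0 members, average bundle fill, expected level-0 size), but the actual MaxL output format differs, and the sample numbers are hypothetical.

```python
# Illustrative ranking of compression-dimension candidates based on the
# three statistics discussed above.
def pick_compression_dimension(stats):
    """stats: {dimension: (stored_level0, avg_bundle_fill, expected_l0_size)}.

    Prefer the highest average bundle fill (1 worst .. 16 best), then the
    smallest expected level-0 size, then the fewest stored level-0 members.
    """
    return min(stats, key=lambda d: (-stats[d][1], stats[d][2], stats[d][0]))

# Hypothetical numbers for illustration only:
sample_stats = {
    "Measures":  (10, 14.2, 1200000),
    "Time":      (17, 9.8, 2500000),
    "Geography": (5000, 2.1, 6800000),
}
```

Here "Measures" would be the natural candidate: few stored level-0 members, the highest bundle fill, and the smallest expected level-0 size.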


Optimizing ASO Outline Page

An aggregate storage database outline is pageable; therefore, ASO can handle outlines with huge numbers of members without holding them all in memory, which can significantly improve performance. Depending on how you want to balance memory usage and data retrieval time, you can customize outline paging for aggregate storage outlines by using one or more of the following settings in the essbase.cfg file:

• PRELOADMEMBERNAMESPACE to turn off preloading of the member namespace.

• PRELOADALIASNAMESPACE to turn off preloading of the alias namespace.

• Compact the outline from time to time. When a member is deleted, it is not physically removed from the outline file; it is only marked as 'deleted'. If you never compact the outline, the file grows larger and larger.

Write data from BSO to ASO

BSO offers "write back" functionality at any level, and BSO can perform complex calculations. ASO is for large, aggregation-focused databases with many dimensions and many members. Sometimes we want to combine the advantages of both ASO and BSO.

In a world where I can make an ASO database the source of a partition, I can take advantage of the BSO strengths (write back to any level, powerful calculation engine) and then source this information to a consolidated ASO database that may also hold volumes of detail from other sources.

Note - the new Hyperion Profitability and Cost Management solution uses this model: BSO for allocation calcs and loads to an ASO cube for reporting.

Steps:

• Create the BSO database in a separate application from the one containing the ASO database. Typically, the block storage database contains a subset of the dimensions in the aggregate storage database.

• Create a transparent partition based on where you want the data to be stored. Make the block storage database the target and the aggregate storage database the source.


ASO Date-Time Dimension

Note: It is better to delete the dimension and use the Create Date-Time Dimension wizard to re-create it with the changes built in by the wizard, particularly if the changes involve adding or removing members. Simply deleting or adding a member of the Date-Time dimension by hand is risky.

Linked attribute dimensions can be associated only with the date-time dimension. Linked attribute dimensions can be built up in the process of creating the Date-Time dimension with the wizard.


Increase Data Load Performance

If you use multiple import database data MaxL statements to load data values to aggregate storage databases, you can significantly improve performance by loading values to a temporary Data Load Buffer first, with a final write to storage after all data sources have been read.

For an ASO database, the data load method is:

1. Create a data load buffer.

2. Import data into the data load buffer; many import processes can run at the same time.

3. Export data from the data load buffer into the ASO database.

MaxL Code:

alter database AsoSamp.Sample initialize load_buffer with buffer_id 1;

import database AsoSamp.Sample data from server data_file 'file_1.txt' to load_buffer with buffer_id 1 on error abort;


import database AsoSamp.Sample data from server data_file 'file_2.dat' using server rules_file 'rule' to load_buffer with buffer_id 1 on error abort;

import database AsoSamp.Sample data from server excel data_file 'file_3.xls' to load_buffer with buffer_id 1 on error abort;

import database AsoSamp.Sample data from load_buffer with buffer_id 1;

By default, when cells with identical keys are loaded into the same data load buffer, Essbase resolves the cell conflict by adding the values together.

To create a data load buffer that combines duplicate cells by accepting the value of the cell that was loaded last into the load buffer, use the alter database MaxL statement with the aggregate_use_last grammar.

For example:

alter database AsoSamp.Sample initialize load_buffer with buffer_id 1 property aggregate_use_last;
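The two duplicate-key behaviors can be modeled in a few lines. This is a sketch of the semantics only, not of Essbase internals: by default duplicate keys are added together, while aggregate_use_last keeps the value loaded last.

```python
# Sketch of the two duplicate-key behaviors of an ASO data load buffer
# (models the semantics only, not Essbase internals).
def load_into_buffer(cells, use_last=False):
    """cells: iterable of (key, value) pairs in load order.

    use_last=False -> duplicates are added together (the default).
    use_last=True  -> the value loaded last wins (aggregate_use_last).
    """
    buf = {}
    for key, value in cells:
        if use_last or key not in buf:
            buf[key] = value
        else:
            buf[key] += value
    return buf

cells = [(("Jan", "Sales"), 100.0), (("Jan", "Sales"), 40.0)]
```

With the default behavior the two cells combine to 140.0; with use_last the buffer keeps only the 40.0 loaded second.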

To list the data load buffers that exist on a database:

query database appname.dbname list load_buffers;

For example, to create a slice by overriding values (the default), use this statement:

import database AsoSamp.Sample data from load_buffer with buffer_id 1 override values create slice;

To merge all incremental data slices into the main database slice:

alter database AsoSamp.Sample merge all data;

MDX Query - BottomPercent

WITH
SET [Lowest 5% products] AS
  'BottomPercent ( { [Product].members }, 5, ( [Measures].[Sales], [Year].[Qtr2] ) )'
MEMBER [Product].[Sum of all lowest prods] AS
  'Sum ( [Lowest 5% products] )'
MEMBER [Product].[Percent that lowest sellers hold of all product sales] AS
  'Sum ( [Lowest 5% products] ) / [Product]'
SELECT
  { [Year].[Qtr2].children } ON COLUMNS,
  { [Lowest 5% products],
    [Product].[Sum of all lowest prods],
    [Product],
    [Product].[Percent that lowest sellers hold of all product sales] } ON ROWS
FROM Sample.Basic
WHERE ( [Measures].[Sales] )

MDX - Open Inventory

WITH
MEMBER [Measures].[Starting Inventory] AS
  'IIF ( IsLeaf ( Year.CurrentMember ),
     [Measures].[Opening Inventory],
     ( [Measures].[Opening Inventory],
       OpeningPeriod ( [Year].Levels(0), [Year].CurrentMember ) ) )'
MEMBER [Measures].[Closing Inventory] AS
  'IIF ( IsLeaf ( Year.CurrentMember ),
     [Measures].[Ending Inventory],
     ( [Measures].[Closing Inventory],
       ClosingPeriod ( [Year].Levels(0), [Year].CurrentMember ) ) )'
SELECT
  CrossJoin ( { [100-10] },
    { [Measures].[Starting Inventory], [Measures].[Closing Inventory] } ) ON COLUMNS,
  Hierarchize ( [Year].Members, POST ) ON ROWS
FROM Sample.Basic

The query returns a grid of Starting Inventory and Closing Inventory for product 100-10 across the members of the Year dimension.


V11 Service Starting Sequence

The following is the sequence to correctly start EAS for version 11:

1. Start the relational database that holds the repositories for LDAP, Shared Services, and Essbase Server. On Windows, this corresponds to the following three services:

OracleOraDb11g_home1ConfigurationManager

OracleServiceORCL

OracleOraDb11g_home1TNSListener

2. Start the LDAP service and Hyperion Shared Services:

Hyperion Foundation OpenLDAP

Hyperion Foundation Shared Services - Web Application

3. Start Essbase Server

Hyperion Essbase Services 11.1.1 - hypservice_1


4. Start the Essbase Administration Service

Hyperion Administration Services - Web Application
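The sequence above can be captured in a small script that emits `net start` commands in dependency order. The service names are taken from the listing above and may differ between installations; verify them in services.msc before use.

```python
# Emit Windows "net start" commands in the order the services must start.
# Names follow the listing above and may differ per installation.
START_ORDER = [
    # 1. Relational database holding the repositories
    "OracleOraDb11g_home1ConfigurationManager",
    "OracleServiceORCL",
    "OracleOraDb11g_home1TNSListener",
    # 2. OpenLDAP and Shared Services
    "Hyperion Foundation OpenLDAP",
    "Hyperion Foundation Shared Services - Web Application",
    # 3. Essbase Server
    "Hyperion Essbase Services 11.1.1 - hypservice_1",
    # 4. Essbase Administration Services
    "Hyperion Administration Services - Web Application",
]

def start_commands(services=START_ORDER):
    """Return the 'net start' command lines in the required order."""
    return ['net start "{0}"'.format(name) for name in services]
```

Writing the returned lines to a .bat file gives a repeatable startup script.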

ASO - Adding Huge Numbers of Dimension Members

ASO databases typically have a very large number of members. The most popular build methods are the generation reference method and the children reference method.

1. Create some rule files.

2. Create some data files. These data files are not for data loading but for member creation. Check the consistency between the data files and the rules files.

3. If there are too many members, you may have to use a programming language (such as JavaScript) to generate properly formatted data files from a more basic original data file.

4. Use MaxL import commands to load data into buffers from the data files in parallel.

5. Use the MaxL import command to load data from the buffers into Essbase:

import database AsoSamp.Sample data connect as TBC identified by 'password' using multiple rules_file 'rule1','rule2' to load_buffer_block starting with buffer_id 100 on error write to "error.txt";

import database AsoSamp.Sample data from load_buffer with buffer_id 100, 101;
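Step 3 above (reshaping a raw file into a build-ready format) might look like the following sketch, written here in Python rather than JavaScript. It converts generation-style rows into unique parent/child pairs for a children (parent-child) reference build; the input layout and the tab separator are assumptions for illustration.

```python
def to_parent_child(rows, sep="\t"):
    """Turn generation-style rows (ancestor ... leaf, separated by `sep`)
    into unique (parent, child) pairs, preserving first-seen order."""
    pairs, seen = [], set()
    for row in rows:
        members = [m for m in row.rstrip("\n").split(sep) if m]
        for parent, child in zip(members, members[1:]):
            if (parent, child) not in seen:
                seen.add((parent, child))
                pairs.append((parent, child))
    return pairs

# Hypothetical generation-style input rows:
rows = ["Products\tElectronics\tTV", "Products\tElectronics\tRadio"]
```

The resulting pairs can then be written out, one per line, as the data file consumed by a parent-child rules file.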