

SAP BW Concepts


1. Info Objects

2. DSO: DataStore objects permit complete granular (document-level) and historic data storage. As with DataSources, the data is stored in flat database tables. A DataStore object consists of a key (for example, document number, item) and a data area. The data area can contain both key figures (for example, order quantity) and characteristics (for example, order status). In addition to aggregating the data, you can also overwrite the data contents, for example to map the status changes of an order. This is particularly important with document-related structures.

A DataStore object serves as a storage location for consolidated and cleansed transaction data or master data on a document (atomic) level. This data can be evaluated using a BEx query. A DataStore object contains key fields (such as document number, document item) and data fields that, in addition to key figures, can also contain character fields (such as order status, customer). The data from a DataStore object can be updated with a delta update into InfoCubes (standard) and/or other DataStore objects or master data tables (attributes or texts) in the same system or across different systems.

Unlike multidimensional data storage using InfoCubes, the data in DataStore objects is stored in transparent, flat database tables. The system does not create fact tables or dimension tables.

In BI 7.0, three types of DataStore objects exist:
1. Standard DataStore (regular ODS)
2. DataStore object for direct update (APD ODS)
3. Write-optimized DataStore (new)

Standard DataStore (Regular ODS)
Features of the Change Log & Activation Queue of a Standard DataStore Object (DSO) in BI 7.0

Motivation for DSO
Consolidation & cleansing
o A further motivation is the need for a place where data can be consolidated and cleansed. This is important when we upload data from completely different source systems.
o After consolidation and cleansing, data can be uploaded to InfoCubes.
Storing data on document level
Overwrite capability of characteristics
o It is not possible to overwrite data in an InfoCube, because whenever data is added to an InfoCube it is aggregated. Data can be overwritten in a DSO, and this provides a significant capability to BW.
Reporting
o Directly on document-level data
o Drilldown from an InfoCube to document level

Architecture of Standard ODS / DSO (7.x)
"ODS Objects consist of three tables as shown in the architecture" - Source: SAP Docs

Fig. A - ODS Object Structure (C) SAP

The Transition: ODS Objects (3.x) to DSO (BI 7.0)
The ODS contains consolidated data from several InfoSources on a detailed (document) level, in order to support document analysis. In the context of the DSO, the PSA makes up the first level and the DSO table makes up the second level. The first level therefore consists of the transaction data from the source system, and the second level consists of the consolidated data from several source systems and InfoSources. You can run this analysis directly on the contents of the table, or reach it from an InfoCube query by means of a drilldown.

Fig. B - Sample schema for reporting using ODS objects (using Update Rules & Transfer Rules). Note: UR refers to Update Rules.

Prior to the existence of the DSO, decisions on granularity were based solely on data in the InfoCube. Now the InfoCube can be less granular, with data held for a longer period of time, while the DSO can be very granular but hold data for a shorter period of time. Data from the ODS can be updated into appropriate InfoCubes or other ODS objects. Reporting on the ODS can be done with the OLAP processor or directly with an ODS query.

In Fig. B, data from DataSource A and DataSource B is uploaded to a PSA. The PSA (Persistent Staging Area) corresponds to the first level described above. From the PSA we have the possibility, via transfer rules, to upload data to the DSO. The DSO is represented here as one layer, but depending on the business scenario, the BI DSO layer can be structured with multiple levels. Thus, the ODS objects offer data that is subject-oriented, consolidated and integrated with respect to the same process on different source systems. After data has been stored, or while the data is updated in the ODS, we have the option of making technical changes as well as data changes. In the ODS, data is stored in a de-normalized data structure.

Structure of ODS
While transferring data from the PSA to ODS objects, rules (Transfer Rules) can be applied to clean records and transform them to company-wide standards for characteristic values. If it is meaningful at this stage, business logic may also be applied (Update Rules).

Sample Scenario for a Standard DSO
Consider an example involving a Standard DSO in SAP BI 7.0. Let's check the flat file records: the key fields are customer and material, and we have a duplicate record (check Rec. 2). The 'Unique Data Records' option is unchecked, which means duplicate records can be expected.

Figure C - Explains how records are captured in a DSO (refer to the selected options below)

After the update rule, Record 2 in the PSA is overwritten as it has the same keys; it is overwritten with the most recent record. The key here is [M1000 | Customer A]. If we look at the monitor entries, 3 records are transferred to the update rules and two records are loaded into the activation queue table. This is because we haven't activated the request yet, and the duplicate record for the key in the DSO gets overwritten. Note: the activation queue can also be referred to as the 'New Data' table. Key figures have the overwrite option by default; additionally we have the summation option to suit certain scenarios, and characteristics are always overwritten.

Naming Conventions
The technical name of the new data / activation queue table is /BIC/A<ODS name>40 for customer objects and /BI0/A<ODS name>40 for SAP objects. The active data table is named /BIC/A<ODS name>00 (or /BI0/... for SAP objects). The change log table always has a generated technical name of the form /BIC/B<number>.

Once we activate, we have two records in the DSO's active data table. The active data table always contains the semantic key (e.g. Customer & Material in this instance).

Change Log
The change log table has 2 entries with the image 'N' (which stands for 'New'). The technical key (REQID, DATAPACKETID, RECORDNUMBER) is part of the change log table. (Refer to Fig. D)
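To make the overwrite behaviour concrete, here is a small conceptual sketch in Python (not SAP code; the table layout and field names such as "quantity" are invented for illustration): records that share the semantic key replace one another, and a key figure can be set to overwrite or summation.

```python
# Conceptual sketch of standard DSO behaviour: records with the same
# semantic key (here Customer + Material) overwrite each other, while a
# key figure can be configured to "overwrite" or "summation".

def load_to_active_table(active_table, incoming_records, key_fields, kf_mode="overwrite"):
    """active_table: dict mapping semantic key -> record (a dict).
    incoming_records: list of dicts from the PSA / activation queue."""
    for rec in incoming_records:
        key = tuple(rec[f] for f in key_fields)
        if key in active_table and kf_mode == "summation":
            # summation: add key figures; characteristics are still overwritten
            old = active_table[key]
            rec = {**rec, "quantity": old["quantity"] + rec["quantity"]}
        active_table[key] = rec          # overwrite (or insert) by semantic key
    return active_table

# Example corresponding to Fig. C: three PSA records, two share the key
# (Customer A, M1000), so only the most recent one survives.
psa = [
    {"customer": "Customer A", "material": "M1000", "status": "OPEN",   "quantity": 10},
    {"customer": "Customer A", "material": "M1000", "status": "CLOSED", "quantity": 12},
    {"customer": "Customer B", "material": "M2000", "status": "OPEN",   "quantity": 5},
]
active = load_to_active_table({}, psa, key_fields=("customer", "material"))
print(len(active))   # 2 records, as in the monitor entries above
```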

Fig. D - Data is loaded to the change log (CL) and active data table (ADT). (Please refer to Fig. A for more details.)

Introducing a few changes, we get the result shown in Fig. E.

Fig. E - Changes introduced from the flat file are reflected from PSA to ADT and from PSA to CL

Detailed Study on Change Logs

We will check the change log table to see how the deltas are handled. The records are from the first request, which is uniquely identified by the technical key (request number, data packet number, partition value of PSA and data record number). With the second request, the change log table records the before and after images for the relevant records.

Fig. F - Study of the change log and how the deltas are handled

In this example, the record for Customer and Material has a before image with record mode "X". Note that all key figures carry a "-" (negative) sign in the before image when the overwrite option is used, while characteristics are always overwritten. A new record (last row in Fig. F) is added with status "N", as it is a new record.
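The before/after image logic can be modelled in a few lines. The sketch below is a simplified, hypothetical Python model of the behaviour just described (record structure and field names are invented), not an SAP implementation:

```python
# Simplified model of change log images for a standard DSO with
# key figures in "overwrite" mode.

def change_log_entries(old_record, new_record):
    """Return the change log rows produced when new_record is activated.
    old_record is None if the semantic key did not exist before."""
    if old_record is None:
        return [{**new_record, "RECORDMODE": "N"}]          # new image
    before = {**old_record, "RECORDMODE": "X",
              "quantity": -old_record["quantity"]}          # before image, key figures negated
    after = {**new_record, "RECORDMODE": " "}               # after image
    return [before, after]

# Existing active record for (Customer A, M1000) gets overwritten:
old = {"customer": "Customer A", "material": "M1000", "status": "OPEN", "quantity": 10}
new = {"customer": "Customer A", "material": "M1000", "status": "CLOSED", "quantity": 12}
for row in change_log_entries(old, new):
    print(row)

# A brand new key produces a single 'N' record, as in the last row of Fig. F:
print(change_log_entries(None, {"customer": "Customer C", "material": "M3000",
                                "status": "OPEN", "quantity": 7}))
```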

Fig. G - Final Change Log Output

Record modes
The record mode(s) that a particular DataSource uses for the delta mechanism largely depends on the type of the extractor.

Fig. H - Types of record modes (C) SAP. Refer to OSS note 399739 for more details.

Work Scenario
Let's go through a sample real-time scenario. In this example we take the master data objects Customer and Material with a few attributes for demonstration purposes. Here we define an ODS / DSO as below, where material and customer form the key and the corresponding attributes are data fields:
- ODS / DSO definition
- Definition of the transformation
- Flat file loading
- Monitoring the entries
- Monitoring the activation queue
- Monitoring PSA data for comparison
- Checking the active data table
- Monitoring the change log table
- Displaying the data in a suitable InfoProvider (e.g. Flat File to PSA to DSO to InfoCube)

Note: In 7.0 the status data is written to the active data table in parallel while writing to the change log. This is an advantage of parallel processing, which can be customized globally or at object level in the system.

Write Optimized DSO

This blog describes a new DataStore in BI 7.0, the "Write-Optimized DataStore", which supports the most detailed level of tracking history, retains the document status and allows a faster upload without activation.

In a database system, read operations are much more common than write operations, and consequently most database systems have been read-optimized. As the size of main memory increases, more of the database read requests are satisfied from the buffer system, and the share of disk write operations compared to total disk operations relatively increases. This has turned the focus onto write-optimized database systems.

In SAP Business Warehouse, it is necessary to activate the data loaded into a DataStore object to make it visible for reporting or to update it to further InfoProviders. As of SAP NetWeaver 2004s, a new type of DataStore object was introduced: the Write-Optimized DataStore object. The objective of this new DataStore is to save data as efficiently as possible in order to further process it without any activation, without the additional effort of generating SIDs, and without aggregation or data-record-based delta. It is a staging DataStore used for a faster upload.

In BI 7.0, three types of DataStore objects exist:
1. Standard DataStore (regular ODS)
2. DataStore object for direct update (APD ODS)
3. Write-optimized DataStore (new)

In this weblog, I would like to focus on the features, usage and advantages of the Write-Optimized DataStore. The Write-Optimized DSO has been primarily designed to be the initial staging area for the source system data, from where the data can be transferred to a Standard DSO or an InfoCube.
o Data is saved in the write-optimized DataStore object quickly and stored in its most granular form. Document headers and items are extracted using a DataSource and stored in the DataStore.
o The data is then immediately written to the further data targets in the architected data mart layer for optimized multidimensional analysis.

The key benefit of using a write-optimized DataStore object is that the data is immediately available for further processing in the active version. You save activation time across the landscape. The system does not generate SIDs for write-optimized DataStore objects, to achieve a faster upload. Reporting is also possible on the basis of these DataStore objects. However, SAP recommends using the Write-Optimized DataStore as an EDW inbound layer and updating the data into further targets such as standard DataStore objects or InfoCubes.

Fast EDW inbound layer - an introduction
Data warehousing has developed into an advanced and complex technology. For some time it was assumed that it is sufficient to store data in a star schema optimized for reporting. However, this does not adequately meet the needs of consistency and flexibility in the long run. Therefore data warehouses are structured using a layer architecture, with an Enterprise Data Warehouse layer and an Architected Data Mart layer. These layers contain data at different levels of granularity, as shown in Figure 1.

Figure 1 - The Enterprise Data Warehouse layer is a corporate information repository

The benefits of the Enterprise Data Warehouse layer include the following:
Reliability, traceability - prevent silos
o 'Single point of truth'.
o All data has to pass this layer on its path from the source to the summarized, EDW-managed data marts.
Controlled extraction and data staging (transformations, cleansing)
o Data is extracted only once and deployed many times.
o Merging of data that is commonly used together.
Flexibility, reusability and completeness
o The data is not manipulated to please specific project scopes (unflavored).
o Coverage of unexpected ad-hoc requirements.
o The data is not aggregated.
o Normally not used for reporting; used for staging, cleansing and one-time transformation.
o Old versions, such as document statuses, are not overwritten or changed, but useful information may be added.
o Historical completeness - different levels of completeness are possible, from availability of the latest version with change date to a change history of all versions including extraction history.
o Modeled using a write-optimized DataStore or a standard DataStore.
Integration
o Data is integrated.
o Realization of the corporate data integration strategy.

Architected data marts are used for the analysis/reporting layer, aggregated data and data manipulation with business logic, and can be modeled using InfoCubes or MultiCubes.

When is it recommended to use the Write-Optimized DataStore?
Here are the scenarios for the Write-Optimized DataStore (as shown in Figure 2):
o Fast EDW inbound layer. SAP recommends the Write-Optimized DSO to be used as the first layer, called the Enterprise Data Warehouse layer. As not all business content comes with this DSO layer, you may need to build your own. You may check table RSDODSO for version D and type "write-optimized".
o There is always the need for faster data loads. DSOs can be configured to be write-optimized; the data load then happens faster and the load window is shorter.
o Used where fast loads are essential, for example multiple loads per day or short source system access times (worldwide system landscapes).
o If the DataSource is not delta-enabled. In this case, you would want a Write-Optimized DataStore to be the first stage in BI and then pull the delta request into a cube.
o A write-optimized DataStore object is used as a temporary storage area for large sets of data when executing complex transformations for this data before it is written to the DataStore object. Subsequently, the data can be updated to further InfoProviders. You only have to create the complex transformations once for all incoming data.
o Write-optimized DataStore objects can be the staging layer for saving data. Business rules are only applied when the data is updated to additional InfoProviders.
o If you want to retain history at request level. In this case you may not need a PSA archive; instead you can use a Write-Optimized DataStore.
o If a multidimensional analysis is not required and you want to have operational reports, you might want to use a Write-Optimized DataStore first and then feed the data into a Standard DataStore.
o You can use it as a preliminary landing space for your incoming data from different sources.
o If you want to report on daily refreshed data without activation. In this case it can be used in the reporting layer with an InfoSet or MultiProvider.
I have discussed possible scenarios, but it is up to you to decide where this DataStore can fit in your data flow.

Typical Data Flow using Write-Optimized DataStore
Figure 2 - Typical data flow using a write-optimized DataStore.

Functionality of Write-Optimized DataStore (as shown in Figure 3)
Only an active data table (DSO key: request ID, packet number and record number):
o No change log table and no activation queue.
o Size of the DataStore is maintainable.
o The technical key is unique.
o Every record has a new technical key; only inserts.
o Data is stored at request level, like a PSA table.
No SID generation:
o Reporting is possible (but you need to make sure performance is optimized).
o BEx reporting is switched off.
o Can be included in an InfoSet or MultiProvider.
o Performance improvement during data load.
Fully integrated in the data flow:
o Used as data source and data target.
o Export into InfoProviders via request delta.
Uniqueness of data:
o Checkbox "Do not check uniqueness of data".
o If this indicator is set, the active table of the DataStore object can contain several records with the same key.
Allows parallel loads.
Can be included in a process chain without an activation step.
Supports archiving.

You cannot use reclustering for write-optimized DataStore objects, since this DataStore's data is not meant for querying. You can only use reclustering for standard DataStore objects and DataStore objects for direct update.

PSA and the write-optimized DSO are two different entities in the data flow, as each one has its own features and usage. A write-optimized DSO will not replace the PSA in a data flow, but it allows you to stage (or store) the data without activation and to apply business rules. A write-optimized DataStore object is automatically partitioned; manual partitioning can be done according to SAP Notes 565725/742243. Optimized write performance is achieved by request-level insertions, similar to the F table of an InfoCube. As we are aware, the F fact table is write-optimized while the E fact table is read-optimized.
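The insert-only behaviour described above can be pictured with a small conceptual model: every record receives a new technical key (request, data package, record number), so nothing is ever overwritten and the data is active immediately. A hypothetical Python sketch, not SAP code:

```python
# Conceptual model of a write-optimized DataStore: only an active table,
# keyed by the technical key (0REQUEST, 0DATAPAKID, 0RECORD). Every load
# is a pure insert; duplicates on the semantic key are kept.

import itertools

class WriteOptimizedDSO:
    def __init__(self):
        self.active_table = {}            # technical key -> record
        self._request_no = itertools.count(1)

    def load_request(self, records, package_size=2):
        request_id = next(self._request_no)
        for rec_no, rec in enumerate(records):
            package_id = rec_no // package_size + 1
            tech_key = (request_id, package_id, rec_no + 1)
            self.active_table[tech_key] = rec   # insert only, no overwrite
        return request_id                        # data is immediately "active"

dso = WriteOptimizedDSO()
dso.load_request([{"order": "123456", "item": 7, "qty": 10}])
dso.load_request([{"order": "123456", "item": 7, "qty": 12}])   # same semantic key, kept twice
print(len(dso.active_table))   # 2 records, history retained at request level
```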

Figure 3 - Overview of the various DataStore object types in BI 7.0

To define a Write-Optimized DataStore, just change the Type of DataStore Object to "Write-Optimized", as shown in Figure 4.

Figure 4 - Technical settings for the Write-Optimized DataStore.

Understanding Write-Optimized DataStore keys
Since data is written directly into the write-optimized DataStore's active table, you do not need to activate the request as is necessary with the standard DataStore object. The loaded data is not aggregated; the history of the data is retained at request level. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object. The record mode responsible for aggregation remains, however, so the aggregation of the data can take place later in standard DataStore objects.

The system generates a unique technical key for the write-optimized DataStore object. The technical key consists of the Request GUID field (0REQUEST), the Data Package field (0DATAPAKID) and the Data Record Number field (0RECORD), as shown in Figure 4. Only new data records are loaded under this key.

The standard key fields are not necessary with this type of DataStore object, so you can define a Write-Optimized DataStore without a standard key. If standard key fields exist anyway, they are called semantic keys so that they can be distinguished from the technical key. Semantic keys can be defined as primary keys in a further target DataStore, depending on the requirement. For example, if you are loading data into a schedule-line-level ODS through a write-optimized DSO, you can have header, item and SCL as the semantic keys in your write-optimized DSO. The purpose of the semantic key is to identify errors or duplicates in the incoming records. All subsequent data records with the same key are written to the error stack along with the incorrect data records; they are not updated to the data targets, only to the error stack. A maximum of 16 key fields and 749 data fields are permitted. Semantic keys protect data quality; they do not appear at database level. In order to process error records or duplicate records, you must define a semantic group in the DTP (data transfer process), which is used to define a key for evaluation, as shown in Figure 5. If you assume that there are no incoming duplicates or error records, there is no need to define a semantic group; it is not mandatory.

The semantic key determines which records should be detained during processing. For example, if you define order number and item as the key, and you have one erroneous record with order number 123456, item 7, then any other records received in that same request or in subsequent requests with order number 123456, item 7 will also be detained. This applies to duplicate records as well.
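The detaining behaviour of the semantic key and error stack can also be sketched conceptually: once a semantic key lands in the error stack, every later record with the same key is detained as well. The following Python model is illustrative only; the field names and the error check are assumptions for the example:

```python
# Illustrative model of semantic-key based error handling in a DTP:
# once a key lands in the error stack, all subsequent records with the
# same key are detained instead of being updated to the target.

def process_request(records, semantic_key, is_erroneous, target, error_stack, detained_keys):
    for rec in records:
        key = tuple(rec[f] for f in semantic_key)
        if key in detained_keys or is_erroneous(rec):
            detained_keys.add(key)
            error_stack.append(rec)          # written to the error stack
        else:
            target.append(rec)               # updated to the data target

target, error_stack, detained = [], [], set()
records = [
    {"order": "123456", "item": 7, "qty": -1},   # erroneous record (negative quantity, say)
    {"order": "123456", "item": 7, "qty": 10},   # valid, but same key -> detained
    {"order": "999999", "item": 1, "qty": 5},    # unaffected key -> posted
]
process_request(records, ("order", "item"), lambda r: r["qty"] < 0,
                target, error_stack, detained)
print(len(target), len(error_stack))   # 1 record posted, 2 detained in the error stack
```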

Figure 5 - Semantic group in the data transfer process.

The semantic key definition integrates the write-optimized DataStore and the error stack through the semantic group in the DTP, as shown in Figure 5. With SAP NetWeaver 2004s BI SPS10, the write-optimized DataStore object is fully connected to the DTP error stack function.

If you want to use a write-optimized DataStore object in BEx queries, it is recommended that you define a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query.

Delta Administration
Data that is loaded into write-optimized DataStore objects is available immediately for further processing; the activation step that was previously necessary is no longer required. Note that the loaded data is not aggregated. If two data records with the same logical key are extracted from the source, both records are saved in the DataStore object, since the technical key is unique for each of them. The record mode (InfoObject 0RECORDMODE: space, X, A, D, R) responsible for aggregation remains, however, so the aggregation of the data can take place at a later time in standard DataStore objects or InfoCubes. The Write-Optimized DataStore does not support image-based delta (0RECORDMODE); it supports request-level delta, and you get a brand-new delta request for each data load.

Because write-optimized DataStore objects do not have a change log, the delta administration works with the load requests; the system does not create a delta in the sense of a before image and an after image. When you update data into the connected InfoProviders, the system only updates the requests that have not yet been posted. In other words, the Write-Optimized DataStore supports request-level delta; in order to capture before- and after-image deltas, you have to post the latest requests into further targets such as standard DataStores or InfoCubes.

Extraction method - Transformations via DTP or Update Rules via InfoSource
Prior to using a DTP, you have to migrate the 3.x DataSource into a BI 7.0 DataSource using transaction code RSDS, as shown in Figure 6.

Figure 6 - Migration of a 3.x DataSource to a BI 7.0 DataSource using transaction RSDS, followed by replication of the DataSource into BI 7.0.

After the DataSource has been replicated into BI 7.0, you create a data transfer process (DTP) to load data into the Write-Optimized DataStore. Write-optimized DataStore objects can force a check of the semantic key for uniqueness when data is stored. If this option is active and duplicate records (with regard to the semantic key) are loaded, they are logged in the error stack of the data transfer process (DTP) for further evaluation.

In BI 7 you have the option to create an error DTP. If any error occurs in the data, the erroneous data is stored in the error stack. You can correct the errors in the stack and then schedule the error DTP so that the corrected data is posted to the target; otherwise you have to delete the error request from the target and reschedule the DTP. In order to integrate the Write-Optimized DataStore into the error stack, you must define semantic keys in the DataStore definition and create a semantic group in the DTP, as shown in Figure 5.

The semantic group definition is also necessary for parallel loads to a Write-Optimized DataStore. You can update write-optimized DataStore objects in parallel after you have implemented OSS note 1007769. When you include a DTP for a write-optimized DataStore object in a process chain, make sure that there is no subsequent activation step for this DataStore. Alternatively, you can still connect this DSO through an InfoSource with update rules, using the 3.x functionality.

Reporting on Write-Optimized DataStore Data
For performance reasons, SID values are not created for the characteristics that are loaded. The data is still available for BEx queries; however, in comparison to standard DataStore objects, you can expect slightly worse performance because the SID values have to be created during reporting. It is therefore recommended to use write-optimized DataStores as a staging layer and to update the data to standard DataStore objects or InfoCubes. From the OLAP / BEx query perspective, there is no big difference between a Write-Optimized DataStore and a Standard DataStore; the technical key is not visible for reporting, so the look and feel is just like a regular DataStore. If you want to use a write-optimized DataStore object in BEx queries, it is recommended that it has a semantic key and that you run a check to ensure that the data is unique. In this case, the write-optimized DataStore object behaves like a standard DataStore object. If the DataStore object does not have these properties, unexpected results may be produced when the data is aggregated in the query. In a nutshell, the Write-Optimized DSO is not for reporting purposes unless otherwise required; it is a staging DataStore used for faster uploads. Direct reporting on this object is also possible without activation, but keeping performance in mind, you can use an InfoSet or MultiProvider instead.

Conclusion
Using a Write-Optimized DataStore, you have a snapshot for each extraction. This data can be used for trending old KPIs or deriving new KPIs at any time, because the data is stored at request level. This most granular level of data, by calendar day/time, can be used for slice and dice, data mining, root-cause analysis and behavioral analysis, which helps in better decision making.
Moreover, you need not worry about the status of the extracted documents in BI, since the data is stored as of the extraction date/time. For example, the Order-to-Cash or spend-analysis lifecycle can be monitored in detail to identify the bottlenecks in the process. Although there is help documentation available from SAP on the Write-Optimized DataStore, I thought it would be useful to write this blog to give a clear view of the Write-Optimized DataStore concept and the typical scenarios of where, when and how to use it; you can customize the data flow / data model as per the reporting or downstream requirements. A more detailed step-by-step technical document will be released soon.

Useful OSS notes
Please check the latest OSS notes / support packages from SAP to overcome any technical difficulties and make sure to implement them.
OSS 1077308: In a write-optimized DataStore object, 0FISCVARNT is treated as a key, even though it is only a semantic key.
OSS 1007769: Parallel updating in write-optimized DataStore objects.
OSS 1128082: P17:DSO:DTP: Write-optimized DSO and parallel DTP loading.
OSS 966002: Integration of write-optimized DataStore in the DTP error stack.
OSS 1054065: Archiving support.

You can attend SAP class DBW70E - BI Delta Enterprise Data Warehousing SAP NetWeaver 2004s, or visit http://www50.sap.com/useducation/

References
SAP Help documentation:
http://help.sap.com/saphelp_nw04s/helpdata/en/f9/45503c242b4a67e10000000a114084/content.htm
http://help.sap.com/saphelp_sem60/helpdata/en/b6/de1c42128a5733e10000000a155106/content.htm
New BI Capabilities in SAP NetWeaver 2004s:
https://www.sdn.sap.com/irj/servlet/prt/portal/prtroot/docs/library/uuid/5c46376d-0601-0010-83bf-c4f5f140e3d6
Enterprise Data Warehousing - SAP BI:
https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/67efb9bb-0601-0010-f7a2-b582e94bcf8a
SAP NetWeaver 7.0 Business Intelligence Warehouse Management:
http://www.tacook.co.uk/media/pdf/00075_pre3.pdf

Difference Between Standard and Write Optimized DSO - Technical Settings

DSO activation job log and settings explained

In BI 7.x you have three different kinds of DataStore objects (DSO): standard, write-optimized and direct update. A standard DSO consists of a new data table, an active data table and a change log table, which records the changes. Write-optimized DSOs and DSOs for direct update consist of an active table only. In BI 7.x the background process by which data in standard DataStore objects is activated has changed in comparison to BW 3.5 or earlier. In this blog I will explain the DSO activation job log and the settings / parameters of transaction RSODSO_SETTINGS, and describe how the parameters you can set in this transaction influence DSO activation performance. I will not describe the different activation types.

2. Manual activation of a request
If you have loaded a new request with a data transfer process (DTP) into your standard DSO, the data is written to the new data table. You can activate the request manually or within a process chain. If you activate requests manually, you get the following popup screen:

Picture 1: Manual DSO activation

The button "Activate in Parallel" sets the settings for parallel activation. In this popup you select either dialog or background processing; for background processing you select the job class and server. For both you define the number of jobs for parallel processing. By default it is set to '3'. This means you have two jobs that can be scheduled in parallel to activate your data, the BIBCTL* jobs. The third job is needed for controlling the activation process and scheduling the processes; that is the BI_ODSA* job.

3. BI_ODSA* and BIBCTL* jobs
The main job for activating your data is "BI_ODSAxxxxxxxxxxxxxxxxxxxxxxxxx", with a unique 25-character GUID at the end. Let's have a look at the job log with SM37.

Picture 2: Job log for the BI_ODSA* job

Activating data is done in 3 steps. First, the status of the request in the DSO is checked to see whether it can be activated (marked green in the log). If there is another yellow or red request before this request in the DSO, activation terminates. In a second step, the data is checked against archived data (marked blue in the log). In the third step, the activation of the data takes place (marked red in the log). During step 3, a number of sub-jobs "BIBCTL_xxxxxxxxxxxxxxxxxxxxxxxx", each with a unique 25-character GUID at the end, are scheduled. This is done to achieve a higher degree of parallelism and thus better performance. But how is the data split up into the BIBCTL* jobs? How does the system know how many jobs should be scheduled? I will answer this question in the next chapter.

Often, however, the opposite seems to happen: you set a high parallel degree and start the activation, but the activation of even a few data records takes a long time. In the DSO activation collection note you will find some general hints on DataStore performance. In chapter 4 I will show you which settings can be the reason for these long-running activations. After the last "BIBCTL*" job has been executed, the SID activation is started. Unfortunately this is not written to the job log for each generation step, but only at the end, once the last of your SID generation jobs has finished. Let's look at the details and performance settings and how they influence DSO activation, so that you may reduce your DSO activation time.

4. Transaction for DSO settings
You can view and change these DSO settings with "Goto -> Customizing DataStore" in your DSO's manage view. You are now in transaction RSODSO_SETTINGS.
In help.sap.com (http://help.sap.com/saphelp_nw70ehp2/helpdata/en/e6/bb01580c9411d5b2df0050da4c74dc/content.htm) you can find some general hints on the runtime parameters of a DSO.

Picture 3: RSODSO_SETTINGS

As you can see, you can make cross-DataStore settings or DataStore-specific settings. I choose cross-DataStore and choose "Change" for now. A new window opens which is divided into three sections:

Picture 4: Parameters for cross-DataStore settings

Section 1 is for activation, section 2 for SID generation and section 3 for rollback. Let's have a look at the sections one after another.

5. Settings for activation
In the first section you can set the data package size for activation, the maximum wait time, and change the process parameters. If you click on the button "Change process params" in the "Parameters for activation" part, a new popup window opens:

Picture 5: Job parameters

The parameter described here is your default of '3' processes provided for parallel processing. By default, background activation is also chosen. You can now save and transport these settings to your QAS or productive system. But be careful: these settings are valid for every DSO in your system.

As you can see from picture 4, the number of data records per data package is set to 20000 and the wait time for a process is set to 300; this may differ in your system. What does this mean? It simply means that all the records which have to be activated are split into smaller packages with a maximum of 20000 records per package. A new "BIBCTL*" job is scheduled for each data package. The main activation job calculates the number of "BIBCTL*" jobs to be scheduled from this (essentially, the number of records to activate divided by the data package size, rounded up; see the short sketch below).

One important point: you can change the parameters for data package size and maximum wait time for a process only if there is no request in your DSO. If you have loaded one request and you change the parameters, the next request will still be loaded with the previous parameter settings. You first have to delete the data in your DSO, change the parameter settings and restart the loading.

6. Settings for SID generation
In the SID generation section you can likewise set parameters for the maximum package size and the wait time for processes. The button "Change process params" opens the popup described in picture 5; in this popup you define how many processes will be used for SID generation in parallel (again your default value). The minimum package size describes the minimum number of records that are bundled into one package for SID activation.

With the SAP Layered Scalable Architecture (LSA) in mind, you need SID generation for your DSO only if you want to report on it and have queries built on it. Even if you have queries built on top of a DSO without SID generation, the missing SIDs will be generated at query execution time, which slows down query execution. For more information on LSA you can watch a really good webinar from the SDN webinars. Unfortunately, SID generation is set as the default when you create your DSO. My recommendation is: switch off SID generation for any DSO! If you use the DataStore object as the consolidation level, SAP recommends that you use the write-optimized DataStore object instead. This makes it possible to provide data in the Data Warehouse layer 2 to 2.5 times faster than with a standard DataStore object with unique data records and without SID generation. See the performance tips for details.
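Coming back to the activation packages from section 5: assuming one BIBCTL* job per data package, as described above, the number of activation jobs can be estimated with a simple ceiling division. This is an inference from the text, not an official SAP formula:

```python
import math

def estimated_bibctl_jobs(records_to_activate, package_size=20000):
    """One BIBCTL* job per data package, assuming the package size
    configured in RSODSO_SETTINGS (default 20,000 records)."""
    return math.ceil(records_to_activate / package_size)

print(estimated_bibctl_jobs(1_250_000))         # 63 packages -> 63 activation jobs
print(estimated_bibctl_jobs(1_250_000, 50000))  # larger packages -> 25 jobs
```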
The performance tips for DataStore objects in help.sap.com (http://help.sap.com/saphelp_nw70ehp2/helpdata/en/48/146cb408461161e10000000a421937/content.htm) also contain the following table, which shows how the flags "Generation of SIDs During Activation" and "Unique Data Records" influence DSO activation:

| Generation of SIDs During Activation | Unique Data Records | Saving in runtime |
| X | X | approx. 25% |
|   |   | approx. 35% |
|   | X | approx. 45% |

The saving in runtime is influenced primarily by the SID determination. Other factors that have a favorable influence on the runtime are a low number of characteristics and a low number of disjoint characteristic attributes.

7. Settings for rollback
The last section describes rollback. Here you set the maximum wait time for rollback processes, and with the button "Change process params" you set the number of processes available for rollback. If anything goes wrong during activation, e.g. your database runs out of table space or an error occurs during SID generation, the rollback is started and your data is reset to the state before activation. The most important parameter is the maximum wait time for rollback: if this time is exceeded, the rollback job is canceled, which could leave your DSO in an unstable state. My recommendation is to set this parameter to a high value. If you have a large amount of data to activate, you should allow at least double the maximum wait time for activation as the rollback wait time. You should give your database enough time to execute the rollback and reset your DSO to the state before the activation started. The "Save" button saves all your cross-DataStore settings.

8. DataStore-specific settings
For a DataStore-specific setting you enter your DSO in the input field, as shown in picture 3. With this local DSO setting you overwrite the global DSO settings for the selected DSO. Especially if you expect very large DSOs with a lot of records, you can change your parameters here. If you press the button "Change process params", the same popup opens as under the global settings, see picture 5.

9. Activation in process chains
I have explained the settings for manual activation of requests in a standard DSO. For process chains you have to create a variant for DSO activation as a step in your chain, see picture 6. In this variant you can set the number of parallel jobs for activation accordingly with the button "Parallel Processing".

Performance issues during DSO request activation

Purpose
This page contains some general tips on how to improve the performance of activating DSO requests.

Overview
These tips will help ensure that your DSO requests are activated with optimal performance.

Tips
DSO activation can be slow if the batch tables are large, as these are run through for object activations. So as a starter, please clean down the batch system with the usual housekeeping tools (report RSBTCDEL2, transaction SM65, etc.); your Basis team will be aware of these and should run them for you. Ensure that the statistics for the DSO are up to date. If you are not reporting on the DSO, the activation of SIDs is not required (this takes up considerable time during activation). Often the logs show that the activation job spends almost all of its time scheduling RSBATCH_EXECUTE_PROCESS as job BIBCTL_*; RSBATCH_EXECUTE_PROCESS schedules and executes the SID-generation process. If you don't need the relevant DSO for reporting and you don't have queries on it, you can remove the reporting flag in the DSO maintenance; this is a good way to speed the process up significantly. Check under 'Settings' in the DSO maintenance whether you have flagged the option "SID Generation upon Activation". By making some adjustments to the DataStore object parameters in transaction RSODSO_SETTINGS you should be able to accelerate the request activation. You can adjust this for all DSOs or for specific ones.

Questions on DSO

Q1. Hi, I need a clarification on adding objects to an existing DSO. I want to add 2 characteristics, one as a key field and the other as a data field, to an existing DSO. The characteristics are not present in the DSO now. This DSO has millions of records in the production system. So my questions are:
1. Is it possible to add InfoObjects to the existing DSO?
2. Even if I am able to add them, will there be any problem during transport, given the millions of records in production for this DSO? We are using BI 7.0.

A. a. You can add key fields only to an empty DSO. Even if you add just data fields, you have to reload data to fill these fields for the existing records.
b. 1) Yes, it's possible to add InfoObjects to the existing DSO.
2) There will be no problem adding the field and transporting it to production, even if the DSO contains millions of records. But if the business requests historical data for this newly added key InfoObject, you need to drop the entire data and reload it. If historical data is not needed, you are good to use it once it is transported to production.

Q2. Can I upload data from two DataSources into the same ODS at the same time? What if one of the two DataSources is RDA?

A. a. Yes, you can load data from multiple sources into one DSO, but the activation step needs to happen in one go. After the two loads from the sources have finished successfully, you can activate all the DSO requests at one time. If any load has invalid/bad data, the activation step will fail; you then need to do a manual correction in the PSA (after deleting the request at DSO level), reload from the PSA to the DSO and activate again. If you load through a process chain, you may face a lock issue. For example, with two loads, if one request has less data, its load finishes early and activation starts; but due to the other load it cannot activate and has to wait until that load finishes, after which it will activate. You can design your process chain to use parallel steps up to the loading of the DSO and then add the activation step in series.

b. You can very well load your DSO from two DataSources. You can also load via real-time data acquisition, no issues. But the activation of the DSO requests can only be done once both DataSources have been successfully loaded into the DSO; that is, both requests can be activated in one shot. You can build your process chain in the above fashion.

Q3. I'm faced with an issue - I need to load data from 2 DataSources into one DSO, but I need to end up with only one data record.
Example: The first DataSource has information about accounting documents, G/L account, reference transactions and reference keys. The second DataSource has material document number, material and storage location.

First DataSource:
| Accounting Document | G/L Account | Ref. TRN | Ref Key |
| 1 | 1 | MKPF | 22 |

Second DataSource:
| Material Document | Material | Storage Location |
| 22 | pencil | HR |

Needed in the DSO - one record:
| Accounting Document | G/L Account | Ref. TRN | Ref Key | Material Document | Material | Storage Location |
| 1 | 1 | MKPF | 22 | 22 | pencil | HR |

The data can be joined on material document and reference key (material document = reference key). I tried to create an InfoSource with the desired communication structure, then created a transformation between the InfoSource and the DSO, then configured two data transfer processes - one per DataSource - executed the DTPs and activated the data. As a result I have two records instead of one.

A. a. Your DataSources should have at least one common field, so that you can use these common fields as your DSO key fields. Only when you load data from the 2 DataSources with such a common key will you get one record; otherwise you will always get 2 records. Workaround: create two separate DSOs for the two DataSources individually. You can then create an InfoSet by inner-joining "Ref Key" and "Material Document"; this will give you the required single record.

b. Make a DSO with material as the key field and the rest of the fields of the DataSources as data fields. Create a transformation between DataSource 2 and the DSO and map all of its fields. Create another transformation between DataSource 1 and the DSO, making sure to map Ref Key to the material key field in the DSO. Load the data for both in the sequence mentioned above. This should work and give you one record as desired.
c. Creating 2 separate DSOs and using an InfoSet will work, by joining reference key and material document. However, with an InfoSet the performance of the query will not be good. I suggest going for a third cube or DSO that has the first DSO's transformation mapped and does a lookup from the second DSO (if the second DSO has fewer InfoObjects).
d. The second DataSource seems to contain material master data, so I would suggest loading it into the attributes of the material InfoObject. Then you add the material to the DSO where you load the data from the first DataSource. You can use the data from the second DataSource as navigational attributes, or load it into the DSO in an end routine of the transformation from DataSource 1 to the DSO if you need the attributes there. (A sketch of the underlying join logic follows below.)
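The lookup suggested in answers (c) and (d) boils down to an inner join of the two record sets on reference key = material document. A small illustrative Python sketch of that join, using the field names from the example above:

```python
# Sketch of the lookup/join logic: combine accounting records and material
# document records on "ref_key" = "material_document", as suggested above.

accounting = [
    {"accounting_doc": "1", "gl_account": "1", "ref_trn": "MKPF", "ref_key": "22"},
]
material_docs = [
    {"material_document": "22", "material": "pencil", "storage_location": "HR"},
]

# Index the second source by its key (what a DSO lookup in an end routine would do)
by_mat_doc = {m["material_document"]: m for m in material_docs}

joined = []
for acc in accounting:
    match = by_mat_doc.get(acc["ref_key"])
    if match:                       # inner join: keep only matching records
        joined.append({**acc, **match})

print(joined)   # one combined record, as required in the question
```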

3. Info Cube
4. Multi Provider
5. Info Set
6. PSA: In the persistent staging area (PSA), the structure of the source data is represented by DataSources. The data of a business unit (for example, customer master data or item data of an order) for a DataSource is stored in a transparent, flat database table, the PSA table. Data storage in the persistent staging area is short- to medium-term. Since it provides the backup status for the subsequent data stores, queries are not possible on this level and this data cannot be archived.
7. Info Package

8. DTP: A DTP determines the process for transferring data between two persistent objects within BI. As of SAP NetWeaver 7.0, an InfoPackage loads data from a source system only up to the PSA; it is the DTP that determines the further loading of data from there on.

Use
- Loading data from the PSA to InfoProvider(s).
- Transfer of data from one InfoProvider to another within BI.
- Data distribution to a target outside the BI system, e.g. Open Hubs.

In the process of transferring data within BI, the transformations define the mapping and logic for updating data to the data targets, whereas the extraction mode and update mode are determined by the DTP.

NOTE: A DTP is used to load data within the BI system only, except when used in Virtual InfoProvider scenarios, where a DTP can be used to perform a direct data fetch from the source system at run time.

Key benefits of using a DTP over conventional InfoPackage loading
1. A DTP follows a one-to-one mechanism between a source and a target, i.e. one DTP feeds data to only one data target, whereas an InfoPackage loads data to all data targets at once. This is one of the major advantages over the InfoPackage method, as it helps achieve a number of other benefits.
2. Isolation of data loading from the source into the BI system (PSA) and within the BI system. This allows scheduling data loads to InfoProviders at any time after loading data from the source.
3. A better error-handling mechanism through the use of the temporary storage area, semantic keys and the error stack.

Extraction
There are two extraction modes for a DTP: Full and Delta.

Full: Update mode Full is the same as in an InfoPackage. It selects all the data available in the source based on the filter conditions defined in the DTP. When the source of the data is one of the following InfoProviders, only the Full extraction mode is available:
- InfoObjects
- InfoSets
- DataStore objects for direct update
Delta is not possible when the source is one of the above.

Delta: Unlike an InfoPackage, delta transfer using a DTP doesn't require an explicit initialization. When a DTP is executed in extraction mode Delta for the first time, all requests existing in the source up to that point are retrieved and the delta is automatically initialized. (A conceptual sketch of this request-level delta bookkeeping follows below.)
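Conceptually, the DTP's delta handling is request bookkeeping: it remembers which source requests it has already transferred and, on each run, picks up only the new ones, so the first run doubles as the initialization. A hypothetical Python sketch of that idea (not SAP code):

```python
# Conceptual model of DTP delta handling: transfer only the source
# requests that have not yet been posted to this target.

class DeltaDTP:
    def __init__(self):
        self.transferred = set()      # request IDs already posted to the target

    def execute(self, source_requests):
        """source_requests: dict of request_id -> list of records (e.g. a PSA)."""
        new_ids = [rid for rid in sorted(source_requests) if rid not in self.transferred]
        delta = [rec for rid in new_ids for rec in source_requests[rid]]
        self.transferred.update(new_ids)
        return delta

psa = {1: [{"doc": "A"}], 2: [{"doc": "B"}]}
dtp = DeltaDTP()
print(len(dtp.execute(psa)))   # first run = automatic init, fetches both requests
psa[3] = [{"doc": "C"}]
print(len(dtp.execute(psa)))   # second run fetches only the new request 3
```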

The below 3 options are available for a DTP with extraction mode Delta:
- Only Get Delta Once
- Get All New Data Request By Request
- Retrieve Until No More New Data

I. Only Get Delta Once: If this indicator is set, a snapshot scenario is built. The data available in the target is an exact replica of the source data.
Scenario: Consider a scenario in which data is transferred from a flat file to an InfoCube. The target needs to contain the data from the latest flat file load only. Each time a new request is loaded, the previous request needs to be deleted from the target. For every new data load, any previous request loaded with the same selection criteria is to be removed from the InfoCube automatically. This is necessary whenever the source delivers only the last status of the key figures, similar to a snapshot of the source data.
Solution - Only Get Delta Once: A DTP with a full load would satisfy the requirement. However, a full DTP is not recommended, because it loads all requests from the PSA regardless of whether they were loaded previously or not. So, in order to avoid data duplication due to full loads, we would always have to schedule a PSA deletion before the full DTP is triggered again. Only Get Delta Once does this job in a much more efficient way, as it loads only the latest request (delta) from the PSA into the data target:
1. Delete the previous request from the data target.
2. Load data up to the PSA using a full InfoPackage.
3. Execute the DTP in extraction mode Delta with Only Get Delta Once checked.
These 3 steps can be incorporated in a process chain, which avoids any manual intervention.

II. Get All New Data Request By Request: If you set this indicator in combination with Retrieve Until No More New Data, the DTP gets data from one request in the source. When it completes processing, the DTP checks whether the source contains any further new requests. If the source contains more requests, a new DTP request is automatically generated and processed.
NOTE: If Retrieve Until No More New Data is unchecked, this option automatically changes to Get One Request Only, which in turn gets only one request from the source. Also, once the DTP is activated, the option Retrieve Until No More New Data no longer appears in the DTP maintenance.

Package Size
The number of data records contained in one individual data package is determined here. The default value is 50,000.

Filter
The selection criteria for fetching data from the source are determined / restricted by the filter. We have the following options to restrict a value or range of values:
- Multiple selections
- OLAP variable
- ABAP routine
An icon to the right of the Filter button indicates that filter selections exist for the DTP.

Semantic Groups
Choose Semantic Groups to specify how you want to build the data packages that are read from the source (DataSource or InfoProvider). To do this, define key fields. Data records that have the same key are combined into a single data package. This setting is only relevant for DataStore objects with data fields that are overwritten. It also defines the key fields for the error stack. By defining the key for the error stack, you ensure that the data can be updated into the target in the correct order once the incorrect data records have been corrected. An icon to the right of the Semantic Groups button indicates that semantic keys exist for the DTP.

Update

Error Handling
- Deactivated: If an error occurs, the error is reported at the package level and not at the data record level. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety. This results in faster processing.
- No Update, No Reporting: If errors occur, the system terminates the update of the entire data package. The request is not released for reporting. The incorrect record is highlighted so that the error can be assigned to the data record. The incorrect records are not written to the error stack, since the request is terminated and has to be updated again in its entirety.
- Valid Records Update, No Reporting (Request Red): This option allows you to update valid data. This data is only released for reporting after the administrator checks the incorrect records that were not updated and manually releases the request (by a QM action, that is, setting the overall status on the Status tab page in the monitor). The incorrect records are written to a separate error stack, in which the records can be edited and updated manually using an error DTP.
- Valid Records Update, Reporting Possible (Request Green): Valid records can be reported immediately. Automatic follow-up actions, such as adjusting the aggregates, are also carried out. The incorrect records are written to a separate error stack, in which the records can be edited and updated manually using an error DTP.
(A compact summary of these four options follows below.)

Error DTP
Erroneous records in a DTP load are written to a stack called the error stack. The error stack is a request-based table (a PSA table) into which erroneous data records from a data transfer process (DTP) are written. The error stack is based on the data source (PSA, DSO or InfoCube), that is, records from the source are written to the error stack. In order to upload the data to the data target, we need to correct the data records in the error stack and manually run the error DTP.
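The four error-handling options differ mainly in whether bad records reach the error stack, whether valid records are still posted, and whether the request is released for reporting. The snippet below is just a compact, illustrative summary of the options described above, not an SAP API:

```python
# Summary of the DTP error-handling options described above, as tuples:
# (writes incorrect records to error stack?, posts valid records?, released for reporting?)

ERROR_HANDLING_OPTIONS = {
    "Deactivated":                               (False, False, False),  # whole package fails, reload entirely (faster processing)
    "No Update, No Reporting":                   (False, False, False),  # request terminated, incorrect record highlighted
    "Valid Records Update, No Reporting (Red)":  (True,  True,  False),  # released manually after checking the error stack
    "Valid Records Update, Reporting (Green)":   (True,  True,  True),   # valid records reportable at once
}

for option, (error_stack, posts_valid, reportable) in ERROR_HANDLING_OPTIONS.items():
    print(f"{option:45s} error stack: {error_stack}, "
          f"valid records posted: {posts_valid}, reportable: {reportable}")
```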

Execute

Processing Mode
- Serial extraction, immediate parallel processing: A request is processed in a background process when the DTP is started in a process chain or manually.
- Serial in dialog process (for debugging): A request is processed in a dialog process when it is started in debug mode from DTP maintenance. This mode is ideal for simulating DTP execution in debugging mode. When this mode is selected, we have the option to activate or deactivate session breakpoints at various stages such as extraction, data filtering, error handling, transformation and data target update. You cannot start requests for real-time data acquisition in debug mode.

Debugging tip: When you want to debug the DTP, you cannot set a session breakpoint in the editor where you write the ABAP code (e.g. the DTP filter). You need to set session breakpoint(s) in the generated program, as shown below:

- No data transfer; delta status in source: fetched: This processing mode is available only when the DTP is operated in delta mode. It is similar to delta initialization without data transfer in an InfoPackage. In this mode the DTP executes directly in dialog. The request generated marks the data found in the source as fetched, but does not actually load any data to the target. We can choose this mode even if the data has already been transferred previously using the DTP.

Delta DTP on a DSO
There are special extraction options when data is sourced from a DSO into another data target via a DTP.

- Active Table (with Archive): The data is read from the DSO's active table and from the archived data.
- Active Table (Without Archive): The data is read only from the active table of the DSO. If there is data in the archive or in near-line storage at the time of extraction, this data is not extracted.
- Archive (Full Extraction Only): The data is read only from the archive data store. Data is not extracted from the active table.
- Change Log: The data is read from the change log and not from the active table of the DSO.

Change Status of a DTP Request
Sometimes a situation arises in which you need to change the status of a request that is being loaded via a DTP. This article describes the solution for this scenario.

Author's Bio
Rahul Bhandare has been working at Patni Computer Systems Ltd as a SAP BW Consultant for the last two years. He is mainly involved in development and production support / maintenance work related to SAP BI.

Scenario
Consider the following situations:
1. A DTP load for a DSO is running far beyond its due time (i.e. taking too long to load the data) and hence stays in yellow state for a long time. You want to stop the load to the DSO by changing the status of the loading request from yellow to red manually, but you have already deleted the ongoing background job for the DTP load.
2. A master data load through a DTP failed because the background job for the DTP ended with a short dump. You want to start a new DTP load but you cannot, as there is a message saying "The old request is still running". You cannot change the status of the old request to red or green because of the message "QM-action not allowed for master data", and you cannot delete the old request due to the message "Request cannot be locked for delete".

Solution
When the old request in scenario 1 or 2 is in yellow status and you are not able to change or delete the request, it is actually in a pseudo status. The request sits in table RSBKREQUEST with processing type 5 in the data elements USTATE and TSTATE, and this 5 means "Active", which is obviously wrong. One possible solution is to ask a Basis person to change the status to 3 in both USTATE and TSTATE; it is then possible to reload the data. Once the data has been successfully loaded, you can delete the previous bad request even though it is still yellow. Once the request is deleted, its status is updated to "4" in table RSBKREQUEST.

There is an alternative solution, in which you manually change the status of the old request to red or green using the function module RSBM_GUI_CHANGE_USTATE. The following are the steps to change the QM status of a yellow request to red/green using RSBM_GUI_CHANGE_USTATE:
1. Select the request ID from the target.

2. Go to SE37 and execute the function module RSBM_GUI_CHANGE_USTATE.

3. Enter Request Id here and execute it.

4. Then change the status of the request to either red/green.

5. The request will now have the status you selected in step 4; if you set it to red, you can then delete the request.
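If you prefer to check and fix this from a small program rather than via SE16/SE37, the sketch below shows the idea. It is only an illustration: the key field REQUID of RSBKREQUEST and the importing parameter I_REQUID of RSBM_GUI_CHANGE_USTATE are assumptions (only the table name and the fields USTATE/TSTATE are confirmed above), so check the table definition in SE11 and the function module interface in SE37 before using anything like this.

REPORT z_check_stuck_dtp_request.
* Request id of the stuck DTP request, taken from the target's manage screen.
PARAMETERS p_requid TYPE rsbkrequest-requid.

DATA: lv_ustate TYPE rsbkrequest-ustate,
      lv_tstate TYPE rsbkrequest-tstate.

* A request stuck in the pseudo status shows 5 ('Active') in both fields.
SELECT SINGLE ustate tstate FROM rsbkrequest
  INTO (lv_ustate, lv_tstate)
  WHERE requid = p_requid.

WRITE: / 'USTATE:', lv_ustate, / 'TSTATE:', lv_tstate.

* Opens the same dialog as executing the FM from SE37 (step 2 above),
* where the overall status can be set to red or green.
CALL FUNCTION 'RSBM_GUI_CHANGE_USTATE'
  EXPORTING
    i_requid = p_requid.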

9. Error DTP
10. Data Source
11. Transformations
12. Transfer Rules
13. Update Rules
14. LO Cockpit
15. Generic Extraction
16. Delta Management
17. Process Chains: Triggering a Failed Process Chain Step's Status Manually to Green

In many situations while monitoring process chains, we find that the status of the chain is red, indicating errors during data loading. On checking, we find that one of the steps in the process chain has failed and appears in red, yet everything has actually loaded correctly; the chain simply does not move forward because of the error. There is a workaround to manually set the step to a success (green) status, but it requires an understanding of how the chain works.
Whenever we create a chain, we create a sequence of variants that are associated with events. It is the events which, when triggered, start a particular instance of a variant and complete that step. Each event is scheduled in relation to the previous step, so as soon as the previous step completes, it triggers the next one, and so the chain progresses. Therefore, if a particular step inside the chain has failed or is stuck, even if the data has loaded correctly, it will not trigger the next event, and we have to do that manually.
The logical action is to manually set the status of this step in the chain to green, which then automatically triggers the next event. If we knew the variant, instance and start date/time of the failed step, we could go directly to the function module and execute it with these details to turn the step green and trigger the following events; but it is not humanly possible to remember all of this every time, given the long BW technical names.
To obtain this information, go to the failed step of the process chain, right-click it, select Display Messages and open the tab Chain. Note down the values in the fields Variant and Instance.
Next, go to the table that stores the process log entries for process chains: RSPCPROCESSLOG. Use transaction SE16 (Data Browser: Initial Screen), enter RSPCPROCESSLOG as the table name and press F7 to display the table's input fields. Enter the noted values for the particular instance and variant run on that day and press F8 to execute. This returns the entry for the failed step with the details needed for the function module. Note down the values returned in the fields log id, type, variant and instance.
Lastly, go to transaction SE37 (Function Builder), enter RSPC_PROCESS_FINISH and execute it (F8). Enter the values noted earlier (log id, type, variant, instance, batch date, batch time). The important field here is STATE: to turn the step green, select the option G, which stands for "successfully completed". Execute the function module (F8), and the process step in the chain turns green, triggering the next step.
In a nutshell:
A) Go to the step where the process chain is stuck, right-click it, choose Display Messages, select the tab Chain, and copy the variant, instance and start date.
B) Enter transaction SE16, enter table name RSPCPROCESSLOG and enter the values for variant, instance and batch date. This returns a single row from table RSPCPROCESSLOG.
C) Execute the function module RSPC_PROCESS_FINISH and enter the instance, variant, log id and type from that row. Enter 'G' for the field STATE and execute (F8). A minimal code sketch of these three steps follows below.
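For reference, here is what steps A to C boil down to in ABAP. This is only a minimal sketch under a few assumptions: the report name is made up, and the RSPCPROCESSLOG field names (VARIANTE, INSTANCE, BATCHDATE, LOG_ID, TYPE) as well as the function module parameters (I_LOGID, I_TYPE, I_VARIANT, I_INSTANCE, I_STATE) are taken from the mapping described in this document; verify them in SE11/SE37 on your release before using anything like this in a productive system.

REPORT z_set_chain_step_green.
* Values copied from the failed step: Display Messages -> tab Chain.
PARAMETERS: p_var  TYPE rspcprocesslog-variante,
            p_inst TYPE rspcprocesslog-instance,
            p_date TYPE rspcprocesslog-batchdate.

DATA ls_log TYPE rspcprocesslog.

* Step B: read the log entry of the failed step (same selection as in SE16).
SELECT SINGLE * FROM rspcprocesslog INTO ls_log
  WHERE variante  = p_var
    AND instance  = p_inst
    AND batchdate = p_date.

IF sy-subrc = 0.
* Step C: finish the step with state 'G' (green) so the next event fires.
  CALL FUNCTION 'RSPC_PROCESS_FINISH'
    EXPORTING
      i_logid    = ls_log-log_id
      i_type     = ls_log-type
      i_variant  = ls_log-variante
      i_instance = ls_log-instance
      i_state    = 'G'.
ENDIF.

The same call with i_state = 'R', 'F' or 'X' sets other statuses; see also the Restarting Process Chains section later in this document.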
Process Chain Monitoring - Approach and Correction
Process chain monitoring is an important activity in Business Warehouse management, especially during support. It is therefore necessary to know the different errors that occur during monitoring and to take the necessary actions to correct the process chains so that support runs smoothly.
Some points to remember while monitoring:
1. In the case of a running local chain, right-click and go to the process monitor to check whether any process step has failed. There may be a failure in one of the processes running in parallel within the chain.
2. Right-click and select Display Messages to learn more about the reason for the failure of a step.
3. Try to correct a step that takes longer than usual to complete by comparing it with the other processes running in parallel.
4. Check the lock entries on the targets in transaction SM12. This gives you an overview of all locks on the target.
5. Perform an RSRV check to analyze the error and correct it in the relevant scenarios.
Monitoring - Approach and Correction (Description / Approach-Analysis / Correction):

Failure in the Delete Index or Create Index step
Approach/Analysis: Go to the target and check whether any load is currently running.
Correction: Trigger the chain ahead if the indexes have already been deleted by another process chain.

Long-running Delete Index or Create Index job
Approach/Analysis: Compare with the last run time of the job. If it is taking longer, check the system logs in SM21 and the server processes in SM51.
Correction: Inform the Basis team, providing the system log and server process details.
Approach/Analysis: Check transaction SM12 for any locks on the target from some other step.
Correction: Stop the Delete Index or Create Index step and repeat it once the lock has been released.

Attribute change run failure
Approach/Analysis: Check the error message.
Correction: Check the locks on the master data objects used in the InfoCube and repeat the step.

Roll-up failure
Approach/Analysis: The error message states that the roll-up failed due to a master data table lock by an attribute change run.
Correction: Wait until the attribute change run has completed and start the roll-up again. You can also open transaction RSDDBIAMON2 to check for any issues on the BI Accelerator server.

Failure in a full data load
Approach/Analysis: SQL error.
Correction: Manually delete the indexes of the target and start the load again.
Approach/Analysis: This generally happens due to insufficient memory space being available while reading data from the tables.
Correction: Coordinate with the Basis team to analyze the database space (DB02) and take the necessary action to increase the database size.

Incorrect data records in the PSA table
Approach/Analysis: Check the data in the PSA table for lower-case letters, special characters, etc.
Correction: Ask the responsible R/3 contact to correct the data on the R/3 side and load the data again.
Approach/Analysis: It is very important to check that the PSA table contains the complete data before reconstruction; otherwise not all of the data will be updated in the target.
Correction: As an immediate measure, delete the request from the target, correct the data in the PSA table and reconstruct the request.

Delta data load failure
Approach/Analysis: The delta load failed without any error message. Go to the InfoPackage and check whether the last delta load into the target was successful.
Correction: Wait until the last delta load has completed, or correct it, then start the InfoPackage again.
Approach/Analysis: The PSA update failed due to a job termination or a target lock.
Correction: Make sure that all data records are available in the PSA table and reconstruct the request.
Approach/Analysis: The delta load job failed in the R/3 system or in the EDWH system after all data had arrived in the PSA table.
Correction: Delete the request from the targets and reconstruct it.
Approach/Analysis: The delta load job failed on the R/3 side without data in the PSA table.
Correction: Set the status of the request to red in the process monitor and repeat the delta load. Make sure the data is not doubled, since a repeat delta also brings the last successful records; check the update mode (overwrite or addition) of all key figures for this.
Approach/Analysis: The delta load job failed in the EDWH system without data in the PSA table.
Correction: Remove the data mart tick from the source below and start the delta InfoPackage again, or set the status of the load to red if all key figures are in overwrite mode.

Long-running delta load
Approach/Analysis: Check in SM37 whether the R/3 job has not been released.
Correction: Ask the Basis team to release the job in the R/3 system.
Approach/Analysis: Compare with the last successful delta load time and check the R/3 connection.
Correction: Ask the Basis team to repair the R/3 connection.

DSO activation
Approach/Analysis: Activation failure due to an incorrect value in a DSO (ODS) field.
Correction: Correct the data in the PSA or check the update logic for the corresponding source field.
Approach/Analysis: Activation job termination.
Correction: Try to activate the data manually, or check the jobs in SM37. If most of the activation jobs are failing, contact the Basis team.

Miscellaneous
Approach/Analysis: Some process types are not triggered even though the preceding process type has completed; the triggering line does not show any status.
Correction: Chain logs take time to be updated in the chain log table RSPCPROCESSLOG. Check the system performance with the Basis team as well.

List of useful tables and programs:

Table Name - Usage
RSPCPROCESSLOG - Check the process type log
RSPCCHAIN - Check the main chain for a local chain

Program Name - Usage
RS_TRANSTRU_ACTIVATE_ALL - Activate transfer structure
RSDG_CUBE_ACTIVATE - Activate InfoCube
RSDG_TRFN_ACTIVATE - Activate transformation
RSDG_ODSO_ACTIVATE - Activate DSO
RSDG_IOBJ_ACTIVATE - Activate InfoObject
SAP_PSA_PARTNO_CORRECT - Delete the entries from the partition table
RSPC_PROCESS_FINISH - Trigger the next process type
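A note on running the programs above: during corrections they are usually started from SE38/SA38, but they can also be called from a small wrapper report. Since the selection screens of these standard programs differ between releases, the sketch below deliberately does not assume any parameter names and simply submits one of them via its own selection screen (the wrapper name is made up):

REPORT z_run_bw_housekeeping.
* Calls one of the standard activation programs from the list above via its
* own selection screen, so no assumptions are made about its parameters.
SUBMIT rsdg_odso_activate VIA SELECTION-SCREEN AND RETURN.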

Hope this will be very useful while monitoring process chains.

Step-by-Step Process to Clear Extract and Delta Queues During a Patch/Upgrade in the ECC System

I am writing this blog to give you the steps to be performed during an ECC system patch/upgrade.
Process: In an SAP ECC system, any transaction posted to the database tables also posts entries into the BW-related extract queues (LBWQ or SMQ1) belonging to the LO Cockpit. These queues need to be cleared before applying any patches or an upgrade to ECC, to minimize data loss in case of changes to the extract structures. This document shows a step-by-step method to clear the LO queues before applying patches or an upgrade to the SAP ECC system.
Note: The job and InfoPackage names given below may vary in your scenario.
Procedure for scheduling the V3 jobs in R/3 and the InfoPackages before taking the downtime:
1) Schedule the V3 jobs listed below 4-5 hours before taking the downtime, continuously on an hourly basis, in the SAP ECC system.
Example jobs:
a. LIS-BW-VB_APPLICATION_02_500
b. LIS-BW-VB_APPLICATION_03_500
c. LIS-BW-VB_APPLICATION_11_500
d. LIS-BW-VB_APPLICATION_12_500
e. LIS-BW-VB-APPLICATION_17_500
f. PSD:DLY2330:LIS-BW-VB_APPLICATIO
2) Schedule the InfoPackages listed below 4-5 hours before taking the downtime in the SAP BW/BI system (BW client XXX).
Example InfoPackage names:
a. MM_2LIS_03_BF_RegularDelta_1
b. MM_2LIS_03_UM_Regulardelta_1
c. 2LIS_13_VDKON(DELTA)1
d. Billing Document Item Data:2LIS_13_VDITM:Delta1
e. 2LIS_12_VCHDR(Delta)1
f. 2LIS_12_VCITM(delta)1
g. Sales Document Header Data:2LIS_11_VAHDR:
h. Order Item Delta update: 2LIS_11_VAITM:
i. Order Alloctn Item Delta1 updat :2LIS_11_V_ITM :
3) Ensure that there is minimal data in the queues (i.e. in SMQ1/LBWQ and RSA7). If the data volume is still high, schedule the V3 jobs in R/3 and the InfoPackages again. Steps 1 to 3 are to be followed before taking the downtime in order to minimize the data extraction time during the downtime for the patch application.
4) After the downtime has been taken, the SAP Basis team will inform the BW team to clear the queues in the ECC system.
5) Follow this procedure to clear the extract queues (SMQ1 or LBWQ) and the delta queues (RSA7), i.e. before the application of the patches or upgrade:
a) Request the SAP Basis team to lock all users in the SAP ECC system (except the persons clearing the queues) and take a downtime of 45 minutes, or longer depending on your data volume and plan.
b) Make sure that all jobs are terminated; nothing should be in Active status except the V3 and BW extraction jobs in the SAP ECC system.
c) Take a screenshot of transaction SMQ1 or LBWQ before scheduling the V3 jobs.

d) Take a screenshot of transaction RSA7 before extracting the data to BW.

e) Take a screenshot of the LBWE extraction structures.

6) Copy the following V3 jobs in the SAP ECC system and schedule them immediately during the downtime, every five minutes, to move data from the extract queues (SMQ1 or LBWQ) to the delta queues (RSA7).
Example V3 jobs:
LIS-BW-VB_APPLICATION_02_500
LIS-BW-VB_APPLICATION_03_500
LIS-BW-VB_APPLICATION_11_500
LIS-BW-VB_APPLICATION_12_500
LIS-BW-VB-APPLICATION_17_500
PSD:DLY2330:LIS-BW-VB_APPLICATIO
6.1) Delete unwanted queues in the SAP ECC system. Queues such as MCEX04, MCEX17 and MCEX17_1 are not used in this project and therefore need to be deleted in the ECC system.
Deletion procedure: Enter transaction SMQ1, select MCEX04 and press the delete button; it takes a few minutes to delete the entries. Follow the same procedure to delete the other queues not required in your project.
7) Then schedule the InfoPackages in SAP BW (client XXX) until the RSA7 entries become zero.
Example InfoPackage names:
MM_2LIS_03_BF_RegularDelta_1
MM_2LIS_03_UM_Regulardelta_1
2LIS_13_VDKON(DELTA)1
Billing Document Item Data:2LIS_13_VDITM:Delta1
2LIS_12_VCHDR(Delta)1
2LIS_12_VCITM(delta)1
Sales Document Header Data:2LIS_11_VAHDR:
Order Item Delta update: 2LIS_11_VAITM:
Order Alloctn Item Delta1 updat :2LIS_11_V_ITM :
8) If the extract queue (SMQ1 or LBWQ) still has entries, repeat steps 6 to 7 until both the extract queues and the delta queues read zero records.
9) After reaching zero records, repeat steps 6 to 7 once more for double confirmation, to avoid any possible remaining data entries.
10) Take a screenshot of transaction SMQ1 after it reaches zero.
11) Take a screenshot of transaction RSA7 after it reaches zero.

12) After ensuring that SMQ1/LBWQ and RSA7 read zero entries, release the system to Basis for the upgrade or patch application.
13) After the patch or upgrade is over, the SAP Basis team will inform the SAP BW team to check whether the extract queues and delta queues are being populated again.
14) Request the SAP Basis team to restore all V3 jobs in the ECC system to their original schedules and to unlock all users and the system/communication users.
15) Check transactions SMQ1 and RSA7 to see whether entries are being posted after restoring the V3 jobs in the ECC system. See the screenshot.

RSA7

16) Check in transaction LBWE whether all the extract structures are active or not; see the screenshot taken after the patch application.

17) Schedule any of the InfoPackages in SAP BW (from the list above). See the screenshot.

This ends the queue clearing activity.

1. Can you please detail how to run and clear V3 jobs in ECC?
2. Can you please detail how to suspend process chains which are scheduled for the nightly load, or delete/release them? Do we need to change the timings in their variants?
3. Post upgrade, how do we reschedule the process chains?

1) Scheduling V3 jobs: Enter transaction LBWE, select a particular application area (say Purchasing), drill down to the extract structures and check that all the required DataSources are active. Select "Job Control"; a new window opens, "Job Maintenance for Collective Update". Enter the job parameters (start date, time, period values, etc.), and under the control parameters click the "Schedule Job" tab to schedule the job. This creates a background job with the given parameters, such as LIS-BW-VB_APPLICATION_02_500. This completes the V3 job scheduling.
2) All suspended process chains start immediately after the server comes back up, but the load on the source system then increases drastically and impacts transactions. The best practice is therefore to stop all process chains at the time of the upgrade or patch application and to reschedule all the jobs at suitable timings once the servers are up again.

Overview of Important BI Performance Transactions
One of my colleagues asked me how he can quickly check the performance settings in a BI system. That gave me the idea to write a little blog about performance-relevant BI transactions, tables and tasks. I have tried to compress it onto one page so that you can print it out easily and hang it on your wall.
1. Loading performance transactions
RSMO - Request monitor
RSRQ - Single request monitor, if you have the request ID or SID
RSPC - Process chains
RSPCM - Process chain monitor (all process chains at a glance)
BWCCMS - BW Computer Center Management System
RSRV - Check the system for any inconsistencies
DB02 - Database monitor
ST04 - SQL monitor
2. Reporting performance transactions
RSRT - Query debug and runtime monitor
RSTT - Execute a query, get traces
ST03 - Statistics
3. Important tables
RSDDSTAT_DM - Data Manager statistics for query execution
RSDDSTATWHM - Warehouse management statistics
RSDDSTAT_OLAP - OLAP statistics
RSADMIN - RSADMIN parameters
4. Mandatory tasks
Check the cube Performance tab in RSA1 (indexes and statistics) every day.
Check ST22 for short dumps and SM37 for failed jobs every morning.
Check DB02 for table space and growth every day.
Check the request monitor every morning.
Check the process chains for errors every day.
Load and update the technical content every day.
Run the BI Technical Content queries or check the BI Administrator Cockpit every day.
OK, that's it for now.

Restarting Process Chains
How is it possible to restart a process chain at a failed step/request? Sometimes it doesn't help to just set a request to green status in order to run the process chain from that step to the end. You need to set the failed request/step to green in the database, and you also need to raise the event that will force the process chain to run to the end from the next request/step onwards.

To do this, open the messages of the failed step by right-clicking it and selecting 'Display Messages'. In the popup that opens, click the tab 'Chain'. In a parallel session, go to transaction SE16 for table RSPCPROCESSLOG and display the entries with the following selections:
1. Copy the variant from the popup into the VARIANTE field of table RSPCPROCESSLOG.
2. Copy the instance from the popup into the INSTANCE field of table RSPCPROCESSLOG.
3. Copy the start date from the popup into the BATCHDATE field of table RSPCPROCESSLOG.
Press F8 to display the entries of table RSPCPROCESSLOG.
Now open another session and go to transaction SE37. Enter RSPC_PROCESS_FINISH as the name of the function module and run it in test mode. Copy the entries from table RSPCPROCESSLOG into the input parameters of the function module as follows:
1. rspcprocesslog-log_id -> i_logid
2. rspcprocesslog-type -> i_type
3. rspcprocesslog-variante -> i_variant
4. rspcprocesslog-instance -> i_instance
5. Enter 'G' for parameter i_state (this sets the status to green).
Now press F8 to run the function module. The actual process is set to green, the following process in the chain is started, and the chain can run to the end.
Of course, you can also set the state of a specific step in the chain to any other possible value, such as 'R' = ended with errors, 'F' = finished, 'X' = cancelled, and so on. Check the value help on the field rspcprocesslog-state in transaction SE16 for the possible values.

Interrupting a Process Chain
Scenario: Let's say there are three process chains, A, B and C. A and B are the master chains, which extract data from different non-SAP source systems. All our BW chains are dependent on a non-SAP source system: when the jobs complete on the non-SAP systems, a message is sent to the third-party scheduling tool, from which our BW jobs are triggered, event-based.
Note: The reason the non-SAP system sends the message to the third-party scheduling tool is that whenever there is a failure in the non-SAP system, it will not raise the event to kick off the BW chain(s), and we would have to trigger them manually. To avoid this, we use the third-party scheduling tool to trigger the chains via an event.
Now C is dependent on both A and B; in other words, C has to be triggered only after both A and B have completed. We can achieve this using the Interrupt process type. For example, if process chain A has completed and B is still running, the interrupt makes process chain C wait until both chains have completed.
Let's see this step by step.
Process Chains A and B

Process Chain C
Process chain C is dependent on both chains A and B; we use interrupts (A_interrupt, B_interrupt) which wait until those chains have completed.

Now let's see how the interrupt works.
A_interrupt: interrupts process chain C until process chain A has completed.

Copy the highlighted Event and Parameters

Enter the copied event and parameter in the Interrupt process type, as in the screen below.

Activate and schedule all three process chains.
Note: All three process chains (A, B, C) are triggered event-based.
When process chain C is in scheduling, you can see the job BI_INTERRUPT_WAIT in both the A and B chains, as in the screens below.

All three chains (A, B, C) are triggered by the same event.
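One closing remark: the event and parameter that the interrupt waits for are ordinary background-processing events, so for testing (for example, without the third-party scheduling tool) they can also be raised manually. The sketch below assumes the standard function module BP_EVENT_RAISE and uses a made-up event name, Z_BW_TRIGGER; replace it with the event and parameter copied from your chain's start condition or interrupt variant.

REPORT z_raise_bw_event.
* Event id and parameter: use the values copied from the interrupt variant.
PARAMETERS: p_event TYPE btceventid DEFAULT 'Z_BW_TRIGGER',
            p_parm  TYPE btcevtparm.

* Raises the background event; a waiting interrupt or an event-triggered
* chain start then continues.
CALL FUNCTION 'BP_EVENT_RAISE'
  EXPORTING
    eventid   = p_event
    eventparm = p_parm
  EXCEPTIONS
    OTHERS    = 1.

IF sy-subrc <> 0.
  WRITE: / 'Event could not be raised, sy-subrc =', sy-subrc.
ENDIF.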